Search results for: turbulent flows
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1117

217 Definition of Aerodynamic Coefficients for Microgravity Unmanned Aerial System

Authors: Gamaliel Salazar, Adriana Chazaro, Oscar Madrigal

Abstract:

The evolution of Unmanned Aerial Systems (UAS) has made it possible to develop new vehicles capable of performing microgravity experiments which, due to their cost and complexity, were previously beyond the reach of many institutions. In this study, the aerodynamic behavior of a UAS is studied through its deceleration stage after an initial free-fall phase (where the microgravity effect is generated) using Computational Fluid Dynamics (CFD). Because the payload is analyzed under a microgravity environment, and given the nature of the payload itself, the speed of the UAS must be reduced smoothly. Moreover, the terminal speed of the vehicle should be low enough to preserve the integrity of the payload and vehicle during the landing stage. The UAS model consists of a study pod, control surfaces with fixed and mobile sections, landing gear, and two semicircular wing sections. The speed of the vehicle is decreased by increasing the angle of attack (AoA) of each wing section from 2° (where the S1091 airfoil has its greatest aerodynamic efficiency) to 80°, creating a circular wing geometry. Drag coefficients (Cd) and drag forces (Fd) are obtained through CFD analysis. A simplified 3D model of the vehicle is analyzed using Ansys Workbench 16. The distance between the object of study and the walls of the control volume is eight times the length of the vehicle. The domain is discretized using an unstructured mesh based on tetrahedral elements. The mesh is refined by defining an element size of 0.004 m on the wing and control surfaces in order to resolve the fluid behavior in the most important zones and obtain accurate approximations of the Cd. The k-epsilon turbulence model is selected to solve the governing equations of the fluid, while monitors are placed on both the wing and the whole vehicle to visualize the variation of the coefficients during the simulation.
Employing a response surface methodology, the case study is parametrized with the AoA of the wing as the input parameter and Cd and Fd as output parameters. Based on a Central Composite Design (CCD), Design Points (DP) are generated so that the Cd and Fd for each DP can be estimated. Applying a 2nd-degree polynomial approximation, the drag coefficients for every AoA are determined. Using these values, the terminal speed at each position is calculated for a specific Cd. Additionally, the distance required to reach the terminal velocity at each AoA is calculated, so that the minimum distance for the entire deceleration stage can be determined without compromising the payload. The maximum Cd of the vehicle is 1.18, so its maximum drag is comparable to that generated by a parachute. This guarantees that the vehicle can be braked aerodynamically, so it can be used for several missions, allowing repeatability of microgravity experiments.
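The two computational steps described above, fitting a 2nd-degree response surface for Cd over AoA and deriving the terminal speed at which drag balances weight, can be sketched as follows. All numbers (design points, mass, reference area) are illustrative placeholders, not the study's actual data.

```python
import numpy as np

# Hypothetical CCD design points: AoA (deg) vs. drag coefficient.
aoa = np.array([2.0, 21.5, 41.0, 60.5, 80.0])
cd = np.array([0.10, 0.38, 0.72, 1.00, 1.18])

# 2nd-degree polynomial response surface Cd(AoA), as in the paper's approach.
coeffs = np.polyfit(aoa, cd, deg=2)
cd_at = lambda a: np.polyval(coeffs, a)

def terminal_speed(cd, mass=5.0, area=0.5, rho=1.225, g=9.81):
    """Speed at which drag equals weight: m*g = 0.5*rho*v^2*A*Cd."""
    return float(np.sqrt(2.0 * mass * g / (rho * area * cd)))

v_terminal = terminal_speed(1.18)   # at the reported Cd max of 1.18
```

With these assumed values the terminal speed at Cd = 1.18 comes out near 12 m/s; the same routine, evaluated at `cd_at(aoa)` for each AoA, gives the terminal speed profile used to size the deceleration stage.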

Keywords: microgravity effect, response surface, terminal speed, unmanned system

Procedia PDF Downloads 173
216 Pavement Management for a Metropolitan Area: A Case Study of Montreal

Authors: Luis Amador Jimenez, Md. Shohel Amin

Abstract:

Pavement performance models are based on projections of observed traffic loads, which makes it uncertain to study funding strategies in the long run if history does not repeat itself. Neural networks can be used to estimate deterioration rates, but the learning rate and momentum have not been properly investigated; in addition, economic developments could change traffic flows. This study addresses both issues through a case study of the roads of Montreal that simulates traffic for a period of 50 years and deals with the measurement error of the pavement deterioration model. Travel demand models are applied to simulate annual average daily traffic (AADT) every 5 years. Accumulated equivalent single axle loads (ESALs) are calculated from the predicted AADT and locally observed truck distributions combined with truck factors. A back-propagation neural network (BPN) with a Generalized Delta Rule (GDR) learning algorithm is applied to estimate pavement deterioration models capable of overcoming measurement errors. Linear programming of lifecycle optimization is applied to identify M&R strategies that ensure good pavement condition while minimizing the budget. It was found that CAD 150 million is the minimum annual budget needed to maintain good condition for arterial and local roads in Montreal. Montreal drivers prefer public transportation for work and education purposes. Vehicle traffic is expected to double within 50 years, while the number of ESALs is expected to double every 15 years. Roads on the island of Montreal need to undergo a stabilization period of about 25 years, after which a steady state appears to be reached.
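The Generalized Delta Rule mentioned above is gradient descent with a momentum term that smooths successive weight updates. A minimal sketch, using a single linear layer and synthetic data purely for illustration (the actual BPN, its architecture and its inputs are not reproduced here):

```python
import numpy as np

# Synthetic training data: three hypothetical predictors (e.g. age, ESALs,
# prior condition) and a condition index that is linear in them.
rng = np.random.default_rng(0)
X = rng.random((50, 3))
y = X @ np.array([[0.5], [0.3], [0.2]])

w = np.zeros((3, 1))
velocity = np.zeros_like(w)
eta, alpha = 0.1, 0.9                          # learning rate and momentum

for _ in range(500):
    grad = X.T @ (X @ w - y) / len(X)          # gradient of the squared error
    velocity = alpha * velocity - eta * grad   # momentum carries past updates
    w += velocity

mse = float(np.mean((X @ w - y) ** 2))
```

The momentum term `alpha * velocity` is what the study tunes alongside the learning rate `eta`; too little momentum slows convergence, too much causes oscillation.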

Keywords: pavement management system, traffic simulation, backpropagation neural network, performance modeling, measurement errors, linear programming, lifecycle optimization

Procedia PDF Downloads 460
215 Flow-Control Effectiveness of Convergent Surface Indentations on an Aerofoil at Low Reynolds Numbers

Authors: Neel K. Shah

Abstract:

Passive flow control on aerofoils has largely been achieved through the use of protrusions such as vane-type vortex generators. Consequently, innovative flow-control concepts should be explored in an effort to improve current component performance. Therefore, experimental research has been performed at The University of Manchester to evaluate the flow-control effectiveness of a vortex generator made in the form of a surface indentation. The surface indentation has a trapezoidal planform. A spanwise array of indentations has been applied in a convergent orientation around the maximum-thickness location of the upper surface of a NACA-0015 aerofoil. The aerofoil has been tested in a two-dimensional set-up in a low-speed wind tunnel at an angle of attack (AoA) of 3° and a chord-based Reynolds number (Re) of ~2.7 x 10^5. The baseline model has been found to suffer from a laminar separation bubble at low AoA. The application of the indentations at 3° AoA has considerably shortened the separation bubble. The indentations achieve this by shedding up-flow pairs of streamwise vortices. Despite the considerable reduction in bubble length, the increase in leading-edge suction due to the shorter bubble is limited by the removal of surface curvature and blockage (increase in surface pressure) caused locally by the convergent indentations. Furthermore, the up-flow region of the vortices, which locally weakens the pressure recovery around the trailing edge of the aerofoil by thickening the boundary layer, also contributes to this limitation. Due to the conflicting effects of the indentations, the changes in the pressure-lift and pressure-drag coefficients, i.e., cl,p and cd,p, are small. Nevertheless, the indentations have improved cl,p and cd,p beyond the uncertainty range, i.e., by ~1.30% and ~0.30%, respectively, at 3° AoA.
The wake measurements show that turbulence intensity and Reynolds stresses have considerably increased in the indented case, thus implying that the indentations increase the viscous drag on the model. In summary, the convergent indentations are able to reduce the size of the laminar separation bubble, but conversely, they are not highly effective in reducing cd,p at the tested Reynolds number.
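The link between the wake survey and the viscous drag can be sketched with the classical momentum-deficit integral, c_d = (2/c) * integral of (u/U)(1 - u/U) dy across the wake. The Gaussian wake profile below is invented for illustration and is not the measured data from this study.

```python
import numpy as np

U, chord = 10.0, 0.2                       # assumed freestream (m/s) and chord (m)
y = np.linspace(-0.05, 0.05, 401)          # wake traverse coordinate (m)
u = U * (1.0 - 0.15 * np.exp(-(y / 0.01) ** 2))   # illustrative velocity deficit

# Momentum-deficit integrand, integrated with the trapezoidal rule.
f = (u / U) * (1.0 - u / U)
cd_wake = (2.0 / chord) * float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))
```

A deeper or wider deficit (stronger turbulence and Reynolds stresses in the indented case) directly enlarges this integral, which is why the wake measurements imply higher viscous drag.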

Keywords: aerofoil flow control, laminar separation bubbles, low Reynolds-number flows, surface indentations

Procedia PDF Downloads 226
214 Numerical and Experimental Investigation of Air Distribution System of Larder Type Refrigerator

Authors: Funda Erdem Şahnali, Ş. Özgür Atayılmaz, Tolga N. Aynur

Abstract:

Almost all domestic refrigerators operate on the principle of the vapor compression refrigeration cycle, and heat is removed from the refrigerator cabinets via one of two methods: natural convection or forced convection. In this study, airflow and temperature distributions inside a 375 L no-frost larder cabinet, in which cooling is provided by forced convection, are evaluated both experimentally and numerically. Airflow rate, compressor capacity and temperature distribution in the cooling chamber are known to be among the most important factors affecting the cooling performance and energy consumption of a refrigerator. The objective of this study is to evaluate the original temperature distribution in the larder cabinet and to investigate system optimizations that could provide a more uniform temperature distribution throughout the refrigerator domain. The flow visualization and airflow velocity measurements inside the original refrigerator are performed via Stereoscopic Particle Image Velocimetry (SPIV). In addition, airflow and temperature distributions are investigated numerically with Ansys Fluent. To study the heat transfer inside the refrigerator, forced convection in a closed rectangular cavity representing the refrigerating compartment is modeled. The cavity volume is represented with finite volume elements and solved computationally with the appropriate momentum and energy (Navier-Stokes) equations. The 3D model is analyzed as transient, with the k-ε turbulence model and SIMPLE pressure-velocity coupling for the turbulent flow. The results obtained with the 3D numerical simulations are in good agreement with the experimental airflow measurements using the SPIV technique.
After Computational Fluid Dynamics (CFD) analysis of the baseline case, the effects of three parameters (compressor capacity, fan rotational speed and type of shelf, glass or wire) on energy consumption, pull-down time and temperature distributions in the cabinet are studied. For each case, energy consumption is calculated based on experimental results. After the analysis, the parameters with the greatest effect on temperature distribution inside the cabinet and on energy consumption are determined from the CFD simulations, and the simulation results are supplied to a Design of Experiments (DOE) as input data for optimization. The best configuration, with minimum energy consumption and minimum temperature difference between the shelves inside the cabinet, is determined.
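The DOE stage described above enumerates combinations of the three parameters. A full-factorial sketch is shown below; the levels are invented placeholders, since the abstract does not report the actual values used.

```python
from itertools import product

# Hypothetical levels for the three studied parameters.
compressor_capacity_w = [80, 100, 120]
fan_speed_rpm = [1500, 2000, 2500]
shelf_type = ["glass", "wire"]

design_points = list(product(compressor_capacity_w, fan_speed_rpm, shelf_type))
# Each design point would be simulated in CFD, and the resulting energy
# consumption and shelf-to-shelf temperature difference passed to the optimizer.
n_runs = len(design_points)   # 3 * 3 * 2 = 18 CFD runs
```

A fractional design would reduce the run count further; the full factorial is shown only because it is the simplest to state.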

Keywords: air distribution, CFD, DOE, energy consumption, experimental, larder cabinet, refrigeration, uniform temperature

Procedia PDF Downloads 109
213 Transboundary Pollution after Natural Disasters: Scenario Analyses for Uranium at Kyrgyzstan-Uzbekistan Border

Authors: Fengqing Li, Petra Schneider

Abstract:

Failure of tailings management facilities (TMF) holding radioactive residues is an enormous challenge worldwide and can result in major catastrophes. In transboundary regions in particular, such failure is likely to lead to international conflict. This risk occurs in Kyrgyzstan and Uzbekistan, where the current major challenge is quantifying the impacts of pollution from uranium legacy sites, especially the impact on river basins after natural hazards (i.e., landslides). By means of GoldSim, a probabilistic simulation model, the amount of tailing material that flows into the river network of Mailuu Suu in Kyrgyzstan after a pond failure was simulated for three scenarios, namely 10%, 20%, and 30% material input. Based on the Muskingum-Cunge flood routing procedure, the peak value of the uranium flood wave along the river network was simulated. Among the 23 TMF, 19 ponds are close to the river network. The spatiotemporal distribution of uranium along the river network was then simulated for all 19 ponds under the three scenarios. Taking TP7, which is 30 km from the Kyrgyzstan-Uzbekistan border, as one example: the uranium concentration decreased continuously along the longitudinal gradient of the river network, uranium was observed at the border 45 min after the pond failure, and the highest value was detected after 69 min. The highest concentrations of uranium at the border were 16.5, 33, and 47.5 mg/L under the scenarios of 10%, 20%, and 30% material input, respectively. In comparison to the guideline value for uranium in drinking water (30 µg/L) provided by the World Health Organization, the observed concentrations at the border were 550‒1583 times higher. To mitigate the transboundary impact of a radioactive pollutant release, an integrated framework consisting of three major strategies is proposed: the short-term strategy can be used in case of an emergency event, the medium-term strategy allows both countries to handle the TMF efficiently based on the benefit-sharing concept, and the long-term strategy aims to rehabilitate the site through the relocation of all TMF.
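The exceedance factors quoted above follow directly from the reported peak concentrations and the WHO guideline; a quick cross-check:

```python
# Peak uranium concentrations at the border (mg/L, from the abstract)
# against the WHO drinking-water guideline of 30 µg/L (0.030 mg/L).
guideline_mg_l = 0.030
peaks_mg_l = {"10%": 16.5, "20%": 33.0, "30%": 47.5}

exceedance = {k: v / guideline_mg_l for k, v in peaks_mg_l.items()}
# 16.5 / 0.030 ≈ 550 and 47.5 / 0.030 ≈ 1583, matching the stated 550-1583 range
```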

Keywords: Central Asia, contaminant transport modelling, radioactive residue, transboundary conflict

Procedia PDF Downloads 118
212 Development of Hydrodynamic Drag Calculation and Cavity Shape Generation for Supercavitating Torpedoes

Authors: Sertac Arslan, Sezer Kefeli

Abstract:

In this paper, the supercavitation phenomenon and supercavity shape design parameters are first explained, and then the drag force calculation methods for high-speed supercavitating torpedoes are investigated with numerical techniques and verified against empirical studies. In order to reach speeds as high as 200-300 knots for underwater vehicles, the hydrodynamic hull drag force, which is proportional to the density of water (ρ) and the square of the speed, must be reduced. Conventional heavyweight torpedoes reach up to ~50 knots with classic underwater hydrodynamic techniques. To exceed 50 knots and approach 200 knots, however, hydrodynamic viscous forces must be reduced or eliminated completely. This requirement revives the supercavitation phenomenon, which can be applied to conventional torpedoes. Supercavitation is the use of cavitation effects to create a gas bubble, allowing the torpedo to move through the water at very high speed inside a fully developed cavitation bubble. When the torpedo moves in a cavitation envelope, generated by a cavitator in the nose section and a solid-fuel rocket engine in the rear section, it can be called a supercavitating torpedo. There are two types of cavitation: natural cavitation and ventilated cavitation. In this study, a disk cavitator is modeled with natural cavitation, and the supercavitation parameters are studied. Moreover, the drag force is calculated for the disk-shaped cavitator with numerical techniques and compared against empirical studies. Drag forces are calculated with computational fluid dynamics methods and different empirical methods, and the numerical calculation method is developed by comparison with the empirical results. In the verification study, the cavitation number (σ), drag coefficient (CD), drag force (D), and cavity wall velocity are compared.
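The two quantities at the heart of such a verification study can be sketched as follows. The operating point (speed, depth, cavitator size) is invented for illustration; C_D0 = 0.82 is a commonly quoted empirical fit for a disk cavitator, not necessarily the value used in this paper.

```python
import math

def cavitation_number(p_inf, p_cavity, rho, v):
    """sigma = (p_inf - p_c) / (0.5 * rho * v^2)."""
    return (p_inf - p_cavity) / (0.5 * rho * v ** 2)

def disk_cavitator_drag(sigma, v, d, rho=1000.0, cd0=0.82):
    """Drag on a disk cavitator using the empirical fit C_D = C_D0 * (1 + sigma)."""
    area = math.pi * d ** 2 / 4.0
    return 0.5 * rho * v ** 2 * area * cd0 * (1.0 + sigma)

v = 103.0    # m/s, roughly 200 knots
sigma = cavitation_number(p_inf=201_325.0, p_cavity=2_340.0, rho=1000.0, v=v)
drag = disk_cavitator_drag(sigma, v, d=0.05)   # assumed 5 cm cavitator disk
# sigma comes out small (~0.04), consistent with a fully developed supercavity
```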

Keywords: cavity envelope, CFD, high speed underwater vehicles, supercavitation, supercavity flows

Procedia PDF Downloads 188
211 Debris Flow Mapping Using Geographical Information System Based Model and Geospatial Data in Middle Himalayas

Authors: Anand Malik

Abstract:

The Himalayas, with their high tectonic activity, pose a great threat to human life and property. Climate change is another factor triggering extreme events, with a multi-fold effect on the high-mountain glacial environment: rock falls, landslides, debris flows, flash floods and snow avalanches. One such extreme event, a cloudburst together with a breach of the moraine-dammed Chorabari Lake, occurred from June 14 to June 17, 2013, triggering flooding of the Saraswati and Mandakini rivers in the Kedarnath Valley of Rudraprayag district of Uttarakhand state, India. As a result, a huge volume of water moving at high velocity created a catastrophe of the century, which resulted in large losses of human and animal life and damage to pilgrimage, tourism, agriculture and property. A comprehensive assessment of debris flow hazards therefore requires GIS-based modeling using numerical methods. The aim of the present study is the analysis and mapping of debris flow movements using geospatial data with Flow-R (developed by the team at IGAR, University of Lausanne). The model is based on combined probabilistic and energetic algorithms for assessing the spreading of the flow with maximum runout distances. The ASTER Digital Elevation Model (DEM) with 30 m x 30 m cell size (resolution) is used as the main geospatial data for preparing the runout assessment, while Landsat data are used to analyze land use and land cover change in the study area. The results show that the model can be applied with great accuracy, as it is very useful in determining debris flow areas; the results are compared with existing landslide/debris flow maps. ArcGIS software is used to prepare runout susceptibility maps, which can be used in debris flow mitigation and future land use planning.
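The probabilistic spreading idea behind such models can be illustrated with a Holmgren-style multiple-flow-direction weighting, in which each cell distributes susceptibility to its downslope neighbours in proportion to the elevation drop raised to an exponent. This is a generic sketch on a made-up 3x3 grid, not the Flow-R implementation; the exponent and the grid values are assumptions.

```python
import numpy as np

def spread_weights(dem, r, c, x=4.0):
    """Spreading weights from cell (r, c) towards its downslope 8-neighbours."""
    drops = {}
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < dem.shape[0] and 0 <= nc < dem.shape[1]:
                drop = dem[r, c] - dem[nr, nc]
                if drop > 0:                       # only downslope neighbours
                    drops[(nr, nc)] = drop ** x    # steeper drop, more weight
    total = sum(drops.values())
    return {k: v / total for k, v in drops.items()} if total else {}

dem = np.array([[9.0, 8.0, 7.0],
                [8.0, 6.0, 4.0],
                [7.0, 5.0, 2.0]])
w = spread_weights(dem, 1, 1)   # spread from the centre cell (elevation 6)
```

With a high exponent, the weights concentrate on the steepest descent (here the lower-right cell), approaching single-flow-direction behaviour; a low exponent spreads the flow more diffusely.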

Keywords: debris flow, geospatial data, GIS based modeling, flow-R

Procedia PDF Downloads 273
210 BFDD-S: Big Data Framework to Detect and Mitigate DDoS Attack in SDN Network

Authors: Amirreza Fazely Hamedani, Muzzamil Aziz, Philipp Wieder, Ramin Yahyapour

Abstract:

In recent years, software-defined networking has attracted the attention of many network designers as a successor to traditional networking. Unlike traditional networks, where the control and data planes are engaged together within a single device in the network infrastructure, such as switches and routers, the two planes are kept separate in software-defined networks (SDNs). All critical decisions about packet routing are made on the network controller, and the data plane devices forward the packets based on these decisions. This type of network is vulnerable to DDoS attacks, which degrade the overall functioning and performance of the network by continuously injecting fake flows into it. This places a substantial burden on the controller side and ultimately leads to the inaccessibility of the controller and the lack of network service for legitimate users. Thus, the protection of this novel network architecture against denial-of-service attacks is essential. In the world of cybersecurity, attacks and new threats emerge every day, and it is essential to have tools capable of managing and analyzing all this new information to detect possible attacks in real time. These tools should provide a comprehensive solution to automatically detect, predict and prevent abnormalities in the network. Big data encompasses a wide range of studies, but it mainly refers to the massive amounts of structured and unstructured data that organizations deal with on a regular basis. It concerns not only the volume of the data but also how data-driven information can be used to enhance decision-making processes, security, and the overall efficiency of a business. This paper presents an intelligent big data framework as a solution to handle the illegitimate traffic burden created on the SDN network by numerous DDoS attacks.
The framework entails an efficient defence and monitoring mechanism against DDoS attacks, employing state-of-the-art machine learning techniques.
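One flow feature commonly fed to such detectors is the Shannon entropy of destination addresses in a window of flows: a volumetric attack concentrates traffic on few targets, so entropy drops. The sketch below is a generic illustration with made-up flow records, not the feature set of this framework.

```python
import math
from collections import Counter

def dest_entropy(flows):
    """Shannon entropy (bits) of destination addresses in a flow window."""
    counts = Counter(dst for _, dst in flows)
    n = len(flows)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Invented (src, dst) flow records for two windows.
normal = [("h%d" % i, "srv%d" % (i % 8)) for i in range(64)]   # spread traffic
attack = [("bot%d" % i, "victim") for i in range(64)]          # single target

e_normal = dest_entropy(normal)   # 3.0 bits: 8 equally likely destinations
e_attack = dest_entropy(attack)   # 0.0 bits: all flows hit one destination
```

A classifier trained on such features (entropy, flow rate, packet counts) can then flag windows whose entropy drops sharply below the baseline.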

Keywords: apache spark, apache kafka, big data, DDoS attack, machine learning, SDN network

Procedia PDF Downloads 169
209 The Anti-Globalization Movement, Brexit, Outsourcing and the Current State of Globalization

Authors: Alexis Naranjo

Abstract:

On the current global stage, a new mix of feelings against globalization has started to take shape, thanks to events such as Brexit and the 2016 US election. These perceptions have coalesced into a resistance movement called the 'anti-globalization movement'. This paper examines the current global stage versus leadership decisions at a time when market integration is no longer seen as an engine of economic growth. The biggest economy in the world, the United States of America, has begun to face this anti-globalization sentiment; from the United Kingdom to the United States, a new strategy to help local economies has started to emerge. A new nationalist movement has started to focus on local economies, which now represents a direct threat to globalization, trade agreements, wages and free markets. Business leaders of multinationals today face a new dilemma: how to address the feeling that globalization and outsourcing destroy and take away jobs from local economies. An initial review of the literature and data reveals that companies in Western countries like the US see many risks associated with outsourcing; however, the cost savings associated with outsourcing weigh more heavily than the firm's local reputation. Taking India, an industrializing nation that has not yet fully secured its spot and title, as a good example of a supplier of IT developers, analysts and call centers: India has emerged as a powerhouse in the outsourcing industry, holding the number one spot in the world for outsourced IT services. Thanks to the globalization of economies and markets around the globe, ideas for increasing productivity at lower cost have existed for years and continue to offer new options to businesses in different industries.
The economic growth of the information technology (IT) industry in India is an example of the power of globalization, which in the case of India has been tremendous and significant, especially in the economic arena. This research paper concentrates on understanding the behavior of business leaders: first, how multinational leaders will face the new challenges and what actions help them lead in turbulent times; second, if outsourcing or withdrawal from a market is an option, what the consequences are and how to communicate and negotiate from the business leader's perspective; and finally, whether leaders focus on financial results or have a different goal. To answer these questions, this study uses the most recent data available to outline and present findings on why outsourcing is an option and on how and why those decisions are made. This research also explores the perception of the phenomenon of outsourcing and how globalization has contributed to its own questioning.

Keywords: anti-globalization, globalization, leadership, outsourcing

Procedia PDF Downloads 194
208 Working Capital Management Practices in Small Businesses in Victoria

Authors: Ranjith Ihalanayake, Lalith Seelanatha, John Breen

Abstract:

In this study, we explored current working capital management practices as applied in small businesses in Victoria, filling an existing theoretical and empirical gap in the literature in general and in Australia in particular. Amid the current globally competitive and dynamic environment, avoiding short-term insolvency is critical for the long-run survival of small businesses. A firm's short-term solvency depends on the availability of sufficient working capital to feed day-to-day operational activities. Therefore, given small businesses' reliance on short-term funding, it has been recognized that the efficient management of working capital is crucial to the prosperity and survival of such firms. Against this background, this research attempts to understand the current working capital management strategies and practices used by small-scale businesses. To this end, we conducted an internet survey among 220 small businesses operating in Victoria, Australia. The survey results suggest that the majority of respondents are owner-managers (73%) and male (68%). Most respondents hold a degree (46%), and about half are more than 50 years old. Most respondents (64%) have more than ten years of business management experience, and a similar majority (63%) have experience in the area of their current business. The businesses are private limited companies (41%), sole proprietorships (37%), and partnerships (15%). The majority are service companies (63%), followed by retail (25%) and manufacturing (17%). Company size varies: 32% have annual sales of $100,000 or under, while 22% have revenue of more than $1,000,000 per year. In regard to total assets, 43% of respondents have total assets of $100,000 or less, while 20% have total assets of more than $1,000,000.
In regard to working capital management practices (WCMPs), almost 70% of respondents stated that they are responsible for managing their business's working capital. The survey shows that the majority of respondents (65.5%) use their business experience to identify the level of investment in working capital, compared to 22% who seek advice from professionals; the remaining 10% follow industry practice. The survey also shows that more than half of the respondents maintain a good liquidity position for their business by keeping accounts payable lower than accounts receivable. This study finds that the majority of small businesses in the western area of Victoria have a WCM policy, but only about 8% have a formal policy; 52.7% have an informal policy, while 39.5% have no policy. Of those with a policy, 44% described their working capital management policy as a compromise policy, 35% as a conservative policy, and only 6% as an aggressive policy. Overall, the results indicate that small businesses pay little attention to the management of working capital despite its significance to the successful operation of the business. Such an approach may be viable during favourable economic times; however, during relatively turbulent economic conditions it could lead to greater financial difficulties, i.e., short-term insolvency.

Keywords: small business, working capital management, Australia, sufficient, financial insolvency

Procedia PDF Downloads 354
207 Continuous-Time Convertible Lease Pricing and Firm Value

Authors: Ons Triki, Fathi Abid

Abstract:

Along with the increase in the use of leasing contracts in corporate finance, multiple studies aim to model the credit risk of the lease in order to cover the losses of the lessor of the asset if the lessee goes bankrupt. In the current research paper, a convertible lease contract is elaborated in a continuous-time stochastic universe, aiming to ensure the financial stability of the firm and to quickly recover the losses of the counterparties to the lease in case of default. This work examines the term structure of lease rates, taking into account credit default risk and the capital structure of the firm. The interaction between the lessee's capital structure and the equilibrium lease rate is assessed by applying the competitive lease market argument developed by Grenadier (1996) and the endogenous structural default model set forward by Leland and Toft (1996). The cumulative probability of default is calculated by reference to Leland and Toft (1996) and Yildirim and Huan (2006). Additionally, the link between lessee credit risk and the lease rate is addressed in order to explore the impact of convertible lease financing on the term structure of the lease rate, the optimal leverage ratio, the cumulative default probability, and the optimal firm value, applying an endogenous conversion threshold. The numerical analysis suggests that the term structure of lease rates increases with the market price of risk. The maximal value of the firm decreases with the effect of the optimal leverage ratio. The results indicate that the cumulative probability of default increases with the maturity of the lease contract if the volatility of the asset service flows is significant. Introducing the convertible lease contract increases the optimal value of the firm as a function of asset volatility for a high initial service flow level and a conversion ratio close to 1.
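In Leland-Toft-style structural models, the cumulative default probability is a first-passage probability: the chance that the asset value, following a geometric Brownian motion, hits the default barrier before maturity. The sketch below uses the standard first-passage formula with invented parameters; it is not the paper's calibrated model.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def cumulative_default_prob(v0, vb, mu, sigma, t):
    """P(first passage of GBM asset value V below barrier V_B by time t)."""
    N = NormalDist().cdf
    x0 = log(v0 / vb)                 # log-distance to the barrier
    nu = mu - 0.5 * sigma ** 2        # drift of log(V)
    st = sigma * sqrt(t)
    return (N((-x0 - nu * t) / st)
            + exp(-2.0 * nu * x0 / sigma ** 2) * N((-x0 + nu * t) / st))

# Illustrative parameters: asset value 100, barrier 60, 3% drift, 25% volatility.
p5 = cumulative_default_prob(v0=100.0, vb=60.0, mu=0.03, sigma=0.25, t=5.0)
p10 = cumulative_default_prob(v0=100.0, vb=60.0, mu=0.03, sigma=0.25, t=10.0)
# the probability increases with maturity, consistent with the abstract's finding
```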

Keywords: convertible lease contract, lease rate, credit risk, capital structure, default probability

Procedia PDF Downloads 98
206 Effect of Helical Flow on Separation Delay in the Aortic Arch for Different Mechanical Heart Valve Prostheses by Time-Resolved Particle Image Velocimetry

Authors: Qianhui Li, Christoph H. Bruecker

Abstract:

Atherosclerotic plaques are typically found where flow separation and variations of shear stress occur. Although helical flow patterns and flow separations have both been recorded in the aorta, their relation has not been clearly established, especially in the presence of artificial heart valve prostheses. Therefore, an experimental study was performed to investigate the hemodynamic performance of different mechanical heart valves (MHVs), i.e. the SJM Regent bileaflet mechanical heart valve (BMHV) and the Lapeyre-Triflo FURTIVA trileaflet mechanical heart valve (TMHV), in a transparent model of the human aorta under a physiological pulsatile right-hand helical flow condition. A typical systolic flow profile is applied in the pulse duplicator to generate a physiological pulsatile flow, which then flows past an axial turbine blade structure to imitate the right-hand helical flow induced in the left ventricle. High-speed particle image velocimetry (PIV) measurements are used to map the flow evolution. A circular open-orifice nozzle inserted in the valve plane initially replaces the valve under investigation, serving as the reference configuration to understand the hemodynamic effects of the entering helical flow structure on the flow evolution in the aortic arch. Flow field analysis of the open-orifice nozzle configuration shows that the helical flow effectively delays flow separation at the inner radius wall of the aortic arch. The comparison of the flow evolution for the different MHVs shows that the BMHV acts like a flow straightener, re-configuring the helical flow pattern into three parallel jets (two side-orifice jets and the central-orifice jet), while the TMHV preserves the helical flow structure and therefore prevents flow separation at the inner radius wall of the aortic arch. The TMHV therefore has the better hemodynamic performance and reduces the pressure loss.

Keywords: flow separation, helical aortic flow, mechanical heart valve, particle image velocimetry

Procedia PDF Downloads 174
205 Architectural Wind Data Maps Using an Array of Wireless Connected Anemometers

Authors: D. Serero, L. Couton, J. D. Parisse, R. Leroy

Abstract:

In urban planning, an increasing number of cities require wind analysis to verify comfort of public spaces and around buildings. These studies are made using computer fluid dynamic simulation (CFD). However, this technique is often based on wind information taken from meteorological stations located at several kilometers of the spot of analysis. The approximated input data on project surroundings produces unprecise results for this type of analysis. They can only be used to get general behavior of wind in a zone but not to evaluate precise wind speed. This paper presents another approach to this problem, based on collecting wind data and generating an urban wind cartography using connected ultrasound anemometers. They are wireless devices that send immediate data on wind to a remote server. Assembled in array, these devices generate geo-localized data on wind such as speed, temperature, pressure and allow us to compare wind behavior on a specific site or building. These Netatmo-type anemometers communicate by wifi with central equipment, which shares data acquired by a wide variety of devices such as wind speed, indoor and outdoor temperature, rainfall, and sunshine. Beside its precision, this method extracts geo-localized data on any type of site that can be feedback looped in the architectural design of a building or a public place. Furthermore, this method allows a precise calibration of a virtual wind tunnel using numerical aeraulic simulations (like STAR CCM + software) and then to develop the complete volumetric model of wind behavior over a roof area or an entire city block. The paper showcases connected ultrasonic anemometers, which were implanted for an 18 months survey on four study sites in the Grand Paris region. This case study focuses on Paris as an urban environment with multiple historical layers whose diversity of typology and buildings allows considering different ways of capturing wind energy. 
The objective of this approach is to categorize the different types of wind in urban areas. In particular, the identification of the minimum and maximum wind spectrum helps define the choice and performance of the wind energy capturing devices that could be installed there, taking into account the location on the roof of a building, the type of wind, the altimetry of the device in relation to roof levels, and the potential nuisances generated. The method thus allows identifying the characteristics of wind turbines in order to maximize their performance on an urban site with turbulent wind.

Keywords: computer fluid dynamic simulation in urban environment, wind energy harvesting devices, net-zero energy building, urban wind behavior simulation, advanced building skin design methodology

Procedia PDF Downloads 101
204 Numerical Modelling of Hydrodynamic Drag and Supercavitation Parameters for Supercavitating Torpedoes

Authors: Sezer Kefeli, Sertaç Arslan

Abstract:

In this paper, supercavitation phenomena and parameters are explained, and hydrodynamic design approaches are investigated for supercavitating torpedoes. In addition, drag force calculation methods for supercavitating vehicles are presented. Conventional heavyweight torpedoes reach up to ~50 knots using classic hydrodynamic techniques; supercavitating torpedoes, on the other hand, may theoretically reach up to ~200 knots. However, in order to reach such high speeds, hydrodynamic viscous forces have to be reduced or eliminated completely. This necessity revived the supercavitation phenomenon, which is now implemented in conventional torpedoes. Supercavitation is a type of cavitation that is more stable and continuous than other cavitation types. Its general principle is to separate the underwater vehicle from the water phase by surrounding the vehicle with cavitation bubbles. This allows the torpedo to operate at high speeds through the water inside a fully developed cavity. Conventional torpedoes qualify as supercavitating torpedoes when the torpedo moves in a cavity envelope generated by a cavitator in the nose section and a solid fuel rocket engine in the rear section. There are two types of supercavitation phase: natural and artificial. In this study, natural cavitation over disk cavitators is investigated using numerical methods. Once the supercavitation characteristics and drag reduction of natural cavitation are studied on a CFD platform, the results are verified against the empirical equations. The supercavitation parameters investigated and compared with empirical results are the cavitation number (σ), the pressure distribution along the axial axis, the drag coefficient (C_d) and drag force (D), the cavity wall velocity (U_c), and the dimensionless cavity shape parameters, namely the cavity length (L_c/d_c), cavity diameter (d_m/d_c), and cavity fineness ratio (L_c/d_m). 
The paper thus serves as a feasibility study for carrying out numerical solutions of the supercavitation phenomenon and comparing them with empirical equations.
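As a rough illustration of the parameters listed above, the sketch below computes the cavitation number, a disk-cavitator drag coefficient, and the dimensionless cavity dimensions. The constants (C_d0 ≈ 0.82 for a disk cavitator, water properties at about 20 °C) and the Garabedian-type asymptotic cavity relations are textbook assumptions, not the specific correlations used in the paper.

```python
import math

RHO = 1000.0        # water density [kg/m^3]
P_INF = 101325.0    # ambient pressure at shallow depth [Pa]
P_CAVITY = 2340.0   # water vapor pressure at ~20 C [Pa]

def cavitation_number(v, p_inf=P_INF, p_c=P_CAVITY, rho=RHO):
    """sigma = (p_inf - p_c) / (0.5 * rho * v^2)."""
    return (p_inf - p_c) / (0.5 * rho * v * v)

def drag_coefficient(sigma, cd0=0.82):
    """Disk-cavitator drag law, Cd = Cd0 * (1 + sigma)."""
    return cd0 * (1.0 + sigma)

def cavity_shape(sigma, cd):
    """Garabedian-type asymptotics for the dimensionless cavity:
    max diameter d_m/d_c and length L_c/d_c."""
    dm_over_dc = math.sqrt(cd / sigma)
    lc_over_dc = math.sqrt(cd * math.log(1.0 / sigma)) / sigma
    return dm_over_dc, lc_over_dc

v = 100.0                       # ~200 knots, the theoretical target speed
sigma = cavitation_number(v)    # small sigma -> long, slender supercavity
cd = drag_coefficient(sigma)
dm, lc = cavity_shape(sigma, cd)
```

At 100 m/s the cavitation number is about 0.02, which is why the cavity is far longer than it is wide and the vehicle rides almost entirely inside vapor.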

Keywords: CFD, cavity envelope, high speed underwater vehicles, supercavitating flows, supercavitation, drag reduction, supercavitation parameters

Procedia PDF Downloads 173
203 Rising Levels of Greenhouse Gases: Implication for Global Warming in Anambra State South Eastern Nigeria

Authors: Chikwelu Edward Emenike, Ogbuagu Uchenna Fredrick

Abstract:

About 34% of the solar radiant energy reaching the earth is immediately reflected back to space by clouds, chemicals, and dust in the atmosphere and by the earth's surface. Most of the remaining 66% warms the atmosphere and land. Most of the incoming solar radiation that is not reflected away is degraded into low-quality heat that flows back into space, and the rate at which this energy returns to space is affected by the presence of greenhouse gas molecules. Gaseous emissions were measured with the aid of a Growen gas analyzer with a digital readout. Measurements of eight parameters at twelve selected sample locations were taken in two different seasons within two months. The ambient air quality investigation in Anambra State yielded the overall mean concentrations of gaseous emissions at the twelve (12) locations: NO2 = 0.66 ppm, SO2 = 0.30 ppm, CO = 43.93 ppm, H2S = 2.17 ppm, CH4 = 1.27 ppm, CFC = 1.59 ppb, CO2 = 316.33 ppm, N2O = 302.67 ppb and O3 = 0.37 ppm. These values do not conform to the National Ambient Air Quality Standard (NAAQS) and thus contribute significantly to global warming. Because some of these gases (SO2, NO2) are oxidizing agents, they act as irritants that damage delicate tissues in the eyes and respiratory passages. They can impair lung function and trigger cardiovascular problems as the heart tries to compensate for lack of oxygen by pumping faster and harder. The major sources of air pollution are transportation, industrial processes, stationary fuel combustion, and solid waste disposal, so much remains to be done in a developing country like Nigeria. Air pollution control using pollution-control equipment to reduce the major conventional pollutants, relocating people who live very close to dumpsites, and processing and treatment of gases to produce electricity, heat, fuel, and various chemical components should be encouraged.

Keywords: ambient air, atmosphere, greenhouse gases, anambra state

Procedia PDF Downloads 433
202 Surface Water Flow of Urban Areas and Sustainable Urban Planning

Authors: Sheetal Sharma

Abstract:

Urban planning is associated with land transformation from natural areas to modified and developed ones, which leads to modification of the natural environment. A basic knowledge of the relationship between the two should be ascertained before proceeding with the development of natural areas. Changes to the land surface due to built-up pavements, roads, and similar land cover affect surface water flow, and there is a gap between urban planning and the basic knowledge of hydrological processes that planners should have. The paper aims to identify these variations in surface flow due to urbanization over a temporal scale of 40 years using the Storm Water Management Model (SWMM), and then to correlate these findings with the urban planning guidelines in the study area, together with the geological background, to find suitable combinations of land cover, soil, and guidelines. For the purpose of identifying the changes in surface flows, 19 catchments were identified with different geology, different growth over the 40 years, and different groundwater level fluctuations. The increasing built-up area and the varying surface runoff were studied using ArcGIS, SWMM modeling, and regression analysis for runoff. The resulting runoff was observed for various land covers and soil groups under varying built-up conditions. The modeling procedure also included observations for varying precipitation with constant built-up area in all catchments. All these observations were combined for each catchment, and a single regression curve was obtained for runoff. It was observed that alluvium with suitable land cover was better for infiltration and generated the least runoff, but excess built-up area could not be sustained on alluvial soil. Similarly, basalt had the least recharge and the most runoff, demanding maximum vegetation over it. Sandstone resulted in good recharge if planned with more open spaces and natural soils with intermittent vegetation. 
These observations form a keystone base for planners when planning various land uses on different soils. This paper thus contributes a solution to the basic knowledge gap that urban planners face during the development of natural surfaces.
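As a minimal illustration of how land cover drives surface runoff, the sketch below combines an area-weighted runoff coefficient with the rational method, Q = C·i·A. The coefficient values and catchment figures are hypothetical placeholders, and the rational method is a far simpler stand-in for the SWMM routing and regression analysis used in the study.

```python
def composite_runoff_coefficient(covers):
    """Area-weighted runoff coefficient C for a mixed-cover catchment.

    covers: list of (area_km2, C) pairs, one per land-cover patch;
    C is high for built-up surfaces and low for pervious natural soils.
    """
    total_area = sum(area for area, _ in covers)
    return sum(area * c for area, c in covers) / total_area

def peak_runoff_m3s(c, intensity_mm_per_hr, area_km2):
    """Rational method Q = C * i * A, converted to SI units (m^3/s)."""
    i = intensity_mm_per_hr / 1000.0 / 3600.0  # rainfall intensity [m/s]
    a = area_km2 * 1.0e6                       # catchment area [m^2]
    return c * i * a

# Hypothetical catchment: 2 km^2 built-up (C ~ 0.9), 3 km^2 vegetated (C ~ 0.3)
c = composite_runoff_coefficient([(2.0, 0.9), (3.0, 0.3)])
q = peak_runoff_m3s(c, 30.0, 5.0)  # 30 mm/hr storm over 5 km^2
```

Increasing the built-up share raises the composite C, which is the same qualitative effect the regression curves in the study capture for each soil group.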

Keywords: runoff, built up, roughness, recharge, temporal changes

Procedia PDF Downloads 278
201 Embolism: How Changes in Xylem Sap Surface Tension Affect the Resistance against Hydraulic Failure

Authors: Adriano Losso, Birgit Dämon, Stefan Mayr

Abstract:

In vascular plants, water flows from roots to leaves in a metastable state, and even a small perturbation of the system can lead to a sudden transition from the liquid to the vapor phase, resulting in xylem embolism (cavitation). Xylem embolism, induced by drought stress and/or freezing stress, is caused by the aspiration of gas bubbles into xylem conduits from adjacent gas-filled compartments through pit membrane pores ('air seeding'). At water potentials less negative than the threshold for air seeding, the surface tension (γ) stabilizes the air-water interface and thus prevents air from passing the pit pores. This probably also holds true for conifers, where this effect occurs at the edge of the sealed torus. Accordingly, it has been experimentally demonstrated that γ influences air seeding, but information on the relevance of this effect under field conditions is missing. In this study, we analyzed seasonal changes in the γ of the xylem sap in two conifers growing at the alpine timberline (Picea abies and Pinus mugo). In addition, cut branches were perfused (40 min perfusion at 0.004 MPa) with solutions of different γ (i.e., distilled and degassed water and 2, 5 and 15% (v/v) ethanol-water solutions, corresponding to γ of 74, 65, 55 and 45 mN m-1, respectively), and their vulnerability to drought-induced embolism was analyzed via the centrifuge technique (Cavitron). In both species, xylem sap γ changed considerably over the season (ca. 53-67 and ca. 50-68 mN m-1 in P. abies and P. mugo, respectively). Branches perfused with low γ solutions showed reduced resistance against drought-induced embolism in both species. A significant linear relationship (P < 0.001) between P12, P50 and P88 (i.e., the water potential at 12, 50 and 88% loss of conductivity) and xylem sap γ was found. Based on this correlation, a variation in P50 between -3.10 and -3.83 MPa (P. abies) and between -3.21 and -4.11 MPa (P. mugo) over the season could be estimated. 
The results demonstrate that changes in the γ of the xylem sap can considerably influence a tree's resistance to drought-induced embolism. They indicate that vulnerability analyses, normally conducted at a γ near that of pure water, might often underestimate vulnerabilities under field conditions. For the studied timberline conifers, seasonal changes in γ might be especially relevant in winter, when frost drought and freezing stress can lead to excessive embolism.

Keywords: conifers, Picea abies, Pinus mugo, timberline

Procedia PDF Downloads 294
200 Intermittent Effect of Coupled Thermal and Acoustic Sources on Combustion: A Spatial Perspective

Authors: Pallavi Gajjar, Vinayak Malhotra

Abstract:

Rockets have played a predominant role in spacecraft propulsion. A quintessential combustion-related requirement of a rocket engine is the minimization of the surrounding risks and hazards. Over time, it has become imperative to understand how the combustion rate varies in the presence of external energy source(s). Rocket propulsion represents a special domain of chemical propulsion assisted by high-speed flows in the presence of acoustic and thermal source(s). Jet noise leads to a significant loss of resources, and every year a huge amount of financial aid is spent to prevent it. External heat source(s) induce a high possibility of fire risks and hazards, which can seriously endanger the operation of a space vehicle. Appreciable work has been done with justifiable simplifications and an emphasis on the linear variation of external energy source(s); this yields good physical insight but does not allow accurate predictions. The present work experimentally attempts to understand the correlation between inter-energy conversions and the non-linear placement of external energy source(s). The work is motivated by the need for better fire safety and enhanced combustion. The specific objectives of the work are: (a) to interpret the related energy transfer for combustion in the presence of alternate external energy source(s), viz. thermal and acoustic; (b) to fundamentally understand the role of key controlling parameters, viz. separation distance, the number of source(s), selected configurations, and their non-linear variation to resemble real-life cases. An experimental setup was prepared using incense sticks as the fuel and paraffin wax candles as the external energy source(s). The acoustics was generated using a frequency generator, and the source(s) were placed at selected locations. Non-equidistant parametric experimentation was carried out, and the effects on regression rate changes were noted. 
The results are expected to be very helpful in offering a new perspective into futuristic rocket designs and safety.

Keywords: combustion, acoustic energy, external energy sources, regression rate

Procedia PDF Downloads 141
199 The Role of Institutional Quality and Institutional Quality Distance on Trade: The Case of Agricultural Trade within the Southern African Development Community Region

Authors: Kgolagano Mpejane

Abstract:

The study applies a New Institutional Economics (NIE) analytical framework to trade in developing economies by assessing the impacts of institutional quality and institutional quality distance on agricultural trade, using panel data from 15 Southern African Development Community (SADC) countries over the years 1991-2010. The effect of institutions on agricultural trade has not been accorded the necessary attention in the literature, particularly in developing economies. Therefore, the paper empirically tests the gravity model of international trade by measuring the impact of political, economic, and legal institutions on intra-SADC agricultural trade. The gravity model is noted for its explanatory power and strong theoretical foundation. However, the model has statistical shortcomings in dealing with zero trade values and heteroscedastic residuals, which lead to biased results. Therefore, this study employs a two-stage Heckman selection model with a probit selection equation to estimate the influence of institutions on agricultural trade. The selection stage supplies the inverse Mills ratio, which accounts for the selection bias of the gravity model; the Heckman model handles zero trade values and is robust in the presence of heteroscedasticity. The empirical results of the study support the NIE premise that institutions matter in trade. The results demonstrate that institutions determine bilateral agricultural trade on different margins: political institutions have a positive and significant influence on bilateral agricultural trade flows within the SADC region, while legal and economic institutions have significant negative effects on SADC trade. Furthermore, the results confirm that institutional quality distance influences agricultural trade. Legal and political institutional distance have a positive and significant influence on bilateral agricultural trade, while the influence of economic institutional quality distance is negative and insignificant. 
The results imply that non-trade barriers, in the form of institutional quality and institutional quality distance, are significant factors limiting intra-SADC agricultural trade. Therefore, gains from intra-SADC agricultural trade can be attained through the improvement of institutions within the region.
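A sketch of the correction term at the heart of the two-stage estimator described above: the first-stage probit yields, for each country pair, an index z from which the inverse Mills ratio λ(z) = φ(z)/Φ(z) is computed and appended as an extra regressor in the second-stage gravity equation. This is the generic textbook construction, not the authors' estimation code.

```python
import math

def norm_pdf(z):
    """Standard normal density phi(z)."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    """Standard normal CDF Phi(z), via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def inverse_mills_ratio(z):
    """lambda(z) = phi(z) / Phi(z).

    Appended as a regressor in the second-stage gravity equation so
    that dropping the zero-trade observations does not bias the
    estimated institutional effects."""
    return norm_pdf(z) / norm_cdf(z)
```

Pairs with a high probability of trading (large z) get a small correction, while pairs barely selected into positive trade get a large one, which is exactly how the Heckman procedure repairs the gravity model's selection bias.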

Keywords: agricultural trade, institutions, gravity model, SADC

Procedia PDF Downloads 148
198 The Impact of Human Intervention on Net Primary Productivity for the South-Central Zone of Chile

Authors: Yannay Casas-Ledon, Cinthya A. Andrade, Camila E. Salazar, Mauricio Aguayo

Abstract:

The sustainable management of available natural resources is a crucial question for policy-makers, economists, and the research community. Land constitutes one of the most critical of these resources: it is being intensively appropriated by human activities, producing ecological stresses and reducing ecosystem services. In this context, net primary production (NPP) has been considered a feasible proxy indicator for estimating the impact of human intervention on land-use intensity. Accordingly, the human appropriation of NPP (HANPP) was calculated for the south-central regions of Chile between 2007 and 2014. HANPP was defined as the difference between the potential NPP of the natural vegetation (NPP0, i.e., the vegetation that would exist without any human interference) and the NPP remaining in the field after harvest (NPPeco), expressed in gC/m² yr. Other NPP flows taken into account in the HANPP estimation were the harvested NPP (NPPh) and the losses of NPP through land conversion (NPPluc). The ArcGIS 10.4 software was used to assess the spatial and temporal HANPP changes. The differentiation of HANPP as a percentage of NPP0 was estimated for each land-cover type, taking 2007 and 2014 as the reference years. The spatial results depicted a negative impact on land-use efficiency between 2007 and 2014, showing negative HANPP changes for the whole region; harvest and biomass losses through land conversion are the leading causes of this loss of land-use efficiency. Furthermore, the study found higher HANPP in 2014 than in 2007, reaching 50% of NPP0 across the land-cover classes relative to 2007. This was mainly related to the larger volume of biomass harvested for agriculture; consequently, cropland showed the highest HANPP, followed by plantations, highlighting the strong positive correlation between HANPP and the economic activities developed in the region. 
These findings constitute a basis for a better understanding of the main driving forces influencing biomass productivity and a powerful metric for supporting the sustainable management of land use.
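The HANPP bookkeeping described above can be written down compactly. In the sketch below (with illustrative numbers, not the study's mapped values), HANPP is the gap between the potential NPP and the NPP remaining in the ecosystem after harvest, and it decomposes into a land-conversion term plus a harvest term.

```python
def hanpp_gC_m2_yr(npp0, npp_act, npp_harvest):
    """HANPP accounting, all flows in gC/m^2/yr.

    npp0        potential NPP of the natural vegetation (NPP0)
    npp_act     actual NPP of the current, human-altered vegetation
    npp_harvest NPP withdrawn at harvest (NPPh)
    """
    npp_luc = npp0 - npp_act          # losses through land conversion (NPPluc)
    npp_eco = npp_act - npp_harvest   # NPP remaining in the field (NPPeco)
    hanpp = npp0 - npp_eco            # identically npp_luc + npp_harvest
    return hanpp, npp_luc, npp_eco

# Illustrative cropland cell: potential 1000, actual 700, harvested 200
hanpp, luc, eco = hanpp_gC_m2_yr(1000.0, 700.0, 200.0)
```

The identity HANPP = NPPluc + NPPh is what lets the study attribute the efficiency loss separately to land conversion and to harvest.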

Keywords: human appropriation, land-use changes, land-use impact, net primary productivity

Procedia PDF Downloads 137
197 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model

Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle

Abstract:

In the workplace, the exposure level of airborne contaminants should be evaluated due to health and safety issues. This can be done by numerical models or experimental measurements, but the numerical approach is useful when it is challenging to perform experiments. One of the simplest models is the well-mixed room (WMR) model, which has shown its usefulness for predicting inhalation exposure in many situations. However, since the WMR is limited to gases and vapors, it cannot be used to predict exposure to aerosols. The main objective is to modify the WMR model to expand its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model has been modified to consider the deposition of particles by gravitational settling and by Brownian and turbulent deposition, with three deposition models implemented. The time-dependent concentrations of airborne particles predicted by the model were compared to experimental results obtained in a 0.512 m³ chamber. Polystyrene particles of 1, 2, and 3 µm aerodynamic diameter were generated with a nebulizer under two air-change-per-hour (ACH) conditions. The well-mixed condition and the chamber ACH were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, one of the input variables of the deposition models, was determined by computational fluid dynamics (CFD) simulation. For the experimental procedure, particles were generated until reaching the steady-state condition (emission period); generation then stopped, and concentration measurements continued until reaching the background concentration (decay period). The tracer gas decay tests revealed that the ACHs of the chamber were 1.4 and 3.0 and that the well-mixed condition was achieved. The CFD results showed that the average mean friction velocities (and standard deviations) for the lowest and highest ACH were (8.87 ± 0.36) × 10⁻² m/s and (8.88 ± 0.38) × 10⁻² m/s, respectively. 
The numerical results indicated that the difference between the deposition rates predicted by the three deposition models was less than 2%. The experimental and numerical aerosol concentrations were compared in the emission and decay periods. In both periods, the prediction accuracy of the modified model improved in comparison with the classic WMR model, although a difference between the measured and predicted values remains. In the emission period, the modified WMR results closely follow the experimental data; in the decay period, however, the model significantly overestimates the experimental results. This finding is mainly due to an underestimation of the deposition rate in the model and to uncertainties related to the measurement devices and the particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, yet the rate predicted by the deposition mechanisms considered in the model was ten times lower than the experimental value. Thus, particle deposition is significant, will affect the airborne concentration in occupational settings, and should be considered in airborne exposure prediction models. The role of other removal mechanisms should be investigated.
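The modification described above can be summarized in one equation: the classic WMR balance dC/dt = S/V − (Q/V)·C gains an extra first-order loss term β for particle deposition, so the total removal rate becomes k = Q/V + β. A minimal sketch of the resulting analytical solution follows, assuming clean supply air; in the study, β would come from the gravitational, Brownian, and turbulent deposition sub-models.

```python
import math

def wmr_concentration(t, volume, q_flow, source, beta, c0=0.0):
    """Well-mixed room model with a first-order deposition loss.

    Analytical solution of dC/dt = source/volume - (q_flow/volume + beta) * C.
    t: time [s]; volume: room volume [m^3]; q_flow: ventilation [m^3/s];
    source: contaminant emission rate [mass/s]; beta: deposition rate [1/s].
    """
    k = q_flow / volume + beta        # total removal rate [1/s]
    c_ss = source / (volume * k)      # steady-state concentration
    return c_ss + (c0 - c_ss) * math.exp(-k * t)
```

During the decay period the source is zero and the expression reduces to C(t) = C0·exp(−k·t), which is why an underestimated β translates directly into an overestimated decay-period concentration.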

Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model

Procedia PDF Downloads 103
196 Media, Politics and Power in the Representation of the Refugee and Migration Crisis in Europe

Authors: Evangelia-Matroni Tomara

Abstract:

This thesis addresses the question of whether the media representations and reporting of 2015-2016 (especially after the image of a drowned three-year-old Syrian boy in the Mediterranean Sea made global headlines at the beginning of September 2015), together with the European Commission's regulatory source material and related reporting, have the power to challenge the conceptualization of humanitarianism or even redefine it. The theoretical foundations of the thesis rest on humanitarianism and its core definitions, the power of media representations and the portrayal of migrants, refugees, and/or asylum seekers, as well as the dominant migration discourse and EU migration governance. Using content analysis of the media portrayal of migrants (436 newspaper articles) and qualitative content analysis of the European Commission communication documents from May 2015 until June 2016, which required various depths of interpretation, this thesis revises the concept of humanitarianism: the current crisis may seem to be a turning point for Europe, but it is not enough to overcome past hostile media discourses or to suppress the historical tendency toward security- and control-oriented EU migration policies. In particular, the crisis shifted the intensity of hostility and the persistence of state-centric, border-oriented securitization in Europe into a narration of victimization rather than threat, dominated by dynamics of mercy and charity, and into operational mechanisms addressing the emergency of immediate management of massive migration flows. Although a rights-based response to the ongoing migration crisis is discursively followed on both the political and the media stage, the nexus described points out that the binary between 'us' and 'them' still exists, with the only difference being that the 'invaders' are now 'pathetic' but still 'invaders'. 
In this context, the migration crisis challenges the concept of humanitarianism, because rights dignify migrants as individuals only at a discursive or secondary level, while humanitarian work is mostly tied to the geopolitical and economic interests of the 'savior' states.

Keywords: European Union politics, humanitarianism, immigration, media representation, policy-making, refugees, security studies

Procedia PDF Downloads 293
195 Interfacial Instability and Mixing Behavior between Two Liquid Layers Bounded in Finite Volumes

Authors: Lei Li, Ming M. Chai, Xiao X. Lu, Jia W. Wang

Abstract:

The mixing process of two liquid layers in a cylindrical container involves the upper, denser liquid rushing into the lower, lighter liquid while the lower liquid rises into the upper one; the two layers interact with each other, forming vortices, spreading or dispersing into one another, and entraining or mixing with each other. It is a complex, rapidly evolving process composed of flow instability, turbulent mixing, and other multiscale physical phenomena. In order to explore the mechanism of the process, experiments on the interfacial instability and mixing behavior between two liquid layers bounded in different volumes were carried out, applying planar laser-induced fluorescence (PLIF) and high-speed camera (HSC) techniques. According to the results, the interfacial instability between immiscible liquids develops faster than the theoretical rate given by Rayleigh-Taylor instability (RTI) theory. It is reasonable to conjecture that mechanisms other than the RTI play key roles in the mixing of the two liquid layers. The results show that the velocity at which the upper liquid invades the lower liquid does not depend on the upper liquid's volume (height). Compared to the cases in which the upper and lower containers are of identical diameter, when the lower liquid occupies a larger geometric space the upper liquid spreads and expands into the lower liquid more quickly during the evolution of the interfacial instability, indicating that the container wall has an important influence on the mixing process. 
In the experiments on miscible liquid layers' mixing, the diffusion time and pattern of the interfacial mixing likewise do not depend on the upper liquid's volume; when the lower liquid occupies a larger geometric space, the action of the bounding wall on the falling and rising flow decreases, and the interfacial mixing effects also attenuate. It is therefore concluded that the weight of the upper, heavier liquid's volume is not the reason for the fast evolution of the interfacial instability between the two liquid layers, and that the bounding wall's action on the unstable and mixing flow is limited. Numerical simulations of the immiscible layers' interfacial instability flow using the VOF method show typical flow patterns that agree with the experiments; however, the calculated instability development is much slower than the experimental measurement. The numerical simulation of the miscible liquids' mixing, which applies Fick's diffusion law in the component transport equation, shows a much faster mixing rate at the liquids' interface than the experiments during the initial stage. It can be presumed that the interfacial tension plays an important role in the interfacial instability between two liquid layers bounded in a finite volume.
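For reference, the theoretical rate invoked above is the linear Rayleigh-Taylor growth rate; with surface tension included, σ² = A·g·k − γ·k³/(ρ₁+ρ₂), so short wavelengths are stabilized. The sketch below is this textbook dispersion relation, not the authors' VOF or Cahn-Hilliard Navier-Stokes solver, and the fluid properties are illustrative.

```python
import math

def atwood_number(rho_heavy, rho_light):
    """A = (rho1 - rho2) / (rho1 + rho2) for heavy-over-light layering."""
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def rt_growth_rate(k, rho_heavy, rho_light, gamma=0.0, g=9.81):
    """Linear RT growth rate [1/s] for wavenumber k [1/m]:
    sigma^2 = A*g*k - gamma*k^3 / (rho_heavy + rho_light).
    Returns 0 for wavenumbers stabilized by surface tension."""
    a = atwood_number(rho_heavy, rho_light)
    sigma_sq = a * g * k - gamma * k ** 3 / (rho_heavy + rho_light)
    return math.sqrt(sigma_sq) if sigma_sq > 0.0 else 0.0
```

A finite γ both slows the growth at moderate wavenumbers and cuts it off entirely above a critical k, consistent with the paper's presumption that interfacial tension matters for the observed evolution.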

Keywords: interfacial instability and mixing, two liquid layers, Planar Laser Induced Fluorescence (PLIF), High Speed Camera (HSC), interfacial energy and tension, Cahn-Hilliard Navier-Stokes (CHNS) equations

Procedia PDF Downloads 248
194 Flow Duration Curves and Recession Curves Connection through a Mathematical Link

Authors: Elena Carcano, Mirzi Betasolo

Abstract:

This study helps public water bureaus give reliable answers to water concession requests. Rapidly increasing water requests can be supported provided that further uses of a river course are not totally compromised and environmental features are protected. Strictly speaking, a water concession is a continuous drawing from the source and causes a reduction of the mean annual streamflow. Deciding whether a water concession is appropriate might therefore seem easily solved by comparing the generic demand to the mean annual streamflow available. Still, the immediate shortcoming of such a comparison is that streamflow data are available only for a few catchments and, most often, limited to specific sites. Moreover, comparing the generic water demand to the mean daily discharge is far from satisfactory, since the mean daily streamflow is greater than the water withdrawal for a long period of the year; such a comparison is of little significance for preserving the quality and quantity of the river. To overcome this limit, this study completes the information provided by flow duration curves (FDCs), introducing a link between FDCs and recession curves, and aims to show the chronological sequence of flows with a particular focus on low-flow data. The analysis is carried out on 25 catchments located in north-eastern Italy for which daily data are available. The results identify groups of catchments that are hydrologically homogeneous, with the lower part of the FDCs (the streamflow interval between the durations of 300 and 335 days, namely Q(300) and Q(335)) smoothly reproduced by a common recession curve. In conclusion, the results are useful for providing more reliable answers to water requests, especially for catchments which show similar hydrological responses, and can be used for a regionalization approach focused on low-flow data. 
A mathematical link between flow duration curves and recession curves is herein provided, thus furnishing flow duration curve information with a temporal sequence of data. In this way, by introducing assumptions on the recession curves, a chronological sequence can also be attributed to the low-flow portion of the FDCs, which are known to lack this information by nature.
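The link can be illustrated with a minimal sketch: an empirical FDC orders the flows by exceedance duration, and an assumed exponential master recession Q(t) = Q0·exp(−t/τ) then attaches a chronological time to each point on the low-flow tail. The Weibull plotting position and the exponential recession form are common textbook choices, not necessarily the ones adopted in the paper.

```python
import math

def flow_duration_curve(daily_flows):
    """Empirical FDC as (duration in days/year, flow) pairs: flows sorted
    in decreasing order, Weibull plotting position for the duration."""
    q = sorted(daily_flows, reverse=True)
    n = len(q)
    return [(365.0 * (i + 1) / (n + 1), q[i]) for i in range(n)]

def recession_flow(q0, t_days, tau_days):
    """Exponential master recession: Q(t) = Q0 * exp(-t / tau)."""
    return q0 * math.exp(-t_days / tau_days)

def time_since_recession_start(q, q0, tau_days):
    """Invert the recession to date a low flow q on the FDC tail."""
    return tau_days * math.log(q0 / q)
```

Inverting the recession is what restores the chronological sequence the FDC lacks: each low flow Q(d) on the tail maps to a number of days elapsed since the recession began.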

Keywords: chronological sequence of discharges, recession curves, streamflow duration curves, water concession

Procedia PDF Downloads 186
193 Comparative Morphometric Analysis of Yelganga-Shivbhadra and Kohilla River Sub-Basins in Aurangabad District Maharashtra India

Authors: Chandrakant Gurav, Md Babar, Ajaykumar Asode

Abstract:

Morphometric analysis is the first stage of any basin analysis. Morphometric parameters give indirect information about the nature of a stream and its relations with other streams, the geology of the area, groundwater conditions, and the tectonic history of the basin. In the present study, the Yelganga, Shivbhadra, and Kohilla rivers, tributaries of the Godavari River in Aurangabad district, Maharashtra, India, are considered in order to compare and study their morphometric characteristics. The linear, areal, and relief morphometric aspects of the sub-basins have been assessed and evaluated in a GIS environment. For this study, the ArcGIS 10.1 software has been used for delineating, digitizing, and generating the different thematic maps. Survey of India (SOI) toposheet maps and the Shuttle Radar Topography Mission (SRTM) Digital Elevation Model (DEM) at 30 m resolution, downloaded from the United States Geological Survey (USGS), have been used for map preparation and data generation. Geologically, the study area lies within the Central Deccan Volcanic Province (CDVP) and mainly consists of 'aa'-type basaltic lava flows of Late (Upper) Cretaceous to Early (Lower) Eocene age. The total geographical areas of the Yelganga, Shivbhadra, and Kohilla river sub-basins are 185.5 sq. km, 142.6 sq. km, and 122.3 sq. km, respectively. The stream ordering method suggested by Strahler has been employed, and all the sub-basins were found to be of 5th order. The average bifurcation ratio of each sub-basin is below 5, indicating that there appears to be no strong structural control on drainage development, that the lithology is homogeneous, and that the drainage network is in a well-developed stage of erosion. The drainage densities of the Yelganga, Shivbhadra, and Kohilla sub-basins are 1.79 km/km², 1.48 km/km², and 1.89 km/km², respectively, and the stream frequencies are 1.94 streams/km², 1.19 streams/km², and 1.68 streams/km², respectively, indicating a semi-permeable sub-surface. 
Based on textural ratio values it indicates that the sub-basins have coarse texture. Shape parameters such as form factor ratio, circularity ratio and elongation ratio values shows that all three sub- basins are elongated in shape.
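The ratios discussed above follow directly from stream counts and basin geometry; a minimal sketch might look as follows. The stream counts and total stream length used here are hypothetical illustrations (loosely back-calculated to match the Yelganga figures), not the authors' data.

```python
# Illustrative sketch only: basic morphometric ratios from stream counts
# and basin geometry. Input values below are hypothetical, not study data.

def bifurcation_ratio(stream_counts):
    """Mean ratio of stream numbers between successive Strahler orders."""
    ratios = [stream_counts[i] / stream_counts[i + 1]
              for i in range(len(stream_counts) - 1)]
    return sum(ratios) / len(ratios)

def drainage_density(total_stream_length_km, basin_area_km2):
    """Total stream length per unit basin area (km/km^2)."""
    return total_stream_length_km / basin_area_km2

def stream_frequency(total_stream_count, basin_area_km2):
    """Number of stream segments per unit basin area (streams/km^2)."""
    return total_stream_count / basin_area_km2

# Hypothetical 5th-order basin: stream counts for Strahler orders 1..5
counts = [180, 42, 10, 3, 1]
rb = bifurcation_ratio(counts)        # mean bifurcation ratio
dd = drainage_density(332.0, 185.5)   # ~1.79 km/km^2
fs = stream_frequency(360, 185.5)     # ~1.94 streams/km^2
```

A bifurcation ratio below 5, as computed here, would match the weak-structural-control interpretation given in the abstract.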

Keywords: GIS, Kohilla, morphometry, Shivbhadra, Yelganga

Procedia PDF Downloads 156
192 Streamflow Modeling Using the PyTOPKAPI Model with Remotely Sensed Rainfall Data: A Case Study of Gilgel Ghibe Catchment, Ethiopia

Authors: Zeinu Ahmed Rabba, Derek D Stretch

Abstract:

Remote sensing contributes valuable information to streamflow estimates. Usually, streamflow is measured directly through ground-based hydrological monitoring stations. However, in many developing countries such as Ethiopia, ground-based hydrological monitoring networks are either sparse or nonexistent, which limits the ability to manage water resources and hampers early flood-warning systems. In such cases, satellite remote sensing is an alternative means of acquiring such information. This paper discusses the application of remotely sensed rainfall data for streamflow modeling in the Gilgel Ghibe basin in Ethiopia. Ten years (2001-2010) of two satellite-based precipitation products (SBPP), TRMM and WaterBase, were used. These products were combined with the PyTOPKAPI hydrological model to generate daily streamflows. The results were compared with streamflow observations at the Gilgel Ghibe Nr. Assendabo gauging station using four statistical tools (Bias, R², NS and RMSE). The statistical analysis indicates that the bias-adjusted SBPPs agree well with gauged rainfall compared to the bias-unadjusted ones. The SBPPs without bias adjustment tend to overestimate (high Bias and high RMSE) the extreme precipitation events and the corresponding simulated streamflow outputs, particularly during wet months (June-September), and to underestimate the streamflow prediction over a few dry months (January and February). This shows that bias adjustment can be important for improving the performance of the SBPPs in streamflow forecasting. We further conclude that the general streamflow patterns were well captured at daily time scales when using SBPPs after bias adjustment. However, the overall results demonstrate that the streamflow simulated using the gauged rainfall is superior to that obtained from remotely sensed rainfall products, including the bias-adjusted ones.
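The four evaluation statistics named above, together with a simple multiplicative bias adjustment, can be sketched as follows. This is a generic illustration assuming a single scaling-factor adjustment; it is not the authors' exact procedure, nor part of the PyTOPKAPI API.

```python
# Generic goodness-of-fit statistics for simulated vs. observed streamflow,
# plus a simple multiplicative bias adjustment of satellite rainfall.
# Illustrative only; not the study's exact adjustment method.
import math

def bias(sim, obs):
    """Mean error of simulated values relative to observations."""
    return sum(s - o for s, o in zip(sim, obs)) / len(obs)

def rmse(sim, obs):
    """Root mean square error."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

def nash_sutcliffe(sim, obs):
    """NS efficiency: 1 is a perfect fit, <0 is worse than the mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def bias_adjust(satellite, gauge):
    """Scale satellite rainfall so its total matches the gauge total."""
    factor = sum(gauge) / sum(satellite)
    return [factor * p for p in satellite]
```

For example, a satellite series that doubles every gauge value would be rescaled by a factor of 0.5, removing the systematic overestimation that drives high Bias and RMSE in the unadjusted products.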

Keywords: Ethiopia, PyTOPKAPI model, remote sensing, streamflow, Tropical Rainfall Measuring Mission (TRMM), WaterBase

Procedia PDF Downloads 286
191 Erosion and Deposition of Terrestrial Soil Supplies Nutrients to Estuaries and Coastal Bays: A Flood Simulation Study of Sediment-Nutrient Flux

Authors: Kaitlyn O'Mara, Michele Burford

Abstract:

Estuaries and coastal bays can receive large quantities of sediment from surrounding catchments during flooding or high-flow periods. Large river systems that feed freshwater into estuaries can flow through several catchments of varying geology. Human modification of catchments for agriculture, industry and urban use can contaminate soils with excess nutrients, trace metals and other pollutants. Land clearing, especially clearing of riparian vegetation, can accelerate erosion, mobilising, transporting and depositing soil particles into rivers, estuaries and coastal bays. In this study, a flood simulation experiment was used to study the flux of nutrients between soil particles and water during the erosion, transport and deposition process. Granite, sedimentary and basalt surface soils (as well as sub-soils of granite and sedimentary origin) were collected from eroding areas surrounding the Brisbane River, Australia. The <63 µm size fraction of each soil type was tumbled in freshwater for 3 days to simulate flood erosion and transport, followed by stationary exposure to seawater for 4 weeks to simulate deposition into estuaries. Filtered water samples were taken at multiple time points throughout the experiment and analysed for nutrient concentrations. The highest rates of nutrient release occurred during the first hour of exposure to both freshwater and seawater, indicating a chemical reaction with seawater that may release nutrient particles that remain bound to the soil during turbulent freshwater transport. Although release was slower after the first hour, all of the surface soil types showed continual ammonia, nitrite and nitrate release over the 4-week seawater exposure, suggesting that these soils may provide an ongoing supply of these nutrients to estuarine waters after deposition. Basalt surface soil released the highest concentrations of phosphates and dissolved organic phosphorus. 
Basalt soils are found in much of the agricultural land surrounding the Brisbane River and contributed largely to the 2011 Brisbane River flood plume deposit in Moreton Bay, suggesting these soils may be a source of phosphate enrichment in the bay. The results of this study suggest that erosion of catchment soils during storm and flood events may be a source of nutrient supply in receiving waterways, both freshwater and marine, and that the amount of nutrient release following these events may be affected by the type of soil deposited. For example, flooding in different catchments of a river system over time may result in different algal and food web responses in receiving estuaries.

Keywords: flood, nitrogen, nutrient, phosphorus, sediment, soil

Procedia PDF Downloads 186
190 Inviscid Steady Flow Simulation Around a Wing Configuration Using MB_CNS

Authors: Muhammad Umar Kiani, Muhammad Shahbaz, Hassan Akbar

Abstract:

Simulation of a high-speed inviscid steady ideal air flow around a 2D/axisymmetric body was carried out using the mb_cns code. mb_cns is a program for the time-integration of the Navier-Stokes equations for two-dimensional compressible flows on a multiple-block structured mesh. The flow geometry may be either planar or axisymmetric, and multiply-connected domains can be modeled by patching together several blocks. The main simulation code is accompanied by a set of pre- and post-processing programs. The pre-processing programs scriptit and mb_prep start from a short script describing the geometry, initial flow state and boundary conditions and produce a discretized version of the initial flow state. The main flow simulation program (or solver, as it is sometimes called) is mb_cns. It takes the files prepared by scriptit and mb_prep, integrates the discrete form of the gas flow equations in time and writes the evolved flow data to a set of output files. This output data may consist of the flow state (over the whole domain) at a number of instants in time. After integration in time, the post-processing programs mb_post and mb_cont can be used to reformat the flow state data and produce GIF or PostScript plots of flow quantities such as pressure, temperature and Mach number. The current problem is an example of supersonic inviscid flow. The flow domain (a strake-configuration wing) is discretized by a structured grid, and a finite-volume approach is used to discretize the conservation equations. The flow field is recorded as cell-average values at cell centers, and explicit time stepping is used to update the conserved quantities. MUSCL-type interpolation and one of three flux calculation methods (a Riemann solver, AUSMDV flux splitting, or the Equilibrium Flux Method, EFM) are used to calculate the inviscid fluxes across cell faces.
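The cell-average, explicit-time-stepping update described above can be illustrated with a much-simplified scalar analogue. mb_cns itself solves the compressible flow equations with Riemann, AUSMDV or EFM face fluxes; the sketch below merely advects a scalar with a first-order upwind flux to show the finite-volume update structure, and is not drawn from the mb_cns source.

```python
# Simplified scalar analogue of a finite-volume update: cell-average values
# at cell centers, face fluxes, and an explicit time step. Illustration
# only; mb_cns uses MUSCL reconstruction and Riemann/AUSMDV/EFM fluxes.

def upwind_flux(u_left, u_right, a):
    """Face flux for linear advection q_t + a q_x = 0 (upwind selection)."""
    return a * (u_left if a > 0 else u_right)

def fv_step(q, a, dx, dt):
    """One explicit finite-volume update of the cell averages q."""
    n = len(q)
    # Fluxes at the n+1 cell faces, with zero-gradient boundaries
    flux = [upwind_flux(q[max(i - 1, 0)], q[min(i, n - 1)], a)
            for i in range(n + 1)]
    # Conservative update: cell average changes by the net face flux
    return [q[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(n)]

# Advect a step profile to the right at CFL number 0.5
q = [1.0] * 5 + [0.0] * 5
for _ in range(4):
    q = fv_step(q, a=1.0, dx=1.0, dt=0.5)
```

Replacing `upwind_flux` with an approximate Riemann solver acting on left/right MUSCL-reconstructed states, and the scalar with the vector of conserved gas variables, recovers the structure of the scheme described in the abstract.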

Keywords: steady flow simulation, processing programs, simulation code, inviscid flux

Procedia PDF Downloads 429
189 A Telecoupling Lens to Study Global Sustainability Entanglements along Supply Chains: The Case of Dutch-Kenyan Rose Trade

Authors: Klara Strecker

Abstract:

During times of globalization, socioeconomic systems have become connected across the world through global supply chains. As a result, consumption and production locations have become increasingly spatially decoupled. This decoupling leads to complex entanglements of systems and sustainability challenges across distances, entanglements which can be conceptualized as telecouplings. Through telecouplings, people and environments across the world have become closely connected, bringing challenges as well as opportunities. Some argue that telecoupling dynamics first took shape during colonization, when resources were first traded across the world. An example of such a telecoupling is that of the rose. Every third rose sold in Europe is grown in Kenya and enters the European market through the Dutch flower auction system. Many Kenyan farms are Dutch-owned, closely entangling Kenya and the Netherlands through the trade of roses. Furthermore, the globalization of the flower industry and the resulting shift of production away from the Netherlands and towards Kenya have led to significant changes in the Dutch horticulture sector. However, the sustainability effects of this rose telecoupling are limited neither to the horticulture sector nor to the Netherlands and Kenya. Alongside the flow of roses between these countries come complex financial, knowledge-based, and regulatory flows. The rose telecoupling also creates spillover effects in other countries, such as Ethiopia, and other industries, such as Kenyan tourism. Telecoupling dynamics therefore create complex entanglements that cut across sectors, environments, communities, and countries, which makes effectively governing and managing telecouplings and their sustainability implications challenging. Indeed, sustainability can no longer be studied in spatial and temporal isolation. 
This paper aims to map the rose telecoupling’s complex environmental and social interactions to identify points of tension guiding sustainability-targeted interventions. Mapping these interactions will provide a more holistic understanding of the sustainability challenges involved in the Dutch-Kenyan rose trade. This interdisciplinary telecoupling approach reframes and integrates interdisciplinary knowledge about the rose trade between the Netherlands, Kenya, and beyond.

Keywords: Dutch-Kenyan rose trade, globalization, socio-ecological system, sustainability, telecoupling

Procedia PDF Downloads 104
188 Virtual Approach to Simulating Geotechnical Problems under Both Static and Dynamic Conditions

Authors: Varvara Roubtsova, Mohamed Chekired

Abstract:

Recent studies on the numerical simulation of geotechnical problems show the importance of considering the soil microstructure. At this scale, soil is a discrete particle medium in which the particles can interact with each other and with water flow under external forces, structural loads or natural events. This paper presents research conducted in a virtual laboratory named SiGran, developed at IREQ (Institut de recherche d’Hydro-Quebec) to investigate a broad range of problems encountered in geotechnics. Using the Discrete Element Method (DEM), SiGran simulates granular materials directly by applying Newton’s laws to each particle. Water flow is simulated using the Marker and Cell (MAC) method to solve the full form of the Navier-Stokes equations for an incompressible viscous liquid. In this paper, examples of numerical simulations and their comparisons with real experiments have been selected to show the complexity of geotechnical research at the micro level. These examples describe transient flows into a porous medium, the interaction of particles in a viscous flow, the compaction of saturated and unsaturated soils, and the phenomenon of liquefaction under seismic load. They also provide an opportunity to present SiGran’s capacity to compute the distribution and evolution of energy by type (particle kinetic energy, particle internal elastic energy, energy dissipated by friction or by viscous interaction with the flow, and so on). This work also includes first attempts to apply micro-scale discrete results at a macro continuum level, where the Smoothed Particle Hydrodynamics (SPH) method is used to solve the system of governing equations. The material behavior equation is based on the results of simulations carried out at the micro level. The possibility of combining the three methods (DEM, MAC and SPH) is discussed.
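The DEM idea of applying Newton's laws to each particle can be sketched minimally as follows. This is a generic two-particle, one-dimensional illustration with a linear spring-dashpot normal contact, not SiGran's implementation; all parameter values are arbitrary.

```python
# Minimal DEM sketch: two 1D particles, linear spring-dashpot contact,
# semi-implicit Euler integration of Newton's second law. Illustrative
# only; parameters and contact law are not those of SiGran.

def dem_step(pos, vel, radius, mass, k, c, dt):
    """Advance two particles one step; contact force acts if they overlap."""
    gap = (pos[1] - pos[0]) - 2 * radius          # negative => overlap
    force = 0.0
    if gap < 0:
        rel_vel = vel[1] - vel[0]
        force = -k * gap - c * rel_vel            # spring repulsion + damping
    # Equal and opposite forces (Newton's third law), then a = F/m
    acc = [-force / mass, force / mass]
    vel = [v + a * dt for v, a in zip(vel, acc)]  # update velocities first
    pos = [p + v * dt for p, v in zip(pos, vel)]  # then positions
    return pos, vel

# Two particles approaching head-on; they should collide and rebound
pos, vel = [0.0, 1.0], [1.0, -1.0]
for _ in range(5000):
    pos, vel = dem_step(pos, vel, radius=0.3, mass=1.0, k=1e4, c=5.0, dt=1e-4)
```

The dashpot term dissipates a little kinetic energy during contact, which is the per-contact analogue of the friction and viscous dissipation terms SiGran tracks in its energy bookkeeping.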

Keywords: discrete element method, marker and cell method, numerical simulation, multi-scale simulations, smoothed particle hydrodynamics

Procedia PDF Downloads 302