Search results for: source documents
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5502

1722 The Electric Car Wheel Hub Motor Work Analysis with the Use of 2D FEM Electromagnetic Method and 3D CFD Thermal Simulations

Authors: Piotr Dukalski, Bartlomiej Bedkowski, Tomasz Jarek, Tomasz Wolnik

Abstract:

The article concerns the design of an electric in-wheel hub motor installed in an electric car with two-wheel drive. It presents the construction of the motor on a 3D cross-section model. The authors simulate the operation of the motor (applied to a Fiat Panda car) under selected driving conditions: driving on a road with a 20% slope, driving at maximum speed, and maximum acceleration of the car from 0 to 100 km/h. The drive power demand, taking into account the resistance to motion, was determined for each of these driving conditions. The motor operating parameters and the power losses in its individual elements, calculated using the 2D FEM method, are presented for the selected driving conditions. The calculated power losses are then used in 3D models for thermal calculations using the CFD method. The article details the construction of the thermal models, including material data, boundary conditions, and the losses calculated with the 2D FEM method. It presents and describes the calculated temperature distributions in individual motor components such as the winding, permanent magnets, magnetic core, body, and cooling system components. The losses generated in individual motor components and their impact on the limitation of the motor's operating parameters are described. Particular attention is paid to the losses generated in the permanent magnets, which are a source of heat that is difficult to remove from inside the motor. The presented results show how the individual motor power losses, generated under different load conditions while driving, affect the motor's thermal state.
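As a rough illustration of the drive-power calculation described in the abstract, the Python sketch below evaluates the steady-state power demand for two of the named load cases. It is not the authors' model: the vehicle mass, rolling-resistance coefficient, drag area, and the 30 km/h climbing speed are all assumed placeholder values.

```python
import math

# Hedged sketch: drive-power demand for the load cases named in the abstract.
# All vehicle parameters are illustrative assumptions, not values from the paper.
MASS = 1100.0   # kg, assumed laden mass of a small car
G = 9.81        # m/s^2
C_RR = 0.012    # rolling-resistance coefficient (assumed)
RHO_AIR = 1.2   # kg/m^3
CDA = 0.7       # drag coefficient x frontal area, m^2 (assumed)

def drive_power(speed_kmh, slope_percent=0.0):
    """Power in kW needed to overcome slope, rolling and aerodynamic resistance."""
    v = speed_kmh / 3.6
    theta = math.atan(slope_percent / 100.0)
    f_slope = MASS * G * math.sin(theta)
    f_roll = C_RR * MASS * G * math.cos(theta)
    f_aero = 0.5 * RHO_AIR * CDA * v ** 2
    return (f_slope + f_roll + f_aero) * v / 1000.0

# Two of the steady-state load cases from the abstract:
print(f"20% slope at 30 km/h: {drive_power(30, 20):.1f} kW")
print(f"top speed 150 km/h:   {drive_power(150):.1f} kW")
```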

Keywords: electric car, electric drive, electric motor, thermal calculations, wheel hub motor

Procedia PDF Downloads 175
1721 Preparation of Electrospun PLA/ENR Fibers

Authors: Jaqueline G. L. Cosme, Paulo H. S. Picciani, Regina C. R. Nunes

Abstract:

Electrospinning is a technique for the fabrication of nanoscale fibers. A typical electrospinning system consists of a syringe filled with polymer solution, a syringe pump, a high-voltage source, and a grounded counter electrode. During electrospinning, a volumetric flow rate is set by the syringe pump and an electric voltage is applied. This creates an electric potential between the needle and the counter electrode (collector plate), which results in the formation of a Taylor cone and a jet. The jet moves towards the lower potential, the counter electrode, where the solvent of the polymer solution evaporates and the polymer fiber is formed. On its way to the counter electrode, the fiber is accelerated by the electric field. Bending instabilities produce helical looping movements of the jet, which result from the Coulomb repulsion of the surface charge. Through these bending instabilities, the jet is stretched, so that the fiber diameter decreases. In this study, a thermoplastic/elastomeric binary blend of non-vulcanized epoxidized natural rubber (ENR) and poly(lactic acid) (PLA) was electrospun using polymer solutions consisting of varying proportions of the two polymers. Specifically, 15% (w/v) PLA/ENR solutions were prepared in chloroform at proportions of 5, 10, 25, and 50% (w/w). The morphological and thermal properties of the electrospun mats were investigated by scanning electron microscopy (SEM) and differential scanning calorimetry analysis. The SEM images demonstrated the production of micrometer- and sub-micrometer-sized fibers with no bead formation. The blend miscibility was evaluated by thermal analysis, which showed that blending did not improve the thermal stability of the systems.

Keywords: epoxidized natural rubber, poly(lactic acid), electrospinning, chemistry

Procedia PDF Downloads 410
1720 Prioritizing the Most Important Information from Contractors’ BIM Handover for Firefighters’ Responsibilities

Authors: Akram Mahdaviparsa, Tamera McCuen, Vahideh Karimimansoob

Abstract:

Fire services are responsible for protecting life, assets, and natural resources from fire and other hazardous incidents. Search and rescue in unfamiliar buildings is a vital part of firefighters' responsibilities. Providing firefighters with precise building information in an easy-to-understand format is a potential solution for mitigating the negative consequences of fire hazards. Insufficient knowledge about a building's indoor environment impedes firefighters' capabilities and leads to loss of property. A data-rich building information model (BIM) is a potentially useful source of three-dimensional (3D) visualization and data/information storage for fire emergency response. The purpose of this research is therefore to rank the information firefighters require, from the most important to the least important. A survey was carried out with firefighters working in the Norman Fire Department to obtain the importance of each building information item. The results show that "the location of exit doors, windows, corridors, elevators, and stairs", "material of building elements", and "building data" are the three most important items specified by firefighters. The results also implied that 2D models of architectural, structural, and wayfinding information are more understandable than 3D models, while a 3D model of the MEP system conveys more information than a 2D model. Furthermore, color in visualization can help firefighters understand building information more easily and quickly. Sufficient internal consistency of all responses was demonstrated by the Pearson correlation matrix and a Cronbach's alpha of 0.916. The results of this study are therefore reliable and could be generalized to the population.
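The internal-consistency figure quoted above can be reproduced with the standard Cronbach's alpha formula. A minimal sketch, using simulated Likert-scale responses in place of the actual firefighter survey data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x questions matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data standing in for the firefighter survey responses.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(30, 1))                      # shared respondent tendency
scores = np.clip(base + rng.integers(-1, 2, size=(30, 8)), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.3f}")               # the paper reports 0.916
```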

Keywords: BIM, building fire response, ranking, visualization

Procedia PDF Downloads 133
1719 Identification and Characterization of Groundwater Recharge Sites in Kuwait

Authors: Dalal Sadeqi

Abstract:

Groundwater is an important component of Kuwait's water resources. Although limited in quantity and often poor in quality, the significance of this natural source of water cannot be overemphasized. Recharge of groundwater in Kuwait occurs during periodic storm events, especially in open desert areas. Runoff water dissolves accumulated surficial meteoric salts and subsequently leaches them into the groundwater following a period of evaporative enrichment at or near the soil surface. The geochemical processes governing groundwater recharge vary in time and space. Stable isotope (18O and 2H) and geochemical signatures are commonly used to gain insight into recharge processes and groundwater salinization mechanisms, particularly in arid and semiarid regions. This article addresses the mechanism used in identifying and characterizing the main watershed areas in Kuwait using stable isotopes, in an attempt to determine favorable groundwater recharge sites in the country. Stable isotopes of both rainwater and groundwater were targeted in different hydrogeological settings. Additionally, data and information obtained from subsurface logs in the study area were collected and analyzed to develop a better understanding of the lateral and vertical extent of the groundwater aquifers. Geographic Information System (GIS) and RockWorks 3D modelling software were used to map out the hydrogeomorphology of the study area and the subsurface lithology of the investigated aquifers. The collected data and information, including major ion chemistry, isotopes, subsurface characteristics, and hydrogeomorphology, were integrated in a GIS platform to identify and map out suitable natural recharge areas as part of an integrated water resources management scheme that addresses the challenge of sustaining the groundwater reserves of the country.
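A sketch of the kind of isotope screening step described above, assuming the standard Global Meteoric Water Line and illustrative well values (not data from the study):

```python
# Samples plotting near the Global Meteoric Water Line (d2H = 8*d18O + 10)
# with high deuterium excess suggest rapid, little-evaporated recharge.
samples = {                 # well id: (delta18O, delta2H) in permil vs VSMOW (hypothetical)
    "well_A": (-2.1, -8.0),
    "well_B": (-0.5, 4.2),
}

for well, (d18o, d2h) in samples.items():
    d_excess = d2h - 8.0 * d18o          # Dansgaard deuterium excess
    evaporated = d_excess < 5.0          # crude screening threshold, an assumption
    print(f"{well}: d-excess = {d_excess:+.1f} permil -> "
          f"{'evaporative enrichment' if evaporated else 'fresh recharge signal'}")
```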

Keywords: scarcity, integrated, recharge, isotope

Procedia PDF Downloads 115
1718 Beneficiation of Pulp and Paper Mill Sludge for the Generation of Single Cell Protein for Fish Farming

Authors: Lucretia Ramnath

Abstract:

Fishmeal is extensively used in fish farming but is an expensive feed ingredient. A cheaper alternative to fishmeal is single cell protein (SCP), which can be cultivated on fermentable sugars recovered from organic waste streams such as pulp and paper mill sludge (PPMS). PPMS has a high cellulose content and is thus suitable for glucose recovery through enzymatic hydrolysis, but hydrolysis is hampered by lignin and ash. To render PPMS amenable to enzymatic hydrolysis, it was pre-treated to produce a glucose-rich hydrolysate, which served as a feedstock for the production of fungal SCP. The PPMS used in this study had the following composition: 72.77% carbohydrates, 8.6% lignin, and 18.63% ash. The pre-treatments had no significant effect on lignin composition but had a substantial effect on carbohydrate and ash content. Enzymatic hydrolysis of screened PPMS was previously optimized through response surface methodology (RSM) and a 2-factorial design. The optimized protocol resulted in a hydrolysate containing 46.1 g/L of glucose, of which 86% was recovered after downstream processing by passing through a 100-mesh sieve (38 µm pore size). Vogel's medium supplemented with 10 g/L hydrolysate successfully supported the growth of Fusarium venenatum under standard growth conditions (pH 6, 200 rpm, 2.88 g/L ammonium phosphate, 25°C). A maximum F. venenatum biomass of 45 g/L was produced, with a yield coefficient of 4.67. The pulp and paper mill sludge hydrolysate contained approximately five times more glucose than was needed for SCP production and served as a suitable carbon source. We have shown that PPMS can be successfully beneficiated for SCP production.
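A back-of-envelope check of the figures quoted in the abstract (a sketch of the arithmetic only, not the authors' calculation; the exact substrate basis for the yield coefficient is not stated in the abstract):

```python
# Glucose recovery after sieving, from the reported values.
glucose_hydrolysate = 46.1        # g/L glucose in the raw hydrolysate
recovery = 0.86                   # fraction recovered after downstream processing
print(f"recovered glucose: {glucose_hydrolysate * recovery:.1f} g/L")

# Apparent biomass yield on the hydrolysate dose added to Vogel's medium.
substrate = 10.0                  # g/L hydrolysate supplement
biomass = 45.0                    # g/L F. venenatum biomass produced
print(f"Y_x/s = {biomass / substrate:.2f}")   # ~4.5, close to the reported 4.67
```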

Keywords: pulp and paper waste, fungi, single cell protein, hydrolysate

Procedia PDF Downloads 207
1717 Automated Fact-Checking by Incorporating Contextual Knowledge and Multi-Faceted Search

Authors: Wenbo Wang, Yi-Fang Brook Wu

Abstract:

The spread of misinformation and disinformation has become a major concern, particularly with the rise of social media as a primary source of information for many people. As a means to address this phenomenon, automated fact-checking has emerged as a safeguard against the spread of misinformation and disinformation. Existing fact-checking approaches aim to determine whether a news claim is true or false, and they have achieved decent veracity prediction accuracy. However, state-of-the-art methods rely on manually verified external information to assist the checking model in making judgments, which requires significant human resources. This study introduces a framework, SAC, which focuses on 1) augmenting the representation of a claim by incorporating additional context from general-purpose, comprehensive, and authoritative data; 2) developing a search function to automatically select relevant, new, and credible references; and 3) focusing on the parts of the representations of a claim and its reference that are most relevant to the fact-checking task. The experimental results demonstrate that 1) augmenting the representations of claims and references through the use of a knowledge base, combined with the multi-head attention technique, improves fact-checking performance, and 2) SAC with auto-selected references outperforms existing fact-checking approaches that use manually selected references. Future directions of this study include I) exploring knowledge graphs in Wikidata to dynamically augment the representations of claims and references without introducing too much noise, and II) exploring semantic relations in claims and references to further enhance fact-checking.
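A minimal PyTorch sketch of the attention idea the abstract describes: a claim representation attends over a retrieved reference, and the fused vector feeds a veracity classifier. The dimensions, layer choices, and random tensors are illustrative assumptions, not the SAC architecture itself:

```python
import torch
import torch.nn as nn

class ClaimReferenceChecker(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(dim, 2)   # true / false

    def forward(self, claim_tokens, reference_tokens):
        # The claim attends over the reference; the heads pick out the parts
        # of the reference most relevant to the claim.
        fused, _ = self.attn(claim_tokens, reference_tokens, reference_tokens)
        return self.classifier(fused.mean(dim=1))

model = ClaimReferenceChecker()
claim = torch.randn(8, 20, 256)       # batch of 8 claims, 20 tokens each
reference = torch.randn(8, 50, 256)   # matching retrieved references
print(model(claim, reference).shape)  # torch.Size([8, 2])
```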

Keywords: fact checking, claim verification, deep learning, natural language processing

Procedia PDF Downloads 62
1716 Influence of Some Chemical Drinking Water Parameters on Germ Count in Nalout Region, Libya

Authors: Dukali Abujnah, Mokhtar Blgacem Halbuda

Abstract:

Water is one of the world's natural resources and an essential source for the maintenance of human, animal, and plant life. It has a significant impact on the country's economy and all human activities. Over the past twenty years, pressure on water resources has increased due to population and industrial growth and the increasing demand for agricultural and household products, which has become a major concern of the international community. The aim of this study is the physical, chemical, and bacteriological analysis of drinking water in the city of Nalout. The study covered different locations in the city. Thirty-six groundwater samples were taken from wells and various tanks owned by the state, from private wells, and from the Ain Thalia spring; other samples were taken from underground water tanks, which fill with rainwater during the rainy season. These samples were analyzed for their physical, chemical, and biological status, and the results were compared to Libyan and World Health Organization drinking water specifications to assess the quality of drinking water in the city. Physical and chemical analysis of the water samples showed acceptable values for acidity and electrical conductivity, while turbidity was found in water samples collected from underground reservoirs compared to Libyan and World Health Organization standards. Even the highest levels of electrical conductivity, alkalinity, TDS, and water hardness in the collected samples were below the maximum acceptable levels for drinking water recommended by Libyan and World Health Organization specifications. The biological test results also showed that the water samples were free of intestinal bacteria.

Keywords: quality, agriculture, region, reservoir, evaluation

Procedia PDF Downloads 91
1715 Assessment of the Fertility Status of the Fadama Soils Found along Five Major River Catchments in Kano

Authors: Garba K. Adamu

Abstract:

This research was carried out in the catchments of five major rivers in Kano State. The catchments contain considerable Fadama lands. These include: River Gari, located in the northwestern part of Kano State; Rivers Challawa and Watari, from the southern parts of Kano and Katsina States; River Tomas, from the northern parts of Kano State; and River Jakara, which has its source in the Old Kano city, part of the Central Business District and Industrial Estates. The study was carried out with the aim of assessing the fertility status of the Fadama soils found in these major river catchments. A transect was designed to collect samples along farming villages in the five river channels. The findings indicate that the soils are predominantly sandy. The bulk density values vary significantly, ranging from 0.98 Mg/m³ to 1.36 Mg/m³. The pH values for all the sites studied range from slightly acidic to slightly alkaline. The OC ranged from low to very low across the sites. The EC ranges from 66.3 µS/cm to 198 µS/cm for all the sites. The mean CEC ranges from 3.864 cmol/kg to 10.114 cmol/kg. The SAR values range from 0.0106 to 0.069. Nitrogen ranges from 0.03 to 0.1230 ppm. The P values fall between 9.9 and 41.1 mg/kg. Ca values range from 1.0170 to 14.9850, and K values range from 4.6550 to 64.40. Mg values range from 0.1380 to 1.8580, and Zn values range from 1.0170 to 14.9850. The Fe values range from 15.6500 mg/kg to 69.8000 mg/kg. The B values range from 0.2060 to 13.5450. Generally, the values obtained show low to medium fertility levels for all the parameters tested, and the areas will require the incorporation of organic manure and chemical fertilizers to improve soil structure and supplement other macronutrients.

Keywords: assessment, Fadama soils, fertility status, river catchment

Procedia PDF Downloads 323
1714 Biochar - A Multi-Beneficial and Cost-Effective Amendment to Clay Soil for Stormwater Runoff Treatment

Authors: Mohammad Khalid, Mariya Munir, Jacelyn Rice Boyaue

Abstract:

Highways are considered a major source of stormwater pollution, and highway runoff can introduce various contaminants, including nutrients, indicator bacteria, heavy metals, chloride, and phosphorus compounds, which can have negative impacts on receiving waters. This study assessed the ability of biochar to remove contaminants and to improve the water holding capacity of a soil-biochar mixture. For this, ten commercially available biochars were strategically selected. Lab-scale batch testing was done at 3% and 6% by weight of the soil to obtain a preliminary estimate of contaminant removal, along with hydraulic conductivity and water retention capacity. From the above studies, the six best performing candidates and an application rate of 6% were selected for the column studies. The soil-biochar mixture was filled into assembled 7.62 cm columns up to a fixed height of 76.2 cm based on hydraulic conductivity. A total of eight column experiments were conducted for nutrient, heavy metal, and indicator bacteria analysis over a period of one year, which included a drying as well as a deicing period. The saturated hydraulic conductivity was greatly improved, which is attributed to the high porosity of the biochar-soil mixture. Initial data from the column testing show that biochar may significantly remove nutrients, indicator bacteria, and heavy metals. The overall study demonstrates that biochar could be efficiently applied with clay soil to improve the soil's hydraulic characteristics as well as remove pollutants from stormwater runoff.

Keywords: biochar, nutrients, indicator bacteria, storm-water treatment, sustainability

Procedia PDF Downloads 121
1713 Long Wavelength Coherent Pulse of Sound Propagating in Granular Media

Authors: Rohit Kumar Shrivastava, Amalia Thomas, Nathalie Vriend, Stefan Luding

Abstract:

A mechanical wave or vibration propagating through granular media exhibits a specific signature in time: a coherent pulse or wavefront arrives first, with multiply scattered waves (coda) arriving later. The coherent pulse is micro-structure independent, i.e., it depends only on the bulk properties of the disordered granular sample, namely its sound wave velocity and hence its bulk and shear moduli. The coherent wavefront attenuates (decreases in amplitude) and broadens with distance from its source. These attenuation and broadening effects are affected by disorder (polydispersity; contrast in the size of the granules) and have often been attributed to dispersion and scattering. To study the effect of disorder and of the initial amplitude (non-linearity) of the pulse imparted to the system on the coherent wavefront, numerical simulations have been carried out on one-dimensional sets of particles (granular chains). The interaction force between the particles is given by a Hertzian contact model. The particle sizes were selected randomly from a Gaussian distribution, whose standard deviation is the relevant parameter quantifying the effect of disorder on the coherent wavefront. Since the coherent wavefront is independent of the system configuration, ensemble averaging has been used to improve the signal quality of the coherent pulse and to remove the multiply scattered waves. The results concerning the width of the coherent wavefront are formulated in terms of scaling laws. An experimental set-up of photoelastic particles constituting a granular chain is proposed to validate the numerical results.
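A minimal sketch of such a simulation, assuming illustrative values for the Hertz stiffness, particle density, polydispersity, and initial pulse (none taken from the paper):

```python
import numpy as np

# 1D granular chain with Hertzian contacts; radii drawn from a Gaussian whose
# standard deviation is the disorder parameter. All values are illustrative.
N, STEPS, DT = 50, 20000, 1e-7
K_HERTZ = 5e9                              # effective Hertz stiffness (assumed)
rng = np.random.default_rng(1)
radii = rng.normal(1e-3, 0.05e-3, N)       # 5% polydispersity
mass = 2500.0 * (4 / 3) * np.pi * radii**3
x = np.cumsum(2 * radii) - radii           # just-touching chain of spheres
v = np.zeros(N)
v[0] = 0.1                                 # initial pulse imparted to first grain, m/s

for _ in range(STEPS):
    # Overlap of each contact (zero when neighbors separate).
    overlap = np.maximum(x[:-1] + radii[:-1] + radii[1:] - x[1:], 0.0)
    f = K_HERTZ * overlap**1.5             # Hertzian contact force F = k * delta^(3/2)
    acc = np.zeros(N)
    acc[:-1] -= f / mass[:-1]              # repulsion pushes left grain back ...
    acc[1:] += f / mass[1:]                # ... and right grain forward
    v += acc * DT                          # symplectic Euler time integration
    x += v * DT

print("pulse peak currently at grain", int(np.argmax(np.abs(v))))
```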

Keywords: discrete elements, Hertzian contact, polydispersity, weakly nonlinear, wave propagation

Procedia PDF Downloads 204
1712 Screening of Potential Cytotoxic Activities of Some Medicinal Plants of Saudi Arabia

Authors: Syed Farooq Adil, Merajuddinkhan, Mujeeb Khan, Hamad Z. Alkhathlan

Abstract:

Phytochemicals from plant extracts are an important source of natural products that have demonstrated excellent cytotoxic activities. However, plants of different origins exhibit diverse chemical compositions and bioactivities. Therefore, the discovery of new plant-based anticancer agents from different parts of the world is always challenging. In this study, methanolic extracts of different parts of 11 plants from Saudi Arabia were tested in vitro for their anticancer potential on a human liver cancer cell line (HepG2). Plants from the Asteraceae, Resedaceae, and Polygonaceae families were chosen on the basis of locally available ethnobotanical data and their medicinal properties. Among the 12 tested extract samples, three samples, obtained from Artemisia monosperma stem, Ochradenus baccatus aerial parts, and Pulicaria glutinosa stem, demonstrated interesting cytotoxic activities, with cell viabilities of 29.3%, 28.4%, and 24.2%, respectively. Four plant extracts, including Calendula arvensis aerial parts, Scorzonera musilii whole plant, and A. monosperma leaves, showed moderate anticancer properties, with cell viabilities ranging from 11.9 to 16.7%. The remaining extracts showed poor cytotoxic activities. Subsequently, GC-MS analysis of the methanolic extracts of the four most active plants, C. comosum, O. baccatus, P. glutinosa, and A. monosperma, detected the presence of 41 phytomolecules, among which 3-(4-hydroxyphenyl) propionitrile (1), 8,11-octadecadiynoic acid methyl ester (2), 6,7-dimethoxycoumarin (3), and 1-(2-hydroxyphenyl) ethenone (4) were found to be the lead compounds of C. comosum, O. baccatus, P. glutinosa, and A. monosperma, respectively.

Keywords: medicinal plants, Asteraceae, Polygonaceae, HepG2

Procedia PDF Downloads 127
1711 Review of Downscaling Methods in Climate Change and Their Role in Hydrological Studies

Authors: Nishi Bhuvandas, P. V. Timbadiya, P. L. Patel, P. D. Porey

Abstract:

Recent perceived climate variability raises concerns about unprecedented hydrological phenomena and extremes. The distribution and circulation of the Earth's waters become increasingly difficult to determine because of the additional uncertainty related to anthropogenic emissions. According to the sixth Intergovernmental Panel on Climate Change (IPCC) Technical Paper, on climate change and water, changes in the large-scale hydrological cycle have been related to the increase in temperature observed over several decades. Although much previous research on the effect of climate change on hydrology provides a general picture of possible global hydrological change, new tools and frameworks for modelling hydrological series with nonstationary characteristics at finer scales are required for assessing climate change impacts. Of the downscaling techniques, dynamic downscaling is usually based on the use of Regional Climate Models (RCMs), which generate finer-resolution output based on atmospheric physics over a region, using General Circulation Model (GCM) fields as boundary conditions. However, RCMs cannot be expected to capture the observed spatial precipitation extremes at a fine cell scale or at a basin scale. Statistical downscaling derives a statistical or empirical relationship between the variables simulated by the GCMs, called predictors, and station-scale hydrologic variables, called predictands. The main focus of this paper is the need for statistical downscaling techniques in the projection of local hydrometeorological variables under climate change scenarios. The projections can then serve as inputs to various hydrologic models to obtain streamflow, evapotranspiration, soil moisture, and other hydrological variables of interest.
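A toy sketch of the statistical downscaling step just described: fit a transfer function from large-scale predictors to a station-scale predictand, here with synthetic data standing in for real GCM fields and station records:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n_days = 1000
# Three synthetic GCM-scale predictors (e.g. sea-level pressure, humidity,
# geopotential height); the choice of variables is an assumption.
predictors = rng.normal(size=(n_days, 3))
true_coefs = np.array([0.5, 1.2, -0.3])
station_precip = predictors @ true_coefs + rng.normal(0, 0.5, n_days)

# Calibrate on the first 800 "days", validate on the rest.
model = LinearRegression().fit(predictors[:800], station_precip[:800])
r2 = model.score(predictors[800:], station_precip[800:])
print(f"validation R^2 = {r2:.2f}")
# In practice the fitted transfer function is applied to future GCM output
# to project the local variable under a climate-change scenario.
```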

Keywords: climate change, downscaling, GCM, RCM

Procedia PDF Downloads 406
1710 Digital Transformation and Environmental Disclosure in Industrial Firms: The Moderating Role of the Top Management Team

Authors: Yongxin Chen, Min Zhang

Abstract:

As industrial enterprises are a primary source of national pollution, environmental information disclosure is a crucial way for them to demonstrate to stakeholders the work they have done in fulfilling their environmental responsibilities and accepting social supervision. In the era of the digital economy, many companies, actively embracing the opportunities that come with digital transformation, have begun to apply digital technology to information collection and disclosure within the enterprise. However, little is known about the relationship between digital transformation and environmental disclosure. Drawing on information processing theory, this study investigates how enterprise digital transformation affects environmental disclosure in 643 Chinese industrial companies. Intriguingly, the depth (size) and breadth (diversity) of environmental disclosure increase linearly with the rise in data collection, processing, and analytical capabilities over the digital transformation process, whereas the volume of data grows exponentially, leading to a marginal increase in the economic and environmental costs of utilizing, storing, and managing data. In our empirical findings, linearly increasing benefits and marginal costs create a unique inverted U-shaped relationship between the degree of digital transformation and environmental disclosure in the Chinese industrial sector. In addition, based on upper echelons theory, we propose that a top management team with high stability and managerial capability will invest more effort and expense in improving environmental disclosure quality, lowering the carbon footprint caused by digital technology, and maintaining data security. In both these contexts, the increasing marginal cost curves become steeper, weakening the inverted U-shaped slope between digital transformation and environmental disclosure.
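An inverted-U finding of this kind is typically tested by adding a quadratic term to the regression. A sketch with simulated data (the coefficients and sample construction below are invented, not the study's estimates):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
dt_degree = rng.uniform(0, 10, 643)                  # digital transformation score
disclosure = (2 + 1.5 * dt_degree - 0.12 * dt_degree**2
              + rng.normal(0, 1, 643))               # built-in inverted U

# Regress disclosure on DT and DT^2; an inverted U shows up as a
# significantly negative quadratic coefficient.
X = sm.add_constant(np.column_stack([dt_degree, dt_degree**2]))
fit = sm.OLS(disclosure, X).fit()
print(fit.params)                # expect positive linear, negative quadratic term
print("turning point:", -fit.params[1] / (2 * fit.params[2]))
```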

Keywords: digital transformation, environmental disclosure, top management team, information processing theory, upper echelons theory

Procedia PDF Downloads 142
1709 Shaking the Iceberg: Metaphoric Shifting and Loss in the German Translations of 'The Sun Also Rises'

Authors: Christopher Dick

Abstract:

While the translation of 'literal language' poses numerous challenges for the translator, the translation of 'figurative language' creates even more complicated issues. It has been only in the last several decades that scholars have attempted to propose theories of figurative language translation, including metaphor translation. Even less work has applied these theories to metaphoric translation in literary texts. And almost no work has linked an analysis of metaphors in translation with the recent scholarship on conceptual metaphors. A study of literature in translation must not only examine the inevitable shifts that occur as specific metaphors move from source language to target language but also analyze the ways in which these shifts impact conceptual metaphors and, ultimately, the text as a whole. Doing so contributes to ongoing efforts to bridge the sometimes wide gulf between considerations of content and form in literary studies. This paper attempts to add to the body of scholarly literature on metaphor translation and the function of metaphor in a literary text. Specifically, the study examines the metaphoric expressions in Hemingway's The Sun Also Rises. First, the issue of Hemingway and metaphor is addressed. Next, the study examines the specific metaphors in the original novel in English and in the German translations, first in Annemarie Horschitz's 1928 German version and then in the recent Werner Schmitz 2013 translation. Hemingway's metaphors, far from being random occurrences of figurative language, are linguistic manifestations of deeper conceptual metaphors that are central to an interpretation of the text. By examining the modifications made to these original metaphoric expressions as they are translated into German, one can begin to appreciate the shifts involved in metaphor translation. The translation of Hemingway's metaphors into German represents significant metaphoric loss and shifting that subsequently shakes the important conceptual metaphors in the novel.

Keywords: Hemingway, conceptual metaphor, translation, stylistics

Procedia PDF Downloads 356
1708 Impact of Human Resources Accounting on Employees' Performance in Organization

Authors: Hamid Saremi, Shida Hanafi

Abstract:

In an age of technology and economics, human capital plays a pivotal role in the organization, and human resources accounting offers a broad perspective on the organization's key resource: its people. Human resources accounting is a young branch of accounting that deals with a range of policies and measures related to various aspects of human resources. It rests on the premise that an organization's most important asset is its human resources and that human resource management is the key to success in an organization; acting on this premise requires that human resources data be reviewed and evaluated with accounting knowledge, based on empirical studies and on methods for measuring and reporting human resources accounting information. Undoubtedly, human resource management cannot operate or support decision-making without information, and human resources accounting is a practical way of informing decision makers who are committed to harnessing human resources. By applying accounting principles in the organization, and by conducting basic research on the effect of human resources accounting information on employees' personal performance, it provides the analysis, criteria, and valuation of the cost of manpower, the main resource of each institution. The protection of human resources is a process that, from the standpoint of human resources accounting, serves organizational profitability. In fact, this type of accounting can be regarded as a major source for measuring trends in the costs and valuation of human resources in each institution. What is the economic value of such assets? What expenditures on the education and training of professionals should be valued in an asset account? What amount of funds spent should be considered a lost opportunity cost? In this paper, following the literature on human resource accounting, we study human resources and their objectives, the importance of human resource valuation for employee performance review, and methods of reporting human resources according to different models.

Keywords: human resources, human resources accounting, human capital, human resource management, valuation and cost of human resources, employees' performance, organization

Procedia PDF Downloads 548
1707 Improvements and Implementation Solutions to Reduce the Computational Load for Traffic Situational Awareness with Alerts (TSAA)

Authors: Salvatore Luongo, Carlo Luongo

Abstract:

This paper discusses implementation solutions to reduce the computational load of the Traffic Situational Awareness with Alerts (TSAA) application, based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology. In 2008, there were 23 mid-air collisions involving general aviation fixed-wing aircraft, 6 of which were fatal, leading to 21 fatalities. These collisions occurred during visual meteorological conditions, indicating the limitations of the see-and-avoid concept for mid-air collision avoidance as defined by the Federal Aviation Administration (FAA). Commercial aviation aircraft are already equipped with a collision avoidance system called TCAS, which is based on classic transponder technology. This system dramatically reduced the number of mid-air collisions involving air transport aircraft. In general aviation, the same reduction in mid-air collisions has not occurred, and achieving it is the main objective of the TSAA application. The major difference between the original conflict detection application and TSAA is that conflict detection focuses on preventing loss of separation in en-route environments, whereas TSAA is devoted to reducing the probability of mid-air collision in all phases of flight. The TSAA application increases flight crew traffic situational awareness by providing alerts for traffic detected in conflict with ownship, in support of the see-and-avoid responsibility. Considerable effort was spent in the design process and code generation to maximize efficiency and performance in terms of computational load and memory consumption. The TSAA architecture is divided into two high-level systems: the "Threats database" and the "Conflict detector". The first receives traffic data from the ADS-B device and stores each target's data history. The Conflict detector module estimates the trajectories of ownship and of each target in order to detect possible future losses of separation between them. Finally, the alerts are checked by additional conflict verification logic, in order to prevent undesirable behavior of the alert flag. To reduce the computational load, a pre-check evaluation module is used. This pre-check is purely a computational optimization, so the performance of the conflict detector in terms of the number of alerts detected is unchanged. The pre-check module uses analytical trajectory propagation for both the target and ownship. This yields greater accuracy and avoids step-by-step propagation, which demands a larger computational load. Furthermore, the pre-check makes it possible to exclude targets that are certainly not threats, using an analytical and efficient geometrical approach, thereby decreasing the computational load of the subsequent modules. This software improvement is not suggested by FAA documents, and so it is the main innovation of this work. The efficiency and efficacy of the enhancement are verified using fast-time and real-time simulations and by execution on a real device in several FAA scenarios. The final implementation also permits FAA software certification in compliance with the DO-178B standard. The computational load reduction allows the TSAA application to be installed even on devices hosting multiple applications and/or with limited available memory and computational capability.
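A hedged sketch of an analytical geometric pre-check of the kind described: a closed-form closest point of approach (CPA) computed from the current relative position and velocity, used to discard targets that cannot violate an assumed protection volume. This is an illustration only, not the certified TSAA logic:

```python
import numpy as np

def cpa(rel_pos, rel_vel, horizon_s=60.0):
    """Return (t_cpa, miss_distance) assuming straight-line relative motion."""
    v2 = np.dot(rel_vel, rel_vel)
    # Minimizing |p + v*t|^2 gives t* = -p.v / |v|^2, clamped to the look-ahead.
    t = 0.0 if v2 == 0 else np.clip(-np.dot(rel_pos, rel_vel) / v2, 0.0, horizon_s)
    return t, float(np.linalg.norm(rel_pos + rel_vel * t))

rel_pos = np.array([3000.0, -1500.0, 100.0])   # m, target minus ownship (hypothetical)
rel_vel = np.array([-60.0, 30.0, 0.0])         # m/s (hypothetical)
t, miss = cpa(rel_pos, rel_vel)
THREAT_RADIUS = 1000.0                         # m, assumed protection volume
print(f"CPA in {t:.0f} s at {miss:.0f} m ->",
      "run full detector" if miss < THREAT_RADIUS else "exclude target")
```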

Keywords: traffic situation awareness, general aviation, aircraft conflict detection, computational load reduction, implementation solutions, software certification

Procedia PDF Downloads 285
1706 Quantification of Soft Tissue Artefacts Using Motion Capture Data and Ultrasound Depth Measurements

Authors: Azadeh Rouhandeh, Chris Joslin, Zhen Qu, Yuu Ono

Abstract:

The centre of rotation of the hip joint is needed for an accurate simulation of joint performance in many applications, such as pre-operative planning simulation, human gait analysis, and the assessment of hip joint disorders. In human movement analysis, the hip joint centre can be estimated using a functional method based on the relative motion of the femur to the pelvis, measured using reflective markers attached to the skin surface. The principal source of error in estimating the hip joint centre location using functional methods is soft tissue artefact, due to the relative motion between the markers and the bone. One of the main objectives in human movement analysis is the assessment of soft tissue artefact, since the accuracy of functional methods depends upon it. Various studies have described the movement of soft tissue artefact invasively, using intra-cortical pins, external fixators, percutaneous skeletal trackers, and Roentgen photogrammetry. The goal of this study is to present a non-invasive method for assessing the displacement of the markers relative to the underlying bone, using optical motion capture data and tissue thickness from ultrasound measurements, during flexion, extension, and abduction (all with the knee extended) of the hip joint. Results show that the skin marker displacements are non-linear and larger in areas closer to the hip joint. Marker displacements also depend on the movement type and are relatively larger in abduction. The quantification of soft tissue artefacts can be used as the basis of a correction procedure for hip joint kinematics.

Keywords: hip joint center, motion capture, soft tissue artefact, ultrasound depth measurement

Procedia PDF Downloads 281
1705 Effects of Adding Sodium Nitroprusside in Semen Diluents on Motility, Viability and Lipid Peroxidation of Sperm of Holstein Bulls

Authors: Leila Karshenas, Hamid Reza Khodaei, Behnaz Mahdavi

Abstract:

Nitric oxide (NO) plays an important role in all sexual activities of animals. It is synthesized in the body from the L-arginine molecule by the NO synthase enzyme. NO can bind to sulfur-iron complexes, and because the production of steroid sexual hormones depends on enzymes that contain this complex, NO can change the activity of these enzymes. NO affects many cells, including the endothelial cells of veins, macrophages, and mast cells. These cells are found among the Leydig cells of the testis and are therefore an important source of NO in testicular tissue. Minimizing damage to sperm during freezing and thawing is very important. The goal of this study was to determine the effect of NO applied before freezing on the quality and viability of sperm after thawing and incubation. Four Holstein bulls were selected from the age of 4, and artificial insemination was performed for 3 weeks (2 times a week). Treatments were 0, 10, 50, and 100 nM of sodium nitroprusside (SNP). Data analysis was performed with the SAS98 program, and mean comparison was done using Duncan's multiple range test (P<0.05). The concentrations used were found to significantly increase the motility and viability of spermatozoa at 1, 2, and 3 hours after thawing (P<0.05), but there was no significant difference at time zero. SNP reduced the amount of lipid peroxidation in the sperm membrane, increased acrosome health, and improved membrane quality, especially in the 50 and 100 nM treatments. According to these results, adding SNP to semen diluents increases the motility and viability of spermatozoa, reduces lipid peroxidation in the sperm membrane, and improves sperm function.

Keywords: sperm motility, nitric oxide, lipid peroxidation, spermatozoa

Procedia PDF Downloads 361
1704 High-Frequency Modulation of Light-Emitting Diodes for New Ultraviolet Communications

Authors: Meng-Chyi Wu, Bonn Lin, Jyun-Hao Liao, Chein-Ju Chen, Yu-Cheng Jhuang, Mau-Phon Houng, Fang-Hsing Wang, Min-Chu Liu, Cheng-Fu Yang, Cheng-Shong Hong

Abstract:

Since the use of wireless communications has become critical nowadays, the available RF spectrum has become limited. Ultraviolet (UV) communication systems can alleviate this spectrum constraint, making UV communication a potential alternative for future communication demands. UV links can also provide faster communication rates and can be used in combination with existing RF communication links, providing new communications diversity with higher user capacity. The UV region of the electromagnetic spectrum has been of interest for detector, imaging, and communication technologies because the stratospheric ozone layer effectively absorbs some solar UV radiation before it reaches the earth's surface. The wavebands where most UV radiation is absorbed by the ozone are commonly known as the solar-blind region. By operating in the UV-C band (200-280 nm), a communication system can minimize transmission power consumption since it experiences less radiation noise. UV communication uses UV light as the medium: the electric signal is modulated onto this band and then transmitted through the atmosphere as the channel. Although the background noise of UV-C communication is very low owing to the solar-blind feature, it suffers a large propagation loss. The 370 nm UV band provides a much lower propagation loss than the UV-C band, and the current device technology for UV sources in this band is more mature. The fabricated 370 nm AlGaN light-emitting diodes (LEDs), with an aperture size of 45 µm, exhibit a modulation bandwidth of 165 MHz at 30 mA and a high power density of 7 W/cm2 at 230 A/cm2. In order to solve the problem of the low power of a single UV LED, a UV LED array is presented.

Keywords: ultraviolet (UV) communication, light-emitting diodes (LEDs), modulation bandwidth, LED array, 370 nm

Procedia PDF Downloads 414
1703 Prevalence and Associated Factors of Stunting among 6-59 Months Children in Pastoral Community of Korahay Zone, Somali Regional State, Ethiopia 2016

Authors: Sisay Shine, Frew Tadesse, Zemenu Shiferaw, Lema Mideksa

Abstract:

Background: Stunting is one of the most important public health problems in Ethiopia, with an estimated 44.4% of children less than five years of age stunted. This study therefore aimed to assess the prevalence and associated factors of stunting among 6-59 month-old children in the pastoral community of Korahay Zone, Somali Regional State, Ethiopia. Objective of the study: To assess the prevalence and associated factors of stunting among 6-59 month-old children in the pastoral community of Korahay Zone, Somali Regional State, Ethiopia, 2016. Methods: A community-based cross-sectional study was conducted among 770 children in the pastoral community of Korahay Zone. Systematic sampling techniques were used to select households, and a child-mother pair was taken from each selected household. Data were collected using a pre-tested, structured questionnaire. Odds ratios with 95% confidence intervals were used to assess the level of significance. Result: The prevalence of stunting among children aged 6-59 months was 31.9%. Sex (AOR: 1.47, 95%CI 1.02, 2.11), age (AOR: 2.10, 95%CI 1.16, 3.80), maternal education (AOR: 3.42, 95%CI 1.58, 7.41), maternal occupation (AOR: 3.10, 95%CI 1.85, 5.19), monthly income (AOR: 1.47, 95%CI 1.03, 2.09), PNC visits (AOR: 1.59, 95%CI 1.07, 2.37), source of water (AOR: 3.41, 95%CI 1.96, 5.93), toilet availability (AOR: 1.71, 95%CI 1.13, 2.58), first milk feeding (AOR: 3.37, 95%CI 2.27, 5.02) and bottle feeding (AOR: 2.07, 95%CI 1.34, 3.18) were significant predictors of stunting. Conclusion and recommendations: The prevalence of stunting among 6-59 month-old children was high, at 31.9%. Lack of maternal education, not feeding first milk (colostrum), an unsafe water supply, the absence of a toilet, and bottle feeding can increase the risk of stunting. Educating mothers on child feeding practices, sanitation, and the importance of first milk can therefore reduce stunting.
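Adjusted odds ratios of the kind reported above come from a multivariable logistic model whose coefficients are exponentiated. A sketch with simulated survey data (two illustrative predictors only, not the study's dataset):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 770
no_maternal_edu = rng.integers(0, 2, n)
bottle_feeding = rng.integers(0, 2, n)
# Simulate the outcome with known log-odds effects.
logit = -1.5 + 1.2 * no_maternal_edu + 0.7 * bottle_feeding
stunted = rng.random(n) < 1 / (1 + np.exp(-logit))

# Fit the multivariable logistic model and exponentiate the coefficients.
X = sm.add_constant(np.column_stack([no_maternal_edu, bottle_feeding]))
fit = sm.Logit(stunted.astype(float), X).fit(disp=0)
aor = np.exp(fit.params[1:])
ci = np.exp(fit.conf_int()[1:])
print("AORs:   ", np.round(aor, 2))
print("95% CIs:", np.round(ci, 2))
```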

Keywords: dietary, environmental, healthcare, socio-demographic, stunting

Procedia PDF Downloads 576
1702 Computational Fluid Dynamics Simulations and Analysis of Air Bubble Rising in a Column of Liquid

Authors: Baha-Aldeen S. Algmati, Ahmed R. Ballil

Abstract:

Multiphase flows occur widely in many engineering and industrial processes as well as in the environment we live in. In particular, bubbly flows are crucial phenomena in fluid flow applications and can be studied and analyzed experimentally, analytically, and computationally. In the present paper, the dynamic motion of an air bubble rising within a column of liquid is numerically simulated using the open-source CFD modeling tool OpenFOAM. An interface-tracking numerical algorithm called the MULES algorithm, which is built into OpenFOAM, is chosen to solve an appropriate mathematical model based on the volume of fluid (VOF) numerical method. The bubbles initially have a spherical shape and start from rest in the stagnant column of liquid. The algorithm is first verified against numerical results and then validated against available experimental data. The comparison revealed that this algorithm provides results in very good agreement with the 2D numerical data of other CFD codes. Also, the bubble shape and terminal velocity obtained from the 3D numerical simulation show very good qualitative and quantitative agreement with the experimental data: the simulated rising bubbles yield a very small percentage error in the bubble terminal velocity compared with the experiment. The obtained results prove the capability of OpenFOAM as a powerful tool for predicting the rising behavior of spherical bubbles in a stagnant column of liquid. This will pave the way for a deeper understanding of the phenomenon of bubbles rising in liquids.
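For context, the shape regime and terminal velocity of such a bubble are usually characterized with dimensionless groups. A sketch with water-like properties and hypothetical velocity values, not the paper's exact cases:

```python
import math

d = 5e-3                        # bubble diameter, m (illustrative)
rho_l, rho_g = 998.0, 1.2       # liquid / gas density, kg/m^3
mu_l, sigma = 1e-3, 0.072       # liquid viscosity Pa.s, surface tension N/m
g = 9.81

eo = g * (rho_l - rho_g) * d**2 / sigma                       # Eotvos number
mo = g * mu_l**4 * (rho_l - rho_g) / (rho_l**2 * sigma**3)    # Morton number
print(f"Eo = {eo:.2f}, Mo = {mo:.2e}")   # locates the shape regime on the Grace map

# Hypothetical simulated vs measured terminal velocity comparison.
u_sim, u_exp = 0.235, 0.240     # m/s, invented values
print(f"terminal-velocity error = {abs(u_sim - u_exp) / u_exp * 100:.1f}%")
```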

Keywords: CFD simulations, multiphase flows, OpenFOAM, rise of bubble, volume of fluid method, VOF

Procedia PDF Downloads 123
1701 Photo-Degradation of Black 19 Dye with Synthesized Nano-Sized ZnS

Authors: M. Tabatabaee, R. Mohebat, M. Baranian

Abstract:

Textile industries produce large volumes of colored dye effluents, which are toxic and non-biodegradable. Earlier studies have shown that a wide range of organic substrates can be completely photomineralized in the presence of photocatalysts and oxidizing agents. ZnO and TiO2 are important photocatalysts with high catalytic activity that have attracted much research attention. Zinc sulfide is a semiconductor nanomaterial that can be used for the production of optical sensitizers, photocatalysts, electroluminescent materials, and optical sensors, and for solar energy conversion. The synthesis of ZnS nanoparticles has been attempted by various methods and with various sulfide sources: elemental sulfur powder, H2S, or Na2S are used as sulfide sources for the synthesis of ZnS nanoparticles. Recently, solar energy has been successfully used for the photocatalytic degradation of dye pollutants. Studies have shown that the use of metal oxides or sulfides together with ZnO or TiO2 can significantly enhance their photocatalytic activity. In this research, nano-sized zinc sulfide was synthesized successfully by a simple method using thioacetamide as the sulfide source in the presence of polyethylene glycol (PEG 2000). X-ray diffraction (XRD) and scanning electron microscopy (SEM) were used to characterize the structure and morphology of the synthesized powder. The photocatalytic activity of the prepared ZnS and ZnS/ZnO in the degradation of Direct Black 19 under UV and sunlight irradiation was investigated. The effects of various parameters such as the amount of photocatalyst, pH, initial dye concentration, and irradiation time on the decolorization rate were systematically investigated. Results show that more than 80% of a 500 mg/L dye solution was decolorized within a 60-min reaction time under UV and solar irradiation in the presence of the ZnS nanoparticles, while mixed ZnS/ZnO (50%) could decolorize more than 80% of the dye under the same conditions.
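Decolorization data of this kind are commonly reduced to a removal efficiency and an apparent pseudo-first-order rate constant. A sketch with an invented time series chosen to match the headline figure (>80% removal of 500 mg/L within 60 min):

```python
import numpy as np

t = np.array([0, 10, 20, 30, 45, 60])          # min
c = np.array([500, 380, 290, 220, 140, 95.0])  # mg/L, hypothetical concentrations

efficiency = (c[0] - c[-1]) / c[0] * 100       # overall decolorization, %
k = np.polyfit(t, np.log(c[0] / c), 1)[0]      # slope of ln(C0/C) vs t (pseudo-1st-order)
print(f"decolorization = {efficiency:.0f}%, k = {k:.3f} 1/min")
```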

Keywords: zinc sulfide, nanoparticles, photodegradation, solar light

Procedia PDF Downloads 404
1700 Assessment Using Copulas of Simultaneous Damage to Multiple Buildings Due to Tsunamis

Authors: Yo Fukutani, Shuji Moriguchi, Takuma Kotani, Terada Kenjiro

Abstract:

If risk management of company-owned assets, risk assessment of real estate portfolios, and risk identification for an entire region are to be implemented, it is necessary to consider simultaneous damage to multiple buildings. This research focuses on the Sagami Trough earthquake tsunami, which could have a significant effect on the Japanese capital region, and proposes a method for simultaneous damage assessment using copulas that can take into consideration the correlation of tsunami depths and building damage between two sites. First, the tsunami inundation depths at the two sites were simulated by solving a nonlinear long-wave equation. The tsunamis were simulated by varying the slip amount (five cases) and the depth (five cases) for each of 10 sources along the Sagami Trough. For each source, the frequency distribution of the tsunami inundation depth was evaluated using the response surface method. Monte-Carlo simulation was then conducted, and frequency distributions of tsunami inundation depth were evaluated at the target sites over all sources of the Sagami Trough; these are the marginal distributions. Kendall's tau for the tsunami inundation simulations at the two sites was 0.83. Based on this value, the Gaussian copula, t-copula, Clayton copula, and Gumbel copula (n = 10,000) were generated, and the joint distributions of the damage rate were evaluated from the marginal distributions and the copulas. When the correlation of tsunami inundation depth between the two sites was accounted for, the expected value hardly changed compared with the uncorrelated case, but the ninety-ninth percentile of the damage rate was approximately 2%, and the maximum value approximately 6%, when using the Gumbel copula.
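The copula step can be sketched for the Gaussian case (one of the four copulas used): Kendall's tau is converted to the copula parameter, correlated uniforms are sampled, and the marginal distributions are applied. The lognormal marginals and the 3 m threshold below are stand-ins, not the study's distributions:

```python
import numpy as np
from scipy.stats import norm, lognorm

tau = 0.83
rho = np.sin(np.pi * tau / 2)    # Kendall tau -> Gaussian copula parameter

rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=10_000)
u = norm.cdf(z)                  # correlated uniforms on [0, 1]^2

# Apply hypothetical inundation-depth marginals (m) at the two sites.
depth_site1 = lognorm(s=0.6, scale=2.0).ppf(u[:, 0])
depth_site2 = lognorm(s=0.5, scale=1.5).ppf(u[:, 1])
both_severe = np.mean((depth_site1 > 3) & (depth_site2 > 3))
print(f"P(both sites exceed 3 m) = {both_severe:.3f}")
```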

Keywords: copulas, Monte-Carlo simulation, probabilistic risk assessment, tsunamis

Procedia PDF Downloads 143
1699 Economic Assessment of CO2-Based Methane, Methanol and Polyoxymethylene Production

Authors: Wieland Hoppe, Nadine Wachter, Stefan Bringezu

Abstract:

Carbon dioxide (CO2) utilization might be a promising way to substitute for fossil raw materials like coal, oil, or natural gas as the carbon source of chemical production. While first life cycle assessments indicate a positive environmental performance of CO2-based process routes, the commercialization of CO2 has so far been limited by several economic obstacles. We therefore analyzed the economic performance of three CO2-based chemicals, the basic chemicals methane and methanol and the polymer polyoxymethylene, on a cradle-to-gate basis. Our approach is oriented towards life cycle costing. The focus lies on the cost drivers of CO2-based technologies and on options for stimulating a CO2-based economy by changing regulative factors. In this way, we analyze various modes of operation and give an outlook on potentially cost-effective developments over the next decades. Biogas, the waste gases of a cement plant, and the flue gases of a waste incineration plant are considered as CO2 sources. The energy needed to convert CO2 into hydrocarbons via electrolysis is assumed to be supplied by wind power, which is increasingly available in Germany. Economic data originate from both industrial processes and process simulations. The results indicate that CO2-based production technologies are not competitive with conventional production methods under present conditions. This is mainly due to high electricity generation costs and regulative factors like the German Renewable Energy Act (EEG). While the decrease in production costs of CO2-based chemicals might be limited over the next decades, a modification of the relevant regulative factors could promote an earlier commercialization.

Keywords: carbon capture and utilization (CCU), economic assessment, life cycle costing (LCC), power-to-X

Procedia PDF Downloads 291
1698 Alternative of Lead-Based Ionization Radiation Shielding Property: Epoxy-Based Composite Design

Authors: Md. Belal Uudin Rabbi, Sakib Al Montasir, Saifur Rahman, Niger Nahid, Esmail Hossain Emon

Abstract:

Radiation shielding protects against the detrimental effects of ionizing radiation by placing a layer of absorbing material between the radioactive source and whatever is to be protected. It is a primary concern in several industrial fields, particularly where potent (high-activity) radioisotopes are used, as in food preservation, cancer treatment, and particle accelerator facilities, and it is essential for users of radiation-emitting equipment who wish to reduce or mitigate radiation damage. Polymer composites, especially epoxy-based ones, with high-atomic-number fillers can replace toxic lead in ionizing radiation shielding applications because of their excellent mechanical properties, superior solvent and chemical resistance, good dimensional stability, adhesion, and lower toxicity. Being lightweight, with a neutron shielding ability of almost the same order as concrete, epoxy-based shielding is a strong candidate material. Micro- and nano-particle fillers in the epoxy resin increase the radiation shielding capability of the epoxy matrix. Considerable attention has recently been paid to polymeric composites as radiation shielding materials for protecting the users of such facilities from ionizing radiation. This research will examine the radiation shielding performance of epoxy-based nano-WO3 reinforced composites. The samples will be prepared using the direct pouring method.
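The basic relation behind any such shielding study is narrow-beam exponential attenuation. A sketch, with a placeholder attenuation coefficient rather than a measured value for the epoxy/WO3 composite:

```python
import math

# Narrow-beam attenuation: I = I0 * exp(-mu * x).
mu = 0.9      # 1/cm, hypothetical linear attenuation coefficient (not measured here)
i0 = 1.0      # incident intensity, normalized

for x_cm in (0.5, 1.0, 2.0):
    i = i0 * math.exp(-mu * x_cm)
    print(f"{x_cm:.1f} cm shield -> transmitted fraction {i:.3f}")

hvl = math.log(2) / mu    # half-value layer: thickness halving the intensity
print(f"half-value layer = {hvl:.2f} cm")
```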

Keywords: radiation shielding materials, ionizing radiation, epoxy resin, tungsten oxide, polymer composites

Procedia PDF Downloads 114
1697 Strategies for a Sustainable Future of Forest and Tribal Peoples on This Planet

Authors: Dharmpal Singh

Abstract:

The objective of this proposed project is the relocation and resettlement of carnivorous tribal communities currently residing on protected forest land all over the world, on the model of the resettlement project for the carnivorous tribal families of the Mongia, who in the past resided in the Ranthambhore Tiger Reserve (RTR) and had caused excessive damage to endangered species of wildlife, including tigers. At present, several tribal communities reside in other national parks, where they not only consume wild animals but are also involved in the illegal trading of vital organs, skins, and bones with national and international traders. Tribals are ideally suited for this role because they are highly skilled game trackers, and without a definite source of income over the years, they are easily drawn into the illegal wildlife trade and the slaughter of wild animals. Their income is increasing, but wild animals are on the brink of extinction. For the conservation of flora and fauna, the rehabilitation process should be designed along the lines of the RTR project (which not only completely changed the quality of life of the Mongia tribal community but also increased the canopy cover of forest and grass by reducing the biotic pressure on protected forest land in Rajasthan State), with an appropriate understanding of the sociology of the people involved, their culture, their standard of education, and the different skills they need to acquire for sustenance, such as agriculture, dairy, poultry, social forestry, jobs as forest guards, and other eco-development programmes. The dimensions presented here may generate discussion among international wildlife lovers and conservationists, and the remedies may prove result-oriented in the field of forest management and wildlife conservation on this planet.

Keywords: strategies, rehabilitation of tribals, conservation of forest, eco-development programmes, wildlife

Procedia PDF Downloads 436
1696 Application Difference between Cox and Logistic Regression Models

Authors: Idrissa Kayijuka

Abstract:

The logistic regression and Cox regression (proportional hazards) models are currently employed in the analysis of prospective epidemiologic research into risk factors for chronic diseases, and a theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used to model data on the time leading up to an event when censored cases exist, whereas the logistic regression model is mostly applicable where the independent variables consist of numerical as well as nominal values and the outcome variable is binary (dichotomous). The arguments and findings of many researchers have focused on overviews of the Cox and logistic regression models and their different applications in different areas. In this work, the analysis is performed on secondary data, sourced from an SPSS exercise dataset on breast cancer with a sample size of 1121 women, and the main objective is to show the difference in application between the Cox regression model and the logistic regression model based on factors that cause women to die of breast cancer. Some analysis was done manually (e.g., on lymph node status), and the SPSS software was used to analyze the remaining data. This study found that there is a difference in application between the Cox and logistic regression models: the Cox regression model is used if one wishes to analyze data that also include the follow-up time, whereas the logistic regression model analyzes data without follow-up time. They also have different measures of association: the hazard ratio for the Cox model and the odds ratio for the logistic regression model. A similarity between the two models is that both are applicable to the prediction of the outcome of a categorical variable, i.e., a variable that can accommodate only a restricted number of categories. In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. The two models can be applied in many other studies since both are suitable methods for analyzing data, but the Cox regression model is the more recommended when follow-up time is available.
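A side-by-side sketch of the two models on the same kind of data, using the lifelines and scikit-learn packages and simulated records in place of the SPSS breast-cancer file: the Cox model uses the follow-up time and yields a hazard ratio, while the logistic model uses only the binary outcome and yields an odds ratio:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 1121
nodes_positive = rng.integers(0, 2, n)
time = rng.exponential(60 / (1 + nodes_positive))   # months; shorter if node-positive
died = (time < 60).astype(int)                      # event observed within follow-up
df = pd.DataFrame({"time": np.minimum(time, 60),    # censor at 60 months
                   "died": died,
                   "nodes_positive": nodes_positive})

# Cox model: uses duration and event indicator.
cox = CoxPHFitter().fit(df, duration_col="time", event_col="died")
print("hazard ratio:", np.exp(cox.params_["nodes_positive"]).round(2))

# Logistic model: uses only the binary outcome.
logit = LogisticRegression().fit(df[["nodes_positive"]], df["died"])
print("odds ratio:  ", np.exp(logit.coef_[0][0]).round(2))
```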

Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio

Procedia PDF Downloads 455
1695 National Accreditation Board for Hospitals and Healthcare Reaccreditation, the Challenges and Advantages: A Qualitative Case Study

Authors: Narottam Puri, Gurvinder Kaur

Abstract:

Background: The National Accreditation Board for Hospitals & Healthcare Providers (NABH) is India’s apex standard-setting accrediting body in health care, which evaluates and accredits healthcare organizations. NABH requires accredited organizations to become reaccredited every three years. It is often thought that once the initial accreditation is complete, the foundation is set and reaccreditation is a much simpler process. Fortis Hospital, Shalimar Bagh, part of the Fortis Healthcare group, is a 262-bed, multi-specialty tertiary care hospital. The hospital was successfully accredited in the year 2012. On completion of its first cycle, the hospital underwent a reaccreditation assessment in the year 2015. This paper aims to gain a better understanding of the challenges that accredited hospitals face when preparing for a renewal of their accreditation. Methods: The study was conducted using a cross-sectional mixed-methods approach; semi-structured interviews were conducted with the senior leadership team and staff members, including doctors and nurses. To understand the challenges, documents collated by the QA team while preparing for the reassessment were reviewed: data on quality indicators (the method of collection, analysis, trending, and continual incremental improvements made over time), minutes of meetings, amendments made to existing policies, and newly drafted policies. Results: The senior leadership was concerned about the cost of accreditation and its impact on the quality of healthcare services, considering the staff effort and time it consumed. The management was nevertheless in favor of continuing with the accreditation, since it offered a competitive advantage and strengthened community confidence, besides securing better pay rates from payors. The clinicians regarded it as an increased non-clinical workload. Doctors felt accountable within a professional framework (to themselves, the patient and family, their peers, and their profession) but not to accreditation bodies, and they raised concerns about how the quality indicators were measured. The departmental leaders had a positive perception of accreditation; they agreed that it ensured high standards of care and improved the management of their functional areas. However, they were reluctant to spare people for QA activities due to staffing issues. With staff turnover, much of the work was lost as sticky knowledge and had to be redone. Listing the continual quality improvement initiatives over the last three years was a challenge in itself. Conclusion: The success of any quality assurance reaccreditation program depends almost entirely on the commitment and interest of the administrators, nurses, paramedical staff, and clinicians. The leader of the quality movement is critical in propelling the effort and building momentum. Leaders need to recognize skepticism and resistance and consider ways in which staff can become positively engaged. Involving all the functional owners is the starting point toward building ownership of, and accountability for, standards compliance. Creativity plays a very valuable role: communication in every form helps, whether by mail series, WhatsApp groups, quizzes, or events. Leaders must be able to generate interest and commitment without burdening clinical and administrative staff with an activity they neither understand nor believe in.

Keywords: NABH, reaccreditation, quality assurance, quality indicators

Procedia PDF Downloads 224
1694 Crossing Multi-Source Climate Data to Estimate the Effects of Climate Change on Evapotranspiration Data: Application to the French Central Region

Authors: Bensaid A., Mostephaoui T., Nedjai R.

Abstract:

Climatic factors are the subject of considerable research, both methodological and instrumental. Under the effect of climate change, estimating climate parameters with precision remains one of the main objectives of the scientific community, from the perspective of assessing climate change and its repercussions on humans and the environment. However, many regions of the world suffer from a severe lack of reliable instruments that could make up for this deficit, and the use of empirical methods then becomes the only way to assess certain parameters that can act as climate indicators. Several scientific methods are used to evaluate evapotranspiration, either directly at climate stations or through empirical methods. All these methods produce point estimates and in no case capture the spatial variation of this parameter. We therefore propose in this paper the use of three sources of information (the Meteo France network of weather stations, world databases, and MODIS satellite images) to evaluate spatial evapotranspiration (ETP) using the Turc method. This first step will reflect the degree of relevance of the indirect (satellite) methods and their generalization to sites without stations. Mapping the spatial variation of this parameter in a geographical information system (GIS) accounts for its heterogeneous behaviour, which is due to the influence of site morphological factors, and will make it possible to appreciate the role of certain topographic and hydrological parameters. A final phase predicting the medium- and long-term evolution of evapotranspiration under climate change, through the application of the Intergovernmental Panel on Climate Change (IPCC) scenarios, gives a realistic overview of the contribution of aquatic systems at the scale of the region.
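As the abstract does not reproduce the formula, below is a minimal sketch of the Turc method in its commonly cited monthly form, assuming ETP = k · t / (t + 15) · (Rg + 50) in mm/month, with k = 0.40 (0.37 for February), t the mean monthly air temperature in °C, Rg the mean global radiation in cal/cm²/day, and an aridity correction of 1 + (50 - RH)/70 applied when relative humidity RH falls below 50%. The constants, units, and function name follow the standard formulation rather than the paper itself, so they should be checked against the authors' implementation.

```python
def turc_etp_monthly(t_celsius: float, rg_cal_cm2_day: float,
                     rh_percent: float = 60.0, february: bool = False) -> float:
    """Monthly potential evapotranspiration (mm) by the Turc method (sketch)."""
    k = 0.37 if february else 0.40
    etp = k * (t_celsius / (t_celsius + 15.0)) * (rg_cal_cm2_day + 50.0)
    if rh_percent < 50.0:                      # correction for arid air
        etp *= 1.0 + (50.0 - rh_percent) / 70.0
    return max(etp, 0.0)

# Example: 20 degC mean temperature, 350 cal/cm2/day global radiation, humid month
print(turc_etp_monthly(20.0, 350.0))           # roughly 91 mm/month
```

Applied cell by cell to gridded temperature and radiation layers (e.g., MODIS-derived), the same expression yields the spatial ETP surfaces discussed above.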

Keywords: climate change, ETP, MODIS, IPCC scenarios

Procedia PDF Downloads 100
1693 Optimization of a Bioremediation Strategy for an Urban Stream of Matanza-Riachuelo Basin

Authors: María D. Groppa, Andrea Trentini, Myriam Zawoznik, Roxana Bigi, Carlos Nadra, Patricia L. Marconi

Abstract:

In the present work, a remediation bioprocess based on the use of a local isolate of the microalga Chlorella vulgaris immobilized in alginate beads is proposed. This process was shown to be effective for the reduction of several chemical and microbial contaminants present in the Cildáñez stream, a watercourse that is part of the Matanza-Riachuelo Basin (Buenos Aires, Argentina). The bioprocess, involving the culture of the microalga under autotrophic conditions for 6 days in a stirred-tank bioreactor fitted with a marine propeller, allowed a significant reduction in Escherichia coli and total coliform numbers (over 95%), as well as in ammoniacal nitrogen (96%), nitrate (86%), nitrite (98%), and total phosphorus (53%) contents. Pb content was also significantly diminished after the bioprocess (95%). Standardized cytotoxicity tests using Allium cepa seeds and Cildáñez water pre- and post-remediation were also performed. The germination rate and mitotic index of onion seeds imbibed in Cildáñez water subjected to the bioprocess were similar to those observed in seeds imbibed in distilled water and significantly superior to those registered when untreated Cildáñez water was used for imbibition. Our results demonstrate the potential of this simple and cost-effective technology to remove urban-water contaminants, offering as an additional advantage the possibility of easy biomass recovery, which may become a source of alternative energy.

Keywords: bioreactor, bioremediation, Chlorella vulgaris, Matanza-Riachuelo Basin, microalgae

Procedia PDF Downloads 250