Search results for: insurance estimation
279 An Analytical Formulation of Pure Shear Boundary Condition for Assessing the Response of Some Typical Sites in Mumbai
Authors: Raj Banerjee, Aniruddha Sengupta
Abstract:
An earthquake event, associated with a typical fault rupture, initiates at the source, propagates through a rock or soil medium and finally daylights at the surface, which might be in a populous city. The detrimental effects of an earthquake are often quantified in terms of the responses of superstructures resting on the soil. Hence, there is a need to estimate the amplification of bedrock motions due to the influence of local site conditions. In the present study, field borehole log data of the Mangalwadi and Walkeswar sites in Mumbai city are considered. The data consist of the variation of SPT N-value with the depth of soil. A correlation between shear wave velocity (Vₛ) and SPT N-value for various soil profiles of Mumbai city has been developed from various existing correlations and is used further for site response analysis. A MATLAB program is developed for studying ground response by performing two-dimensional linear and equivalent linear analyses for some typical Mumbai soil sites using a pure shear (Multi Point Constraint) boundary condition. The model is validated in the linear elastic and equivalent linear domains using the popular commercial program DEEPSOIL. Three actual earthquake motions are selected based on their frequency contents and durations and scaled to a PGA of 0.16g for the present ground response analyses. The results are presented in terms of peak acceleration time history with depth, peak shear strain time history with depth, Fourier amplitude versus frequency, response spectrum at the surface, etc. The peak ground acceleration amplification factors are found to be about 2.374, 3.239 and 2.4245 for the Mangalwadi site and 3.42, 3.39, 3.83 for the Walkeswar site using the 1979 Imperial Valley Earthquake, 1989 Loma Gilroy Earthquake and 1987 Whittier Narrows Earthquake, respectively.
In the absence of any site-specific response spectrum for the chosen sites in Mumbai, the generated spectrum at the surface may be utilized for the design of any superstructure at these locations.
Keywords: deepsoil, ground response analysis, multi point constraint, response spectrum
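As a minimal sketch of two operations the abstract describes — scaling an input motion to a target PGA of 0.16 g and computing a peak-acceleration amplification factor — the following fragment may help; the synthetic record and function names are illustrative assumptions, not the authors' MATLAB implementation.

```python
import numpy as np

def scale_to_pga(accel, target_pga):
    # Scale the record so its peak absolute acceleration equals target_pga (in g)
    return accel * (target_pga / np.max(np.abs(accel)))

def amplification_factor(surface_accel, bedrock_accel):
    # PGA amplification = surface peak / bedrock peak
    return np.max(np.abs(surface_accel)) / np.max(np.abs(bedrock_accel))

# Synthetic stand-in for a recorded accelerogram (not a real record)
t = np.linspace(0.0, 10.0, 1001)
record = 0.25 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.3 * t)
scaled = scale_to_pga(record, 0.16)
```

The same two helpers apply to any of the three scaled motions used in the study.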
Procedia PDF Downloads 179
278 Human Wildlife Conflict Outside Protected Areas of Nepal: Causes, Consequences and Mitigation Strategies
Authors: Kedar Baral
Abstract:
This study was carried out in the Mustang, Kaski, Tanahun, Baitadi, and Jhapa districts of Nepal. The study explored the spatial and temporal pattern of human-wildlife conflict (HWC), the socio-economic factors associated with it, the impacts of conflict on the life and livelihood of people and the survival of wildlife species, and the impact of climate change and forest fire on HWC. The study also evaluated people’s attitudes towards wildlife conservation and assessed relevant policies and programs. A questionnaire survey was carried out with 250 respondents, and both socio-demographic and HWC-related information were collected. Secondary information was collected from Divisional Forest Offices and the Annapurna Conservation Area Project. HWC events were grouped by season, month, and site (forest type, distance from forest, and settlement), and the coordinates of the events were exported to ArcGIS. Collected data were analyzed using descriptive statistics in Excel and the R program. A total of 1465 events were recorded in the 5 districts between 2015 and 2019. Of these, livestock killing, crop damage, human attack, and cattle shed damage events accounted for 70%, 12%, 11%, and 7%, respectively. Among 151 human attack cases, 23 people were killed, and 128 were injured. Elephants in the Terai, common leopards and monkeys in the Middle Mountains, and snow leopards in the high mountains were found to be the major problem animals. Common leopard attacks occurred more often in autumn, in the evening, and in human settlement areas, whereas elephant attacks were higher in winter, during the daytime, and on farmland. Poor farmers were found to be highly victimized, losing 26% of their income to crop raiding and livestock depredation. On the other hand, people are killing many wild animals in revenge, and this number is increasing every year. Based on people's perceptions, climate change is causing increased temperatures and forest fire events and decreased water sources within the forest.
Due to the scarcity of food and water within forests, wildlife are compelled to dwell in human settlement areas; hence, HWC events are increasing. Nevertheless, more than half of the respondents were positive about conserving all wildlife species. Forests outside protected areas are under the community forestry (CF) system, which has restored forests, improved habitat, and increased wildlife. However, CF policies and programs were found to focus on forest management, with the least priority given to wildlife conservation and HWC mitigation. The government's compensation/relief scheme for wildlife damage was found somewhat effective in managing HWC, but the lengthy process, its applicability to damage from only a few wildlife species, and the sharply increasing number of events make it necessary to revisit the scheme. Based on these facts, the study suggests carrying out awareness-generation activities for poor farmers, linking people's property with insurance schemes, conducting habitat management activities within CF, promoting unpalatable crops, improving livestock sheds, simplifying the compensation scheme and establishing a fund at the district level, and incorporating wildlife conservation and HWC mitigation programs into CF. Finally, the study suggests carrying out rigorous research to understand the impacts of current forest management practices on forests, biodiversity, wildlife, and HWC.
Keywords: community forest, conflict mitigation, wildlife conservation, climate change
Procedia PDF Downloads 115
277 TEA and Its Working Methodology in the Biomass Estimation of Poplar Species
Authors: Pratima Poudel, Austin Himes, Heidi Renninger, Eric McConnel
Abstract:
Populus spp. (poplar) are among the fastest-growing trees in North America, making them ideal for a range of applications, as they can achieve high yields on short rotations and regenerate by coppice. Furthermore, poplar undergoes biochemical conversion to fuels without complexity, making it one of the most promising purpose-grown, woody perennial energy sources. Employing wood-based biomass for bioenergy offers numerous benefits, including reduced greenhouse gas (GHG) emissions compared to non-renewable traditional fuels, the preservation of robust forest ecosystems, and economic prospects for rural communities. In order to gain a better understanding of the potential use of poplar as a biomass feedstock for biofuel in the southeastern US, we conducted a techno-economic assessment (TEA). This assessment is an analytical approach that integrates the technical and economic factors of a production system to evaluate its economic viability. The TEA focused on a short-rotation coppice system employing a single-pass cut-and-chip harvesting method for poplar. It encompassed all the costs associated with establishing dedicated poplar plantations, including land rent, site preparation, planting, fertilizers, and herbicides. Additionally, we performed a sensitivity analysis to evaluate how different costs affect the economic performance of the poplar cropping system. This analysis aimed to determine the minimum average delivered selling price for one metric ton of biomass necessary to achieve a desired rate of return over the cropping period. To inform the TEA, data on establishment, crop care activities, and crop yields were derived from a field study conducted at the Mississippi Agricultural and Forestry Experiment Station's Bearden Dairy Research Center in Oktibbeha County and the Pontotoc Ridge-Flatwood Branch Experiment Station in Pontotoc County.
Keywords: biomass, populus species, sensitivity analysis, technoeconomic analysis
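The minimum delivered selling price described above is essentially a discounted breakeven calculation: the price at which discounted revenue equals discounted cost at the desired rate of return. A hedged sketch — all cost and yield figures below are hypothetical placeholders, not the study's data:

```python
def breakeven_price(annual_costs, annual_yields_t, rate):
    """Minimum delivered price per metric ton at which the present value of
    revenue equals the present value of costs over the rotation."""
    pv_costs = sum(c / (1 + rate) ** t for t, c in enumerate(annual_costs, start=1))
    pv_tons = sum(y / (1 + rate) ** t for t, y in enumerate(annual_yields_t, start=1))
    return pv_costs / pv_tons

# Hypothetical 5-year coppice rotation: heavy establishment cost in year 1,
# maintenance afterwards, a single cut-and-chip harvest of 40 t in year 5
costs = [1500, 200, 200, 200, 400]
yields = [0, 0, 0, 0, 40]
price = breakeven_price(costs, yields, rate=0.06)
```

Sensitivity analysis then amounts to re-running `breakeven_price` while perturbing one cost at a time.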
Procedia PDF Downloads 82
276 Dataset Quality Index: Development of a Composite Indicator Based on Standard Data Quality Indicators
Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros
Abstract:
Nowadays, poor data quality is considered one of the major costs of a data project. A data project with data quality awareness spends almost as much time on data quality processes, while a data project without data quality awareness suffers negative impacts on financial resources, efficiency, productivity, and credibility. One of the processes that takes a long time is defining the expectations and measurements of data quality, because expectations differ depending on the purpose of each data project. In particular, a big data project may involve many datasets and stakeholders and take a long time to discuss and define quality expectations and measurements. Therefore, this study aimed at developing meaningful indicators that describe the overall data quality of each dataset for quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that can describe the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were followed to develop the Dataset Quality Index (SDQI). First, we defined standard data quality expectations. Second, we found indicators that can directly measure the data within datasets. Third, the indicators were aggregated into dimensions using factor analysis. Next, the indicators and dimensions were weighted by the effort required for data preparation and by usability. Finally, the dimensions were aggregated into the composite indicator. The results of these analyses showed that: (1) the developed indicators and measurements comprised ten useful indicators; (2) in developing the data quality dimensions based on statistical characteristics, we found that the ten indicators can be reduced to 4 dimensions.
(3) For the developed composite indicator, we found that the SDQI can describe the overall quality of each dataset and can separate datasets into 3 levels: Good Quality, Acceptable Quality, and Poor Quality. In conclusion, the SDQI provides an overall, meaningfully composed description of data quality within datasets. We can use the SDQI to assess all data in a data project, for effort estimation, and for prioritization. The SDQI also works well with agile methods, by using the SDQI for assessment in the first sprint. After passing the initial evaluation, more specific data quality indicators can be added in the next sprint.
Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis
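The aggregation of normalized indicators into a composite score with a three-level classification can be sketched as follows. The equal weighting and the cut-off values are illustrative assumptions — the study derives its weights from factor analysis and data-preparation effort:

```python
import numpy as np

def composite_index(indicator_matrix, weights):
    # rows = datasets, columns = indicator scores; min-max normalize each
    # indicator to [0, 1], then take the weighted average per dataset
    X = np.asarray(indicator_matrix, dtype=float)
    mn, mx = X.min(axis=0), X.max(axis=0)
    span = np.where(mx > mn, mx - mn, 1.0)
    Xn = (X - mn) / span
    w = np.asarray(weights, dtype=float)
    return Xn @ (w / w.sum())

def quality_level(score, good=0.7, acceptable=0.4):
    # Hypothetical cut-offs for the three levels reported in the study
    if score >= good:
        return "Good Quality"
    if score >= acceptable:
        return "Acceptable Quality"
    return "Poor Quality"
```

Replacing the weighted average with factor-analysis loadings would bring the sketch closer to the described pipeline.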
Procedia PDF Downloads 138
275 Evaluation of the Self-Organizing Map and the Adaptive Neuro-Fuzzy Inference System Machine Learning Techniques for the Estimation of Crop Water Stress Index of Wheat under Varying Application of Irrigation Water Levels for Efficient Irrigation Scheduling
Authors: Aschalew C. Workneh, K. S. Hari Prasad, C. S. P. Ojha
Abstract:
The crop water stress index (CWSI) is a cost-effective, non-destructive, and simple technique for tracking the onset of crop water stress. This study investigated the feasibility of using CWSI derived from canopy temperature to detect the water status of wheat crops. Artificial intelligence (AI) techniques have become increasingly popular in recent years for determining CWSI. In this study, the performance of two AI techniques, the adaptive neuro-fuzzy inference system (ANFIS) and self-organizing maps (SOM), is compared in determining the CWSI of wheat crops. Field experiments were conducted with varying irrigation water applications during two seasons in 2022 and 2023 at the irrigation field laboratory of the Civil Engineering Department, Indian Institute of Technology Roorkee, India. The ANFIS- and SOM-simulated CWSI values were compared with the experimentally calculated CWSI (EP-CWSI). Multiple regression analysis was used to determine the upper and lower CWSI baselines. The upper CWSI baseline was found to be a function of crop height and wind speed, while the lower CWSI baseline was a function of crop height, air vapor pressure deficit, and wind speed. The performance of ANFIS and SOM was compared based on mean absolute error (MAE), mean bias error (MBE), root mean squared error (RMSE), index of agreement (d), Nash-Sutcliffe efficiency (NSE), and coefficient of determination (R²). Both models successfully estimated the CWSI of the wheat crop with high correlation coefficients and low statistical errors. However, ANFIS (R²=0.81, NSE=0.73, d=0.94, RMSE=0.04, MAE=0.00-1.76 and MBE=-2.13-1.32) outperformed the SOM model (R²=0.77, NSE=0.68, d=0.90, RMSE=0.05, MAE=0.00-2.13 and MBE=-2.29-1.45). Overall, the results suggest that ANFIS is a more reliable tool than SOM for accurately determining CWSI in wheat crops.
Keywords: adaptive neuro-fuzzy inference system, canopy temperature, crop water stress index, self-organizing map, wheat
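The goodness-of-fit statistics used to compare the two models can be computed as below; this is a generic implementation of the listed formulas, not the authors' code:

```python
import numpy as np

def agreement_stats(obs, sim):
    # Goodness-of-fit measures comparing simulated CWSI against EP-CWSI
    o, s = np.asarray(obs, float), np.asarray(sim, float)
    e = s - o
    stats = {
        "MAE": np.mean(np.abs(e)),
        "MBE": np.mean(e),
        "RMSE": np.sqrt(np.mean(e ** 2)),
        # Nash-Sutcliffe efficiency: 1 for a perfect model
        "NSE": 1 - np.sum(e ** 2) / np.sum((o - o.mean()) ** 2),
        # Willmott's index of agreement d, also 1 for a perfect model
        "d": 1 - np.sum(e ** 2)
             / np.sum((np.abs(s - o.mean()) + np.abs(o - o.mean())) ** 2),
    }
    stats["R2"] = np.corrcoef(o, s)[0, 1] ** 2
    return stats
```

Feeding the ANFIS and SOM predictions through the same function reproduces the kind of comparison tabulated in the abstract.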
Procedia PDF Downloads 53
274 Relationships Between the Petrophysical and Mechanical Properties of Rocks and Shear Wave Velocity
Authors: Anamika Sahu
Abstract:
The Himalayas, like many mountainous regions, are susceptible to multiple hazards. In recent times, the frequency of such disasters has been increasing continuously due to extreme weather phenomena. These natural hazards are responsible for irreparable human and economic losses. The Indian Himalayas have repeatedly been ruptured by great earthquakes in the past and have the potential for a future large seismic event, as they lie within a seismic gap. Damage caused by earthquakes differs from locality to locality. It is well known that, during earthquakes, damage to structures is associated with subsurface conditions and the quality of construction materials. So, for sustainable mountain development, prior site characterization will be valuable for designing and constructing built spaces and for efficient mitigation of seismic risk. Both geotechnical and geophysical investigations of the subsurface are required to describe subsurface complexity. In mountainous regions, geophysical methods are gaining popularity, as areas can be studied without disturbing the ground surface, and these methods are time- and cost-effective. The MASW method is used to calculate Vs30, the average shear wave velocity of the top 30 m of soil. Shear wave velocity is considered the best stiffness indicator, and the average shear wave velocity up to 30 m is used in the National Earthquake Hazards Reduction Program (NEHRP) provisions (BSSC, 1994) and the Uniform Building Code (UBC), 1997 classification. Parameters obtained through geotechnical investigation have been integrated with findings from the subsurface geophysical survey. Joint interpretation has been used to establish inter-relationships among mineral constituents, various textural parameters, and unconfined compressive strength (UCS) with shear wave velocity. It is found that the results obtained through the MASW method fit well with the laboratory tests.
In both conditions, mineral constituents and textural parameters (grain size, grain shape, grain orientation, and degree of interlocking) control the petrophysical and mechanical properties of rocks and the behavior of shear wave velocity.
Keywords: MASW, mechanical, petrophysical, site characterization
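Vs30, the time-averaged shear wave velocity of the top 30 m used in the NEHRP and UBC classifications, is computed from a layered velocity profile by the standard formula Vs30 = 30 / Σ(hᵢ/vᵢ). A sketch (a textbook formula, not code from this paper):

```python
def vs30(thicknesses_m, velocities_ms):
    """Time-averaged shear wave velocity over the top 30 m,
    truncating the profile at 30 m depth."""
    depth, travel_time = 0.0, 0.0
    for h, v in zip(thicknesses_m, velocities_ms):
        use = min(h, 30.0 - depth)  # only the part of the layer above 30 m
        if use <= 0:
            break
        travel_time += use / v
        depth += use
    if depth < 30.0:  # extend the deepest layer if the profile is shallow
        travel_time += (30.0 - depth) / velocities_ms[-1]
    return 30.0 / travel_time
```

For example, a 15 m layer at 200 m/s over a 15 m layer at 400 m/s yields Vs30 ≈ 267 m/s, which would fall in NEHRP site class D.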
Procedia PDF Downloads 83
273 An Economic Study for Fish Production in Egypt
Authors: Manal Elsayed Elkheshin, Rasha Saleh Mansour, Mohamed Fawzy Mohamed Eldnasury, Mamdouh Elbadry Mohamed
Abstract:
This research aims to identify the main factors affecting fish production and consumption in Egypt through econometric estimation of various functional forms for fish production and consumption during the period 1991-2014, and to forecast fish production and consumption in Egypt until 2020 using the best-fitting ARIMA models. This research also studies the economic feasibility of fish production in aquaculture farms; the investment cost represents the value of land, buildings, equipment, and irrigation. The farm cultures three types of fish (tilapia, carp, and mullet) on a total area of about one acre. The annual fish production of this project is about 3.5 tons, and the annual investment cost is about 50500 pounds. We conclude that the project can repay its investment costs after about 4 years and 5 months, and we therefore recommend implementing the project. The internal rate of return (IRR) reached about 22.1%, a high rate: each pound invested in this project achieves an estimated annual return of 0.221 pounds, more than the opportunity cost, so we recommend implementing the project. Recommendations: 1. Increase fish farming to decrease the animal protein gap. 2. Increase the number of mechanized fishing boats and provide transport equipped to maintain the quality of fish production. 3. Encourage and attract local and foreign investment, providing advice to investors in the aquaculture field. 4.
Issue awareness newsletters on the importance of these projects, which yield a net profit after cost recovery in less than five years, with an IRR of about 23%, much more than the opportunity cost of a bank interest rate of about 7%; this helps create work opportunities for graduates, contributes to reducing fish imports, and improves the performance of the food trade balance.
Keywords: equation model, individual share, red meat, consumption, production, endogenous variable, exogenous variable, financial performance, fish culture, feasibility study, fish production, aquaculture
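The internal rate of return compared against the bank interest rate above is the rate at which the net present value of the project's cash flows is zero. It can be found for any cash-flow profile with a simple bisection; the cash flows below are illustrative, not the project's actual figures:

```python
def npv(rate, cashflows):
    # Net present value; cashflows[0] occurs at time 0
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=0.0, hi=1.0, tol=1e-8):
    # Bisection on NPV; assumes NPV(lo) > 0 > NPV(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

A project is worthwhile under this criterion when its IRR exceeds the opportunity cost of capital (here, the 7% bank rate).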
Procedia PDF Downloads 367
272 The Grade Six Pupils' Learning Styles and Their Achievements and Difficulties on Fractions Based on Kolb's Model
Authors: Faiza Abdul Latip
Abstract:
One of the ultimate goals of any nation is to produce competitive manpower, and this includes the Philippines. Proficiency in the field of mathematics has a significant role in achieving this goal. However, mathematics is considered by most people to be the most difficult subject to learn. This is manifested in the low performance of students in national and international assessments. Educators have widely used learning style models to identify the ways students learn. Moreover, such models can be the front line in identifying the difficulties each learner has with a particular topic, specifically concepts pertaining to fractions. However, as many educators have observed, students show difficulties in doing mathematical tasks, and to a great degree in dealing with fractions, most markedly in the district of Datu Odin Sinsuat, Maguindanao. This study focused on the learning styles of grade six pupils in the Datu Odin Sinsuat districts, along with their achievements and difficulties in learning concepts on fractions. Five hundred thirty-two pupils from ten different public elementary schools of the Datu Odin Sinsuat districts were purposively selected as the respondents of the study. A descriptive research design using the survey method was employed. Quantitative analyses of the pupils' learning styles on the Kolb Learning Style Inventory (KLSI) and their scores on a mathematics diagnostic test of fraction concepts were made using this method. Simple frequency and percentage counts were used to analyze the pupils' learning styles and their achievements on fractions. To determine the pupils' difficulties with fractions, the index of difficulty of every item was determined. Lastly, the Kruskal-Wallis test was used to determine significant differences in the pupils' achievements on fractions classified by their learning styles. This test was set at the 0.05 level of significance.
The critical H-value of 7.82 was used to determine the significance of the test. The results revealed that the pupils of the Datu Odin Sinsuat districts learn fractions in varied ways, as they have different learning styles. However, their achievements in fractions are low regardless of their learning styles. Difficulties in learning fractions were found most in the areas of estimation, comparing/ordering, and the division interpretation of fractions. Most of the pupils find it very difficult to use a fraction as a measure, to compare or arrange a series of fractions, and to use the concept of a fraction as a quotient.
Keywords: difficulties in fraction, fraction, Kolb's model, learning styles
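The index of difficulty for each test item is the proportion of pupils answering it correctly; lower values mean harder items. A sketch with hypothetical classification cut-offs (the study's exact bands are not given in the abstract):

```python
def difficulty_index(responses):
    # responses: 1 = correct, 0 = incorrect, one entry per pupil
    return sum(responses) / len(responses)

def classify_difficulty(p):
    # Illustrative bands only; the study may use different cut-offs
    if p < 0.25:
        return "very difficult"
    if p < 0.75:
        return "moderately difficult"
    return "easy"
```

Items on estimation, comparing/ordering, and division interpretation would, per the findings, fall toward the "very difficult" end of such a scale.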
Procedia PDF Downloads 215
271 Climate Change and Migration in the Semi-arid Tropic and Eastern Regions of India: Exploring Alternative Adaptation Strategies
Authors: Gauri Sreekumar, Sabuj Kumar Mandal
Abstract:
Contributing about 18% of India’s Gross Domestic Product, the agricultural sector plays a significant role in the Indian rural economy. Despite agriculture being the primary source of livelihood for more than half of India’s population, most farmers are marginal and small farmers facing several challenges due to agro-climatic shocks. Climate change is expected to increase risk in regions that are highly agriculture dependent. With systematic and scientific evidence of changes in rainfall, temperature, and other extreme climate events, migration has started to emerge as a survival strategy for farm households. Against this backdrop, our present study combines these two strands of literature and explores whether migration is the only adaptation strategy for farmers once they experience crop failures due to adverse climatic conditions. Combining temperature and rainfall information from the weather data provided by the Indian Meteorological Department with household-level panel data on Indian states belonging to the Eastern and Semi-Arid Tropics regions from the Village Dynamics in South Asia (VDSA) database, collected by the International Crops Research Institute for the Semi-Arid Tropics, we form a rich panel dataset for the years 2010-2014. A recursive econometric model is used to establish the three-way nexus between climate change, yield, and migration while addressing the role of irrigation and local non-farm income diversification. Using the Three-Stage Least Squares estimation method, we find that climate-change-induced yield loss is a major driver of farmers’ migration. However, irrigation and local non-farm income diversification are found to mitigate the adverse impact of climate change on migration.
Based on our empirical results, we suggest enhancing irrigation facilities and making local non-farm income diversification opportunities available to increase farm productivity and thereby reduce farmers’ migration.
Keywords: climate change, migration, adaptation, mitigation
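The recursive climate-yield-migration structure can be illustrated with a stylized two-stage sketch on synthetic data. This is not the authors' Three-Stage Least Squares estimator — merely a stage-wise OLS illustration of the identification idea, with all variable names and coefficients invented for the example:

```python
import numpy as np

def ols(y, X):
    # Ordinary least squares with an intercept, via least-squares solve
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

rng = np.random.default_rng(0)
n = 200
temp = rng.normal(25, 2, n)        # synthetic temperature (deg C)
rain = rng.normal(800, 100, n)     # synthetic rainfall (mm)
# Stage 1 "truth": yield falls with temperature, rises with rainfall
yield_ = 5 - 0.1 * temp + 0.002 * rain + rng.normal(0, 0.1, n)
# Stage 2 "truth": migration rises as yield falls
migration = 2 - 0.5 * yield_ + rng.normal(0, 0.1, n)

b1 = ols(yield_, np.column_stack([temp, rain]))          # climate -> yield
fitted_yield = np.column_stack([np.ones(n), temp, rain]) @ b1
b2 = ols(migration, fitted_yield)                        # fitted yield -> migration
```

The negative sign on `b2[1]` mirrors the paper's finding that climate-induced yield loss drives migration; a full 3SLS estimation would additionally exploit cross-equation error correlations.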
Procedia PDF Downloads 63
270 Genetic Diversity of Sugar Beet Pollinators
Authors: Ksenija Taški-Ajdukovic, Nevena Nagl, Živko Ćurčić, Dario Danojević
Abstract:
Information about the genetic diversity of sugar beet parental populations is of great importance for hybrid breeding programs. The aim of this research was to evaluate genetic diversity among and within populations and lines of diploid sugar beet pollinators using SSR markers. Eight pollinators originating from three USDA-ARS breeding programs and four pollinators from the Institute of Field and Vegetable Crops, Novi Sad, were used as plant material. Depending on the presence of the self-fertility gene, the pollinators were divided into three groups: autofertile (inbred lines), autosterile (open-pollinating populations), and a group with partial presence of the autofertility gene. A total of 40 SSR primers were screened, of which 34 were selected for the analysis of genetic diversity. A total of 129 different alleles were obtained, with a mean of 3.2 alleles per SSR primer. According to the results of the genetic variability assessment, the number and percentage of polymorphic loci were maximal in pollinator NS1 and tester cms2, while the effective number of alleles, expected heterozygosity, and Shannon’s index were highest in pollinator EL0204. Analysis of molecular variance (AMOVA) showed that 77.34% of the total genetic variation was attributed to intra-varietal variance. Correspondence analysis results were very similar to grouping by the neighbor-joining algorithm. The number of groups was smaller by one, because correspondence analysis merged the IFVCNS pollinators with CZ25 into one group. Pollinators FC220, FC221, and C 51 were in the next group, while the self-fertile pollinators CR10 and C930-35 from USDA-Salinas were separated. On another branch were the self-sterile pollinators EL0204 and EL53 from USDA-East Lansing. The sterile testers cms1 and cms2 formed a separate group. The presented results confirmed that SSR analysis can be successfully used in the estimation of genetic diversity within and among sugar beet populations.
Since the tested pollinators differed in the presence of the self-fertility gene, their heterozygosity differed as well: it was lower in genotypes with fixed self-fertility genes. Since most of the tested populations were open-pollinated, and such populations rarely self-pollinate, high variability within the populations was expected. Cluster analysis grouped populations according to their origin.
Keywords: autofertility, genetic diversity, pollinator, SSR, sugar beet
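The diversity statistics reported above — expected heterozygosity and Shannon's information index — follow standard formulas over the allele frequencies at a locus:

```python
import math

def expected_heterozygosity(freqs):
    # He = 1 - sum(p_i^2) over allele frequencies at a locus
    return 1.0 - sum(p * p for p in freqs)

def shannon_index(freqs):
    # Shannon's information index: I = -sum(p_i * ln p_i)
    return -sum(p * math.log(p) for p in freqs if p > 0)
```

Averaging these per-locus values over the 34 SSR loci gives the per-pollinator summaries compared in the study.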
Procedia PDF Downloads 458
269 Comparison of Rainfall Trends in the Western Ghats and Coastal Region of Karnataka, India
Authors: Vinay C. Doranalu, Amba Shetty
Abstract:
In recent days, due to climate change, there is large variation in the spatial distribution of daily rainfall within a small region. Rainfall is one of the key climatic variables that affect spatio-temporal patterns of water availability. The real task posed by climate change is the identification, estimation, and understanding of the uncertainty of rainfall. This study analyzes the spatial variations and temporal trends of daily precipitation using high-resolution (0.25º x 0.25º) gridded data of the Indian Meteorological Department (IMD). For the study, 38 grid points were selected in the study area, and the daily precipitation time series (113 years) was analyzed over the period 1901-2013. Grid points were divided into two zones based on elevation and location: Low Land (exposed to the sea and low elevation; the coastal region) and High Land (interior from the sea and high elevation; the Western Ghats). The time series at each grid point was examined for temporal trends using the non-parametric Mann-Kendall test, with the Theil-Sen estimator used to perceive the nature and magnitude of the slope of the trend. The Pettitt-Mann-Whitney test was applied to detect the most probable change point in the trends of the time period. Results revealed remarkable monotonic trends at each grid point in the daily precipitation time series. In general, the regional cluster analysis found an increasing precipitation trend in the shoreline region and a decreasing trend in the Western Ghats in recent years. The spatial distribution of rainfall can be partly explained by heterogeneity in the temporal trends of rainfall identified by change point analysis. The Mann-Kendall test shows significant variation, with weaker rainfall in the rainfall distribution over the eastern parts of the Western Ghats region of Karnataka.
Keywords: change point analysis, coastal region India, gridded rainfall data, non-parametric
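The Mann-Kendall statistic and Theil-Sen slope used here are straightforward to compute directly. A minimal implementation, without the tie correction and significance testing a full analysis would add:

```python
import numpy as np

def mann_kendall_s(x):
    # S statistic: sum of signs of all pairwise differences x[j] - x[i], j > i.
    # Positive S suggests an increasing trend, negative a decreasing one.
    x = np.asarray(x, float)
    n = len(x)
    return int(sum(np.sign(x[j] - x[i])
                   for i in range(n - 1) for j in range(i + 1, n)))

def theil_sen_slope(x):
    # Median of slopes over all pairs of points (robust trend magnitude)
    x = np.asarray(x, float)
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(n - 1) for j in range(i + 1, n)]
    return float(np.median(slopes))
```

Applied to each of the 38 grid-point series, the sign of S and the Theil-Sen slope reproduce the kind of trend map the study describes.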
Procedia PDF Downloads 291
268 Improving Climate Awareness and the Knowledge Related to Climate Change's Health Impacts on Medical Schools
Authors: Abram Zoltan
Abstract:
Over the past hundred years, human activities, particularly the burning of fossil fuels, have released enough carbon dioxide and other greenhouse gases to trap additional heat in the lower atmosphere and affect the global climate. Climate change affects many social and environmental determinants of health: clean air, safe drinking water, and adequate food. Our aim is to draw attention to the effects of climate change on health and the health care system. Improving climate awareness and knowledge of climate change's health impacts is essential among medical students and practicing medical doctors. Therefore, in their everyday practice, they need assistance and up-to-date knowledge of how climate change can endanger human health and of how to deal with these novel health problems. Our activity, based on the cooperation of several universities, aims to develop new curriculum outlines and learning materials on climate change's health impacts for medical schools. Special attention is paid to possible preventive measures against these impacts. To this end, the project plans to create new curriculum outlines and learning materials for medical students, elaborate methodological guidelines, and create training materials for medical doctors' postgraduate learning programs. The target groups of the project are medical students, the educational staff of medical schools and universities, and practicing medical doctors, with special attention to general practitioners and family doctors. We searched various surveys and domestic and international studies on the effects of climate change and statistical estimations of the possible consequences. The health effects of climate change can be measured only approximately, by considering only a fraction of the potential health effects and assuming continued economic growth and health progress. Under these assumptions, climate change is expected to cause about 250,000 additional deaths.
We conclude that climate change is one of the most serious problems of the 21st century, affecting all populations. In the short to medium term, the health effects of climate change will be determined mainly by human vulnerability. In the longer term, the effects depend increasingly on the extent to which transformational action is taken now to reduce emissions. We can contribute to reducing environmental pollution by raising awareness and by educating the population.
Keywords: climate change, health impacts, medical students, education
Procedia PDF Downloads 125
267 Discharge Estimation in a Two Flow Braided Channel Based on Energy Concept
Authors: Amiya Kumar Pati, Spandan Sahu, Kishanjit Kumar Khatua
Abstract:
Rivers, our main source of water, are a form of open channel flow, and flow in open channels presents many complex phenomena that need to be tackled, such as critical flow conditions, boundary shear stress, and depth-averaged velocity. The development of society depends, more or less solely, upon the flow of rivers. Rivers are major sources of sediments and specific ingredients that are essential for human beings. A river flow consisting of small and shallow channels sometimes divides and recombines numerous times because of slow water flow or built-up sediments. The pattern formed during this process resembles the strands of a braid. Braided streams form where the sediment load is so heavy that some of the sediments are deposited as shifting islands. Braided rivers often exist near mountainous regions and typically carry coarse-grained and heterogeneous sediments down a fairly steep gradient. In this paper, the apparent shear stress formulae were suitably modified, and the Energy Concept Method (ECM) was applied for the prediction of discharges at the junction of a two-flow braided compound channel. The Energy Concept Method had not previously been applied to estimating discharges in braided channels. The energy loss in the channels is analyzed based on mechanical analysis. The channel cross-section is divided into two sub-areas, namely the main channel below the bank-full level and the region above the bank-full level, for estimating the total discharge. The experimental data are compared with a wide range of theoretical data available in the published literature to verify this model. The accuracy of this approach is also compared with the Divided Channel Method (DCM). From the error analysis of this method, it is observed that the relative error is lower for data sets having smooth floodplains compared to rough floodplains.
Comparisons with other models indicate that the present method has reasonable accuracy for engineering purposes.
Keywords: critical flow, energy concept, open channel flow, sediment, two-flow braided compound channel
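The Divided Channel Method used as the comparison baseline splits the compound cross-section at the bank-full level and sums subsection discharges, each typically computed with Manning's equation. A minimal sketch of that baseline (the geometry and roughness values below are illustrative placeholders, not the experimental data of this study):

```python
def manning_discharge(area, wetted_perimeter, slope, n):
    """Manning's equation: Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    hydraulic_radius = area / wetted_perimeter
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

def dcm_total_discharge(subsections, slope):
    """Divided Channel Method: total discharge is the sum over subsections
    (main channel below bank-full level, floodplain region above it)."""
    return sum(manning_discharge(a, p, slope, n) for a, p, n in subsections)

# Illustrative subsections: (area m^2, wetted perimeter m, Manning's n)
subsections = [(2.4, 4.0, 0.010),   # main channel, smooth boundary
               (1.2, 6.0, 0.025)]   # floodplain, rougher boundary
Q = dcm_total_discharge(subsections, slope=0.001)
```

The Energy Concept Method replaces this simple summation with an energy-loss balance at the junction; the sketch only shows the DCM side of the comparison.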
Procedia PDF Downloads 125
266 A Simplified Method to Assess the Damage of an Immersed Cylinder Subjected to Underwater Explosion
Authors: Kevin Brochard, Herve Le Sourne, Guillaume Barras
Abstract:
The design of a submarine’s hull is crucial for its operability and crew safety, but it is also complex. Indeed, engineers need to balance lightness, acoustic discretion and resistance to both immersion pressure and environmental attacks. Underwater explosions represent a first-rate threat to the integrity of the hull, whose behavior needs to be properly analyzed. The presented work is focused on the development of a simplified analytical method to study the structural response of a deeply immersed cylinder subjected to an underwater explosion. This method aims to provide engineers with a quick estimate of the resulting damage, allowing them to simulate a large number of explosion scenarios. The present research relies on the so-called plastic string on plastic foundation model, in which a two-dimensional boundary value problem for a cylindrical shell is converted to an equivalent one-dimensional problem of a plastic string resting on a non-linear plastic foundation. For this purpose, equivalence parameters are defined and evaluated by making assumptions on the shape of the displacement and velocity fields in the cross-sectional plane of the cylinder. Closed-form solutions for the deformation and velocity profile of the shell are obtained for explosive loading, and compare well with numerical and experimental results. However, the plastic-string model has not yet been adapted for an immersed cylinder subjected to explosive loading: the effects of fluid-structure interaction have to be taken into account. Moreover, when an underwater explosion occurs, several pressure waves are emitted by the gas bubble pulsations, called secondary waves; the corresponding loads, which may produce significant damage to the cylinder, must also be accounted for. The analytical developments carried out to solve the above problem of a shock wave impacting a cylinder, considering fluid-structure interaction, will be presented for an unstiffened cylinder.
The resulting deformations are compared to experimental and numerical results for different shock factors and different standoff distances.
Keywords: immersed cylinder, rigid plastic material, shock loading, underwater explosion
Procedia PDF Downloads 331
265 Structure-Guided Optimization of Sulphonamide as Gamma–Secretase Inhibitors for the Treatment of Alzheimer’s Disease
Authors: Vaishali Patil, Neeraj Masand
Abstract:
In older people, Alzheimer’s disease (AD) is turning out to be a lethal disease. According to the amyloid hypothesis, aggregation of the amyloid β-protein (Aβ), particularly its 42-residue variant (Aβ42), plays a direct role in the pathogenesis of AD. Aβ is generated through sequential cleavage of amyloid precursor protein (APP) by β-secretase (BACE) and γ-secretase (GS). Thus, in the treatment of AD, γ-secretase modulators (GSMs) are potentially disease-modifying, as they selectively lower pathogenic Aβ42 levels by shifting the enzyme cleavage sites without inhibiting γ-secretase activity. This possibly avoids the known adverse effects observed with complete inhibition of the enzyme complex. Virtual screening, via a drug-like ADMET filter, QSAR and molecular docking analyses, has been utilized to identify novel γ-secretase modulators with a sulphonamide nucleus. Based on the QSAR analyses and docking scores, some novel analogs have been synthesized. The results obtained by the in silico studies have been validated by performing in vivo analysis. In the first step, behavioral assessment was carried out using a scopolamine-induced amnesia methodology. Later, the same series was evaluated for neuroprotective potential against the oxidative stress induced by scopolamine. Biochemical estimation was performed to evaluate the changes in biochemical markers of Alzheimer’s disease such as lipid peroxidation (LPO), glutathione reductase (GSH), and catalase. The scopolamine-induced amnesia model showed increased acetylcholinesterase (AChE) levels, and the inhibitory effect of the test compounds on brain AChE levels was evaluated. In all the studies, donepezil (dose: 50 µg/kg) was used as the reference drug. Reduced AChE activity was shown by compounds 3f, 3c, and 3e. In the later stage, the most potent compounds were evaluated for their Aβ42 inhibitory profile.
It can be hypothesized that this series of alkyl-aryl sulphonamides exhibits anti-AD activity by inhibition of the acetylcholinesterase (AChE) enzyme as well as inhibition of plaque formation on prolonged dosage, along with neuroprotection from oxidative stress.
Keywords: gamma-secretase inhibitors, Alzheimer's disease, sulphonamides, QSAR
Procedia PDF Downloads 252
264 Association of the Frequency of the Dairy Products Consumption by Students and Health Parameters
Authors: Radyah Ivan, Khanferyan Roman
Abstract:
Milk and dairy products are an important component of a balanced diet. Dairy products represent a heterogeneous food group of solid, semi-solid and liquid, fermented or non-fermented foods, each differing in nutrients such as fat and micronutrient content. A deficiency of milk and dairy products has an impact on the main health parameters of various age groups of the population. The goal of this study was to analyze the frequency of consumption of milk and various groups of dairy products by students and its association with their body mass index (BMI), body composition and other physiological parameters. 388 full-time students of the Medical Institute of RUDN University (185 male and 203 female; average age 20.4±2.2 and 21.9±1.7 y.o., respectively) took part in the cross-sectional study. Anthropometric measurements were taken, and BMI and body composition were estimated by bioelectrical impedance analysis. The frequency of consumption of milk and various groups of dairy products was studied using a modified food frequency questionnaire. The questionnaire data demonstrated that only 11% of respondents consume milk daily, 5% cottage cheese, 4% and 1% fermented milk products (natural and with fillers, respectively), and 4% hard cheese. The study showed that about 16% of the respondents did not consume milk at all over the past month, about one third no cottage cheese, 22% no natural sour-milk products and 18% no sour-milk products with various fillers; hard cheeses and pickled cheeses were not consumed by 9% and 26% of respondents, respectively. Gender differences in consumer preferences were also revealed: female students are less likely than male students to consume cream, sour cream, soft cheese and milk.
Among female students the prevalence of overweight was higher (25%) than among male students (19%). An inverse relationship was demonstrated between daily milk and dairy product intake and BMI and body composition parameters (r=-0.61 and r=-0.65). The study showed insufficient daily consumption of milk and dairy products by students and demonstrated a relationship between low and infrequent consumption of dairy products and the main indicators of physical activity and health.
Keywords: frequency of consumption, milk, dairy products, physical development, nutrition, body mass index
Procedia PDF Downloads 36
263 Development of an Implicit Coupled Partitioned Model for the Prediction of the Behavior of a Flexible Slender Shaped Membrane in Interaction with Free Surface Flow under the Influence of a Moving Flotsam
Authors: Mahtab Makaremi Masouleh, Günter Wozniak
Abstract:
This research is part of an interdisciplinary project promoting the design of a light, temporarily installable textile defence system against floods. If river water levels rise abruptly, especially in winter, one can expect massive extra loads on a textile protective structure in terms of impact from floating debris and even tree trunks. Estimation of this impulsive force on such structures is of great importance, as it can ensure the reliability of the design in critical cases. This fact provides the motivation for the numerical analysis of a fluid-structure interaction application comprising a flexible slender-shaped membrane and free-surface water flow, where an accelerated heavy flotsam approaches the membrane. In this context, the analysis of both the behavior of the flexible membrane and its interaction with the moving flotsam is conducted with the finite-element-based explicit and implicit solvers of Abaqus, available as products of SIMULIA software. On the other hand, how free-surface water flow behaves in response to moving structures has been investigated using the finite volume solver of Star-CCM+ from Siemens PLM Software. An automatic communication tool (CSE, the SIMULIA Co-Simulation Engine) and the implementation of an effective partitioned strategy in the form of an implicit coupling algorithm make it possible for the partitioned domains to be interconnected powerfully. The applied procedure ensures stability and convergence in the solution of these complicated issues, albeit at high computational cost; a further complexity of this study stems from the mesh criterion in the fluid domain where the two structures approach each other. This contribution presents the approaches for the establishment of a convergent numerical solution and compares the results with experimental findings.
Keywords: co-simulation, flexible thin structure, fluid-structure interaction, implicit coupling algorithm, moving flotsam
Procedia PDF Downloads 388
262 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea
Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim
Abstract:
Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. These blooms have been accelerated by eutrophication from human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict ocean algae concentrations with bio-optical algorithms applied to satellite color images. However, accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represent the water environment of the sea around Korea. The method employed GOCI images of the water-leaving radiances centered at 443 nm, 490 nm and 660 nm, as well as observed weather data (i.e., humidity, temperature and atmospheric pressure), as the database to capture the optical characteristics of algae and train a deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and an artificial neural network (ANN) was then used to estimate the concentration of algae from the extracted features. The deep learning model was trained with a backpropagation learning strategy. The established methods were tested and compared with the performance of the GOCI data processing system (GDPS), which is based on standard image processing and optical algorithms. The model performed better at estimating algae concentration than the GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration in spite of the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing.
Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
Keywords: deep learning, algae concentration, remote sensing, satellite
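The CNN-feature-extraction-plus-ANN-regression pipeline described in the abstract can be sketched in miniature with NumPy. This is a structural toy only: the weights are random (not the trained GOCI model), and the patch size, layer sizes and weather values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(image, kernel):
    """Single-channel valid convolution (the feature-extraction step)."""
    h, w = kernel.shape
    H, W = image.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+h, j:j+w] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

# Three GOCI-like bands (443, 490, 660 nm) as toy 8x8 radiance patches
patch = rng.random((3, 8, 8))

# "CNN" part: one random 3x3 kernel per band, ReLU, global average pooling
kernels = rng.standard_normal((3, 3, 3))
features = np.array([relu(conv2d_valid(patch[b], kernels[b])).mean()
                     for b in range(3)])

# Append toy weather inputs (humidity, temperature, pressure), standardized
weather = np.array([0.6, 0.2, -0.1])
x = np.concatenate([features, weather])

# "ANN" part: one hidden layer mapping features to an algae concentration
W1, b1 = rng.standard_normal((4, 6)), np.zeros(4)
W2, b2 = rng.standard_normal(4), 0.0
concentration = relu(W1 @ x + b1) @ W2 + b2
```

In the actual study, the weights would come from backpropagation training against in-situ concentration labels rather than being drawn at random.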
Procedia PDF Downloads 182
261 Assessment of Climate Change Impacts on the Hydrology of Upper Guder Catchment, Upper Blue Nile
Authors: Fikru Fentaw Abera
Abstract:
Climate change alters regional hydrologic conditions and results in a variety of impacts on water resource systems. Such hydrologic changes will affect almost every aspect of human well-being. The goal of this paper is to assess the impact of climate change on the hydrology of the Upper Guder catchment located in the northwest of Ethiopia. GCM-derived scenarios (HadCM3 A2a and B2a SRES emission scenarios) were used for the climate projection. The statistical downscaling model (SDSM) was used to generate possible future local meteorological variables in the study area. The downscaled data were then used as input to the Soil and Water Assessment Tool (SWAT) model to simulate the corresponding future streamflow regime in the Upper Guder catchment of the Abay River Basin. A semi-distributed hydrological model, SWAT, was developed, and Generalized Likelihood Uncertainty Estimation (GLUE) was utilized for uncertainty analysis; GLUE is linked with SWAT in the calibration and uncertainty program known as SWAT-CUP. Three benchmark periods were simulated for this study: the 2020s, 2050s and 2080s. The time series generated by the HadCM3 GCM for A2a and B2a and by SDSM indicate a significant increasing trend in maximum and minimum temperature values and a slight increasing trend in precipitation for both emission scenarios at both the Gedo and Tikur Inch stations for all three benchmark periods. The hydrologic impact analysis, with the downscaled temperature and precipitation time series as input to the hydrological model SWAT, was performed for both A2a and B2a emission scenarios. The model output shows that there may be an annual increase in flow volume of up to 35% for both emission scenarios in all three benchmark periods. All seasons show an increase in flow volume for both A2a and B2a emission scenarios for all time horizons.
Potential evapotranspiration in the catchment also will increase annually on average 3-15% for the 2020s and 7-25% for the 2050s and 2080s for both A2a and B2a emissions scenarios.
Keywords: climate change, Guder sub-basin, GCM, SDSM, SWAT, SWAT-CUP, GLUE
Procedia PDF Downloads 363
260 Effective Medium Approximations for Modeling Ellipsometric Responses from Zinc Dialkyldithiophosphates (ZDDP) Tribofilms Formed on Sliding Surfaces
Authors: Maria Miranda-Medina, Sara Salopek, Andras Vernes, Martin Jech
Abstract:
Sliding lubricated surfaces induce the formation of tribofilms that reduce friction and wear and prevent large-scale damage of contact parts. Engine oils and lubricants use antiwear and antioxidant additives such as zinc dialkyldithiophosphate (ZDDP), from which protective tribofilms are formed by degradation. The ZDDP tribofilms are described as a two-layer structure composed of inorganic polymer material. On the top surface, the long-chain polyphosphate is a zinc phosphate; in the bulk, the short-chain polyphosphate is a mixed Fe/Zn phosphate with a gradient concentration. The polyphosphate chains are partially adherent to the steel surface through a sulfide and work as anti-wear pads. In this contribution, ZDDP tribofilms formed on gray cast iron surfaces are studied. The tribofilms were generated in a reciprocating sliding tribometer with a piston ring-cylinder liner configuration. Fully formulated oil of SAE grade 5W-30 was used as lubricant during two tests, at 40 Hz and 50 Hz. For the estimation of tribofilm thicknesses, spectroscopic ellipsometry was used because of its high accuracy and non-destructive nature. Ellipsometry works on an optical principle whereby the change in polarisation of light reflected by the surface is associated with the refractive index of the surface material or the thickness of the layer deposited on top. Ellipsometric responses from the tribofilms are modelled by effective medium approximation (EMA), which includes the refractive indices of the materials involved, the homogeneity of the film, and its thickness. The material composition was obtained from X-ray photoelectron spectroscopy studies, where the presence of ZDDP, O and C was confirmed. From the EMA models it was concluded that the tribofilms formed at 40 Hz are thicker and more homogeneous than those formed at 50 Hz.
In addition, the refractive indices of the individual materials are mixed to derive an effective refractive index that describes the optical composition of the tribofilm and exhibits a maximum response in the UV range, characteristic of glassy semitransparent films.
Keywords: effective medium approximation, reciprocating sliding tribometer, spectroscopic ellipsometry, zinc dialkyldithiophosphate
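The mixing of constituent refractive indices into an effective index can be illustrated with the standard two-phase Bruggeman effective medium approximation, one common EMA form in ellipsometric modeling. A generic sketch; the constituent indices and volume fraction below are placeholders, not the fitted tribofilm values:

```python
import numpy as np

def bruggeman_effective_permittivity(eps1, eps2, f1):
    """Two-phase Bruggeman EMA:
        f1*(eps1 - e)/(eps1 + 2e) + (1 - f1)*(eps2 - e)/(eps2 + 2e) = 0,
    rearranged to the quadratic 2e^2 - b*e - eps1*eps2 = 0 and solved for
    the physical (positive) root."""
    b = f1 * (2 * eps1 - eps2) + (1 - f1) * (2 * eps2 - eps1)
    return (b + np.sqrt(b ** 2 + 8 * eps1 * eps2)) / 4.0

def refractive_index(eps):
    """Effective refractive index from a real permittivity (lossless case)."""
    return np.sqrt(eps)

# Placeholder constituents: a glassy phosphate-like film mixed with voids
n_film, n_void = 1.60, 1.00
eps_eff = bruggeman_effective_permittivity(n_film ** 2, n_void ** 2, f1=0.8)
n_eff = refractive_index(eps_eff)
```

A sanity check on the formula: with identical constituents (or a volume fraction of 1) the effective permittivity reduces to the constituent value, and the mixed index always lies between the two endpoint indices.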
Procedia PDF Downloads 250
259 Yield Loss Estimation Using Multiple Drought Severity Indices
Authors: Sara Tokhi Arab, Rozo Noguchi, Tofeal Ahamed
Abstract:
Drought is a natural disaster that occurs in a region due to a lack of precipitation and high temperatures over a continuous period or in a single season as a consequence of climate change. Precipitation deficits and prolonged high temperatures mostly affect the agricultural sector, water resources, socioeconomics, and the environment. Consequently, drought causes agricultural production loss, food shortage, famine, migration, and natural resource degradation in a region. Agriculture is the first sector affected by drought. Therefore, it is important to develop agricultural drought risk and loss assessments to mitigate the drought impact on the agriculture sector. In this context, the main purpose of this study was to assess yield loss using a composite drought index (CDI) in the drought-affected vineyards. In this study, the CDI was developed for the years 2016 to 2020 by combining five indices: the vegetation condition index (VCI), temperature condition index (TCI), deviation of NDVI from the long-term mean (NDVI DEV), normalized difference moisture index (NDMI) and precipitation condition index (PCI). Moreover, a quantitative principal component analysis (PCA) approach was used to assign a weight to each input parameter, and the weighted indices were then combined into one composite drought index. Finally, Bayesian regularized artificial neural networks (BRANNs) were used to evaluate the yield variation in each affected vineyard. The composite drought index results indicated that moderate to severe droughts were observed across Kabul Province during 2016 and 2018, and that no vineyard was in extreme drought condition; therefore, only the severe and moderate conditions were considered. According to the BRANNs results, R = 0.87 and R = 0.94 in severe drought conditions for 2016 and 2018, and R = 0.85 and R = 0.91 in moderate drought conditions for 2016 and 2018, respectively.
Within the two drought years, there was a significant deficit in the vineyards of Kabul Province. According to the findings, 2018 had the highest rate of loss, almost -7 ton/ha, whereas in 2016 the loss rate was about -1.2 ton/ha. This research will support stakeholders in identifying drought-affected vineyards and in supporting farmers during severe drought.
Keywords: grapes, composite drought index, yield loss, satellite remote sensing
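The PCA-weighted combination of the five drought indices into one composite index can be sketched with NumPy: standardize each index, take weights from the first principal component, and form the weighted sum. The random input values are stand-ins for the real VCI, TCI, NDVI DEV, NDMI and PCI rasters, and the unsigned normalized weighting is one plausible reading of the PCA weighting step, not the study's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for the five input indices over N pixels:
# columns = VCI, TCI, NDVI DEV, NDMI, PCI
N = 200
indices = rng.random((N, 5))

# Standardize each index to zero mean, unit variance
z = (indices - indices.mean(axis=0)) / indices.std(axis=0)

# PCA via the covariance matrix: weights from the first principal component
cov = np.cov(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
pc1 = eigvecs[:, -1]                        # eigenvector of largest eigenvalue
weights = np.abs(pc1) / np.abs(pc1).sum()   # normalized to sum to 1

# Composite drought index: weighted combination of the standardized indices
cdi = z @ weights
```

Pixels would then be classified into moderate, severe, or extreme drought by thresholding `cdi`, and the classified maps fed to the BRANN yield model.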
Procedia PDF Downloads 156
258 Liraglutide Augments Extra Body Weight Loss after Sleeve Gastrectomy without Change in Intrahepatic and Intra-Pancreatic Fat in Obese Individuals: Randomized, Controlled Study
Authors: Ashu Rastogi, Uttam Thakur, Jimmy Pathak, Rajesh Gupta, Anil Bhansali
Abstract:
Introduction: Liraglutide is known to induce weight loss and metabolic benefits in obese individuals. However, its effects after sleeve gastrectomy are not known. Methods: People with obesity (BMI > 27.5 kg/m²) underwent laparoscopic sleeve gastrectomy (LSG). Subsequently, participants were randomized to receive either 0.6 mg liraglutide subcutaneously daily from week 6 post-surgery until week 24 (L-L group) or placebo (L-P group). Patients were assessed before surgery (baseline) and 6, 12, 18 and 24 weeks after surgery for height, weight, waist and hip circumference, BMI, body fat percentage, HbA1c, fasting C-peptide, fasting insulin, HOMA-IR, HOMA-β, and GLP-1 levels (after a standard OGTT). MRI of the abdomen was performed prior to surgery and at 24 weeks post-operatively for the estimation of intra-pancreatic and intra-hepatic fat content. Outcome measures: Primary outcomes were changes in the metabolic variables of fasting and stimulated GLP-1 levels, insulin, C-peptide, and plasma glucose levels. Secondary variables were indices of insulin resistance (HOMA-IR, Matsuda index) and pancreatic and hepatic steatosis. Results: Thirty-eight patients undergoing LSG were screened and 29 participants were enrolled. Two patients withdrew consent and one patient died of an acute coronary event; 26 patients were randomized and their data analysed. Median BMI was 40.73±3.66 and 46.25±6.51, and EBW was 49.225±11.14 and 651.48±4.85, in the L-P and L-L groups, respectively. Baseline FPG was 132±51.48 and 125±39.68; fasting insulin 21.5±13.99 and 13.15±9.20; fasting GLP-1 2.4±0.37 and 2.4±0.32; AUC GLP-1 340.78±44 and 332.32±44.1; HOMA-IR 7.0±4.2 and 4.42±4.5 in the L-P and L-L groups, respectively. EBW loss was 47±13.20 and 65.59±24.20 (p<0.05) in the placebo versus liraglutide group. However, we did not observe inter-group differences in metabolic parameters between the groups, in spite of significant intra-group changes after 6 months of LSG.
Intra-pancreatic fat prior to surgery was 3.21±1.7 and 2.2±0.9 (p=0.38) and decreased to 2.14±1.8 and 1.06±0.8 (p=0.25) at 6 months in the L-P and L-L groups, respectively. Similarly, intra-hepatic fat was 1.97±0.27 and 1.88±0.36 (p=0.361) at baseline and decreased to 1.14±0.44 and 1.36±0.47 (p=0.465) at 6 months in the L-P and L-L groups, respectively. Conclusion: Liraglutide augments extra body weight loss after sleeve gastrectomy. A decrease in intra-pancreatic and intra-hepatic fat is noticed after bariatric surgery, without additive benefit of liraglutide administration.
Keywords: sleeve gastrectomy, liraglutide, intra-pancreatic fat, insulin
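The insulin-resistance indices reported in this abstract (HOMA-IR, HOMA-β, Matsuda index) follow standard published formulas, sketched below. The function names are our own and the Matsuda inputs are illustrative; only the HOMA-IR example reuses baseline values quoted in the abstract.

```python
import math

def homa_ir(fpg_mg_dl, fasting_insulin_uU_ml):
    """HOMA-IR with glucose in mg/dL and insulin in microU/mL."""
    return fpg_mg_dl * fasting_insulin_uU_ml / 405.0

def homa_beta(fpg_mg_dl, fasting_insulin_uU_ml):
    """HOMA-beta (%) with glucose in mg/dL (valid for glucose > 63 mg/dL)."""
    return 360.0 * fasting_insulin_uU_ml / (fpg_mg_dl - 63.0)

def matsuda_index(g0, i0, g_mean, i_mean):
    """Matsuda whole-body insulin sensitivity index from OGTT values
    (fasting and mean glucose in mg/dL, insulin in microU/mL)."""
    return 10000.0 / math.sqrt(g0 * i0 * g_mean * i_mean)

# Baseline L-P group values from the abstract: FPG 132, fasting insulin 21.5
ir = homa_ir(132.0, 21.5)   # ~7.0, matching the reported HOMA-IR of 7.0
```

The agreement between the computed value and the reported baseline HOMA-IR of 7.0±4.2 suggests the conventional mg/dL formulation was used.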
Procedia PDF Downloads 192
257 Monitoring Large-Coverage Forest Canopy Height by Integrating LiDAR and Sentinel-2 Images
Authors: Xiaobo Liu, Rakesh Mishra, Yun Zhang
Abstract:
Continuous monitoring of forest canopy height with large coverage is essential for obtaining forest carbon stocks and emissions, quantifying biomass, analyzing vegetation coverage, and determining biodiversity. LiDAR can be used to collect accurate woody vegetation structure, such as canopy height. However, LiDAR’s coverage is usually limited because of its high cost and limited maneuverability, which constrains its use for dynamic and large-area forest canopy monitoring. On the other hand, optical satellite images, like Sentinel-2, have the ability to cover large forest areas with a high repeat rate, but they do not contain height information. Hence, exploring ways to integrate LiDAR data and Sentinel-2 images to enlarge the coverage of forest canopy height prediction and increase the prediction repeat rate has been an active research topic in the environmental remote sensing community. In this study, we explore the potential of training a Random Forest Regression (RFR) model and a Convolutional Neural Network (CNN) model, respectively, to develop two predictive models for predicting and validating the forest canopy height of the Acadia Forest in New Brunswick, Canada, with a 10 m ground sampling distance (GSD), for the years 2018 and 2021. Two 10 m airborne LiDAR-derived canopy height models, one for 2018 and one for 2021, are used as ground truth to train and validate the RFR and CNN predictive models. To evaluate the prediction performance of the trained RFR and CNN models, two new predicted canopy height maps (CHMs), one for 2018 and one for 2021, are generated using the trained models and 10 m Sentinel-2 images of 2018 and 2021, respectively. The two 10 m predicted CHMs from Sentinel-2 images are then compared with the two 10 m airborne LiDAR-derived canopy height models for accuracy assessment.
The validation results show that the mean absolute error (MAE) for 2018 is 2.93 m for the RFR model and 1.71 m for the CNN model, while the MAE for 2021 is 3.35 m for the RFR model and 3.78 m for the CNN model. These results demonstrate the feasibility of using the RFR and CNN models developed in this research for predicting large-coverage forest canopy height at 10 m spatial resolution and a high revisit rate.
Keywords: remote sensing, forest canopy height, LiDAR, Sentinel-2, artificial intelligence, random forest regression, convolutional neural network
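The accuracy assessment step, comparing a Sentinel-2-predicted CHM against the LiDAR-derived reference, reduces to per-pixel error statistics such as the MAE reported above. A small NumPy sketch with synthetic 10 m rasters (the noise level is illustrative, not the study's actual error):

```python
import numpy as np

rng = np.random.default_rng(42)

def chm_accuracy(predicted, truth):
    """Per-pixel MAE and RMSE between a predicted canopy height model
    and a LiDAR-derived reference, ignoring NaN (no-data) pixels."""
    mask = ~(np.isnan(predicted) | np.isnan(truth))
    err = predicted[mask] - truth[mask]
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    return mae, rmse

# Synthetic rasters: a LiDAR "truth" CHM (heights 0-30 m) and a prediction
# that deviates from it with roughly 2 m Gaussian noise
truth = rng.uniform(0.0, 30.0, size=(100, 100))
predicted = truth + rng.normal(0.0, 2.0, size=truth.shape)

mae, rmse = chm_accuracy(predicted, truth)
```

For Gaussian errors the MAE is systematically below the RMSE (by a factor of about 0.8), which is worth remembering when comparing MAE figures such as those above against RMSE values from other studies.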
Procedia PDF Downloads 90
256 A Ten-Year Rabies Exposure and Death Surveillance Data Analysis in Tigray Region, Ethiopia, 2023
Authors: Woldegerima G. Medhin, Tadele Araya
Abstract:
Background: Rabies is an acute viral encephalitis affecting mainly carnivores and insectivores, but it can affect any mammal. The case fatality rate is 100% once clinical signs appear. Rabies has a worldwide distribution in the continental regions of Asia and Africa. Globally, rabies is responsible for more than 61,000 human deaths annually; estimated annual human rabies mortality in Asia and Africa exceeds 35,172 and 21,476, respectively. In Ethiopia, approximately 2,900 people are estimated to die of rabies annually, and in the Tigray region approximately 98 people. The aim of this study is to analyze trends in, describe, and evaluate ten years of rabies data in Tigray, Ethiopia. Methods: We conducted a descriptive epidemiological study from 15-30 February 2023 of rabies exposure and death in humans by reviewing the health management information system reports from the Tigray Regional Health Bureau and dog population vaccination coverage from 2013 to 2022. We used case definitions: suspected cases are those bitten by dogs displaying clinical signs consistent with rabies, and confirmed cases are deaths from rabies at the time of exposure. Results: A total of 21,031 dog bite reports, 375 rabies deaths, and 18,222 post-exposure treatments for humans in the Tigray region were analyzed. Suspected rabies cases showed an increasing trend from 2013 to 2015 and from 2018 to 2019. The overall mortality rate was 19/1,000 in Tigray. The majority of suspected patients (45%) were aged <15 years. According to estimates by the Agriculture Bureau of the Tigray Region, about 12,000 owned and 2,500 stray dogs are present in the region, but yearly dog vaccination coverage remains low (50%). Conclusion: Rabies is a public health problem in the Tigray region. It is highly recommended to vaccinate individually owned dogs, and the concerned sectors should eliminate stray dogs.
The surveillance system should be strengthened to estimate the real magnitude of the problem and to launch prevention and control measures.
Keywords: rabies, virus, transmission, prevalence
Procedia PDF Downloads 71
255 Toxicity of PPCPs on Adapted Sludge Community
Authors: G. Amariei, K. Boltes, R. Rosal, P. Leton
Abstract:
Wastewater treatment plants (WWTPs) are supposed to hold an important place in the reduction of emerging contaminants, but they provide an environment with potential for the development and/or spread of adaptation, as bacteria are continuously mixed with contaminants at sub-inhibitory concentrations. Reviewing the literature, few data are available regarding the use of adapted bacteria forming an activated sludge community for toxicity assessment, and only individual validations have been performed. Therefore, the aim of this work was to study the toxicity of triclosan (TCS) and ibuprofen (IBU), individually and in binary combination, on adapted activated sludge (AS). For this purpose a battery of biomarkers was assessed, involving oxidative stress and cytotoxicity responses: glutathione-S-transferase (GST), catalase (CAT) and viable cells with FDA. In addition, we compared the toxic effects on adapted bacteria with those on unadapted bacteria from previous research. The adapted AS came from three continuous-flow AS laboratory systems: two systems received IBU and TCS individually, while the third received the binary combination, for 14 days. After adaptation, each bacterial culture condition was exposed to IBU, TCS and the combination for 12 h. The concentrations of IBU and TCS ranged over 0.5-4 mg/L and 0.012-0.1 mg/L, respectively. Batch toxicity experiments were performed using an Oxygraph system (Hansatech) to determine the activity of the CAT enzyme based on quantification of the oxygen production rate. A fluorimetric technique was applied as well, using a Fluoroskan Ascent FL (Thermo), to determine the activity of the GST enzyme with monochlorobimane-GSH as substrate, and to estimate the viable cells of the sludge by fluorescence staining with fluorescein diacetate (FDA). For the IBU-adapted sludge, CAT activity was increased at low concentrations of IBU, TCS and the mixture.
However, with increasing concentration the behavior differed: while IBU tended to stabilize CAT activity, TCS and the mixture decreased it. GST activity was significantly increased by TCS and the mixture; for IBU, no variation was observed. For the TCS-adapted sludge, no significant variation in CAT activity was observed, while GST activity was significantly decreased by all contaminants. For the mixture-adapted sludge, the behavior of CAT activity was similar to that of the IBU-adapted sludge; GST activity was decreased at all concentrations of IBU, while the presence of TCS and of the mixture increased GST activity. These findings were consistent with the cell viability evaluation, which clearly showed a variation in sludge viability. Our results suggest that, compared with unadapted bacteria, the adapted bacterial conditions play a relevant role in the toxicity behaviour towards activated sludge communities.
Keywords: adapted sludge community, mixture, PPCPs, toxicity
Procedia PDF Downloads 398
254 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence
Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács
Abstract:
The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among the market prices or the prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in statistical literature suggests that, similarly to the volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a necessity for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process from the family of fractional processes offers a promising path. We can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. We had to develop a fast algorithm to generate a valid and suitably large sample from the appropriate process to train the network. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of SCP extends beyond its immediate application. It also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios. 
In essence, deploying both SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.
Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility
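The sample-generation step described above can be sketched in a few lines. The following is a minimal fOU simulator: exact fractional Gaussian noise via Cholesky factorization of the increment covariance, followed by an Euler scheme for dX = -θ(X - μ)dt + σ dB^H. All parameter values are illustrative, and this O(n³) method is only a baseline; the fast algorithm the authors developed for large training sets is not specified here.

```python
import numpy as np

def fgn_cholesky(n, hurst, dt, rng):
    """Exact fractional Gaussian noise by Cholesky factorization of the
    fBm increment covariance (O(n^3); fine for short paths)."""
    k = np.arange(n)
    # Autocovariance of fBm increments over step dt at lag k
    gamma = 0.5 * dt**(2 * hurst) * (np.abs(k + 1)**(2 * hurst)
                                     + np.abs(k - 1)**(2 * hurst)
                                     - 2 * np.abs(k)**(2 * hurst))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for stability
    return L @ rng.standard_normal(n)

def simulate_fou(n=500, hurst=0.3, theta=2.0, mu=0.0, sigma=0.5,
                 dt=0.01, x0=1.0, seed=0):
    """Euler scheme for the fOU SDE dX = -theta (X - mu) dt + sigma dB^H."""
    rng = np.random.default_rng(seed)
    db = fgn_cholesky(n, hurst, dt, rng)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] - theta * (x[i] - mu) * dt + sigma * db[i]
    return x

path = simulate_fou()
```

A network trained to recover (hurst, theta, sigma) from such paths would consume batches of these trajectories as input sequences.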
Procedia PDF Downloads 117
253 Disaggregate Travel Behavior and Transit Shift Analysis for a Transit Deficient Metropolitan City
Authors: Sultan Ahmad Azizi, Gaurang J. Joshi
Abstract:
Urban transportation has come into the limelight in recent times due to deteriorating travel quality. The economic growth of India has driven a significant rise in private vehicle ownership in cities, whereas public transport systems have largely been ignored in metropolitan cities. Even though there is latent demand for public transport systems such as organized bus services, most metropolitan cities have an unsustainably low share of public transport. Unfortunately, Indian metropolitan cities have failed to maintain a balanced mode share across travel modes in the absence of timely introduction of mass transit systems of the required capacity and quality. As a result, personalized travel modes such as two-wheelers have become the principal modes of travel, causing significant environmental, safety, and health hazards to citizens. Of late, policy makers have realized the need to improve public transport systems in metro cities to sustain development. However, the challenge for transit planning authorities is to design a transit system that may attract people to switch from their existing, rather convenient mode of travel to the transit system, under the influence of household socio-economic characteristics and the given travel pattern. In this context, the fast-growing industrial city of Surat is taken up as a case study of the likely shift to bus transit. Deterioration of the public bus transport system after 1998 has led to tremendous growth in two-wheeler traffic on city roads. The inadequate and poor service quality of the present bus transit has failed to attract riders and correct the mode-use balance in the city. Disaggregate travel behavior for trip generation and travel mode choice has been studied for the West Adajan residential sector of the city. Mode-specific utility functions are calibrated in a multinomial logit framework for two-wheelers, cars, and auto rickshaws with respect to bus transit using SPSS.
Estimation of the shift to bus transit indicates that, on average, 30% of auto rickshaw users and nearly 5% of two-wheeler users are likely to shift to bus transit if service quality is improved. However, car users are not expected to shift to the bus transit system.
Keywords: bus transit, disaggregate travel behavior, mode choice behavior, public transport
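Once the mode-specific utility functions are calibrated, the choice probabilities follow from the standard multinomial logit form, a softmax over the systematic utilities. A minimal sketch, using hypothetical utility values rather than the coefficients calibrated in the study:

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit choice probabilities: softmax over the
    systematic utilities of the available modes."""
    m = max(utilities.values())  # subtract max for numerical stability
    exp_u = {mode: math.exp(u - m) for mode, u in utilities.items()}
    total = sum(exp_u.values())
    return {mode: e / total for mode, e in exp_u.items()}

# Hypothetical systematic utilities V = ASC + b_time*time + b_cost*cost
# (illustrative values only; the paper's SPSS-calibrated coefficients
# are not reproduced here)
V = {"bus": -1.2, "two_wheeler": -0.4, "car": -1.8, "auto_rickshaw": -0.9}
P = mnl_probabilities(V)
```

A predicted shift to bus transit is then obtained by recomputing these probabilities after raising the bus utility to reflect improved service quality.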
Procedia PDF Downloads 260
252 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks
Authors: Fazıl Gökgöz, Fahrettin Filiz
Abstract:
Load forecasting has become crucial in recent years and is now a popular topic in the forecasting area. Many different power forecasting models have been tried out for this purpose. Electricity load forecasting is necessary for energy policies and for healthy, reliable grid systems. Effective power forecasting of renewable energy load helps decision makers minimize the costs of electric utilities and power plants. Forecasting tools are required that can predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. In this study, we present models for predicting renewable energy loads based on deep neural networks, especially the Long Short-Term Memory (LSTM) algorithm. Deep learning allows multiple layers of models to learn representations of data. LSTM networks are able to store information over long periods of time. Deep learning models have recently been used to forecast renewable energy sources, such as predicting wind and solar power. Historical load and weather information represent the most important input variables for power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. The models use publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies have been carried out with these data via a deep neural network approach, including the LSTM technique, for Turkish electricity markets. 432 different models were created by varying the layer count, cell count, and dropout. The adaptive moment estimation (ADAM) algorithm is used for training as a gradient-based optimizer instead of stochastic gradient descent (SGD). ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared according to MAE (Mean Absolute Error) and MSE (Mean Squared Error).
The best MAE results out of the 432 tested models are 0.66, 0.74, 0.85, and 1.09. The forecasting performance of the proposed LSTM models gives successful results compared to the literature.
Keywords: deep learning, long short term memory, energy, renewable energy load forecasting
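The ADAM update the abstract contrasts with SGD can be written in a few lines: exponential moving averages of the gradient and its square, bias correction, then an adaptive step. A minimal sketch on a toy quadratic (illustrative only, not the paper's training setup):

```python
import math

def adam_step(x, g, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM update: exponential moving averages of the gradient (m)
    and squared gradient (v), bias-corrected, then an adaptive step."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1**t)        # bias correction for the first moment
    v_hat = v / (1 - b2**t)        # bias correction for the second moment
    x = x - lr * m_hat / (math.sqrt(v_hat) + eps)
    return x, m, v

# Minimize f(x) = (x - 3)^2, whose gradient is g = 2(x - 3)
x, m, v = 10.0, 0.0, 0.0
for t in range(1, 501):
    g = 2.0 * (x - 3.0)
    x, m, v = adam_step(x, g, m, v, t)
```

The per-coordinate scaling by the second-moment estimate is what typically gives ADAM its faster convergence over plain SGD on poorly scaled problems.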
Procedia PDF Downloads 263
251 Conflation Methodology Applied to Flood Recovery
Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong
Abstract:
Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being caused by nuisance flooding, and its long-term effects on communities, are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (&FR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The &FR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The &FR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The &FR model is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distributions' means, without the additional information provided by each individual distribution's variance.
When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources, severe flooding events and nuisance flooding events.
Keywords: community resilience, conflation, flood risk, nuisance flooding
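The conflation step described above, a normalized product of the input probability density functions, can be sketched numerically. The example below conflates two normal densities on a grid; the input parameters are illustrative placeholders, not the recovery-time distributions fitted in the study. For normals, the conflated mean reduces to the precision-weighted (inverse-variance-weighted) mean, which is why the result favors the distribution with the smaller variance:

```python
import numpy as np

def conflate_normals(mu1, s1, mu2, s2, npts=20001):
    """Numerical conflation: normalized product of two normal pdfs on a
    grid, returning the grid, the conflated density, and its mean."""
    lo = min(mu1, mu2) - 6 * max(s1, s2)
    hi = max(mu1, mu2) + 6 * max(s1, s2)
    grid = np.linspace(lo, hi, npts)
    dx = grid[1] - grid[0]
    pdf1 = np.exp(-0.5 * ((grid - mu1) / s1)**2) / (s1 * np.sqrt(2 * np.pi))
    pdf2 = np.exp(-0.5 * ((grid - mu2) / s2)**2) / (s2 * np.sqrt(2 * np.pi))
    prod = pdf1 * pdf2
    prod /= prod.sum() * dx                 # normalize the product density
    mean = (grid * prod).sum() * dx         # conflated mean
    return grid, prod, mean

# Illustrative inputs: a wide "severe event" recovery estimate and a
# tighter "nuisance event" estimate (hypothetical values)
_, _, m = conflate_normals(mu1=10.0, s1=4.0, mu2=6.0, s2=1.0)
```

Note the conflated mean lands much closer to the low-variance input, which is exactly the variance-weighted behavior the &FR model exploits.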
Procedia PDF Downloads 102
250 Preliminary Evaluation of Decommissioning Wastes for the First Commercial Nuclear Power Reactor in South Korea
Authors: Kyomin Lee, Joohee Kim, Sangho Kang
Abstract:
The first commercial nuclear power reactor in South Korea, Kori Unit 1, a 587 MWe pressurized water reactor that started operation in 1978, was permanently shut down in June 2017 without an additional operating license extension. Kori Unit 1 is scheduled to become the first nuclear power unit in South Korea to enter the decommissioning phase. In this study, a preliminary evaluation of the decommissioning wastes for Kori Unit 1 was performed through the following series of steps. First, the plant inventory is investigated based on various documents (i.e., equipment/component lists, construction records, general arrangement drawings). Second, the radiological conditions of systems, structures, and components (SSCs) are established to estimate the amount of radioactive waste by waste classification. Third, the waste management strategies for Kori Unit 1, including waste packaging, are established. Fourth, the proper decontamination and dismantling (D&D) technologies are selected considering various factors. Finally, the amount of decommissioning waste by classification for Kori Unit 1 is estimated using the DeCAT program, which was developed by KEPCO-E&C for decommissioning cost estimation. The preliminary evaluation results show that the expected amounts of decommissioning radwastes were less than about 2% and 8% of the total waste generated (i.e., the sum of clean waste and radwaste) before and after waste processing, respectively, and that the majority of contaminated material was carbon or alloy steel and stainless steel. In addition, within the range of available information, the evaluation results were compared with data from various decommissioning experiences and international/national decommissioning studies. The comparison shows that the radioactive waste amounts from the Kori Unit 1 decommissioning were much less than those from plants decommissioned in the U.S.
and were comparable to those from plants in Europe. This result stems from differences in disposal costs and clearance criteria (i.e., free release levels) between the U.S. and non-U.S. countries. The preliminary evaluation performed using the methodology established in this study will provide useful information for decommissioning planning, covering the decommissioning schedule and the waste management strategy, including the transportation, packaging, handling, and disposal of radioactive wastes.
Keywords: characterization, classification, decommissioning, decontamination and dismantling, Kori 1, radioactive waste
Procedia PDF Downloads 208