Search results for: offline estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2065

265 Discharge Estimation in a Two Flow Braided Channel Based on Energy Concept

Authors: Amiya Kumar Pati, Spandan Sahu, Kishanjit Kumar Khatua

Abstract:

Rivers, a form of open channel flow, are our main source of water, and open channel flow presents many complex phenomena that need to be tackled, such as critical flow conditions, boundary shear stress, and depth-averaged velocity. The development of society depends, more or less solely, upon the flow of rivers. Rivers are major sources of sediments and specific ingredients that are essential for human beings. A river consisting of small, shallow channels sometimes divides and recombines numerous times because of slow water flow or built-up sediments; the pattern formed during this process resembles the strands of a braid. Braided streams form where the sediment load is so heavy that some of the sediments are deposited as shifting islands. Braided rivers often exist near mountainous regions and typically carry coarse-grained, heterogeneous sediments down a fairly steep gradient. In this paper, the apparent shear stress formulae were suitably modified, and the Energy Concept Method (ECM) was applied for the prediction of discharges at the junction of a two-flow braided compound channel; the ECM had not previously been applied to estimating discharges in braided channels. The energy loss in the channels is analyzed based on mechanical analysis. For estimating the total discharge, the cross-section of the channel is divided into two sub-areas: the main channel below the bank-full level and the region above the bank-full level. The experimental data are compared with a wide range of theoretical data available in the published literature to verify this model. The accuracy of this approach is also compared with the Divided Channel Method (DCM). Error analysis shows that the relative error is lower for data-sets with smooth floodplains than for those with rough floodplains. Comparisons with other models indicate that the present method has reasonable accuracy for engineering purposes.
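As a minimal illustration of the Divided Channel Method (DCM) against which the ECM is benchmarked, the sketch below sums Manning-equation discharges over the two sub-areas named in the abstract. The geometry, roughness, and slope values are hypothetical placeholders, not the paper's experimental data.

```python
import math

def manning_discharge(area, wetted_perimeter, n, slope):
    """Discharge of one subsection via Manning's equation (SI units)."""
    r = area / wetted_perimeter                       # hydraulic radius (m)
    return (1.0 / n) * area * r ** (2.0 / 3.0) * math.sqrt(slope)

# Divided Channel Method: treat the main channel (below bank-full) and the
# region above bank-full as independent subsections and sum their flows.
# All numbers below are illustrative, not measured values from the study.
subsections = [
    # (area m^2, wetted perimeter m, Manning's n)
    (1.20, 2.8, 0.010),   # main channel below bank-full level
    (0.45, 3.6, 0.014),   # region above bank-full level (floodplain)
]
bed_slope = 0.0019
total_q = sum(manning_discharge(a, p, n, bed_slope) for a, p, n in subsections)
print(f"DCM total discharge: {total_q:.4f} m^3/s")
```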

Keywords: critical flow, energy concept, open channel flow, sediment, two-flow braided compound channel

Procedia PDF Downloads 126
264 A Simplified Method to Assess the Damage of an Immersed Cylinder Subjected to Underwater Explosion

Authors: Kevin Brochard, Herve Le Sourne, Guillaume Barras

Abstract:

The design of a submarine’s hull is crucial for its operability and crew’s safety, but it is also complex. Indeed, engineers need to balance lightness, acoustic discretion, and resistance to both immersion pressure and environmental attacks. Underwater explosions represent a first-rate threat to the integrity of the hull, whose behavior needs to be properly analyzed. The presented work focuses on the development of a simplified analytical method to study the structural response of a deeply immersed cylinder subjected to an underwater explosion. This method aims to provide engineers with a quick estimation of the resulting damage, allowing them to simulate a large number of explosion scenarios. The present research relies on the so-called plastic string on plastic foundation model: a two-dimensional boundary value problem for a cylindrical shell is converted into an equivalent one-dimensional problem of a plastic string resting on a non-linear plastic foundation. For this purpose, equivalence parameters are defined and evaluated by making assumptions on the shape of the displacement and velocity fields in the cross-sectional plane of the cylinder. Closed-form solutions for the deformation and velocity profile of the shell are obtained for explosive loading and compare well with numerical and experimental results. However, the plastic-string model has not yet been adapted for a cylinder in immersion subjected to explosive loading; the effects of fluid-structure interaction have to be taken into account. Moreover, when an underwater explosion occurs, several pressure waves, called secondary waves, are emitted by the gas bubble pulsations. The corresponding loads, which may produce significant damage to the cylinder, must also be accounted for. The analytical developments carried out to solve the above problem of a shock wave impacting a cylinder, considering fluid-structure interaction, will be presented for an unstiffened cylinder. The resulting deformations are compared to experimental and numerical results for different shock factors and different standoff distances.

Keywords: immersed cylinder, rigid plastic material, shock loading, underwater explosion

Procedia PDF Downloads 337
263 Structure-Guided Optimization of Sulphonamide as Gamma–Secretase Inhibitors for the Treatment of Alzheimer’s Disease

Authors: Vaishali Patil, Neeraj Masand

Abstract:

In older people, Alzheimer’s disease (AD) is turning out to be a lethal disease. According to the amyloid hypothesis, aggregation of the amyloid β–protein (Aβ), particularly its 42-residue variant (Aβ42), plays a direct role in the pathogenesis of AD. Aβ is generated through sequential cleavage of amyloid precursor protein (APP) by β–secretase (BACE) and γ–secretase (GS). Thus, in the treatment of AD, γ-secretase modulators (GSMs) are potentially disease-modifying, as they selectively lower pathogenic Aβ42 levels by shifting the enzyme cleavage sites without inhibiting γ–secretase activity. This possibly avoids known adverse effects observed with complete inhibition of the enzyme complex. Virtual screening, via a drug-like ADMET filter, QSAR, and molecular docking analyses, has been utilized to identify novel γ–secretase modulators with a sulphonamide nucleus. Based on the QSAR analyses and docking scores, some novel analogs have been synthesized. The results obtained by in silico studies have been validated by performing in vivo analysis. In the first step, behavioral assessment was carried out using the scopolamine-induced amnesia methodology. Later, the same series was evaluated for neuroprotective potential against the oxidative stress induced by scopolamine. Biochemical estimation was performed to evaluate the changes in biochemical markers of Alzheimer’s disease such as lipid peroxidation (LPO), glutathione reductase (GSH), and catalase. The scopolamine-induced amnesia model showed increased acetylcholinesterase (AChE) levels, and the inhibitory effect of the test compounds on brain AChE levels was evaluated. In all the studies, donepezil (dose: 50 µg/kg) was used as the reference drug. Reduced AChE activity was shown by compounds 3f, 3c, and 3e. In the later stage, the most potent compounds were evaluated for their Aβ42 inhibitory profile. It can be hypothesized that this series of alkyl-aryl sulphonamides exhibits anti-AD activity by inhibition of the acetylcholinesterase (AChE) enzyme as well as inhibition of plaque formation on prolonged dosage, along with neuroprotection from oxidative stress.

Keywords: gamma-secretase inhibitors, Alzheimer's disease, sulphonamides, QSAR

Procedia PDF Downloads 255
262 Association of the Frequency of Dairy Product Consumption by Students with Health Parameters

Authors: Radyah Ivan, Khanferyan Roman

Abstract:

Milk and dairy products are an important component of a balanced diet. Dairy products represent a heterogeneous food group of solid, semi-solid, and liquid, fermented or non-fermented foods, each differing in nutrients such as fat and micronutrient content. A deficiency of milk and dairy products has an impact on the main health parameters of various age groups of the population. The goal of this study was to analyze the frequency of consumption of milk and various groups of dairy products by students and its association with their body mass index (BMI), body composition, and other physiological parameters. 388 full-time students of the Medical Institute of RUDN University (185 male and 203 female, average age 20.4±2.2 and 21.9±1.7 years, respectively) took part in the cross-sectional study. Anthropometric measurements and the estimation of BMI and body composition were analyzed by bioelectrical impedance analysis. The frequency of consumption of milk and various groups of dairy products was studied using a modified food-frequency questionnaire. The questionnaire data demonstrated that only 11% of respondents consume milk daily; 5%, cottage cheese; 4% and 1%, natural and flavored fermented milk products, respectively; and 4%, hard cheese. The study demonstrated that about 16% of the respondents did not consume milk at all over the past month, about one third did not consume cottage cheese, 22% did not consume natural sour-milk products, and 18% did not consume sour-milk products with various fillers; hard cheeses and pickled cheeses were not consumed by 9% and 26% of respondents, respectively. Gender differences in consumer preferences were revealed: female students are less likely than male students to consume cream, sour cream, soft cheese, and milk. Among female students, the prevalence of overweight was higher (25%) than among male students (19%). A modest inverse relationship was demonstrated between daily milk and dairy product intake and BMI and body composition parameters (r=-0.61 and r=-0.65). The study showed insufficient daily consumption of milk and dairy products by students and demonstrated a relationship between low and infrequent consumption of dairy products and the main parameters of physical development and health.
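For readers unfamiliar with the correlation analysis reported above, the following sketch computes a Pearson coefficient between dairy intake and BMI on synthetic data; the variable names and the toy relationship are assumptions for illustration, not the study's survey data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
# Synthetic stand-in for the survey: weekly dairy servings and BMI for 388
# students; the real study used questionnaires and bioimpedance analysis.
dairy_servings = rng.poisson(6, 388).astype(float)
bmi = 27.0 - 0.45 * dairy_servings + rng.normal(0.0, 1.5, 388)

r, p = pearsonr(dairy_servings, bmi)
print(f"Pearson r = {r:.2f} (p = {p:.1e})")  # an inverse relationship, as reported
```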

Keywords: frequency of consumption, milk, dairy products, physical development, nutrition, body mass index

Procedia PDF Downloads 36
261 Development of an Implicit Coupled Partitioned Model for the Prediction of the Behavior of a Flexible Slender Shaped Membrane in Interaction with Free Surface Flow under the Influence of a Moving Flotsam

Authors: Mahtab Makaremi Masouleh, Günter Wozniak

Abstract:

This research is part of an interdisciplinary project promoting the design of a light, temporarily installable textile defence system against floods. In case river water levels increase abruptly, especially in winter time, one can expect massive extra load on a textile protective structure in terms of impact as a result of floating debris and even tree trunks. Estimation of this impulsive force on such structures is of great importance, as it can ensure the reliability of the design in critical cases. This fact provides the motivation for the numerical analysis of a fluid-structure interaction application comprising a flexible, slender-shaped membrane and free-surface water flow, where an accelerated heavy flotsam approaches the membrane. In this context, the analysis of both the behavior of the flexible membrane and its interaction with the moving flotsam is conducted with the finite-element-based explicit and implicit solvers of Abaqus, available as products of SIMULIA software. On the other hand, how free-surface water flow behaves in response to moving structures has been investigated using the finite volume solver of Star CCM+ from Siemens PLM Software. An automatic communication tool (CSE, SIMULIA Co-Simulation Engine) and the implementation of an effective partitioned strategy in the form of an implicit coupling algorithm make it possible for the partitioned domains to be strongly interconnected. The applied procedure ensures stability and convergence in the solution of these complicated issues, albeit with high computational cost; a further complexity of this study stems from the mesh criteria in the fluid domain, where the two structures approach each other. This contribution presents the approaches for the establishment of a convergent numerical solution and compares the results with experimental findings.

Keywords: co-simulation, flexible thin structure, fluid-structure interaction, implicit coupling algorithm, moving flotsam

Procedia PDF Downloads 389
260 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea

Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim

Abstract:

Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. These blooms have been accelerated by eutrophication from human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict ocean algae concentration with bio-optical algorithms applied to satellite color images. However, accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represent the water environment of the sea around Korea. The method employed GOCI images of the water-leaving radiances centered at 443 nm, 490 nm, and 660 nm, as well as observed weather data (i.e., humidity, temperature, and atmospheric pressure), as a database to capture the optical characteristics of algae and train a deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and an artificial neural network (ANN) was then used to estimate the concentration of algae from the extracted features. For training the deep learning model, a backpropagation learning strategy was developed. The established methods were tested and compared with the performance of the GOCI data processing system (GDPS), which is based on standard image processing and optical algorithms. The model estimated algae concentration better than the GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration in spite of the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing. Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
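A minimal sketch of the CNN-feature-extraction plus ANN-regression pipeline the abstract describes, in PyTorch. The layer sizes, patch size, and the three weather scalars appended to the image features are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class AlgaeEstimator(nn.Module):
    """CNN feature extractor followed by an ANN regression head, mirroring
    the CNN -> ANN pipeline in the abstract. Three input bands (443, 490,
    660 nm) and three weather scalars are assumptions for this sketch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                 # -> (N, 32, 1, 1)
        )
        self.regressor = nn.Sequential(
            nn.Linear(32 + 3, 16), nn.ReLU(),        # image features + weather
            nn.Linear(16, 1),                        # algae concentration (mg/m^3)
        )

    def forward(self, image, weather):
        f = self.features(image).flatten(1)
        return self.regressor(torch.cat([f, weather], dim=1))

model = AlgaeEstimator()
patch = torch.randn(4, 3, 32, 32)    # 4 synthetic GOCI patches, 3 bands
weather = torch.randn(4, 3)          # humidity, temperature, pressure
print(model(patch, weather).shape)   # torch.Size([4, 1])
```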

Keywords: deep learning, algae concentration, remote sensing, satellite

Procedia PDF Downloads 183
259 Assessment of Climate Change Impacts on the Hydrology of Upper Guder Catchment, Upper Blue Nile

Authors: Fikru Fentaw Abera

Abstract:

Climate change alters regional hydrologic conditions and results in a variety of impacts on water resource systems. Such hydrologic changes will affect almost every aspect of human well-being. The goal of this paper is to assess the impact of climate change on the hydrology of the Upper Guder catchment, located in the northwest of Ethiopia. GCM-derived scenario experiments (the HadCM3 A2a and B2a SRES emission scenarios) were used for the climate projection. The statistical downscaling model (SDSM) was used to generate possible future local meteorological variables in the study area. The downscaled data were then used as input to the Soil and Water Assessment Tool (SWAT) model to simulate the corresponding future streamflow regime in the Upper Guder catchment of the Abay River Basin. A semi-distributed hydrological model, SWAT, was developed, and Generalized Likelihood Uncertainty Estimation (GLUE) was utilized for uncertainty analysis; GLUE is linked with SWAT in the Calibration and Uncertainty Program known as SWAT-CUP. The three benchmark periods simulated for this study were the 2020s, 2050s, and 2080s. The time series generated by the HadCM3 A2a and B2a runs and SDSM indicate a significant increasing trend in maximum and minimum temperature values and a slight increasing trend in precipitation for both A2a and B2a emission scenarios at both the Gedo and Tikur Inch stations for all three benchmark periods. The hydrologic impact analysis was made with the downscaled temperature and precipitation time series as input to the hydrological model SWAT for both A2a and B2a emission scenarios. The model output shows that there may be an annual increase in flow volume of up to 35% for both emission scenarios in the three future benchmark periods. All seasons show an increase in flow volume for both A2a and B2a emission scenarios for all time horizons. Potential evapotranspiration in the catchment will also increase annually, on average by 3-15% for the 2020s and 7-25% for the 2050s and 2080s, for both A2a and B2a emission scenarios.

Keywords: climate change, Guder sub-basin, GCM, SDSM, SWAT, SWAT-CUP, GLUE

Procedia PDF Downloads 364
258 Effective Medium Approximations for Modeling Ellipsometric Responses from Zinc Dialkyldithiophosphates (ZDDP) Tribofilms Formed on Sliding Surfaces

Authors: Maria Miranda-Medina, Sara Salopek, Andras Vernes, Martin Jech

Abstract:

Sliding lubricated surfaces induce the formation of tribofilms that reduce friction and wear and prevent large-scale damage of contact parts. Engine oils and lubricants use antiwear and antioxidant additives such as zinc dialkyldithiophosphate (ZDDP), from which protective tribofilms are formed by degradation. ZDDP tribofilms are described as a two-layer structure composed of inorganic polymer material: on the top surface, the long-chain polyphosphate is a zinc phosphate, and in the bulk, the short-chain polyphosphate is a mixed Fe/Zn phosphate with a gradient concentration. The polyphosphate chains are partially adherent to the steel surface through a sulfide and work as anti-wear pads. In this contribution, ZDDP tribofilms formed on gray cast iron surfaces are studied. The tribofilms were generated in a reciprocating sliding tribometer with a piston ring-cylinder liner configuration. Fully formulated oil of SAE grade 5W-30 was used as the lubricant during two tests, at 40 Hz and 50 Hz. For the estimation of the tribofilm thicknesses, spectroscopic ellipsometry was used due to its high accuracy and non-destructive nature. Ellipsometry works on an optical principle whereby the change in polarisation of light reflected by the surface is associated with the refractive index of the surface material or with the thickness of the layer deposited on top. Ellipsometric responses derived from tribofilms are modelled by effective medium approximation (EMA), which includes the refractive indices of the involved materials, the homogeneity of the film, and its thickness. The material composition was obtained from X-ray photoelectron spectroscopy studies, where the presence of ZDDP, O, and C was confirmed. From the EMA models, it was concluded that the tribofilms formed at 40 Hz are thicker and more homogeneous than the ones formed at 50 Hz. In addition, the refractive indices of the individual materials are mixed to derive an effective refractive index that describes the optical composition of the tribofilm and exhibits a maximum response in the UV range, characteristic of glassy, semitransparent films.
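As an illustration of how a two-phase effective medium approximation mixes optical constants into an effective one, here is a sketch of the classic Bruggeman EMA. The abstract does not state which EMA variant the authors used, and the permittivity values below are placeholders, not measured tribofilm data.

```python
import numpy as np

def bruggeman_ema(eps1, eps2, f1):
    """Two-phase Bruggeman effective medium approximation: solves
    f1*(e1-e)/(e1+2e) + (1-f1)*(e2-e)/(e2+2e) = 0 for the effective
    complex permittivity e, which reduces to a quadratic in e."""
    b = (3.0 * f1 - 1.0) * eps1 + (3.0 * (1.0 - f1) - 1.0) * eps2
    roots = np.roots([2.0, -b, -eps1 * eps2])
    # keep the physically meaningful root (non-negative imaginary part)
    return roots[np.argmax(roots.imag)]

# Illustrative complex permittivities (eps = n^2 for refractive index n);
# the constituent values and fill fraction are hypothetical placeholders.
eps_phosphate = (1.58 + 0.02j) ** 2
eps_oxide = (2.90 + 0.40j) ** 2
eps_eff = bruggeman_ema(eps_phosphate, eps_oxide, f1=0.7)
print("effective refractive index:", np.sqrt(eps_eff))
```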

Keywords: effective medium approximation, reciprocating sliding tribometer, spectroscopic ellipsometry, zinc dialkyldithiophosphate

Procedia PDF Downloads 251
257 Yield Loss Estimation Using Multiple Drought Severity Indices

Authors: Sara Tokhi Arab, Rozo Noguchi, Tofeal Ahamed

Abstract:

Drought is a natural disaster that occurs in a region due to a lack of precipitation and high temperatures over a continuous period or in a single season as a consequence of climate change. Precipitation deficits and prolonged high temperatures mostly affect the agricultural sector, water resources, socioeconomics, and the environment. Consequently, drought causes agricultural product loss, food shortage, famine, migration, and natural resource degradation in a region. Agriculture is the first sector affected by drought. Therefore, it is important to develop an agricultural drought risk and loss assessment to mitigate the drought impact on the agriculture sector. In this context, the main purpose of this study was to assess yield loss using a composite drought index (CDI) in the drought-affected vineyards. In this study, the CDI was developed for the years 2016 to 2020 by combining five indices: the vegetation condition index (VCI), temperature condition index (TCI), deviation of NDVI from the long-term mean (NDVI DEV), normalized difference moisture index (NDMI), and precipitation condition index (PCI). Moreover, a quantitative principal component analysis (PCA) approach was used to assign a weight to each input parameter, and the weighted indices were then combined into one composite drought index. Finally, Bayesian regularized artificial neural networks (BRANNs) were used to evaluate the yield variation in each affected vineyard. The composite drought index results indicated that moderate to severe droughts were observed across Kabul Province during 2016 and 2018, and that no vineyard was in extreme drought condition; therefore, only the severe and moderate conditions were considered. According to the BRANN results, R = 0.87 and R = 0.94 in severe drought conditions for 2016 and 2018, and R = 0.85 and R = 0.91 in moderate drought conditions for 2016 and 2018, respectively. Within the two drought years, there was a significant yield deficit in the vineyards of Kabul Province. According to the findings, 2018 had the highest rate of loss, almost -7 ton/ha, whereas in 2016 the loss rate was about -1.2 ton/ha. This research will support stakeholders in identifying drought-affected vineyards and will support farmers during severe drought.
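A compact sketch of the PCA-based weighting step described above: weights are taken from the first principal component of the five input indices and normalized before combining them into one CDI. The synthetic data and this particular weighting rule are assumptions; the paper's exact procedure may differ in detail.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Rows = pixels/vineyards, columns = the five input indices named in the
# abstract (VCI, TCI, NDVI_DEV, NDMI, PCI); the values here are synthetic.
indices = rng.random((500, 5))

# Derive one weight per index from the first principal component, then
# normalize so the weights sum to 1 (a common PCA-weighting scheme).
pca = PCA(n_components=1).fit(indices)
weights = np.abs(pca.components_[0])
weights /= weights.sum()

cdi = indices @ weights          # composite drought index per pixel
print("weights:", np.round(weights, 3))
print("CDI range:", cdi.min(), "to", cdi.max())
```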

Keywords: grapes, composite drought index, yield loss, satellite remote sensing

Procedia PDF Downloads 157
256 Liraglutide Augments Extra Body Weight Loss after Sleeve Gastrectomy without Change in Intrahepatic and Intra-Pancreatic Fat in Obese Individuals: Randomized, Controlled Study

Authors: Ashu Rastogi, Uttam Thakur, Jimmy Pathak, Rajesh Gupta, Anil Bhansali

Abstract:

Introduction: Liraglutide is known to induce weight loss and metabolic benefits in obese individuals. However, its effects after sleeve gastrectomy are not known. Methods: People with obesity (BMI > 27.5 kg/m²) underwent laparoscopic sleeve gastrectomy (LSG). Subsequently, participants were randomized to receive either 0.6 mg liraglutide subcutaneously daily from 6 weeks post-surgery, continued until 24 weeks (L-L group), or placebo (L-P group). Patients were assessed before surgery (baseline) and 6, 12, 18, and 24 weeks after surgery for height, weight, waist and hip circumference, BMI, body fat percentage, HbA1c, fasting C-peptide, fasting insulin, HOMA-IR, HOMA-β, and GLP-1 levels (after a standard OGTT). Abdominal MRI was performed prior to surgery and at 24 weeks postoperatively for the estimation of intra-pancreatic and intra-hepatic fat content. Outcome measures: Primary outcomes were changes in the metabolic variables of fasting and stimulated GLP-1 levels, insulin, C-peptide, and plasma glucose levels. Secondary variables were indices of insulin resistance (HOMA-IR, Matsuda index) and pancreatic and hepatic steatosis. Results: Thirty-eight patients undergoing LSG were screened and 29 participants were enrolled. Two patients withdrew consent and one patient died of an acute coronary event; 26 patients were randomized and their data analysed. Median BMI was 40.73±3.66 and 46.25±6.51, and EBW was 49.225±11.14 and 651.48±4.85, in the L-P and L-L groups, respectively. Baseline FPG was 132±51.48 and 125±39.68; fasting insulin 21.5±13.99 and 13.15±9.20; fasting GLP-1 2.4±0.37 and 2.4±0.32; AUC GLP-1 340.78±44 and 332.32±44.1; and HOMA-IR 7.0±4.2 and 4.42±4.5 in the L-P and L-L groups, respectively. EBW loss was 47±13.20 and 65.59±24.20 (p<0.05) in the placebo versus liraglutide groups. However, we did not observe any inter-group difference in metabolic parameters in spite of significant intra-group changes after 6 months of LSG. Intra-pancreatic fat prior to surgery was 3.21±1.7 and 2.2±0.9 (p=0.38) and decreased to 2.14±1.8 and 1.06±0.8 (p=0.25) at 6 months in the L-P and L-L groups, respectively. Similarly, intra-hepatic fat was 1.97±0.27 and 1.88±0.36 (p=0.361) at baseline and decreased to 1.14±0.44 and 1.36±0.47 (p=0.465) at 6 months in the L-P and L-L groups, respectively. Conclusion: Liraglutide augments extra body weight loss after sleeve gastrectomy. A decrease in intra-pancreatic and intra-hepatic fat is noticed after bariatric surgery, without an additive benefit of liraglutide administration.

Keywords: sleeve gastrectomy, liraglutide, intra-pancreatic fat, insulin

Procedia PDF Downloads 193
255 Monitoring Large-Coverage Forest Canopy Height by Integrating LiDAR and Sentinel-2 Images

Authors: Xiaobo Liu, Rakesh Mishra, Yun Zhang

Abstract:

Continuous monitoring of forest canopy height with large coverage is essential for obtaining forest carbon stocks and emissions, quantifying biomass, analyzing vegetation coverage, and determining biodiversity. LiDAR can be used to collect accurate woody vegetation structure, such as canopy height. However, LiDAR’s coverage is usually limited because of its high cost and limited maneuverability, which constrains its use for dynamic and large-area forest canopy monitoring. On the other hand, optical satellite images, like Sentinel-2, have the ability to cover large forest areas with a high repeat rate, but they do not carry height information. Hence, exploring the solution of integrating LiDAR data and Sentinel-2 images to enlarge the coverage of forest canopy height prediction and increase the prediction repeat rate has been an active research topic in the environmental remote sensing community. In this study, we explore the potential of training a Random Forest Regression (RFR) model and a Convolutional Neural Network (CNN) model, respectively, to develop two predictive models for predicting and validating the forest canopy height of the Acadia Forest in New Brunswick, Canada, at a 10 m ground sampling distance (GSD), for the years 2018 and 2021. Two 10 m airborne LiDAR-derived canopy height models, one for 2018 and one for 2021, are used as ground truth to train and validate the RFR and CNN predictive models. To evaluate the prediction performance of the trained RFR and CNN models, two new predicted canopy height maps (CHMs), one for 2018 and one for 2021, are generated using the trained models and 10 m Sentinel-2 images of 2018 and 2021, respectively. The two 10 m predicted CHMs from Sentinel-2 images are then compared with the two 10 m airborne LiDAR-derived canopy height models for accuracy assessment. The validation results show that the mean absolute error (MAE) for 2018 is 2.93 m for the RFR model and 1.71 m for the CNN model, while the MAE for 2021 is 3.35 m for the RFR model and 3.78 m for the CNN model. These results demonstrate the feasibility of using the RFR and CNN models developed in this research for predicting large-coverage forest canopy height at 10 m spatial resolution and a high revisit rate.
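A minimal sketch of the RFR half of the workflow: predicting canopy height from per-pixel Sentinel-2 band values and scoring with MAE, as in the abstract. The band count and the synthetic height relation are assumptions for illustration, not the Acadia Forest data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Synthetic stand-in: 10 Sentinel-2 band values per 10 m pixel (X) and a
# LiDAR-derived canopy height in metres (y); the relation is a toy one.
X = rng.random((2000, 10))
y = 25.0 * X[:, 3] + 5.0 * X[:, 7] + rng.normal(0.0, 1.0, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rfr = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"MAE: {mean_absolute_error(y_te, rfr.predict(X_te)):.2f} m")
```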

Keywords: remote sensing, forest canopy height, LiDAR, Sentinel-2, artificial intelligence, random forest regression, convolutional neural network

Procedia PDF Downloads 92
254 A Ten-Year Rabies Exposure and Death Surveillance Data Analysis in Tigray Region, Ethiopia, 2023

Authors: Woldegerima G. Medhin, Tadele Araya

Abstract:

Background: Rabies is an acute viral encephalitis affecting mainly carnivores and insectivorous bats, but it can affect any mammal. The case fatality rate is 100% once clinical signs appear. Rabies has a worldwide distribution, including the continental regions of Asia and Africa. Globally, rabies is responsible for more than 61,000 human deaths annually. Estimated annual human rabies mortality in Asia and Africa exceeds 35,172 and 21,476 deaths, respectively. In Ethiopia, approximately 2,900 people are estimated to die of rabies annually; in the Tigray region, approximately 98 people are estimated to die annually. The aim of this study is to analyze trends in, describe, and evaluate ten years of rabies data in Tigray, Ethiopia. Methods: We conducted a descriptive epidemiological study, from 15-30 February 2023, of rabies exposures and deaths in humans by reviewing the health management information system reports from the Tigray Regional Health Bureau and the dog vaccination coverage from 2013 to 2022. We used case definitions: suspected cases were those bitten by dogs displaying clinical signs consistent with rabies, and confirmed cases were deaths from rabies following exposure. Results: A total of 21,031 dog bites, 375 reported rabies deaths, and 18,222 human post-exposure treatments in the Tigray region were analyzed. Suspected rabies cases showed an increasing trend from 2013 to 2015 and from 2018 to 2019. The overall mortality rate was 19 per 1,000 in Tigray. The largest share of suspected patients (45%) were aged under 15 years. The Agriculture Bureau of Tigray Region estimates that about 12,000 owned and 2,500 stray dogs are present in the region, but yearly dog vaccination coverage remains low (50%). Conclusion: Rabies is a public health problem in the Tigray region. It is highly recommended to vaccinate individually owned dogs, and the concerned sectors should eliminate stray dogs. The surveillance system should be strengthened to estimate the real magnitude of the problem and to launch prevention and control measures.

Keywords: rabies, virus, transmission, prevalence

Procedia PDF Downloads 72
253 Toxicity of PPCPs on Adapted Sludge Community

Authors: G. Amariei, K. Boltes, R. Rosal, P. Leton

Abstract:

Wastewater treatment plants (WWTPs) are supposed to hold an important place in the reduction of emerging contaminants, but they provide an environment with potential for the development and/or spread of adaptation, as bacteria are continuously mixed with contaminants at sub-inhibitory concentrations. Reviewing the literature, there are few data available regarding the use of adapted bacteria forming an activated sludge community for toxicity assessment, and only individual validations have been performed. Therefore, the aim of this work was to study the toxicity of triclosan (TCS) and ibuprofen (IBU), individually and in binary combination, on adapted activated sludge (AS). For this purpose, a battery of biomarkers was assessed, involving oxidative stress and cytotoxicity responses: glutathione-S-transferase (GST), catalase (CAT), and viable cells with FDA. In addition, we compared the toxic effects on adapted bacteria with those on unadapted bacteria from previous research. The adapted AS came from three continuous-flow AS laboratory systems: two systems received IBU and TCS individually, while the third received the binary combination, for 14 days. After adaptation, each bacterial culture condition was exposed to IBU, TCS, and the combination for 12 h. The concentrations of IBU and TCS ranged over 0.5-4 mg/L and 0.012-0.1 mg/L, respectively. Batch toxicity experiments were performed using an Oxygraph system (Hansatech) to determine the activity of the CAT enzyme based on the quantification of the oxygen production rate. A fluorimetric technique was applied as well, using a Fluoroskan Ascent FL (Thermo), to determine the activity of the GST enzyme, using monochlorobimane-GSH as substrate, and to estimate the viable cells of the sludge by fluorescence staining with fluorescein diacetate (FDA). For the IBU-adapted sludge, CAT activity was increased at low concentrations of IBU, TCS, and the mixture. However, with increasing concentration the behavior differed: while IBU tended to stabilize CAT activity, TCS and the mixture decreased it. GST activity was significantly increased by TCS and the mixture; for IBU, no variation was observed. For the TCS-adapted sludge, no significant variation in CAT activity was observed, and GST activity was significantly decreased by all contaminants. For the mixture-adapted sludge, the behavior of CAT activity was similar to that of the IBU-adapted sludge; GST activity decreased at all concentrations of IBU, while the presence of TCS and the mixture, respectively, increased it. These findings were consistent with the cell viability evaluation, which clearly showed a variation in sludge viability. Our results suggest that, compared with unadapted bacteria, adaptation plays a relevant role in the toxicity behaviour of these contaminants towards activated sludge communities.

Keywords: adapted sludge community, mixture, PPCPs, toxicity

Procedia PDF Downloads 399
252 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence

Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács

Abstract:

The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among the market prices or the prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in statistical literature suggests that, similarly to the volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a necessity for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process from the family of fractional processes offers a promising path. We can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. We had to develop a fast algorithm to generate a valid and suitably large sample from the appropriate process to train the network. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of SCP extends beyond its immediate application. It also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios. In essence, deploying both SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.
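A sketch of one way to generate fOU training samples of the kind described above: exact fractional Gaussian noise via a Cholesky factorization, then an Euler scheme for the fOU dynamics. The parameter values are illustrative, and the authors' fast generation algorithm is likely more sophisticated than this O(n²) demonstration.

```python
import numpy as np

def fgn(n, hurst, rng):
    """Fractional Gaussian noise via Cholesky factorization of its
    autocovariance (exact but O(n^2); fine for a demonstration)."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * hurst)
                   - 2 * np.abs(k) ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def fou_path(n, dt, theta, mu, sigma, hurst, rng):
    """Euler scheme for the fractional Ornstein-Uhlenbeck SDE
    dX = theta*(mu - X) dt + sigma dB^H; increments of B^H over a
    step of length dt scale as dt**hurst."""
    noise = sigma * dt ** hurst * fgn(n, hurst, rng)
    x = np.empty(n + 1)
    x[0] = mu
    for i in range(n):
        x[i + 1] = x[i] + theta * (mu - x[i]) * dt + noise[i]
    return x

rng = np.random.default_rng(1)
path = fou_path(n=1000, dt=1 / 252, theta=2.0, mu=0.0,
                sigma=0.3, hurst=0.1, rng=rng)   # H < 0.5: a rough path
print(path[:5])
```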

Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility

Procedia PDF Downloads 118
251 Disaggregate Travel Behavior and Transit Shift Analysis for a Transit Deficient Metropolitan City

Authors: Sultan Ahmad Azizi, Gaurang J. Joshi

Abstract:

Urban transportation has come into the limelight in recent times due to deteriorating travel quality. The economic growth of India has boosted a significant rise in private vehicle ownership in cities, whereas public transport systems have largely been ignored in metropolitan cities. Even though there is latent demand for public transport systems like organized bus services, most metropolitan cities have an unsustainably low share of public transport. Unfortunately, Indian metropolitan cities have failed to maintain a balance in the mode share of various travel modes in the absence of the timely introduction of mass transit systems of the required capacity and quality. As a result, personalized travel modes like two-wheelers have become the principal modes of travel, which cause significant environmental, safety, and health hazards to the citizens. Of late, policy makers have realized the need to improve public transport systems in metro cities to sustain development. However, the challenge for transit planning authorities is to design a transit system that may attract people to switch over from their existing, rather convenient, modes of travel, given the influence of household socio-economic characteristics and established travel patterns. In this context, the fast-growing industrial city of Surat is taken up as a case for the study of the likely shift to bus transit. Deterioration of the city's bus-based public transport system after 1998 has led to tremendous growth in two-wheeler traffic on city roads. The inadequate and poor service quality of the present bus transit has failed to attract riders and correct the mode-use balance in the city. The disaggregate travel behavior for trip generation and travel mode choice has been studied for the West Adajan residential sector of the city. Mode-specific utility functions are calibrated in a multinomial logit environment for two-wheelers, cars, and auto rickshaws with respect to bus transit using SPSS. Estimation of the shift to bus transit indicates that, on average, 30% of auto rickshaw users and nearly 5% of two-wheeler users are likely to shift to bus transit if service quality is improved. However, car users are not expected to shift to the bus transit system.
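For reference, the multinomial-logit machinery behind the calibrated utility functions reduces to a softmax over systematic utilities. The utility values below are hypothetical placeholders, not the SPSS-calibrated coefficients from the study.

```python
import numpy as np

def mnl_probabilities(utilities):
    """Multinomial logit choice probabilities: P_k = exp(V_k) / sum_j exp(V_j)."""
    expv = np.exp(utilities - np.max(utilities))   # shift for numerical stability
    return expv / expv.sum()

# Hypothetical systematic utilities for the four alternatives in the study;
# these are NOT the calibrated values, just illustrative numbers.
modes = ["two-wheeler", "car", "auto rickshaw", "bus transit"]
v = np.array([1.2, 0.4, 0.1, 0.9])
for mode, prob in zip(modes, mnl_probabilities(v)):
    print(f"{mode:>13s}: {prob:.1%}")
```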

Keywords: bus transit, disaggregate travel behavior, mode choice behavior, public transport

Procedia PDF Downloads 260
250 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Load forecasting has become crucial in recent years and is now a popular area of research. Many different power forecasting models have been tried out for this purpose. Electricity load forecasting is necessary for energy policies and for healthy, reliable grid systems. Effective forecasting of renewable energy load helps decision makers minimize the costs of electric utilities and power plants. Forecasting tools are required that can predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. We present models for predicting renewable energy loads based on deep neural networks, especially the Long Short-Term Memory (LSTM) algorithm. Deep learning allows models with multiple layers to learn representations of data, and LSTM networks are able to store information over long periods of time. Deep learning models have recently been used to forecast renewable energy sources, such as wind and solar power. Historical load and weather information represent the most important input variables for power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 at one-hour resolution, using publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies were carried out on these data via a deep neural network approach, including the LSTM technique, for the Turkish electricity market. 432 different models were created by varying the number of layers, the cell count, and the dropout rate. The adaptive moment estimation (ADAM) algorithm was used for training as a gradient-based optimizer instead of stochastic gradient descent (SGD); ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared according to MAE (mean absolute error) and MSE (mean squared error). The best MAE results out of the 432 tested models are 0.66, 0.74, 0.85, and 1.09. The forecasting performance of the proposed LSTM models gives successful results compared to the literature.
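A minimal PyTorch sketch of an LSTM load forecaster trained with the Adam optimizer and an MAE criterion, mirroring the setup described above; the hidden size, layer count, dropout, and window length are illustrative hyperparameters, not the best of the authors' 432 configurations.

```python
import torch
import torch.nn as nn

class LoadForecaster(nn.Module):
    """LSTM regressor for one-step-ahead renewable load forecasting.
    Hidden size, layer count and dropout stand in for the hyperparameters
    the authors searched over; these specific values are illustrative."""
    def __init__(self, n_features=2, hidden=64, layers=2, dropout=0.2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, layers,
                            batch_first=True, dropout=dropout)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, window, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict the next hour's load

model = LoadForecaster()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # ADAM, as in the study
x = torch.randn(8, 24, 2)    # 8 windows of 24 hourly (load, weather) pairs
y = torch.randn(8, 1)        # next-hour load targets (synthetic)
loss = nn.L1Loss()(model(x), y)            # MAE training criterion
loss.backward()
optimizer.step()
print(f"MAE on this synthetic batch: {loss.item():.3f}")
```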

Keywords: deep learning, long short term memory, energy, renewable energy load forecasting

Procedia PDF Downloads 266
249 Conflation Methodology Applied to Flood Recovery

Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong

Abstract:

Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being caused by nuisance flooding, and its long-term effects on communities, are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (CFR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The CFR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The CFR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The CFR model is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distributions' means, without the additional information provided by each individual distribution's variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources: severe flooding events and nuisance flooding events.
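The core conflation operation is simple enough to show directly: a normalized pointwise product of the input densities. The sketch below conflates two hypothetical exponential recovery-time densities; for exponentials, the conflation is again exponential with the summed rate.

```python
import numpy as np

def conflate(pdf1, pdf2, x):
    """Conflation of two densities: the normalized pointwise product
    q(x) = p1(x) * p2(x) / integral(p1 * p2)."""
    dx = x[1] - x[0]
    prod = pdf1(x) * pdf2(x)
    return prod / (prod.sum() * dx)

# Hypothetical exponential recovery-time densities (rates in 1/days):
# slow recovery from severe events, fast recovery from nuisance events.
x = np.linspace(0.0, 60.0, 2001)
severe = lambda t: 0.05 * np.exp(-0.05 * t)
nuisance = lambda t: 0.50 * np.exp(-0.50 * t)

q = conflate(severe, nuisance, x)
mean_recovery = (x * q).sum() * (x[1] - x[0])
# For exponentials the conflation is exponential with the summed rate
# (0.55/day), so the mean should be close to 1/0.55 ~ 1.8 days.
print(f"conflated mean recovery time: {mean_recovery:.2f} days")
```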

Keywords: community resilience, conflation, flood risk, nuisance flooding

Procedia PDF Downloads 103
248 Preliminary Evaluation of Decommissioning Wastes for the First Commercial Nuclear Power Reactor in South Korea

Authors: Kyomin Lee, Joohee Kim, Sangho Kang

Abstract:

The first commercial nuclear power reactor in South Korea, Kori Unit 1, a 587 MWe pressurized water reactor that started operation in 1978, was permanently shut down in June 2017 without an additional operating license extension. Kori Unit 1 is scheduled to become the first nuclear power unit in South Korea to enter the decommissioning phase. In this study, a preliminary evaluation of the decommissioning wastes for Kori Unit 1 was performed based on the following series of steps: first, the plant inventory was investigated based on various documents (i.e., equipment/component lists, construction records, and general arrangement drawings). Second, the radiological conditions of the systems, structures, and components (SSCs) were established to estimate the amount of radioactive waste by waste classification. Third, the waste management strategies for Kori Unit 1, including waste packaging, were established. Fourth, the proper decontamination and dismantling (D&D) technologies were selected considering various factors. Finally, the amount of decommissioning waste by classification for Kori Unit 1 was estimated using the DeCAT program, which was developed by KEPCO-E&C for decommissioning cost estimation. The preliminary evaluation results have shown that the expected amounts of decommissioning wastes were less than about 2% and 8% of the total wastes generated (i.e., the sum of clean wastes and radwastes) before and after waste processing, respectively, and it was found that the majority of the contaminated material was carbon or alloy steel and stainless steel. In addition, within the range of available information, the results of the evaluation were compared with data from various decommissioning experiences and international/national decommissioning studies. The comparison shows that the radioactive waste amounts from the Kori Unit 1 decommissioning were much less than those from plants decommissioned in the U.S. and were comparable to those from plants in Europe; this result comes from the differences in disposal cost and clearance criteria (i.e., free release level) between the U.S. and non-U.S. countries. The preliminary evaluation performed using the methodology established in this study will be useful information in establishing the decommissioning plan, including the decommissioning schedule and the waste management strategy covering the transportation, packaging, handling, and disposal of radioactive wastes.

Keywords: characterization, classification, decommissioning, decontamination and dismantling, Kori 1, radioactive waste

Procedia PDF Downloads 209
247 Development and Validation of First Derivative Method and Artificial Neural Network for Simultaneous Spectrophotometric Determination of Two Closely Related Antioxidant Nutraceuticals in Their Binary Mixture

Authors: Mohamed Korany, Azza Gazy, Essam Khamis, Marwa Adel, Miranda Fawzy

Abstract:

Background: Two new, simple, and specific methods were developed and validated in accordance with ICH guidelines: first, a zero-crossing first-derivative technique, and second, a chemometric-assisted spectrophotometric artificial neural network (ANN). Both methods were used for the simultaneous estimation of two closely related antioxidant nutraceuticals: coenzyme Q10 (Q), also known as ubidecarenone or ubiquinone-10, and vitamin E (E), alpha-tocopherol acetate, in their pharmaceutical binary mixture. Results: For the first method, by applying the first derivative, Q and E were each determined at the zero-crossing of the other. The D1 amplitudes of Q and E, at 285 nm and 235 nm respectively, were recorded and correlated to their concentrations. The calibration curves are linear over the concentration ranges of 10-60 and 5.6-70 μg/mL for Q and E, respectively. For the second method, an ANN (as a multivariate calibration method) was developed and applied for the simultaneous determination of both analytes. A training set (or concentration set) of 90 different synthetic mixtures containing Q and E, over wide concentration ranges of 0-100 µg/mL and 0-556 µg/mL respectively, was prepared in ethanol. The absorption spectra of the training set were recorded in the spectral region of 230–300 nm. A gradient-descent backpropagation ANN chemometric calibration was computed by relating the concentration sets (x-block) to their corresponding absorption data (y-block). Another set of 45 synthetic mixtures of the two drugs, in the defined ranges, was used to validate the proposed network. Neither chemical separation, a preparation stage, nor mathematical graphical treatment was required. Conclusions: The proposed methods were successfully applied for the assay of Q and E in laboratory-prepared mixtures and in a combined pharmaceutical tablet, with excellent recoveries. The ANN method was superior to the derivative technique, as the former determined both drugs under the non-linear experimental conditions. It also offers rapidity, high accuracy, and savings in effort and cost, and requires no specialist analyst for its application. Although the ANN technique needed a large training set, it is the method of choice for the routine analysis of Q and E tablets. No interference was observed from common pharmaceutical additives. The results of the two methods were compared with each other.
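The zero-crossing idea is easy to demonstrate numerically: at the wavelength where one component's first-derivative spectrum crosses zero, the mixture's derivative amplitude depends on the other component alone. The Gaussian bands below are hypothetical stand-ins for the real Q and E spectra.

```python
import numpy as np

wl = np.linspace(200.0, 320.0, 1201)          # wavelength grid (nm)

def band(center, width):
    """A Gaussian absorption band standing in for a real spectrum."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical spectra for Q and E (real band shapes differ, but the
# zero-crossing logic is identical) and an arbitrary mixture.
spec_q, spec_e = band(275.0, 18.0), band(255.0, 22.0)
mixture = 0.6 * spec_q + 1.4 * spec_e

def d1(spectrum):
    """First-derivative spectrum, dA/d(wavelength)."""
    return np.gradient(spectrum, wl)

# E's derivative crosses zero at its band maximum (255 nm here), so the
# mixture's D1 amplitude there is proportional to Q's concentration alone.
i = np.argmin(np.abs(wl - 255.0))
print(d1(mixture)[i], 0.6 * d1(spec_q)[i])    # the two values agree
```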

Keywords: coenzyme Q10, vitamin E, chemometry, quantitative analysis, first derivative spectrophotometry, artificial neural network

Procedia PDF Downloads 446
246 Partial Least Squares Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
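A minimal sketch of PLS regression in the p >> n, highly correlated regime described above, using scikit-learn; the latent-factor data generator is an assumption chosen so that a few PLS components suffice where ordinary least squares would be ill-posed.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# p >> n with strong inter-variable correlation, as in NIR or CNA data:
# 60 samples, 500 predictors driven by a handful of latent factors.
n, p, k = 60, 500, 4
latent = rng.standard_normal((n, k))
X = latent @ rng.standard_normal((k, p)) + 0.1 * rng.standard_normal((n, p))
y = latent @ rng.standard_normal(k) + 0.1 * rng.standard_normal(n)

# PLS builds latent components of X that maximize covariance with y,
# sidestepping the singular normal equations that break OLS when p > n.
pls = PLSRegression(n_components=4).fit(X, y)
r2 = r2_score(y, pls.predict(X).ravel())
print(f"in-sample R^2 with 4 latent components: {r2:.3f}")
```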

Keywords: partial least squares regression, genetics data, negative shrinkage factors, high-dimensional data, highly correlated data

Procedia PDF Downloads 49
245 Formulation and Evaluation of Curcumin-Zn (II) Microparticulate Drug Delivery System for Antimalarial Activity

Authors: M. R. Aher, R. B. Laware, G. S. Asane, B. S. Kuchekar

Abstract:

Objective: Studies have shown that a new combination therapy of artemisinin derivatives with curcumin is unique, with potential advantages over known artemisinin-based combination therapies (ACTs). In the present study, an attempt was made to prepare a microparticulate drug delivery system of the curcumin-Zn complex and evaluate it in combination with artemether for antimalarial activity. Material and method: The curcumin-Zn complex was prepared and encapsulated using sodium alginate. The microparticles thus obtained were further coated with various enteric polymers at different coating thicknesses to control the release. The microparticles were evaluated for encapsulation efficiency, drug loading, and in vitro drug release. Roentgenographic studies were conducted in rabbits with a BaSO4-tagged formulation. The optimized formulation was screened for antimalarial activity using the P. berghei-infected mice survival test and % parasitemia inhibition, alone (three oral doses of 5 mg/day) and in combination with artemether (i.p. 500, 1000, and 1500 µg). Curcumin-Zn(II) was estimated in serum after oral administration to rats by spectrofluorometry. Result: Microparticles coated with cellulose acetate phthalate showed the most satisfactory, controlled release, with a time of 479 min for 60% drug release. X-ray images taken at different time intervals confirmed the retention of the formulation in the GI tract. Estimation of curcumin in serum by spectrofluorometry showed that the drug concentration is maintained in the blood for a longer time, with a tmax of 6 hours. The survival time (40 days post-treatment) of mice infected with P. berghei was compared after treatment with the Curcumin-Zn(II) microparticle-artemether combination, the curcumin-Zn complex, or artemether. Oral administration of Curcumin-Zn(II)-artemether prolonged the survival of P. berghei-infected mice. All the mice treated with Curcumin-Zn(II) microparticles (5 mg/day) plus artemether (1000 µg) survived for more than 40 days and recovered with no detectable parasitemia. Administration of the Curcumin-Zn(II)-artemether combination reduced the parasitemia in mice by more than 90% compared to that in control mice for the first 3 days after treatment. Conclusion: The antimalarial activity of the curcumin-Zn-artemether combination was more pronounced than monotherapy. A single dose of 1000 µg of artemether in the curcumin-Zn combination gives complete protection in P. berghei-infected mice. This may reduce the chances of drug resistance in malaria management.

Keywords: formulation, microparticulate drug delivery, antimalarial, pharmaceutics

Procedia PDF Downloads 394
244 Estimation of Biomedical Waste Generated in a Tertiary Care Hospital in New Delhi

Authors: Priyanka Sharma, Manoj Jais, Poonam Gupta, Suraiya K. Ansari, Ravinder Kaur

Abstract:

Introduction: As necessary as health care is for the population, so is the management of the biomedical waste produced. Biomedical waste is a broad term used for the waste material produced during the diagnosis, treatment, or immunization of human beings and animals, in research, or in the production or testing of biological products. Biomedical waste management is a chain of processes from the point of generation of biomedical waste to its final disposal in the correct and proper way assigned for that particular type of waste. Any deviation from these processes leads to improper disposal of biomedical waste, which is itself a major health hazard. Improper disposal of BMW can cause sharps injuries, which may lead to HIV, hepatitis B virus, or hepatitis C virus infections; therefore, proper disposal of BMW is of utmost importance, and proper segregation of biomedical waste is the key to biomedical waste management. Health care establishments segregate biomedical waste and dispose of it as per the biomedical waste management rules in India. Objectives: This study was done to observe the current trends of biomedical waste generation in a tertiary care hospital in Delhi. Methodology: Biomedical waste management rounds were conducted in the hospital wards. Relevant details were collected and analysed, and the sites with maximum biomedical waste generation were identified. All the data were cross-checked with the common collection site. Results: The total amount of waste generated in the hospital from January 2014 to December 2014 was 639,547 kg, of which 70.5% was general (non-hazardous) waste and the remaining 29.5% was BMW, which consisted of highly infectious waste (12.2%), disposable plastic waste (16.3%), and sharps (1%). The sites producing the largest quantities of biomedical waste were the Obstetrics and Gynaecology wards, with a total biomedical waste production of 45.8%, followed by the Paediatrics, Surgery, and Medicine wards with 21.2%, 4.6%, and 4.3%, respectively. The maximum average biomedical waste generated was by the Obstetrics and Gynaecology ward, with 0.7 kg/bed/day, followed by the Paediatrics, Surgery, and Medicine wards with 0.29, 0.28, and 0.18 kg/bed/day, respectively. Conclusions: Hospitals should pay attention to the sites that produce a large amount of BMW to avoid improper segregation of biomedical waste. In addition, induction and refresher training programs on biomedical waste management should be conducted to avoid improper management of biomedical waste. Healthcare workers should be made aware of the risks of poor biomedical waste management.

Keywords: biomedical waste, biomedical waste management, hospital-tertiary care, New Delhi

Procedia PDF Downloads 245
243 Extreme Heat and Workforce Health in Southern Nevada

Authors: Erick R. Bandala, Kebret Kebede, Nicole Johnson, Rebecca Murray, Destiny Green, John Mejia, Polioptro Martinez-Austria

Abstract:

Summer temperature data from Clark County were collected and used to estimate two different heat-related indexes: the heat index (HI) and the excess heat factor (EHF). These two indexes were used jointly with data on health-related deaths in Clark County to assess the effect of extreme heat on the exposed population. The trends of the heat indexes were then analyzed for the 2007-2016 decade, and the correlation between heat wave episodes and the number of heat-related deaths in the area was estimated. The HI showed that this value has increased significantly in June, July, and August over the last ten years. The same trend was found for the EHF, which showed a clear increase in the severity and number of these events per year. The number of heat wave episodes increased from 1.4 per year during the 1980-2016 period to 1.66 per year during the 2007-2016 period. However, a different trend was found for heat-wave-event duration, which decreased from an average of 20.4 days during the trans-decadal period (1980-2016) to 18.1 days during the most recent decade (2007-2016). The number of heat-related deaths was also found to increase from 2007 to 2016, with 2016 having the highest number of heat-related deaths. Both the HI and the number of deaths showed a normal-like distribution for June, July, and August, with peak values reached in late July and early August. The average maximum HI values correlated better with the number of deaths registered in Clark County than the EHF did, probably because the HI uses the maximum temperature and humidity in its estimation, whereas the EHF uses the average mean temperature. However, it is worth testing the EHF in the study zone because it has been reported to fit well in the case of heat-related morbidity. For the overall period, 437 heat-related deaths were registered in Clark County, with 20% of the deaths occurring in June, 52% in July, 18% in August, and the remaining 10% in the other months of the year. The most vulnerable subpopulation was people over 50 years old, for whom 76% of the heat-related deaths were registered; most of these cases were associated with heart disease preconditions. The second most vulnerable subpopulation was young adults (20-50), who accounted for 23% of the heat-related deaths; these deaths were associated with alcohol or illegal drug intoxication.
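For concreteness, the HI referenced above can be computed with NOAA's Rothfusz regression, sketched below; the example temperature and humidity are illustrative, and the low- and high-humidity adjustment terms of the full NWS procedure are omitted.

```python
def heat_index_f(temp_f, rh):
    """NOAA's Rothfusz regression for the heat index (temperature in deg F,
    relative humidity in percent). Valid roughly when HI >= 80 F; the NWS
    low-humidity and high-humidity adjustment terms are omitted here."""
    return (-42.379 + 2.04901523 * temp_f + 10.14333127 * rh
            - 0.22475541 * temp_f * rh - 6.83783e-3 * temp_f ** 2
            - 5.481717e-2 * rh ** 2 + 1.22874e-3 * temp_f ** 2 * rh
            + 8.5282e-4 * temp_f * rh ** 2 - 1.99e-6 * temp_f ** 2 * rh ** 2)

# An illustrative July afternoon in Clark County: 104 F at 20% humidity.
print(f"heat index: {heat_index_f(104.0, 20.0):.1f} F")
```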

Keywords: heat, health, hazards, workforce

Procedia PDF Downloads 104
242 Optimal Tetra-Allele Cross Designs Including Specific Combining Ability Effects

Authors: Mohd Harun, Cini Varghese, Eldho Varghese, Seema Jaggi

Abstract:

Hybridization crosses play a vital role in breeding experiments for evaluating the combining abilities of individual parental lines or crosses and for creating lines with desirable qualities. There are various ways of obtaining progenies and further studying the combining ability effects of the lines taken in a breeding programme. Some of the most common methods are the diallel or two-way cross, the triallel or three-way cross, and the tetra-allele or four-way cross. These techniques help breeders improve quantitative traits of economic as well as nutritional importance in crops and animals. Amongst these methods, the tetra-allele cross provides extra information in terms of higher specific combining ability (sca) effects, and the hybrids thus produced exhibit both individual and population buffering mechanisms because of their broad genetic base. Most common commercial corn hybrids are either three-way or four-way cross hybrids. The tetra-allele cross has emerged as the most practical and acceptable scheme for the production of slaughter pigs with fast growth rate, good feed efficiency, and carcass quality, and tetra-allele crosses are widely used to exploit heterosis in commercial silkworm production. Experimental designs involving tetra-allele crosses have been studied extensively in the literature, and the optimality of such designs has also been treated as a researchable issue. In practical situations, it is advisable to include sca effects in the model, as this information is needed by the breeder to improve economically and nutritionally important quantitative traits. Thus, a model that provides information regarding specific traits by utilizing sca effects along with general combining ability (gca) effects may help breeders deal with the problem of various stresses. In this paper, a model for experimental designs involving tetra-allele crosses that incorporates both gca and sca effects is defined; a plausible form of such a model is sketched below. Optimality aspects of such designs are discussed with sca effects in the model, and orthogonality conditions are derived for block designs, ensuring that contrasts among the gca effects can be estimated independently of the sca effects after eliminating nuisance factors. A user-friendly SAS macro and web solution (webPTC) have been developed for the generation and analysis of such designs.
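One plausible form of such a linear model, written here only as a sketch since the exact parameterisation of sca terms varies across the literature:

```latex
% Tetra-allele cross (i x j)(k x l) observed in block b:
\[
  y_{(ijkl)b} \;=\; \mu \;+\; \beta_b \;+\; g_i + g_j + g_k + g_l
  \;+\; s_{(ijkl)} \;+\; \varepsilon_{(ijkl)b}
\]
% \mu: general mean; \beta_b: block effect; g_p: gca effect of
% parental line p; s_{(ijkl)}: sca effect of the four-way cross;
% \varepsilon_{(ijkl)b}: random error term.
```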

Keywords: general combining ability, optimality, specific combining ability, tetra-allele cross, webPTC

Procedia PDF Downloads 137
241 Antioxidant Status in Synovial Fluid from Osteoarthritis Patients: A Pilot Study in Indian Demography

Authors: S. Koppikar, P. Kulkarni, D. Ingale, N. Wagh, S. Deshpande, A. Mahajan, A. Harsulkar

Abstract:

The crucial role of reactive oxygen species (ROS) in the progression of osteoarthritis (OA) pathogenesis has been endorsed several times, though its exact mechanism remains unclear. Oxidative stress is known to instigate classical stress factors such as cytokines, chemokines and ROS, which hamper the cartilage remodelling process and ultimately worsen the disease. Synovial fluid (SF) is a biological communicator between cartilage and synovium that accumulates redox and biochemical signalling mediators. The present work attempts to measure several oxidative stress markers in synovial fluid obtained from knee OA patients with varying degrees of disease severity. Thirty OA and five meniscal-tear (MT) patients were graded using the Kellgren-Lawrence scale and assessed for nitric oxide (NO), nitrate-nitrite (NN), 2,2-diphenyl-1-picrylhydrazyl (DPPH), ferric reducing antioxidant potential (FRAP), catalase (CAT), superoxide dismutase (SOD) and malondialdehyde (MDA) levels for comparison. Of the various oxidative markers studied, NO and SOD showed a significant difference between moderate and severe OA (p = 0.007 and p = 0.08, respectively), whereas CAT showed a significant difference between the MT and mild groups (p = 0.07). Interestingly, NN showed a statistically positive correlation with OA severity (p = 0.001 and p = 0.003). MDA, a lipid peroxidation by-product, was highest in early OA compared to MT (p = 0.06). However, FRAP did not show any correlation with OA severity or the MT control. NO is a bio-regulatory molecule essential for several physiological processes and inflammatory conditions; however, due to its short life, exact estimation of NO is difficult, and NO and its stable, measurable products are therefore considered important biomarkers of oxidative damage. The levels of NO and nitrate-nitrite in the SF of patients with OA indicated their involvement in disease progression. When the SF groups were compared, a significant correlation among the moderate, mild and MT groups was established. To summarize, the present data showed higher levels of NO, SOD, CAT, DPPH and MDA in early OA in comparison with MT as a control group. NN emerged as a prognostic biomarker in knee OA patients, and these markers may serve as future targets in OA treatment.
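The abstract reports p-values without naming the statistical test; the minimal sketch below assumes a non-parametric Mann-Whitney U test, which is common for small clinical groups, and uses invented placeholder readings rather than the study's measurements.

```python
from scipy.stats import mannwhitneyu

# Hypothetical NO readings (umol/L) for two Kellgren-Lawrence grades;
# invented placeholders, not the study's data.
no_moderate = [4.1, 5.0, 4.6, 5.3, 4.8, 5.1]
no_severe = [6.2, 6.9, 5.8, 7.1, 6.5, 6.7]

# Two-sided Mann-Whitney U test between the two severity groups.
stat, p = mannwhitneyu(no_moderate, no_severe, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")
```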

Keywords: antioxidant, knee osteoarthritis, oxidative stress, synovial fluid

Procedia PDF Downloads 477
240 Green Procedure for Energy and Emission Balancing of Alternative Scenario Improvements for Cogeneration System: A Case of Hardwood Lumber Manufacturing Process

Authors: Aldona Kluczek

Abstract:

Energy-efficient processes have become a pressing research field in manufacturing. The case for effective industrial energy efficiency rests on interacting factors: economic and environmental impact, and energy security. Improvements in energy efficiency are most often achieved by implementing a more efficient technology or manufacturing process. Current processes of electricity production represent the biggest consumption of energy and the greatest amount of emissions to the environment. The goal of this study is to improve the potential energy savings and reduce the greenhouse emissions associated with improvement scenarios for the treatment of hardwood lumber produced by an industrial plant operating in the U.S., through the application of a green balancing procedure, in order to identify the preferable efficient technology. The green procedure for energy is based on the analysis of energy efficiency data. Three alternative scenarios for the construction of the cogeneration plant (CHP) are considered: generation of fresh steam, the purchase of a new boiler operating at 300 pounds per square inch gauge (PSIG), and the installation of a new boiler operating at 600 PSIG. This paper illustrates the application of bottom-up modelling of energy flow to devise a streamlined Energy and Emission Flow Analysis method for the electricity-producing technology. The method identifies the efficiency or technology to be reached for a given process through the effective use of energy or energy management. The results show that the third scenario is the most efficient alternative among those considered, from both the environmental and the economic standpoint, for treating hardwood lumber. The energy conservation options could save an estimated 6,215.78 MMBtu/yr, which represents 9.5% of the total annual energy usage. The total annual potential cost savings from all recommendations is $143,523/yr, which represents 30.1% of the total annual energy costs. Estimates indicate that energy cost savings of up to 43% (US$143,337.85) are possible, representing 18.6% of the total annual energy costs.
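The quoted savings shares imply the plant's baseline totals; the short check below back-calculates them from the abstract's own figures (the derived totals are implied values, not reported plant data).

```python
# Inputs are the figures quoted in the abstract.
savings_mmbtu = 6_215.78        # annual energy savings, MMBtu/yr
savings_share = 0.095           # 9.5% of total annual usage
cost_savings = 143_523          # USD/yr
cost_share = 0.301              # 30.1% of total annual energy cost

# Implied baselines, obtained by dividing savings by their shares.
total_usage = savings_mmbtu / savings_share   # ~65,430 MMBtu/yr
total_cost = cost_savings / cost_share        # ~476,820 USD/yr
print(f"Implied annual usage: {total_usage:,.0f} MMBtu")
print(f"Implied annual energy cost: ${total_cost:,.0f}")
```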

Keywords: alternative scenario improvements, cogeneration system, energy and emission flow analysis, energy balancing, green procedure, hardwood lumber manufacturing process

Procedia PDF Downloads 208
239 Counting Fishes in Aquaculture Ponds: Application of Imaging Sonars

Authors: Juan C. Gutierrez-Estrada, Inmaculada Pulido-Calvo, Ignacio De La Rosa, Antonio Peregrin, Fernando Gomez-Bravo, Samuel Lopez-Dominguez, Alejandro Garrocho-Cruz, Jairo Castro-Gutierrez

Abstract:

Semi-intensive aquaculture in traditional earth ponds is the main rearing system in Southern Spain. These rearing systems account for approximately two thirds of the aquatic production in this area, which has made a significant contribution to the regional economy in recent years. In this type of rearing system, a crucial aspect is the correct quantification and control of fish abundance in the ponds, because the fish farmer knows how many fish he puts into the ponds but not how many he will harvest at the end of the rearing period. This is a consequence of mortality induced by different causes, such as pathogenic agents (parasites, viruses and bacteria) and other factors such as predation by fish-eating birds and poaching. Tracking fish abundance in these installations is very difficult because the ponds usually take up a large area of land and the management of the water flow is not automated. Therefore, there is a very high degree of uncertainty about fish abundance, which strongly hinders the management and planning of sales. A novel and non-invasive procedure for counting fish in the ponds is by means of imaging sonars, particularly fixed systems and/or systems mounted on aquatic vehicles such as Remotely Operated Vehicles (ROVs). In this work, a method based on census station procedures is proposed to evaluate the accuracy of fish abundance estimation using images obtained from multibeam sonars; the scaling idea is sketched below. The results indicate that it is possible to obtain a realistic approximation of the number of fish and their sizes, and therefore the biomass contained in the ponds. This research is included in the framework of the KTTSeaDrones Project ('Conocimiento y transferencia de tecnología sobre vehículos aéreos y acuáticos para el desarrollo transfronterizo de ciencias marinas y pesqueras 0622-KTTSEADRONES-5-E'), financed by the European Regional Development Fund (ERDF) through the Interreg V-A Spain-Portugal Programme (POCTEP) 2014-2020.
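A minimal sketch of the census-station scaling idea: fish counted inside the sonar's ensonified volume are scaled to the full pond volume. Every number here is a made-up placeholder; the project's actual processing chain is not reproduced.

```python
def estimate_abundance(counts_per_station: list[float],
                       station_volume_m3: float,
                       pond_volume_m3: float) -> float:
    """Scale the mean station density up to the whole pond."""
    mean_density = sum(counts_per_station) / (
        len(counts_per_station) * station_volume_m3
    )
    return mean_density * pond_volume_m3

# Hypothetical counts from four sonar census stations in one pond.
counts = [42, 57, 38, 49]
print(f"~{estimate_abundance(counts, 150.0, 12_000.0):,.0f} fish")
```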

Keywords: census station procedure, fish biomass, semi-intensive aquaculture, multibeam sonars

Procedia PDF Downloads 229
238 A Regression Model for Predicting Sugar Crystal Size in a Fed-Batch Vacuum Evaporative Crystallizer

Authors: Sunday B. Alabi, Edikan P. Felix, Aniediong M. Umo

Abstract:

Crystal size distribution is of great importance in sugar factories. It determines the market value of granulated sugar and also influences the cost of producing sugar crystals. Typically, sugar is produced using a fed-batch vacuum evaporative crystallizer. The crystallization quality is examined through the crystal size distribution at the end of the process, which is quantified by two parameters: the average crystal size of the distribution, i.e. the mean aperture (MA), and the width of the distribution, i.e. the coefficient of variation (CV). The lack of real-time measurement of sugar crystal size hinders its feedback control and the eventual optimisation of the crystallization process. An attractive alternative is to use a soft sensor (model-based method) for online estimation of the sugar crystal size. Unfortunately, the available models for the sugar crystallization process are not suitable, as they do not contain variables that can be measured easily online. The main contribution of this paper is the development of a regression model for estimating the sugar crystal size as a function of input variables that are easy to measure online; a sketch of the fitting procedure follows this abstract. This has the potential to provide real-time estimates of crystal size for effective feedback control. Using seven input variables, namely initial crystal size (L0), temperature (T), vacuum pressure (P), feed flowrate (Ff), steam flowrate (Fs), initial supersaturation (S0) and crystallization time (t), preliminary studies were carried out using Minitab 14 statistical software. Based on the existing sugar crystallizer models and the typical ranges of these seven input variables, 128 datasets were obtained from a 2-level factorial experimental design. These datasets were used to obtain a simple but online-implementable 6-input crystal size model; the initial crystal size (L0) did not play a significant role. The goodness of the resulting regression model was evaluated: the coefficient of determination R² was 0.994, and the maximum absolute relative error (MARE) was 4.6%. The high R² (~1.0) and the reasonably low MARE indicate that the model predicts sugar crystal size accurately as a function of the six easy-to-measure online variables. Thus, the model can be used as a soft sensor to provide real-time estimates of sugar crystal size during the crystallization process in a fed-batch vacuum evaporative crystallizer.
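A minimal sketch of the soft-sensor fitting and scoring loop on synthetic data, assuming an ordinary least-squares linear model scored with R² and MARE; the paper's actual model form, coefficients and datasets are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
# 128 synthetic runs over 6 scaled inputs (T, P, Ff, Fs, S0, t).
X = rng.uniform(0.0, 1.0, size=(128, 6))
true_w = np.array([0.9, -0.4, 0.3, 0.5, 1.1, 0.7])
y = 2.0 + X @ true_w + 0.02 * rng.standard_normal(128)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

# Goodness-of-fit metrics used in the abstract.
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
mare = np.max(np.abs((y - y_hat) / y)) * 100
print(f"R^2 = {r2:.3f}, MARE = {mare:.1f}%")
```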

Keywords: crystal size, regression model, soft sensor, sugar, vacuum evaporative crystallizer

Procedia PDF Downloads 208
237 Single and Sequential Extraction for Potassium Fractionation and Nano-Clay Flocculation Structure

Authors: Chakkrit Poonpakdee, Jing-Hua Tzen, Ya-Zhen Huang, Yao-Tung Lin

Abstract:

Potassium (K) is a known macronutrient and an essential element for plant growth. Single leaching and modified sequential extraction schemes have been developed to estimate the relative phase associations of soil samples. The sequential extraction process is a step towards analyzing the partitioning of metals affected by environmental conditions, but it is not a tool for estimating K bioavailability. Meanwhile, the traditional single leaching method has long been used to classify K speciation; it is based on K availability to plants and is used to set potash fertilizer recommendation rates. Clay minerals in soil are a factor controlling soil fertility, and the change in the micro-structure of clay minerals under various environments (i.e., swelling or shrinking) can be characterized using Transmission X-Ray Microscopy (TXM). The objectives of this study were 1) to compare the distribution of K speciation between the single leaching and sequential extraction processes, and 2) to determine the clay particle flocculation structure before/after suspension with K⁺ using TXM. Four tropical soil samples were selected: farmland without K fertilizer (10 years), farmland with long-term K fertilizer application (10 years; 168-240 kg K₂O ha⁻¹ year⁻¹), red soil (450-500 kg K₂O ha⁻¹ year⁻¹) and forest soil. The results showed that the K fractions obtained by the single leaching method decreased in the order mineral K, HNO₃-extractable K, non-exchangeable K, NH₄OAc K, exchangeable K and water-soluble K. The sequential extraction process indicated that most soil K was associated with the residual, organic matter, Fe or Mn oxide and exchangeable fractions, while the carbonate-associated K fraction was not detected in these tropical soil samples; a toy illustration of how such fractions are reported follows this abstract. The farmland with long-term K fertilizer application and the red soil had higher exchangeable K than the unfertilized farmland and the forest soil. The results indicate that one way to increase available K (water-soluble and exchangeable K) is to apply K fertilizer together with organic fertilizer. Two-dimensional TXM images of clay particles suspended with K⁺ show that the clay minerals aggregate into closed-void cellular networks. The porous cellular structure of soil aggregates in 1 M KCl solution had larger and much larger empty voids than in 0.025 M KCl and deionized water, respectively. TXM nanotomography is a new technique that can be useful in this field as a tool for better understanding clay mineral micro-structure.
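A toy illustration of how sequential-extraction results are typically reported, expressing each step's extract as a share of total K; the step names follow the fractions listed above, but the milligram values are invented placeholders, not the study's measurements.

```python
# Hypothetical K recovered at each sequential-extraction step (mg/kg).
fractions_mg_kg = {
    "exchangeable": 95.0,
    "carbonate": 0.0,          # not detected in these tropical soils
    "Fe/Mn oxide": 140.0,
    "organic matter": 210.0,
    "residual": 820.0,
}

# Express each fraction as a share of the summed total.
total = sum(fractions_mg_kg.values())
for name, value in fractions_mg_kg.items():
    print(f"{name:>14}: {value / total:6.1%}")
```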

Keywords: potassium, sequential extraction process, clay mineral, TXM

Procedia PDF Downloads 290
236 Acoustic Emission Monitoring of Surface Roughness in Ultra High Precision Grinding of Borosilicate-Crown Glass

Authors: Goodness Onwuka, Khaled Abou-El-Hossein

Abstract:

The increase in the demand for precision optics, coupled with the scarcity of research output on the ultra-high precision grinding of precision optics compared to the ultra-high precision diamond turning of optical metals, has fostered the need for more research on the ultra-high precision grinding of optical lenses. Furthermore, the increasingly stringent demands for nanometric surface finishes through the lapping, polishing and grinding processes required for the use of borosilicate-crown glass in the automotive and optics industries have created the need to monitor surface roughness effectively during the production process. The acoustic emission phenomenon has proven to be a useful monitoring technique in several manufacturing processes, ranging from bearing production monitoring to tool wear estimation. This paper introduces a rare and unique approach: applying the acoustic emission technique to monitor the surface roughness of borosilicate-crown glass during an ultra-high precision grinding process. The research was carried out on a 4-axis Nanoform 250 ultra-high precision lathe, using an ultra-high precision grinding spindle to machine the flat surface of the borosilicate-crown glass with the tip of the grinding wheel. A careful selection of parameters and design of experiments was implemented using the Box-Behnken method to vary the wheel speed, feed rate and depth of cut at three levels with a 3-center-point design. The average surface roughness was measured using a Taylor Hobson PGI Dimension XL optical profilometer, and an acoustic emission data acquisition device from National Instruments was used to acquire the signals, with the acquisition code designed in National Instruments LabVIEW software at a sampling rate of 2 million samples per second. The results show that the raw and root-mean-square amplitude values of the acoustic signals increased with a corresponding increase in the measured average surface roughness values for the different parameter combinations; a sketch of the RMS feature extraction follows this abstract. This research therefore concludes that acoustic emission monitoring is a potential technique for monitoring surface roughness in the ultra-high precision grinding of borosilicate-crown glass.
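A minimal sketch of the windowed root-mean-square (RMS) feature extraction mentioned above, on a synthetic signal at the study's 2 MS/s sampling rate; the window length and signal content are illustrative assumptions, and each window's RMS value is what would be regressed against measured roughness.

```python
import numpy as np

fs = 2_000_000                       # samples per second (2 MS/s)
t = np.arange(0, 0.01, 1 / fs)       # 10 ms of signal
# Synthetic AE stream: broadband noise plus a 50 kHz tone.
signal = 0.3 * np.random.randn(t.size) + 0.1 * np.sin(2 * np.pi * 50e3 * t)

def windowed_rms(x: np.ndarray, window: int) -> np.ndarray:
    """RMS amplitude of consecutive non-overlapping windows."""
    n = x.size // window
    return np.sqrt((x[: n * window].reshape(n, window) ** 2).mean(axis=1))

rms = windowed_rms(signal, window=2048)
print(f"{rms.size} windows, mean RMS = {rms.mean():.4f} V")
```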

Keywords: acoustic emission, borosilicate-crown glass, surface roughness, ultra high precision grinding

Procedia PDF Downloads 291