Search results for: adaptive estimation
328 An Artificially Intelligent Teaching-Agent to Enhance Learning Interactions in Virtual Settings
Authors: Abdulwakeel B. Raji
Abstract:
This paper introduces a concept of an intelligent virtual learning environment that involves communication between learners and an artificially intelligent teaching agent in an attempt to replicate classroom learning interactions. The benefit of this technology over current e-learning practices is that it creates a virtual classroom where real-time adaptive learning interactions are made possible. This is a move away from the static learning practices currently adopted by e-learning systems. Over the years, artificial intelligence has been applied to various fields, including but not limited to medicine, military applications, psychology and marketing. The purpose of e-learning applications is to ensure users are able to learn outside of the classroom, but a major limitation has been the inability to fully replicate classroom interactions between teacher and students. This study used comparative surveys to gain information and understanding of the current learning practices in Nigerian universities and how these practices compare to the use of a developed e-learning system. The study was conducted by attending several lectures and noting the interactions between lecturers and tutors; subsequently, software has been developed that deploys an artificially intelligent teaching-agent alongside an e-learning system to enhance the user learning experience and attempt to create learning interactions similar to those found in classroom and lecture hall settings. Dialogflow has been used to implement the teaching-agent, developed using JSON, which serves as a virtual teacher. Course content has been created using HTML, CSS, PHP and JavaScript as a web-based application. This technology can run on handheld devices and Google-based home technologies to give learners access to the teaching agent at any time. The technology also implements definite clause grammars and natural language processing to match user inputs and requests with defined rules in order to replicate learning interactions. The technology developed covers familiar classroom scenarios such as answering users’ questions, asking ‘do you understand?’ at regular intervals and answering subsequent requests, and taking advanced user queries to give feedback at a later time. The software uses deep learning techniques to learn user interactions and patterns and subsequently enhance the user learning experience. System testing has been carried out by undergraduate students in the UK and Nigeria on the course ‘Introduction to Database Development’. Test results and feedback from users show that this study and the developed software are a significant improvement on existing e-learning systems. Further experiments are to be run using the software with different students and more course content.
Keywords: virtual learning, natural language processing, definite clause grammars, deep learning, artificial intelligence
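As a rough illustration of the rule-matching idea described above (intent matching against defined rules), the following Python sketch maps learner utterances to classroom-style responses. It is only a toy stand-in for the Dialogflow/JSON agent and definite clause grammars used in the study; the patterns and replies are invented for illustration.

import re

# Toy rule table standing in for Dialogflow intents / definite-clause-grammar rules.
# Each rule pairs a recognised request pattern with a classroom-style response.
RULES = [
    (re.compile(r"\bwhat is (a |an )?(?P<topic>[\w\s]+)\??", re.I),
     lambda m: f"Here is a short explanation of {m.group('topic').strip()} ..."),
    (re.compile(r"\b(i don'?t|do not) understand\b", re.I),
     lambda m: "No problem - let us go through the last example again, step by step."),
    (re.compile(r"\b(yes|understood|got it)\b", re.I),
     lambda m: "Great. Shall we move on to the next part of the lesson?"),
]

def teaching_agent_reply(utterance):
    """Return the first matching rule's response, mimicking a 'virtual teacher' turn."""
    for pattern, respond in RULES:
        match = pattern.search(utterance)
        if match:
            return respond(match)
    return "Could you rephrase that? I want to be sure I answer the right question."

print(teaching_agent_reply("What is a primary key?"))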
Procedia PDF Downloads 135
327 Toxicity of PPCPs on Adapted Sludge Community
Authors: G. Amariei, K. Boltes, R. Rosal, P. Leton
Abstract:
Wastewater treatment plants (WWTPs) are supposed to hold an important place in the reduction of emerging contaminants, but they provide an environment that has potential for the development and/or spread of adaptation, as bacteria are continuously mixed with contaminants at sub-inhibitory concentrations. Reviewing the literature, there is little data available regarding the use of adapted bacteria forming activated sludge communities for toxicity assessment, and only individual validations have been performed. Therefore, the aim of this work was to study the toxicity of Triclosan (TCS) and Ibuprofen (IBU), individually and in binary combination, on adapted activated sludge (AS). For this purpose, a battery of biomarkers was assessed, involving oxidative stress and cytotoxicity responses: glutathione-S-transferase (GST), catalase (CAT) and viable cells with FDA. In addition, we compared the toxic effects on adapted bacteria with those on unadapted bacteria from previous research. Adapted AS came from three continuous-flow AS laboratory systems: two systems received IBU and TCS individually, while the other received the binary combination, for 14 days. After adaptation, each bacterial culture condition was exposed to IBU, TCS and the combination for 12 h. The concentrations of IBU and TCS ranged from 0.5-4 mg/L and 0.012-0.1 mg/L, respectively. Batch toxicity experiments were performed using an Oxygraph system (Hansatech) to determine the activity of the CAT enzyme based on quantification of the oxygen production rate. A fluorimetric technique was applied as well, using a Fluoroskan Ascent FL (Thermo), to determine the activity of the GST enzyme, using monochlorobimane-GSH as substrate, and to estimate the viable cells of the sludge by fluorescence staining with fluorescein diacetate (FDA). For the IBU-adapted sludge, CAT activity was increased at low concentrations of IBU, TCS and the mixture. However, with increasing concentration the behaviour differed: while IBU tended to stabilize CAT activity, TCS and the mixture decreased it. GST activity was significantly increased by TCS and the mixture; for IBU, no variation was observed. For the TCS-adapted sludge, no significant variation in CAT activity was observed, and GST activity was significantly decreased for all contaminants. For the mixture-adapted sludge, the behaviour of CAT activity was similar to that of the IBU-adapted sludge; GST activity was decreased at all concentrations of IBU, while the presence of TCS and the mixture, respectively, increased GST activity. These findings were consistent with the cell viability evaluation, which clearly showed a variation in sludge viability. Our results suggest that, compared with unadapted bacteria, adaptation plays a relevant role in the toxicity behaviour of these contaminants towards activated sludge communities.
Keywords: adapted sludge community, mixture, PPCPs, toxicity
Procedia PDF Downloads 399
326 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence
Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács
Abstract:
The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among the market prices or the prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in statistical literature suggests that, similarly to the volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a necessity for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process from the family of fractional processes offers a promising path. We can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. We had to develop a fast algorithm to generate a valid and suitably large sample from the appropriate process to train the network. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of SCP extends beyond its immediate application. It also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios. In essence, deploying both SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.
Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility
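A minimal Python sketch of the workflow described above is given below: simulate fractional Ornstein-Uhlenbeck (fOU) paths and train a neural network to recover the Hurst parameter from the raw paths. The Cholesky-based fractional Gaussian noise generator, the Euler discretisation, the parameter ranges and the network architecture are illustrative assumptions, not the authors' (much faster) implementation.

import numpy as np
from sklearn.neural_network import MLPRegressor

def fgn(n, hurst, dt=0.01, seed=None):
    """Fractional Gaussian noise via Cholesky of its covariance (O(n^3), fine for short paths)."""
    rng = np.random.default_rng(seed)
    k = np.arange(n)
    gamma = 0.5 * dt**(2 * hurst) * (np.abs(k + 1)**(2 * hurst)
                                     - 2 * np.abs(k)**(2 * hurst)
                                     + np.abs(k - 1)**(2 * hurst))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def fou_path(n, hurst, theta=1.0, sigma=0.5, dt=0.01, x0=0.0, seed=None):
    """Euler scheme for dX_t = -theta * X_t dt + sigma dB^H_t."""
    incr = sigma * fgn(n, hurst, dt, seed)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] - theta * x[i] * dt + incr[i]
    return x

# Small training set for illustration: paths labelled with the Hurst exponent that generated them.
rng = np.random.default_rng(0)
H_train = rng.uniform(0.05, 0.45, size=500)            # "rough" regime, H < 0.5
X_train = np.stack([fou_path(250, h, seed=i) for i, h in enumerate(H_train)])

net = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
net.fit(X_train, H_train)                              # learns H directly from the raw paths
print(net.predict(fou_path(250, 0.2, seed=999).reshape(1, -1)))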
Procedia PDF Downloads 118
325 Disaggregate Travel Behavior and Transit Shift Analysis for a Transit Deficient Metropolitan City
Authors: Sultan Ahmad Azizi, Gaurang J. Joshi
Abstract:
Urban transportation has come to the limelight in recent times due to deteriorating travel quality. The economic growth of India has boosted a significant rise in private vehicle ownership in cities, whereas public transport systems have largely been ignored in metropolitan cities. Even though there is latent demand for public transport systems like organized bus services, most of the metropolitan cities have an unsustainably low share of public transport. Unfortunately, Indian metropolitan cities have failed to maintain a balance in the mode share of various travel modes in the absence of timely introduction of mass transit systems of the required capacity and quality. As a result, personalized travel modes like two-wheelers have become the principal modes of travel, which cause significant environmental, safety and health hazards to the citizens. Of late, policy makers have realized the need to improve public transport systems in metro cities to sustain development. However, the challenge for the transit planning authorities is to design a transit system for cities that may attract people to switch over from their existing and rather convenient mode of travel to the transit system, under the influence of household socio-economic characteristics and the given travel pattern. In this context, the fast-growing industrial city of Surat is taken up as a case for the study of the likely shift to bus transit. Deterioration of the public bus transport system after 1998 has led to tremendous growth in two-wheeler traffic on city roads. The inadequate and poor service quality of the present bus transit has failed to attract riders and correct the mode-use balance in the city. Disaggregate travel behavior for trip generation and travel mode choice has been studied for the West Adajan residential sector of the city. Mode-specific utility functions are calibrated in a multinomial logit framework for two-wheelers, cars and auto rickshaws with respect to bus transit using SPSS. Estimation of the shift to bus transit indicates that an average of 30% of auto rickshaw users and nearly 5% of two-wheeler users are likely to shift to bus transit if service quality is improved. However, car users are not expected to shift to the bus transit system.
Keywords: bus transit, disaggregate travel behavior, mode choice behavior, public transport
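A minimal sketch of how calibrated multinomial logit utilities translate into mode-shift probabilities is given below; the coefficients and trip attributes are hypothetical, not the values calibrated in the study (which used SPSS on the West Adajan survey data).

import numpy as np

# Hypothetical coefficients for illustration only: a common disutility of travel time and cost,
# plus alternative-specific constants (ASC) with bus transit as the reference (ASC fixed to 0).
BETA_TIME, BETA_COST = -0.04, -0.06
ASC = {"bus": 0.0, "two_wheeler": 1.1, "car": 0.5, "auto_rickshaw": 0.3}

def mode_probabilities(attrs):
    """Multinomial logit choice probabilities; attrs maps mode -> (travel time in min, cost)."""
    u = {m: ASC[m] + BETA_TIME * t + BETA_COST * c for m, (t, c) in attrs.items()}
    expu = {m: np.exp(v) for m, v in u.items()}
    z = sum(expu.values())
    return {m: e / z for m, e in expu.items()}

# Example trip where an improved bus service is faster and cheaper than the auto rickshaw.
print(mode_probabilities({
    "bus": (35, 10), "two_wheeler": (20, 15), "car": (18, 40), "auto_rickshaw": (30, 25),
}))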
Procedia PDF Downloads 260
324 Conflation Methodology Applied to Flood Recovery
Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong
Abstract:
Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being by nuisance flooding and its long-term effects on communities are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (&FR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The &FR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The &FR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The &FR model is more accurate than averaging individual observations before calculating the mean and variance or averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distribution’s means without the additional information provided by each individual distribution variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources, severe flooding events and nuisance flooding events.
Keywords: community resilience, conflation, flood risk, nuisance flooding
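The conflation operation the abstract relies on (and which the "&" in the model's name appears to denote) can be stated compactly. The LaTeX sketch below follows the usual definition of conflation as a normalized product of densities; the normal and exponential special cases are included only to show why the result favours the input with the smaller variation, and they are not taken from the paper.

% Conflation of two probability density functions (general definition):
\[
  \&(p_1, p_2)(x) \;=\; \frac{p_1(x)\, p_2(x)}{\int_{-\infty}^{\infty} p_1(t)\, p_2(t)\, dt}.
\]
% For two normal inputs, the conflation is again normal, with precision-weighted mean and
% reduced variance, so the output leans toward the less variable input:
\[
  \mu_{\&} = \frac{\mu_1/\sigma_1^2 + \mu_2/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2},
  \qquad
  \sigma_{\&}^2 = \frac{1}{1/\sigma_1^2 + 1/\sigma_2^2}.
\]
% For two exponential recovery-time inputs with rates \lambda_1 (severe events) and
% \lambda_2 (nuisance events), the conflation is exponential with rate \lambda_1 + \lambda_2.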
Procedia PDF Downloads 103
323 Preliminary Evaluation of Decommissioning Wastes for the First Commercial Nuclear Power Reactor in South Korea
Authors: Kyomin Lee, Joohee Kim, Sangho Kang
Abstract:
The first commercial nuclear power reactor in South Korea, Kori Unit 1, a 587 MWe pressurized water reactor that started operation in 1978, was permanently shut down in June 2017 without an additional operating license extension. Kori Unit 1 is scheduled to become the first nuclear power unit in South Korea to enter the decommissioning phase. In this study, a preliminary evaluation of the decommissioning wastes for Kori Unit 1 was performed based on the following series of steps: first, the plant inventory is investigated based on various documents (i.e., equipment/component lists, construction records, general arrangement drawings). Second, the radiological conditions of systems, structures and components (SSCs) are established to estimate the amount of radioactive waste by waste classification. Third, the waste management strategies for Kori Unit 1, including waste packaging, are established. Fourth, the proper decontamination and dismantling (D&D) technologies are selected considering the various factors. Finally, the amount of decommissioning waste by classification for Kori 1 is estimated using the DeCAT program, which was developed by KEPCO-E&C for decommissioning cost estimation. The preliminary evaluation results have shown that the expected amounts of decommissioning wastes were less than about 2% and 8% of the total wastes generated (i.e., the sum of clean wastes and radwastes) before and after waste processing, respectively, and it was found that the majority of contaminated material was carbon or alloy steel and stainless steel. In addition, within the range of available information, the results of the evaluation were compared with results from various decommissioning experience data and international/national decommissioning studies. The comparison has shown that the radioactive waste amounts from the Kori Unit 1 decommissioning were much less than those from the plants decommissioned in the U.S. and were comparable to those from the plants in Europe. This result comes from the differences in disposal cost and clearance criteria (i.e., free release level) between the U.S. and other countries. The preliminary evaluation performed using the methodology established in this study will provide useful information in establishing the decommissioning plan, including the decommissioning schedule and the waste management strategy covering the transportation, packaging, handling, and disposal of radioactive wastes.
Keywords: characterization, classification, decommissioning, decontamination and dismantling, Kori 1, radioactive waste
Procedia PDF Downloads 209
322 Development and Validation of First Derivative Method and Artificial Neural Network for Simultaneous Spectrophotometric Determination of Two Closely Related Antioxidant Nutraceuticals in Their Binary Mixture
Authors: Mohamed Korany, Azza Gazy, Essam Khamis, Marwa Adel, Miranda Fawzy
Abstract:
Background: Two new, simple and specific methods, a zero-crossing first-derivative technique and a chemometric-assisted spectrophotometric artificial neural network (ANN), were developed and validated in accordance with ICH guidelines. Both methods were used for the simultaneous estimation of the two closely related antioxidant nutraceuticals: Coenzyme Q10 (Q), also known as ubidecarenone or ubiquinone-10, and Vitamin E (E), alpha-tocopherol acetate, in their pharmaceutical binary mixture. Results: For the first method, by applying the first derivative, Q and E were each determined at the zero-crossing of the other. The D1 amplitudes of Q and E, at 285 nm and 235 nm respectively, were recorded and correlated to their concentrations. The calibration curves are linear over the concentration ranges of 10-60 and 5.6-70 μg mL-1 for Q and E, respectively. For the second method, an ANN (as a multivariate calibration method) was developed and applied for the simultaneous determination of both analytes. A training set (or a concentration set) of 90 different synthetic mixtures containing Q and E, in wide concentration ranges between 0-100 µg/mL and 0-556 µg/mL respectively, was prepared in ethanol. The absorption spectra of the training sets were recorded in the spectral region of 230–300 nm. A gradient-descent back-propagation ANN chemometric calibration was computed by relating the concentration sets (x-block) to their corresponding absorption data (y-block). Another set of 45 synthetic mixtures of the two drugs, in a defined range, was used to validate the proposed network. Neither chemical separation, a preparation stage nor mathematical graphical treatment was required. Conclusions: The proposed methods were successfully applied for the assay of Q and E in laboratory-prepared mixtures and a combined pharmaceutical tablet with excellent recoveries. The ANN method was superior to the derivative technique, as the former determined both drugs under non-linear experimental conditions. It also offers rapidity, high accuracy, and savings in effort and cost; moreover, no specialist analyst is needed for its application. Although the ANN technique needed a large training set, it is the method of choice in the routine analysis of Q and E tablets. No interference was observed from common pharmaceutical additives. The results of the two methods were compared with each other.
Keywords: coenzyme Q10, vitamin E, chemometry, quantitative analysis, first derivative spectrophotometry, artificial neural network
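The zero-crossing first-derivative step can be sketched numerically as below in Python. The helper names, the synthetic calibration workflow and the assumption that each analyte's derivative crosses zero at the other's reading wavelength (285 nm for Q, 235 nm for E) are illustrative only, not the authors' code.

import numpy as np

wavelengths = np.arange(230, 301)  # nm, matching the 230-300 nm region used in the abstract

def first_derivative(spectrum, wl):
    """First-derivative spectrum, dA/d(lambda), computed numerically."""
    return np.gradient(spectrum, wl)

def d1_amplitude(spectrum, wl, at_nm):
    """Amplitude of the first-derivative spectrum at the chosen reading wavelength."""
    d1 = first_derivative(spectrum, wl)
    return d1[np.argmin(np.abs(wl - at_nm))]

def calibrate(standard_spectra, standard_conc, at_nm):
    """Linear calibration relating D1 amplitudes of measured standards to known concentrations."""
    amplitudes = np.array([d1_amplitude(s, wavelengths, at_nm) for s in standard_spectra])
    slope, intercept = np.polyfit(standard_conc, amplitudes, 1)
    return slope, intercept

def predict(spectrum, slope, intercept, at_nm):
    """Concentration of one analyte read at the zero-crossing wavelength of the other."""
    return (d1_amplitude(spectrum, wavelengths, at_nm) - intercept) / slope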
Procedia PDF Downloads 446
321 Challenges in Environmental Governance: A Case Study of Risk Perceptions of Environmental Agencies Involved in Flood Management in the Hawkesbury-Nepean Region, Australia
Authors: S. Masud, J. Merson, D. F. Robinson
Abstract:
The management of environmental resources requires the engagement of a range of stakeholders, including public/private agencies and different community groups, to implement sustainable conservation practices. A challenge which is often ignored is the analysis of the agencies involved and their power relations. One of the barriers identified is the difference in risk perceptions among the agencies involved, which leads to disjointed efforts in assessing and managing risks. Wood et al. (2012) explain that it is important to have an integrated approach to risk management in which decision makers address stakeholder perspectives. This is critical for an effective risk management policy. This abstract is part of PhD research that looks into barriers to flood management under a changing climate and intends to identify bottlenecks that create maladaptation. Experiences are drawn from international practices in the UK and examined in the context of Australia by exploring flood governance in a highly flood-prone region in Australia, the Hawkesbury-Nepean catchment, as a case study. In this research study, several aspects of governance and management are explored: (i) the complexities created by the way different agencies are involved in assessing flood risks; (ii) different perceptions on acceptable flood risk levels; (iii) perceptions on community engagement in defining the acceptable flood risk level; (iv) views on a holistic flood risk management approach; and (v) challenges of a centralised information system. The study concludes that the complexity of managing a large catchment is exacerbated by the difference in the way professionals perceive the problem. This has led to: (a) different standards for acceptable risks; (b) inconsistent attempts to set up a regional-scale flood management plan beyond jurisdictional boundaries; (c) the absence of a regional-scale agency with a licence to share and update information; and (d) a lack of forums for dialogue with insurance companies to ensure an integrated approach to flood management. The research takes the Hawkesbury-Nepean catchment as a case example and draws on published evidence from around the world. In addition, conclusions were extrapolated from eighteen semi-structured interviews with agencies involved in flood risk management in the Hawkesbury-Nepean catchment of NSW, Australia. The outcome of this research is to provide a better understanding of the complexity of assessing risks against a rapidly changing climate and to contribute towards developing effective risk communication strategies, thus enabling better management of floods and achieving an increased level of support from insurance companies, real-estate agencies, state and regional risk managers and the affected communities.
Keywords: adaptive governance, flood management, flood risk communication, stakeholder risk perceptions
Procedia PDF Downloads 286
320 Partial Least Square Regression for High-Dimensional and High-Correlated Data
Authors: Mohammed Abdullah Alshahrani
Abstract:
The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
Keywords: partial least square regression, genetics data, negative filter factors, high dimensional data, high correlated data
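A short sketch of the basic PLS workflow on simulated p >> n data is shown below using scikit-learn. The simulated latent-factor data and the number of components are assumptions for illustration, and the sparse lasso-plus-Cauchy penalty described in the abstract is not implemented here.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Simulated high-dimensional, highly correlated predictors (e.g. NIR wavelengths or CNA probes):
# far more variables (p) than observations (n), built from a handful of latent factors.
rng = np.random.default_rng(0)
n, p, k = 60, 500, 4
latent = rng.standard_normal((n, k))
X = latent @ rng.standard_normal((k, p)) + 0.05 * rng.standard_normal((n, p))
y = latent @ rng.standard_normal(k) + 0.1 * rng.standard_normal(n)

# PLS compresses the correlated predictors into a few components before regressing,
# which is exactly where ordinary least squares breaks down (p >> n).
pls = PLSRegression(n_components=4)
print(cross_val_score(pls, X, y, cv=5, scoring="r2").mean())

# The fitted x-weights play the role of the predictor weights discussed in the abstract;
# a sparse/penalised variant would shrink many of them toward zero to aid interpretation.
pls.fit(X, y)
print(pls.x_weights_.shape)   # (p, n_components)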
Procedia PDF Downloads 49
319 Formulation and Evaluation of Curcumin-Zn (II) Microparticulate Drug Delivery System for Antimalarial Activity
Authors: M. R. Aher, R. B. Laware, G. S. Asane, B. S. Kuchekar
Abstract:
Objective: Studies have shown that a new combination therapy with artemisinin derivatives and curcumin is unique, with potential advantages over known ACTs. In the present study, an attempt was made to prepare a microparticulate drug delivery system of the curcumin-Zn complex and evaluate it in combination with artemether for antimalarial activity. Material and method: The curcumin-Zn complex was prepared and encapsulated using sodium alginate. The microparticles thus obtained were further coated with various enteric polymers at different coating thicknesses to control the release. The microparticles were evaluated for encapsulation efficiency, drug loading and in vitro drug release. Roentgenographic studies were conducted in rabbits with a BaSO4-tagged formulation. The optimized formulation was screened for antimalarial activity using the P. berghei-infected mice survival test and % parasitemia inhibition, alone (three oral doses of 5 mg/day) and in combination with artemether (i.p. 500, 1000 and 1500 µg). Curcumin-Zn(II) was estimated in serum after oral administration to rats using spectrofluorometry. Results: Microparticles coated with cellulose acetate phthalate showed the most satisfactory and controlled release, with a time of 479 min for 60% drug release. X-ray images taken at different time intervals confirmed the retention of the formulation in the GI tract. Estimation of curcumin in serum by spectrofluorometry showed that the drug concentration is maintained in the blood for a longer time, with a tmax of 6 hours. The survival time (40 days post treatment) of mice infected with P. berghei was compared after treatment with either the Curcumin-Zn(II) microparticles-artemether combination, the curcumin-Zn complex or artemether. Oral administration of Curcumin-Zn(II)-artemether prolonged the survival of P. berghei-infected mice. All the mice treated with Curcumin-Zn(II) microparticles (5 mg/day) plus artemether (1000 µg) survived for more than 40 days and recovered with no detectable parasitemia. Administration of the Curcumin-Zn(II)-artemether combination reduced the parasitemia in mice by more than 90% compared to that in control mice for the first 3 days after treatment. Conclusion: The antimalarial activity of the curcumin-Zn-artemether combination was more pronounced than monotherapy. A single dose of 1000 µg of artemether in the curcumin-Zn combination gives complete protection in P. berghei-infected mice. This may reduce the chances of drug resistance in malaria management.
Keywords: formulation, microparticulate drug delivery, antimalarial, pharmaceutics
Procedia PDF Downloads 394
318 Estimation of Biomedical Waste Generated in a Tertiary Care Hospital in New Delhi
Authors: Priyanka Sharma, Manoj Jais, Poonam Gupta, Suraiya K. Ansari, Ravinder Kaur
Abstract:
Introduction: As much as health care is necessary for the population, so is the management of the biomedical waste produced. Biomedical waste is a wide term used for the waste material produced during the diagnosis, treatment or immunization of human beings and animals, in research, or in the production or testing of biological products. Biomedical waste management is a chain of processes from the point of generation of biomedical waste to its final disposal in the correct and proper way assigned for that particular type of waste. Any deviation from these processes leads to improper disposal of biomedical waste, which itself is a major health hazard. Proper segregation of biomedical waste is the key to biomedical waste management. Improper disposal of BMW can cause sharps injuries, which may lead to HIV, hepatitis B virus and hepatitis C virus infections. Therefore, proper disposal of BMW is of utmost importance. Health care establishments segregate the biomedical waste and dispose of it as per the biomedical waste management rules in India. Objectives: This study was done to observe the current trends of biomedical waste generated in a tertiary care hospital in Delhi. Methodology: Biomedical waste management rounds were conducted in the hospital wards. Relevant details were collected and analysed, and the sites with maximum biomedical waste generation were identified. All the data were cross-checked at the common collection site. Results: The total amount of waste generated in the hospital from January 2014 to December 2014 was 6,39,547 kg, of which 70.5% was general (non-hazardous) waste and the remaining 29.5% was BMW, which consisted of highly infectious waste (12.2%), disposable plastic waste (16.3%) and sharps (1%). The sites producing the maximum quantity of biomedical waste were the Obstetrics and Gynaecology wards, accounting for 45.8% of total biomedical waste production, followed by the Paediatrics, Surgery and Medicine wards with 21.2%, 4.6% and 4.3%, respectively. The maximum average biomedical waste generated was by the Obstetrics and Gynaecology ward with 0.7 kg/bed/day, followed by the Paediatrics, Surgery and Medicine wards with 0.29, 0.28 and 0.18 kg/bed/day, respectively. Conclusions: Hospitals should pay attention to the sites which produce a large amount of BMW to avoid improper segregation of biomedical waste. Also, induction and refresher training programs on biomedical waste management should be conducted to avoid improper management of biomedical waste. Healthcare workers should be made aware of the risks of poor biomedical waste management.
Keywords: biomedical waste, biomedical waste management, hospital-tertiary care, New Delhi
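The headline figures can be cross-checked with simple arithmetic, as in the Python sketch below; the per-bed normalisation function only shows how the kg/bed/day values are typically derived, and any bed counts supplied to it would be hypothetical since the abstract does not report them.

# Worked arithmetic for the reported waste figures (6,39,547 kg in Indian notation = 639,547 kg).
total_waste_kg = 639_547                       # Jan-Dec 2014
bmw_fraction = 0.295
bmw_kg = total_waste_kg * bmw_fraction         # ~188,666 kg of biomedical waste
categories = {"highly infectious": 0.122, "disposable plastic": 0.163, "sharps": 0.01}
for name, frac in categories.items():          # note: 12.2% + 16.3% + 1% = 29.5%, consistent
    print(f"{name}: {total_waste_kg * frac:,.0f} kg")

def kg_per_bed_per_day(annual_ward_waste_kg, beds, days=365):
    """Normalisation behind the per-ward comparison (e.g. 0.7 kg/bed/day for Obs & Gyn)."""
    return annual_ward_waste_kg / (beds * days)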
Procedia PDF Downloads 245
317 Extreme Heat and Workforce Health in Southern Nevada
Authors: Erick R. Bandala, Kebret Kebede, Nicole Johnson, Rebecca Murray, Destiny Green, John Mejia, Polioptro Martinez-Austria
Abstract:
Summer temperature data from Clark County were collected and used to estimate two different heat-related indexes: the heat index (HI) and the excess heat factor (EHF). These two indexes were used jointly with data on health-related deaths in Clark County to assess the effect of extreme heat on the exposed population. The trends of the heat indexes were then analyzed for the 2007-2016 decade, and the correlation between heat wave episodes and the number of heat-related deaths in the area was estimated. The HI showed that this value has increased significantly in June, July, and August over the last ten years. The same trend was found for the EHF, which showed a clear increase in the severity and number of these events per year. The number of heat wave episodes increased from 1.4 per year during the 1980-2016 period to 1.66 per year during the 2007-2016 period. However, a different trend was found for heat-wave-event duration, which decreased from an average of 20.4 days during the trans-decadal period (1980-2016) to 18.1 days during the most recent decade (2007-2016). The number of heat-related deaths was also found to increase from 2007 to 2016, with 2016 having the highest number of heat-related deaths. Both the HI and the number of deaths showed a normal-like distribution for June, July, and August, with the peak values reached in late July and early August. The average maximum HI values correlated better with the number of deaths registered in Clark County than the EHF, probably because the HI uses the maximum temperature and humidity in its estimation, whereas the EHF uses the average mean temperature. However, it is worth testing the EHF for the study zone because it has been reported to fit well in the case of heat-related morbidity. For the overall period, 437 heat-related deaths were registered in Clark County, with 20% of the deaths occurring in June, 52% occurring in July, 18% occurring in August, and the remaining 10% occurring in the other months of the year. The most vulnerable subpopulation was people over 50 years old, for which 76% of the heat-related deaths were registered. Most of the cases were associated with heart disease preconditions. The second most vulnerable subpopulation was young adults (20-50), which accounted for 23% of the heat-related deaths. These deaths were associated with alcohol/illegal drug intoxication.
Keywords: heat, health, hazards, workforce
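One common way to compute the excess heat factor is the Nairn-Fawcett formulation sketched below in Python; the study may apply a variant, and the synthetic temperature series and the way the 95th percentile threshold is taken here are illustrative only (in practice T95 comes from a long-term climatology of daily mean temperatures).

import numpy as np

def excess_heat_factor(daily_mean_temp, t95):
    """EHF_i = EHI_sig,i * max(1, EHI_accl,i), following the widely used Nairn-Fawcett form."""
    t = np.asarray(daily_mean_temp, dtype=float)
    ehf = np.full(t.shape, np.nan)
    for i in range(32, len(t)):
        three_day = t[i-2:i+1].mean()
        ehi_sig = three_day - t95                   # how hot vs. the local climatology
        ehi_accl = three_day - t[i-32:i-2].mean()   # how hot vs. the previous 30 days
        ehf[i] = ehi_sig * max(1.0, ehi_accl)
    return ehf  # positive values flag heat-wave conditions

# Example with a synthetic desert-summer temperature series (values are illustrative only).
rng = np.random.default_rng(1)
temps = 33 + 4 * np.sin(np.linspace(0, np.pi, 120)) + rng.normal(0, 1.5, 120)
print(np.nanmax(excess_heat_factor(temps, t95=np.percentile(temps, 95))))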
Procedia PDF Downloads 104
316 Optimal Tetra-Allele Cross Designs Including Specific Combining Ability Effects
Authors: Mohd Harun, Cini Varghese, Eldho Varghese, Seema Jaggi
Abstract:
Hybridization crosses find a vital role in breeding experiments to evaluate the combining abilities of individual parental lines or crosses for the creation of lines with desirable qualities. There are various ways of obtaining progenies and further studying the combining ability effects of the lines taken in a breeding programme. Some of the most common methods are the diallel or two-way cross, the triallel or three-way cross, and the tetra-allele or four-way cross. These techniques help breeders to improve quantitative traits which are of economic as well as nutritional importance in crops and animals. Amongst these methods, the tetra-allele cross provides extra information in terms of the higher specific combining ability (sca) effects, and the hybrids thus produced exhibit individual as well as population buffering mechanisms because of their broad genetic base. Most of the common commercial hybrids in corn are either three-way or four-way cross hybrids. The tetra-allele cross came out as the most practical and acceptable scheme for the production of slaughter pigs having fast growth rate, good feed efficiency, and carcass quality. Tetra-allele crosses are mostly used for the exploitation of heterosis in commercial silkworm production. Experimental designs involving tetra-allele crosses have been studied extensively in the literature. Optimality of designs has also been considered as a researchable issue. In practical situations, it is advisable to include sca effects in the model, as this information is needed by the breeder to improve economically and nutritionally important quantitative traits. Thus, a model that provides information regarding specific traits by utilizing sca effects along with general combining ability (gca) effects may help breeders to deal with the problem of various stresses. In this paper, a model for experimental designs involving tetra-allele crosses that incorporates both gca and sca has been defined. Optimality aspects of such designs have been discussed incorporating sca effects in the model. Orthogonality conditions have been derived for block designs ensuring estimation of contrasts among the gca effects, after eliminating the nuisance factors, independently from sca effects. A user-friendly SAS macro and web solution (webPTC) have been developed for the generation and analysis of such designs.
Keywords: general combining ability, optimality, specific combining ability, tetra-allele cross, webPTC
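One hedged way to write down a gca/sca linear model for a four-way cross is sketched below in LaTeX; the paper's exact parameterisation may differ, for instance by decomposing the four-parent sca term into two-parent components.

% A possible linear model for a tetra-allele (four-way) cross (i x j) x (k x l) observed in block b,
% with gca effects g and an sca term s for the particular four-parent combination:
\[
  y_{(ij)(kl)b} \;=\; \mu + \beta_b + g_i + g_j + g_k + g_l + s_{(ij)(kl)} + e_{(ij)(kl)b},
\]
% under the usual identifiability constraints \(\sum_i g_i = 0\) and \(\sum s_{(ij)(kl)} = 0\).
% Orthogonality of the block design then allows contrasts among the gca effects to be
% estimated, after eliminating block effects, independently of the sca terms.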
Procedia PDF Downloads 137
315 Antioxidant Status in Synovial Fluid from Osteoarthritis Patients: A Pilot Study in Indian Demography
Authors: S. Koppikar, P. Kulkarni, D. Ingale, N. Wagh, S. Deshpande, A. Mahajan, A. Harsulkar
Abstract:
The crucial role of reactive oxygen species (ROS) in the progression of osteoarthritis (OA) pathogenesis has been endorsed several times, though its exact mechanism remains unclear. Oxidative stress is known to instigate classical stress factors such as cytokines, chemokines and ROS, which hamper the cartilage remodelling process and ultimately result in worsening of the disease. Synovial fluid (SF) is a biological communicator between cartilage and synovium that accumulates redox and biochemical signalling mediators. The present work attempts to measure several oxidative stress markers in the synovial fluid obtained from knee OA patients with varying degrees of disease severity. Thirty OA and five meniscal-tear (MT) patients were graded using the Kellgren-Lawrence scale and assessed for nitric oxide (NO), nitrate-nitrite (NN), 2,2-diphenyl-1-picrylhydrazyl (DPPH), ferric reducing antioxidant potential (FRAP), catalase (CAT), superoxide dismutase (SOD) and malondialdehyde (MDA) levels for comparison. Out of the various oxidative markers studied, NO and SOD showed significant differences between moderate and severe OA (p= 0.007 and p= 0.08, respectively), whereas CAT demonstrated a significant difference between the MT and mild groups (p= 0.07). Interestingly, NN revealed a statistically significant positive correlation with OA severity (p= 0.001 and p= 0.003). MDA, a lipid peroxidation by-product, was estimated to be highest in early OA when compared to MT (p= 0.06). However, FRAP did not show any correlation with OA severity or the MT control. NO is a bio-regulatory molecule essential for several physiological processes and inflammatory conditions. However, due to its short life, exact estimation of NO is difficult; NO and its stable measurable products are still considered important biomarkers of oxidative damage. Levels of NO and nitrite-nitrate in the SF of patients with OA indicated their involvement in disease progression. When the SF groups were compared, a significant correlation among the moderate, mild and MT groups was established. To summarize, the present data illustrated higher levels of NO, SOD, CAT, DPPH and MDA in early OA in comparison with MT as a control group. NN emerged as a prognostic biomarker in knee OA patients, which may act as a future target in OA treatment.
Keywords: antioxidant, knee osteoarthritis, oxidative stress, synovial fluid
Procedia PDF Downloads 477
314 Green Procedure for Energy and Emission Balancing of Alternative Scenario Improvements for Cogeneration System: A Case of Hardwood Lumber Manufacturing Process
Authors: Aldona Kluczek
Abstract:
Energy-efficient processes have become a pressing research field in manufacturing. The arguments for having effective industrial energy-efficiency processes interact with several factors: economic and environmental impact, and energy security. Improvements in energy efficiency are most often achieved by implementation of more efficient technology or manufacturing processes. Current processes of electricity production represent the biggest consumption of energy and the greatest amount of emissions to the environment. The goal of this study is to improve the potential energy savings and reduce greenhouse emissions related to improvement scenarios for the treatment of hardwood lumber produced by an industrial plant operating in the U.S., through the application of a green balancing procedure, in order to find the preferable efficient technology. The green procedure for energy is based on analysis of energy efficiency data. Three alternative scenarios for the cogeneration plant (CHP) construction are considered: generation of fresh steam, the purchase of a new boiler with an operating pressure of 300 pounds per square inch gauge (PSIG), and the installation of a new boiler with a 600 PSIG pressure. In this paper, the application of bottom-down modelling of energy flow to devise a streamlined Energy and Emission Flow Analysis method for the technology of producing electricity is illustrated. It identifies the efficiency or technology of a given process to be reached through the effective use of energy, or energy management. Results have shown that the third scenario seems to be the most efficient alternative scenario from the environmental and economic standpoints for treating hardwood lumber. The energy conservation options evaluated could save an estimated 6,215.78 MMBtu/yr each year, which represents 9.5% of the total annual energy usage. The total annual potential cost savings from all recommendations are $143,523/yr, which represents 30.1% of the total annual energy costs. Estimates indicate that energy cost savings of up to 43% (US$ 143,337.85), representing 18.6% of the total annual energy costs, are possible.
Keywords: alternative scenario improvements, cogeneration system, energy and emission flow analyze, energy balancing, green procedure, hardwood lumber manufacturing process
Procedia PDF Downloads 208
313 Acute Neurophysiological Responses to Resistance Training; Evidence of a Shortened Super Compensation Cycle and Early Neural Adaptations
Authors: Christopher Latella, Ashlee M. Hendy, Dan Vander Westhuizen, Wei-Peng Teo
Abstract:
Introduction: Neural adaptations following resistance training interventions have been widely investigated; however, the evidence regarding the mechanisms of early adaptation is less clear. Understanding neural responses to an acute resistance training session is pivotal in the prescription of frequency, intensity and volume in applied strength and conditioning practice. Therefore, the primary aim of this study was to investigate the time course of neurophysiological mechanisms post training against current super compensation theory, and secondly, to examine whether these responses reflect the neural adaptations observed with resistance training interventions. Methods: Participants (N=14) completed a randomised, counterbalanced crossover study comparing control, strength and hypertrophy conditions. The strength condition involved 3 x 5RM leg extensions with 3 min recovery, while the hypertrophy condition involved 3 x 12RM with 60 s recovery. Transcranial magnetic stimulation (TMS) and peripheral nerve stimulation were used to measure excitability of the central and peripheral neural pathways, and maximal voluntary contraction (MVC) to quantify strength changes. Measures were taken pre, immediately post, 10, 20 and 30 mins and 1, 2, 6, 24, 48, 72 and 96 hrs following training. Results: Significant decreases were observed at post, 10, 20 and 30 min and 1 and 2 hrs for both training groups compared to the control group for force (p < .05), maximal compound wave (p < .005) and silent period (p < .05). A significant increase in corticospinal excitability (p < .005) was observed for both groups. The difference in corticospinal excitability between the strength and hypertrophy groups approached significance, with a large effect size (η² = .202). All measures returned to baseline within 6 hrs post training. Discussion: Neurophysiological mechanisms appear to be significantly altered in the period 2 hrs post training, returning to homeostasis by 6 hrs. The evidence suggests that the time course of neural recovery post resistance training is 18-40 hours shorter than in previous super compensation models. Strength and hypertrophy protocols showed similar response profiles, with the current findings suggesting greater post-training corticospinal drive from hypertrophy training, despite previous evidence that strength training requires greater neural input. The increase in corticospinal drive and decrease in inhibition appear to be a compensatory mechanism for decreases in peripheral nerve excitability and maximal voluntary force output. The changes in corticospinal excitability and inhibition are akin to adaptive processes observed with training interventions of 4 wks or longer. It appears that the 2 hr recovery period post training is the most influential for priming further neural adaptations with resistance training. Secondly, prescribed resistance sessions can be scheduled closer together than previous super compensation theory suggests for optimal strength gains.
Keywords: neural responses, resistance training, super compensation, transcranial magnetic stimulation
Procedia PDF Downloads 283
312 RAD-Seq Data Reveals Evidence of Local Adaptation between Upstream and Downstream Populations of Australian Glass Shrimp
Authors: Sharmeen Rahman, Daniel Schmidt, Jane Hughes
Abstract:
Paratya australiensis Kemp (Decapoda: Atyidae) is a widely distributed indigenous freshwater shrimp, highly abundant in eastern Australia. This species has been considered a model stream organism for studying genetics, dispersal, biology, behaviour and evolution in atyids. Paratya has a filter-feeding and scavenging habit, which plays a significant role in the formation of lotic community structure. It has been shown to reduce periphyton and sediment on hard substrates of coastal streams and hence acts as a strongly interacting ecosystem macroconsumer. In addition, Paratya is one of the major food sources for stream-dwelling fishes. Paratya australiensis is a cryptic species complex consisting of 9 highly divergent mitochondrial DNA lineages. Among them, one lineage has been observed to favour upstream sites at higher altitudes, with cooler water temperatures. This study aims to identify local adaptation in upstream and downstream populations of this lineage in three streams in the Conondale Range, north-east of Brisbane, Queensland, Australia. Two populations (upstream and downstream) from each stream have been chosen to test for local adaptation, and a parallel pattern of adaptation is expected across all streams. Six populations, each consisting of 24 individuals, were sequenced using the restriction-site-associated DNA sequencing (RAD-seq) technique. Genetic markers (SNPs) were developed using double-digest RAD sequencing (ddRAD-seq). These were used for de novo assembly of the Paratya genome. De novo assembly was done using the Stacks program and produced 56,344 loci for 47 individuals from one stream. Among these, 39 individuals shared 5,819 loci, and these markers are being used to test for local adaptation between upstream and downstream populations using Fst outlier tests (Arlequin) and Bayesian analysis (BayeScan). The Fst outlier test detected 27 loci likely to be under selection, and the Bayesian analysis also detected 27 loci as under selection. Among these 27 loci, 3 loci showed significant evidence of selection using the BayeScan program. On the other hand, upstream and downstream populations are strongly diverged at neutral loci, with an Fst of 0.37. Similar analyses will be done with all six populations to determine if there is a parallel pattern of adaptation across all streams. Furthermore, multi-locus among-population covariance analysis will be carried out to identify potential markers under selection, as well as to compare single-locus versus multi-locus approaches for detecting local adaptation. Adaptive genes identified in this study can be used in future studies to design primers and test for adaptation in related crustacean species.
Keywords: Paratya australiensis, rainforest streams, selection, single nucleotide polymorphism (SNPs)
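A simplified per-locus Fst screen in Python is sketched below as a proxy for the Arlequin/BayeScan outlier tests mentioned above; the estimator, the allele frequencies and the outlier threshold are all illustrative assumptions rather than the study's actual analysis.

import numpy as np

def per_locus_fst(p_up, p_down):
    """Simple Wright/Nei-style Fst per SNP from allele frequencies in two populations.
    (Arlequin and BayeScan use more elaborate estimators; this is only a proxy.)"""
    p_up, p_down = np.asarray(p_up), np.asarray(p_down)
    p_bar = (p_up + p_down) / 2.0
    h_t = 2 * p_bar * (1 - p_bar)                          # expected heterozygosity, pooled
    h_s = (2 * p_up * (1 - p_up) + 2 * p_down * (1 - p_down)) / 2  # mean within-population
    with np.errstate(divide="ignore", invalid="ignore"):
        fst = np.where(h_t > 0, (h_t - h_s) / h_t, 0.0)
    return fst

# Flag candidate loci under divergent selection as those in the upper tail of the empirical
# Fst distribution (illustrative threshold; the frequencies below are made up).
rng = np.random.default_rng(2)
p_up, p_down = rng.uniform(0.05, 0.95, 5819), rng.uniform(0.05, 0.95, 5819)
fst = per_locus_fst(p_up, p_down)
outliers = np.where(fst > np.quantile(fst, 0.995))[0]
print(len(outliers), "candidate outlier loci")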
Procedia PDF Downloads 255
311 Emergence of Neurodiversity and Awareness of Autism Among School Teachers- A Preliminary Survey
Authors: Tanvi Rajesh Sanghavi
Abstract:
Introduction: Neurodiversity is a concept which captures the different ways in which everyone's brain functions and is considered part of normal variation. It is a strength-based approach which focuses on the individual's strengths and capabilities and believes in providing support wherever necessary. In many parts of the world, those diagnosed with autism spectrum disorder have been ostracized and ridiculed due to their sensory and communication differences. Hence, it becomes important for teachers to have knowledge about autism and understand the needs of children with autism. Need: India is rich in terms of cultural, linguistic and religious diversity. It is important to study neurodiversity in such a population for a better understanding of neurodiverse individuals and appropriate intervention. Aim & objectives: This study examines teachers' knowledge of the causes, traits and educational requirements of children with autism spectrum disorder (ASD). It also aims to find out whether mainstream schools actually provide training programs to teachers to manage such children, along with the necessary accommodations. Method: The current study was a cross-sectional study conducted among school teachers. A total of 30 school teachers were enrolled after informed consent. The participants were directed to a Google form consisting of objective questions. The first part of the questionnaire elicited information about the school, teaching experience, qualifications, etc. There were specific questions extracting details on attending/conducting sensitization and professional programs in regard to care for autistic children. The second part of the questionnaire consisted of basic questions on the teacher's understanding of diagnosis, traits, causes, the road to recovery, and the educational and communication needs of autistic children from the teacher's perspective. The responses were tabulated and analyzed descriptively. Results: Most of the teachers had 5–10 years of teaching experience. The majority of the teachers used the term "special child" for autistic children. Around 54.8% (17 teachers) of the teachers felt that the parents of autistic children should teach their child adaptive skills, and 41.9% of the teachers felt that they should seek medical intervention. About 50% of the teachers felt that the cause of autism is related to pre-natal maternal factors, and about 40% felt that its cause is genetic. Only a small percentage of teachers felt that they were trained to manage children with autism. More than 50% of the teachers mentioned that their schools do not conduct training programs for managing these children. Discussion & Conclusion: In this study, the knowledge and perspectives of teachers on children with ASD were studied. The most widely held contemporary belief is that genetic factors play a major part in the development of ASD, although the existing evidence is muddled, with numerous opposing perspectives on the nature of this mechanism. It is worth noting that any culture's level of humanity is mirrored in how that society "treats" its vulnerable population.
Keywords: autism, neurodiversity, awareness, education
Procedia PDF Downloads 17
310 Counting Fishes in Aquaculture Ponds: Application of Imaging Sonars
Authors: Juan C. Gutierrez-Estrada, Inmaculada Pulido-Calvo, Ignacio De La Rosa, Antonio Peregrin, Fernando Gomez-Bravo, Samuel Lopez-Dominguez, Alejandro Garrocho-Cruz, Jairo Castro-Gutierrez
Abstract:
Semi-intensive aquaculture in traditional earth ponds is the main rearing system in Southern Spain. These fish rearing systems account for approximately two thirds of aquatic production in this area, which has made a significant contribution to the regional economy in recent years. In this type of rearing system, a crucial aspect is the correct quantification and control of fish abundance in the ponds, because the fish farmer knows how many fish he puts in the ponds but does not know how many he will harvest at the end of the rearing period. This is a consequence of mortality induced by different causes, such as pathogenic agents (parasites, viruses and bacteria) and other factors such as predation by fish-eating birds and poaching. Tracking fish abundance in these installations is very difficult because the ponds usually take up a large area of land and the management of the water flow is not automated. Therefore, there is a very high degree of uncertainty about fish abundance, which strongly hinders the management and planning of sales. A novel and non-invasive procedure to count fish in the ponds is by means of imaging sonars, particularly fixed systems and/or systems linked to aquatic vehicles such as Remotely Operated Vehicles (ROVs). In this work, a method based on census station procedures is proposed to evaluate the accuracy of fish abundance estimation using images obtained from multibeam sonars. The results indicate that it is possible to obtain a realistic approximation of the number of fish, their sizes and therefore the biomass contained in the ponds. This research is included in the framework of the KTTSeaDrones Project (‘Conocimiento y transferencia de tecnología sobre vehículos aéreos y acuáticos para el desarrollo transfronterizo de ciencias marinas y pesqueras 0622-KTTSEADRONES-5-E’) financed by the European Regional Development Fund (ERDF) through the Interreg V-A Spain-Portugal Programme (POCTEP) 2014-2020.
Keywords: census station procedure, fish biomass, semi-intensive aquaculture, multibeam sonars
Procedia PDF Downloads 229
309 A Regression Model for Predicting Sugar Crystal Size in a Fed-Batch Vacuum Evaporative Crystallizer
Authors: Sunday B. Alabi, Edikan P. Felix, Aniediong M. Umo
Abstract:
Crystal size distribution is of great importance in sugar factories. It determines the market value of granulated sugar and also influences the cost of production of sugar crystals. Typically, sugar is produced using a fed-batch vacuum evaporative crystallizer. The crystallization quality is examined by the crystal size distribution at the end of the process, which is quantified by two parameters: the average crystal size of the distribution, the mean aperture (MA), and the width of the distribution, the coefficient of variation (CV). Lack of real-time measurement of the sugar crystal size hinders its feedback control and eventual optimisation of the crystallization process. An attractive alternative is to use a soft sensor (model-based method) for online estimation of the sugar crystal size. Unfortunately, the available models for the sugar crystallization process are not suitable as they do not contain variables that can be measured easily online. The main contribution of this paper is the development of a regression model for estimating the sugar crystal size as a function of input variables which are easy to measure online. This has the potential to provide real-time estimates of crystal size for its effective feedback control. Using 7 input variables, namely initial crystal size (Lo), temperature (T), vacuum pressure (P), feed flowrate (Ff), steam flowrate (Fs), initial super-saturation (S0) and crystallization time (t), preliminary studies were carried out using Minitab 14 statistical software. Based on the existing sugar crystallizer models and the typical ranges of these 7 input variables, 128 datasets were obtained from a 2-level factorial experimental design. These datasets were used to obtain a simple but online-implementable 6-input crystal size model. It seems the initial crystal size (Lₒ) does not play a significant role. The goodness of the resulting regression model was evaluated. The coefficient of determination, R², was obtained as 0.994, and the maximum absolute relative error (MARE) was obtained as 4.6%. The high R² (~1.0) and the reasonably low MARE values are an indication that the model is able to predict sugar crystal size accurately as a function of the 6 easy-to-measure online variables. Thus, the model can be used as a soft sensor to provide real-time estimates of sugar crystal size during the sugar crystallization process in a fed-batch vacuum evaporative crystallizer.
Keywords: crystal size, regression model, soft sensor, sugar, vacuum evaporative crystallizer
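The soft-sensor idea can be sketched as a plain multiple linear regression on the six inputs, as in the Python snippet below; the data are synthetic stand-ins for the 128 factorial-design runs, the coefficients are made up, and the paper's actual model may include interaction or higher-order terms.

import numpy as np
from sklearn.linear_model import LinearRegression

# Columns stand in for the six easy-to-measure inputs: temperature, vacuum pressure,
# feed flowrate, steam flowrate, initial supersaturation and crystallization time.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(128, 6))
true_coef = np.array([0.8, -0.3, 0.5, 0.4, 0.6, 1.2])       # hypothetical effects
y = 1.5 + X @ true_coef + rng.normal(0, 0.02, 128)          # stand-in for mean crystal size (MA)

model = LinearRegression().fit(X, y)
pred = model.predict(X)

r2 = model.score(X, y)                                      # coefficient of determination
mare = np.max(np.abs((y - pred) / y)) * 100                 # maximum absolute relative error, %
print(f"R^2 = {r2:.3f}, MARE = {mare:.1f}%")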
Procedia PDF Downloads 208
308 Single and Sequential Extraction for Potassium Fractionation and Nano-Clay Flocculation Structure
Authors: Chakkrit Poonpakdee, Jing-Hua Tzen, Ya-Zhen Huang, Yao-Tung Lin
Abstract:
Potassium (K) is a known macronutrient and essential element for plant growth. Single leaching and modified sequential extraction schemes have been developed to estimate the relative phase associations of soil samples. The sequential extraction process is a step in analyzing the partitioning of metals affected by environmental conditions, but it is not a tool for estimation of K bioavailability. The traditional single leaching method has been used to classify K speciation for a long time; it depends on K availability to the plants and is used for potash fertilizer recommendation rates. Clay minerals in soil are a factor controlling soil fertility. The change in the micro-structure of clay minerals under various environments (i.e., swelling or shrinking) is characterized using Transmission X-ray Microscopy (TXM). The objectives of this study are to 1) compare the distribution of K speciation between the single leaching and sequential extraction processes and 2) determine the clay particle flocculation structure before/after suspension with K+ using TXM. Four tropical soil samples were selected: farming without K fertilizer (10 years), long-term applied K fertilizer (10 years; 168-240 kg K2O ha-1 year-1), red soil (450-500 kg K2O ha-1 year-1) and forest soil. The results showed that the amounts of K speciation by the single leaching method were highest for mineral K, followed by HNO3 K, non-exchangeable K, NH4OAc K, exchangeable K and water-soluble K, respectively. The sequential extraction process indicated that most K speciation in soil was associated with the residual, organic matter, Fe or Mn oxide and exchangeable fractions, and the K fraction associated with carbonate was not detected in the tropical soil samples. The soils with long-term applied K fertilizer and the red soil had higher exchangeable K than the soil farmed long-term without K fertilizer and the forest soil. The results indicated that one way to increase the available K (water-soluble K and exchangeable K) is to apply K fertilizer and organic fertilizer. The two-dimensional TXM images of clay particle suspensions with K+ show that the clay minerals aggregate into closed-void cellular networks. The porous cellular structure of soil aggregates in 1 M KCl solution had larger and much larger empty voids than in 0.025 M KCl and deionized water, respectively. TXM nanotomography is a new technique that can be useful in the field as a tool for better understanding of clay mineral micro-structure.
Keywords: potassium, sequential extraction process, clay mineral, TXM
Procedia PDF Downloads 290307 Acoustic Emission Monitoring of Surface Roughness in Ultra High Precision Grinding of Borosilicate-Crown Glass
Authors: Goodness Onwuka, Khaled Abou-El-Hossein
Abstract:
The increase in the demand for precision optics, coupled with the scarcity of research output on the ultra high precision grinding of precision optics compared with the ultra high precision diamond turning of optical metals, has fostered the need for more research on the ultra high precision grinding of optical lenses. Furthermore, the increasingly stringent demands for the nanometric surface finishes achieved through the lapping, polishing and grinding processes necessary for the use of borosilicate-crown glass in the automotive and optics industries have created the need to effectively monitor surface roughness during the production process. The acoustic emission phenomenon has been proven to be a useful monitoring technique in several manufacturing processes, ranging from the monitoring of bearing production to tool wear estimation. This paper introduces a rare and unique approach, applying the acoustic emission technique to monitor the surface roughness of borosilicate-crown glass during an ultra high precision grinding process. The research was carried out on a 4-axis Nanoform 250 ultra high precision lathe using an ultra high precision grinding spindle to machine the flat surface of the borosilicate-crown glass with the tip of the grinding wheel. A careful selection of parameters and design of experiments was implemented using the Box-Behnken method to vary the wheel speed, feed rate and depth of cut at three levels with a 3-center-point design. The average surface roughness was measured using a Taylor Hobson PGI Dimension XL optical profilometer, and an acoustic emission data acquisition device from National Instruments was used to acquire the signals, with the data acquisition code designed in National Instruments LabVIEW software for acquisition at a sampling rate of 2 million samples per second. The results show that the raw and root mean square amplitude values of the acoustic signals increased with a corresponding increase in the measured average surface roughness values for the different parameter combinations. Therefore, this research concludes that acoustic emission monitoring is a potential technique for monitoring surface roughness in the ultra high precision grinding of borosilicate-crown glass.Keywords: acoustic emission, borosilicate-crown glass, surface roughness, ultra high precision grinding
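A minimal sketch of how peak and root-mean-square (RMS) amplitude features might be extracted from an acoustic emission record sampled at 2 MS/s for correlation with measured roughness; the burst-like signal and the window length below are assumptions, not the authors' acquisition code.

```python
# Sketch: compute peak and windowed RMS amplitude of an AE record at 2 MS/s.
# The synthetic burst signal below stands in for a real grinding recording.
import numpy as np

fs = 2_000_000                      # sampling rate, samples/s (2 MS/s)
t = np.arange(0, 0.01, 1 / fs)      # 10 ms record

rng = np.random.default_rng(1)
signal = 0.05 * rng.standard_normal(t.size)                           # noise floor
signal[4000:6000] += 0.5 * np.sin(2 * np.pi * 150e3 * t[4000:6000])   # AE burst

def ae_features(x, window=2048):
    """Peak amplitude and windowed RMS of an AE signal (window size assumed)."""
    n = (x.size // window) * window
    frames = x[:n].reshape(-1, window)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    return np.max(np.abs(x)), rms

peak, rms = ae_features(signal)
print(f"peak amplitude = {peak:.3f} V, mean RMS = {rms.mean():.4f} V, "
      f"max RMS = {rms.max():.4f} V")
```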
Procedia PDF Downloads 291306 Evaluation of Monoterpenes Induction in Ugni molinae Ecotypes Subjected to a Red Grape Caterpillar (Lepidoptera: Arctiidae) Herbivory
Authors: Manuel Chacon-Fuentes, Leonardo Bardehle, Marcelo Lizama, Claudio Reyes, Andres Quiroz
Abstract:
The insect-plant interaction is a complex process in which the plant is able to release chemical signals that modify the behavior of insects. Insect herbivory can trigger mechanisms that increase the production of secondary metabolites, allowing the plant to cope with herbivores. Monoterpenes are a class of secondary metabolites involved in direct defense, acting as repellents of herbivores, or in indirect defense, acting as attractants for insect predators. In addition, an increase in monoterpene concentration is an effect commonly associated with herbivory; hence, plants subjected to herbivory damage increase monoterpene production in comparison to undamaged plants. In this framework, co-evolutionary aspects play a fundamental role in the adaptation of herbivores to their host and in the counter-adaptive strategies of the plants to avoid the herbivores. In this context, Ugni molinae 'murtilla' is a native shrub from Chile characterized by its antioxidant activity, mainly related to the phenolic compounds present in its fruits. The larval stage of the red grape caterpillar Chilesia rudis Butler (Lepidoptera: Arctiidae) has been reported as an important defoliator of U. molinae. This insect is native to Chile and has probably been involved in a co-evolutionary process with murtilla. Therefore, we hypothesized that herbivory by the red grape caterpillar increases the emission of monoterpenes in Ugni molinae. Ecotypes 19-1 and 22-1 of murtilla were established and maintained at 25 °C in the Laboratorio de Química Ecológica at Universidad de La Frontera. Red grape caterpillars of ~40 mm were collected from grasses near Temuco (Chile) and were deprived of food for 24 h before the assays. Ten caterpillars were placed on the foliage of ecotypes 19-1 and 22-1 and allowed to feed for 48 h. After this time, the caterpillars were removed from the plants and the monoterpenes were collected: a glass chamber was used to enclose the plants and a Porapak-Q column was used to trap the volatiles. After 24 h of collection, the columns were desorbed with hexane. The samples were then injected into a gas chromatograph coupled to a mass spectrometer and the monoterpenes were identified according to the NIST library. All experiments were performed in triplicate. The results showed that α-pinene, β-phellandrene, limonene and 1,8-cineole were the main monoterpenes released by the murtilla ecotypes. For ecotype 19-1, the abundance of α-pinene was significantly higher in plants subjected to herbivory (100%) than in control plants (54.58%), whereas β-phellandrene and 1,8-cineole were observed only in control plants. For ecotype 22-1, there was no significant difference in monoterpene abundance. In conclusion, the results suggest a trade-off involving β-phellandrene and 1,8-cineole in response to herbivory damage by the red grape caterpillar, generating an increase in α-pinene abundance.Keywords: Chilesia rudis, gas chromatography, monoterpenes, Ugni molinae
Procedia PDF Downloads 152305 Estimation of Hydrogen Production from PWR Spent Fuel Due to Alpha Radiolysis
Authors: Sivakumar Kottapalli, Abdesselam Abdelouas, Christoph Hartnack
Abstract:
Spent nuclear fuel generates a mixed field of ionizing radiation in the surrounding water. This radiation field is generally dominated by gamma rays and a limited flux of fast neutrons, while the fuel cladding effectively attenuates beta and alpha particle radiation. A small fraction of spent nuclear fuel exhibits some degree of cladding penetration due to pitting corrosion and mechanical failure. Breaches in the fuel cladding expose small volumes of water in the cask to alpha and beta ionizing radiation. The safety of the transport of radioactive material is assured by the package complying with the IAEA Requirements for the Safe Transport of Radioactive Material SSR-6. It is therefore of high interest to avoid the generation of hydrogen inside the cavity, which may lead to an explosive mixture; the risk of hydrogen production, along with that of other radiolysis gases, should be analyzed for a typical spent fuel for safety reasons. This work aims to perform a realistic study of hydrogen production by radiolysis assuming the most penalizing initial conditions. It consists of the calculation of the radionuclide inventory of a pellet, taking into account burnup and decay. Westinghouse 17X17 PWR fuel has been chosen, and data have been analyzed for different sets of enrichment, burnup, irradiation cycles and storage conditions. The inventory serves as the entry point for simulation studies of hydrogen production using radiolysis kinetic models in MAKSIMA-CHEMIST. Dose rates decrease strongly within ~45 μm from the fuel surface towards the solution (water) in the case of alpha radiation, while the dose rate decrease is slower for beta and even slower for gamma radiation. Calculations were carried out to obtain spectra as a function of time, and the radiation dose rate profiles were taken as the input data for the iterative calculations. The hydrogen yield was found to be around 0.02 mol/L. Calculations were also performed for a realistic scenario considering a capsule containing the spent fuel rod, and the corresponding hydrogen yield is discussed. Experiments are in progress to validate the hydrogen production rate using a cyclotron at >5 MeV (at ARRONAX, Nantes).Keywords: radiolysis, spent fuel, hydrogen, cyclotron
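A back-of-the-envelope sketch of how an alpha-radiolysis hydrogen yield can be estimated from an absorbed dose rate and a primary G-value; the G-value, dose rate and exposure time below are illustrative assumptions, not the MAKSIMA-CHEMIST kinetic scheme used in the study.

```python
# Sketch: order-of-magnitude H2 production from alpha radiolysis of water.
# G-value and dose rate are illustrative; the paper uses a full kinetic model.
EV_PER_GY = 6.241e18        # eV absorbed per kg per Gy
AVOGADRO = 6.022e23         # molecules per mol

def h2_yield_mol_per_litre(dose_rate_gy_s, time_s, g_h2_per_100ev=1.3):
    """Primary H2 yield in mol/L of water (density ~1 kg/L assumed)."""
    dose_ev_per_kg = dose_rate_gy_s * time_s * EV_PER_GY
    molecules_per_kg = dose_ev_per_kg * g_h2_per_100ev / 100.0
    return molecules_per_kg / AVOGADRO      # mol per kg ~= mol per litre

# Example: assumed 1 Gy/s alpha dose rate in the thin water layer, 1 day exposure
print(f"H2 ~ {h2_yield_mol_per_litre(1.0, 24 * 3600):.3e} mol/L")
```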
Procedia PDF Downloads 521304 Remote Sensing Application in Environmental Researches: Case Study of Iran Mangrove Forests Quantitative Assessment
Authors: Neda Orak, Mostafa Zarei
Abstract:
Environmental assessment is an important step in environmental management, and various methods and techniques have been produced and implemented for it. Remote sensing (RS) is widely used in many scientific and research fields such as geology, cartography, geography, agriculture, forestry, land use planning and the environment. It can show cyclical changes in earth surface objects and can delineate the limits of earth phenomena on the basis of recorded changes and deviations in electromagnetic reflectance. This research assessed mangrove forests using RS techniques; a quantitative analysis of the mangrove forests in the Basatin and Bidkhoon estuaries was the aim of the work. It was carried out using Landsat satellite images from 1975 to 2013 matched to ground control points. These mangroves are the last distribution in the northern hemisphere, so the study can provide a good background for better management of this important ecosystem. Landsat has provided researchers with valuable images for detecting earth surface changes; this research used the MSS, TM, ETM+ and OLI sensors from 1975, 1990, 2000 and 2003-2013. Changes were studied after essential corrections such as error fixing, band combination and georeferencing, with the 2012 image as the base image, using maximum likelihood classification and the IPVI index; the classification was supervised. A 2004 Google Earth image and GPS ground points (2010-2012) were used to verify the changes obtained from the satellite images. The results showed that the mangrove area in Bidkhoon in 2012 was 1119072 m2 by GPS, 1231200 m2 by maximum likelihood supervised classification and 1317600 m2 by IPVI; the Basatin areas were, respectively, 466644 m2, 88200 m2 and 63000 m2. The final results show that the forests have declined naturally, while in Basatin the decline is due to human activities. The loss was offset by planting over many years, although the trend has been declining again in recent years. This indicates that satellite images have a high ability to estimate such environmental processes; the research also showed a high correlation of the images and indexes such as IPVI and NDVI with the ground control points.Keywords: IPVI index, Landsat sensor, maximum likelihood supervised classification, Nayband National Park
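A minimal sketch of the vegetation indices mentioned above (NDVI and IPVI) computed from Landsat red and near-infrared reflectance arrays; the reflectance values, the IPVI threshold and the area calculation are placeholders for illustration, and no particular Landsat scene is implied.

```python
# Sketch: NDVI = (NIR - Red) / (NIR + Red); IPVI = NIR / (NIR + Red).
# The reflectance arrays below are placeholders for real Landsat bands.
import numpy as np

red = np.array([[0.08, 0.10], [0.22, 0.25]])   # red-band reflectance
nir = np.array([[0.45, 0.40], [0.28, 0.26]])   # near-infrared reflectance

ndvi = (nir - red) / (nir + red)
ipvi = nir / (nir + red)            # equals (NDVI + 1) / 2, bounded in [0, 1]

mangrove_mask = ipvi > 0.6          # threshold is an illustrative assumption
pixel_area_m2 = 30 * 30             # nominal Landsat TM/ETM+/OLI pixel size
print(f"estimated mangrove area: {mangrove_mask.sum() * pixel_area_m2} m2")
print("NDVI:\n", ndvi.round(2))
```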
Procedia PDF Downloads 293303 Life at the Fence: Lived Experiences of Navigating Cultural and Social Complexities among South Sudanese Refugees in Australia
Authors: Sabitra Kaphle, Rebecca Fanany, Jenny Kelly
Abstract:
Australia welcomes significant numbers of humanitarian arrivals every year with the commitment to provide equal opportunities and the resources required for integration into the new society. Over the last two decades, more than 24,000 South Sudanese people have come to call Australia home. Most of these refugees experienced several challenges while settling into the new social structures and service systems in Australia. The aim of this research is to explore the factors influencing the social and cultural integration of South Sudanese refugees who have settled in Australia. Methodology: This study used a phenomenological approach based on in-depth interviews designed to elicit the lived experiences of South Sudanese refugees settled in Australia. It applied the principles of narrative ethnography, allowing participants an opportunity to speak about themselves and their experiences of social and cultural integration using their own words. Twenty-six participants were recruited to the study. Participants were long-term residents (over 10 years of settlement experience) who self-identified as refugees from South Sudan. Participants were given the opportunity to speak in the language of their choice, and interviews were conducted by a bilingual interviewer in their preferred language, time and location. Interviews were recorded, transcribed verbatim and translated into English for thematic analysis. Findings: Participants' experiences portray the complexities of integrating into a new society, given the daily challenges that South Sudanese refugees face. Themes that emerged from the narratives indicate that South Sudanese refugees express a high level of association with a Sudanese identity while demonstrating a significant level of integration into Australian society. Despite this identity dilemma, these refugees show a high level of consensus about the experience of living in Australia, which is closely associated with a group identity. In the process of maintaining identity and social affiliation, participants experience significant inter-generational cultural conflicts in adapting to Australian society. Identity conflict often emerges centering on what constitutes authentic cultural practice and on who is entitled to claim membership of the South Sudanese culture. Conclusions: The results of this study suggest that the cultural identity and social affiliations of South Sudanese refugees settling into Australian society are complex and multifaceted. While there are positive elements of their integration into the new society, inter-generational conflicts and identity confusion require further investigation to understand the context that will assist refugees to integrate more successfully into their new society. Given the length of stay of these refugees in Australia, government and settlement agencies may benefit from developing appropriate resources and processes that are adaptive to the social and cultural context in which newly arrived refugees will live.Keywords: cultural integration, inter-generational conflict, lived experiences, refugees, South Sudanese
Procedia PDF Downloads 115302 Development of a Data-Driven Method for Diagnosing the State of Health of Battery Cells, Based on the Use of an Electrochemical Aging Model, with a View to Their Use in Second Life
Authors: Desplanches Maxime
Abstract:
Accurate estimation of the remaining useful life of lithium-ion batteries for electronic devices is crucial. Data-driven methodologies encounter challenges related to data volume and acquisition protocols, particularly in capturing a comprehensive range of aging indicators. To address these limitations, we propose a hybrid approach that integrates an electrochemical model with state-of-the-art data analysis techniques, yielding a comprehensive database. Our methodology involves infusing an aging phenomenon into a Newman model, leading to the creation of an extensive database capturing various aging states based on non-destructive parameters. This database serves as a robust foundation for subsequent analysis. Leveraging advanced data analysis techniques, notably principal component analysis and t-Distributed Stochastic Neighbor Embedding, we extract pivotal information from the data. This information is harnessed to construct a regression function using either random forest or support vector machine algorithms. The resulting predictor demonstrates a 5% error margin in estimating remaining battery life, providing actionable insights for optimizing usage. Furthermore, the database was built from the Newman model calibrated for aging and performance using data from a European project called Teesmat. The model was then initialized numerous times with different aging values, for instance, with varying thicknesses of SEI (Solid Electrolyte Interphase). This comprehensive approach ensures a thorough exploration of battery aging dynamics, enhancing the accuracy and reliability of our predictive model. Of particular importance is our reliance on the database generated through the integration of the electrochemical model. This database serves as a crucial asset in advancing our understanding of aging states. Beyond its capability for precise remaining life predictions, this database-driven approach offers valuable insights for optimizing battery usage and adapting the predictor to various scenarios. This underscores the practical significance of our method in facilitating better decision-making regarding lithium-ion battery management.Keywords: Li-ion battery, aging, diagnostics, data analysis, prediction, machine learning, electrochemical model, regression
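A minimal sketch of the kind of pipeline described above: dimensionality reduction followed by a random-forest regression of remaining life on simulation-derived aging features. The feature table, the SEI-thickness-to-remaining-life relation and the variable names are synthetic placeholders, not the calibrated Newman-model database from the study.

```python
# Sketch: PCA + random forest regression of remaining life from aging features.
# The synthetic features below stand in for the model-generated aging database.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(42)
n = 500
sei_thickness = rng.uniform(5, 50, n)                         # nm, assumed aging indicator
capacity_fade = 0.002 * sei_thickness + rng.normal(0, 0.005, n)
resistance = 0.01 + 0.0005 * sei_thickness + rng.normal(0, 0.001, n)
X = np.column_stack([sei_thickness, capacity_fade, resistance])
remaining_life = 2000 - 30 * sei_thickness + rng.normal(0, 50, n)   # cycles

X_tr, X_te, y_tr, y_te = train_test_split(X, remaining_life,
                                           test_size=0.2, random_state=0)
model = Pipeline([
    ("pca", PCA(n_components=2)),
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
])
model.fit(X_tr, y_tr)

rel_err = np.abs(model.predict(X_te) - y_te) / np.abs(y_te)
print(f"mean relative error: {100 * rel_err.mean():.1f} %")
```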
Procedia PDF Downloads 70301 Productivity and Household Welfare Impact of Technology Adoption: A Microeconometric Analysis
Authors: Tigist Mekonnen Melesse
Abstract:
Since rural households are basically entitled to food through their own production, improving productivity may enhance the welfare of the rural population through higher food availability at the household level and lower prices of agricultural products. Increasing agricultural productivity through the use of improved technology is one of the desired outcomes of sensible food security and agricultural policy. The ultimate objective of this study was to evaluate the potential impact of improved agricultural technology adoption on smallholders' crop productivity and welfare. The study was conducted in Ethiopia, covering 1500 rural households drawn from four regions and 15 rural villages, based on data collected by the Ethiopian Rural Household Survey. An endogenous treatment effect model is employed in order to account for the selection bias in the adoption decision that is expected from the self-selection of households into technology adoption. The treatment indicator, technology adoption, is a binary variable indicating whether the household used improved seeds and chemical fertilizer or not. The outcome variables were cereal crop productivity, measured as the real value of production, and household welfare, measured as real per capita consumption expenditure. The results of the analysis indicate a positive and significant effect of improved technology use on rural households' crop productivity and welfare in Ethiopia. Adoption of improved seeds and chemical fertilizer alone increases crop productivity by 7.38 and 6.32 percent per year, respectively, and improves household welfare by 1.17 and 0.25 percent per month, respectively. The combined effect of both technologies when adopted jointly is an increase in crop productivity of 5.82 percent and an improvement in welfare of 0.42 percent. In addition, the educational level of the household head, farm size, labor use, participation in an extension program, expenditure on inputs and the number of oxen positively affect crop productivity and household welfare, while a large household size negatively affects household welfare. In our estimation, the average treatment effect of technology adoption (the average treatment effect on the treated, ATET) is the same as the average treatment effect (ATE), implying that the average predicted outcome for the treatment group is similar to the average predicted outcome for the whole population.Keywords: endogenous treatment effect, technologies, productivity, welfare, Ethiopia
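A hedged sketch of a two-step control-function approximation to the endogenous treatment-effect model described above (the study itself may use full maximum likelihood): a first-stage probit for adoption, an inverse-Mills-ratio correction term, and a second-stage outcome regression. The simulated data and all variable names are hypothetical.

```python
# Sketch: two-step control-function estimator for an endogenous binary treatment
# (technology adoption) affecting an outcome (log crop productivity).
# Simulated data and variable names are hypothetical placeholders.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(7)
n = 1500
educ = rng.normal(6, 3, n)                 # household head education, years
farm = rng.normal(1.5, 0.5, n)             # farm size, ha
ext = rng.binomial(1, 0.4, n)              # extension contact (instrument-like)
u = rng.normal(0, 1, n)                    # unobservable driving self-selection

# Adoption decision (probit-type) correlated with the outcome error via u
adopt = (0.1 * educ + 0.3 * farm + 0.8 * ext + u > 1.0).astype(float)
y = 2.0 + 0.07 * adopt + 0.03 * educ + 0.2 * farm + 0.5 * u \
    + rng.normal(0, 0.3, n)                # synthetic log productivity

# Step 1: probit of adoption on covariates plus the instrument
Z = sm.add_constant(np.column_stack([educ, farm, ext]))
probit = sm.Probit(adopt, Z).fit(disp=False)
zb = Z @ probit.params
imr = np.where(adopt == 1, norm.pdf(zb) / norm.cdf(zb),
               -norm.pdf(zb) / (1 - norm.cdf(zb)))   # inverse Mills ratio

# Step 2: outcome regression including the selection-correction term
X = sm.add_constant(np.column_stack([adopt, educ, farm, imr]))
ols = sm.OLS(y, X).fit()
print(ols.summary(xname=["const", "adopt", "educ", "farm", "lambda"]))
```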
Procedia PDF Downloads 655300 Study on Varying Solar Blocking Depths in the Exploration of Energy-Saving Renovation of the Energy-Saving Design of the External Shell of Existing Buildings: Using Townhouse Residences in Kaohsiung City as an Example
Authors: Kuang Sheng Liu, Yu Lin Shih*, Chun Ta Tzeng, Cheng Chen Chen
Abstract:
Buildings in the 21st century face issues such as extreme climate and low-carbon, energy-saving requirements. Many countries hold the view that a building, over its medium- and long-term life cycle, is an energy-consuming entity. Regarding the use of architectural resources, initiatives including the United Nations-implemented "Global Green Policy" and "Sustainable building and construction initiative" are all working towards "zero-energy building" and "zero-carbon building" policies. Because of this, countries are supporting industry development with policies such as "mandatory design criteria", "green procurement policy" and "incentive grants and rebates programme". The results of this study can provide a reference for sustainable building renovation design criteria. Focusing on townhouses in Kaohsiung City, this study uses different solar blocking depths to evaluate the design and energy-saving renovation of the outer shell of existing buildings, based on data collection and the selection of representative cases. Using building data from a building information model (BIM), simulation and efficiency evaluations are carried out and verified with simulation estimation, which leads into the eco-efficiency model (EEM) for the life cycle cost efficiency (LCCE) evaluation. The buildings selected for this research sit in a north-south direction and are set with different solar blocking depths, and their indoor air-conditioning consumption rates are compared. The simulated EUI value at the current balcony depth of 1 metre acts as a reference value of 100%; the solar-blocking balcony depth is then increased to 1.5, 2, 2.5 and 3 metres, for a total of 5 different depths, to compare the air-conditioning improvement efficacy. The air-conditioning efficiency analysis shows that, relative to the reference EUI value, 1.5 m saves 3.08%, 2 m saves 6.74%, 2.5 m saves 9.80% and 3 m saves 12.72%. This shows that solar-blocking balconies have the potential to increase indoor air-conditioning efficiency.Keywords: building information model, eco-efficiency model, energy-saving in the external shell, solar blocking depth.
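A small worked example of the savings calculation implied above, savings% = (EUI_ref − EUI_d) / EUI_ref × 100 relative to the 1 m balcony reference; the absolute EUI values below are hypothetical and chosen only so that they reproduce the reported percentages.

```python
# Sketch: air-conditioning EUI savings relative to the 1 m balcony reference.
# Absolute EUI values are hypothetical; only the resulting percentages matter.
eui = {1.0: 100.00, 1.5: 96.92, 2.0: 93.26, 2.5: 90.20, 3.0: 87.28}  # kWh/m2.yr

ref = eui[1.0]
for depth, value in eui.items():
    saving = (ref - value) / ref * 100
    print(f"balcony depth {depth:.1f} m: EUI saving {saving:.2f} %")
```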
Procedia PDF Downloads 402299 A Program of Data Analysis on the Possible State of the Antibiotic Resistance in Bangladesh Environment in 2019
Authors: S. D. Kadir
Abstract:
Background: Antibiotics have always been at the centre of the revolution of modern microbiology. Micro-organisms and their pathogenicity, resistant organisms, and the inappropriate or excessive use of various types of antibiotic agents have fuelled multidrug-resistant pathogenic organisms. This review mainly focuses on the therapeutic state of antibiotic resistance and the possible roots of its development in Bangladesh in 2019. Methodology: The systematic review progressed through a series of analyses of manuscripts published on Google Scholar, PubMed and ResearchGate, and collected relevant information from established healthcare and diagnostic centres and their subdivisions all over Bangladesh. The analysis of the possible state of antibiotic resistance was based on selected medical reports and on random assays of the extent of resistance to individual antibiotics in 2019. Results: Five research articles and 50 medical report summaries were reviewed, and around 5 patients were interviewed during the estimation process. We prioritized research articles in which the analysis had been performed by appropriate use of the Kirby-Bauer method; the Kirby-Bauer technique is preferred as it provides greater efficiency, lower cost, and greater convenience and simplicity of application. In most of the reports, Clinical and Laboratory Standards Institute guidelines were strictly followed. Most of the reports indicate significant resistance to the beta-lactam drugs, specifically the derivatives of penicillins and cephalosporins (rare use of first-generation cephalosporins, overuse of second- and third-generation cephalosporins and misuse of fourth-generation cephalosporins), which are responsible for almost 67 percent of the bacterial resistance. Moreover, approximately 20 percent of the resistance was due to drug efflux from the bacterial cell in the case of tetracyclines, sulphonamides and their derivatives. Conclusion: Approximately 90 percent of the antibiotic resistance is due to the usage of relatively and truly broad-spectrum antibiotics. This environment has been created by circumstances in which the excessive usage of broad-spectrum antibiotics has led to the disruption of native bacteria and a series of antimicrobial resistances, disturbing the surrounding environment and leading to a state of super-infection.Keywords: antibiotics, antibiotic resistance, Kirby Bauer method, microbiology
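A minimal sketch of how resistance proportions by antibiotic class might be tabulated from Kirby-Bauer susceptibility calls of the kind summarized above; the isolate records below are entirely hypothetical and no CLSI breakpoints are encoded.

```python
# Sketch: tabulate percent resistance per antibiotic class from S/I/R calls.
# The records below are hypothetical examples, not the surveyed data.
from collections import defaultdict

records = [                       # (antibiotic class, interpretation)
    ("cephalosporin", "R"), ("cephalosporin", "R"), ("cephalosporin", "S"),
    ("penicillin", "R"), ("penicillin", "I"),
    ("tetracycline", "R"), ("tetracycline", "S"),
    ("sulphonamide", "S"),
]

counts = defaultdict(lambda: {"total": 0, "resistant": 0})
for drug_class, call in records:
    counts[drug_class]["total"] += 1
    counts[drug_class]["resistant"] += (call == "R")

for drug_class, c in counts.items():
    pct = 100 * c["resistant"] / c["total"]
    print(f"{drug_class}: {pct:.0f}% resistant ({c['resistant']}/{c['total']})")
```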
Procedia PDF Downloads 120