Search results for: risk evaluation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11715

8205 Machine Learning Techniques for Estimating Ground Motion Parameters

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site condition. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates potential benefits from employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Network, Random Forest, and Support Vector Machine. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing them as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitude 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. This database was selected because of the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states. Accuracy of the models in predicting intensity measures, generalization capability of the models for future data, as well as usability of the models are discussed in the evaluation process. The results indicate the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data are available.
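
As an illustration of the data-driven approach described above, the following is a minimal sketch (not the authors' implementation; the feature set and synthetic records are assumptions) of fitting a Random Forest to predict ln(PGA) from magnitude, hypocentral distance, and a site parameter:

```python
# Minimal sketch (not the authors' model): a Random Forest ground-motion
# predictor trained on magnitude, log hypocentral distance and Vs30.
# The synthetic records below are placeholders for a real flatfile.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
mag = rng.uniform(3.0, 5.8, n)            # moment magnitude
rhyp = rng.uniform(4.0, 500.0, n)         # hypocentral distance, km
vs30 = rng.uniform(200.0, 800.0, n)       # site stiffness proxy, m/s

# Toy functional form standing in for observed ln(PGA) with scatter
ln_pga = 1.2 * mag - 1.6 * np.log(rhyp) - 0.3 * np.log(vs30 / 760.0) \
         - 2.0 + rng.normal(0.0, 0.6, n)

X = np.column_stack([mag, np.log(rhyp), vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, ln_pga, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=300, min_samples_leaf=5, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out records:", round(model.score(X_te, y_te), 3))
```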

Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine

Procedia PDF Downloads 120
8204 The Impact of Model Specification Decisions on Teacher Value-Added Effectiveness: Choosing the Correct Predictors

Authors: Ismail Aslantas

Abstract:

Value-Added Models (VAMs), statistical methods for evaluating the effectiveness of teachers and schools based on student achievement growth, have attracted decision-makers' and researchers' attention over the last decades. As a result of this attention, many studies have been conducted in recent years to discuss these statistical models from different aspects. This research focused on the importance of contextual variables in VAM estimations; therefore, it was undertaken to examine the extent to which value-added effectiveness estimates for teachers can be affected by the use of contextual predictors. Using longitudinal data over three years from an international school context, value-added teacher effectiveness was estimated with ordinary least-squares value-added models, and the effectiveness of the teachers was examined. The longitudinal dataset in this study consisted of three major sources: students' attainment scores over up to three years and their characteristics, teacher background information, and school characteristics. A total of 1,027 teachers and their 35,355 eighth-grade students were examined to understand the impact of model specifications on value-added teacher effectiveness evaluation. Models were created using a selection method that adds a predictor at each step, then removes it and adds another one at a subsequent step; changes in model fit were checked by reviewing changes in R² values. Cohen's effect size statistics were also employed in order to determine the degree of the relationship between teacher characteristics and their effectiveness. Overall, the results indicated that the prior attainment score is the most powerful predictor of the current attainment score: 47.1 percent of the variation in the grade 8 math score can be explained by the prior attainment score in grade 7. The research findings raise issues to be considered in VAM implementations for teacher evaluations and make suggestions to researchers and practitioners.
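
To illustrate the model-fit comparison described above, the following is a minimal sketch (synthetic data and variable names are assumptions, not the study's dataset) of quantifying the change in R² when prior attainment is added to an OLS value-added model:

```python
# Minimal sketch (not the study's dataset): quantifying the change in R^2
# when prior attainment is added to an OLS value-added model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
prior = rng.normal(50, 10, n)                    # grade 7 attainment
ses = rng.normal(0, 1, n)                        # an illustrative contextual predictor
current = 5 + 0.7 * prior + 1.5 * ses + rng.normal(0, 6, n)  # grade 8 score

X_base = sm.add_constant(np.column_stack([ses]))
X_full = sm.add_constant(np.column_stack([ses, prior]))

r2_base = sm.OLS(current, X_base).fit().rsquared
r2_full = sm.OLS(current, X_full).fit().rsquared
print(f"R^2 without prior attainment: {r2_base:.3f}")
print(f"R^2 with prior attainment:    {r2_full:.3f}")
print(f"Change in R^2:                {r2_full - r2_base:.3f}")
```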

Keywords: model specification, teacher effectiveness, teacher performance evaluation, value-added model

Procedia PDF Downloads 127
8203 Evaluating an Educational Intervention to Reduce Pesticide Exposure Among Farmers in Nigeria

Authors: Gift Udoh, Diane S. Rohlman, Benjamin Sindt

Abstract:

BACKGROUND: There is concern regarding the widespread use of pesticides and its impacts on public health. Farmers in Nigeria frequently apply pesticides, including organophosphate pesticides, which are known neurotoxicants, and they receive little guidance on how much to apply or information about safe handling practices. Pesticide poisoning is one of the major hazards that farmers face in Nigeria. Because farmers continue to use highly neurotoxic pesticides for agricultural activities and receive little or no information on safe handling and application rates, they continue to develop mild and severe illnesses caused by high exposures to pesticides. The project aimed to reduce pesticide exposure among rural farmers in Nigeria by identifying hazards associated with pesticide use and by developing and pilot testing training to reduce exposures to pesticides utilizing the hierarchy of controls. METHODS: Information on pesticide knowledge, behaviors, barriers to safety, and prevention methods was collected from farmers in Nigeria through workplace observations, questionnaires, and interviews. Pre- and post-surveys were used to measure farmers' knowledge before and after the delivery of pesticide safety training. Training topics included the benefits and risks of using pesticides, routes of exposure and health effects, a pesticide label activity, use and selection of PPE, ways to prevent exposure, and information on local resources. The training was evaluated among farmers, and changes in knowledge, attitudes, and behaviors were collected prior to and following the training. RESULTS: The training was administered to 60 farmers, with a mean age of 35 and a range of farming experience (<1 year to >50 years). There was an overall increase in knowledge after the training. In addition, farmers perceived a greater immediate risk from exposure to pesticides, and their perception of their personal risk increased. For example, farmers believed that pesticide risk is greater to children than to adults, recognized that just because a pesticide is put on the market does not mean it is safe, and were more confident that they could get advice about handling pesticides. Also, there was greater awareness about behaviors that can increase their exposure (mixing pesticides with bare hands, eating food in the field, not washing hands before eating after applying pesticides, walking in recently sprayed fields, splashing pesticides on their clothes, and unsafe pesticide storage). CONCLUSION: These results build on existing evidence from a 2022 article highlighting the need for pesticide safety training in Nigeria, which suggested that pesticide safety educational programs should be community-based, grassroots-style, and family-oriented. Educating farmers on agricultural safety while letting them share their experiences with their peers is an effective way of creating awareness of the dangers associated with handling pesticides. In addition, because pesticide safety training may not reach some rural communities, especially in Nigeria, intentional scouting of rural farming communities and delivery of pesticide safety training there will improve knowledge of pesticide hazards. There is also a need for pesticide information centers situated in rural farming communities or agro supply stores to give rural farmers access to information.

Keywords: pesticide exposure, pesticide safety, Nigeria, rural farming, pesticide education

Procedia PDF Downloads 168
8202 Application of the State of the Art of Hydraulic Models to Manage Coastal Problems, Case Study: The Egyptian Mediterranean Coast Model

Authors: Alsayed Ibrahim Diwedar, Ahmed ElKut, Mohamed Yossef

Abstract:

Coastal problems stress the coastal environment because of its complexity. The dynamic interaction between the sea and the land, in addition to human interventions and activities, results in serious problems that threaten coastal areas worldwide. This makes the coastal environment highly vulnerable to natural processes such as flooding and erosion and to the impacts of human activities such as pollution. Protecting and preserving this vulnerable coastal zone, with its valuable ecosystems, calls for addressing these coastal problems; this, in the end, will support the sustainability of coastal communities and serve current and future generations. Consequently, applying suitable management strategies and sustainable development that consider the unique characteristics of the coastal system is a must. The coastal management philosophy aims to resolve the conflicts of interest between human development activities and this dynamic nature. Modeling emerges as a successful tool that provides support to decision-makers, engineers, and researchers for better management practices. Modeling tools have proved accurate and reliable in prediction. With their capability to integrate data from various sources, such as bathymetric surveys, satellite images, and meteorological data, they offer engineers and scientists the possibility to understand this complex dynamic system and to examine in depth the interaction between natural and human-induced factors, enabling decision-makers to make informed choices and develop effective strategies for sustainable development and risk mitigation. The application of modeling tools supports the evaluation of various scenarios by affording the possibility to simulate and forecast different coastal processes, from hydrodynamic and wave actions to the resulting flooding and erosion. The state-of-the-art application of modeling tools in coastal management allows for better understanding and prediction of coastal processes, optimizing infrastructure planning and design, supporting ecosystem-based approaches, assessing climate change impacts, managing hazards, and, finally, facilitating stakeholder engagement. This paper emphasizes the role of hydraulic models in enhancing the management of coastal problems by discussing the diverse applications of modeling in coastal management. It highlights the role of modeling in understanding complex coastal processes and predicting outcomes, and the importance of informing decision-makers with modeling results, which gives technical and scientific support for achieving sustainable coastal development and protection.

Keywords: coastal problems, coastal management, hydraulic model, numerical model, physical model

Procedia PDF Downloads 16
8201 Optimal Trajectories for Highly Automated Driving

Authors: Christian Rathgeber, Franz Winkler, Xiaoyu Kang, Steffen Müller

Abstract:

In this contribution two approaches for calculating optimal trajectories for highly automated vehicles are presented and compared. The first one is based on a non-linear vehicle model, used for evaluation. The second one is based on a simplified model and can be implemented on a current ECU. In usual driving situations both approaches show very similar results.

Keywords: trajectory planning, direct method, indirect method, highly automated driving

Procedia PDF Downloads 526
8200 A Case Study Report on Acoustic Impact Assessment and Mitigation of the Hyprob Research Plant

Authors: D. Bianco, A. Sollazzo, M. Barbarino, G. Elia, A. Smoraldi, N. Favaloro

Abstract:

The activities described in the present paper have been conducted in the framework of the HYPROB-New Program, carried out by the Italian Aerospace Research Centre (CIRA) and promoted and funded by the Italian Ministry of University and Research (MIUR) in order to improve the national background on rocket engine systems for space applications. The Program has the strategic objective of improving national system and technology capabilities in the field of liquid rocket engines (LRE) for future space propulsion system applications, with specific regard to LOX/LCH4 technology. The main purpose of the HYPROB program is to design and build a Propulsion Test Facility (HIMP) allowing test activities on liquid thrusters. The development of skills in liquid rocket propulsion can only be achieved through extensive test campaigns. Following its mission, CIRA has planned the development of new testing facilities and infrastructures for space propulsion characterized by adequate sizes and instrumentation. The IMP test cell is devoted to testing articles representative of small combustion chambers, fed with oxygen and methane, both in liquid and gaseous phase. This article describes the activities that have been carried out for the evaluation of the acoustic impact and its consequent mitigation. The impact of the simulated acoustic disturbance has been evaluated, first, using an approximated method based on experimental data by Baumann and Coney, included in “Noise and Vibration Control Engineering” edited by Vér and Beranek. This methodology, used to evaluate the free-field radiation of a jet in an ideal acoustic medium, analyzes the jet noise in detail and assumes all sources acting at the same time. It considers as the principal radiation source the jet mixing noise, caused by the turbulent mixing of the jet gas and the ambient medium. Empirical models, allowing a direct calculation of the Sound Pressure Level, are commonly used for rocket noise simulation. The model named after K. Eldred is probably one of the most exploited in this area. In this paper, an improvement of the Eldred standard model has been used for a detailed investigation of the acoustic impact of the Hyprob facility. This new formulation contains an explicit expression for the acoustic pressure of each equivalent noise source, in terms of amplitude and phase, allowing the investigation of the source correlation effects and their propagation through wave equations. In order to enhance the evaluation of the facility's acoustic impact, including an assessment of the mitigation strategies to be set in place, a more advanced simulation campaign has been conducted using both an in-house code for noise propagation and scattering and a commercial code for industrial noise environmental impact, CadnaA. The noise prediction obtained with the revised Eldred-based model has then been used to formulate an empirical/BEM (Boundary Element Method) hybrid approach allowing the evaluation of the barrier mitigation effect at the design stage. This approach has been compared with the analogous empirical/ray-acoustics approach, implemented within CadnaA using a customized definition of sources and directivity factor. The resulting impact evaluation study is reported here, along with the design-level barrier optimization for noise mitigation.
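
For orientation, the following is a minimal sketch of an Eldred-type overall-level estimate (a strong simplification of the spectral, source-by-source method used in the paper; the acoustic efficiency, flow values, and uniform spherical spreading are assumptions):

```python
# Minimal sketch (a simplification of NASA SP-8072 / Eldred-type estimates,
# not the revised source-by-source model used in the paper): overall sound
# power from the jet mechanical power via an assumed acoustic efficiency,
# then an overall SPL at distance r assuming uniform spherical spreading.
import math

def overall_spl(mdot_kg_s, ve_m_s, r_m, eta=0.005):
    """Overall sound pressure level, dB re 20 uPa, from a jet/plume.

    mdot_kg_s : exhaust mass flow rate [kg/s]
    ve_m_s    : exhaust (jet) velocity [m/s]
    r_m       : receiver distance [m]
    eta       : acoustic efficiency (assumed, typically a fraction of a percent)
    """
    w_mech = 0.5 * mdot_kg_s * ve_m_s**2          # mechanical power of the jet [W]
    w_acoustic = eta * w_mech                      # radiated acoustic power [W]
    lw = 10.0 * math.log10(w_acoustic / 1.0e-12)   # sound power level [dB re 1 pW]
    # Uniform spherical spreading (no directivity, ground, or barrier effects)
    return lw - 10.0 * math.log10(4.0 * math.pi * r_m**2)

# Example with illustrative numbers only
print(round(overall_spl(mdot_kg_s=5.0, ve_m_s=2500.0, r_m=300.0), 1), "dB")
```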

Keywords: acoustic impact, industrial noise, mitigation, rocket noise

Procedia PDF Downloads 138
8199 An Investigation of the Relevant Factors of Unplanned Readmission within 14 Days of Discharge in a Regional Teaching Hospital in South Taiwan

Authors: Xuan Hua Huang, Shu Fen Wu, Yi Ting Huang, Pi Yueh Lee

Abstract:

Background: In Taiwan, the Taiwan Healthcare Indicator Series regards the rate of hospital readmission as an important indicator of healthcare quality. Unplanned readmission not only affects the patient's condition but also increases healthcare utilization and healthcare costs. Purpose: The purpose of this study was to explore the factors associated with adult unplanned readmission within 14 days of discharge at a regional teaching hospital in South Taiwan. Methods: A retrospective review design was used. A total of 495 patients with unplanned readmissions within 14 days and 878 without readmission were recruited from a regional teaching hospital in Southern Taiwan. The instruments used included the Charlson Comorbidity Index, demographic characteristics, and disease-related variables. Statistical analyses were performed with SPSS version 22.0, using descriptive statistics (means, standard deviations, and percentages) and inferential statistics (t-test, chi-square test, and logistic regression). Results: The rate of unplanned readmission within 14 days was 36%. The majority were male (268, 54.1%), 318 (64.2%) were aged >65, and the mean age was 68.8±14.65 years (range 23-98 years). The mean comorbidity score was 3.77±2.73. The top three diagnoses at readmission were digestive diseases (32.7%), respiratory diseases (15.2%), and genitourinary diseases (10.5%). There were significant relationships between readmission and gender, age, marital status, comorbidity status, and discharge planning services (χ2: 3.816-16.474, p: 0.051~0.000). Logistic regression analysis showed that older age (OR = 1.012, 95% CI: 1.003, 1.021), multi-morbidity (OR = 0.712~4.040, 95% CI: 0.559~8.522), and consultation with discharge planning services (OR = 1.696, 95% CI: 1.105, 2.061) were associated with a higher risk of readmission. Conclusions: This study found that multi-morbidity was an independent risk factor for unplanned readmission within 14 days. It is recommended that the medical team provide integrated care for multi-morbidity in order to improve patients' self-care ability and reduce the 14-day unplanned readmission rate.
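
As an illustration of the logistic-regression step described above, the following is a minimal sketch (synthetic data, not the study's records) of fitting the model and reporting odds ratios with 95% confidence intervals:

```python
# Minimal sketch (synthetic data, not the study's records): fitting a logistic
# regression for 14-day readmission and reporting odds ratios with 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1373                                    # 495 readmitted + 878 not, as in the abstract
df = pd.DataFrame({
    "age": rng.normal(69, 15, n),
    "comorbidity": rng.poisson(3.8, n),     # Charlson-style comorbidity count
    "discharge_planning": rng.integers(0, 2, n),
})
logit_p = -6 + 0.012 * df["age"] + 0.25 * df["comorbidity"] + 0.5 * df["discharge_planning"]
df["readmit_14d"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["age", "comorbidity", "discharge_planning"]])
fit = sm.Logit(df["readmit_14d"], X).fit(disp=0)

odds_ratios = pd.DataFrame({
    "OR": np.exp(fit.params),
    "2.5%": np.exp(fit.conf_int()[0]),
    "97.5%": np.exp(fit.conf_int()[1]),
})
print(odds_ratios.round(3))
```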

Keywords: unplanned readmission, comorbidities, Charlson comorbidity index, logistic regression

Procedia PDF Downloads 144
8198 Magnitude and Factors of Risky Sexual Practice among Day Laborers in Ethiopia: A Systematic Review and Meta-Analysis, 2023

Authors: Kalkidan Worku, Eniyew Tegegne, Menichil Amsalu, Samuel Derbie Habtegiorgis

Abstract:

Introduction: Because of the seasonal nature of the work, day laborers are exposed to risky sexual practices. Since the majority of them live far away from their birthplace and family, they engage in unplanned and multiple sexual practices. These unplanned and unprotected sexual experiences are a risk for different types of sexual health crises. This study aimed to assess the pooled prevalence of risky sexual practices and its determinants among day laborers in Ethiopia. Methods: Online databases, including PubMed, Google Scholar, Science Direct, African Journals Online, Academia Edu, Semantic Scholar, and university repository sites, were searched from database inception until March 2023. The PRISMA 2020 guideline was used to conduct the review. Among 851 extracted studies, ten articles were retained for the final quantitative analysis. To identify the source of heterogeneity, a sub-group analysis and the I² test were performed. Publication bias was assessed by using a funnel plot and Egger's and Begg's tests. The pooled prevalence of risky sexual practices was calculated. In addition, the association between determinant factors and risky sexual practice was determined using a pooled odds ratio (OR) with a 95% confidence interval. Result: The pooled prevalence of risky sexual practices among day laborers was 46.00% (95% CI: 32.96, 59.03). Being single (OR: 2.49; 95% CI: 1.29 to 4.83), substance use (OR: 1.79; 95% CI: 1.40 to 2.29), alcohol intake (OR: 4.19; 95% CI: 2.19 to 8.04), watching pornography (OR: 5.49; 95% CI: 2.99 to 10.09), discussion about sexual and reproductive health (OR: 4.21; 95% CI: 1.34 to 13.21), visiting night clubs (OR: 2.86; 95% CI: 1.79 to 4.57), and risk perception (OR: 0.37; 95% CI: 0.20 to 0.70) were the possible factors associated with risky sexual practice among day laborers in Ethiopia. Conclusions: A large proportion of day laborers engaged in risky sexual practices. Interventions targeting awareness of sexual and reproductive health for day laborers should be implemented. Continuous peer education on sexual health should be given to day laborers. Sexual and reproductive health services should be accessible in their workplaces to maximize condom utilization and to facilitate sexual health education for all day laborers.
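
To illustrate the pooling step described above, the following is a minimal sketch (with placeholder study values, not the reviewed data) of a DerSimonian-Laird random-effects pooled prevalence on the logit scale:

```python
# Minimal sketch (illustrative study values, not the reviewed data): pooling
# prevalence estimates with a DerSimonian-Laird random-effects model on the
# logit scale, as is common in prevalence meta-analyses.
import numpy as np

# (events, sample size) for each included study -- placeholder values
studies = [(120, 300), (80, 250), (150, 280), (60, 200), (90, 310)]

events = np.array([e for e, n in studies], dtype=float)
sizes = np.array([n for e, n in studies], dtype=float)
p = events / sizes
theta = np.log(p / (1 - p))                  # logit prevalence per study
var = 1.0 / events + 1.0 / (sizes - events)  # approximate variance of the logit

w = 1.0 / var                                # fixed-effect weights
theta_fe = np.sum(w * theta) / np.sum(w)
Q = np.sum(w * (theta - theta_fe) ** 2)      # Cochran's Q
k = len(studies)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (var + tau2)                    # random-effects weights
theta_re = np.sum(w_re * theta) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))

pooled = 1 / (1 + np.exp(-theta_re))
ci = [1 / (1 + np.exp(-(theta_re + z * se_re))) for z in (-1.96, 1.96)]
i2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0

print(f"Pooled prevalence: {pooled:.3f} (95% CI {ci[0]:.3f}-{ci[1]:.3f}), I^2 = {i2:.1f}%")
```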

Keywords: day laborers, sexual health, risky sexual practice, unsafe sex, multiple sexual partners

Procedia PDF Downloads 69
8197 Risk Assessment of Flood Defences by Utilising Condition Grade Based Probabilistic Approach

Authors: M. Bahari Mehrabani, Hua-Peng Chen

Abstract:

Management and maintenance of coastal defence structures during the expected life cycle have become a real challenge for decision makers and engineers. Accurate evaluation of the current condition and future performance of flood defence structures is essential for effective practical maintenance strategies on the basis of available field inspection data. Moreover, as coastal defence structures age, it becomes more challenging to implement maintenance and management plans to avoid structural failure. Therefore, condition inspection data are essential for assessing damage and forecasting deterioration of ageing flood defence structures in order to keep the structures in an acceptable condition. The inspection data for flood defence structures are often collected using discrete visual condition rating schemes. In order to evaluate the future condition of the structure, a probabilistic deterioration model needs to be utilised. However, existing deterioration models may not provide a reliable prediction of performance deterioration over a long period due to uncertainties. To tackle this limitation, a time-dependent condition-based model associated with a transition probability needs to be developed on the basis of the condition grade scheme for flood defences. This paper presents a probabilistic method for predicting future performance deterioration of coastal flood defence structures based on condition grading inspection data and deterioration curves estimated by expert judgement. In condition-based deterioration modelling, the main task is to estimate transition probability matrices. The deterioration process of the structure related to the transition states is modelled as a Markov chain process, and a reliability-based approach is used to estimate the probability of structural failure. Visual inspection data according to the United Kingdom Condition Assessment Manual are used to obtain the initial condition grade curve of the coastal flood defences. The initial curves are then modified in order to develop transition probabilities through non-linear regression based optimisation algorithms. Monte Carlo simulations are then used to evaluate the future performance of the structure on the basis of the estimated transition probabilities. Finally, a case study is given to demonstrate the applicability of the proposed method under no-maintenance and medium-maintenance scenarios. Results show that the proposed method can provide an effective predictive model for various situations in terms of available condition grading data. The proposed model also provides useful information on the time-dependent probability of failure in coastal flood defences.
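
As an illustration of the Markov-chain deterioration and Monte Carlo steps described above, the following is a minimal sketch (the transition probabilities are assumed values, not the calibrated matrices from the paper):

```python
# Minimal sketch (transition probabilities are assumed, not the calibrated
# values from the paper): simulating condition-grade deterioration of a flood
# defence as a Markov chain and estimating the probability of reaching the
# worst grade within a planning horizon by Monte Carlo simulation.
import numpy as np

# Annual transition matrix for condition grades 1 (best) to 5 (worst);
# rows sum to 1, and grade 5 is treated as absorbing (no maintenance).
P = np.array([
    [0.90, 0.10, 0.00, 0.00, 0.00],
    [0.00, 0.88, 0.12, 0.00, 0.00],
    [0.00, 0.00, 0.85, 0.15, 0.00],
    [0.00, 0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

def simulate(initial_grade=1, years=50, n_sims=20000, seed=0):
    rng = np.random.default_rng(seed)
    grades = np.full(n_sims, initial_grade - 1)          # zero-based state index
    worst_by_year = np.zeros(years)
    for t in range(years):
        # Sample next grade for every simulated asset from its current row of P
        u = rng.random(n_sims)
        cum = np.cumsum(P[grades], axis=1)
        grades = (u[:, None] > cum).sum(axis=1)
        worst_by_year[t] = np.mean(grades == 4)          # fraction now in grade 5
    return worst_by_year

prob_worst = simulate()
print("P(grade 5) after 20 years:", round(prob_worst[19], 3))
print("P(grade 5) after 50 years:", round(prob_worst[49], 3))
```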

Keywords: condition grading, flood defense, performance assessment, stochastic deterioration modelling

Procedia PDF Downloads 227
8196 Conviviality as a Principle in Natural and Social Realms

Authors: Xiao Wen Xu

Abstract:

There exists a challenge of accommodating and integrating people at risk and those from various backgrounds in urban areas. The success of interdependence as a tool for survival largely rests on the mutually beneficial relationships among individuals within a given society. One approach to meeting this challenge has been written by Ivan Illich in his book, Tools for Conviviality, where he defines 'conviviality' as interactions that help individuals. With the goal of helping the community and applying conviviality as a principle to actors in both natural and social realms of Moss Park in Toronto, the proposal involves redesigning the park and buildings as a series of different health care, extended learning, employment support, armoury, and recreation facilities that integrate the exterior landscape as treatment, teaching, military, and recreation areas; in other words, the proposal links services with access to park space. While buildings are traditionally known to physically provide shelter, parks embody shelter and act as a service, as people often find comfort and relief in being in nature, and Moss Park, in particular, is home to many people at risk. This landscape is not only an important space for the homeless community but also for the rest of the neighborhood. The thesis proposes that the federal government rebuild the current armoury, as it is an obsolete building, while acknowledging the extensive future developments proposed by developers and their impact on public space. The neighbourhood is an underserved area, and the new design develops not just a new armoury but also a complex of interrelated services that are completely integrated into the park. The armoury is redesigned as an integral component of the community that not only serves as a training facility for reservists but also as an emergency shelter in sub-zero temperatures for the homeless community. This paper proposes a new design for Moss Park by examining how 'park buildings', interconnected buildings and parks, can foster empowering relationships that create a supportive public realm.

Keywords: conviviality, natural, social, Ivan Illich

Procedia PDF Downloads 396
8195 Methodological Approach for the Prioritization of Different Micro-Contaminants as Potential River Basin Specific Pollutants in the Upper Tisza River Watershed

Authors: Mihail Simion Beldean-Galea, Virginia Coman, Florina Copaciu, Mihaela Vlassa, Radu Mihaiescu, Adina Croitoru, Viorel Arghius, Modest Gertsiuk, Mikola Gertsiuk

Abstract:

Taking into consideration the huge number of chemicals released into environmental compartments, a proper environmental risk assessment is difficult to perform due to gaps in legislation and improper toxicological assessment of chemical compounds. In Romania, as well as in many other European countries, the chemical status of a water body is characterized taking into consideration the Water Framework Directive (WFD) and the substances listed in Annex X. This Annex includes 45 substances from different classes of organic compounds and heavy metals for which AA-EQS and MAC-EQS have been established. For other compounds which are not included in Annex X, different methodologies to prioritize chemicals for risk assessment and monitoring have been proposed. These methodologies take into account Predicted No-Effect Concentrations (PNECs) of different classes of chemical compounds available from existing risk assessments or from read-across models for acute toxicity to standard test organisms such as Daphnia magna and Selenastrum capricornutum. Our work presents the monitoring results for 30 priority substances, including polyaromatic hydrocarbons, pesticides, halogenated compounds, plasticizers, and heavy metals, and for 34 other substances from different classes of pesticides and pharmaceuticals which are not included on the list of priority substances, obtained in the Upper Tisza River Watershed in Romania and Ukraine. The obtained monitoring data were used to establish the list of the most relevant pollutants in the studied area and to identify potential river basin specific pollutants. For this purpose, two indicators, the Frequency of Exceedance and the Extent of Exceedance of the Predicted No-Effect Concentration (PNEC), were evaluated. These two indicators are based on maximum environmental concentrations (MECs) for priority substances and, for other pollutants, on statistically based averages of the measured concentrations compared to the lowest PNEC thresholds. From the obtained results, it can be concluded that polyaromatic hydrocarbons such as fluoranthene, benzo[a]pyrene, benzo[b]fluoranthene, benzo[k]fluoranthene, benzo(g,h,i)perylene, and indeno(1,2,3-cd)pyrene, and heavy metals such as cadmium, lead, and nickel can be considered river basin specific pollutants, their concentrations exceeding the annual average EQS. Other compounds such as estrone, estriol, 17β-estradiol, naproxen, and some antibiotics (penicillin G, tetracycline, and ceftazidime) should be taken into account for long-term monitoring, as in some cases their concentrations exceed the PNEC. Acknowledgements: This work is performed in the frame of the NATO SfP Programme, Project no. 984440.
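
To illustrate the two prioritization indicators described above, the following is a minimal sketch (with placeholder concentrations and PNEC values, not the monitoring results):

```python
# Minimal sketch (concentrations and PNEC values are placeholders, not the
# monitoring results): computing the two prioritization indicators described
# above -- Frequency of Exceedance and Extent of Exceedance of the PNEC.
import numpy as np

# Measured concentrations at monitoring sites (ug/L) and the lowest PNEC (ug/L)
measurements = {
    "fluoranthene": (np.array([0.02, 0.15, 0.08, 0.30, 0.05]), 0.1),
    "diclofenac":   (np.array([0.01, 0.04, 0.02, 0.03, 0.06]), 0.05),
}

for substance, (mec, pnec) in measurements.items():
    freq_exceed = np.mean(mec > pnec) * 100          # % of samples above the PNEC
    extent_exceed = mec.max() / pnec                  # max MEC / PNEC ratio
    print(f"{substance:14s} frequency of exceedance: {freq_exceed:5.1f}%  "
          f"extent of exceedance (max MEC/PNEC): {extent_exceed:.2f}")
```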

Keywords: prioritization, river basin specific pollutants, Tisza River, water framework directive

Procedia PDF Downloads 300
8194 A High Amylose-Content and High-Yielding Elite Line Is Favorable to Cook 'Nanhan' (Semi-Soft Rice) for Nursing Care Food Particularly for Serving Aged Persons

Authors: M. Kamimukai, M. Bhattarai, B. B. Rana, K. Maeda, H. B. Kc, T. Kawano, M. Murai

Abstract:

Most aged people older than 70 have some degree of difficulty in chewing and swallowing. According to the magnitude of this difficulty, gruel, “nanhan” (semi-soft rice), or ordinary cooked rice is served in general, particularly in sanatoriums and homes for old people in Japan. Nanhan is the name of a cooked rice used in Japan with a softness intermediate between gruel and ordinary cooked rice, which is boiled with an intermediate amount of water between those of the latter two kinds of cooked rice. In the present study, nanhan was made at the rate of 240 g of water to 100 g of milled rice with an electric rice cooker. Murai developed a high amylose-content and high-yielding elite line, ‘Murai 79’. A sensory eating-quality test was performed for the nanhan and ordinary cooked rice of Murai 79 and the standard variety ‘Hinohikari’, which is a representative high eating-quality variety in southern Japan. Panelists (6 to 14 persons) scored each cooked rice on six items, viz. taste, stickiness, hardness, flavor, external appearance, and overall evaluation. Grading (-3 ~ +3) of each trait was performed, regarding the value of the standard variety Hinohikari as 0. Paddy rice produced in a farmer’s field in 2013 and 2014 and in an experimental field of Kochi University in 2015 and 2016 was used for the sensory test. According to the results of the sensory eating-quality test for nanhan, Murai 79 is higher in overall evaluation than Hinohikari in all four years. The former was less sticky than the latter in the four years, but it was statistically significantly harder than the latter throughout the four years. In external appearance, the former was significantly higher than the latter in the four years. In taste, the former was significantly higher than the latter in 2014, but no significant difference was noticed between them in the other three years. There were no significant differences in flavor throughout the four years. Regarding amylose content, Murai 79 was higher than Hinohikari by 3.7 and 5.7% in 2015 and 2016, respectively. As for protein content, Murai 79 was higher than Hinohikari in 2015, but lower in 2016. Consequently, the nanhan of Murai 79 was harder and less sticky, keeping the shape of the grains, as compared with that of Hinohikari, which may be due to its higher amylose content. Hence, the grains in the nanhan of Murai 79 may be recognized more easily in the mouth, which could facilitate the continuous performance of mastication and deglutition, particularly in aged persons. Regarding ordinary cooked rice, Murai 79 was similar to or higher than Hinohikari in both overall evaluation and external appearance, despite its greater hardness and lower stickiness. Additionally, Murai 79 had a brown-rice yield 1.55 times that of Hinohikari, suggesting that it would enable the supply of inexpensive rice for making high-quality nanhan, particularly for aged people in Japan.

Keywords: high-amylose content, high-yielding rice line, nanhan, nursing care food, sensory eating quality test

Procedia PDF Downloads 135
8193 Identification of Toxic Metal Deposition in Food Cycle and Its Associated Public Health Risk

Authors: Masbubul Ishtiaque Ahmed

Abstract:

Food chain contamination by heavy metals has become a critical issue in recent years because of their potential accumulation in biosystems through contaminated water, soil, and irrigation water. Industrial discharge, fertilizers, contaminated irrigation water, fossil fuels, sewage sludge, and municipal wastes are the major sources of heavy metal contamination in soils and subsequent uptake by crops. The main objectives of this project were to determine the levels of minerals, trace elements, and heavy metals in major foods and beverages consumed by the poor and non-poor households of Dhaka city, to assess the dietary exposure risk from heavy metal and trace metal contamination and its potential health implications, and to provide recommendations for action. Heavy metals are naturally occurring elements that have a high atomic weight and a density at least 5 times that of water. Their multiple industrial, domestic, agricultural, medical, and technological applications have led to their wide distribution in the environment, raising concerns over their potential effects on human health and the environment. Their toxicity depends on several factors, including the dose, route of exposure, and chemical species, as well as the age, gender, genetics, and nutritional status of exposed individuals. Because of their high degree of toxicity, arsenic, cadmium, chromium, lead, and mercury rank among the priority metals that are of public health significance. These metallic elements are considered systemic toxicants that are known to induce multiple organ damage, even at low levels of exposure. This review provides an analysis of their environmental occurrence, production and use, potential for human exposure, and molecular mechanisms of toxicity and carcinogenicity.

Keywords: food chain, determine the levels of minerals, trace elements, heavy metals, production and use, human exposure, toxicity, carcinogenicity

Procedia PDF Downloads 279
8192 Rheological Evaluation of Wall Materials and β-Carotene Loaded Microencapsules

Authors: Gargi Ghoshal, Ashay Jain, Deepika Thakur, U. S. Shivhare, O. P. Katare

Abstract:

The main objectives of this work were the rheological characterization of the dispersions and emulsions at different pH used in microcapsule preparation and of the microcapsules obtained from gum arabic (A), guar gum (G), casein (C), and whey protein isolate (W), prepared to keep β-carotene protected from degradation using the complex coacervation microencapsulation (CCM) technique. The evaluation of the rheological properties of the dispersions, the emulsions at different pH, and the resulting microencapsules reveals the changes that occur in the molecular structure of the wall materials during the encapsulation of β-carotene. These dispersions, emulsions at different pH, and formulated microencapsules were subjected to various experiments (flow curve test, amplitude sweep, and frequency sweep test) using a controlled stress dynamic rheometer. Flow properties were evaluated as a function of apparent viscosity under steady shear rates ranging from 0.1 to 100 s-1. The frequency sweep test was conducted to determine the extent of viscosity and elasticity present in the samples at constant strain under angular frequencies ranging from 0.1 to 100 rad/s at 25ºC. The dispersions and emulsions exhibited shear-thinning non-Newtonian behavior, whereas the microencapsules were shear-thickening. The apparent viscosity of the dispersions and emulsions decreased at shear rates up to 20 s-1, and that of the microencapsules up to ~50 s-1; beyond these values, it remained constant. Oscillatory shear experiments showed predominantly viscous liquid behavior up to crossover frequencies of 49.47 rad/s, 57.60 rad/s, and 21.45 rad/s for the C, W, and A dispersions, respectively, 17.85 rad/s for the AW emulsion sample at pH 5.0, and 61.40 rad/s for the GW microencapsules, whereas no such crossover was found for the G dispersion, its emulsion with C, or its microencapsules, which still showed more viscous behavior. The storage and loss moduli decreased with time, and a shift of the crossover towards lower frequencies was observed for A, W, and C. However, the microencapsules showed more viscous behavior as compared to the samples prior to blending.
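
As an illustration of flow-curve analysis of shear-thinning versus shear-thickening behavior, the following is a minimal sketch (synthetic data, not the measured rheograms; the power-law model is an assumed description) of fitting the Ostwald-de Waele model:

```python
# Minimal sketch (synthetic flow-curve data, not the measured rheograms):
# fitting the Ostwald-de Waele power-law model, sigma = K * gamma_dot**n,
# where n < 1 indicates shear-thinning and n > 1 shear-thickening behavior.
import numpy as np
from scipy.optimize import curve_fit

def power_law(shear_rate, K, n):
    return K * shear_rate**n

shear_rate = np.logspace(-1, 2, 30)                       # 0.1 to 100 1/s
# Synthetic shear-thinning stress data with a little noise
rng = np.random.default_rng(3)
stress = 2.5 * shear_rate**0.45 * (1 + rng.normal(0, 0.03, shear_rate.size))

(K, n), _ = curve_fit(power_law, shear_rate, stress, p0=(1.0, 0.5))
apparent_viscosity = K * shear_rate**(n - 1)              # eta_app = sigma / gamma_dot

behavior = "shear-thinning" if n < 1 else "shear-thickening"
print(f"K = {K:.2f} Pa.s^n, n = {n:.2f} -> {behavior}")
```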

Keywords: viscosity, gums, proteins, frequency sweep test, apparent viscosity

Procedia PDF Downloads 245
8191 Mild Hypothermia Versus Normothermia in Patients Undergoing Cardiac Surgery: A Propensity Matched Analysis

Authors: Ramanish Ravishankar, Azar Hussain, Mahmoud Loubani, Mubarak Chaudhry

Abstract:

Background and Aims: Currently, there are no strict guidelines for cardiopulmonary bypass temperature management in cardiac surgery not involving the aortic arch. The aim of this study was to compare patient outcomes between mild hypothermia and normothermia in on-pump cardiac surgery not involving the aortic arch. Methods: This was a retrospective cohort study from January 2015 until May 2023. Patients who underwent cardiac surgery with cardiopulmonary bypass temperatures ≥32oC were included and stratified into mild hypothermia (32oC – 35oC) and normothermia (>35oC) cohorts. Propensity matching was applied through the nearest neighbour method (1:1) in RStudio, using the risk factors detailed in the EuroSCORE. The primary outcome was mortality. Secondary outcomes included post-operative stay, intensive care unit readmission, re-admission, stroke, and renal complications. Patients who had major aortic surgery and off-pump operations were excluded. Results: Each cohort had 1675 patients. There was a significant increase in overall mortality in the mild hypothermia cohort (3.59% vs. 2.32%; p=0.04912). There was also a greater stroke incidence (2.09% vs. 1.13%; p=0.0396) and transient ischaemic attack (TIA) risk (3.1% vs. 1.49%; p=0.0027). There was no significant difference in renal complications (9.13% vs. 7.88%; p=0.2155). Conclusions: Patients who underwent mild hypothermia during cardiopulmonary bypass had significantly greater mortality, stroke, and transient ischaemic attack incidence. Mild hypothermia does not appear to provide any benefit over normothermia, nor any neuroprotective benefit. These results differ from those of other major studies; further trials and studies need to be conducted to reach a consensus.
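
To illustrate the matching step described above, the following is a minimal sketch of 1:1 nearest-neighbour propensity-score matching (in Python with synthetic covariates; the study itself used R, and the variables shown are assumptions):

```python
# Minimal sketch (synthetic covariates, not the registry data; the study used
# R): 1:1 nearest-neighbour matching on a logistic propensity score.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(4)
n = 4000
df = pd.DataFrame({
    "age": rng.normal(66, 10, n),
    "ef": rng.normal(55, 8, n),            # ejection fraction, an illustrative risk factor
    "creatinine": rng.normal(90, 20, n),
})
# Treatment assignment (mild hypothermia = 1) loosely related to the covariates
p_treat = 1 / (1 + np.exp(-(-2 + 0.03 * (df["age"] - 66))))
df["hypothermia"] = rng.binomial(1, p_treat)

# Propensity score from a logistic model on the risk factors
covs = ["age", "ef", "creatinine"]
ps_model = LogisticRegression(max_iter=1000).fit(df[covs], df["hypothermia"])
df["ps"] = ps_model.predict_proba(df[covs])[:, 1]

treated = df[df["hypothermia"] == 1]
control = df[df["hypothermia"] == 0]

# Nearest-neighbour (1:1) match on the propensity score
# (with replacement here, for simplicity; matching without replacement is also common)
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched = pd.concat(
    [treated.reset_index(drop=True),
     control.iloc[idx.ravel()].reset_index(drop=True).add_suffix("_ctrl")],
    axis=1,
)
print(f"Matched pairs: {len(matched)}")
print("Mean age (treated vs matched controls):",
      round(matched["age"].mean(), 1), "vs", round(matched["age_ctrl"].mean(), 1))
```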

Keywords: cardiac surgery, therapeutic hypothermia, neuroprotection, cardiopulmonary bypass

Procedia PDF Downloads 65
8190 Assessment of Physical Characteristics of Maize (Zea Mays) Stored in Metallic Silos

Authors: B. A. Alabadan, E. S. Ajayi, C. A. Okolo

Abstract:

The storage losses recorded globally in maize (Zea mays), especially in developing countries, are worrisome. Certain degenerative changes in the physical characteristics (PC) of the grain occur due to the interaction between the stored maize and the immediate environment, especially during long storage periods. There has been a tremendous reduction in storage losses since the advent of metallic silos. This study was carried out to assess the physical quality attributes of maize stored in 2500 MT and 1 MT metallic silos for a period of eight months. The PC evaluated include percentage moisture content MC, insect damage ID, foreign matter FM, hectolitre weight HC, mould M, and germinability VG. The data obtained were evaluated using the Statistical Package for the Social Sciences (SPSS 20) for Windows to determine significance levels and the trend of deterioration (P < 0.05) for all the values obtained, using Multivariate Analysis of Variance (MANOVA) and Duncan's multivariate test. The results show that the PC vary significantly with duration of storage (P < 0.05), except MI and FM (P > 0.05), irrespective of the size of the metallic silos. The average mean deviations of the physical properties from the control with respect to duration of storage are as follows: MC 10.0 ±0.00%, HC 72.9 ± 0.44%, ID 0.29 ± 0.00%, BG 0.55±0.05%, MI 0.00 ± 0.65%, FM 0.80± 0.20%, VG 100 ± 0.03%. The variables found to be significant (p < 0.05) with respect to the position of grain in the bulk are VG, MI, and ID, while the others are insignificant (p > 0.05). All variables were significant (p < 0.05) with respect to the duration of storage, with (0.00) significance levels, irrespective of the size of the metallic silos, but were insignificant with respect to the position of the grain in the bulk (p > 0.05). From the results, it can be concluded that there is a slight decrease with time in HC, MC, and V, probably due to weather fluctuations and grain respiration, while FM, BG, ID, and M were found to increase slightly, probably due to insect activity in the bigger silos and loss of moisture. The size of the metallic silos has no remarkable influence on the PC of stored maize (Zea mays). Germinability was found to be better in the 1 MT silos, probably due to their hermetic nature. Smaller metallic silos are preferred for the storage of seeds, but results for bigger silos largely depend on the position of the grains in the bulk.

Keywords: maize, storage, silo, physical characteristics

Procedia PDF Downloads 297
8189 Exploring Factors Related to Unplanned Readmission of Elderly Patients in Taiwan

Authors: Hui-Yen Lee, Hsiu-Yun Wei, Guey-Jen Lin, Pi-Yueh Lee Lee

Abstract:

Background: Unplanned hospital readmissions increase healthcare costs and have been considered a marker of poor healthcare performance. The elderly face a higher risk of unplanned readmission due to elderly-specific characteristics such as deteriorating body functions and the relatively high incidence of complications after treatment of acute diseases. Purpose: The aim of this study was to explore the factors related to unplanned readmission of elderly patients within 14 days of discharge at our hospital in southern Taiwan. Methods: We retrospectively reviewed the medical records of patients aged ≥65 years who had been re-admitted between January 2018 and December 2018. The Charlson Comorbidity score was calculated using a previously reported method. Related factors affecting the rate of unplanned readmission within 14 days of discharge were screened and analyzed using the chi-squared test and logistic regression analysis. Results: This study enrolled 829 subjects aged more than 65 years. Of these, 318 had an unplanned readmission within 14 days, while 511 did not. In 2018, the 14-day unplanned readmission rate among elderly patients was 38.4%. The majority of patients were female (166 cases, 52.2%), with an average age of 77.6 ± 7.90 years (range 65-98). The average Charlson Comorbidity score was 4.42±2.76. Using logistic regression analysis, we found that gastric or peptic ulcer (OR=1.917, P<0.002), diabetes (OR=0.722, P<0.043), hemiplegia (OR=2.292, P<0.015), metastatic solid tumor (OR=2.204, P<0.025), hypertension (OR=0.696, P<0.044), and skin ulcer/cellulitis (OR=2.747, P<0.022) were significantly associated with the risk of 14-day readmission. Conclusion: The results of the present study may assist healthcare teams in understanding the factors that affect unplanned readmission in the elderly. We recommend that these teams adopt an efficient approach in their medical practice, provide timely health education for the elderly, and deliver integrated healthcare for chronic diseases in order to reduce unplanned readmissions.
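
As an illustration of the comorbidity scoring described above, the following is a minimal sketch using commonly cited Charlson weights (the exact variant used in the study, e.g. with or without age adjustment, may differ):

```python
# Minimal sketch (commonly cited Charlson weights; the exact variant used in
# the study may differ): computing a Charlson Comorbidity Index score from a
# patient's diagnosis flags.
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1, "congestive_heart_failure": 1,
    "peripheral_vascular_disease": 1, "cerebrovascular_disease": 1,
    "dementia": 1, "chronic_pulmonary_disease": 1,
    "rheumatologic_disease": 1, "peptic_ulcer_disease": 1,
    "mild_liver_disease": 1, "diabetes": 1,
    "diabetes_with_complications": 2, "hemiplegia": 2,
    "renal_disease": 2, "any_malignancy": 2,
    "moderate_severe_liver_disease": 3,
    "metastatic_solid_tumor": 6, "aids": 6,
}

def charlson_score(conditions, age=None):
    """Sum of comorbidity weights; optionally add age points (1 per decade from 50)."""
    score = sum(CHARLSON_WEIGHTS[c] for c in conditions)
    if age is not None and age >= 50:
        score += min((age - 40) // 10, 4)   # age-adjusted variant, capped at 4 points
    return score

# Example patient: hemiplegia + peptic ulcer disease, aged 78
print(charlson_score({"hemiplegia", "peptic_ulcer_disease"}, age=78))
```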

Keywords: unplanned readmission, elderly, Charlson comorbidity score, logistic regression analysis

Procedia PDF Downloads 128
8188 "Empowering Minds and Unleashing Curiosity: DIY Biotechnology for High School Students in the Age of Distance Learning"

Authors: Victor Hugo Sanchez Rodriguez

Abstract:

Amidst the challenges posed by pandemic-induced lockdowns, traditional educational models have been disrupted. To bridge the distance learning gap, our project introduces an innovative initiative focused on teaching high school students basic biotechnology techniques. We aim to empower young minds and foster curiosity by encouraging students to create their own DIY biotechnology laboratories using easily accessible materials found at home. This abstract outlines the key aspects of our project, highlighting its importance, methodology, and evaluation approach. In response to the pandemic's limitations, our project targets the delivery of biotechnology education at a distance. By engaging students in hands-on experiments, we seek to provide an enriching learning experience despite the constraints of remote learning. The DIY approach allows students to explore scientific concepts in a practical and enjoyable manner, nurturing their interest in biotechnology and molecular biology. The Undergraduate Research Student Self-Assessment (URSSA), originally designed to assess professional-level research programs, has been adapted to suit the context of biotechnology and molecular biology synthesis for high school students. By applying this tool before and after the experimental sessions, we aim to gauge the program's impact on students' learning experiences and skill development. Our project's significance lies not only in its novel approach to teaching biotechnology but also in its adaptability to the current global crisis. By providing students with a stimulating and interactive learning environment, we hope to inspire educators and institutions to embrace creative solutions during challenging times. Moreover, the insights gained from our evaluation will inform future efforts to enhance distance learning programs and promote accessible science education.

Keywords: DIY biotechnology, high school students, distance learning, pandemic education, undergraduate research student self-assessment (URSSA)

Procedia PDF Downloads 64
8187 Evaluation of Solid-Gas Separation Efficiency in Natural Gas Cyclones

Authors: W. I. Mazyan, A. Ahmadi, M. Hoorfar

Abstract:

Objectives/Scope: This paper proposes a mathematical model for calculating the solid-gas separation efficiency in cyclones. This model provides better agreement with experimental results compared to existing mathematical models. Methods: The separation ratio efficiency, ϵsp, is evaluated by calculating the outlet to inlet count ratio. Similar to mathematical derivations in the literature, the inlet and outlet particle counts were evaluated based on an Eulerian approach. The model also includes the external forces acting on the particle (i.e., centrifugal and drag forces). In addition, the proposed model evaluates the exact length that the particle travels inside the cyclone for the evaluation of the number of turns inside the cyclone. The separation efficiency model derivation using Stokes' law considers the effect of the inlet tangential velocity on the separation performance. In cyclones, the inlet velocity is a very important factor in determining the performance of the cyclone separation. Therefore, the proposed model provides an accurate estimation of the actual cyclone separation efficiency. Results/Observations/Conclusion: The separation ratio efficiency, ϵsp, is studied to evaluate the performance of the cyclone for particles ranging from 1 micron to 10 microns. The proposed model is compared with the results in the literature. It is shown that the proposed mathematical model indicates an error of 7% between its efficiency and the efficiency obtained from the experimental results for 1 micron particles. At the same time, the proposed model gives the user the flexibility to analyze the separation efficiency at different inlet velocities. Additional Information: The proposed model determines the separation efficiency accurately and could also be used to optimize the separation efficiency of cyclones at low cost through trial and error testing, through dimensional changes to enhance separation, and through increasing the particle centrifugal forces. Ultimately, the proposed model provides a powerful tool to optimize and enhance existing cyclones at low cost.
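
For context, the following is a minimal sketch of a classical baseline, the Lapple cut-diameter model (named here explicitly; it is not the authors' proposed model), which likewise derives collection efficiency from Stokes-law settling in the spinning gas stream:

```python
# Minimal sketch of a classical baseline (the Lapple cut-diameter model),
# not the authors' proposed model: fractional collection efficiency of a
# cyclone from Stokes-law settling in the spinning gas stream.
import math

def lapple_efficiency(d_particle_m, inlet_width_m, inlet_velocity_m_s,
                      n_turns=5.0, mu_gas=1.8e-5, rho_p=2500.0, rho_g=1.2):
    """Fractional collection efficiency for one particle diameter [m]."""
    # Cut diameter: the particle size collected with 50% efficiency
    d50 = math.sqrt(9.0 * mu_gas * inlet_width_m /
                    (2.0 * math.pi * n_turns * inlet_velocity_m_s * (rho_p - rho_g)))
    return 1.0 / (1.0 + (d50 / d_particle_m) ** 2)

# Efficiency over the 1-10 micron range at an assumed inlet velocity of 15 m/s
for d_um in (1, 2, 5, 10):
    eff = lapple_efficiency(d_um * 1e-6, inlet_width_m=0.1, inlet_velocity_m_s=15.0)
    print(f"{d_um:2d} um: {100 * eff:5.1f}% collected")
```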

Keywords: cyclone efficiency, solid-gas separation, mathematical model, models error comparison

Procedia PDF Downloads 389
8186 Long-Term Results of Coronary Bifurcation Stenting with Drug Eluting Stents

Authors: Piotr Muzyk, Beata Morawiec, Mariusz Opara, Andrzej Tomasik, Brygida Przywara-Chowaniec, Wojciech Jachec, Ewa Nowalany-Kozielska, Damian Kawecki

Abstract:

Background: Coronary bifurcation is one of the most complex lesions in patients with coronary artery disease. Provisional T-stenting is currently one of the recommended techniques. The aim was to assess optimal methods of treatment in the era of drug-eluting stents (DES). Methods: The registry consisted of data from 1916 patients treated with percutaneous coronary interventions (PCI) using either first- or second-generation DES. Patients with bifurcation lesions entered the analysis. Major adverse cardiac and cerebrovascular events (MACCE) were assessed at one year of follow-up and comprised death, acute myocardial infarction (AMI), repeated PCI (re-PCI) of the target vessel, and stroke. Results: Of 1916 registry patients, 204 patients (11%) were diagnosed with a bifurcation lesion >50% and entered the analysis. The most commonly used technique was provisional T-stenting (141 patients, 69%). Optimization with the kissing-balloon technique was performed in 45 patients (22%). In 59 patients (29%) a second-generation DES was implanted, while in 112 patients (55%) a first-generation DES was used. In 33 patients (16%) both types of DES were used. The procedure success rate (TIMI 3 flow) was 98%. At one-year follow-up, there were 39 MACCE (19%) (9 deaths, 17 AMI, 16 re-PCI, and 5 strokes). Provisional T-stenting resulted in a rate of MACCE similar to other techniques (16% vs. 5%, p=0.27) and a similar occurrence of re-PCI (6% vs. 2%, p=0.78). The post-PCI kissing-balloon technique gave outcomes equal to those in patients in whom no optimization technique was used (3% vs. 16% MACCE, p=0.39). The type of implanted DES (second- vs. first-generation) had no influence on MACCE (4% vs. 14%, respectively, p=0.12) or re-PCI (1.7% vs. 51% patients, respectively, p=0.28). Conclusions: The treatment of bifurcation lesions with PCI represents a high-risk procedure with a high rate of MACCE. The stenting technique, PCI optimization, and the generation of the implanted stent should be personalized for each case to balance the risk of the procedure. In this setting, operator experience might be a factor in better outcomes, which should be further investigated.

Keywords: coronary bifurcation, drug eluting stents, long-term follow-up, percutaneous coronary interventions

Procedia PDF Downloads 200
8185 Identifying a Drug Addict Person Using Artificial Neural Networks

Authors: Mustafa Al Sukar, Azzam Sleit, Abdullatif Abu-Dalhoum, Bassam Al-Kasasbeh

Abstract:

Use and abuse of drugs by teens is very common and can have dangerous consequences. Drugs contribute to physical and sexual aggression such as assault or rape. Some teenagers regularly use drugs to compensate for depression, anxiety, or a lack of positive social skills. Teen smoking should not be minimized because it can be a "gateway drug" to other drugs (marijuana, cocaine, hallucinogens, inhalants, and heroin). The combination of teenagers' curiosity, risk-taking behavior, and social pressure makes it very difficult to say no. This leads most teenagers to the question: "Will it hurt to try once?" Nowadays, technological advances are changing our lives very rapidly and adding many technologies that can help us track the risk of drug abuse, such as smart phones, Wireless Sensor Networks (WSNs), the Internet of Things (IoT), etc. These techniques may help in the early discovery of drug abuse in order to prevent aggravation of the influence of drugs on the abuser. In this paper, we have developed a Decision Support System (DSS) for detecting drug abuse using an Artificial Neural Network (ANN); we used a Multilayer Perceptron (MLP) feed-forward neural network in developing the system. The input layer includes 50 variables, while the output layer contains one neuron which indicates whether the person is a drug addict. An iterative process is used to determine the number of hidden layers and the number of neurons in each one. We used multiple experimental models completed with the log-sigmoid transfer function. In particular, 10-fold cross-validation schemes are used to assess the generalization of the proposed system. The experimental results show 98.42% classification accuracy for correct diagnosis in our system. The data were taken from 184 cases in Jordan according to a set of questions compiled from specialists, and were obtained through the families of drug abusers.
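
To illustrate the classifier set-up described above, the following is a minimal sketch (synthetic questionnaire data, not the 184 Jordanian cases) of an MLP with a logistic (log-sigmoid) activation evaluated by 10-fold cross-validation:

```python
# Minimal sketch (synthetic questionnaire answers, not the 184 Jordanian
# cases): an MLP with a logistic (log-sigmoid) activation, evaluated with
# 10-fold stratified cross-validation.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(5)
n_cases, n_questions = 184, 50
X = rng.integers(0, 5, size=(n_cases, n_questions)).astype(float)  # questionnaire answers
y = (X[:, :10].sum(axis=1) + rng.normal(0, 2, n_cases) > 20).astype(int)  # addict / not

clf = MLPClassifier(hidden_layer_sizes=(20,), activation="logistic",
                    max_iter=2000, random_state=0)
scores = cross_val_score(clf, X, y,
                         cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0))
print(f"Mean 10-fold accuracy: {scores.mean():.3f}")
```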

Keywords: drug addiction, artificial neural networks, multilayer perceptron (MLP), decision support system

Procedia PDF Downloads 295
8184 Modeling Driving Distraction Considering Psychological-Physical Constraints

Authors: Yixin Zhu, Lishengsa Yue, Jian Sun, Lanyue Tang

Abstract:

Modeling driving distraction in microscopic traffic simulation is crucial for enhancing simulation accuracy. Current driving distraction models are mainly derived from physical motion constraints under distracted states, in which distraction-related error terms are added to existing microscopic driver models. However, the model accuracy is not very satisfying, due to a lack of modeling of the cognitive mechanism underlying the distraction. This study models driving distraction based on the Queueing Network-Model Human Processor (QN-MHP). It utilizes the queuing structure of the model to perform task invocation and switching for the distracted operation and control of the vehicle under driver distraction. Based on the assumption of the QN-MHP model about the cognitive sub-network, server F is a structural bottleneck: later information must wait for earlier information to leave server F before it can be processed there. Therefore, the waiting time for task switching needs to be calculated. Since the QN-MHP model has different information processing paths for auditory and visual information, this study divides driving distraction into two types: auditory distraction and visual distraction. For visual distraction, both the visual distraction task and the driving task need to go through the visual perception sub-network, and the stimuli of the two are asynchronous, which is known as stimulus onset asynchrony (SOA); when calculating the waiting time for switching tasks, it is therefore necessary to consider it. In the case of auditory distraction, the auditory distraction task and the driving task do not need to compete for the server resources of the perceptual sub-network, and their stimuli can be synchronized without considering the time difference in receiving the stimuli. According to the Theory of Planned Behavior for drivers (TPB), this study uses risk entropy as the decision criterion for driver task switching. A logistic regression model is used with risk entropy as the independent variable to determine whether the driver performs a distraction task, in order to explain the relationship between perceived risk and distraction. Furthermore, to model a driver's perception characteristics, a neurophysiological model of visual distraction tasks is incorporated into the QN-MHP, which executes the classical Intelligent Driver Model (IDM). The proposed driving distraction model integrates the psychological cognitive process of a driver with the physical motion characteristics, resulting in both high accuracy and interpretability. This paper uses 773 segments of distracted car-following from the Shanghai Naturalistic Driving Study data (SH-NDS) to classify the patterns of distracted behavior on different road facilities and obtains three types of distraction patterns: numbness, delay, and aggressiveness. The model was calibrated and verified by simulation. The results indicate that the model can effectively simulate distracted car-following behavior of different patterns on various roadway facilities, and its performance is better than the traditional IDM model with distraction-related error terms. The proposed model overcomes the limitations of physical-constraints-based models in replicating dangerous driving behaviors and the internal characteristics of an individual driver. Moreover, the model is demonstrated to effectively generate more dangerous distracted driving scenarios, which can be used to construct high-value automated driving test scenarios.
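
For reference, the following is a minimal sketch of the classical Intelligent Driver Model acceleration used as the car-following backbone (typical default parameters, not the values calibrated on the SH-NDS data):

```python
# Minimal sketch of the classical Intelligent Driver Model (IDM) acceleration;
# parameter values are typical defaults, not the calibrated values from the
# SH-NDS data or the distraction-augmented model proposed in the paper.
import math

def idm_acceleration(v, v_lead, gap,
                     v0=33.3, T=1.5, a_max=1.0, b=1.5, s0=2.0, delta=4):
    """IDM acceleration [m/s^2].

    v      : follower speed [m/s]        v_lead : leader speed [m/s]
    gap    : bumper-to-bumper gap [m]    v0     : desired speed [m/s]
    T      : desired time headway [s]    a_max  : max acceleration [m/s^2]
    b      : comfortable deceleration    s0     : minimum gap [m]
    """
    dv = v - v_lead                                   # approaching rate
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

# Example: following a leader at the same speed with a 30 m gap
print(round(idm_acceleration(v=20.0, v_lead=20.0, gap=30.0), 2), "m/s^2")
```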

Keywords: computational cognitive model, driving distraction, microscopic traffic simulation, psychological-physical constraints

Procedia PDF Downloads 83
8183 Floor Response Spectra of RC Frames: Influence of the Infills on the Seismic Demand on Non-Structural Components

Authors: Gianni Blasi, Daniele Perrone, Maria Antonietta Aiello

Abstract:

The seismic vulnerability of non-structural components is nowadays recognized as a key issue in performance-based earthquake engineering. Recent loss estimation studies, as well as the damage observed during past earthquakes, have shown that non-structural damage represents the largest share of economic loss in a building and can in many cases be critical to life safety during the post-earthquake emergency. The procedures developed to evaluate the seismic demand on non-structural components have been constantly improved, and recent studies have demonstrated that the existing formulations provided by the main Standards generally ignore features that have a considerable influence on the seismic accelerations/displacements acting on non-structural components. Since the influence of the infills on the dynamic behaviour of RC structures has already been evidenced by many authors, it is worth noting that the evaluation of the seismic demand on non-structural components should account for the presence of the infills as well as their mechanical properties. This study focuses on the evaluation of time-history floor accelerations in RC buildings, which are a useful means of performing seismic vulnerability analyses of non-structural components through the well-known cascade method. Dynamic analyses are performed on an 8-storey RC frame, taking into account the presence of the infills; the influence of the elastic modulus of the panels on the results is investigated, as is the presence of openings. Floor accelerations obtained from the analyses are used to evaluate the floor response spectra, in order to define the demand on non-structural components as a function of the properties of the infills. Finally, the results are compared with formulations provided by the main International Standards, in order to assess their accuracy and, where necessary, identify the improvements suggested by the results of the present work.
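
As a minimal sketch of the cascade method referenced above, the following Python snippet computes a 5%-damped floor response spectrum from a floor acceleration time history by integrating a family of elastic SDOF oscillators with the Newmark average-acceleration scheme; the input signal, time step, and period grid are placeholders, not the analysis output of the 8-storey frame.

```python
import numpy as np

def floor_response_spectrum(acc, dt, periods, damping=0.05):
    """Peak absolute acceleration of damped SDOF oscillators subjected to the
    floor acceleration history `acc` (Newmark average-acceleration method)."""
    spectrum = []
    for T in periods:
        wn = 2 * np.pi / T
        k, c, m = wn**2, 2 * damping * wn, 1.0      # unit-mass oscillator
        gamma, beta = 0.5, 0.25
        u = v = 0.0
        a = -acc[0]                                 # initial relative acceleration
        peak = abs(a + acc[0])
        keff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
        for ag in acc[1:]:
            p = -m * ag \
                + m * (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a) \
                + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                       + dt * (gamma / (2 * beta) - 1) * a)
            u_new = p / keff
            v_new = gamma / (beta * dt) * (u_new - u) + (1 - gamma / beta) * v \
                + dt * (1 - gamma / (2 * beta)) * a
            a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
            u, v, a = u_new, v_new, a_new
            peak = max(peak, abs(a + ag))           # absolute acceleration of the component
        spectrum.append(peak)
    return np.array(spectrum)

# Placeholder floor acceleration history (replace with the frame analysis output)
dt = 0.01
acc = 0.3 * 9.81 * np.sin(2 * np.pi * 2.0 * np.arange(0, 10, dt))
periods = np.linspace(0.05, 3.0, 60)
Sa_floor = floor_response_spectrum(acc, dt, periods)
```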

Keywords: floor spectra, infilled RC frames, non-structural components, seismic demand

Procedia PDF Downloads 325
8182 A Case Study on the Re-Assessment of an Earthfill Dam at Latamber, Pakistan

Authors: Afnan Ahmad, Shahid Ali, Mujahid Khan

Abstract:

This research presents a parametric study of an existing earthfill dam located at Latamber, Karak city, Pakistan. The study consists of seepage, slope stability, and earthquake analyses of the dam for the existing geometry and for modified geometries. Dams are massive and expensive hydraulic structures and therefore require proper attention. Moreover, this dam falls within zone 2B of Pakistan, an earthquake-prone region where peak ground accelerations range from 0.16g to 0.24g, so it must be treated with great care, as the failure of any dam can cause irreparable losses. Similarly, seepage as well as slope failure can cause damage that may lead to failure of the dam. Therefore, keeping in view the importance of dam construction and the associated costs, the main focus is a parametric study of the dam. The GeoStudio software suite is used for the analyses: Seep/W for the seepage analysis, Slope/W for the slope stability analysis, and Quake/W for the earthquake analysis. Based on the geometrical, hydrological, and geotechnical data, seepage and slope stability analyses of different proposed geometries of the dam are carried out along with the seismic analysis. A rigorous 2-D limit equilibrium analysis was carried out, supported by finite element analysis. The seismic study began with the static analysis and continued with the dynamic response analysis. The seismic analyses permitted evaluation of the overall patterns of the Latamber dam behavior in terms of displacement, stress, strain, and acceleration fields. Similarly, the seepage analysis evaluates seepage through the foundation and embankment of the dam, while the slope stability analysis estimates the factor of safety of the upstream and downstream slopes. The results demonstrate that, among the multiple geometries considered, the Latamber dam is secure against seepage piping failure and against slope stability failure (upstream and downstream). Moreover, the dam is safe under dynamic loading, and no liquefaction has been observed while changing its geometry within permissible limits.
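
To make the slope stability criterion concrete, here is a minimal Python sketch of the factor of safety computed with the ordinary (Fellenius) method of slices, a simpler limit-equilibrium scheme than those available in Slope/W; the slice data and strength parameters are invented placeholders, not Latamber dam values.

```python
import math

def factor_of_safety(slices, cohesion, phi_deg):
    """Ordinary (Fellenius) method of slices.
    Each slice: (W, alpha_deg, u, l) = weight (kN/m), base inclination (deg),
    pore pressure on the base (kPa), base length (m).
    FS = sum[c'*l + (W*cos(alpha) - u*l)*tan(phi')] / sum[W*sin(alpha)]"""
    tan_phi = math.tan(math.radians(phi_deg))
    resisting = driving = 0.0
    for W, alpha_deg, u, l in slices:
        alpha = math.radians(alpha_deg)
        resisting += cohesion * l + (W * math.cos(alpha) - u * l) * tan_phi
        driving += W * math.sin(alpha)
    return resisting / driving

# Hypothetical slices for one trial slip surface (not actual dam data)
slices = [(120, 10, 15, 2.0), (180, 20, 25, 2.1), (200, 32, 30, 2.3), (150, 45, 20, 2.6)]
print(round(factor_of_safety(slices, cohesion=10.0, phi_deg=28.0), 2))
```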

Keywords: earth-fill dam, finite element, liquefaction, seepage analysis

Procedia PDF Downloads 159
8181 Knowledge Required for Avoiding Lexical Errors in Machine Translation

Authors: Yukiko Sasaki Alam

Abstract:

This research aims at finding the causes that lead to wrong lexical selections in machine translation (MT), rather than categorizing lexical errors, which has been the main practice in error analysis. By manually examining and analyzing lexical errors output by an MT system, it suggests what knowledge would help the system reduce lexical errors.

Keywords: machine translation, error analysis, lexical errors, evaluation

Procedia PDF Downloads 329
8180 Gender Differences in Morbid Obese Children: Clinical Significance of Two Diagnostic Obesity Notation Model Assessment Indices

Authors: Mustafa M. Donma, Orkide Donma, Murat Aydin, Muhammet Demirkol, Burcin Nalbantoglu, Aysin Nalbantoglu, Birol Topcu

Abstract:

Childhood obesity is an ever-increasing global health problem, affecting both developed and developing countries. Accurate evaluation of obesity in children requires difficult and detailed investigation. In our study, obesity in children was evaluated using new body fat ratios and indices. Assessment of anthropometric measurements, as well as of some ratios, is important for the evaluation of gender differences, particularly during the late stages of obesity. A total of 239 children participated in the study: 168 morbid obese (MO) children (81 girls and 87 boys) and 71 normal weight (NW) children (40 girls and 31 boys). Informed consent forms signed by the parents were obtained, and the Ethics Committee approved the study protocol. Mean ages (±SD) for the MO group were 10.8±2.9 years in girls and 10.1±2.4 years in boys; the corresponding values for the NW group were 9.0±2.0 years in girls and 9.2±2.1 years in boys. Mean body mass index (BMI)±SD values for the MO group were 29.1±5.4 kg/m2 in girls and 27.2±3.9 kg/m2 in boys; for the NW group they were 15.5±1.0 kg/m2 in girls and 15.9±1.1 kg/m2 in boys. Groups were constituted based upon the age- and sex-specific BMI percentiles recommended by WHO: children above the 99th percentile were grouped as MO, and children between the 15th and 85th percentiles were considered NW. The anthropometric measurements were recorded and evaluated along with new ratios such as the trunk-to-appendicular fat ratio, as well as indices such as Index-I and Index-II. Body fat percentages were obtained by bio-electrical impedance analysis. Data were entered into a database and analysed with SPSS/PASW 18 Statistics for Windows. Increased waist-to-hip circumference (C) ratios and decreased head-to-neck C, (height/2)-to-waist C, and (height/2)-to-hip C ratios were observed in parallel with the development of obesity (p≤0.001). The reference value for the (height/2)-to-hip C ratio was found to be approximately 1.0. Index-II, based upon total body fat mass, showed much more significant differences between the groups than Index-I, which is based upon weight. There was no difference between the trunk-to-appendicular fat ratios of NW girls and NW boys (p≥0.05). However, significantly increased values were observed for MO girls in comparison with MO boys (p≤0.05). This parameter showed no difference between the NW and MO states in boys (p≥0.05), whereas a statistically significant increase was noted in MO girls compared to their NW counterparts (p≤0.001). The trunk-to-appendicular fat ratio was the only fat-based parameter that showed a gender difference between the NW and MO groups. This study has revealed that body ratios and formulas based upon body fat tissue are more valuable parameters than those based on weight and height values for the evaluation of morbid obesity in children.
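
A minimal sketch of the ratios highlighted above, with the trunk-to-appendicular fat ratio computed from segmental fat masses and the circumference ratios computed from body measurements; the numerical values are invented for illustration, and Index-I and Index-II are omitted because their formulas are not given in the abstract.

```python
def trunk_to_appendicular_fat_ratio(trunk_fat, arm_fat_left, arm_fat_right,
                                    leg_fat_left, leg_fat_right):
    """Trunk fat mass divided by the summed fat mass of the four limbs (kg)."""
    appendicular = arm_fat_left + arm_fat_right + leg_fat_left + leg_fat_right
    return trunk_fat / appendicular

def circumference_ratios(waist_c, hip_c, head_c, neck_c, height):
    """Circumference-based ratios discussed in the study (all lengths in cm)."""
    return {
        "waist_to_hip": waist_c / hip_c,
        "head_to_neck": head_c / neck_c,
        "half_height_to_waist": (height / 2) / waist_c,
        "half_height_to_hip": (height / 2) / hip_c,   # reference value ~1.0
    }

# Illustrative (not study) values
print(trunk_to_appendicular_fat_ratio(9.5, 1.1, 1.2, 2.8, 2.9))
print(circumference_ratios(waist_c=68, hip_c=74, head_c=53, neck_c=30, height=145))
```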

Keywords: anthropometry, childhood obesity, gender, morbid obesity

Procedia PDF Downloads 323
8179 Changes in Textural Properties of Zucchini Slices under the Effects of Partial Predrying and Deep-Fat Frying

Authors: E. Karacabey, Ş. G. Özçelik, M. S. Turan, C. Baltacıoğlu, E. Küçüköner

Abstract:

Changes in the textural properties of any food material during processing are significant for consumers' evaluation and directly affect their decisions. Thus, the textural properties of any food material should be assessed after processing. In the present study, zucchini slices were partially predried to control and reduce the product's final oil content. A conventional oven was used for the partial dehydration of the zucchini slices, and frying was then carried out in an industrial fryer equipped with a temperature controller. This study focused on the effect of the predrying process on the textural properties of fried zucchini slices. Texture profile analysis was performed; hardness, elasticity, chewiness, and cohesiveness were the texture parameters studied. Temperature and weight loss were monitored during predrying, whereas oil temperature and process time were controlled during frying. Optimization of the two successive processes was carried out by response surface methodology, one of the commonly used statistical tools for process optimization. The models developed for each texture parameter predicted its value with high accuracy as a function of the studied process conditions. Process optimization was performed according to target values for each property, determined from the directly fried zucchini slices that received the highest score in the sensory evaluation. The results indicated that the textural properties of predried and then fried zucchini slices could be controlled by well-established equations. This is thought to be significant for the fried-food industry, where control of the sensory properties, with texture foremost among them, is crucial to guiding consumer perception. This project (113R015) has been supported by TUBITAK.
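
As a minimal sketch of the response surface methodology step, the snippet below fits a two-factor second-order polynomial (predrying weight loss and frying time as illustrative factors, hardness as the response) by least squares; the design points are invented placeholders, not the study's measurements.

```python
import numpy as np

# Illustrative design points: (predrying weight loss %, frying time s) -> hardness (N)
X = np.array([[10, 120], [10, 180], [20, 120], [20, 180], [15, 150],
              [15, 150], [5, 150], [25, 150], [15, 90], [15, 210]], dtype=float)
y = np.array([8.2, 10.1, 9.0, 11.5, 9.6, 9.4, 7.8, 10.8, 7.5, 12.0])

def design_matrix(X):
    """Second-order RSM model terms: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

coef, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

def predict_hardness(weight_loss, fry_time):
    """Predicted hardness at a new process setting from the fitted surface."""
    x = np.array([[weight_loss, fry_time]], dtype=float)
    return float(design_matrix(x) @ coef)

print(predict_hardness(18, 160))
```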

Keywords: optimization, response surface methodology, texture profile analysis, conventional oven, modelling

Procedia PDF Downloads 432
8178 Distributed Energy Resources in Low-Income Communities: A Public Policy Proposal

Authors: Rodrigo Calili, Anna Carolina Sermarini, João Henrique Azevedo, Vanessa Cardoso de Albuquerque, Felipe Gonçalves, Gilberto Jannuzzi

Abstract:

The diffusion of Distributed Energy Resources (DER) has caused structural changes in the relationship between consumers and electrical systems. Photovoltaic Distributed Generation (PVDG), in particular, is an essential strategy for achieving the 2030 Agenda goals, especially SDG 7 and SDG 13. However, most projects involving this technology in Brazil are restricted to the wealthiest classes of society and have not yet reached the low-income population, a concern highlighted by theories of energy justice. In the pursuit of energy equality, one of the policies adopted by governments is the social electricity tariff (SET), which provides discounts on energy tariffs/bills. Merely granting this benefit, however, may not be effective, and it is possible to combine it with DER technologies such as PVDG. Thus, this work aims to evaluate the economic viability of a policy that replaces the social electricity tariff (the current policy aimed at the low-income population in Brazil) with PVDG projects. To this end, a proprietary methodology was developed that includes mapping the stakeholders, identifying critical variables, simulating policy options, and carrying out an analysis in the Brazilian context. The simulation answered two key questions: in which municipalities low-income consumers would have lower bills with PVDG than with the SET, and for which consumers in a given city the subsidies (currently granted in Brazil both to solar energy and to the social tariff) would increase. An economic model was created to verify the feasibility of the proposed policy in each municipality in the country, considering geographic factors (the tariff of the local distribution utility, the solar radiation at the specific location, etc.). To validate these results, four sensitivity analyses were performed: variation of the simultaneity factor between generation and consumption, variation of the tariff readjustment rate, zeroing CAPEX, and exemption from state tax. The behind-the-meter generation modality proved more promising than the construction of a shared plant; however, it is more complex to adopt because of infrastructure issues in the most vulnerable communities (e.g., precarious electrical networks and the need to reinforce roofs). The shared-plant modality nevertheless still offers many opportunities, since the investment risk of such a policy can be mitigated; furthermore, it reduces the risk of default, allows greater control of users, and facilitates operation and maintenance. Finally, it was also found that, in some regions of Brazil, continuing the SET provides more economic benefits than replacing it with PVDG; nevertheless, the proposed policy offers many opportunities. In future work, the model may include other parameters, such as the cost of engaging low-income populations and business risk.
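
To illustrate the municipality-level comparison described above, here is a minimal Python sketch contrasting a monthly bill under a tiered social-tariff discount with a bill under behind-the-meter PVDG net metering; the discount tiers, tariff, and availability charge are illustrative assumptions, not the parameters used in the study.

```python
def bill_with_social_tariff(consumption_kwh, tariff,
                            tiers=((30, 0.65), (70, 0.40), (120, 0.10))):
    """Tiered discount on the energy bill (tier widths/discounts are illustrative)."""
    remaining, bill = consumption_kwh, 0.0
    for width, discount in tiers:
        block = min(remaining, width)
        bill += block * tariff * (1 - discount)
        remaining -= block
    return bill + remaining * tariff          # any excess billed at the full tariff

def bill_with_pvdg(consumption_kwh, pv_generation_kwh, tariff, availability_kwh=30):
    """Behind-the-meter PV under net metering: generation offsets consumption,
    down to a minimum availability charge (30 kWh assumed for illustration)."""
    net_kwh = max(consumption_kwh - pv_generation_kwh, availability_kwh)
    return net_kwh * tariff

tariff = 0.75   # R$/kWh, illustrative
print(bill_with_social_tariff(150, tariff))
print(bill_with_pvdg(150, pv_generation_kwh=140, tariff=tariff))
```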

Keywords: low income, subsidy policy, distributed energy resources, energy justice

Procedia PDF Downloads 108
8177 Ideal Posture in Regulating Legal Regulations in Indonesia

Authors: M Jeffri Arlinandes Chandra, Puwaningdyah Murti Wahyuni, Dewi Mutiara

Abstract:

Indonesia is a state of law in accordance with Article 1 paragraph 3 of the Constitution of the Republic of Indonesia (the 1945 Constitution), which provides that 'the State of Indonesia is a state of law'. A consequence of the rule of law is that the law becomes the primary authority and the basis for any action taken by the state. The types of regulations and the procedures for forming legislation in Indonesia are set out in Law Number 12 of 2011 concerning the Formation of Legislation. Various attempts have been made to produce quality regulations in both the formal and the material hierarchy, such as synchronization and harmonization in the formation of laws and regulations, so that there is no conflict between laws of equal or different hierarchical rank; in fact, however, many conflicting regulations are still found. This can be seen clearly in the many laws and regulations challenged before judicial institutions such as the Constitutional Court (MK) and the Supreme Court (MA). Therefore, a framework for governing the formation of laws and regulations is necessary in order to minimize lawsuits before the courts and to realize a positive law that can serve both the present and the future (ius constituendum). The research method used in this study is a combination of normative research (library research) supported by empirical data from field research, so that concepts can be formulated and current challenges answered. First, the structuring of laws and regulations in Indonesia must start from an inventory of laws and regulations, classified by type of legislation, subject matter, year of enactment, and so on, so that the regulations relating to the formation of legislation can be clearly traced. Second, laws and regulations that do not exist in the state registration system must be identified and revoked. Third, a periodic evaluation system must be applied at every level of the hierarchy of laws and regulations. These steps will form an ideal model of laws and regulations in Indonesia, in terms of both content and material, so that the rules can be codified and clearly inventoried and thus accessed by the wider community, as a concrete manifestation of the principle that all people are presumed to know the law (praesumptio iuris et de iure).

Keywords: legislation, review, evaluation, reconstruction

Procedia PDF Downloads 143
8176 Learning the Most Common Causes of Major Industrial Accidents and Applying Best Practices to Prevent Such Accidents

Authors: Rajender Dahiya

Abstract:

Investigation outcomes of major process incidents have been consistent for decades and confirm that the causes and consequences are often identical. Why, then, do we continue to experience similar process incidents despite the enormous development of new tools, technologies, industry standards, codes, regulations, and learning processes? The objective of this paper is to investigate the most common causes of major industrial incidents and to present industry challenges and best practices to prevent such incidents. The author, in his current role, performs audits and inspections of a variety of high-hazard industries in North America, including petroleum refineries, chemicals, petrochemicals, and manufacturing. In this paper, he shares real-life scenarios, examples, and case studies from high-hazard operating facilities, including key challenges and best practices. One case study illustrates the importance of near-miss incident investigation: the incident examined was a safe operating limit excursion. The case describes deficiencies in management programs, in the competency of employees, and in the culture of the corporation, covering hazard identification and risk assessment, maintaining the integrity of safety-critical equipment, operating discipline, learning from process safety near misses, process safety competency, process safety culture, audits, and performance measurement. Failure to identify the hazards and manage the risks of highly hazardous materials and processes is one of the primary root causes of an incident, and failure to learn from past incidents is the leading cause of their recurrence. Several investigations of major incidents found that each showed several warning signs before occurring and, most importantly, that all were preventable. The author discusses why preventable incidents were not prevented and reviews the common causes of learning failures from past major incidents. The leading causes of past incidents are summarized as follows. First, management failure to identify the hazard and/or mitigate the risk of hazardous processes or materials; this process starts early in the project stage and continues throughout the life cycle of the facility, and a poorly executed hazard study, such as a HAZID, PHA, or LOPA, is one of the leading causes of this failure. Second, management failure to maintain the integrity of safety-critical systems and equipment; in most incidents, the mechanical integrity of critical equipment was not maintained, and safety barriers were bypassed, disabled, or left unmaintained. Third, management failure to learn from and apply the lessons of past incidents; there were several precursors before those incidents, which were either ignored altogether or not taken seriously. The paper concludes by showing how a well-implemented operating management system, a good process safety culture, and competent leaders and staff contribute to managing risk and preventing major incidents.
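
Since LOPA is mentioned among the hazard studies above, the following is a minimal sketch of its core arithmetic: the mitigated scenario frequency is the initiating-event frequency multiplied by the probabilities of failure on demand (PFD) of the independent protection layers, compared against a tolerable frequency. All numbers here are illustrative, not values from the audits described.

```python
def lopa_mitigated_frequency(initiating_freq_per_yr, layer_pfds):
    """Layer of Protection Analysis core calculation:
    mitigated frequency = initiating frequency x product of the layer PFDs."""
    freq = initiating_freq_per_yr
    for pfd in layer_pfds:
        freq *= pfd
    return freq

# Illustrative scenario: loss of cooling (0.1/yr) protected by an alarm with
# operator response, a SIL-2 interlock, and a relief valve (assumed PFDs).
layers = [0.1, 0.01, 0.01]
mitigated = lopa_mitigated_frequency(0.1, layers)
tolerable = 1e-5   # example tolerable frequency, per year

print(f"Mitigated frequency: {mitigated:.1e}/yr")
print("Risk gap factor:", mitigated / tolerable)   # >1 means more protection layers are needed
```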

Keywords: incident investigation, risk management, loss prevention, process safety, accident prevention

Procedia PDF Downloads 51