Search results for: diagrams and statistical tables
3493 Application of a Model-Free Artificial Neural Networks Approach for Structural Health Monitoring of the Old Lidingö Bridge
Authors: Ana Neves, John Leander, Ignacio Gonzalez, Raid Karoumi
Abstract:
Systematic monitoring and inspection are needed to assess the present state of a structure and predict its future condition. If an irregularity is noticed, repair actions may take place, and an adequate intervention will most probably reduce future maintenance costs, minimize downtime and increase safety by avoiding the failure of the structure as a whole or of one of its structural parts. For this to be possible, decisions must be made at the right time, which implies using systems that can detect abnormalities at an early stage. In this sense, Structural Health Monitoring (SHM) is seen as an effective tool for improving the safety and reliability of infrastructures. This paper explores the decision-making problem in SHM regarding the maintenance of civil engineering structures. The aim is to assess the present condition of a bridge exclusively from measurements, using the method suggested in this paper, such that action is taken coherently with the information made available by the monitoring system. Artificial Neural Networks are trained, and their ability to predict structural behavior is evaluated in the light of a case study where acceleration measurements are acquired from a bridge located in Stockholm, Sweden. This relatively old bridge is still in operation despite experiencing obvious problems already reported in previous inspections. The prediction errors provide a measure of the accuracy of the algorithm and are subjected to further investigation, which comprises concepts like clustering analysis and statistical hypothesis testing. These enable the interpretation of the obtained prediction errors, support conclusions about the state of the structure, and thus inform decision making regarding its maintenance.
Keywords: artificial neural networks, clustering analysis, model-free damage detection, statistical hypothesis testing, structural health monitoring
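As a rough illustration of the model-free scheme described above, the sketch below (Python, with an invented signal and illustrative network settings, not the authors' configuration) trains a network to predict accelerations and then tests whether the prediction-error distribution shifts between a reference and a monitoring period.

```python
import numpy as np
from scipy import stats
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 200, 4000)) + 0.1 * rng.standard_normal(4000)

# Autoregressive dataset: predict the next sample from a short window.
lag = 20
X = np.array([signal[i:i + lag] for i in range(len(signal) - lag)])
y = signal[lag:]

half = len(X) // 2  # first half = healthy reference, second half = monitoring
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:half], y[:half])

# Prediction errors in each period; a shift in their distribution is the
# damage indicator that clustering/hypothesis testing would then examine.
err_ref = y[:half] - model.predict(X[:half])
err_mon = y[half:] - model.predict(X[half:])
t_stat, p_value = stats.ttest_ind(err_ref, err_mon, equal_var=False)
print(f"Welch t-test on prediction errors: p = {p_value:.3f}")
```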
Procedia PDF Downloads 209
3492 Effects of Neem (Azadirachta indica A. Juss) Kernel Inclusion in Broiler Diet on Growth Performance, Organ Weight and Gut Morphometry
Authors: Olatundun Bukola Ezekiel, Adejumo Olusoji
Abstract:
A feeding trial was conducted with 100 two-week-old broiler chickens to evaluate the influence of including neem kernel in broiler diets at 0, 2.5, 5, 7.5 and 10% (used to replace an equal quantity of maize) on their performance, organ weight and gut morphometry. The birds were randomly allotted to five dietary treatments, each treatment having four replicates consisting of five broilers, in a completely randomized design. The diets were formulated to be iso-nitrogenous (23% CP). Weekly feed intake and changes in body weight were calculated and feed efficiency determined. At the end of the 28-day feeding trial, four broilers per treatment were selected and sacrificed for carcass evaluation. Results were subjected to statistical analysis using the analysis of variance procedures of the Statistical Analysis Software. The treatment means were presented with group standard errors of means and, where significant, were compared using the Duncan multiple range test of the same software. The results showed that broilers fed diets with 2.5% neem kernel inclusion had growth performance statistically comparable to those fed the control diet. Birds on 5, 7.5 and 10% neem kernel diets showed a significant (P<0.05) increase in the relative weight of the liver. The absolute weight of the spleen also increased significantly (P<0.05) in birds on the 10% neem kernel diet. Diets with more than 5% neem kernel gave a significant (P<0.05) increase in the relative weight of the kidney. The length of the small intestine increased significantly in birds fed 7.5 and 10% neem kernel diets. Significant differences (P<0.05) did not occur in the length of the large intestine or the right and left caeca. It is recommended that neem kernel can be included at up to 2.5% in broiler chicken diets without any deleterious effects on the performance and physiological status of the birds.
Keywords: broiler chicken, growth performance, gut morphometry, neem kernel, organ weight
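A minimal sketch of the completely randomized design analysis described above, with invented weight data; Duncan's multiple range test is not available in scipy/statsmodels, so Tukey's HSD stands in as the post-hoc comparison here.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
levels = ["0%", "2.5%", "5%", "7.5%", "10%"]
means = [1.95, 1.92, 1.80, 1.70, 1.60]  # hypothetical final weights (kg)
data = {lvl: rng.normal(m, 0.08, size=4) for lvl, m in zip(levels, means)}

# One-way ANOVA across the five inclusion levels (4 replicates each).
f_stat, p = f_oneway(*data.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p:.4f}")

weights = np.concatenate(list(data.values()))
groups = np.repeat(levels, 4)
print(pairwise_tukeyhsd(weights, groups, alpha=0.05))
```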
Procedia PDF Downloads 764
3491 A Proposed Mechanism for Skewing Symmetric Distributions
Authors: M. T. Alodat
Abstract:
In this paper, we propose a mechanism for skewing any symmetric distribution. The new distribution is called the deflation-inflation distribution (DID). We discuss some statistical properties of the DID, such as its moments, stochastic representation and log-concavity. We also fit the distribution to real data and compare it to the normal distribution and Azzalini's skew-normal distribution. Numerical results show that the DID fits the tree-ring data better than the other two distributions.
Keywords: normal distribution, moments, Fisher information, symmetric distributions
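The DID construction itself is not given in the abstract, so the hedged sketch below reproduces only the comparison step: fitting a normal and Azzalini's skew-normal distribution by maximum likelihood and comparing AICs, here on simulated rather than tree-ring data.

```python
from scipy import stats

# Simulated right-skewed data standing in for the tree-ring series.
data = stats.skewnorm.rvs(a=4, loc=0, scale=1, size=500, random_state=2)

def aic(loglik: float, k: int) -> float:
    return 2 * k - 2 * loglik

mu, sigma = stats.norm.fit(data)
a, loc, scale = stats.skewnorm.fit(data)

aic_norm = aic(stats.norm.logpdf(data, mu, sigma).sum(), k=2)
aic_skew = aic(stats.skewnorm.logpdf(data, a, loc, scale).sum(), k=3)
print(f"AIC normal: {aic_norm:.1f}, AIC skew-normal: {aic_skew:.1f}")
```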
Procedia PDF Downloads 659
3490 Ibrutinib and the Potential Risk of Cardiac Failure: A Review of Pharmacovigilance Data
Authors: Abdulaziz Alakeel, Roaa Alamri, Abdulrahman Alomair, Mohammed Fouda
Abstract:
Introduction: Ibrutinib is a selective, potent, and irreversible small-molecule inhibitor of Bruton's tyrosine kinase (BTK). It forms a covalent bond with a cysteine residue (CYS-481) at the active site of BTK, leading to inhibition of BTK enzymatic activity. The drug is indicated to treat certain types of cancer such as mantle cell lymphoma (MCL), chronic lymphocytic leukaemia and Waldenström's macroglobulinaemia (WM). Cardiac failure is a condition in which the heart muscle is unable to pump adequate blood to the body's organs. There are multiple types of cardiac failure, including left- and right-sided heart failure and systolic and diastolic heart failure. The aim of this review is to evaluate the risk of cardiac failure associated with the use of ibrutinib and to suggest regulatory recommendations if required. Methodology: The Signal Detection team at the National Pharmacovigilance Center (NPC) of the Saudi Food and Drug Authority (SFDA) performed a comprehensive signal review using its national database as well as the World Health Organization (WHO) database (VigiBase) to retrieve related information for assessing the causality between cardiac failure and ibrutinib. We used the WHO-Uppsala Monitoring Centre (UMC) criteria as the standard for assessing the causality of the reported cases. Results: Case Review: The search retrieved 212 global ICSRs for the drug/adverse drug reaction combination as of July 2020. The reviewers selected and assessed the causality for the well-documented ICSRs with completeness scores of 0.9 and above (35 ICSRs); the value 1.0 represents the highest score for the best-written ICSRs. Among the reviewed cases, more than half provide a supportive association (four probable and 15 possible cases). Data Mining: The disproportionality between the observed and the expected reporting rate for the drug/adverse drug reaction pair is estimated using the information component (IC), a tool developed by WHO-UMC to measure the reporting ratio. A positive IC reflects a higher statistical association, while negative values indicate a lower statistical association, with the null value equal to zero. The result (IC=1.5) revealed a positive statistical association for the drug/ADR combination, meaning that "ibrutinib" with "cardiac failure" has been observed more often than expected when compared to other medications in the WHO database. Conclusion: Health regulators and health care professionals must be aware of the potential risk of cardiac failure associated with ibrutinib, and the monitoring of any signs or symptoms in treated patients is essential. The weighted cumulative evidence identified from the causality assessment of the reported cases and from data mining is sufficient to support a causal association between ibrutinib and cardiac failure.
Keywords: cardiac failure, drug safety, ibrutinib, pharmacovigilance, signal detection
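A minimal sketch of the information component computation described above, with invented report counts; the +0.5 shrinkage terms follow the commonly published form of the IC, which may differ in detail from the exact VigiBase implementation.

```python
import numpy as np

n_total = 20_000_000   # all reports in the database (invented)
n_drug = 15_000        # reports mentioning the drug (invented)
n_adr = 40_000         # reports mentioning the reaction (invented)
n_observed = 212       # reports mentioning both

n_expected = n_drug * n_adr / n_total
# The +0.5 terms are the shrinkage commonly quoted for the IC formula.
ic = np.log2((n_observed + 0.5) / (n_expected + 0.5))
print(f"expected = {n_expected:.1f}, IC = {ic:.2f}")  # IC > 0: more than expected
```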
Procedia PDF Downloads 129
3489 Support Provided by Midwives to Women during Labour in a Public Hospital, Limpopo Province, South Africa: A Participant Observation Study
Authors: Sonto Maputle
Abstract:
Background: Support during labour increases women's chances of having positive childbirth experiences as well as positive childbirth outcomes. The purpose of this study was to determine the support provided by midwives to women during labour at a public hospital in Limpopo Province. The study was conducted at a tertiary hospital in Limpopo Province. Methods: A qualitative, participant observation approach was used. The population consisted of all women admitted to deliver their babies and the midwives who provided midwifery care in the obstetric unit of one tertiary public hospital in Limpopo Province. Non-probability, purposive and convenience sampling were used to sample 24 women and 12 midwives. Data were collected through participant observation, which included unstructured conversations guided by an observational guide; field notes of events and of conversations that occurred when women interacted with midwives were recorded verbatim, and a Visual Analog Scale was used to complement the observations. Data were analysed qualitatively and presented in tables and bar graphs. Results: Five themes emerged as support provided by midwives during labour, namely: communication between women and midwives, informational support, emotional support activities, interpretation of the experienced labour pain, and supportive care activities during labour. Conclusion: Communication occurred while the midwife was rendering midwifery care and was very limited in its empowering role. Information sharing focused on assistive actions rather than on activities that would promote mothers' participation. The emotional support activities indicated a lack of respect and a disregard of cultural preferences, and this contributed to an inability to exercise choices in decision-making. The study recommended the implementation of Batho Pele principles in order to provide woman-centred care during labour.
Keywords: communication between women and midwives, labour pains, informational and emotional support, physical comforting measures
Procedia PDF Downloads 152
3488 Impact of Changes in Travel Behavior Triggered by the Covid-19 Pandemic on Tourist Infrastructure: Water Reservoirs of the Vltava Cascade (Czechia) Case Study
Authors: Jiří Vágner, Dana Fialová
Abstract:
The Covid-19 pandemic and its effects have triggered significant changes in travel behavior. In contrast to the deep decline in international tourism, domestic tourism has recovered, although it has not yet fully replaced the total volume of national tourism. From a regional point of view, however, and especially according to the type of destination, regional targeting has changed significantly compared to the previous period. Urban destinations, which used to be the domain of foreign tourists, have been relatively orphaned, in contrast to destinations tied to natural attractions, which have seen seasonal increases. Even here, at a lower hierarchical geographic level, we can observe differentiation resulting from the existing localization and infrastructure. The case study is focused on the three largest water reservoirs of the Vltava Cascade in Czechia: Lipno, Orlík, and Slapy. Based on a detailed field survey in the periods before and during the pandemic, as well as available statistical data (Tourdata; Czech Statistical Office, Czech Cadaster and Ordnance Survey), different trends in the exploitation of these destinations with regard to existing or planned infrastructure are documented, analyzed and explained. This gives us the opportunity to discuss concrete examples of generally known phenomena that are usually neglected in tourism: the slum, the brownfield, the greenfield. Changes in travel behavior, especially the preference for spending leisure time individually in naturally attractive destinations, can affect the use of sites that can be characterized as a tourist or recreational slum or brownfield, but also as tourist greenfield development. Sociocultural changes and the perception of destinations by tourists and other actors represent, besides environmental changes, major trends in current tourism.
Keywords: Covid-19 pandemic, Czechia, sociocultural and environmental impacts, tourist infrastructure, travel behavior, the Vltava Cascade water reservoirs
Procedia PDF Downloads 146
3487 Effectiveness of Impairment Specified Muscle Strengthening Programme in a Group of Disabled Athletes
Authors: A. L. I. Prasanna, E. Liyanage, S. A. Rajaratne, K. P. A. P. Kariyawasam, A. A. J. Rajaratne
Abstract:
Maintaining or improving the muscle strength of the injured body part is essential to optimize performance among disabled athletes. General conditioning and strengthening exercises might be ineffective if they are not sufficiently intense or not targeted at each participant's specific impairment. A specific strengthening programme, targeted at the affected body part, is essential to improve the strength of impaired muscles, and an increase in strength will help reduce the impact of the disability. Methods: The muscle strength of the hip, knee and ankle joints was assessed in a group of randomly selected disabled athletes using the Medical Research Council (MRC) grading. Those having muscle strength of grade 4 or less were selected for this study (24 in number) and were given a custom-made exercise program designed to strengthen their hip, knee or ankle joint musculature, according to the muscle or group of muscles affected. The effectiveness of the strengthening program was assessed after a period of 3 months. Results: Statistical analysis was done using the Minitab 16 statistical software. A Mann-Whitney U test was used to compare the strength of each muscle group before and after the exercise programme. A significant difference was observed after the three-month strengthening program for the knee flexors (left and right) (P = 0.0889, 0.0312), hip flexors (left and right) (P = 0.0312, 0.0466), hip extensors (left and right) (P = 0.0478, 0.0513), ankle plantar flexors (left and right) (P = 0.0466, 0.0423) and right ankle dorsiflexors (P = 0.0337). No significant difference in strength was observed after the strengthening program in the knee extensors (left and right), hip abductors (left and right) and left ankle dorsiflexors. Conclusion: An impairment-specific exercise programme appears to be beneficial for disabled athletes in significantly improving the muscle strength of the affected joints.
Keywords: muscle strengthening programme, disabled athletes, physiotherapy, rehabilitation sciences
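A minimal sketch of the before/after comparison reported above, with invented MRC grades; the study's actual data and group sizes are not reproduced here.

```python
from scipy.stats import mannwhitneyu

before = [3, 3, 4, 3, 4, 3, 4, 4, 3, 3, 4, 3]   # MRC grades pre-programme
after  = [4, 4, 5, 4, 4, 4, 5, 5, 4, 4, 4, 4]   # MRC grades after 3 months

u_stat, p = mannwhitneyu(before, after, alternative="two-sided")
print(f"U = {u_stat}, p = {p:.4f}")
```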
Procedia PDF Downloads 357
3486 Impact of Fischer-Tropsch Wax on Ethylene Vinyl Acetate/Waste Crumb Rubber Modified Bitumen: An Energy-Sustainability Nexus
Authors: Keith D. Nare, Mohau J. Phiri, James Carson, Chris D. Woolard, Shanganyane P. Hlangothi
Abstract:
In an energy-intensive world, minimizing energy consumption is paramount to saving costs and reducing the carbon footprint. Improving mixture procedures by utilizing the warm-mix additive Fischer-Tropsch (FT) wax in ethylene vinyl acetate (EVA)-modified bitumen highlights a greener and more sustainable approach to modified bitumen. In this study, the impact of FT wax on optimized EVA/waste crumb rubber modified bitumen is assayed with a maximum loading of 2.5%. The rationale of the FT wax loading is to maintain the original maximum loading of EVA in the optimized mixture. The phase-change abilities of FT wax enable EVA co-crystallization with the support of the elastomeric backbone of the crumb rubber. Less than 1% loading of FT wax worked in the EVA/crumb rubber modified bitumen energy-sustainability nexus. A response surface methodology approach to the mixture design is implemented across the different loadings of FT wax and EVA for a consistent amount of crumb rubber and bitumen. Rheological parameters (complex shear modulus, phase angle and rutting parameter) were the factors used as performance indicators of the different optimized mixtures. The low-temperature chemistry of the optimized mixtures is analyzed using elementary beam theory and the elastic-viscoelastic correspondence principle. Master curves and black space diagrams are developed and used to predict age-induced cracking of the different long-term aged mixtures. Modified binder rheology reveals that the strain response is not linear and that there is substantial re-arrangement of polymer chains as stress is increased, depending on the age state of the mixture and the FT wax and EVA loadings. Dominance of individual effects is evident over effects of synergy in the co-interaction of EVA and FT wax. All-inclusive FT wax and EVA formulations were best optimized in mixture 4, with mixture 7 reflecting an increase in ease of workability. Findings show that the interaction chemistry of bitumen, crumb rubber, EVA, and FT wax is first and second order in all cases involving individual contributions and co-interactions among the components of the mixture.
Keywords: bitumen, crumb rubber, ethylene vinyl acetate, FT wax
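A hedged sketch of the response-surface step named above: a second-order model of a rheological response in the FT wax and EVA loadings. All data values are invented; only the quadratic model form (main effects, interaction, squared terms) is the point.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "wax": [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 0.0, 1.0, 2.0],   # FT wax %
    "eva": [2.0, 2.0, 2.0, 3.0, 3.0, 3.0, 4.0, 4.0, 4.0],   # EVA %
    "rut": [1.10, 1.18, 1.22, 1.30, 1.41, 1.38, 1.35, 1.52, 1.58],
})
# Full quadratic response surface for a rutting-type response.
model = smf.ols("rut ~ wax + eva + wax:eva + I(wax**2) + I(eva**2)", df).fit()
print(model.params)
```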
Procedia PDF Downloads 173
3485 Information Visualization Methods Applied to Nanostructured Biosensors
Authors: Osvaldo N. Oliveira Jr.
Abstract:
The control of molecular architecture inherent in some experimental methods to produce nanostructured films has had great impact on devices of various types, including sensors and biosensors. The self-assembled monolayer (SAM) and electrostatic layer-by-layer (LbL) techniques, for example, are now routinely used to produce tailored architectures for biosensing where biomolecules are immobilized with long-lasting preserved activity. Enzymes, antigens, antibodies, peptides and many other molecules serve as the molecular recognition elements for detecting an equally wide variety of analytes. The principles of detection are also varied, including electrochemical methods, fluorescence spectroscopy and impedance spectroscopy. In this presentation, an overview will be provided of biosensors made with nanostructured films to detect antibodies associated with tropical diseases and HIV, in addition to the detection of analytes of medical interest such as cholesterol and triglycerides. Because large amounts of data are generated in biosensing experiments, computational and statistical methods have been used to optimize performance. Multidimensional projection techniques such as Sammon's mapping have been shown to be more efficient than traditional multivariate statistical analysis in identifying small concentrations of anti-HIV antibodies and in distinguishing between blood serum samples of animals infected with two tropical diseases, namely Chagas' disease and Leishmaniasis. Optimization of biosensing may include combining another information visualization method, the parallel coordinates technique, with artificial intelligence methods in order to identify the most suitable frequencies for reaching higher sensitivity using impedance spectroscopy. Also discussed will be the possible convergence of technologies, through which machine learning and other computational methods may be used to treat data from biosensors within an expert system for clinical diagnosis.
Keywords: clinical diagnosis, information visualization, nanostructured films, layer-by-layer technique
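A minimal sketch of Sammon's mapping as mentioned above: minimizing the Sammon stress between pairwise distances in the original space and in a 2-D projection. Random vectors stand in for biosensor feature data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist

rng = np.random.default_rng(3)
X = rng.standard_normal((30, 10))          # 30 samples, 10 features
d_orig = pdist(X) + 1e-12                  # original pairwise distances

def sammon_stress(flat_y):
    # Sammon stress: weighted squared mismatch of pairwise distances.
    d_proj = pdist(flat_y.reshape(-1, 2))
    return np.sum((d_orig - d_proj) ** 2 / d_orig) / d_orig.sum()

y0 = rng.standard_normal(X.shape[0] * 2) * 0.1
res = minimize(sammon_stress, y0, method="L-BFGS-B")
projection = res.x.reshape(-1, 2)          # 2-D coordinates for plotting
print(f"final Sammon stress: {res.fun:.4f}")
```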
Procedia PDF Downloads 337
3484 Determination of Anti-Fungal Activity of Cedrus deodara Oil against Oligoporus placentus, Trametes versicolor and Xylaria acuminata on Populus deltoides
Authors: Sauradipta Ganguly, Akhato Sumi, Sanjeet Kumar Hom, Ajan T. Lotha
Abstract:
Populus deltoides is a hardwood used predominantly for the manufacturing of plywood, matchsticks, and paper in India and hence has high economic significance. Wood-decaying fungi cause serious damage to Populus deltoides products, as the wood itself is perishable and vulnerable to decaying agents, decreasing its aesthetic value, which in turn results in significant monetary loss for the wood industries concerned. The aim of the study was to determine the antifungal activity of Cedrus deodara oil against three primary wood-decaying fungi, namely white-rot fungi (Trametes versicolor), brown-rot fungi (Oligoporus placentus) and soft-rot fungi (Xylaria acuminata), on Populus deltoides samples under optimum laboratory conditions. The susceptibility of the Populus deltoides samples to fungal attack and the ability of deodar oil to control colonization of the wood-rotting fungi on the samples were assessed. Three concentrations of deodar oil were considered for the study as treating solutions, i.e., 4%, 5%, and 6%. The Populus deltoides samples were treated with the treating solutions, and the ability of each to prevent fungal attack on the samples was assessed in an accelerated laboratory test in a Biochemical Oxygen Demand incubator at a temperature of 25 ± 2°C and relative humidity of 70 ± 4%. The efficacy test and statistical analysis of deodar oil against Trametes versicolor, Oligoporus placentus, and Xylaria acuminata on P. deltoides samples exhibited light, minor and negligible mycelial growth at 4%, 5% and 6% concentrations of deodar oil, respectively, whereas moderate to heavy attack was observed on the surface of the control samples. Statistical analysis further established that the treatments were statistically significant and had inhibited fungal growth of all three fungal species by almost 3 to 5 times.
Keywords: Populus deltoides, Trametes versicolor, Oligoporus placentus, Xylaria acuminata, deodar oil, treatment
Procedia PDF Downloads 125
3483 Examining Social Connectivity through Email Network Analysis: Study of Librarians' Emailing Groups in Pakistan
Authors: Muhammad Arif Khan, Haroon Idrees, Imran Aziz, Sidra Mushtaq
Abstract:
Social platforms like online discussion and mailing groups are well aligned with academic as well as professional learning spaces. Professional communities are increasingly moving to online forums for sharing and capturing intellectual output. This study investigated the dynamics of social connectivity of Yahoo mailing groups of Pakistani Library and Information Science (LIS) professionals using graph theory techniques. Design/Methodology: Social network analysis is a domain of increasing interest for scientists in identifying whether people grow together through online social interaction or whether they merely reflect connectivity. We conducted a longitudinal study using network graph theory techniques to analyze a large dataset of email communication. The data were collected from three Yahoo mailing groups using network analysis software over a period of six months, i.e., January to June 2016. Findings of the network analysis were reviewed through focus group discussion with LIS experts and selected respondents of the study. Data were analyzed in Microsoft Excel, and network diagrams were visualized using NodeXL and the ORA-NetScenes package. Findings: The findings demonstrate that professionals and students exhibit intellectual growth the more they become tied into a network by interacting and participating in communication through online forums. The study reports on the dynamics of the large network by visualizing the email correspondence among group members in a network consisting of vertices (members) and edges (randomized correspondence). The pairwise relationships between group members are illustrated to show the characteristics, reasons, and strength of ties. The connectivity of nodes illustrates the frequency of communication among group members; node coupling, network diffusion, and node clustering are examined in depth. Network analysis was found to be a useful technique for investigating the dynamics of a large network.
Keywords: emailing networks, network graph theory, online social platforms, yahoo mailing groups
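A minimal sketch of the graph measures discussed above using NetworkX, with an invented edge list standing in for the email correspondence between group members.

```python
import networkx as nx

# Vertices = group members, edges = email correspondence (invented).
edges = [("aslam", "bina"), ("bina", "chen"), ("chen", "aslam"),
         ("dara", "bina"), ("erum", "dara"), ("farah", "erum")]
G = nx.Graph(edges)

print("degree centrality:", nx.degree_centrality(G))
print("clustering coefficients:", nx.clustering(G))
print("connected components:", [sorted(c) for c in nx.connected_components(G)])
```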
Procedia PDF Downloads 239
3482 Analysis of NMDA Receptor 2B Subunit Gene (GRIN2B) mRNA Expression in the Peripheral Blood Mononuclear Cells of Alzheimer's Disease Patients
Authors: Ali Bayram, Semih Dalkilic, Remzi Yigiter
Abstract:
The N-methyl-D-aspartate (NMDA) receptor is a subtype of glutamate receptor and plays a pivotal role in learning, memory, neuronal plasticity, neurotoxicity and synaptic mechanisms. Animal experiments have suggested that glutamate-induced excitotoxic injury and NMDA receptor blockade lead to amnesia and other neurodegenerative diseases, including Alzheimer's disease (AD), Huntington's disease, and amyotrophic lateral sclerosis. The aim of this study is to investigate the association between expression of the NMDA receptor subunit gene GRIN2B and Alzheimer's disease. The study was approved by the local ethics committees, and it was conducted according to the principles of the Declaration of Helsinki and the guidelines for Good Clinical Practice. Peripheral blood was collected from 50 patients diagnosed with AD and 49 healthy control individuals. Total RNA was isolated with the RNeasy midi kit (Qiagen) according to the manufacturer's instructions. After RNA quality and quantity were checked with a spectrophotometer, GRIN2B expression levels were measured by quantitative real-time PCR (qRT-PCR). Statistical analyses were performed; variance between the two groups was compared with the Mann-Whitney U test in the GraphPad InStat package with a 95% confidence interval and p < 0.05. After statistical analyses, we determined that GRIN2B expression levels were down-regulated in the AD patient group with respect to the control group, although the expression level of this gene showed high variability within each group. In this study, we determined that the expression level of the NMDA receptor subunit gene GRIN2B was down-regulated in AD patients compared with healthy control individuals. According to our results, we speculate that GRIN2B expression level is associated with AD, but it is necessary to validate these results with a bigger sample size.
Keywords: Alzheimer's disease, N-methyl-d-aspartate receptor, NR2B, GRIN2B, mRNA expression, RT-PCR
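The abstract does not state how relative expression was computed from the qRT-PCR data; the sketch below shows the common 2^-ΔΔCt approach with invented Ct values (GRIN2B normalized to a housekeeping gene such as GAPDH), purely as an illustration.

```python
# Invented cycle-threshold (Ct) values; lower Ct = more transcript.
ct_grin2b_patient, ct_ref_patient = 29.1, 18.2   # AD patient sample
ct_grin2b_control, ct_ref_control = 27.6, 18.0   # healthy control sample

delta_patient = ct_grin2b_patient - ct_ref_patient
delta_control = ct_grin2b_control - ct_ref_control
fold_change = 2 ** -(delta_patient - delta_control)
print(f"relative GRIN2B expression: {fold_change:.2f}")  # <1 = down-regulated
```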
Procedia PDF Downloads 394
3481 Count Data Regression Modeling: An Application to Spontaneous Abortion in India
Authors: Prashant Verma, Prafulla K. Swain, K. K. Singh, Mukti Khetan
Abstract:
Objective: In India, around 20,000 women die every year due to abortion-related complications. In the modelling of count variables, there is sometimes a preponderance of zero counts. This article concerns the estimation of various count regression models to predict the average number of spontaneous abortions among women in the Punjab state of India. It also assesses the factors associated with the number of spontaneous abortions. Materials and methods: The study included 27,173 married women of Punjab obtained from the DLHS-4 survey (2012-13). Poisson regression (PR), negative binomial (NB) regression, zero hurdle negative binomial (ZHNB), and zero-inflated negative binomial (ZINB) models were employed to predict the average number of spontaneous abortions and to identify the determinants affecting the number of spontaneous abortions. Results: Statistical comparisons among the four estimation methods revealed that the ZINB model provides the best prediction for the number of spontaneous abortions. Antenatal care (ANC) place, place of residence, total children born to a woman, woman's education and economic status were found to be the most significant factors affecting the occurrence of spontaneous abortion. Conclusions: The study offers a practical demonstration of techniques designed to handle count variables. Statistical comparisons among the four estimation models revealed that the ZINB model provided the best prediction for the number of spontaneous abortions, and it is recommended for predicting the number of spontaneous abortions. The study suggests that women receive institutional antenatal care to attain limited parity. It also advocates promoting higher education among women in Punjab, India.
Keywords: count data, spontaneous abortion, Poisson model, negative binomial model, zero hurdle negative binomial, zero-inflated negative binomial, regression
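A minimal sketch of the recommended ZINB fit using statsmodels, with simulated zero-inflated counts and a single covariate standing in for the DLHS-4 variables (education, residence, parity, etc.).

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(4)
n = 2000
education = rng.integers(0, 15, n).astype(float)  # years of schooling
X = sm.add_constant(education)

# Simulated counts with excess zeros (a toy stand-in for abortion counts).
counts = rng.poisson(0.3, n) * rng.binomial(1, 0.25, n)

model = ZeroInflatedNegativeBinomialP(counts, X, exog_infl=X, p=2)
result = model.fit(method="bfgs", maxiter=500, disp=0)  # warnings possible
print(result.params)
```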
Procedia PDF Downloads 155
3480 When Conducting an Analysis of Workplace Incidents, It Is Imperative to Meticulously Calculate Both the Frequency and Severity of Injuries Sustained
Authors: Arash Yousefi
Abstract:
Experts suggest that relying exclusively on parameters to convey a situation or establish a condition may not be adequate. Assessing and appraising incidents in a system based on accident parameters, such as accident frequency, lost workdays, or fatalities, may not always be precise and can occasionally be erroneous. The accident frequency rate is a metric that assesses the correlation between the number of accidents causing lost work time due to injuries and the total working hours of personnel over a year. Traditionally, this has been calculated on a base of one million working hours, but the US Occupational Safety and Health Administration has updated its standards, and a new base of 200,000 working hours is now used to compute the accident frequency rate. It is crucial that the total working hours of employees are represented consistently when calculating individual event and incident figures. The accident severity rate is a metric used to determine the amount of time lost or wasted during a given period, often a year, in relation to the total number of working hours. It measures the proportion of work hours lost compared to the total number of useful working hours, which provides valuable insight into the number of days lost due to work-related incidents per working hour. Calculating the severity of an incident can be difficult if a worker suffers permanent disability or death; to determine lost days in such cases, the coefficients specified in the tables of equivalent days for disabling injuries in the OSHA or ANSI standards are used. The accident frequency coefficient denotes the rate at which accidents occur, while the accident severity coefficient specifies the extent of the damage and injury caused by those accidents. These coefficients are crucial in accurately assessing the magnitude and impact of accidents.
Keywords: incidents, safety, analysis, frequency, severity, injuries, determine
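A minimal sketch of the two rates described above on the 200,000-hour base (roughly 100 full-time workers for one year); the example figures are invented.

```python
def frequency_rate(lost_time_injuries: int, hours_worked: float) -> float:
    """Lost-time injuries per 200,000 hours worked."""
    return lost_time_injuries * 200_000 / hours_worked

def severity_rate(lost_days: float, hours_worked: float) -> float:
    """Days lost (including standard-table equivalents) per 200,000 hours."""
    return lost_days * 200_000 / hours_worked

hours = 450_000  # example: roughly 225 workers for a year
print(frequency_rate(9, hours))   # 4.0 injuries per 200,000 h
print(severity_rate(120, hours))  # 53.3 lost days per 200,000 h
```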
Procedia PDF Downloads 91
3479 Bright Light Effects on Concentration and Diffuse Attention Reaction Time, Tension, Anger, Fatigue and Alertness among Shift Workers
Authors: Mohammad Imani, JabraeilNasl Seraji, Abolfazl Zakerian
Abstract:
Background: Reaction time is the amount of time it takes to respond to a stimulus, that is, the time that passes between the presentation of a stimulus and the subject's reaction to it. Several stimuli, including environmental factors such as bright light, can shorten or lengthen reaction time. The aim of this interventional study was to evaluate the effects of bright light on concentration and diffuse attention reaction time, tension, anger, fatigue and alertness among shift workers. Material & Method: This cross-sectional descriptive study was conducted in 1391 (Iranian calendar) on 88 subjects (44 fixed morning workers and 44 shift workers). After sample size calculation and random selection of subjects, the concentration and diffuse attention test (reaction time) was administered at eight points over 24 h (13:00, 16:00, 19:00, 22:00, 01:00, 04:00, 07:00 and 10:00) under ordinary light conditions. After the intervention, with the use of bright light (4500 lux), the reaction time test was repeated. The obtained data were analyzed with the statistical software SPSS 19 using t-tests and ANOVA. Results: Between the average reaction times under ordinary light in fixed morning workers and under bright light in shift workers, there was no significant relationship (95% CI, P > 0.05). After the intervention and the use of bright light (4500 lux), there was a significant relationship between the average concentration and diffuse attention reaction times under ordinary light in the fixed morning workers and under bright light in the shift workers (95% CI, P < 0.05). Conclusion: At some times over the 24 h of ordinary light exposure, concentration and diffuse attention reaction time changed in shift workers. After the intervention, during bright light (4500 lux) exposure as a light shower, focused and diffuse attention reaction times, tension, anger and fatigue decreased.
Keywords: bright light, reaction time, tension, anger, fatigue, alertness
Procedia PDF Downloads 385
3478 Variability of Energy Efficiency with the Application of Technologies Embedded in Locomotives of a Heavy Haul Railway: Case Study of Vitoria Minas Railway, Brazil
Authors: Eric Wilson Santos Cabral, Marta Monteiro Da Costa Cruz, Rodrigo Pirola Pestana, Vivian Andréa Parreira
Abstract:
The transportation sector in Brazil faces a major challenge: maintaining profit in the face of large variations in the price of diesel, which directly affects the variable cost of transport companies. For the railways, a large part of the challenge is to beat the annual budget for cargo and ore transported, thus reducing costs compared to previous years and becoming more efficient each year. Within this scenario, railway companies are looking for effective measures aimed at reducing the ratio of liters of diesel consumed per KTKB (gross ton-kilometers multiplied by one thousand). This ratio is the energy-efficiency indicator of several railroads in Brazil and in other countries. In this study, we analyzed the behavior of the energy-efficiency indicator in two parts: first, the application of technologies used in locomotives, such as the start-stop system for the diesel engine and the fuel tracking and monitoring system; second, an evaluation of the behavior of the variation in the type of cargo transported (loading mix). The part of the study focused on locomotive technology will be carried out using statistical analysis and behavioral evaluation under different operating conditions, such as maneuvers for trains, service trains and freight trains. The analysis will also cover the evaluation of the loading mix using statistical analysis of the existing railroad database, comparing the energy efficiency per loading mine and type of product. With the completion of this study, railway undertakings should be able to better target decision-making in order to achieve substantial reductions in transport costs.
Keywords: railway transport, energy efficiency, railway technology, fuel consumption
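A minimal sketch of the liters-per-KTKB indicator described above, with invented figures.

```python
def liters_per_ktkb(liters: float, gross_tons: float, km: float) -> float:
    """Diesel consumed per thousand gross ton-kilometers (KTKB)."""
    return liters / (gross_tons * km / 1000.0)

# Example: a 10,000 gross-ton train over 500 km burning 6,000 liters.
print(f"{liters_per_ktkb(6000, 10000, 500):.2f} L/KTKB")  # 1.20
```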
Procedia PDF Downloads 304
3477 A Hierarchical Bayesian Calibration of Data-Driven Models for Composite Laminate Consolidation
Authors: Nikolaos Papadimas, Joanna Bennett, Amir Sakhaei, Timothy Dodwell
Abstract:
Composite modeling of consolidation processes plays an important role in process and part design by indicating the possible formation of unwanted defects prior to expensive experimental iterative trial-and-development programs. Composite materials in their uncured state display complex constitutive behavior, which has received much academic interest, and different models have been proposed for it. Modeling and statistical errors arising from this fitting will propagate through any simulation in which the material model is used. A general hyperelastic polynomial representation was proposed, which can be readily implemented in various nonlinear finite element packages; in our case, FEniCS was chosen. The coefficients are assumed uncertain, and the distribution of parameters is therefore learned using Markov Chain Monte Carlo (MCMC) methods. In engineering, the approach often followed is to select a single set of model parameters which, on average, best fits a set of experiments. There are good statistical reasons why this is not a rigorous approach. To overcome these challenges, a hierarchical Bayesian framework was proposed in which the population distribution of model parameters is inferred from an ensemble of experimental tests. The resulting sampled distribution of hyperparameters is approximated using maximum entropy methods so that the distribution of samples can be readily sampled when embedded within a stochastic finite element simulation. The methodology is validated and demonstrated on a set of consolidation experiments of AS4/8852 with various stacking sequences. The resulting distributions are then applied to stochastic finite element simulations of the consolidation of curved parts, leading to a distribution of possible model outputs. With this, the paper, as far as the authors are aware, represents the first stochastic finite element implementation in composite process modelling.
Keywords: data-driven, material consolidation, stochastic finite elements, surrogate models
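A hedged sketch of the MCMC calibration step: a random-walk Metropolis sampler inferring one material coefficient from noisy synthetic data. The linear model and flat prior are placeholders for the paper's hyperelastic polynomial representation.

```python
import numpy as np

rng = np.random.default_rng(5)
strain = np.linspace(0.01, 0.1, 20)
stress_obs = 250.0 * strain + rng.normal(0, 0.5, strain.size)  # synthetic data

def log_posterior(theta, sigma=0.5):
    if theta <= 0:                       # flat positive prior
        return -np.inf
    resid = stress_obs - theta * strain  # placeholder constitutive model
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

samples, theta = [], 100.0
log_p = log_posterior(theta)
for _ in range(5000):
    prop = theta + rng.normal(0, 5.0)    # random-walk proposal
    log_p_prop = log_posterior(prop)
    if np.log(rng.uniform()) < log_p_prop - log_p:   # Metropolis acceptance
        theta, log_p = prop, log_p_prop
    samples.append(theta)

post = np.array(samples[1000:])          # drop burn-in
print(f"theta: mean {post.mean():.1f}, sd {post.std():.1f}")
```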
Procedia PDF Downloads 146
3476 A Web and Cloud-Based Measurement System Analysis Tool for the Automotive Industry
Authors: C. A. Barros, Ana P. Barroso
Abstract:
Any industrial company needs to determine the amount of variation that exists within its measurement process and guarantee the reliability of its data by studying the performance of its measurement system in terms of linearity, bias, repeatability, reproducibility and stability. This issue is critical for automotive industry suppliers, who are required to be certified to the IATF 16949:2016 standard (which replaces ISO/TS 16949) of the International Automotive Task Force, defining the requirements of a quality management system for companies in the automotive industry. Measurement System Analysis (MSA) is one of the mandatory tools. Frequently, the measurement system in companies is not connected to the equipment and does not incorporate the methods proposed by the Automotive Industry Action Group (AIAG). To address these constraints, an R&D project is in progress whose objective is to develop a web and cloud-based MSA tool. This MSA tool incorporates Industry 4.0 concepts, such as Internet of Things (IoT) protocols to assure the connection with the measuring equipment, cloud computing, artificial intelligence, statistical tools, and advanced mathematical algorithms. This paper presents the preliminary findings of the project. The web and cloud-based MSA tool is innovative because it implements all the statistical tests proposed in the MSA-4 reference manual from AIAG as well as other emerging methods and techniques. As it is integrated with the measuring devices, it reduces the manual input of data and therefore the errors. The tool ensures traceability of all performed tests and can be used in quality laboratories and on production lines. Besides, it monitors MSAs over time, allowing both the analysis of deviations from the variation of the measurements performed and the management of measurement equipment and calibrations. To develop the MSA tool, a ten-step approach was implemented. Firstly, a benchmarking analysis of the current competitors and commercial solutions linked to MSA was performed, concerning the Industry 4.0 paradigm. Next, an analysis of the size of the target market for the MSA tool was done. Afterwards, data flow and traceability requirements were analysed in order to implement an IoT data network that interconnects with the equipment, preferably via wireless. The MSA web solution was designed under UI/UX principles, and an API in the Python language was developed to perform the algorithms and the statistical analysis. Continuous validation of the tool by companies is being performed to assure real-time management of the 'big data'. The main results of this R&D project are: the MSA tool, web and cloud-based; the Python API; new algorithms for the market; and the UI/UX style guide of the tool. The proposed MSA tool adds value to the state of the art as it ensures an effective response to the new challenges of measurement systems, which are increasingly critical in production processes. Although the automotive industry has triggered the development of this innovative MSA tool, other industries would also benefit from it. Currently, companies from the molds and plastics, chemical and food industries are already validating it.
Keywords: automotive industry, industry 4.0, Internet of Things, IATF 16949:2016, measurement system analysis
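A minimal sketch of the ANOVA-based Gage R&R study that sits at the core of the MSA-4 manual, for a crossed parts-by-operators design with invented measurements; the tool described above automates studies of this kind.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(6)
parts, operators, trials = 5, 3, 3
rows = []
for p in range(parts):
    true = 10 + p * 0.5                         # part-to-part variation
    for o in range(operators):
        bias = rng.normal(0, 0.05)              # operator (reproducibility)
        for _ in range(trials):
            rows.append((f"P{p}", f"O{o}", true + bias + rng.normal(0, 0.08)))
df = pd.DataFrame(rows, columns=["part", "operator", "y"])

aov = anova_lm(smf.ols("y ~ C(part) * C(operator)", df).fit(), typ=2)
ms = aov["sum_sq"] / aov["df"]                  # mean squares
ms_part, ms_oper = ms["C(part)"], ms["C(operator)"]
ms_int, ms_err = ms["C(part):C(operator)"], ms["Residual"]

# Variance components (negative estimates clipped to zero).
repeatability = ms_err
reproducibility = (max(ms_oper - ms_int, 0) / (parts * trials)
                   + max(ms_int - ms_err, 0) / trials)
grr = repeatability + reproducibility
part_var = max(ms_part - ms_int, 0) / (operators * trials)
print(f"%GRR (of total variation): {100 * (grr / (grr + part_var))**0.5:.1f}%")
```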
Procedia PDF Downloads 214
3475 Evaluation of Nuts as a Source of Selenium in Diet
Authors: Renata Markiewicz-Żukowska, Patryk Nowakowski, Sylwia K. Naliwajko, Jakub M. Bołtryk, Katarzyna Socha, Anna Puścion-Jakubik, Jolanta Soroczyńska, Maria H. Borawska
Abstract:
Selenium (Se) is an essential element for human health. As an integral part of glutathione peroxidase, it has antioxidant, anti-inflammatory and anticancer activities. Unfortunately, dietary Se intake is often insufficient, especially in regions where the soil is low in Se. Therefore, in the search for good sources of Se, the content of this element in food products should be monitored. A food product can be considered a source of Se when its standard portion covers above 15% of the recommended daily allowance. In the case of nuts, 42 g is recognized as the standard portion. The aim of this study was to determine the Se content in nuts and to answer the question of whether the studied nuts can be considered a source of Se in the diet. The material for the study consisted of 10 types of nuts (12 samples of each one): almonds, Brazil nuts, cashews, hazelnuts, macadamia nuts, peanuts, pecans, pine nuts, pistachios and walnuts. The nuts were mineralized using a microwave technique (Berghof, Germany). The content of Se was determined by atomic absorption spectrometry with electrothermal atomization in a graphite tube with Zeeman background correction (Hitachi, Japan). The accuracy of the method was verified against certified reference material: Simulated Diet D. The statistical analysis was performed using Statistica v. 13.0 software. Statistical significance was determined at the p < 0.05 level. The highest content of Se was found in Brazil nuts (4566.21 ± 3393.9 µg/kg) and the lowest in almonds (36.07 ± 18.8 µg/kg). A standard portion (42 g) of almonds, Brazil nuts, cashews, hazelnuts, macadamia nuts, peanuts, pecans, pine nuts, pistachios and walnuts covers, respectively, 2, 192, 28, 2, 16, 7, 4, 3, 12 and 6% of the recommended daily allowance for Se. Brazil nuts, cashews and macadamia nuts can be considered a good source of Se in the diet.
Keywords: atomic absorption spectrometry, diet, nuts, selenium
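A minimal sketch of the portion-coverage arithmetic above. The reference daily intake of 100 µg used here is an assumption chosen because it reproduces the quoted percentages; official RDAs differ by country (e.g., 55 µg in the US).

```python
se_ug_per_kg = {"Brazil nuts": 4566.21, "almonds": 36.07}  # measured means
PORTION_KG = 0.042          # the 42 g standard portion
REFERENCE_UG = 100.0        # assumed reference intake, see note above

for nut, se in se_ug_per_kg.items():
    portion_ug = se * PORTION_KG
    print(f"{nut}: {portion_ug:.1f} ug/portion = "
          f"{100 * portion_ug / REFERENCE_UG:.0f}% of reference")
```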
Procedia PDF Downloads 185
3474 Statistical Optimization of Adsorption of a Harmful Dye from Aqueous Solution
Abstract:
Textile industries cater to varied customer preferences and contribute substantially to the economy. However, these textile industries also produce a considerable amount of effluents. Prominent among these are the azo dyes, which impart considerable color and toxicity even at low concentrations. Azo dyes are also used as coloring agents in the food and pharmaceutical industries. Despite their applications, azo dyes are also notorious pollutants and carcinogens. Popular techniques like photo-degradation, biodegradation and the use of oxidizing agents are not applicable for all kinds of dyes, as most of them are stable to these techniques. Chemical coagulation produces a large amount of toxic sludge, which is undesirable, and is also ineffective towards a number of dyes. Most of the azo dyes are stable to UV-visible light irradiation and may even resist aerobic degradation. Adsorption has been the most preferred technique owing to its low cost, high capacity and process efficiency and the possibility of regenerating and recycling the adsorbent. Adsorption is also most preferred because it may produce a high quality of treated effluent and it is able to remove different kinds of dyes. However, the adsorption process is influenced by many variables whose inter-dependence makes it difficult to identify optimum conditions. The variables include stirring speed, temperature, initial concentration and adsorbent dosage. Further, the internal diffusional resistance inside the adsorbent particle leads to slow uptake of the solute within the adsorbent. Hence, it is necessary to identify optimum conditions that lead to a high capacity and uptake rate for these pollutants. In this work, commercially available activated carbon was chosen as the adsorbent owing to its high surface area. A typical azo dye found in textile effluent waters, viz. the monoazo Acid Orange 10 dye (CAS: 1936-15-8), was chosen as the representative pollutant. Adsorption studies were mainly focused on obtaining equilibrium and kinetic data for the batch adsorption process at different process conditions. Studies were conducted at different stirring speed, temperature, adsorbent dosage and initial dye concentration settings. The full factorial design was the chosen statistical design framework for carrying out the experiments and identifying the important factors and their interactions. The optimum conditions identified from the experimental model were validated with actual experiments at the recommended settings. The equilibrium and kinetic data obtained were fitted to different models, and the model parameters were estimated, giving more detail about the nature of the adsorption taking place. Critical data required to design batch adsorption systems for the removal of Acid Orange 10 dye and the identification of factors that critically influence the separation efficiency are the key outcomes from this research.
Keywords: acid orange 10, activated carbon, optimum adsorption conditions, statistical design
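A minimal sketch of a two-level full factorial screen over the four factors named above (stirring speed, temperature, dosage, initial dye concentration); the removal percentages are invented, and the coded design matrix is the point.

```python
import itertools
import pandas as pd
import statsmodels.formula.api as smf

factors = ["speed", "temp", "dose", "conc"]
design = pd.DataFrame(list(itertools.product([-1, 1], repeat=4)),
                      columns=factors)          # 2^4 = 16 coded runs
# Invented response: strong dose and temperature effects plus an interaction.
design["removal"] = (70 + 8 * design["dose"] + 5 * design["temp"]
                     + 3 * design["dose"] * design["temp"])

# Main effects plus all two-way interactions.
model = smf.ols("removal ~ (speed + temp + dose + conc) ** 2", design).fit()
print(model.params.round(2))
```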
Procedia PDF Downloads 169
3473 Mediation Analysis of the Efficacy of Nimotuzumab-Cisplatin-Radiation (NCR) in Improving Overall Survival (OS): An HPV Negative Oropharyngeal Cancer Patient (HPVNOCP) Cohort
Authors: Akshay Patil
Abstract:
Objective: Mediation analysis identifies causal pathways by testing the relationships between the NCR treatment, OS, and an intermediate variable that mediates the relationship between NCR and OS. Introduction: In randomized controlled trials, a primary interest is in the mechanisms by which an intervention exerts its effects on the outcomes. Clinicians are often interested in how the intervention works (or why it does not work) through hypothesized causal mechanisms. In this work, we highlight the value of understanding causal mechanisms in randomized trials by applying causal mediation analysis to a randomized trial in oncology. Methods: Data were obtained from a phase III randomized trial (subgroup of HPVNOCP). NCR is reported to significantly improve the OS of patients with locally advanced head and neck cancer undergoing definitive chemoradiation. Here, based on the trial data, the mediating effect of NCR on patient overall survival was systematically quantified through progression-free survival (PFS), disease-free survival (DFS), loco-regional failure (LRF), the disease control rate (DCR), and the overall response rate (ORR). Effects of potential mediators on the HR for OS with NCR versus cisplatin-radiation (CR) were analyzed by Cox regression models. Statistical analyses were performed using R software version 3.6.3 (The R Foundation for Statistical Computing). Results: The potential mediator PFS showed an association between NCR treatment and OS, with an indirect effect (IE) of 0.76 (0.62-0.95), which mediated 60.69% of the treatment effect. Taking into account baseline confounders, the overall adjusted hazard ratio of death was 0.64 (95% CI: 0.43-0.96; P=0.03). DFS was also a significant mediator, with an IE of 0.77 (95% CI: 0.62-0.93; 58% mediated). Smaller mediation effects (maximum 27%) were observed for LRF, with an IE of 0.88 (0.74-1.06). DCR and ORR mediated 10% and 15%, respectively, of the effect of NCR vs. CR on OS, with IEs of 0.65 (95% CI: 0.81-1.08) and 0.94 (95% CI: 0.79-1.04). Conclusion: Our findings suggest that PFS and DFS were the most important mediators of the OS benefit of adding nimotuzumab to weekly cisplatin-radiation in HPVNOCP.
Keywords: mediation analysis, cancer data, survival, NCR, HPV negative oropharyngeal
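A short sketch of the proportion-mediated calculation on the log-hazard scale, which approximately reproduces the figures quoted above (e.g., PFS: IE HR 0.76 against a total HR of 0.64 gives about 61% mediated); this decomposition is an assumption about how the percentages were derived, not a confirmed description of the authors' code.

```python
import math

def proportion_mediated(hr_indirect: float, hr_total: float) -> float:
    """Share of the total log-hazard effect carried by the indirect path."""
    return math.log(hr_indirect) / math.log(hr_total)

hr_total = 0.64
for mediator, hr_ie in {"PFS": 0.76, "DFS": 0.77, "ORR": 0.94}.items():
    pm = 100 * proportion_mediated(hr_ie, hr_total)
    print(f"{mediator}: {pm:.0f}% mediated")
```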
Procedia PDF Downloads 145
3472 Choosing an Optimal Epsilon for Differentially Private Arrhythmia Analysis
Authors: Arin Ghazarian, Cyril Rakovski
Abstract:
Differential privacy has become the leading technique to protect the privacy of individuals in a database while allowing useful analysis to be done and the results to be shared. It puts a guarantee on the amount of privacy loss in the worst-case scenario. Differential privacy is not a toggle between full privacy and zero privacy; it controls the tradeoff between the accuracy of the results and the privacy loss using a single key parameter called epsilon.
Keywords: arrhythmia, cardiology, differential privacy, ECG, epsilon, medical data, privacy preserving analytics, statistical databases
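A minimal sketch of how epsilon governs the accuracy/privacy tradeoff in the standard Laplace mechanism, the simplest epsilon-differentially-private release; the count query and cohort are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
true_count = 132           # e.g. patients with a given arrhythmia type
sensitivity = 1            # a count changes by at most 1 per individual

# Smaller epsilon -> larger noise scale -> stronger privacy, less accuracy.
for epsilon in (0.1, 1.0, 10.0):
    noisy = true_count + rng.laplace(scale=sensitivity / epsilon)
    print(f"epsilon = {epsilon:>4}: released count = {noisy:.1f}")
```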
Procedia PDF Downloads 152
3471 Enhancing Cultural Heritage Data Retrieval by Mapping COURAGE to CIDOC Conceptual Reference Model
Authors: Ghazal Faraj, Andras Micsik
Abstract:
The CIDOC Conceptual Reference Model (CRM) is an extensible ontology that provides integrated access to heterogeneous and digital datasets. The CIDOC-CRM offers a "semantic glue" intended to promote accessibility to several diverse and dispersed sources of cultural heritage data. That is achieved by providing a formal structure for the implicit and explicit concepts and their relationships in the cultural heritage field. The COURAGE ("Cultural Opposition - Understanding the CultuRal HeritAGE of Dissent in the Former Socialist Countries") project aimed to explore methods of cultural resistance during the socialist era of 1950-1990 and was planned to serve as a basis for further narratives and digital humanities (DH) research. The project highlights the diversity of the alternative cultural scenes that flourished in Eastern Europe before 1989. The COURAGE dataset is an online RDF-based registry that consists of historical people, organizations, collections, and featured items. To increase the interlinking between different datasets and retrieve more relevant data from various data silos, a shared federated ontology for reconciled data is needed. As a first step towards these goals, a full understanding of the CIDOC CRM ontology (the target ontology) as well as of the COURAGE dataset was required to start the work. Subsequently, the queries toward the ontology were determined, and a table of equivalent properties from COURAGE and CIDOC CRM was created. Structural diagrams that clarify the mapping process and the construction of queries are in progress for mapping person, organization, and collection entities to the ontology. Through mapping the COURAGE dataset to the CIDOC-CRM ontology, the dataset will have a common ontological foundation with several other datasets. Therefore, the expected results are: 1) retrieving more detailed data about existing entities, 2) retrieving data about new entities, 3) aligning the COURAGE dataset to a standard vocabulary, and 4) running distributed SPARQL queries over several CIDOC-CRM datasets and testing the potential of distributed query answering using SPARQL. The next plan is to map CIDOC-CRM to other upper-level ontologies or large datasets (e.g., DBpedia, Wikidata) and address similar questions on a wide variety of knowledge bases.
Keywords: CIDOC CRM, cultural heritage data, COURAGE dataset, ontology alignment
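A minimal rdflib sketch of the kind of mapping targeted above: typing a record as a CIDOC-CRM E21 Person and retrieving it with SPARQL. The COURAGE namespace URL and the record ID are illustrative assumptions, not confirmed identifiers from the registry.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

CRM = Namespace("http://www.cidoc-crm.org/cidoc-crm/")
COURAGE = Namespace("http://courage.example.org/individual/")  # hypothetical

g = Graph()
person = COURAGE["n12345"]                     # hypothetical record ID
g.add((person, RDF.type, CRM["E21_Person"]))   # map person entity to E21
g.add((person, RDFS.label, Literal("Example Person")))

results = g.query("""
    PREFIX crm: <http://www.cidoc-crm.org/cidoc-crm/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?p ?label WHERE { ?p a crm:E21_Person ; rdfs:label ?label . }
""")
for row in results:
    print(row.p, row.label)
```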
Procedia PDF Downloads 147
3470 Knowledge, Attitude, and Practice of Physical Activity among Adults in Alimosho Local Government Area
Authors: Elizabeth Adebomi Akinlotan, Olukemi Odukoya
Abstract:
INTRODUCTION: Physical activity is defined as any bodily movement that is done as part of daily life, in the form of working, playing, active transportation such as walking, and recreational activity. Physical inactivity has been identified as the fourth leading risk factor for global mortality and morbidity, causing an estimated 3.2 million deaths globally and 5.5% of total deaths, and it remains a pressing public health issue. There is a shift in the major causes of death from communicable to non-communicable diseases in many developed countries, and this is fast becoming the case in developing countries. Physical activity is an important determinant of health and has been associated with lower mortality rates, as it reduces the risk of developing chronic diseases such as diabetes mellitus, hypertension, stroke, cancer and osteoporosis. It improves musculoskeletal health, controls weight and reduces symptoms of depression. AIM: The aim is to study the knowledge, attitude and practice of physical activity among adults in Alimosho Local Government Area. METHODOLOGY: This was a descriptive cross-sectional survey designed to study the knowledge, attitude and practice of physical activity among adults in Alimosho Local Government Area. The study population was 250 adults aged 18-65 who had been residents of the area for more than 6 months and had no chronic disease condition or physical disability. A multistage sampling method was used to select the respondents, and data were collected using interviewer-administered questionnaires. The data were analyzed with the EPI-Info 2007 statistical software. Chi-square tests were thereafter used to test the association between selected variables. The level of statistical significance was set at 5% (p<0.05). RESULTS: In general, the majority (61.6%) of the respondents had good knowledge of what physical activity entails, 34.0% had fair knowledge and 4.4% had poor knowledge. There was a favorable attitude towards physical activity among the respondents, with 82.4% having an overall positive attitude. Below a third of the respondents (26.4%) reported high physical activity (METS > 3001), while 40.0% had moderate (601-3000 METS) levels of activity and 33.6% were inactive (<600 METS). There was a statistically significant association between the gender of the respondents and the level of physical activity (p=0.0007): 75.2% of males reached the minimum recommendations while 24.8% were inactive, and 55.0% of females reached the minimum recommendations while 45.0% were inactive. Results also showed that of 95 respondents who were satisfied with their levels of physical activity, 33.7% were insufficiently active while 66.3% were either minimally active or highly active, and of 110 who were unsatisfied with their levels of physical activity, 72.0% were above the minimum recommendations while 38.0% were insufficiently active. CONCLUSION: In contrast to the high level of knowledge and the favorable attitude towards physical activity, there was a lower level of practice of high or moderate physical activity. It is recommended that more awareness be created on the recommended levels of physical activity, especially for vigorous-intensity and moderate-intensity physical activity.
Keywords: METS, physical activity, physical inactivity, public health
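A minimal sketch of the MET-minute scoring implied by the cutoffs above (<600 inactive, 601-3000 moderate, >3000 high). The 3.3/4.0/8.0 MET weights follow the common IPAQ convention and are an assumption here, as is scoring all activity types over the same number of days.

```python
def met_minutes(walk_min, moderate_min, vigorous_min, days=7):
    """Weekly MET-minutes from average daily minutes per activity type."""
    return days * (3.3 * walk_min + 4.0 * moderate_min + 8.0 * vigorous_min)

def category(score):
    if score < 600:
        return "inactive"
    return "moderate" if score <= 3000 else "high"

weekly = met_minutes(walk_min=30, moderate_min=20, vigorous_min=0)
print(weekly, category(weekly))  # 1253.0 moderate
```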
Procedia PDF Downloads 233
3469 Music Genre Classification Based on Non-Negative Matrix Factorization Features
Authors: Soyon Kim, Edward Kim
Abstract:
In order to retrieve information from the massive stream of songs in the music industry, music search by title, lyrics, artist, mood, and genre has become more important. Despite the subjectivity and controversy over the definition of music genres across different nations and cultures, automatic genre classification systems that facilitate the process of music categorization have been developed. Manual genre selection by music producers provides the statistical data for designing automatic genre classification systems. In this paper, an automatic music genre classification system utilizing non-negative matrix factorization (NMF) is proposed. Short-term characteristics of the music signal can be captured with timbre features such as the mel-frequency cepstral coefficients (MFCC), decorrelated filter bank (DFB), octave-based spectral contrast (OSC), and octave band sum (OBS). Long-term time-varying characteristics of the music signal can be summarized with (1) statistical features such as the mean, variance, minimum, and maximum of the timbre features and (2) modulation spectrum features such as the spectral flatness measure, spectral crest measure, spectral peak, spectral valley, and spectral contrast of the timbre features. In addition to these conventional basic long-term feature vectors, NMF-based feature vectors are proposed to be used together with them for genre classification. In the training stage, NMF basis vectors were extracted for each genre class. The NMF features were calculated in the log spectral magnitude domain (NMF-LSM) as well as in the basic feature vector domain (NMF-BFV). For NMF-LSM, the entire full-band spectrum was used; for NMF-BFV, only the low-band spectrum was used, since the high-frequency modulation spectrum of the basic feature vectors did not contain important information for genre classification. In the test stage, using the set of pre-trained NMF basis vectors, the genre classification system extracted the NMF weighting values of each genre as the NMF feature vectors. A support vector machine (SVM) was used as the classifier. The GTZAN multi-genre music database, composed of 10 genres with 100 songs per genre, was used for training and testing. To increase the reliability of the experiments, 10-fold cross-validation was used. For a given input song, an extracted NMF-LSM feature vector was composed of 10 weighting values that corresponded to the classification probabilities for the 10 genres. An NMF-BFV feature vector also had a dimensionality of 10. Combined with the basic long-term features, i.e., the statistical and modulation spectrum features, the NMF features provided increased accuracy with only a slight increase in feature dimensionality. The conventional basic features by themselves yielded 84.0% accuracy, but the basic features with NMF-LSM and NMF-BFV provided 85.1% and 84.2% accuracy, respectively. The basic features required a dimensionality of 460, whereas NMF-LSM and NMF-BFV each required a dimensionality of 10. Combining the basic features, NMF-LSM and NMF-BFV with an SVM with a radial basis function (RBF) kernel produced a significantly higher classification accuracy of 88.3% with a feature dimensionality of 480.
Keywords: mel-frequency cepstral coefficient (MFCC), music genre classification, non-negative matrix factorization (NMF), support vector machine (SVM)
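A minimal scikit-learn sketch of the pipeline described above: NMF weighting values computed from non-negative features and fed to an RBF-kernel SVM with 10-fold cross-validation. Random non-negative matrices stand in for the GTZAN feature vectors, so the printed accuracy is meaningless except as a smoke test, and the per-genre basis training of the paper is simplified to a single NMF.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(8)
X = rng.random((200, 460))                 # 200 songs, 460 basic features
y = rng.integers(0, 10, 200)               # 10 genre labels

# 10 NMF components, one weighting value per genre as in the paper.
pipe = make_pipeline(NMF(n_components=10, init="nndsvda", max_iter=500),
                     StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(pipe, X, y, cv=10)  # 10-fold CV as in the paper
print(f"accuracy: {scores.mean():.3f}")
```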
3468 Main Tendencies of Youth Unemployment and the Regulation Mechanisms for Decreasing Its Rate in Georgia
Authors: Nino Paresashvili, Nino Abesadze
Abstract:
The modern world faces huge challenges. Globalization has changed the socio-economic conditions of many countries, and the current processes in the global environment affect countries with different cultures differently. Alleviating poverty and improving living conditions remain basic challenges for the majority of countries, because much of the population still lives under the official poverty threshold. Stimulating youth employment is therefore very important. In order to prepare young people for the labour market, it is essential to provide them with the appropriate professional skills and knowledge, and it is necessary to plan efficient activities for decreasing the unemployment rate and to develop effective mechanisms for regulating the labour market. Such planning requires a thorough study and analysis of the existing situation, as well as the development of corresponding mechanisms. Statistical analysis of unemployment is one of the main platforms for regulating the labour market's key mechanisms; the corresponding statistical methods, such as observation, data gathering, grouping, and the calculation of generalized indicators, should be used in the study process. Unemployment is one of the most severe socio-economic problems in Georgia. According to both past and current statistics, unemployment rates have always been among the most problematic issues for policy makers to resolve, and analytical work on this problem will be the basis for the next sustainable steps towards solving it. The results of the study showed that the choices of young people are often driven neither by their inclinations and interests nor by labour market demand; this wrong professional orientation in most cases leads to their unemployment. At the same time, it was shown that a number of professions in the labour market are in high demand because of a deficit of appropriately trained specialists. To achieve healthy competitiveness in youth employment, it is necessary to formulate regional employment programs that take into account the specifics of regional infrastructure.
Keywords: unemployment, analysis, methods, tendencies, regulation mechanisms
Procedia PDF Downloads 378
3467 Validation of Escherichia coli O157:H7 Inactivation on Apple-Carrot Juice Treated with Manothermosonication by Kinetic Models
Authors: Ozan Kahraman, Hao Feng
Abstract:
Several models, such as the Weibull, modified Gompertz, biphasic linear, and log-logistic models, have been proposed to describe non-linear inactivation kinetics and have been used to fit non-linear inactivation data for several microorganisms subjected to heat, high-pressure processing, or pulsed electric fields. Most ultrasonic inactivation studies, by contrast, have employed first-order kinetic parameters (D-values and z-values) to describe the reduction in microbial survival counts. This study analyzed E. coli O157:H7 inactivation data using five microbial survival models: first-order, Weibull, modified Gompertz, biphasic linear, and log-logistic. The residual sum of squares and the total sum of squares were used as criteria to evaluate the models, and the resulting statistical indices were used to fit the inactivation data for E. coli O157:H7 treated by manothermosonication (MTS) at three temperatures (40, 50, and 60 °C) and three pressures (100, 200, and 300 kPa). Based on the statistical indices and visual observation, the Weibull and biphasic models fitted the MTS data best, as shown by their high R² values. The other non-linear models, namely the modified Gompertz, first-order, and log-logistic models, did not fit the MTS data better than the Weibull and biphasic models. The data in this study did not follow first-order kinetics, possibly because cells sensitive to the ultrasound treatment were inactivated first, producing a fast initial inactivation period, while cells resistant to ultrasound were killed more slowly. The Weibull and biphasic models were found to be more flexible for describing the survival curves of E. coli O157:H7 treated by MTS in apple-carrot juice.
Keywords: Weibull, biphasic, MTS, kinetic models, E. coli O157:H7
Procedia PDF Downloads 366
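For readers who want to reproduce this kind of model comparison, the sketch below fits the Weibull and biphasic linear survival models to log-reduction data with SciPy and reports R². The data points, starting values, and bounds are invented placeholders for illustration, not measurements or settings from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull(t, delta, p):
    # log10 survival ratio; a shape parameter p < 1 gives the tailing
    # typical of an ultrasound-resistant subpopulation.
    return -(t / delta) ** p

def biphasic(t, f, k1, k2):
    # A sensitive fraction f dying at rate k1 plus a resistant
    # fraction (1 - f) dying at the slower rate k2.
    return np.log10(f * np.exp(-k1 * t) + (1 - f) * np.exp(-k2 * t))

t = np.array([0.0, 1, 2, 4, 6, 8, 10])                      # time, min
logS = np.array([0.0, -1.2, -2.0, -3.1, -3.7, -4.0, -4.2])  # log10(N/N0)

wb, _ = curve_fit(weibull, t, logS, p0=[1.0, 0.5],
                  bounds=([1e-6, 1e-6], [np.inf, 10.0]))
bp, _ = curve_fit(biphasic, t, logS, p0=[0.99, 2.0, 0.1],
                  bounds=([1e-6, 1e-6, 1e-6], [1.0, np.inf, np.inf]))

for name, model, params in [("Weibull", weibull, wb),
                            ("Biphasic", biphasic, bp)]:
    rss = np.sum((logS - model(t, *params)) ** 2)
    r2 = 1.0 - rss / np.sum((logS - logS.mean()) ** 2)
    print(f"{name}: params={np.round(params, 3)}, R^2={r2:.3f}")
```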
3466 Statistical Approach to Identify Stress and Biases Impairing Decision-Making in High-Risk Industry
Authors: Ph. Fauquet-Alekhine
Abstract:
Decision-making occurs several times an hour when working in high-risk industry, and an erroneous choice may have undesirable outcomes for people and the environment surrounding the industrial plant. Industrial decisions are very often made in a context of acute stress. Time pressure is a crucial stressor that sometimes leads decision makers to speed up the decision-making process or, when that is not possible, to shift to the simplest strategy. We therefore found it worthwhile to update the characterization of the stress factors impairing decision-making at the Chinon Nuclear Power Plant (France) in order to optimize decision-making contexts and/or the associated processes. The investigation was based on the analysis of reports addressing safety events over the last three years. Among 93 reports, those explicitly addressing decision-making issues were identified, and each event was characterized in terms of three criteria: stressors, biases impairing decision-making, and weaknesses of the decision-making process. The statistical analysis showed that the biases were distributed over 10 categories, among which confirmation bias was clearly salient. No significant correlation was found between the criteria. The analysis indicated that the main stressor was time pressure and highlighted an unexpected form of stressor: the expert's trust asymmetry principle. The analysis led to the conclusion that this stressor impaired decision-making from a psychological angle rather than a physiological one: it induces a defensive bias of self-esteem and self-protection associated with confirmation bias. This leads to the hypotheses that this stressor can in some cases intervene without being detected, and that other stressors of the same kind may also occur undetected; further investigations addressing these hypotheses are planned. The analysis also led to the conclusion that dealing with these issues requires (i) decision-making methods that are well known to the workers and automated, and (ii) decision-making tools that are well known and strictly applied. Training was adjusted accordingly.
Keywords: bias, expert, high-risk industry, stress
Procedia PDF Downloads 112
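The abstract reports that no significant correlation was found between the criteria. The paper does not name the test used, but one conventional choice for categorical event data is a chi-square test of independence on a stressor-by-bias contingency table, sketched below; the table entries are invented placeholders, not counts from the Chinon reports.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: stressor categories (e.g. time pressure, trust asymmetry, other);
# columns: bias categories (e.g. confirmation, self-esteem defence, other).
# Each cell counts event reports showing that stressor-bias pair.
observed = np.array([[12, 5, 3],
                     [4, 6, 2],
                     [3, 2, 4]])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# A p-value above the chosen alpha (e.g. 0.05) would be consistent with
# the paper's finding of no significant correlation between criteria.
```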
3465 Water Scarcity in the Gomti Nagar Area under the Impact of Climate Changes and Assessment for Groundwater Management
Authors: Rajkumar Ghosh
Abstract:
Climate change has decreased water availability in the Gomti Nagar area of Uttar Pradesh, India, by reducing precipitation and increasing the rate of evaporation. The region is heavily reliant on surface water sources (the Gomti river and the Sharda canal) and on groundwater, so efficient management of groundwater resources is crucial for addressing water shortages. Possible measures include: exploring alternative water sources, such as wastewater recycling and desalination, to augment supply and reduce dependency on rainfall-dependent sources; promoting water-efficient technologies in industry and agriculture and water-efficient infrastructure in urban areas to reduce demand and optimize water use; and incorporating climate change considerations into urban planning and infrastructure development to help ensure water security in the face of future climate uncertainties. Addressing water scarcity in the Gomti Nagar area requires a multi-pronged approach that combines sustainable groundwater management practices, climate change adaptation strategies, and integrated water resource management; by implementing these measures, the region can work towards a more sustainable and reliable water supply in the context of climate change. Water is the most important natural resource for the existence of living beings in the Earth's ecosystem, yet only 1.2 percent of the water on Earth is drinkable and only 0.3 percent is usable by people. Water scarcity is a growing concern in India due to the impact of climate change and the over-exploitation of water resources: excess groundwater withdrawal causes steady declines in groundwater levels, and with city boundary expansion and growing urbanization, the recharge area feeding the groundwater table is shrinking. Rainwater infiltration into the subsoil is also reduced by unplanned, uneven settlements in urban areas.
Keywords: climate change, water scarcity, groundwater, rainfall, water supply
Procedia PDF Downloads 83
3464 Earthquake Identification to Predict Tsunami in Andalas Island, Indonesia Using Back Propagation Method and Fuzzy TOPSIS Decision Seconder
Authors: Muhamad Aris Burhanudin, Angga Firmansyas, Bagus Jaya Santosa
Abstract:
Earthquakes are natural hazards that can trigger an even more dangerous hazard: a tsunami. On 26 December 2004, a giant earthquake occurred north-west of Andalas Island. It generated a giant tsunami that devastated Sumatra, Bangladesh, India, Sri Lanka, Malaysia, and Singapore; more than twenty thousand people died. The occurrence of earthquakes and tsunamis cannot be avoided, but the hazard can be mitigated by earthquake forecasting, and early preparation is the key factor in reducing damage and consequences. We aim to investigate earthquake patterns quantitatively in order to identify their trend, studying the earthquakes that have occurred around Andalas Island, Indonesia over the last decade. Andalas is an island of high seismicity, with more than a thousand events occurring per year, because it lies in the tectonic subduction zone between the Indian Ocean plate and the Eurasian plate. Tsunami forecasting is thus needed for mitigation action, and a tsunami forecasting method is presented in this work. Neural networks have been used widely in research to estimate earthquakes, and it is believed that earthquakes can be predicted using the backpropagation method. First, an artificial neural network (ANN) is trained to predict the tsunami of 26 December 2004 using earthquake data recorded before it; the trained ANN is then applied to predict subsequent earthquakes. Not all earthquakes trigger tsunamis: only earthquakes with certain characteristics cause them, and a wrong decision can create further problems for society, so a method is needed to reduce the possibility of wrong decisions. Fuzzy TOPSIS is a statistical method widely used as a decision seconder (a decision-support tool) with respect to given parameters, and it can make the best decision as to whether an earthquake will cause a tsunami or not. This work therefore combines earthquake prediction using a neural network with Fuzzy TOPSIS to decide whether a predicted earthquake will trigger a tsunami wave. The neural network model is capable of capturing non-linear relationships, and Fuzzy TOPSIS determines the best decision better than other statistical methods in tsunami prediction.
Keywords: earthquake, fuzzy TOPSIS, neural network, tsunami
Procedia PDF Downloads 496
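As a hedged illustration of the decision-seconder step, the sketch below implements crisp TOPSIS, a simplification of the fuzzy TOPSIS used in the paper: candidate earthquakes are scored on tsunami-relevant parameters and ranked by closeness to the ideal tsunamigenic profile. The chosen criteria, weights, and example values are assumptions for illustration only, not the paper's parameters.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives (rows) on criteria (columns).

    benefit[j] is True if larger values of criterion j favour the
    'tsunamigenic' outcome. Returns closeness scores in [0, 1];
    higher means closer to the ideal (most tsunamigenic) point.
    """
    M = np.asarray(decision_matrix, dtype=float)
    # Vector-normalize each criterion column, then apply weights.
    V = weights * M / np.linalg.norm(M, axis=0)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Columns: magnitude, focal depth (km), distance to trench (km).
# Shallower and closer events are more tsunamigenic, so depth and
# distance are treated as cost criteria (benefit=False).
quakes = [[9.1, 30, 150],
          [7.2, 80, 400],
          [8.0, 25, 200]]
scores = topsis(quakes, weights=np.array([0.5, 0.3, 0.2]),
                benefit=np.array([True, False, False]))
print(scores)  # higher score = stronger case for a tsunami warning
```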