Search results for: hazard prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2790

1620 Comparing Groundwater Fluoride Level with WHO Guidelines and Classifying At-Risk Age Groups Based on Health Risk Assessment

Authors: Samaneh Abolli, Kamyar Yaghmaeian, Ali Arab Aradani, Mahmood Alimohammadi

Abstract:

The main route of fluoride uptake is drinking water. Fluoride absorption in the acceptable range (0.5–1.5 mg L⁻¹) is suitable for the body, but excessive consumption can have irreversible health effects. To compare fluoride concentrations with the WHO guidelines, 112 water samples were taken from groundwater aquifers in 22 villages of Garmsar County, in the central part of Iran, during 2018 to 2019. Fluoride concentration was measured by the SPADNS method, and its non-carcinogenic impacts were calculated using the estimated daily intake (EDI) and hazard quotient (HQ). The statistical population was divided into four categories: infants, children, teenagers, and adults. Linear regression and Spearman rank correlation tests were used to investigate the relationship between well depth and fluoride concentration in the water samples. The annual mean concentrations of fluoride in 2018 and 2019 were 0.75 and 0.64 mg L⁻¹, and the mean fluoride concentrations in the samples from the cold and hot seasons of the studied years were 0.709 and 0.689 mg L⁻¹, respectively. The fluoride level in 27% of the samples in both years was below the acceptable minimum (0.5 mg L⁻¹). Also, 11% of the samples in 2018 (6 samples) had fluoride levels above 1.5 mg L⁻¹. The HQ showed that children were the most vulnerable group, with teenagers and adults in the next ranks. Statistical tests showed an inverse and significant correlation (R² = 0.02, p < 0.0001) between well depth and fluoride content. The border between the usefulness and harmfulness of fluoride is very narrow and requires extensive study.
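
The EDI and HQ used above follow the standard health risk assessment formulas (EDI = C × IR / BW, HQ = EDI / RfD). A minimal sketch follows; the ingestion rates, body weights, and reference dose are illustrative assumptions, not the study's values:

```python
# Hedged sketch of the EDI/HQ calculation used in fluoride health risk
# assessment: EDI = C * IR / BW, HQ = EDI / RfD. Exposure parameters
# below are illustrative assumptions, not the study's actual values.

AGE_GROUPS = {
    # group: (ingestion rate L/day, body weight kg) -- assumed typical values
    "infants":   (0.8, 10),
    "children":  (1.5, 20),
    "teenagers": (2.0, 50),
    "adults":    (2.5, 70),
}
RFD_FLUORIDE = 0.06  # mg/(kg*day), a commonly cited oral reference dose

def hazard_quotient(conc_mg_per_L: float, group: str) -> float:
    """Estimated daily intake (EDI) divided by the reference dose (RfD)."""
    ir, bw = AGE_GROUPS[group]
    edi = conc_mg_per_L * ir / bw  # mg/(kg*day)
    return edi / RFD_FLUORIDE      # HQ > 1 flags potential risk

for group in AGE_GROUPS:
    print(group, round(hazard_quotient(0.75, group), 2))  # 2018 mean conc.
```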

Keywords: fluoride, groundwater, health risk assessment, hazard quotient, Garmsar

Procedia PDF Downloads 67
1619 Causes and Effects of the 2012 Flood Disaster on Affected Communities in Nigeria

Authors: Abdulquadri Ade Bilau, Richard Ajayi Jimoh, Adejoh Amodu Adaji

Abstract:

Increasing exposure to natural hazards has continued to severely impair the built environment, causing huge fatalities and mass damage and destruction of housing and civil infrastructure, while leaving psychosocial impacts on affected communities. The 2012 flood disaster in Nigeria, which affected over 7 million inhabitants in 30 of the 36 states, resulted in 363 recorded fatalities, with about 600,000 houses and numerous items of civil infrastructure damaged or destroyed. In Kogi State, over 500,000 people were displaced in 9 of the 21 local government areas affected, with Ibaji and Lokoja local governments worst hit. This study identifies the causes of the 2012 flood disaster and its effects on housing and livelihoods. Personal observation and a questionnaire survey were the instruments used in carrying out the study, and the data collected were analysed using descriptive statistical tools. Findings show that the 2012 flood disaster was aided by gaps in hydrological data, sudden dam failure, and inadequate drainage capacity to reduce flood risk. The study recommends that communities residing along the river banks in the Lokoja and Ibaji LGAs be adequately educated on their exposure to flood hazard, and that mitigation and risk reduction measures, such as adequate drainage channels, be constructed in the affected communities.

Keywords: flood, hazards, housing, risk reduction, vulnerability

Procedia PDF Downloads 255
1618 Corrosivity of Smoke Generated by Polyvinyl Chloride and Polypropylene with Different Mixing Ratios towards Carbon Steel

Authors: Xufei Liu, Shouxiang Lu, Kim Meow Liew

Abstract:

Because a relatively small fire can potentially cause damage by smoke corrosion far exceeding the thermal fire damage, it has been recognized that the corrosion of metal exposed to smoke atmospheres is a significant fire hazard, beyond toxicity and evacuation considerations. Since the burning materials in an actual fire are often mixtures of combustible matter, a quantitative study of the corrosivity of smoke produced by the combustion of mixtures is more conducive to applying the basic theory to practical engineering. In this work, carbon steel samples were exposed for 120 hours, in high humidity, to smoke generated by polyvinyl chloride and polypropylene, two common combustibles in industrial plants, at different mixing ratios. The separate and combined corrosive effects of the smoke were subsequently examined by weight loss measurement, scanning electron microscopy, energy dispersive spectroscopy, and X-ray diffraction. It was found that, although the corrosivity of smoke from polypropylene was much smaller than that of smoke from polyvinyl chloride, smoke from polypropylene enhanced the main corrosive effect of smoke from polyvinyl chloride on carbon steel. Furthermore, the corrosion kinetics of carbon steel under smoke were found to obey a power function. Possible corrosion mechanisms are also proposed. This analysis provides basic information for the determination of smoke damage and timely rescue after a fire.
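
Power-function kinetics of the kind reported above take the form W = k·tⁿ and can be fitted directly to weight-loss data; a minimal sketch with invented measurements:

```python
# Sketch: fitting the power-function corrosion kinetics W = k * t**n
# reported for carbon steel under smoke. The data points are invented
# for illustration; an exponent n < 1 indicates a decelerating attack.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([12, 24, 48, 72, 96, 120])       # exposure time, h
w = np.array([0.8, 1.3, 2.0, 2.6, 3.0, 3.4])  # weight loss, g/m^2 (assumed)

def power_law(t, k, n):
    return k * t**n

(k, n), _ = curve_fit(power_law, t, w, p0=(0.1, 0.5))
print(f"W = {k:.3f} * t^{n:.3f}")
```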

Keywords: corrosion kinetics, corrosion mechanism, mixed combustible, SEM/EDS, smoke corrosivity, XRD

Procedia PDF Downloads 206
1617 A Compressor Map Optimizing Tool for Prediction of Compressor Off-Design Performance

Authors: Zhongzhi Hu, Jie Shen, Jiqiang Wang

Abstract:

A high-precision aeroengine model is needed when developing the engine control system. Compared with the other main components, the axial compressor is the most challenging component to simulate. In this paper, a compressor map optimizing tool based on the introduction of a modifiable β function is developed for FWorks (FADEC Works). Three parameters (d, a density; f, a fitting coefficient; and k₀, the slope of the line β = 0) are introduced into the β function to make it modifiable. A comparison of the traditional β function and the modifiable β function is carried out for a certain type of compressor. The interpolation errors show that both methods meet the modeling requirements, while the modifiable β function can predict compressor performance more accurately in the areas of the compressor map that are of interest to users.
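
For context, β lines are an auxiliary coordinate that makes (corrected speed, β) a single-valued index into the map tables; the paper's modifiable function reshapes these lines via (d, f, k₀). The sketch below shows only the conventional fixed-β table lookup it builds on, with placeholder map values:

```python
# Sketch of the conventional beta-line map lookup that a modifiable beta
# function refines. Beta is an auxiliary coordinate making
# (corrected speed, beta) -> (mass flow, pressure ratio) single-valued.
# All table values below are placeholders, not real compressor data.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

speed = np.array([0.7, 0.85, 1.0])        # corrected speed (normalized)
beta = np.linspace(0.0, 1.0, 5)           # auxiliary beta coordinate

mass_flow = np.array([[0.50, 0.55, 0.60, 0.64, 0.67],
                      [0.70, 0.76, 0.82, 0.87, 0.91],
                      [0.90, 0.97, 1.03, 1.08, 1.12]])
pressure_ratio = np.array([[1.8, 1.7, 1.6, 1.5, 1.4],
                           [2.6, 2.4, 2.2, 2.0, 1.8],
                           [3.4, 3.1, 2.8, 2.5, 2.2]])

wc_interp = RegularGridInterpolator((speed, beta), mass_flow)
pr_interp = RegularGridInterpolator((speed, beta), pressure_ratio)

pt = np.array([0.9, 0.35])                # an off-design operating point
print("Wc =", wc_interp(pt), "PR =", pr_interp(pt))
```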

Keywords: beta function, compressor map, interpolation error, map optimization tool

Procedia PDF Downloads 261
1616 A Computational Model of the Thermal Grill Illusion: Simulating the Perceived Pain Using Neuronal Activity in Pain-Sensitive Nerve Fibers

Authors: Subhankar Karmakar, Madhan Kumar Vasudevan, Manivannan Muniyandi

Abstract:

The thermal grill illusion (TGI) elicits a strong and often painful sensation of burning when interlaced warm and cold stimuli that are individually non-painful excite thermoreceptors beneath the skin. Among several theories of the TGI, the "disinhibition" theory is the most widely accepted in the literature. According to this theory, the TGI results from the disinhibition, or unmasking, of the pain-sensitive HPC (heat-pinch-cold) nerve fibers due to the inhibition of the cold-sensitive nerve fibers that normally mask them. Although researchers have focused on understanding the TGI through experiments and models, none has investigated the prediction of TGI pain intensity through a computational model, and the comparison of psychophysically perceived TGI intensity with neurophysiological models has not yet been studied. Predicting pain intensity through a computational model of the TGI can help in optimizing thermal displays and in understanding pathological conditions related to temperature perception. The current study focuses on developing a computational model to predict the intensity of TGI pain and on experimentally observing the perceived TGI pain. The computational model is developed based on the disinhibition theory and utilizes existing popular models of warm and cold receptors in the skin; it aims to predict the neuronal activity of the HPC nerve fibers. With a temperature-controlled thermal grill setup, fifteen participants (ten males and five females) were presented with five temperature differences between the warm and cold grills (each repeated three times). All participants rated the perceived TGI pain sensation on a scale of one to ten. Over the range of temperature differences, the experimentally observed perceived intensity of the TGI is compared with the neuronal activity of the pain-sensitive HPC nerve fibers. The simulation results show a monotonically increasing relationship between the temperature differences and the neuronal activity of the HPC nerve fibers, and a similar monotonically increasing relationship is observed experimentally between temperature differences and perceived TGI intensity. This demonstrates that the TGI pain intensity observed in the experimental study can be compared with the neuronal activity predicted by the model. The proposed model intends to bridge the theoretical understanding of the TGI and the experimental results obtained through psychophysics. Further studies in pain perception are needed to develop a more accurate version of the current model.

Keywords: thermal grill illusion, computational modelling, simulation, psychophysics, haptics

Procedia PDF Downloads 167
1615 Joint Modeling of Longitudinal and Time-To-Event Data with Latent Variable

Authors: Xinyuan Y. Song, Kai Kang

Abstract:

Joint models for analyzing longitudinal and survival data are widely used to investigate the relationship between a failure time process and time-variant predictors. A common assumption in conventional joint models in the survival analysis literature is that all predictors are observable. However, this assumption may not always hold, because unobservable traits, namely latent variables, which are indirectly observable and must be measured through multiple observed variables, are commonly encountered in medical, behavioral, and financial research settings. In this study, a joint modeling approach that accommodates this feature is proposed. The proposed model comprises three parts. The first part is a dynamic factor analysis model for characterizing latent variables through multiple observed indicators over time. The second part is a random coefficient trajectory model for describing the individual trajectories of latent variables. The third part is a proportional hazards model for examining the effects of time-invariant predictors and the longitudinal trajectories of time-variant latent risk factors on the hazards of interest. A Bayesian approach coupled with a Markov chain Monte Carlo algorithm is used to perform statistical inference. An application of the proposed joint model to a study from the Alzheimer's Disease Neuroimaging Initiative is presented.
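
Written out, the three parts described above take roughly the following form; the notation is assumed for illustration, with y_ij(t) the j-th observed indicator for subject i, ξ_i(t) the latent trajectory, and x_i the time-invariant predictors:

```latex
% Sketch of the three-part joint model (notation assumed, not the paper's):
\begin{align}
  y_{ij}(t) &= \mu_j + \lambda_j\,\xi_i(t) + \epsilon_{ij}(t)
    && \text{(dynamic factor analysis)} \\
  \xi_i(t)  &= b_{0i} + b_{1i}\,t + e_i(t)
    && \text{(random coefficient trajectory)} \\
  h_i(t)    &= h_0(t)\exp\{\gamma^\top x_i + \beta\,\xi_i(t)\}
    && \text{(proportional hazards)}
\end{align}
```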

Keywords: Bayesian analysis, joint model, longitudinal data, time-to-event data

Procedia PDF Downloads 139
1614 Typhoon Disaster Risk Assessment of Mountain Village: A Case Study of Shanlin District in Kaohsiung

Authors: T. C. Hsu, H. L. Lin

Abstract:

Taiwan is a mountainous country; 70% of its land is covered with mountains. Under an extreme climate, mountain villages with sensitive and fragile environments are easily affected by inundation and debris flow caused by the huge rainfall that typhoons bring. Due to inappropriate development, overuse, and scarce access roads, disasters become more frequent during downpours and rescue actions are delayed. However, risk maps are generally established along administrative boundaries, and the difference between urban and rural areas is ignored. Neglecting the characteristics of mountain villages ultimately underestimates the importance of factors related to vulnerability and reduces the effectiveness of such maps. In disaster management, there are different strategies and actions at each stage; according to the tasks at hand, different risk indices and weights are needed to analyze disaster risk at each stage, which helps to confront threats and reduce impacts appropriately and at the right time. A risk map is important in the mitigation stage but also in the response stage, because some factors, such as the road network, are changed by the disaster. This study uses risk assessment to establish risk maps for the mitigation and response stages for Shanlin District, a mountain village in Kaohsiung, as a case study, through the Analytic Hierarchy Process (AHP). AHP helps to identify the composition and weights of the risk factors in a mountain village from experts' opinions gathered through a survey, and is combined with the existing potential hazard map to produce the risk map.
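
The AHP weighting step derives factor weights as the principal eigenvector of an expert pairwise-comparison matrix, with a consistency check; a minimal sketch (the factors and judgments below are illustrative, not the study's):

```python
# Sketch of the AHP weighting step: weights are the principal eigenvector
# of an expert pairwise-comparison matrix, with a consistency check.
# Factors and judgments are illustrative assumptions.
import numpy as np

factors = ["rainfall", "slope", "road access", "population density"]
A = np.array([[1,   3,   5,   2],
              [1/3, 1,   3,   1/2],
              [1/5, 1/3, 1,   1/4],
              [1/2, 2,   4,   1]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # normalized factor weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)  # consistency index
cr = ci / 0.90                        # Saaty random index RI = 0.90 for n = 4
print(dict(zip(factors, w.round(3))), "CR =", round(cr, 3))  # CR < 0.1 is OK
```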

Keywords: risk assessment, mountain village, risk map, analytic hierarchy process

Procedia PDF Downloads 395
1613 Lessons Learnt from Moment Magnitude 7.8 Gorkha, Nepal Earthquake

Authors: Narayan Gurung, Fawu Wang, Ranjan Kumar Dahal

Abstract:

Nepal is highly prone to earthquakes and has witnessed at least one major earthquake every 80 to 90 years. The Gorkha earthquake, which measured Mw 7.8 and struck Nepal on 25 April 2015, 81 years after the Mw 8.3 Nepal-Bihar earthquake of 1934, was the largest earthquake in the country since that event. In this paper, an attempt has been made to highlight the lessons learnt from the Mw 7.8 Gorkha (Nepal) earthquake. Several types of damage patterns were observed in the 25 April 2015 earthquake, for reinforced concrete buildings as well as for unreinforced masonry and adobe houses. Many field visits to the affected areas were conducted, and the associated failure and damage patterns were identified and analyzed. Damage patterns in non-engineered buildings, middle- and high-rise buildings, commercial complexes, administrative buildings, schools, and other critical facilities across the affected districts are also included. For most buildings, construction and structural deficiencies were identified as the major causes of failure; however, topography, local soil amplification, foundation settlement, liquefaction-associated damage, and construction in hazard-prone areas also contributed significantly to failure or damage and are therefore reported. Finally, the lessons learnt from the Mw 7.8 Gorkha (Nepal) earthquake are presented in order to mitigate the impacts of future earthquakes in Nepal.

Keywords: Gorkha earthquake, reinforced concrete structure, Nepal, lesson learnt

Procedia PDF Downloads 195
1612 Natural Gas Production Forecasts Using Diffusion Models

Authors: Md. Abud Darda

Abstract:

Different options for natural gas production across wide geographic areas may be described through diffusion-of-innovation models. This type of modeling approach provides an indirect estimate of the ultimately recoverable resource (URR), captures the quantitative effects of observed strategic interventions, and allows ex-ante assessment of future scenarios over time. In order to ensure a sustainable energy policy, it is important to forecast the availability of this natural resource. Assuming a finite life cycle, in this paper we investigate the natural gas production of Myanmar and Algeria, two important natural gas providers in the world energy market. A number of homogeneous and heterogeneous diffusion models, with convenient extensions, have been used. Model validation has also been performed in terms of prediction capability.
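
A common finite-life-cycle specification of this family is the logistic diffusion curve, whose saturation level is the URR; a hedged sketch on synthetic data (not the Myanmar or Algeria series):

```python
# Sketch of a logistic diffusion model for cumulative gas production:
# Q(t) = URR / (1 + exp(-k (t - t0))); annual production is dQ/dt.
# The data below are synthetic, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, urr, k, t0):
    return urr / (1.0 + np.exp(-k * (t - t0)))

rng = np.random.default_rng(0)
years = np.arange(1990, 2015)
q_obs = logistic(years, 1200.0, 0.25, 2005) + rng.normal(0, 10, years.size)

(urr, k, t0), _ = curve_fit(logistic, years, q_obs, p0=(1000, 0.2, 2004))
print(f"estimated URR ~ {urr:.0f} bcm, inflection (peak) year ~ {t0:.0f}")
```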

Keywords: diffusion models, energy forecast, natural gas, nonlinear production

Procedia PDF Downloads 222
1611 Integration of Microarray Data into a Genome-Scale Metabolic Model to Study Flux Distribution after Gene Knockout

Authors: Mona Heydari, Ehsan Motamedian, Seyed Abbas Shojaosadati

Abstract:

Prediction of perturbations after genetic manipulation (especially gene knockout) is one of the important challenges in systems biology. In this paper, a new algorithm is introduced that integrates microarray data into the metabolic model. The algorithm was used to study the change in cell phenotype after knockout of the Gss gene in Escherichia coli BW25113. Implementation of the algorithm indicated that the gene deletion resulted in greater activation of the metabolic network. Growth yield was higher, and up- and down-regulated genes were identified for the mutant in comparison with the wild-type strain.
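
Gene-knockout flux studies of this kind are typically set up with a constraint-based toolbox such as COBRApy; a minimal sketch, where the model file and gene identifier are placeholders, and the paper's own algorithm additionally constrains fluxes with the microarray data:

```python
# Sketch of a flux-balance gene-knockout analysis with COBRApy.
# The SBML file name and gene ID are placeholders; the paper's algorithm
# additionally uses microarray expression data to constrain fluxes.
import cobra

model = cobra.io.read_sbml_model("e_coli_model.xml")   # assumed model file
wild_type_growth = model.optimize().objective_value

with model:                      # the context manager reverts the knockout
    model.genes.get_by_id("gss_gene_id").knock_out()   # placeholder gene ID
    mutant_growth = model.optimize().objective_value

print(f"wild type: {wild_type_growth:.3f}, mutant: {mutant_growth:.3f}")
```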

Keywords: metabolic network, gene knockout, flux balance analysis, microarray data, integration

Procedia PDF Downloads 575
1610 Numerical Prediction of Wall Eroded Area by Cavitation

Authors: Ridha Zgolli, Ahmed Belhaj, Maroua Ennouri

Abstract:

This study presents a new method to predict the cavitation area that may be eroded. It is based on post-processing of URANS simulations of cavitating flows. Most RANS calculations under the incompressible-flow assumption are based on a cavitation model using a mixture fluid whose density ρm is calculated as a function of the liquid density ρliq, the vapour or gas density ρvap, and the vapour or gas volume fraction α (ρm = α ρvap + (1 − α) ρliq). The calculations are performed on hydrofoil geometries and compared with experimental work on the flow characteristics (pocket size, pressure, velocity). We present here the cavitation model used and the approach followed to evaluate the value of α that fixes the shape of the pocket near the wall before collapse.

Keywords: flows, CFD, cavitation, erosion

Procedia PDF Downloads 335
1609 Local Binary Patterns-Based Statistical Data Analysis for Accurate Soccer Match Prediction

Authors: Mohammad Ghahramani, Fahimeh Saei Manesh

Abstract:

Winning a soccer game depends on thorough and deep analysis of the ongoing match. At the same time, giant gambling companies are in vital need of such analysis to reduce their losses to their customers. In this research work, we perform deep, real-time analysis of every soccer match around the world; our work is distinguished from others by its focus on particular seasons, teams, and partial analytics. Our contributions are presented in the platform called "Analyst Masters." First, we introduce the various sources of information available for soccer analysis for teams around the world, which enabled us to record live statistical data and information from more than 50,000 soccer matches a year. Our second and main contribution is our proposed in-play performance evaluation. The third contribution is the development of new features from stable soccer matches. The statistics of soccer matches and their odds, before and in-play, are represented in image format versus time, including half-time. Local binary patterns (LBP) are then employed to extract features from the image. Our analyses reveal remarkably interesting features and rules once a soccer match has reached sufficient stability. For example, our "8-minute rule" implies that if 'Team A' scores a goal and can maintain the result for at least 8 minutes, then a stable match will end in their favor. We could also make accurate pre-match predictions of scoring fewer or more than 2.5 goals. We use gradient boosting trees (GBT) to extract highly related features. Once the features are selected from this pool of data, decision trees decide whether the match is stable. A stable match is then passed to a post-processing stage that checks properties such as bettors' and punters' behavior and the match's statistical data before issuing the prediction. The proposed method was trained on 140,000 soccer matches and tested on more than 100,000 samples, achieving 98% accuracy in selecting stable matches. Our database of 240,000 matches shows that one can earn over 20% betting profit per month using Analyst Masters. Such consistent profit outperforms human experts and shows the inefficiency of the betting market; top soccer tipsters achieve 50% accuracy and 8% monthly profit on average, and only on regional matches. Both our collected database of more than 240,000 soccer matches since 2012 and our algorithm would greatly benefit coaches and punters seeking accurate analysis.
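
The LBP step, encoding the time-series "image" of statistics and odds as a texture descriptor, can be sketched with scikit-image; the image contents here are random placeholders:

```python
# Sketch of the local-binary-pattern feature extraction step: match
# statistics/odds rendered as an image over time, encoded with LBP and
# summarized as a histogram feature vector. The image here is random.
import numpy as np
from skimage.feature import local_binary_pattern

match_image = np.random.rand(90, 64)   # 90 minutes x 64 stat channels (assumed)
P, R = 8, 1                            # 8 neighbours, radius 1
lbp = local_binary_pattern(match_image, P, R, method="uniform")

# Histogram of uniform patterns -> a fixed-length feature vector
hist, _ = np.histogram(lbp, bins=np.arange(P + 3), density=True)
print(hist.round(3))                   # fed to the GBT / decision-tree stage
```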

Keywords: soccer, analytics, machine learning, database

Procedia PDF Downloads 232
1608 Ground Motion Modelling in Bangladesh Using Stochastic Method

Authors: Mizan Ahmed, Srikanth Venkatesan

Abstract:

The geological and tectonic framework indicates that Bangladesh is one of the most seismically active regions in the world. The Bengal Basin lies at the junction of three major interacting plates: the Indian, Eurasian, and Burma plates. Besides, there are many active faults within the region, e.g., the large Dauki fault in the north. The country has experienced a number of destructive earthquakes due to the movement of these active faults. The current seismic provisions of Bangladesh are mostly based on earthquake data from before 1990. Given the record of earthquakes after 1990, there is a need to revisit the design provisions of the code. This paper compares the base shear demand of three major cities in Bangladesh: Dhaka (the capital city), Sylhet, and Chittagong, for earthquake scenarios of magnitudes Mw 7.0, 7.5, 8.0, and 8.5 using a stochastic model. In particular, the stochastic model offers the flexibility to input region-specific parameters, such as the shear wave velocity profile (developed from the global crustal model CRUST2.0), and to include the effects of attenuation as individual components. Effects of soil amplification were analysed using the extended component attenuation model (ECAM). Results show that the estimated base shear demand is higher than the code provisions, suggesting additional seismic design considerations for the study regions.
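
The stochastic method referred to above shapes windowed noise in the frequency domain by a target Fourier amplitude spectrum built from an omega-squared source model with attenuation terms; a simplified sketch, with all parameter values illustrative rather than the study's calibration:

```python
# Simplified sketch of the stochastic (Boore-type) ground-motion method:
# windowed Gaussian noise is shaped in the frequency domain by an
# omega-squared source spectrum with anelastic and geometric attenuation.
# All parameter values are illustrative, not the study's calibration.
import numpy as np

dt, n = 0.01, 4096
f = np.fft.rfftfreq(n, dt)
fc = 0.2                        # corner frequency, Hz (scales with magnitude)
r, q, beta = 50.0, 200.0, 3.5   # distance km, quality factor, shear speed km/s

source = f**2 / (1.0 + (f / fc) ** 2)             # omega-squared spectrum
atten = np.exp(-np.pi * f * r / (q * beta)) / r   # anelastic + geometric
target = source * atten

noise = np.random.randn(n) * np.hanning(n)        # windowed white noise
spec = np.fft.rfft(noise)
spec[1:] *= target[1:] / np.abs(spec[1:])         # impose target amplitudes
accel = np.fft.irfft(spec, n)
print("PGA (arbitrary units):", np.abs(accel).max())
```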

Keywords: attenuation, earthquake, ground motion, stochastic, seismic hazard

Procedia PDF Downloads 243
1607 Resilient Analysis as an Alternative to Conventional Seismic Analysis Methods for the Maintenance of a Socioeconomical Functionality of Structures

Authors: Sara Muhammad Elqudah, Vigh László Gergely

Abstract:

Catastrophic events such as earthquakes are sudden, short, and devastating, threatening lives, demolishing futures, and causing huge economic losses. Current seismic analysis and design standards are based on life-safety levels, where only some residual strength and stiffness are left in the structure, leaving it beyond economical repair. Consequently, it has become necessary to introduce and implement the concept of resilient design. Resilient design is about designing for ductility over time: resisting, absorbing, and recovering from the effects of a hazard in an appropriate and efficient time while maintaining the functionality of the structure in the aftermath of the incident. Resilient analysis is mainly based on fragility, vulnerability, and functionality curves; a resilience index is ultimately generated from these curves, and the higher this index, the better the performance of the structure. In this paper, the seismic performance of a simple two-story reinforced concrete building located in a moderate-seismicity region is evaluated using the conventional seismic analysis methods, namely linear static analysis, response spectrum analysis, and pushover analysis, and the results of these methods are compared with those of the resilient analysis. The results highlight that the resilient analysis was the most suitable method for producing a more ductile and functional structure from a socio-economic perspective, in comparison with the standard seismic analysis methods.
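
The resilience index the analysis rests on is conventionally the normalized area under the functionality curve Q(t) over the recovery horizon; a small sketch with an invented recovery curve:

```python
# Sketch: resilience index as the normalized area under the functionality
# curve Q(t) between the hazard occurrence t0 and the control time th.
# The recovery curve below is invented for illustration.
import numpy as np

t = np.linspace(0, 100, 201)                   # days after the earthquake
q = np.clip(0.4 + 0.6 * (t / 60.0), None, 1)   # drop to 40%, linear recovery

t0, th = 0.0, 100.0
area = ((q[:-1] + q[1:]) / 2 * np.diff(t)).sum()   # trapezoidal integration
resilience = area / (th - t0)                      # 1.0 = no loss of function
print(f"resilience index = {resilience:.2f}")
```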

Keywords: conventional analysis methods, functionality, resilient analysis, seismic performance

Procedia PDF Downloads 106
1606 The Extent of Virgin Olive-Oil Prices' Distribution Revealing the Behavior of Market Speculators

Authors: Fathi Abid, Bilel Kaffel

Abstract:

The olive tree, the olive harvest during the winter season, and the production of olive oil, better known to professionals as the crushing operation, have long interested institutional traders such as olive-oil offices, private companies in the food industry refining and extracting pomace olive oil, and public and private export-import companies specializing in olive oil. The major problem facing producers of olive oil each winter campaign, contrary to what might be expected, is not whether the harvest will be good but whether the sale price will allow them to cover production costs and achieve a reasonable profit margin. These questions are entirely legitimate given the importance of the issue and the heavy complexity of the uncertainty and of competition made tougher by high levels of indebtedness and by the experience and expertise of speculators and producers whose objectives sometimes conflict. The aim of this paper is to study the formation mechanism of olive oil prices in order to learn about speculators' behavior and expectations in the market, how they contribute through their industry knowledge and financial alliances, and the size of the financial challenge that may be involved for them in building private information channels globally to take advantage. The methodology used in this paper proceeds in two stages. In the first stage, we study econometrically the formation mechanisms of the olive oil price in order to understand market participants' behavior, implementing ARMA, SARMA, and GARCH models as well as stochastic diffusion processes; the second stage is devoted to prediction, where we use a combined wavelet-ANN approach. Our main findings indicate that olive oil market participants interact with each other in a way that promotes the formation of stylized facts. Unstable participant behavior creates the volatility clustering, non-linear dependence, and cyclicity phenomena. By imitating each other in some periods of the campaign, different participants contribute to the fat tails observed in the olive oil price distribution. The best prediction model for the olive oil price is a back-propagation artificial neural network with inputs based on wavelet decomposition and recent past history.
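
Of the models listed, the GARCH step is the one that captures the volatility clustering; a minimal sketch with the `arch` package, on a simulated price series rather than actual olive oil data:

```python
# Sketch: fitting an AR(1)-GARCH(1,1) to price returns to capture the
# volatility clustering described above. The price series is simulated.
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(1)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))
returns = 100 * prices.pct_change().dropna()

res = arch_model(returns, mean="AR", lags=1, vol="Garch", p=1, q=1).fit(disp="off")
print(res.params)   # AR mean terms, omega, alpha[1], beta[1]
```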

Keywords: olive oil price, stylized facts, ARMA model, SARMA model, GARCH model, combined wavelet-artificial neural network, continuous-time stochastic volatility model

Procedia PDF Downloads 334
1605 A Machine Learning Approach for Efficient Resource Management in Construction Projects

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
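
A baseline of the kind described, a Random Forest over project records with feature importances highlighting cost drivers, can be sketched as follows; the features and data are illustrative placeholders:

```python
# Sketch: Random Forest cost-overrun regressor with feature importances,
# as described above. Features and data are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# columns: scope_changes, delivery_delay_days, crew_size, weather_index
X = rng.random((300, 4))
y = 50 * X[:, 0] + 30 * X[:, 1] + rng.normal(0, 5, 300)   # overrun, % of budget

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("R^2 on held-out data:", round(rf.score(X_te, y_te), 2))
print("feature importances:", rf.feature_importances_.round(2))
```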

Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management

Procedia PDF Downloads 29
1604 Hydroinformatics of Smart Cities: Real-Time Water Quality Prediction Model Using a Hybrid Approach

Authors: Elisa Coraggio, Dawei Han, Weiru Liu, Theo Tryfonas

Abstract:

Water is one of the most important resources for human society. The world is currently undergoing a wave of urban growth, and pollution problems have a great impact. Monitoring water quality is a key task for the future of the environment and the human species. In recent times, researchers using Smart City technologies have been trying to mitigate the problems generated by population growth in urban areas. The availability of huge amounts of data collected by a pervasive urban IoT can increase the transparency of decision making. Several services have already been implemented in Smart Cities, and more and more services will be involved in the future. Water quality monitoring can successfully be implemented in the urban IoT: the combination of water quality sensors, cloud computing, smart city infrastructure, and IoT technology can lead to a bright future for environmental monitoring. In past decades, much effort has been put into monitoring and predicting water quality using traditional approaches based on manual collection and laboratory analysis, which are slow and laborious. The present study proposes a methodology for implementing a water quality prediction model using artificial intelligence techniques and compares the results obtained with different algorithms. Furthermore, a 3D numerical model will be created using the software D-Water Quality, and the simulation results will be used as a training dataset for the artificial intelligence algorithm. This study derives the methodology and demonstrates its implementation based on information and data collected at the floating harbour in the city of Bristol (UK). The city of Bristol is blessed with the Bristol-Is-Open infrastructure, which includes a Wi-Fi network and virtual machines; it was also named the UK's smartest city in 2017.
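
Using the 3D simulation output as a training set, the prediction step can be sketched with a feed-forward network; all variables and data below are assumed for illustration:

```python
# Sketch: training a neural network on simulated water-quality output to
# predict a target variable (here dissolved oxygen) from sensor inputs.
# All variables and data are assumed for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.random((1000, 3))     # temperature, flow, turbidity (simulated)
y = 8 - 2 * X[:, 0] + X[:, 1] + rng.normal(0, 0.1, 1000)   # dissolved O2, mg/L

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=1),
)
model.fit(X[:800], y[:800])
print("held-out R^2:", round(model.score(X[800:], y[800:]), 3))
```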

Keywords: artificial intelligence, hydroinformatics, numerical modelling, smart cities, water quality

Procedia PDF Downloads 181
1603 Evaluation of Food Services by the Personnel in Hospitals of Athens, Greece

Authors: I. Mentziou, C. Delezos, D. Krikidis, A. Nestoridou, G. Boskou

Abstract:

Introduction: The systems of production and distribution of meals can have a significant impact on the food intake of hospital patients, who are likely to develop malnutrition. In hospitals, the consequences of food-borne infections can range from annoying to life-threatening for a patient, since they can lead to death in vulnerable groups. Aim: The aim of the present study was the evaluation of food safety management system implementation, as well as a general evaluation of the total quality management systems in Greek hospitals. Methods: This is a multifocal study on the implementation and evaluation of food safety management systems in the Greek hospitals of the Attica region. Eleven hospitals from the city of Athens were chosen for this purpose. The sample was drawn from the high-ranking personnel of the nutrition department (dietician, head chef, food technologist, public health inspector). Tailor-made questionnaires on hygiene regulations were used as tools for the interviews. Results: Overall, 30 employees in the field of hospital nutrition participated. Most of the replies indicated that the hygiene regulations are almost always implemented. Nevertheless, only 30% stated that there is a Hazard Analysis and Critical Control Points (HACCP) system in the hospital. A small number of questionnaires contained proposals for changes from the staff. Conclusion: Measuring the opinion of the personnel about the food services provided within a hospital can lead to continuous improvement of hospital nutrition.

Keywords: evaluation, food service, HACCP, hospital, personnel

Procedia PDF Downloads 370
1602 Seismic Integrity Determination of Dams in Urban Areas

Authors: J. M. Mayoral, M. Anaya

Abstract:

The urban and economic development of cities demands the construction of water-use and flood-control infrastructure. Likewise, it is necessary to determine the safety level of structures built to current standards and whether reinforcement actions need to be defined. This is all the more important for structures of great consequence, such as dams, since they imply a greater risk to the population in case of failure or undesirable operating conditions (e.g., seepage, cracks, subsidence). This article presents a methodology for determining the seismic integrity of dams in urban areas. From direct measurements of the dynamic properties using geophysical exploration and ambient seismic noise measurements, the seismic integrity of the concrete-faced rockfill dam selected as a case study is evaluated. To validate the results, two accelerometer stations were installed (in the free field and at the crest of the dam). Once the dynamic properties were determined, three-dimensional finite difference models were developed to evaluate the dam's seismic performance for different intensities of motion, considering site response and soil-structure interaction effects. The seismic environment was determined from uniform hazard spectra for several return periods. Based on the results obtained, the safety level of the dam against different seismic actions was determined, and the effectiveness of ambient seismic noise measurements in dynamic characterization, and in the subsequent evaluation of the seismic integrity of urban dams, was assessed.

Keywords: risk, seismic, soil-structure interaction, urban dams

Procedia PDF Downloads 108
1601 Analysis of Ferroresonant Overvoltages in Cable-fed Transformers

Authors: George Eduful, Ebenezer A. Jackson, Kingsford A. Atanga

Abstract:

This paper investigates the impact of cable length and transformer capacity on ferroresonant overvoltage in cable-fed transformers. The study was conducted by simulation using EMTP-RV. Results show that ferroresonance can cause dangerous overvoltages ranging from 2 to 5 per unit. These overvoltages stress the insulation of transformers and cables and can subsequently result in system failures. By applying basic multiple regression analysis (BMR) to the results obtained, a statistical model was derived in terms of cable length and transformer capacity. The model is useful for ferroresonance prediction and control in cable-fed transformers.
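
The regression step yields a planar model of the form V_pu = b0 + b1·L + b2·S over the simulated cases; a hedged sketch, with the simulation results below invented rather than taken from the paper:

```python
# Sketch of the regression step: predicting ferroresonant overvoltage
# (per unit) from cable length and transformer capacity. The simulated
# overvoltage values below are invented placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

cable_km = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
tx_kva = np.array([100, 100, 200, 200, 315, 315])
v_pu = np.array([2.1, 2.8, 3.0, 3.6, 4.1, 4.7])   # simulated overvoltages

X = np.column_stack([cable_km, tx_kva])
model = LinearRegression().fit(X, v_pu)
print("intercept:", model.intercept_.round(2), "coefs:", model.coef_.round(3))
print("predicted V_pu at 2 km, 250 kVA:", model.predict([[2.0, 250]]).round(2))
```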

Keywords: ferroresonance, cable-fed transformers, EMTP RV, regression analysis

Procedia PDF Downloads 527
1600 Application of ANN and Fuzzy Logic Algorithms for Runoff and Sediment Yield Modelling of Kal River, India

Authors: Mahesh Kothari, K. D. Gharde

Abstract:

ANN and fuzzy logic (FL) models were developed to predict the runoff and sediment yield of the catchment of the Kal river, India, using 21 years (1991 to 2011) of rainfall and other hydrological data (evaporation, temperature, and streamflow lagged by one and two days), with 7 years of data for the sediment yield modelling. The ANN model performance improved as the number of input vectors increased. The fuzzy logic model performed with an R value above 0.95 during both the development and validation stages. Comparatively, the FL model was found to outperform the ANN in predicting the runoff and sediment yield of the Kal river.
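
The FL machinery named in the keywords, membership functions and defuzzification, can be sketched in a few lines; the rule base and membership shapes are assumed for illustration:

```python
# Sketch of the fuzzy-logic machinery named in the keywords: triangular
# membership functions and centroid defuzzification. The rule base and
# membership shapes are assumed, not the study's calibrated model.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

rain = 42.0                                   # today's rainfall, mm (input)
low = tri(rain, 0, 10, 30)
med = tri(rain, 10, 40, 70)
high = tri(rain, 50, 80, 110)

runoff = np.linspace(0, 60, 301)              # candidate runoff values, mm
# Mamdani-style rule outputs, clipped by each rule's firing strength:
agg = np.maximum.reduce([np.minimum(low, tri(runoff, 0, 5, 15)),
                         np.minimum(med, tri(runoff, 10, 25, 40)),
                         np.minimum(high, tri(runoff, 30, 50, 60))])
crisp = (runoff * agg).sum() / agg.sum()      # centroid defuzzification
print(f"predicted runoff ~ {crisp:.1f} mm")
```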

Keywords: transferred function, sigmoid, backpropagation, membership function, defuzzification

Procedia PDF Downloads 561
1599 Development of Prediction Tool for Sound Absorption and Sound Insulation for Sound Proof Properties

Authors: Yoshio Kurosawa, Takao Yamaguchi

Abstract:

High-frequency automotive interior noise, above 500 Hz, considerably affects automotive passenger comfort. To reduce this noise, sound insulation material is often laminated onto body panels or interior trim panels. For more effective noise reduction, the sound reduction properties of this laminated structure need to be estimated. We have developed a new calculation tool, handy for designers, that can roughly estimate the sound absorption and insulation properties of a laminate structure. In this report, the outline of this tool and an analysis example applied to a floor mat are introduced.
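
The transfer matrix method named in the keywords chains 2x2 layer matrices to obtain the surface impedance and absorption of a laminate; a simplified sketch treating each layer as an equivalent fluid, with invented layer properties:

```python
# Simplified sketch of the transfer matrix method for a laminate: each
# layer is an equivalent fluid with a 2x2 transfer matrix; chaining them
# gives the rigid-backed surface impedance and the normal-incidence
# absorption coefficient. Layer properties are invented placeholders.
import numpy as np

rho0, c0 = 1.21, 343.0                   # air density, speed of sound
f = 1000.0                               # frequency of interest, Hz
omega = 2 * np.pi * f

# (effective density kg/m^3, effective sound speed m/s, thickness m);
# a complex sound speed models damping in the porous layer.
layers = [(1.4, 280.0 + 40.0j, 0.010),   # porous mat (equivalent fluid)
          (1.21, 343.0, 0.005)]          # air gap

T = np.eye(2, dtype=complex)
for rho, c, d in layers:
    k, Z = omega / c, rho * c
    T = T @ np.array([[np.cos(k * d), 1j * Z * np.sin(k * d)],
                      [1j * np.sin(k * d) / Z, np.cos(k * d)]])

Z0 = rho0 * c0
Zs = T[0, 0] / T[1, 0]                   # rigid backing: zero velocity behind
R = (Zs - Z0) / (Zs + Z0)
print("absorption coefficient:", round(1 - abs(R) ** 2, 3))
```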

Keywords: automobile, acoustics, porous material, transfer matrix method

Procedia PDF Downloads 506
1598 Feasibility of Building Structure Due to Decreased Concrete Quality of School Building in Banda Aceh City 19 Years after Tsunami

Authors: Rifqi Irvansyah, Abdullah Abdullah, Yunita Idris, Bunga Raihanda

Abstract:

Banda Aceh is particularly susceptible to heightened vulnerability during natural disasters due to its concentrated exposure to multi-hazard risks. In the aftermath of natural disasters such as the 2004 Indian Ocean earthquake and tsunami, and despite the urgent priorities of that period, several public facilities, including school buildings, sustained damage yet continued operating without adequate repairs, even though they had been submerged by the tsunami. This research aims to evaluate the consequences of column damage induced by tsunami inundation on the structural integrity of buildings. The investigation employs column interaction diagrams to assess capacity, taking into account factors such as rebar deterioration and corrosion. The analysis shows that one-fourth of the K1 columns on the first floor fall outside the column interaction diagram, indicating that the column structure cannot carry the load above it, as evidenced by values of Pu and Mu greater than the columns' design strength. This suggests that these five K1 columns are cause for concern, as their capacity is decreasing. The results indicate that the structure cannot sustain the applied load because the column cross-sections have deteriorated. In contrast, all K2 columns meet the design strength, indicating that their structure can withstand the structural loads.
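
The pass/fail check against the interaction diagram reduces to comparing each demand pair (Pu, Mu) with the capacity curve; a small sketch, with the diagram points and demands invented:

```python
# Sketch of the interaction-diagram check: a column passes if its demand
# point (Pu, Mu) falls inside the capacity curve. The diagram points and
# demand values below are invented placeholders.
import numpy as np

# Design interaction diagram: axial capacity (kN) vs. moment capacity (kN*m)
p_cap = np.array([2400, 2000, 1500, 1000, 500, 0])
m_cap = np.array([0, 110, 150, 160, 130, 90])

def passes(pu, mu):
    """Interpolate the moment capacity at Pu and compare with Mu."""
    m_allow = np.interp(pu, p_cap[::-1], m_cap[::-1])  # interp needs ascending x
    return mu <= m_allow

print(passes(1200, 140))   # True: the demand point is inside the diagram
print(passes(1200, 180))   # False: demand exceeds the (degraded) capacity
```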

Keywords: tsunami inundation, column damage, column interaction diagram, mitigation effort

Procedia PDF Downloads 62
1597 Application of Neural Network on the Loading of Copper onto Clinoptilolite

Authors: John Kabuba

Abstract:

This study investigated the implementation of neural network (NN) techniques for predicting the loading of Cu ions onto clinoptilolite. An experimental design using analysis of variance (ANOVA) was chosen for testing the adequacy of the neural network and for optimizing the effective input parameters (pH, temperature, and initial concentration). A feed-forward, multi-layer perceptron (MLP) NN successfully tracked the non-linear behavior of the adsorption process versus the input parameters, with a mean squared error (MSE), correlation coefficient (R), and mean squared relative error (MSRE) of 0.102, 0.998, and 0.004, respectively. The results showed that NN modeling techniques can effectively predict and simulate highly complex, non-linear processes such as ion exchange.

Keywords: clinoptilolite, loading, modeling, neural network

Procedia PDF Downloads 413
1596 Timing and Probability of Presurgical Teledermatology: Survival Analysis

Authors: Felipa de Mello-Sampayo

Abstract:

The aim of this study is to examine, from the patient's perspective, the timing and probability of using teledermatology, comparing it with a conventional referral system. The main added value of the dynamic stochastic model lies in its concrete application to patients waiting for dermatological surgical intervention. Patients with low uncertainty about their health level should use teledermatology treatment as soon as possible, which is precisely when teledermatology is least valuable. The results of the model were then tested empirically on the teledermatology network covering the area served by the Hospital Garcia da Horta, Portugal, which links the primary care centers of 24 health districts with the hospital's dermatology department via the corporate intranet of the Portuguese healthcare system. Health level volatility can be understood as the hazard of developing skin cancer, and the trend of the health level as the bias toward developing skin lesions. The results of the survival analysis suggest that the theoretical model can explain the use of teledermatology: it depends negatively on the volatility of patients' health and positively on the trend of health, i.e., the lower the risk of developing skin cancer and the younger the patients, the more presurgical teledermatology one expects to occur. Presurgical teledermatology also depends positively on out-of-pocket expenses and negatively on the opportunity costs of teledermatology, i.e., the lower the benefit missed by using teledermatology, the more presurgical teledermatology one expects to occur.
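
On the empirical side, a survival analysis of this kind can be sketched as a Cox proportional-hazards fit with time-to-teledermatology-use as the outcome; the variable names and data below are assumed for illustration:

```python
# Sketch of the survival analysis: a Cox proportional-hazards model for
# the time until a patient uses presurgical teledermatology. Variable
# names and data are assumed for illustration.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "weeks_to_use": rng.exponential(20, n).round(1) + 0.1,  # duration > 0
    "used": rng.integers(0, 2, n),                          # 0 = censored
    "health_volatility": rng.random(n),    # skin-cancer hazard proxy
    "age": rng.integers(20, 85, n),
    "out_of_pocket": rng.random(n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="weeks_to_use", event_col="used")
cph.print_summary()   # sign of each coefficient gives faster/slower uptake
```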

Keywords: teledermatology, wait time, uncertainty, opportunity cost, survival analysis

Procedia PDF Downloads 122
1595 Groundwater Potential Mapping using Frequency Ratio and Shannon’s Entropy Models in Lesser Himalaya Zone, Nepal

Authors: Yagya Murti Aryal, Bipin Adhikari, Pradeep Gyawali

Abstract:

The Lesser Himalaya zone of Nepal consists of thrusting and folding belts, which play an important role in the sustainable management of groundwater in the Himalayan regions. The study area is located in the Dolakha and Ramechhap Districts of Bagmati Province, Nepal. Geologically, these districts are situated in the Lesser Himalayas and partly encompass the Higher Himalayan rock sequence, which includes low-grade to high-grade metamorphic rocks. Following the Gorkha earthquake in 2015, numerous springs dried up, and many others are currently experiencing depletion due to the distortion of the natural groundwater flow. The primary objective of this study is to identify potential groundwater areas and determine suitable sites for artificial groundwater recharge. Two distinct statistical approaches were used to develop the models: the frequency ratio (FR) and Shannon entropy (SE) methods. The study utilized both primary and secondary datasets and incorporated significant conditioning and controlling factors derived from fieldwork and literature reviews. Field data collection involved a spring inventory, soil analysis, lithology assessment, and hydro-geomorphology study. Additionally, slope, aspect, drainage density, and lineament density were extracted from a digital elevation model (DEM) using GIS and transformed into thematic layers. For training and validation, 114 springs were divided in a 70/30 ratio, together with an equal number of non-spring pixels. After assigning weights to each class based on the two proposed models, a groundwater potential map was generated using GIS, classifying the area into five levels: very low, low, moderate, high, and very high. The models' outcomes reveal that over 41% of the area falls into the low and very low potential categories, while only 30% of the area shows a high probability of groundwater potential. To evaluate model performance, accuracy was assessed using the area under the curve (AUC). The success-rate AUC values for the FR and SE methods were 78.73% and 77.09%, respectively, and the prediction-rate AUC values were 76.31% and 74.08%. The results indicate that the FR model exhibits greater prediction capability than the SE model in this case study.
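
The frequency ratio itself is the share of springs falling in a factor class divided by the share of area occupied by that class; a compact sketch for one factor, with invented counts:

```python
# Sketch of the frequency-ratio (FR) computation for one conditioning
# factor: FR = (% of springs in class) / (% of area in class). Values
# above 1 indicate classes favourable to groundwater. Counts invented.
import numpy as np

slope_classes = ["0-15 deg", "15-30 deg", "30-45 deg", ">45 deg"]
springs_in_class = np.array([45, 38, 24, 7])      # of 114 inventoried springs
pixels_in_class = np.array([20000, 35000, 30000, 15000])

fr = (springs_in_class / springs_in_class.sum()) / (
     pixels_in_class / pixels_in_class.sum())
for cls, ratio in zip(slope_classes, fr):
    print(f"{cls}: FR = {ratio:.2f}")   # class weights summed into the map
```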

Keywords: groundwater potential mapping, frequency ratio, Shannon’s Entropy, Lesser Himalaya Zone, sustainable groundwater management

Procedia PDF Downloads 74
1594 Makhraj Recognition Using Convolutional Neural Network

Authors: Zan Azma Nasruddin, Irwan Mazlin, Nor Aziah Daud, Fauziah Redzuan, Fariza Hanis Abdul Razak

Abstract:

This paper focuses on a machine learning system that learns the correct pronunciation of Makhraj Huroofs. Usually, people need to find an expert to learn to pronounce a Huroof accurately. In this study, the researchers developed a system that is able to learn the selected Huroofs, namely ha, tsa, zho, and dza, using a convolutional neural network. The researchers present the chosen CNN architecture, designed so that the system learns the data (Huroofs) as quickly as possible and produces high accuracy during prediction. The researchers experimented with the system to measure the accuracy and the cross entropy during the training process.
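
A CNN of the kind described, classifying four Huroofs from audio rendered as a spectrogram-like input, can be sketched in TensorFlow/Keras; the architecture and input shape are assumptions, not the paper's exact configuration:

```python
# Sketch of a CNN for four-class Huroof recognition from a
# spectrogram-like input. Architecture and input shape are assumptions,
# not the paper's exact configuration.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),        # e.g. log-mel spectrogram
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),   # ha, tsa, zho, dza
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # the measured cross entropy
              metrics=["accuracy"])
model.summary()
```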

Keywords: convolutional neural network, Makhraj recognition, speech recognition, signal processing, tensorflow

Procedia PDF Downloads 329
1593 A Comparison of Smoothing Spline Method and Penalized Spline Regression Method Based on Nonparametric Regression Model

Authors: Autcha Araveeporn

Abstract:

This paper presents a study of a nonparametric regression model estimated by a smoothing spline method and a penalized spline regression method, and compares the techniques used for estimation and prediction under the nonparametric regression model. We tried both methods on crude oil prices in dollars per barrel and on the Stock Exchange of Thailand (SET) index. According to the results, it is concluded that the smoothing spline method performs better than the penalized spline regression method.
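
The smoothing-spline side of the comparison can be sketched with SciPy; the series is simulated rather than the actual oil or SET data, and the penalized-spline competitor would instead put an explicit roughness penalty on B-spline coefficients:

```python
# Sketch of a cubic smoothing-spline fit of the kind compared above.
# The series is simulated; the smoothing parameter s trades roughness
# against fidelity to the data.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)
t = np.arange(200, dtype=float)                    # trading days
y = 70 + 10 * np.sin(t / 25.0) + rng.normal(0, 1.5, t.size)  # price-like series

spline = UnivariateSpline(t, y, k=3, s=t.size * 2.0)
y_hat = spline(t)
print("RMSE of fit:", round(float(np.sqrt(np.mean((y - y_hat) ** 2))), 3))
```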

Keywords: nonparametric regression model, penalized spline regression method, smoothing spline method, Stock Exchange of Thailand (SET)

Procedia PDF Downloads 431
1592 Rheological Modeling for Shape-Memory Thermoplastic Polymers

Authors: H. Hosseini, B. V. Berdyshev, I. Iskopintsev

Abstract:

This paper presents a rheological model for producing shape-memory thermoplastic polymers. The shape-memory effect occurs as a result of internal rearrangement of the structural elements of a polymer. A non-linear viscoelastic model was developed that allows qualitative and quantitative prediction of the stress-strain behavior of shape-memory polymers during heating. This research was carried out to develop a technique for determining the maximum possible change in the size of heat-shrinkable products during heating. The rheological model used in this work is particularly suitable for defining process parameters and the constructive parameters of the processing equipment.

Keywords: elastic deformation, heating, shape-memory polymers, stress-strain behavior, viscoelastic model

Procedia PDF Downloads 315
1591 Predicting Recessions with Bivariate Dynamic Probit Model: The Czech and German Case

Authors: Lukas Reznak, Maria Reznakova

Abstract:

A recession has a profound negative effect on all stakeholders involved. It follows that the timely prediction of recessions has been of utmost interest both in theoretical research and in practical macroeconomic modelling. The current mainstream of recession prediction is based on standard OLS models of continuous GDP using macroeconomic data. This approach is unsuitable for two reasons: the standard continuous models are proving obsolete, and the macroeconomic data are unreliable, often revised many years retroactively. The aim of the paper is to explore a different branch of recession forecasting theory and to verify the findings on real data for the Czech Republic and Germany. In the paper, the authors present a family of discrete-choice probit models with parameters estimated by maximum likelihood. In its basic form, the probit models a univariate series of recessions and expansions in the economic cycle for a given country. The majority of the paper deals with more complex model structures, namely dynamic and bivariate extensions. The dynamic structure models the autoregressive nature of recessions, taking previous economic activity into consideration to predict developments in subsequent periods. Bivariate extensions utilize information from a foreign economy by incorporating the correlation of error terms, thus modelling the dependencies between the two countries. Bivariate models predict a bivariate time series of economic states in both economies and thus enhance predictive performance. A vital enabler of timely and successful recession forecasting is reliable and readily available data. Leading indicators, namely the yield curve and stock market indices, represent an ideal data base, as the information is available in advance and does not undergo retroactive revisions. Just as importantly, the combination of the yield curve and stock market indices reflects a range of macroeconomic trends and financial market investors' expectations that influence the economic cycle. These theoretical approaches are applied to real data for the Czech Republic and Germany. Two models were identified for each country, one for in-sample and one for out-of-sample predictive purposes. All four followed a bivariate structure, and three contained a dynamic component.
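
A univariate dynamic probit of the kind the paper extends regresses the recession indicator on lagged leading indicators and its own lag; a sketch with statsmodels on simulated data, where the paper's bivariate extension would additionally correlate the error terms across the two countries:

```python
# Sketch of a dynamic probit: the recession indicator is regressed on
# the lagged yield-curve spread, a stock index return, and its own lag.
# Data are simulated; the bivariate extension adds correlated errors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 240                                    # 20 years of monthly observations
spread = rng.normal(1.0, 1.2, n)           # term spread, lagged
stock = rng.normal(0.5, 2.0, n)            # stock index return, lagged
latent = -0.8 - 0.9 * spread - 0.3 * stock + rng.normal(0, 1, n)
rec = (latent > 0).astype(int)             # recession indicator

df = pd.DataFrame({"rec": rec, "spread": spread, "stock": stock})
df["rec_lag"] = df["rec"].shift(1)         # dynamic (autoregressive) term
df = df.dropna()

X = sm.add_constant(df[["spread", "stock", "rec_lag"]])
res = sm.Probit(df["rec"], X).fit(disp=0)
print(res.summary())
```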

Keywords: bivariate probit, leading indicators, recession forecasting, Czech Republic, Germany

Procedia PDF Downloads 242