Search results for: accidents predictions
246 Experimental Study of an Isobaric Expansion Heat Engine with Hydraulic Power Output for Conversion of Low-Grade-Heat to Electricity
Authors: Maxim Glushenkov, Alexander Kronberg
Abstract:
The isobaric expansion (IE) process is an alternative to the conventional gas/vapor expansion, accompanied by a pressure decrease, that is typical of all state-of-the-art heat engines. Eliminating the expansion stage in which useful work is extracted means that the most critical and expensive parts of ORC systems (turbine, screw expander, etc.) are also eliminated. In many cases, IE heat engines can be more efficient than conventional expansion machines. In addition, IE machines have a very simple, reliable, and inexpensive design. They can also perform all the known operations of existing heat engines and provide usable energy in a very convenient hydraulic or pneumatic form. This paper reports measurements made with the engine operating as a heat-to-shaft-power or electricity converter and a comparison of the experimental results with a thermodynamic model. Experiments were carried out at heat source temperatures in the range 30–85 °C and a heat sink temperature of around 20 °C; refrigerant R134a was used as the engine working fluid. The pressure difference generated by the engine varied from 2.5 bar at a heat source temperature of 40 °C to 23 bar at 85 °C. Using a differential piston, the generated pressure was quadrupled to pump hydraulic oil through a hydraulic motor that generates shaft power and is connected to an alternator. At a frequency of about 0.5 Hz, the engine operates with useful power up to 1 kW and an oil pumping flowrate of 7 L/min. Depending on the temperature of the heat source, the obtained efficiency was 3.5–6%. This efficiency is remarkably high considering such a low temperature difference (10–65 °C) and low power (< 1 kW). The engine's observed performance is in good agreement with the predictions of the model.
The results are very promising, showing that the engine is a simple and low-cost alternative to ORC plants and other known energy conversion systems, especially at low temperatures (< 100 °C) and in the low power range (< 500 kW), where other known technologies are not economic. Thus low-grade solar and geothermal energy, biomass combustion, and waste heat with a temperature above 30 °C can be incorporated into various energy conversion processes.
Keywords: isobaric expansion, low-grade heat, heat engine, renewable energy, waste heat recovery
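The reported 3.5–6% efficiency can be put in context against the ideal Carnot limit for the same source and sink temperatures. A minimal sketch follows; the temperatures come from the abstract, but the comparison itself is illustrative and not part of the authors' model:

```python
def carnot_efficiency(t_hot_c, t_cold_c):
    """Ideal Carnot efficiency for source/sink temperatures given in deg C."""
    t_hot = t_hot_c + 273.15   # convert to kelvin
    t_cold = t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

# Reported operating range: heat source 40-85 deg C, heat sink ~20 deg C
for t_hot in (40, 85):
    eta = carnot_efficiency(t_hot, 20)
    print(f"T_hot={t_hot} C: Carnot limit = {eta:.1%}")
```

At 85 °C the Carnot limit is about 18%, so the measured 6% corresponds to roughly one third of the theoretical maximum, which is indeed notable for a sub-1 kW machine.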
Procedia PDF Downloads 226
245 Transfer Function Model-Based Predictive Control for Nuclear Core Power Control in PUSPATI TRIGA Reactor
Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha
Abstract:
The 1 MWth PUSPATI TRIGA Reactor (RTP) at the Malaysia Nuclear Agency has been operating for more than 35 years. The existing core power control uses a conventional controller known as the Feedback Control Algorithm (FCA). It is technically challenging to keep the core power output stable and operating within the acceptable error bands demanded by the safety requirements of the RTP. The current power tracking performance could be considered unsatisfactory, with significant room for improvement. Hence, a new core power control design is very important for improving the tracking and regulation of reactor power by controlling the movement of control rods in a way that suits the demands of highly sensitive nuclear reactor power control. In this paper, the proposed Model Predictive Control (MPC) law was applied to control the core power. The model for core power control was based on mathematical models of the reactor core, MPC, and a control rod selection algorithm. The mathematical models of the reactor core were based on a point kinetics model, thermal hydraulic models, and reactivity models. The proposed MPC was presented as a transfer function model of the reactor core according to perturbation theory. The transfer function model-based predictive control (TFMPC) was developed to design the core power control with predictions based on a T-filter, towards the real-time implementation of MPC on hardware. This paper introduces the sensitivity functions for the TFMPC feedback loop to reduce the impact on the input actuation signal and demonstrates the behaviour of TFMPC in terms of disturbance and noise rejection. The tracking and regulating performance of the conventional controller and TFMPC were compared and analysed using MATLAB.
In conclusion, the proposed TFMPC has satisfactory performance in tracking and regulating core power for controlling a nuclear reactor with high reliability and safety.
Keywords: core power control, model predictive control, PUSPATI TRIGA reactor, TFMPC
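The receding-horizon idea behind MPC can be sketched in a few lines. The plant below is a hypothetical normalised first-order power model, not the RTP point-kinetics model, and the grid-search optimiser stands in for the paper's TFMPC formulation:

```python
def predict(power, u, a=0.9, b=0.5, horizon=5):
    """Predict outputs of a first-order model x[k+1] = a*x[k] + b*u
    under a constant candidate input u (illustrative plant only)."""
    traj, x = [], power
    for _ in range(horizon):
        x = a * x + b * u
        traj.append(x)
    return traj

def mpc_step(power, setpoint, u_candidates):
    """Pick the input whose predicted trajectory minimises squared tracking error."""
    def cost(u):
        return sum((x - setpoint) ** 2 for x in predict(power, u))
    return min(u_candidates, key=cost)

# Drive normalised core power from 0.2 toward a 1.0 setpoint
power, setpoint = 0.2, 1.0
candidates = [i / 10 for i in range(21)]  # quantised control demand 0.0 .. 2.0
for _ in range(30):
    u = mpc_step(power, setpoint, candidates)
    power = 0.9 * power + 0.5 * u   # apply only the first move, then re-optimise
print(round(power, 3))
```

Applying only the first optimised move and re-solving at every step is the receding-horizon principle that TFMPC shares, whatever the underlying plant model.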
Procedia PDF Downloads 240
244 Impacts of Climate Elements on the Annual Periodic Behavior of the Shallow Groundwater Level: Case Study from Central-Eastern Europe
Authors: Tamas Garamhegyi, Jozsef Kovacs, Rita Pongracz, Peter Tanos, Balazs Trasy, Norbert Magyar, Istvan G. Hatvani
Abstract:
Like most environmental processes, shallow groundwater fluctuation under natural circumstances also behaves periodically. With the statistical tools at hand, it can easily be determined whether a period exists in the data. Thus, the question may be raised: does the estimated average period characterize the whole time interval or not? This is especially important in the case of such a complex phenomenon as shallow groundwater fluctuation, which is driven by numerous factors. Because of the continuous changes in the oscillating components of shallow groundwater time series, the most appropriate method for investigating their periodicity is wavelet spectrum analysis. The aims of the research were to investigate the periodic behavior of the shallow groundwater time series of an agriculturally important and drought-sensitive region in Central-Eastern Europe and its relationship to the European pressure action centers. During the research, ~216 shallow groundwater observation wells located in the eastern part of the Great Hungarian Plain, with a temporal coverage of 50 years, were scanned for periodicity. By taking the full time interval as 100%, the presence of any period could be expressed as a percentage. With the complex hydrogeological/meteorological model developed in this study, non-periodic time intervals were found in the shallow groundwater levels. On the local scale, this phenomenon was linked to drought conditions and, on the regional scale, to the maxima of the regional air pressures in the Gulf of Genoa. The study documented an important link between shallow groundwater levels and climate variables/indices, facilitating the necessary adaptation strategies on national and/or regional scales, which have to take into account the predictions of drought-related climatic conditions.
Keywords: climate change, drought, groundwater periodicity, wavelet spectrum and coherence analyses
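Wavelet spectrum analysis localises periods in time, which is why the authors use it. As a much simpler stdlib illustration of the underlying idea of period detection, a discrete Fourier periodogram can recover the dominant period of a stationary window (the series below is synthetic, not the Hungarian well records):

```python
import cmath
import math

def dominant_period(series):
    """Return the period (in samples) with the largest DFT power,
    ignoring the zero-frequency (mean) component."""
    n = len(series)
    mean = sum(series) / n
    centred = [x - mean for x in series]
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2 + 1):
        coef = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                   for t, x in enumerate(centred))
        power = abs(coef) ** 2
        if power > best_power:
            best_k, best_power = k, power
    return n / best_k

# Synthetic monthly groundwater level with a 12-month (annual) cycle
levels = [math.sin(2 * math.pi * t / 12) for t in range(120)]
print(dominant_period(levels))  # dominant period: 12 samples
```

Unlike the wavelet approach, a periodogram assumes the period is present throughout the window; detecting the non-periodic intervals reported in the study is exactly what requires the time-localised wavelet spectrum.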
Procedia PDF Downloads 385
243 The Influence of Environmental Attributes on Children's Pedestrian-Crash Risk in School Zones
Authors: Jeongwoo Lee
Abstract:
Children are the most vulnerable travelers, and they are at risk of pedestrian injury. Creating a safe route to school is important because walking to school is one of the main opportunities for promoting needed physical exercise among children. This study examined how the built environmental attributes near an elementary school influence traffic accidents among school-aged children. The study used two complementary data sources: the locations of police-reported pedestrian crashes and the built environmental characteristics of school areas. The environmental attributes of road segments were collected through GIS measurements of local data and site audits using an inventory developed for measuring pedestrian-crash risk scores. The inventory data were collected at 840 road segments near 32 elementary schools in the city of Ulsan. We observed all segments within a 300-meter radius of the entrance of each elementary school. Segments are street block faces. The inventory included 50 items, organized into four domains: accessibility (17 items), pleasurability (11 items), perceived safety from traffic (9 items), and traffic and land-use measures (13 items). Elementary schools were categorized into two groups based on the distribution of pedestrian-crash hazard index scores. A high pedestrian-crash zone was defined as a school area within the eighth, ninth, and tenth deciles, while a no pedestrian-crash zone was defined as a school zone with no pedestrian crash among school-aged children between 2013 and 2016. No- and high pedestrian-crash zones were compared to determine whether different settings of the built environment near the school led to different rates of pedestrian-crash incidents. The results showed that crash risk can be influenced by several environmental factors, such as the shape of the school route, the number of intersections, street visibility and land use, and the type of sidewalk.
The findings inform policy for creating safe routes to school to reduce the pedestrian-crash risk among children by focusing on school zones.
Keywords: active school travel, school zone, pedestrian crash, safety route to school
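The decile-based zoning described above can be sketched as follows. The hazard scores and crash-free flags are hypothetical inputs, not the Ulsan data:

```python
import statistics

def classify_zones(hazard_scores, crash_free):
    """Label each school zone following the study's scheme:
    'high' if its hazard index falls in the 8th-10th deciles,
    'no' if no child pedestrian crash was recorded, else 'other'.
    `crash_free` is a parallel list of booleans (hypothetical input)."""
    # statistics.quantiles with n=10 returns the 9 decile cut points
    deciles = statistics.quantiles(hazard_scores, n=10)
    threshold = deciles[6]  # start of the 8th decile (top 30%)
    labels = []
    for score, no_crash in zip(hazard_scores, crash_free):
        if score >= threshold:
            labels.append("high")
        elif no_crash:
            labels.append("no")
        else:
            labels.append("other")
    return labels

scores = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
labels = classify_zones(scores, [True] * 10)
print(labels)
```

With the toy scores above, the top three schools land in the "high" group and the remaining crash-free ones in the "no" group, mirroring the two comparison groups of the study.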
Procedia PDF Downloads 245
242 Prevalence of Dengue in Sickle Cell Disease in Pre-school Children
Authors: Nikhil A. Gavhane, Sachin Shah, Ishant S. Mahajan, Pawan D. Bahekar
Abstract:
Introduction: Millions of people are affected by dengue fever every year, which drives up healthcare expenses in many low-income countries. Organ failure and other serious symptoms may result. Another worldwide public health problem is sickle cell anaemia, which is most prevalent in Africa, the Caribbean, and Europe. Dengue epidemics have reportedly occurred in locations with a high frequency of sickle cell disease, compounding the health problems in these areas. Aims and Objectives: This study examines dengue infection in pre-school children with sickle cell disease. Method: This retrospective cohort study examined paediatric patients. Young people with sickle cell disease (SCD) and dengue infection, and a control group without SCD or dengue, were studied. Data on demographics, SCD complications, medical treatments, and laboratory findings were gathered to analyse the influence of SCD on dengue severity and clinical outcomes, classified as severe or non-severe by the 2009 WHO classification. Using fever onset or admission symptoms, the study estimated acute illness duration. Result: Table 1 compares haemoglobin genotype-based dengue episode features in SS, SC, and control groups. Table 2 shows that severe dengue cases are older, have longer admission delays, and have particular symptoms. Table 3's multivariate analysis indicates a strong association between the SS genotype and severe dengue, multiorgan failure, and acute pulmonary problems. Table 4 relates severe dengue to higher white blood cell counts, anaemia, elevated liver enzymes, and reduced lactate dehydrogenase. Conclusion: This study is valuable but confined to hospitalised dengue patients with sickle cell disease. Small cohorts limit comparisons. Further study is needed since the findings contradict predictions.
Keywords: dengue, chills, headache, severe myalgia, vomiting, nausea, prostration
Procedia PDF Downloads 72
241 Variations in Heat and Cold Waves over Southern India
Authors: Amit G. Dhorde
Abstract:
It is now well established that global surface air temperatures have increased significantly during the period that followed the industrial revolution. One of the main predictions of climate change is that the occurrence of extreme weather events will increase in the future. In many regions of the world, high-temperature extremes have already started occurring with rising frequency. The main objective of the present study is to understand spatial and temporal changes in days with heat and cold wave conditions over southern India. The study area includes the region of India that lies to the south of the Tropic of Cancer. To fulfill the objective, daily maximum and minimum temperature data for 80 stations were collected for the period 1969-2006 from the National Data Center of the India Meteorological Department. After assessing the homogeneity of the data, 62 stations were finally selected for the study. Heat and cold waves were classified as slight, moderate, and severe based on the criteria given by the India Meteorological Department. For every year, the number of days experiencing heat and cold wave conditions was computed. These data were analyzed with linear regression to detect any existing trend. Further, the time period was divided into four decades to investigate the decadal frequency of the occurrence of heat and cold waves. The results revealed that the average annual temperature over southern India shows an increasing trend, which signifies warming over this area. Further, slight cold waves during the winter season have been decreasing at the majority of the stations. The moderate cold waves also show a similar pattern at the majority of the stations. This is an indication of warming winters over the region. Besides this analysis, other extreme indices were also analyzed, such as extremely hot days, hot days, very cold nights, and cold nights.
This analysis revealed that nights are becoming warmer, and days are also warming over some regions.
Keywords: heat wave, cold wave, southern India, decadal frequency
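The trend-detection step can be sketched with an ordinary least-squares slope on annual counts. The series below is synthetic, not the 62-station record:

```python
def linear_trend(years, counts):
    """Ordinary least-squares slope of annual heat-wave day counts
    (days per year); a positive slope indicates an increasing trend."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(counts) / n
    sxx = sum((x - mean_x) ** 2 for x in years)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, counts))
    return sxy / sxx

# Illustrative series: heat-wave days creeping up by 0.5 day per year
years = list(range(1969, 2007))
counts = [5 + 0.5 * (y - 1969) for y in years]
print(round(linear_trend(years, counts), 3))  # 0.5
```

In practice the slope would be paired with a significance test (the study's 1969-2006 window gives 38 annual points per station) before declaring a trend.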
Procedia PDF Downloads 128
240 Assessment of Implementation of the Health and Safety Contents of the Nigerian Factories Act by Small and Medium Scale Industries in Anambra State, Nigeria
Authors: Vivian Uchechi Okpala
Abstract:
Background: Millions of workers die every year as a result of occupational hazards, accidents, and injuries that stem from non-compliance with the laws and legislation guiding the health, safety, and welfare of workers in industries. This prompted the assessment of the implementation of the health and safety contents of the Nigerian Factories Act (NFA) by small and medium scale industries in Anambra State. Objectives: The study aimed to assess the extent of implementation of Part II Health and Part III Safety (General Provisions) of the NFA, and to examine how implementation varies with the age of the industries, the locations of the industries, and the level of education of the workers of the small and medium scale industries. Methods: A descriptive survey research design was used. The area of the study was Anambra State. The population comprised 180 chairmen/presidents of workers' unions of manufacturing industries in Anambra State. The instrument used was a structured questionnaire titled 'Assessment of Implementation of NFA Health and Safety Contents by Small and Medium Scale Industries'. Results: The medium scale industries implemented Part II Health and Part III Safety (General Provisions) better than the small scale industries in Anambra State. The age of the industries, the location of the industries, and the level of education of the workers significantly influenced the implementation of Part III Safety (General Provisions) of the NFA, while the location of the industries significantly influenced the implementation of Part II Health (General Provisions) of the NFA.
Conclusion: There was generally a certain level of implementation of the Factories Act, but there is need for further improvement and strict inspection by the regulatory agencies. Implications of the study were highlighted, and several suggestions for further studies were made. Based on the findings, several recommendations were made, including that the Ministry of Labour and Productivity and the Ministry of Health should strengthen planned information campaigns and enforce strict policies to sanction offenders.
Keywords: occupational health and safety, Nigerian Factories Act, workers, welfare
Procedia PDF Downloads 140
239 A Support Vector Machine Learning Prediction Model of Evapotranspiration Using Real-Time Sensor Node Data
Authors: Waqas Ahmed Khan Afridi, Subhas Chandra Mukhopadhyay, Bandita Mainali
Abstract:
The research paper presents a unique approach to evapotranspiration (ET) prediction using a Support Vector Machine (SVM) learning algorithm. The study leverages real-time sensor node data to develop an accurate and adaptable prediction model, addressing the inherent challenges of traditional ET estimation methods. The integration of the SVM algorithm with real-time sensor node data offers great potential to improve spatial and temporal resolution in ET predictions. In the model development, key input features are measured and computed using mathematical equations such as Penman-Monteith (FAO56) and the soil water balance (SWB), which include soil-environmental parameters such as solar radiation (Rs), air temperature (T), atmospheric pressure (P), relative humidity (RH), wind speed (u2), rain (R), deep percolation (DP), soil temperature (ST), and change in soil moisture (∆SM). The one-year field data are split into three proportions, i.e., training, test, and validation sets. Kernel functions with tuned hyperparameters were used to train and improve the accuracy of the prediction model over multiple iterations. This paper also outlines the existing methods and machine learning techniques for determining evapotranspiration, data collection and preprocessing, model construction, and evaluation metrics, highlighting the significance of SVM in advancing the field of ET prediction. The results demonstrate the robustness and high predictive accuracy of the developed model on the basis of performance evaluation metrics (R2, RMSE, MAE). The effectiveness of the proposed model in capturing complex relationships within soil and environmental parameters provides insights into its potential applications for water resource management and hydrological ecosystems.
Keywords: evapotranspiration, FAO56, KNIME, machine learning, RStudio, SVM, sensors
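The soil-water-balance side of the feature computation can be sketched as follows, under the simplifying assumption that irrigation, runoff, and capillary rise are negligible. This is a reduced form of the SWB, not the paper's full FAO56 pipeline:

```python
def et_soil_water_balance(rain_mm, deep_percolation_mm, delta_sm_mm):
    """Simplified soil water balance: ET = R - DP - dSM (all in mm).
    Assumes no irrigation, runoff, or capillary rise over the interval,
    so whatever rain neither percolates nor stays in storage evaporates
    or transpires."""
    return rain_mm - deep_percolation_mm - delta_sm_mm

# One day: 10 mm rain, 2 mm percolates below the root zone,
# soil moisture storage rises by 4 mm -> 4 mm attributed to ET
print(et_soil_water_balance(10.0, 2.0, 4.0))  # 4.0 mm
```

Water-balance ET computed this way gives the target values against which an SVM regressor trained on Rs, T, P, RH, u2, and the other sensed parameters can be evaluated.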
Procedia PDF Downloads 69
238 Good Practices for Model Structure Development and Managing Structural Uncertainty in Decision Making
Authors: Hossein Afzali
Abstract:
Increasingly, decision analytic models are used to inform decisions about whether or not to publicly fund new health technologies. It is well noted that the accuracy of model predictions is strongly influenced by the appropriateness of model structuring. However, there is relatively little methodological guidance on this issue in guidelines developed by national funding bodies such as the Australian Pharmaceutical Benefits Advisory Committee (PBAC) and the National Institute for Health and Care Excellence (NICE) in the UK. This presentation aims to discuss issues around model structuring within decision making with a focus on (1) the need for a transparent and evidence-based model structuring process to inform the most appropriate set of structural aspects as the base case analysis; and (2) the need to characterise structural uncertainty: if alternative plausible structural assumptions or judgements exist, the related structural uncertainty needs to be appropriately characterised. The presentation will provide an opportunity to share ideas and experiences on how the guidelines developed by national funding bodies address the above issues and to identify areas for further improvement. First, a review and analysis of the literature and the guidelines developed by PBAC and NICE will be provided. Then, it will be discussed how the issues around model structuring (including structural uncertainty) are not handled and justified in a systematic way within the decision-making process, the potential impact of this on the quality of public funding decisions, and how structural issues should be presented in submissions to national funding bodies. This presentation represents a contribution to good modelling practice within the decision-making process.
Although the presentation focuses on the PBAC and NICE guidelines, the discussion applies more widely to many other national funding bodies that use economic evaluation to inform funding decisions but do not transparently address model structuring issues, e.g., the Medical Services Advisory Committee (MSAC) in Australia or the Canadian Agency for Drugs and Technologies in Health.
Keywords: decision-making process, economic evaluation, good modelling practice, structural uncertainty
Procedia PDF Downloads 184
237 Carbon Sequestration Modeling in the Implementation of REDD+ Programmes in Nigeria
Authors: Oluwafemi Samuel Oyamakin
Abstract:
The forest in Nigeria is currently estimated to extend to around 9.6 million hectares, but it used to expand over central and southern Nigeria decades ago. The forest estate is shrinking due to long-term human exploitation for agricultural development, fuel wood demand, uncontrolled forest harvesting, and urbanization, amongst other factors, compounded by population growth in rural areas. Nigeria has lost more than 50% of its forest cover since 1990, and currently less than 10% of the country is forested. The current deforestation rate is estimated at 3.7%, which is one of the highest in the world. Reducing Emissions from Deforestation and forest Degradation, plus conservation, sustainable management of forests, and enhancement of forest carbon stocks, constitutes what is referred to as REDD+. This study evaluated some of the existing ways of computing carbon stocks using eight indigenous tree species: Mansonia, Shorea, Bombax, Terminalia superba, Khaya grandifoliola, Khaya senegalensis, pines, and Gmelina arborea. While these components are the essential elements of the REDD+ programme, they can be brought under a broader framework of systems analysis designed to arrive at optimal solutions for future predictions through the statistical distribution pattern of carbon sequestered by various tree species. Available data on the height and diameter of trees in Ibadan were studied, their respective carbon sequestration potentials were assessed, and the data were subjected to tests to determine the statistical distribution that best describes the carbon sequestration pattern of the trees. The results of this study suggest a reasonable statistical distribution for carbon sequestered in simulation studies and hence allow planners and government to determine resource forecasts for sustainable development, especially where experiments with real-life systems are infeasible.
Sustainable management of forests can then be achieved by projecting future conditions of forests under different management regimes, thereby supporting conservation and REDD+ programmes in Nigeria.
Keywords: REDD+, carbon, climate change, height and diameter
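The distribution-fitting step can be sketched for one candidate family. Here a lognormal is fitted to hypothetical per-tree carbon values by estimating the mean and standard deviation of the log-values (which are also the maximum-likelihood estimates for a lognormal); the abstract does not say which distribution was selected, and the species data are not reproduced here:

```python
import math

def fit_lognormal(samples):
    """Fit a lognormal distribution to per-tree sequestered-carbon values
    by estimating (mu, sigma) of log(X) - the ML estimates for a lognormal."""
    logs = [math.log(x) for x in samples]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((v - mu) ** 2 for v in logs) / n
    return mu, math.sqrt(var)

# Hypothetical carbon stocks (kg C per tree) for a small plot
carbon = [120.0, 95.0, 210.0, 160.0, 310.0, 80.0, 140.0, 190.0]
mu, sigma = fit_lognormal(carbon)
median_kg = math.exp(mu)  # lognormal median: a robust "typical tree" value
print(round(median_kg, 1))
```

Once a distribution is fitted per species, stand-level carbon can be simulated by drawing trees from it, which is the "experiments with real-life systems are infeasible" use case the abstract points to.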
Procedia PDF Downloads 166
236 A Comprehensive Approach to Scour Depth Estimation Through HEC-RAS 2D and Physical Modeling
Authors: Ashvinie Thembiliyagoda, Kasun De Silva, Nimal Wijayaratna
Abstract:
The lowering of the riverbed level as a result of water erosion is termed scouring. This phenomenon severely undermines the stability of a bridge pier, posing a threat of failure or collapse. The formation of vortices in the vicinity of bridge piers, due to the obstruction they present to the river flow, is the main mechanism behind scouring. Scouring is aggravated by factors including high flow rates, bridge pier geometry, sediment configuration, etc. Tackling scour-related problems once they become severe is more costly and disruptive than implementing preventive measures based on predicted scour depths. This paper presents a comprehensive investigation into the development of a numerical model that can reproduce the scouring effect around bridge piers and estimate the scour depth. The numerical model was developed for one selected bridge in Sri Lanka, the Kelanisiri Bridge. The HEC-RAS two-dimensional (2D) modeling approach was utilized for the development of the model, which was calibrated and validated with field data. To further enhance the reliability of the model, a physical model was developed, allowing for additional validation. Results from the numerical model were compared with those obtained from the physical model, revealing a strong correlation between the two methods and confirming the numerical model's accuracy in predicting scour depths. The findings from this study underscore the ability of the HEC-RAS two-dimensional modeling approach to estimate scour depth around bridge piers. The developed model is able to estimate the scour depth under varying flow conditions, and its flexibility allows it to be adapted for application to other bridges with similar hydraulic and geomorphological conditions, providing a robust tool for widespread use in scour estimation.
The developed two-dimensional model not only offers reliable predictions for the case study bridge but also holds significant potential for broader implementation, contributing to the improved design and maintenance of bridge structures in diverse environments.
Keywords: piers, scouring, HEC-RAS, physical model
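For reference, HEC-RAS bases its pier scour computations on the HEC-18 (CSU) equation. The sketch below uses illustrative inputs, not the calibrated 2D model or the Kelanisiri survey data, and omits the armoring factor K4 (taken as 1):

```python
import math

def csu_pier_scour(y1, v1, a, k1=1.0, k2=1.0, k3=1.1):
    """HEC-18 (CSU) equilibrium pier scour depth, SI units:
    ys = 2.0 * K1 * K2 * K3 * y1 * (a/y1)^0.65 * Fr^0.43
    y1: approach flow depth (m), v1: approach velocity (m/s),
    a: pier width (m); k1..k3 are shape, attack-angle, and
    bed-condition correction factors (K4 armoring omitted)."""
    g = 9.81
    fr = v1 / math.sqrt(g * y1)  # approach-flow Froude number
    return 2.0 * k1 * k2 * k3 * y1 * (a / y1) ** 0.65 * fr ** 0.43

# Illustrative pass: 3 m flow depth, 1.5 m/s velocity, 1.2 m wide pier
print(round(csu_pier_scour(y1=3.0, v1=1.5, a=1.2), 2))
```

An empirical formula like this gives a quick conservative envelope; the value of the paper's calibrated 2D and physical models is in refining such estimates for a specific site's hydraulics and sediment.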
Procedia PDF Downloads 13
235 On Cold Roll Bonding of Polymeric Films
Authors: Nikhil Padhye
Abstract:
Recently, a new phenomenon for bonding polymeric films in the solid state, at ambient temperatures well below the glass transition temperature of the polymer, has been reported. This is achieved by bulk plastic compression of polymeric films held in contact. Here we analyze the process of cold-rolling of polymeric films via finite element simulations and illustrate a flexible and modular experimental rolling apparatus that can achieve bonding of polymeric films through cold-rolling. Firstly, the classical theory of rolling a rigid-plastic thin strip is utilized to estimate various deformation fields such as strain rates, velocities, loads, etc. in rolling the polymeric films at the specified feed rates and desired levels of thickness reduction. The predicted magnitudes of slow strain rates, particularly at ambient temperatures during rolling, and the moderate levels of plastic deformation (at which the Bauschinger effect can be neglected for the particular class of polymeric materials studied here) greatly simplify the task of material modeling and allow us to deploy a computationally efficient, yet accurate, finite-deformation rate-independent elastic-plastic material model (with isotropic hardening) for analyzing the rolling of these polymeric films. The interfacial behavior between the roller and polymer surfaces is modeled using Coulombic friction, consistent with the rate-independent behavior. The finite-deformation elastic-plastic material models based on (i) the additive decomposition of the stretching tensor (D = De + Dp, i.e. a hypoelastic formulation) with incrementally objective time integration and (ii) the multiplicative decomposition of the deformation gradient (F = FeFp) into elastic and plastic parts are implemented for cold-rolling within ABAQUS/Explicit. Predictions from both formulations, i.e., hypoelastic and multiplicative decomposition, exhibit a close match.
We find that no specialized hyperelastic/viscoplastic model is required to describe the behavior of the blend of polymeric films under the conditions described here, thereby speeding up the computation process.
Keywords: polymer plasticity, bonding, deformation induced mobility, rolling
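The rigid-plastic thin-strip estimates mentioned above can be illustrated with the classical contact-length and rolling-load approximations (friction-hill effects neglected; the pass parameters below are hypothetical, not the paper's experimental settings):

```python
import math

def rolling_load_estimate(roll_radius_m, h_in_m, h_out_m, yield_mpa, width_m):
    """Rigid-plastic thin-strip estimate of rolling load:
    projected contact length L = sqrt(R * dh), plane-strain flow
    stress (2/sqrt(3)) * sigma_y; the friction hill is neglected,
    so this is a lower-bound-style estimate."""
    dh = h_in_m - h_out_m                                 # draft
    contact_len = math.sqrt(roll_radius_m * dh)           # m
    flow_stress = (2.0 / math.sqrt(3.0)) * yield_mpa * 1e6  # Pa
    return flow_stress * contact_len * width_m            # N

# Hypothetical pass: 50 mm roll radius, 1.0 -> 0.8 mm film,
# 40 MPa polymer yield stress, 100 mm wide strip
load_n = rolling_load_estimate(0.05, 1.0e-3, 0.8e-3, 40.0, 0.1)
print(round(load_n, 1))
```

Estimates of this kind are what allow the deformation fields to be bracketed before committing to the full finite element runs in ABAQUS/Explicit.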
Procedia PDF Downloads 189
234 The Application of Collision Damage Analysis in Reconstruction of Sedan-Scooter Accidents
Authors: Chun-Liang Wu, Kai-Ping Shaw, Cheng-Ping Yu, Wu-Chien Chien, Hsiao-Ting Chen, Shao-Huang Wu
Abstract:
Objective: This study analyzed three criminal judicial cases. We applied damage analysis of the two vehicles in each case to verify other evidence, such as the dashboard camera records of each accident, reconstruct the scenes, and pursue the truth. Methods: Evidence analysis is the method of collecting the evidence and the reasoning behind the findings in judicial procedures, then analyzing the damage evidence involved to verify the other evidence. The collision damage analysis method is to inspect the damage to the vehicles and utilize the principles of tool mark analysis, Newtonian physics, and vehicle structure to understand the relevant factors of the collision. Results: Case 1: Sedan A turned right at a T junction and collided with Scooter B, which was going straight on the left road. The dashboard camera record showed that the left side of Sedan A's front bumper collided with the body of Scooter B and rider B. After the analysis, the truth was that the front left side of Sedan A impacted the right pedal of Scooter B and the right lower limb of rider B. Case 2: Sedan C collided with Scooter D on the left road at a crossroads. The dashboard camera record showed that the left side of Sedan C's front bumper collided with the body of Scooter D and rider D. After the analysis, the truth was that the left side of Sedan C impacted the left side of the body and the front wheel of Scooter D and rider D. Case 3: Sedan E collided with Scooter F on the right road at a crossroads. The dashboard camera record showed that the right side of Sedan E's front bumper collided with the body of Scooter F and rider F. After the analysis, the truth was that the right side of the front bumper and the right side of Sedan E impacted Scooter F. Conclusion: The application of collision damage analysis in the reconstruction of a sedan-scooter collision can uncover the truth and provide a basis for judicial justice.
The cases and methods could serve as a reference for road safety policy.
Keywords: evidence analysis, collision damage analysis, accident reconstruction, sedan-scooter collision, dashboard camera records
Procedia PDF Downloads 78
233 BiFormerDTA: Structural Embedding of Protein in Drug Target Affinity Prediction Using BiFormer
Authors: Leila Baghaarabani, Parvin Razzaghi, Mennatolla Magdy Mostafa, Ahmad Albaqsami, Al Warith Al Rushaidi, Masoud Al Rawahi
Abstract:
Predicting the interaction between drugs and their molecular targets is pivotal for advancing drug development processes. Due to time and cost limitations, computational approaches have emerged as an effective alternative for drug-target interaction (DTI) prediction. Most of the computational approaches introduced so far utilize the drug molecule and protein sequence as input. This study not only utilizes these inputs but also introduces a protein representation developed using a masked protein language model. In this representation, for every individual amino acid residue within the protein sequence, there exists a corresponding probability distribution that indicates the likelihood of each amino acid being present at that particular position. The similarity between each pair of residues is then computed to create a similarity matrix. To encode the knowledge of the similarity matrix, Bi-Level Routing Attention (BiFormer) is utilized, which combines aspects of transformer-based models with protein sequence analysis and represents a significant advancement in the field of drug-protein interaction prediction. BiFormer has the ability to pinpoint the regions of the protein sequence most responsible for facilitating interactions between the protein and drugs, thereby enhancing the understanding of these critical interactions. It thus appears promising in its ability to capture the local structural relationships of proteins and how they contribute to drug-protein interactions, thereby facilitating more accurate predictions. To evaluate the proposed method, it was tested on two widely recognized datasets: Davis and KIBA.
A comprehensive series of experiments was conducted to illustrate its effectiveness in comparison to cutting-edge techniques. Keywords: BiFormer, transformer, protein language processing, self-attention mechanism, binding affinity, drug target interaction, similarity matrix, protein masked representation, protein language model
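The similarity-matrix step described above can be sketched in a few lines. The abstract does not name the similarity measure, so this sketch assumes cosine similarity between the per-residue probability distributions, over a toy three-letter alphabet rather than the full 20 amino acids:

```python
import math

def cosine(p, q):
    """Cosine similarity between two probability vectors."""
    dot = sum(a * b for a, b in zip(p, q))
    norm_p = math.sqrt(sum(a * a for a in p))
    norm_q = math.sqrt(sum(b * b for b in q))
    return dot / (norm_p * norm_q)

def similarity_matrix(dists):
    """Pairwise similarity between per-residue amino-acid distributions."""
    n = len(dists)
    return [[cosine(dists[i], dists[j]) for j in range(n)] for i in range(n)]

# toy per-position distributions over a 3-letter alphabet
dists = [[0.7, 0.2, 0.1], [0.1, 0.2, 0.7], [0.7, 0.2, 0.1]]
S = similarity_matrix(dists)  # positions 0 and 2 are identical, 0 and 1 differ
```

In the method itself this matrix would then be fed to the BiFormer attention mechanism; the toy alphabet and distributions here are purely illustrative.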
Procedia PDF Downloads 7
232 Calculational-Experimental Approach of Radiation Damage Parameters on VVER Equipment Evaluation
Authors: Pavel Borodkin, Nikolay Khrennikov, Azamat Gazetdinov
Abstract:
The problem of ensuring the integrity of VVER-type reactor equipment is now especially pressing in connection with the safety justification of NPP units and the extension of their service life to 60 years and more. This primarily concerns older units with VVER-440 and VVER-1000 reactors. The justification of VVER equipment integrity depends on the reliability of the estimate of the degree of equipment damage. One of the mandatory requirements ensuring the reliability of such estimates, and of VVER equipment lifetime evaluation, is the monitoring of the equipment's radiation loading parameters. In this connection, there is a need to justify the normative parameters used to estimate pressure vessel metal embrittlement, namely the fluence and fluence rate (FR) of fast neutrons above 0.5 MeV. From the point of view of regulatory practice, a comparison of displacement per atom (DPA) and fast neutron fluence (FNF) above 0.5 MeV is of practical concern. In accordance with the Russian regulatory rules, neutron fluence F(E > 0.5 MeV) is the radiation exposure parameter used in predicting steel embrittlement under neutron irradiation. However, DPA is a more physically legitimate measure of neutron damage in Fe-based materials. If the DPA distribution in reactor structures is more conservative than the neutron fluence, this should attract the attention of the regulatory authority. The purpose of this work was to show which radiation load parameters (fluence, DPA) on VVER equipment should be kept under control, and to give reasonable estimates of these parameters over the whole volume of the equipment. The second task was to give a conservative estimate of each parameter, including its uncertainty.
Results of recent investigations make it possible to test the conservatism of calculational predictions and, as shown in the paper, combining ex-vessel measured data with calculated data allows the assessment of unpredicted uncertainties that result from the specific unique features of individual VVER reactor equipment. Some results of these calculational-experimental investigations are presented in this paper. Keywords: equipment integrity, fluence, displacement per atom, nuclear power plant, neutron activation measurements, neutron transport calculations
Procedia PDF Downloads 156
231 Pipeline Integrity Management of Buried Oil and Gas Transmission Pipelines in Libya Through Corrosion Management
Authors: Iftikhar Ahmad
Abstract:
A buried pipeline is an underground structure buried at a certain depth in soil and surrounded by the soil medium. Pipelines have become the main mode of transporting oil and gas from production facilities to refineries and export terminals due to their low cost, fast construction, and large transportation capacity. Poor integrity is one of the major causes of leaks and accidents in oil and gas transmission pipelines. To ensure safe operation and to keep a pipeline in a fit-for-service condition, it is imperative to have an efficient and effective pipeline integrity management (PIM) system. The remaining life of the pipeline can also be extended in the most reliable, safe, and cost-effective manner by implementing effective pipeline integrity management, whose importance increases as the pipeline infrastructure continues to age. The pipelines in Libya, which are typically made of steel, are susceptible to corrosion, which can cause pipeline failure and significant safety and environmental hazards. To address corrosion in oil and gas pipelines, several corrosion management strategies can be employed, covering corrosion mitigation, monitoring, inspection, and risk evaluation. Libya is a North African country whose economy is based on the petroleum industry, and it has a large network of pipelines. This paper describes the pipeline integrity management system used in the Libyan oilfields to protect pipeline facilities, based on standard practices of corrosion mitigation and inspection. An effective integrity management program anticipates and mitigates or eliminates integrity issues before they lead to incidents or failures. Understanding the pipeline's integrity and threats in the context of the surrounding environment is key to making informed integrity management decisions.
The following elements are developed for the operational phase to ensure that adequate management practices are in place to assess failures and to manage and respond to emergencies: (a) a failure assessment plan; (b) an emergency response plan; and (c) a remaining life assessment plan. Fifty previously identified performance indicators were adopted to track the success of the corrosion control strategies used in the Libyan petroleum industry for its oil and gas transmission pipelines. Keywords: pipeline integrity management, buried pipeline integrity management, corrosion management in oil and gas pipelines, corrosion mitigation and inspection
Procedia PDF Downloads 13
230 Assessment of Time-variant Work Stress for Human Error Prevention
Authors: Hyeon-Kyo Lim, Tong-Il Jang, Yong-Hee Lee
Abstract:
For an operator in a nuclear power plant, human error is one of the most dreaded factors that may result in unexpected accidents. The probability of human error may be low, but its risk is enormous. Thus, for accident prevention, it is indispensable to analyze any factors that may raise the possibility of human error. Over the past decades, numerous studies have shown that the performance of human operators may vary over time due to many factors. Among them, stress is known to be an indirect factor that may cause human errors and lead to mental illness. Many assessment tools have been developed to assess the stress level of human workers. However, their use for anticipating human performance, which is related to human error probability, is questionable, because they were mainly developed from the viewpoint of mental health rather than industrial safety. A person's stress level may go up or down over work time; to be applicable in the safety context, such tools should at least be able to assess the variation resulting from work time. Therefore, this study compared their applicability for safety purposes. More than ten work stress tools were analyzed with reference to their assessment items, assessment and analysis methods, and follow-up measures, which are known to be factors closely related to work stress. The results showed that most tools concentrate their weights on a few common organizational factors such as demands, supports, and relationships, in that order, and their weights were broadly similar. However, they fail to recommend practical solutions; instead, they merely advise setting up overall counterplans within a PDCA cycle or risk management activities, which is far from practical human error prevention.
Thus, it was concluded that applying stress assessment tools developed mainly for mental health is impractical for safety purposes with respect to anticipating human performance, and that developing a new assessment tool is inevitable if one wants to assess stress level in terms of human performance variation and accident prevention. As a practical counterplan, this study proposed a new scheme for assessing the work stress level of a human operator as it varies over work time, which is closely related to the possibility of human error. Keywords: human error, human performance, work stress, assessment tool, time-variant, accident prevention
Procedia PDF Downloads 670
229 The Impact of Informal Care on Health Behavior among Older People with Chronic Diseases: A Study in China Using Propensity Score Matching
Abstract:
Improving health behavior among people with chronic diseases is vital for increasing longevity and enhancing quality of life. This paper investigated the causal effects of informal care on compliance with doctors' health advice (smoking control, dietary regulation, weight control, and keeping exercising) among older people with chronic diseases in China, a country facing the challenge of aging. We addressed selection bias by using propensity score matching in the estimation process, working with the 2011-2012 national baseline data of the China Health and Retirement Longitudinal Study. Our results showed that informal care can help improve the health behavior of older people. First, informal care improved compliance with smoking control: smoking status, smoking frequency, and the time lag between waking up and the first cigarette were all lower for older people with informal care. Second, for dietary regulation, older people with informal care had more meals every day than those without. Third, three variables were used to measure the outcome of weight control: BMI, whether weight was gained, and whether weight was lost. There was no significant difference between the groups with and without informal care in BMI or in the probability of losing weight, but older people with informal care had a lower probability of gaining weight. Last, for the advice of keeping exercising, informal care increased the probability of walking exercise, although the differences between groups for moderate and vigorous exercise were not significant. Our results indicate that policy makers who aim to decrease accidents should take informal care for elders into account and provide an appropriate policy to meet the demand for informal care. China's birth policy and postponed retirement policy may decrease informal caregiving hours, so adjustments to these policies are important and urgent given the current aging of the population.
In addition, the government could give more support to developing organizations that provide formal care, such as nursing homes. We infer that formal care is also useful for health behavior improvement. Keywords: chronic diseases, compliance, CHARLS, health advice, informal care, older people, propensity score matching
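The matching step can be illustrated with a minimal sketch: nearest-neighbour matching on pre-computed propensity scores, followed by an average of the outcome differences (a simple estimate of the treatment effect on the treated). The study's actual scores, matching algorithm, and variables are not given in the abstract; all numbers below are purely illustrative:

```python
def nearest_neighbor_att(treated, control):
    """treated/control: lists of (propensity_score, outcome).
    Match each treated unit to the control unit with the closest
    propensity score, then average the outcome differences."""
    diffs = []
    for ps_t, y_t in treated:
        # control unit with the closest propensity score
        ps_c, y_c = min(control, key=lambda c: abs(c[0] - ps_t))
        diffs.append(y_t - y_c)
    return sum(diffs) / len(diffs)

# toy data: outcome 1 = complies with the doctor's advice, 0 = does not
treated = [(0.80, 1), (0.60, 1), (0.50, 0)]   # receives informal care
control = [(0.79, 0), (0.55, 1), (0.30, 0)]   # no informal care
att = nearest_neighbor_att(treated, control)
```

Real applications would also estimate the propensity scores (e.g. by logistic regression on covariates) and check covariate balance after matching; both are omitted here for brevity.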
Procedia PDF Downloads 405
228 Neural Network and Support Vector Machine for Prediction of Foot Disorders Based on Foot Analysis
Authors: Monireh Ahmadi Bani, Adel Khorramrouz, Lalenoor Morvarid, Bagheri Mahtab
Abstract:
Background: Foot disorders are common musculoskeletal problems. Plantar pressure distribution measurement is one of the most important parts of foot disorder diagnosis for quantitative analysis. However, the association between plantar pressure and foot disorders is not clear. With the growth of datasets and machine learning methods, the relationship between foot disorders and plantar pressures can be detected. Significance of the study: The purpose of this study was to predict the probability of common foot disorders based on peak plantar pressure distribution and center of pressure during walking. Methodologies: 2323 participants were assessed in a foot therapy clinic between 2015 and 2021. Foot disorders were diagnosed by an experienced physician, and participants were then asked to walk on a force plate scanner. After data preprocessing, because of differences in walking time and foot size, we normalized the samples by time and foot size. Selected force plate variables served as input to a deep neural network (DNN), and the probability of each foot disorder was estimated. In the next step, we used a support vector machine (SVM) and ran the dataset for each foot disorder (a yes/no classification). We compared the DNN and the SVM for predicting foot disorders from plantar pressure distributions and center of pressure. Findings: The results demonstrated that the accuracy of the deep learning architecture is sufficient for most clinical and research applications in the study population. In addition, the SVM approach was more accurate, enabling applications in foot disorder diagnosis. The detection accuracy was 71% for the deep learning algorithm and 78% for the SVM algorithm. Moreover, models built on the peak plantar pressure distribution were more accurate than those built on the center of pressure dataset.
Conclusion: Both algorithms, deep learning and SVM, will help therapists and patients to improve the data pool and enhance foot disorder prediction with less expense and error, once some restrictions are properly removed. Keywords: deep neural network, foot disorder, plantar pressure, support vector machine
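The time-normalization step in the methodology (making walks of different duration comparable) can be sketched as a linear resampling of a pressure trace to a fixed number of points; the actual normalization used in the study is not detailed in the abstract:

```python
def resample(trace, n):
    """Linearly resample a pressure trace to n points so that
    recordings of different durations become directly comparable."""
    m = len(trace)
    out = []
    for k in range(n):
        pos = k * (m - 1) / (n - 1)      # fractional index in the original trace
        i = int(pos)
        frac = pos - i
        nxt = trace[min(i + 1, m - 1)]   # clamp at the last sample
        out.append(trace[i] * (1 - frac) + nxt * frac)
    return out

# a 5-sample stance-phase trace resampled to 3 points
r = resample([0.0, 1.0, 2.0, 3.0, 4.0], 3)
```

Foot-size normalization would be an analogous rescaling of the spatial axes; only the temporal case is shown here.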
Procedia PDF Downloads 357
227 Machine Learning Techniques to Predict Cyberbullying and Improve Social Work Interventions
Authors: Oscar E. Cariceo, Claudia V. Casal
Abstract:
Machine learning offers a set of techniques to promote social work interventions and can support practitioners' decisions by predicting new behaviors from data produced by organizations, service agencies, users, clients, or individuals. Machine learning techniques comprise a set of generalizable, data-driven algorithms, meaning that rules and solutions are derived by examining the patterns present within a data set. In other words, the goal of machine learning is to teach computers through 'examples', using training data to test specific hypotheses and predict an outcome in a given scenario, and to improve on that experience. Machine learning can be classified into general categories depending on the nature of the problem to be tackled. Supervised learning involves a dataset whose outputs are already known; supervised learning problems are categorized into regression problems, which predict quantitative variables using a continuous function, and classification problems, which predict discrete qualitative variables. For social work research, machine learning generates predictions as a key element in improving interventions on complex social issues by providing better inference from data and more precise estimated effects, for example in services that seek to improve their outcomes. This paper presents the results of a classification algorithm to predict cyberbullying among adolescents. Data were retrieved from the National Polyvictimization Survey conducted by the government of Chile in 2017. A logistic regression model was created to predict whether an adolescent would experience cyberbullying based on gender, age, grade, type of school, and self-esteem sentiments.
The model predicts with an accuracy of 59.8% whether an adolescent will suffer cyberbullying. These results can help to promote programs to prevent cyberbullying at schools and improve evidence-based practice. Keywords: cyberbullying, evidence based practice, machine learning, social work research
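A logistic regression prediction of this kind can be sketched as follows. The coefficients and intercept below are hypothetical placeholders, not the values fitted on the Chilean survey data:

```python
import math

def predict_proba(x, coef, intercept):
    """Probability from a logistic regression model:
    p = sigmoid(intercept + coef . x)."""
    z = intercept + sum(c * v for c, v in zip(coef, x))
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical coefficients for (gender, age, grade, school type, self-esteem)
coef = [0.4, 0.05, -0.1, 0.2, -0.6]
x = [1, 14, 8, 0, 1]          # one illustrative adolescent
p = predict_proba(x, coef, -1.0)
label = int(p >= 0.5)          # 1 = predicted to experience cyberbullying
```

In practice the coefficients would come from a fitting routine (e.g. maximum likelihood) and the 0.5 threshold could be tuned against the survey's class balance.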
Procedia PDF Downloads 168
226 Understanding the Productivity Effect on Industrial Management: The Portuguese Wood Furniture Industry Case Study
Authors: Jonas A. R. H. Lima, Maria Antonia Carravilla
Abstract:
As productivity concepts are widely related to industrial savings, it is becoming particularly important, in an ever more competitive world, to understand how productivity can be put to use in industrial management. Consumers are no longer willing to pay for mistakes and inefficiencies, so one way for companies to stay competitive is to control and increase their productivity. This study aims to define the productivity concept clearly, understand how a company can affect productivity, and, if possible, identify the relations between the identified productivity factors. This will help managers by clarifying the main issues behind productivity concepts and by proposing a methodology to measure, control, and increase productivity. The main questions to be answered are: What is the importance of productivity for the Portuguese wood furniture industry? Is it possible to control productivity internally, or is it a phenomenon external to companies, hard or even impossible to control? How can productivity performance be understood, controlled, and adjusted? How can productivity become a main asset for maximizing the use of the available resources? This essay follows a constructive approach mostly based on the research hypotheses mentioned above. A literature review is being done to find the main existing conceptual frameworks and empirical studies and thereby highlight gaps in knowledge or conflicting research to be addressed in this work. We expect to build theoretical explanations and test theoretical predictions against participants' understandings and experiences by conducting field surveys and interviews, selecting adjusted productivity indicators, and analyzing the evolution of productivity as other variables are adjusted. We intend to conduct exploratory work that can simultaneously clarify productivity concepts and objectives and define frameworks.
This investigation intends to move from merely academic concepts to the daily operational reality of companies in the Portuguese wood furniture industry, highlighting the increased importance of productivity within modern engineering and industrial management. The ambition is to clarify, systematize, and develop a management tool that may not only control but positively influence the way resources are used. Keywords: industrial management, motivation, productivity, performance indicators, reward management, wood furniture industry
Procedia PDF Downloads 229
225 Phenology and Size in the Social Sweat Bee, Halictus ligatus, in an Urban Environment
Authors: Rachel A. Brant, Grace E. Kenny, Paige A. Muñiz, Gerardo R. Camilo
Abstract:
The social sweat bee, Halictus ligatus, has been documented to alter its phenology in response to changes in the temporal dynamics of resources. Furthermore, H. ligatus exhibits polyethism in natural environments as a consequence of variation in resources. Yet we do not know if or how H. ligatus responds to these variations in urban environments. As urban environments become more widespread, and with the human population expected to reach nine billion by 2050, it is crucial to determine how resources are allocated by bees in cities. We hypothesize that in urban regions, where floral availability varies with human activity, H. ligatus will exhibit polyethism in order to match the extremely localized spatial variability of resources. We predict that in an urban setting, where resources vary both spatially and temporally, the phenology of H. ligatus will shift in response to these fluctuations. This study was conducted in Saint Louis, Missouri, at fifteen sites varying in size and management type (community garden, urban farm, prairie restoration). Bees were collected by hand netting from 2013-2016. Results suggest that the largest individuals, mostly gynes, occurred in lower-income neighborhood community gardens in May and August. We used a model averaging procedure, based on information-theoretic methods, to determine the best model for predicting bee size. Our results suggest that month and locality within the city are the best predictors of bee size. Halictus ligatus was observed to comply with the predictions of polyethism from 2013 to 2015. However, in 2016 there was an almost complete absence of the smallest worker castes, a significant deviation from what is expected under polyethism. This could be attributed to shifts in planting decisions, shifts in plant-pollinator matches, or local climatic conditions.
Further research is needed to determine whether this divergence from polyethism is a new strategy for the social sweat bee as the climate continues to change, or a response to human-dominated landscapes. Keywords: polyethism, urban environment, phenology, social sweat bee
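Model-averaging procedures "based on information-theoretic methods" typically rely on Akaike weights, w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2) with Δ_i = AIC_i - min AIC. A minimal sketch, with hypothetical AIC values rather than the study's fitted models:

```python
import math

def akaike_weights(aics):
    """Akaike weights: relative support for each candidate model."""
    amin = min(aics)
    rel = [math.exp(-(a - amin) / 2.0) for a in aics]  # relative likelihoods
    total = sum(rel)
    return [r / total for r in rel]

# hypothetical AIC values for three candidate models of bee size
w = akaike_weights([100.0, 102.0, 110.0])
```

Averaged predictions are then formed by weighting each model's prediction by its w_i; here the first model, with the lowest AIC, receives most of the weight.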
Procedia PDF Downloads 219
224 Inhibition of Echis ocellatus Venom Metalloprotease by Flavonoid-Rich Ethyl Acetate Sub-fraction of Moringa oleifera Leaves (Lam.): in vitro and in silico Approaches
Authors: Adeyi Akindele Oluwatosin, Mustapha Kaosarat Keji, Ajisebiola Babafemi Siji, Adeyi Olubisi Esther, Damilohun Samuel Metibemu, Raphael Emuebie Okonji
Abstract:
Envenoming by Echis ocellatus is potentially life-threatening due to severe hemorrhage, renal failure, and capillary leakage, effects attributed to snake venom metalloproteinases (SVMPs). Because of drawbacks in the use of antivenom, natural inhibitors from plants are of interest in the search for new antivenom treatments. The antagonizing effects of bioactive compounds of Moringa oleifera, a known antisnake plant, had yet to be tested against the SVMPs of E. ocellatus (SVMP-EO). The crude ethanol extract of M. oleifera was partitioned using n-hexane and ethyl acetate, and each partition was fractionated by column chromatography and tested against SVMP-EO purified through ion-exchange chromatography, with EchiTab-PLUS polyvalent antivenom as control. Phytoconstituents of the ethyl acetate fraction were screened against the catalytic site of the BaP1-SVMP crystal structure, and the drug-likeness and ADMET toxicity of the compounds were also determined. The molecular weight of the isolated SVMP-EO was 43.28 kDa, with a specific activity of 245 U/ml, a percentage yield of 62.83%, and a purification fold of 0.920. The Vmax and Km values are 2 mg/ml and 38.095 μmol/ml/min, respectively, while the optimal pH and temperature are 6.0 and 40 °C, respectively. The polyvalent antivenom, the crude extract, and the ethyl acetate fraction of M. oleifera all completely inhibited SVMP-EO activity. The inhibition of the P-I and P-II metalloproteinases by the ethyl acetate fraction is largely due to methanol, 6, 8, 9-trimethyl-4-(2-phenylethyl)-3-oxabicyclo[3.3.1]non-6-en-1-yl)- and paroxypropione, respectively. Both compounds are potential drug candidates with little or no toxicity concern, as revealed by the in-silico predictions. The inhibitory effects suggest that these compounds might be therapeutic candidates for further exploration in the treatment of E. ocellatus envenoming. Keywords: Echis ocellatus, Moringa oleifera, anti-venom, metalloproteases, snakebite, molecular docking
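The kinetic parameters reported above follow the Michaelis-Menten relation, v = Vmax·[S] / (Km + [S]). A minimal sketch, using illustrative values rather than asserting the abstract's units, shows the defining property that the rate at [S] = Km equals Vmax/2:

```python
def michaelis_menten(s, vmax, km):
    """Michaelis-Menten rate: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

# illustrative parameter values (units deliberately left abstract)
vmax, km = 38.095, 2.0
v_half = michaelis_menten(km, vmax, km)   # substrate concentration at Km
```

Fitting Vmax and Km from rate measurements would typically use a Lineweaver-Burk linearization or nonlinear least squares; only the forward rate equation is shown here.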
Procedia PDF Downloads 149
223 Machine Learning in Agriculture: A Brief Review
Authors: Aishi Kundu, Elhan Raza
Abstract:
"Necessity is the mother of invention": the rapid increase in the global human population has directed the agricultural domain toward machine learning. The basic need of human beings is food, which is satisfied through farming. Farming is one of the major revenue generators for the Indian economy; agriculture is not only a source of employment but also fulfils humans' basic needs, making it a pillar of the economy in developing countries like India. This paper provides a brief review of the progress made in implementing machine learning in the agricultural sector. Accurate predictions are necessary at the right time to boost production and to aid the timely and systematic distribution of agricultural commodities, making their availability in the market faster and more effective. The paper includes a thorough analysis of machine learning algorithms applied in different aspects of agriculture (crop management, soil management, water management, yield tracking, livestock management, etc.). Crop production is affected by climate change; machine learning can analyse the changing patterns and come up with suitable approaches to minimize loss and maximize yield. Machine learning models (regression, support vector machines, Bayesian models, artificial neural networks, decision trees, etc.) are used in smart agriculture, including on sensor data, to analyze and predict specific outcomes, which can be vital in increasing the productivity of the agricultural food industry. Machine learning is an ongoing technology that helps farmers improve gains in agriculture and minimize losses. This paper discusses how irrigation and farming management systems evolve efficiently in real time.
Artificial Intelligence (AI)-enabled programs are emerging that support farmers through extensive examination of data. Keywords: machine learning, artificial intelligence, crop management, precision farming, smart farming, pre-harvesting, harvesting, post-harvesting
Procedia PDF Downloads 104
222 Prediction Model of Body Mass Index of Young Adult Students of Public Health Faculty of University of Indonesia
Authors: Yuwaratu Syafira, Wahyu K. Y. Putra, Kusharisupeni Djokosujono
Abstract:
Background/Objective: The Body Mass Index (BMI) serves various purposes, including measuring the prevalence of obesity in a population and formulating a patient's diet at a hospital, and is calculated as BMI = body weight (kg) / height (m)². However, the BMI of an individual who has difficulty bearing their weight or standing up straight cannot necessarily be measured this way. The aim of this study was to form a prediction model for the BMI of young adult students of the Public Health Faculty of the University of Indonesia. Subject/Method: This study used a cross-sectional design, with a total sample of 132 respondents, consisting of 58 males and 74 females aged 21-30. The dependent variable was BMI, and the independent variables were sex and anthropometric measurements, which included ulna length, arm length, tibia length, knee height, mid-upper arm circumference, and calf circumference. Anthropometric information was measured and recorded in a single sitting. Simple and multiple linear regression analyses were used to create the prediction equation for BMI. Results: The male respondents had an average BMI of 24.63 kg/m² and the female respondents an average of 22.52 kg/m². A total of 17 variables were analysed for correlation with BMI. Bivariate analysis showed that the variable with the strongest correlation with BMI was mid-upper arm circumference divided by the square root of ulna length (MUAC/√UL) (r = 0.926 for males and r = 0.886 for females). MUAC alone also has a very strong correlation with BMI (r = 0.913 for males and r = 0.877 for females). Prediction models formed from either MUAC/√UL or MUAC alone both produce highly accurate predictions of BMI. However, measuring MUAC/√UL is inconvenient, which may cause difficulties when applied in the field.
Conclusion: The prediction models considered most ideal for estimating BMI are: male BMI (kg/m²) = 1.109 × MUAC (cm) − 9.202 and female BMI (kg/m²) = 0.236 + 0.825 × MUAC (cm), based on their high accuracy and the convenience of measuring MUAC in the field. Keywords: body mass index, mid-upper arm circumference, prediction model, ulna length
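The two reported equations can be applied directly; a small sketch (the MUAC values used below are illustrative, not from the study sample):

```python
def predict_bmi(muac_cm, male):
    """BMI prediction from mid-upper arm circumference (MUAC),
    using the sex-specific equations reported in the abstract."""
    if male:
        return 1.109 * muac_cm - 9.202
    return 0.236 + 0.825 * muac_cm

bmi_m = predict_bmi(30.0, male=True)    # 1.109 * 30 - 9.202
bmi_f = predict_bmi(27.0, male=False)   # 0.236 + 0.825 * 27
```

Note the equations were fitted on students aged 21-30, so applying them outside that population would require revalidation.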
Procedia PDF Downloads 214
221 Effects of Active Muscle Contraction in a Car Occupant in Whiplash Injury
Authors: Nisha Nandlal Sharma, Julaluk Carmai, Saiprasit Koetniyom, Bernd Markert
Abstract:
Whiplash injuries are usually associated with car accidents. A sudden forward or backward jerk to the head causes neck strain resulting from damage to muscles or tendons. Neck pain and headaches are the two most common symptoms of whiplash. Symptoms of whiplash are commonly reported in studies, but the injury mechanism is poorly understood, and the neck muscles are the most important factor in studying neck injury. This study focuses on the development of a finite element (FE) model of the human neck musculature to study the whiplash injury mechanism and the effect of active muscle contraction on occupant kinematics. A detailed understanding of the injury mechanism will promote the development and evaluation of new safety systems in cars, reducing the occurrence of severe occupant injuries. In the present study, an active human finite element (FE) model with a 3D neck muscle model was developed. Neck muscle was modeled with a combination of solid tetrahedral elements and 1D beam elements: active muscle properties were represented by the beam elements and passive properties by the solid tetrahedral elements. To generate muscular force according to the input activation levels, a Hill-type muscle model was applied to the beam elements. To simulate the non-linear passive properties of muscle, the solid elements were modeled with a rubber/foam material model. Material properties were assigned from published experimental tests. Important muscles were then inserted into the THUMS (Total Human Model for Safety) 50th-percentile male pedestrian model. To reduce the simulation time required, the THUMS lower body parts were not included. After muscle insertion, THUMS was given boundary conditions similar to the experimental tests. The model was exposed to 4 g and 7 g rear impacts, as these loads are close to the low-speed impacts that cause whiplash.
The effect of muscle activation level on occupant kinematics during whiplash was analyzed. Keywords: finite element model, muscle activation, neck muscle, whiplash injury prevention
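A Hill-type muscle model of the kind applied to the beam elements computes an active force of the form F = a(t)·Fmax·fl(l)·fv(v). The Gaussian force-length and hyperbolic force-velocity curves below are common illustrative choices, not the specific curves or constants used in the study:

```python
import math

def hill_force(activation, f_max, l_norm, v_norm):
    """Active force of a Hill-type muscle element.
    activation: a(t) in [0, 1]; f_max: maximum isometric force;
    l_norm: fiber length / optimal length; v_norm: shortening
    velocity / maximum shortening velocity."""
    fl = math.exp(-((l_norm - 1.0) ** 2) / 0.45)            # peaks at optimal length
    if v_norm >= 0.0:                                        # shortening
        fv = (1.0 - v_norm) / (1.0 + 3.0 * v_norm)
    else:                                                    # lengthening (simplified)
        fv = 1.0
    return activation * f_max * fl * fv

# fully activated muscle at optimal length, isometric (v = 0)
f_iso = hill_force(1.0, 100.0, 1.0, 0.0)
```

In the FE model this scalar force would be applied along each muscle beam element at every time step, driven by the prescribed activation history.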
Procedia PDF Downloads 357
220 Artificial Neural Networks and Hidden Markov Model in Landslides Prediction
Authors: C. S. Subhashini, H. L. Premaratne
Abstract:
Landslides are the most recurrent and prominent disaster in Sri Lanka, which has been subjected to a number of extreme landslide disasters resulting in significant loss of life, material damage, and distress. Solutions for preparedness and mitigation are required to reduce the recurrent losses associated with landslides. Artificial Neural Networks (ANNs) and Hidden Markov Models (HMMs) are now widely used in many computer applications spanning multiple domains. This research examines the effectiveness of Artificial Neural Networks and Hidden Markov Models in landslide prediction and the possibility of applying this technology to predict landslides in a prominent geographical area of Sri Lanka. A thorough survey was conducted with the participation of resource persons from several national universities in Sri Lanka to identify and rank the factors influencing landslides. A landslide database was created using existing topographic, soil, drainage, and land cover maps and historical data. The landslide-related factors, which include external factors (rainfall and number of previous occurrences) and internal factors (soil material, geology, land use, curvature, soil texture, slope, aspect, soil drainage, and soil effective thickness), were extracted from the landslide database. These factors are used to recognize the possibility of landslide occurrence using an ANN and an HMM. Each model acquires the relationship between the landslide factors and the hazard index during the training session. The models, with the landslide-related factors as inputs, are trained to predict three classes: 'landslide occurs', 'landslide does not occur', and 'landslide likely to occur'. Once trained, the models are able to predict the most likely class for the prevailing data.
Finally, the two models were compared with regard to prediction accuracy, false acceptance rate, and false rejection rate. This research indicates that the Artificial Neural Network could be used as a strong decision support system to predict landslides more efficiently and effectively than the Hidden Markov Model. Keywords: landslides, influencing factors, neural network model, hidden markov model
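The final three-class decision can be sketched as a softmax over per-class scores followed by an argmax; the scores below are illustrative stand-ins for what the trained ANN (or HMM likelihoods) would produce from the landslide factors:

```python
import math

def classify(scores, labels):
    """Softmax over class scores; return (probabilities, best label)."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = labels[probs.index(max(probs))]
    return probs, best

labels = ["landslide occurs", "landslide does not occur",
          "landslide likely to occur"]
probs, best = classify([2.0, 0.1, 1.0], labels)  # illustrative class scores
```

The same decision rule applies to either model: the ANN would emit the scores directly, while the HMM would supply per-class (log-)likelihoods for the observed factor sequence.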
Procedia PDF Downloads 384
219 Tectogenesis Around Kalaat Es Senan, Northwest of Tunisia: Structural, Geophysical and Gravimetric Study
Authors: Amira Rjiba, Mohamed Ghanmi, Tahar Aifa, Achref Boulares
Abstract:
This study, involving the interpretation of geological outcrop data (structures and lithostratigraphic columns) and subsurface data (seismic and gravimetric), helps us (i) to identify and specify the lithology of the sedimentary formations from the Aptian to the recent formations, (ii) to differentiate these sedimentary formations from the salt-bearing Triassic, and (iii) to specify the major structures resulting from the tectonic events that affected the region during its geological evolution. Placing our study area in the context of Tunisia, on the southern margin of the Tethys, the tectonic traces and the structural analysis conducted show that this area underwent active rifting during the Triassic period, followed by extensional tectonic events in the Cretaceous and the Paleogene. Lithostratigraphic correlations between outcrop and seismic data sets, together with those of six oil wells drilled in the region, have allowed us to better understand the structural complexity and the role of the different tectonic faults that contributed to the current configuration, marked by the present rifts. Indeed, three fault directions, NW-SE, NNW-SSE to N-S, and NE-SW to E-W, played a major role in the genesis of the folds and of the NW-SE-trending open collapse grabens. These results were complemented by seismic reflection data to clarify the geometry of the southern and western areas of the Kalaa Khasba graben. The eight seismic lines selected for this study allowed the main structures to be characterized, with isochron, contour, and iso-velocity maps of the Serdj horizon, which constitutes the main reservoir in the region. Line L2, calibrated by well 6, helped highlight the NW-SE compression that resulted in persistent unconformities widely identifiable in its lithostratigraphic column. The gravity survey confirmed the extension of most of the deep subsurface faults, whose activity appears to be ongoing. 
Gravimetry also reinforced the seismic interpretation, confirming, at the L2 well, that the SW and NE flanks of the trough are two opposing faults tracing the boundaries of a NNW-SSE-trending graben whose sedimentary fill is of Mio-Pliocene and Quaternary age.
Keywords: graben, graben collapse, gravity, Kalat Es Senan, seismic, tectogenesis
Procedia PDF Downloads 367
218 Structural Health Monitoring Using Fibre Bragg Grating Sensors in Slab and Beams
Authors: Pierre van Tonder, Dinesh Muthoo, Kim Twiname
Abstract:
Many existing and newly built structures are constructed on the basis of the engineer's design and the workmanship of the construction company. However, for larger structures where more people are exposed to the building, structural integrity is of great importance for the safety of its occupants (Raghu, 2013). But how can the structural integrity of a building be monitored efficiently and effectively? This is where the fourth industrial revolution steps in: with minimal human interaction, data can be collected, analysed, and stored, and any inconsistencies found in the collected data can be flagged. This is where the fibre Bragg grating (FBG) monitoring system is introduced. This paper illustrates how data can be collected and converted to develop stress-strain behaviour and to produce bending moment diagrams for assessing and predicting the structure's integrity. Embedded fibre optic sensors, fibre Bragg grating sensors in particular, were used in this study. The procedure entailed making use of the wavelength-shift demodulation technique and an inscription process based on the phase mask technique. The fibre optic sensors considered in this report were photosensitive and embedded in the slab and beams for data collection and analysis. Two sets of fibre cables were inserted: one purposely to collect temperature recordings and the other to collect strain and temperature. The data was collected over a period of time and analysed to produce bending moment diagrams and predictions of the structure's integrity. The data indicated that the fibre Bragg grating sensing system is useful and can be applied to structural health monitoring in any environment. 
From the experimental data for the slab and beams, the moments were found to be 64.33 kN.m, 64.35 kN.m, and 45.20 kN.m (from the experimental bending moment diagram), whereas the idealistic (Ultimate Limit State) values were 133 kN.m and 226.2 kN.m. The difference in values gives room for an early warning system; in other words, a reserve capacity of approximately 50% to failure.
Keywords: fibre Bragg grating, structural health monitoring, fibre optic sensors, beams
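The two conversions the abstract relies on — turning a Bragg wavelength shift into strain, and checking the measured moment against the Ultimate Limit State — can be sketched as follows. The photo-elastic coefficient pe ≈ 0.22 and the example wavelength values are typical assumed figures for silica fibre, not numbers from the paper; the moment values are the ones quoted above:

```python
def strain_from_shift(lambda0_nm, dlambda_pm, pe=0.22):
    """Strain from a Bragg wavelength shift, temperature term removed
    (e.g. compensated via the dedicated temperature fibre):
        dlambda / lambda0 = (1 - pe) * strain
    pe ~ 0.22 is a typical photo-elastic coefficient for silica fibre."""
    return (dlambda_pm * 1e-12) / (lambda0_nm * 1e-9 * (1.0 - pe))

def reserve_capacity(m_measured_knm, m_uls_knm):
    """Fraction of the Ultimate Limit State moment still in reserve."""
    return 1.0 - m_measured_knm / m_uls_knm

eps = strain_from_shift(1550.0, 1.2)   # ~1 microstrain for a 1.2 pm shift
res = reserve_capacity(64.35, 133.0)   # ~0.52, the ~50% reserve quoted above
```

The second fibre set measuring only temperature is what justifies dropping the temperature term in the first function.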
Procedia PDF Downloads 139
217 Neural Network Mechanisms Underlying the Combination Sensitivity Property in the HVC of Songbirds
Authors: Zeina Merabi, Arij Dao
Abstract:
The temporal order of information processing in the brain is an important code in many acoustic signals, including speech, music, and animal vocalizations. Despite its significance, surprisingly little is known about its underlying cellular mechanisms and network manifestations. In the songbird telencephalic nucleus HVC, a subset of neurons shows temporal combination sensitivity (TCS). These neurons show high temporal specificity, responding differently to distinct patterns of spectral elements and their combinations. HVC neuron types include basal-ganglia-projecting HVCX, forebrain-projecting HVCRA, and interneurons (HVCINT), each exhibiting distinct cellular, electrophysiological, and functional properties. In this work, we develop conductance-based neural network models connecting the different classes of HVC neurons via different wiring scenarios, aiming to explore the neural mechanisms that may orchestrate the combination sensitivity exhibited by HVCX neurons, and to replicate the in vivo firing patterns observed when TCS neurons are presented with various auditory stimuli. The ionic and synaptic currents of each class of neurons represented in our networks are based on pharmacological studies, rendering the networks biologically plausible. We present, for the first time, several realistic scenarios in which the different types of HVC neurons can interact to produce this behavior. The different networks highlight neural mechanisms that could help explain aspects of combination sensitivity, including 1) the interplay between inhibitory interneuron activity and the post-inhibitory firing of HVCX neurons enabled by T-type Ca2+ and H currents, 2) temporal summation, at the TCS site, of opposing synaptic inputs that are time- and frequency-dependent, and 3) reciprocal inhibitory and excitatory loops as a potent mechanism to encode information over many milliseconds. 
The result is a plausible network model characterizing auditory processing in HVC. Our next step is to test the predictions of the model.
Keywords: combination sensitivity, songbirds, neural networks, spatiotemporal integration
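Mechanism 2 above, time-dependent temporal summation, can be illustrated with a minimal leaky-integrator sketch. All parameters here are illustrative assumptions, far simpler than the conductance-based models used in the study: two individually subthreshold inputs trigger a spike only when they arrive close enough in time for their effects to sum:

```python
import math

def lif_response(input_times_ms, tau_ms=20.0, w=0.6, v_th=1.0,
                 t_end_ms=100.0, dt_ms=1.0):
    """Leaky integrate-and-fire: each input adds a fixed subthreshold
    jump w; the membrane decays with time constant tau_ms in between.
    Returns the list of spike times in ms."""
    v, spikes, t = 0.0, [], 0.0
    while t <= t_end_ms:
        v *= math.exp(-dt_ms / tau_ms)           # passive leak
        if any(abs(t - ti) < dt_ms / 2 for ti in input_times_ms):
            v += w                               # synaptic input
        if v >= v_th:                            # threshold crossing
            spikes.append(t)
            v = 0.0                              # reset
        t += dt_ms
    return spikes

# Two inputs 5 ms apart sum above threshold; 50 ms apart they do not.
close = lif_response([0.0, 5.0])
far = lif_response([0.0, 50.0])
```

The window over which the two inputs still sum is set by the membrane time constant; in the full models, T-type Ca2+ and H currents add the post-inhibitory rebound that this sketch omits.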
Procedia PDF Downloads 65