Search results for: MSW quantity prediction
2326 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction
Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal
Abstract:
Traditionally, monsoon forecasts have encountered many difficulties stemming from numerous issues such as a lack of adequate upper-air observations, the mesoscale nature of convection, inadequate resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, and the representation of orography. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each of which carries a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, in time, and across variables. This is the basic concept behind the multi-model superensemble, which comprises a training phase and a forecast phase. The training phase learns from the recent past performance of the models and determines statistical weights from a least-squares minimization via a simple multiple regression. These weights are then used in the forecast phase. The superensemble forecasts carry higher skill than the simple ensemble mean, the bias-corrected ensemble mean, and the best of the participating member models. This approach is a powerful post-processing method for estimating weather forecast parameters, reducing direct model output errors. Although it can be applied successfully to continuous parameters such as temperature, humidity, wind speed and mean sea level pressure, in this paper the approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability.
The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art GCMs, i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Centers for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada) and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models among the participating members are selected at each grid point and for each forecast step in the training period. A multi-model superensemble based on training with similar conditions is also discussed in the present study, under the assumption that training on similar conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods from the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have also been tested in combination with the above approaches. Comparison of these schemes against observations verifies that the newly developed approaches provide a more unified and skillful prediction of the summer monsoon (June to September) rainfall than the conventional multi-model approach and the member models.
Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction
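The training-phase regression described above can be sketched numerically. This is a minimal least-squares illustration with synthetic data, not the authors' code; the full scheme also removes model climatological means and fits weights grid point by grid point.

```python
import numpy as np

def train_superensemble_weights(train_forecasts, train_obs):
    """Least-squares weights from member forecasts (n_days, n_models)
    against observations (n_days,), as in the training phase."""
    weights, *_ = np.linalg.lstsq(train_forecasts, train_obs, rcond=None)
    return weights

def superensemble_forecast(forecasts, weights):
    """Forecast phase: weighted combination of member forecasts."""
    return forecasts @ weights

# Toy training set: two members that are biased versions of the truth.
rng = np.random.default_rng(0)
truth = rng.uniform(0.0, 10.0, 50)
members = np.column_stack([1.2 * truth + 0.5, 0.8 * truth - 0.3])
w = train_superensemble_weights(members, truth)
combined = superensemble_forecast(members, w)
```

Because the truth here lies in the span of the two biased members, the fitted weights recover it exactly; with real GCM output the weights instead minimize the residual error over the training period.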
Procedia PDF Downloads 139
2325 CFD Study of Subcooled Boiling Flow at Elevated Pressure Using a Mechanistic Wall Heat Partitioning Model
Authors: Machimontorn Promtong, Sherman C. P. Cheung, Guan H. Yeoh, Sara Vahaji, Jiyuan Tu
Abstract:
The wide range of industrial applications involving boiling flows promotes the necessity of establishing fundamental knowledge of boiling flow phenomena. For this purpose, a number of experimental and numerical studies have been performed to elucidate the underlying physics of this flow. In this paper, improved wall boiling models, implemented in ANSYS CFX 14.5, are introduced to study subcooled boiling flow at elevated pressure. At the heated wall boundary, a fractal model, a force-balance approach and a mechanistic frequency model are used to predict the nucleation site density, bubble departure diameter and bubble departure frequency. The presented wall heat flux partitioning closures were modified to consider the influence of bubbles sliding along the wall before lift-off, which usually happens in flow boiling. The simulation was performed based on the two-fluid model, with the standard k-ω SST model selected for turbulence modelling. Existing experimental data at around 5 bar were chosen to evaluate the accuracy of the presented mechanistic approach. The void fraction and Interfacial Area Concentration (IAC) are in good agreement with the experimental data. However, the bubble velocity and Sauter Mean Diameter (SMD) are over-predicted. This over-prediction may be caused by the consideration of only dispersed, spherical bubbles in the simulations. In future work, important physical mechanisms of bubbles, such as merging and shrinking while sliding on the heated wall, will be incorporated into this mechanistic model to enhance its capability over a wider range of flow predictions.
Keywords: subcooled boiling flow, computational fluid dynamics (CFD), mechanistic approach, two-fluid model
Procedia PDF Downloads 318
2324 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning
Authors: Hossein Havaeji, Tony Wong, Thien-My Dao
Abstract:
1. Research Problems and Research Objectives: A Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) is a system that uses BT to drive SCS transparency, security, durability and process integrity, as SCS data is not always visible, available or trusted. The costs of operating BT in the SCS are a common problem in several organizations. These costs must be estimated, as they can impact existing cost control strategies. To account for system and deployment costs, the following hurdle must be overcome: the costs of developing and running a BT in an SCS are not yet clear in most cases. Many industries aiming to use BT pay special attention to the BT installation cost, which has a direct impact on the total costs of the SCS. Predicting BT installation cost in the SCS may help managers decide whether BT will be an economic advantage. The first purpose of the research is to identify the main BT installation cost components in the SCS needed for deeper cost analysis; we then identify and categorize the main groups of cost components in more detail to utilize them in the prediction process. The second objective is to determine the most suitable supervised learning technique for predicting the costs of developing and running BT in an SCS in a particular case study. The last aim is to investigate how the running cost of BT can be incorporated into the total cost of the SCS. 2. Work Performed: Applied successfully in various fields, supervised learning is a method in which a model is trained on labeled data in order to predict an outcome measurement for previously unseen input data. The following steps must be conducted to pursue the objectives of our subject. The first step is a literature review to identify the different cost components of BT installation in an SCS.
Based on the literature review, we choose supervised learning methods suitable for BT installation cost prediction in an SCS. According to the literature review, some supervised learning algorithms that provide a powerful tool to classify BT installation components and predict BT installation cost are the Support Vector Regression (SVR) algorithm, the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). Choosing a case study to feed data into the models is the third step. Finally, we will identify the model with the best predictive performance to find the minimum BT installation costs in the SCS. 3. Expected Results and Conclusion: This study aims to propose a cost prediction of BT installation in an SCS with the help of supervised learning algorithms. We will first select a case study in the field of BT-enabled SCS, then use supervised learning algorithms to predict the BT installation cost in the SCS. We continue by finding the best predictive performance for developing and running BT in the SCS. Finally, the paper will be presented at the conference.
Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning
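One of the candidate techniques named above, a back-propagation neural network, can be sketched as follows. This is an illustrative sketch only: the three "cost drivers" and the cost signal are synthetic, invented for the example, and are not the paper's data or fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (200, 3))   # scaled cost drivers, e.g. hardware, integration, training (assumed)
y = (2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * X[:, 2]).reshape(-1, 1)  # toy installation-cost signal

W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros((1, 8))   # hidden layer
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros((1, 1))   # output layer
mse_before = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))

lr = 0.1
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)                 # forward pass
    err = (H @ W2 + b2) - y                  # prediction error
    dW2 = H.T @ err / len(X); db2 = err.mean(axis=0, keepdims=True)
    dH = (err @ W2.T) * (1 - H ** 2)         # back-propagate through tanh
    dW1 = X.T @ dH / len(X); db1 = dH.mean(axis=0, keepdims=True)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

mse_after = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

After training, the mean squared error drops far below its initial value, which is the behavior the cost-prediction step relies on.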
Procedia PDF Downloads 122
2323 Precipitation Intensity-Duration Based Threshold Analysis for Initiation of Landslides in Upper Alaknanda Valley
Authors: Soumiya Bhattacharjee, P. K. Champati Ray, Shovan L. Chattoraj, Mrinmoy Dhara
Abstract:
The entire Himalayan range is globally renowned for rainfall-induced landslides. The prime focus of the study is to determine a rainfall-based threshold for the initiation of landslides that can be used as an important component of an early warning system for alerting stakeholders. This research deals with the temporal dimension of slope failures due to extreme rainfall events along National Highway-58 from Karanprayag to Badrinath in the Garhwal Himalaya, India. Post-processed 3-hourly rainfall intensity data and the corresponding durations derived from daily rainfall data from the Tropical Rainfall Measuring Mission (TRMM) were used as the prime source of rainfall data. Landslide event records from the Border Roads Organisation (BRO) and some ancillary landslide inventory data for 2013 and 2014 were used to determine an intensity-duration (ID) based rainfall threshold. The derived governing threshold equation, I = 4.738·D^(-0.025), was validated with an accuracy of 70% for landslides during August and September 2014 and has been adopted for the prediction of landslides in the study region. From the obtained results and validation, it can be inferred that this equation can be used for landslide initiation in the study area as part of an early warning system. Results can improve significantly with ground-based rainfall estimates and a better database of landslide records. Thus, the study has demonstrated a very low-cost method to get first-hand information on the possibility of an impending landslide in any region, thereby providing alerts and better preparedness for landslide disaster mitigation.
Keywords: landslide, intensity-duration, rainfall threshold, TRMM, slope, inventory, early warning system
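Applying the derived intensity-duration threshold can be sketched as follows, with I the minimum triggering rainfall intensity and D the rainfall duration (units as in the TRMM-derived data used by the study):

```python
def threshold_intensity(duration):
    """Minimum rainfall intensity for landslide initiation, I = 4.738 * D**(-0.025)."""
    return 4.738 * duration ** -0.025

def may_trigger_landslide(intensity, duration):
    """True when observed intensity reaches or exceeds the threshold curve."""
    return intensity >= threshold_intensity(duration)
```

For a duration of 24, the threshold evaluates to about 4.38, so an observed intensity of 5 over that duration would raise an alert while an intensity of 3 would not.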
Procedia PDF Downloads 273
2322 Evaluation of the Analytic for Hemodynamic Instability as a Prediction Tool for Early Identification of Patient Deterioration
Authors: Bryce Benson, Sooin Lee, Ashwin Belle
Abstract:
Unrecognized or delayed identification of patient deterioration is a key cause of in-hospital adverse events. Clinicians rely on vital signs monitoring to recognize patient deterioration. However, due to ever-increasing nursing workloads and the manual effort required, vital signs tend to be measured and recorded intermittently and inconsistently, causing large gaps during patient monitoring. Additionally, during deterioration, the body's autonomic nervous system activates compensatory mechanisms, making the vital signs lagging indicators of underlying hemodynamic decline. This study analyzes the predictive efficacy of the Analytic for Hemodynamic Instability (AHI) system, an automated tool designed to help clinicians identify deteriorating patients early. The lead time analysis in this retrospective observational study assesses how far in advance AHI predicted deterioration before the start of an episode of hemodynamic instability (HI) became evident through vital signs. Results indicate that of the 362 episodes of HI in this study, 308 episodes (85%) were correctly predicted by the AHI system, with a median lead time of 57 minutes and an average of 4 hours (240.5 minutes). Of the 54 episodes not predicted, AHI detected 45 while the episode of HI was ongoing. Of the 9 undetected, 5 were missed by AHI due to either missing or noisy input ECG data during the episode of HI. In total, AHI was able to either predict or detect 98.9% of all episodes of HI in this study. These results suggest that AHI could provide an additional 'pair of eyes' on patients, continuously filling the monitoring gaps and consequently giving the patient care team the ability to be far more proactive in patient monitoring and adverse event management.
Keywords: clinical deterioration prediction, decision support system, early warning system, hemodynamic status, physiologic monitoring
Procedia PDF Downloads 187
2321 Experimental Study on the Molecular Spring Isolator
Authors: Muchun Yu, Xue Gao, Qian Chen
Abstract:
As a novel passive vibration isolation technology, the molecular spring isolator (MSI) is investigated in this paper. An MSI uses water and hydrophobic zeolites as its working medium. Under periodic excitation, water molecules intrude into the hydrophobic pores of the zeolites when the pressure rises and extrude from them when the pressure drops. At the same time, energy is stored, released and dissipated. An MSI of piston-cylinder structure was designed in this work, and experiments were conducted to investigate its stiffness properties. The results show that the MSI exhibits high-static-low-dynamic (HSLD) stiffness. Furthermore, factors such as the quantity of zeolites, the temperature and the ions in the water are shown to influence the stiffness properties of the MSI.
Keywords: hydrophobic zeolites, molecular spring, stiffness, vibration isolation
Procedia PDF Downloads 476
2320 Experimental Investigation on Correlation Between Permeability Variation and Sabkha Soil Salts Dissolution
Authors: Fahad A. Alotaibi
Abstract:
An increase in the salt dissolution rate under continuous water flow is expected to lead to the progressive collapse of the soil structure. Evaluating the relationship between soil salt dissolution and the variation of sabkha soil permeability, in terms of type, rate and quantity, is therefore necessary to assure construction safety in these environments. The current study investigates the relationship of soil permeability with the rate of dissolution of calcium (Ca²⁺), sulfate (SO₄²⁻), chloride (Cl⁻), magnesium (Mg²⁺), sodium (Na⁺) and potassium (K⁺) ions. Results revealed an increase in sabkha soil permeability with the rate of ion dissolution. This makes the efficiency of a waterproofing stabilization agent in reducing sabkha salt dissolution the main criterion in selecting a suitable stabilizing method.
Keywords: sabkha, permeability, salts, dissolution
Procedia PDF Downloads 106
2319 Effect of Germination on Nutritional Values of Isolates from Two Varieties (DAS and BS) of Under-Utilized Nigerian Cultivated Solojo Cowpea (Vigna Unguiculata L. Walp)
Authors: Henry O. Chibudike, Olubamike A. Adeyoju, Bolanle O. Oluwole, Kayode O. Adebowale, Bamidele I. Olu-Owolabi, Chinedum E. Chibudike
Abstract:
Studies on the mineral content of solojo flour and protein isolates from the two varieties (DAS and BS) of Nigerian cultivated solojo cowpeas were conducted to determine their nutritional value. These inorganic elements, or minerals, are classified into three categories: the first category is the macro elements, also known as major minerals; the second is the micro elements, also known as trace minerals; and the third is the ultra-trace minerals. Some of the macro elements are Ca, P, Na and Cl; the micro elements include iron, copper, cobalt, potassium, magnesium, iodine, zinc, manganese, molybdenum, F, Cr, Se and S. Results show that the proportion of sodium (Na), which is ingested into the body in the form of NaCl through food intake to maintain body pH and retain water, ranged from 728.97 to 253.37 ppm (72.90 to 25.34 mg/100 g); 715.24 to 235.45 ppm; 735.28 to 270.37 ppm; and 726.59 to 264.35 ppm for FFDAS, FFBS, DAS and BS respectively, with the values of all the germinated samples below the control. Iron content ranged from 4.25 to 13.50 mg/100 g for FFDAS; from 3.15 to 12.56 mg/100 g for FFBS; from 3.81 to 12.90 mg/100 g for DAS; and from 3.42 to 9.40 mg/100 g for BS. The values of the germinated flours were all greater than those of the ungerminated flour. Iron helps to transport oxygen around the body, helps to build red blood cells, and helps to convert food into the energy needed by the body. Manganese, an element needed only in micro quantities, is likewise necessary for converting food into energy and is crucial for healthy bone and cartilage formation. Results also show that zinc quantity increased as germination proceeded, with values ranging from 38.80 ppm to 230.00 ppm (3.880 mg/100 g to 23.00 mg/100 g; 0.003880% to 0.0230%); 40.84 to 250.01 ppm; 32.85 to 93.41 ppm; and 37.07 to 115.00 ppm for FFDAS, FFBS, DAS and BS respectively.
The Ca content improved significantly (p<0.05) with sprouting; values ranged from 250.56 ppm to 760.03 ppm (25.056 to 76.00 mg/100 g, or 0.0251 to 0.0760%); 400.40 to 998.22 ppm; 116.87 to 195.69 ppm; and 113.48 to 220.75 ppm for FFDAS, FFBS, DAS and BS respectively. Zinc, although needed only at the micro level in the body, is essential for a strong immune system to keep the body in good health; it is also crucial for the maintenance of a healthy sense of taste and smell. Calcium is critical for strong bones and teeth, blood coagulation, and muscle contraction and relaxation. Magnesium is needed to build enzymes and antioxidants and for healthy bones, while potassium is needed to maintain water balance, muscle movement and nerve impulses; it functions in conjunction with Na to regulate blood pressure.
Keywords: Solojo cowpea, underutilized legumes, protein isolates, BS, DAS, ungerminated
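The abstract quotes each mineral both in ppm and in mg/100 g (and occasionally as a mass percentage). On a mass basis these are fixed conversions, sketched below; ppm here is taken as mg of mineral per kg of sample.

```python
def ppm_to_mg_per_100g(ppm):
    """mg/100 g from ppm (mg per kg): divide by 10."""
    return ppm / 10.0

def ppm_to_percent(ppm):
    """Percent by mass from ppm: divide by 10000."""
    return ppm / 10000.0
```

For example, 728.97 ppm of sodium corresponds to 72.90 mg/100 g, and 230.00 ppm of zinc to 23.00 mg/100 g or 0.0230%, matching the paired figures quoted above.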
Procedia PDF Downloads 58
2318 A Prediction of Cutting Forces Using Extended Kienzle Force Model Incorporating Tool Flank Wear Progression
Authors: Wu Peng, Anders Liljerehn, Martin Magnevall
Abstract:
In metal cutting, tool wear gradually changes the micro-geometry of the cutting edge. Today there is a significant gap in the understanding of the impact these geometrical changes have on the cutting forces, which govern tool deflection and heat generation in the cutting zone. Accurate models and understanding of the interaction between the workpiece and the cutting tool lead to improved accuracy in simulations of the cutting process. These simulations are useful in several application areas, e.g., optimization of insert geometry and machine tool monitoring. This study aims to develop an extended Kienzle force model that accounts for the effect that rake angle variations and tool flank wear have on the cutting forces. The starting point is cutting force measurements from orthogonal turning tests of pre-machined flanges of well-defined width, using triangular coated inserts to assure orthogonal conditions. The cutting forces were measured by a dynamometer for a set of three different rake angles, and wear progression was monitored during machining by an optical measuring collaborative robot. The method combines the measured cutting forces with the inserts' flank wear progression to extend the mechanistic cutting force model with flank wear as an input parameter. The adapted cutting force model is validated in a turning process with commercial cutting tools and shows significant capability for predicting cutting forces while accounting for tool flank wear and inserts with different rake angles. The results of this study suggest that the nonlinear effect of tool flank wear and the interaction between the workpiece and the cutting tool can be captured by the developed cutting force model.
Keywords: cutting force, Kienzle model, predictive model, tool flank wear
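As background to the extension described above, the classical Kienzle law relates cutting force to the undeformed chip geometry. The sketch below adds a purely illustrative additive flank-wear term; the coefficient values and the form of the wear term are assumptions for the example, not the paper's fitted model.

```python
def kienzle_force(b, h, k_c11=1500.0, m_c=0.25):
    """Classical Kienzle law: Fc = k_c1.1 * b * h**(1 - m_c).
    b: width of cut [mm], h: undeformed chip thickness [mm],
    k_c11: specific cutting force [N/mm^2] at b = h = 1 mm (assumed value)."""
    return k_c11 * b * h ** (1.0 - m_c)

def kienzle_force_with_wear(b, h, vb, k_wear=300.0, **kwargs):
    """Hypothetical extension: extra force proportional to the flank wear land VB [mm]."""
    return kienzle_force(b, h, **kwargs) + k_wear * vb * b
```

With these assumed coefficients, a worn insert (VB = 0.2 mm) predicts a higher force than a fresh one for the same cut, which is the qualitative effect the extended model is built to capture.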
Procedia PDF Downloads 108
2317 Digital Twin for Retail Store Security
Authors: Rishi Agarwal
Abstract:
Digital twins are emerging as a strong technology for imitating and monitoring physical objects digitally, in real time, across sectors. A digital twin does not only operate in the digital space; it also actuates responses in the physical space based on digital-space processing such as storage, modeling, learning, simulation and prediction. This paper explores the application of digital twins for enhancing physical security in retail stores. The retail sector still relies on outdated physical security practices, such as manual monitoring and metal detectors, which are insufficient for modern needs. There is a lack of real-time data and system integration, leading to ineffective emergency response and preventative measures. As retail automation increases, new digital frameworks must manage safety without human intervention. To address this, the paper proposes an intelligent digital twin framework. It collects diverse data streams from in-store sensors, surveillance, external sources and customer devices; advanced analytics and simulations then enable real-time monitoring, incident prediction, automated emergency procedures and stakeholder coordination. Overall, the digital twin improves physical security through automation, adaptability and comprehensive data sharing. The paper also analyzes the pros and cons of implementing this technology through an Emerging Technology Analysis Canvas, which examines different aspects of the technology through both narrow and wide lenses to help decision-makers decide whether to implement it. On a broader scale, this showcases the value of digital twins in transforming legacy systems across sectors, and how data sharing can create a safer world for both retail store customers and owners.
Keywords: digital twin, retail store safety, digital twin in retail, digital twin for physical safety
Procedia PDF Downloads 72
2316 Modelling the Behavior of Commercial and Test Textiles against Laundering Process by Statistical Assessment of Their Performance
Authors: M. H. Arslan, U. K. Sahin, H. Acikgoz-Tufan, I. Gocek, I. Erdem
Abstract:
Various exterior factors have perpetual effects on textile materials during wear, use and laundering in everyday life. In accordance with their frequency of use, textile materials are required to be laundered at certain intervals. The medium in which the laundering process takes place has inevitable detrimental physical and chemical effects on textile materials, caused by the unique parameters inherent in the process. The connatural structures of various textile materials result in many different physical, chemical and mechanical characteristics. Because of their specific structures, these materials behave differently against several exterior factors. By modeling the behavior of commercial and test textiles group-wise against the laundering process, it is possible to disclose the relation between these two groups of materials, which leads to a better understanding of the similarities and differences in their behaviors against the washing parameters of laundering. Thus, the goal of the current research is to examine the behavior of two groups of textile materials, commercial textiles and test textiles, towards the main washing machine parameters of the laundering process, namely temperature, load quantity, mechanical action and water level, by concentrating on shrinkage, pilling, sewing defects, collar abrasion, defects other than sewing defects, whitening and the overall properties of the textiles. In this study, cotton fabrics were preferred as the commercial textiles because garments made of cotton are the products most demanded in the market by textile consumers in daily life. A full factorial experimental set-up was used to design the experimental procedure. All profiles, always including all of the commercial and test textiles, were laundered for 20 cycles in a commercial home laundering machine to investigate the effects of the chosen parameters.
For the laundering process, a modified version of the IEC 60456 test method was utilized. The amount of detergent was adjusted in steps of 0.5 g per liter depending on the load quantity level. Datacolor 650®, the EMPA photographic standards for pilling assessment and visual examination were utilized to test and characterize the textiles. Furthermore, the relation between the commercial and test textiles in terms of their performance was investigated in depth with the help of statistical analysis performed with the MINITAB® package program, modeling their behavior against the parameters of the laundering process. In the experimental work, the behaviors of both groups of textiles towards the washing machine parameters were visually and quantitatively assessed in the dry state.
Keywords: behavior against washing machine parameters, performance evaluation of textiles, statistical analysis, commercial and test textiles
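The full factorial set-up described above enumerates every combination of the chosen washing parameters. A minimal sketch with hypothetical levels (the actual levels are not stated in the abstract):

```python
from itertools import product

temperatures = [30, 40, 60]            # wash temperature, °C (assumed levels)
loads = ["half", "full"]               # load quantity (assumed levels)
actions = ["normal", "intensive"]      # mechanical action (assumed levels)
water_levels = ["low", "high"]         # water amount (assumed levels)

# One laundering profile per combination: 3 * 2 * 2 * 2 = 24 runs.
runs = list(product(temperatures, loads, actions, water_levels))
```

Each of the 24 profiles would then be laundered for the full 20 cycles, which is what makes a full factorial design costly but exhaustive over the parameter space.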
Procedia PDF Downloads 359
2315 Artificial Neural Network Modeling of a Closed Loop Pulsating Heat Pipe
Authors: Vipul M. Patel, Hemantkumar B. Mehta
Abstract:
Technological innovations in the electronics world demand novel, compact, simply designed, less costly and effective heat transfer devices. The Closed Loop Pulsating Heat Pipe (CLPHP) is a passive phase-change heat transfer device with the potential to transfer heat quickly and efficiently from source to sink. The thermal performance of a CLPHP is governed by various parameters such as the number of U-turns, orientation, heat input, working fluid and filling ratio. The present paper is an attempt to predict the thermal performance of a CLPHP using an Artificial Neural Network (ANN). Filling ratio and heat input are considered as input parameters, while thermal resistance is set as the target parameter. The types of neural networks considered in the present paper are radial basis, generalized regression, linear layer, cascade-forward back propagation, feed-forward back propagation, feed-forward distributed time delay, layer recurrent and Elman back propagation. Linear, logistic sigmoid, tangent sigmoid and radial basis Gaussian functions are used as transfer functions. Prediction accuracy is measured against the experimental data reported by researchers in the open literature, as a function of the Mean Absolute Relative Deviation (MARD). The predictions of a generalized regression ANN model with a spread constant of 4.8 are found to agree with the experimental data, with MARD in the range of ±1.81%.
Keywords: ANN models, CLPHP, filling ratio, generalized regression, spread constant
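The MARD metric used above can be sketched as follows; the thermal resistance values here are made up for illustration and are not taken from the paper.

```python
import numpy as np

def mard(measured, predicted):
    """Mean Absolute Relative Deviation, in percent."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * float(np.mean(np.abs((predicted - measured) / measured)))

R_exp = [0.80, 0.65, 0.52]   # experimental thermal resistance, K/W (hypothetical)
R_ann = [0.81, 0.64, 0.53]   # ANN-predicted values (hypothetical)
```

Here mard(R_exp, R_ann) comes to roughly 1.57%, i.e. an average relative error within the ±1.81% band reported above.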
Procedia PDF Downloads 292
2314 Performance and Voyage Analysis of Marine Gas Turbine Engine, Installed to Power and Propel an Ocean-Going Cruise Ship from Lagos to Jeddah
Authors: Mathias U. Bonet, Pericles Pilidis, Georgios Doulgeris
Abstract:
An aero-derivative marine gas turbine engine model is simulated as the main propulsion prime mover powering a cruise ship designed and routed to transport intending Muslim pilgrims for the annual hajj pilgrimage from Nigeria to the Islamic port city of Jeddah in Saudi Arabia. A performance assessment of the gas turbine engine has been conducted by examining the effect of the varying aerodynamic and hydrodynamic conditions encountered at various geographical locations along the scheduled transit route during the voyage. The investigation focuses on the overall behavior of the gas turbine engine employed to power and propel the ship as it operates under the ideal and adverse conditions encountered during calm and rough weather, according to the different seasons of the year in which the voyage may be undertaken. The variation of engine performance under varying operating conditions has been treated as a very important economic issue by determining the speed at which, and hence the time in which, the journey is completed, as well as the quantity of fuel required for undertaking the voyage. The assessment also considers the increased resistance caused by fouling of the submerged portion of the ship's hull surface, with its resultant effect on the power output of the engine as well as the overall performance of the propulsion system. Daily ambient temperature levels were obtained from the UK Meteorological Office, while the varying degrees of turbulence along the transit route, graded according to the Beaufort scale, were also obtained as major input variables of the investigation.
By assuming the ship to be navigating the Atlantic Ocean and the Mediterranean Sea during the winter, spring and summer seasons, the performance modeling and simulation were accomplished using an integrated gas turbine performance simulation code known as 'Turbomach' along with a Matlab-generated code named 'Poseidon', both of which were developed at the Power and Propulsion Department of Cranfield University. As a case study, the results of the various assumptions have further revealed that the marine gas turbine is a reliable and available alternative to the conventional marine propulsion prime movers that have hitherto dominated the maritime industry. The techno-economic and environmental assessment of this type of propulsion prime mover has enabled the determination of the effect of changes in weather and sea conditions on the ship speed, the trip time and the quantity of fuel burned throughout the voyage.
Keywords: ambient temperature, hull fouling, marine gas turbine, performance, propulsion, voyage
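The trip-time and fuel-quantity side of such an assessment can be illustrated with a back-of-the-envelope sketch; the distance, power rating and specific fuel consumption below are assumed round numbers, not the study's figures.

```python
def trip_time_hours(distance_nm, speed_knots):
    """Voyage time from route distance [nautical miles] and ship speed [knots]."""
    return distance_nm / speed_knots

def fuel_burned_tonnes(power_kw, sfc_kg_per_kwh, hours):
    """Fuel mass from shaft power, specific fuel consumption and running time."""
    return power_kw * sfc_kg_per_kwh * hours / 1000.0

# e.g. a 20 MW engine at an assumed 0.23 kg/kWh over a 4000 nm leg at 20 knots:
t = trip_time_hours(4000, 20)              # 200 hours
fuel = fuel_burned_tonnes(20000, 0.23, t)  # about 920 tonnes
```

Rough weather or hull fouling that cuts the achievable speed to 16 knots stretches the same leg to 250 hours, which is how weather, sea state and fouling feed through to trip time and fuel quantity in the assessment above.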
Procedia PDF Downloads 186
2313 Category-Base Theory of the Optimum Signal Approximation Clarifying the Importance of Parallel Worlds in the Recognition of Human and Application to Secure Signal Communication with Feedback
Authors: Takuro Kida, Yuichi Kida
Abstract:
We present the mathematical basis of a new class of algorithms that treats a historical cause of continuing discrimination in the world, as well as its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. With respect to a matrix operator-filter bank in which the matrix operator-analysis-filter bank H and the matrix operator-sampling-filter bank S are given, we first introduce a detailed algorithm to derive the optimum matrix operator-synthesis-filter bank Z that simultaneously minimizes all the worst-case measures of the matrix operator-error-signals E(ω) = F(ω) − Y(ω) between the matrix operator-input-signals F(ω) and the matrix operator-output-signals Y(ω) of the filter bank. Further, feedback is introduced into the above approximation theory, and it is shown that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge for signal prediction. Secondly, the concept of category from the field of mathematics is applied to the above optimum signal approximation, and it is shown that the category-based approximation theory applies to a set-theoretic consideration of human recognition. Based on this discussion, it is shown naturally why the narrow perception that tends to create isolation shows an apparent advantage in the short term and why, often, such narrow thinking becomes intimate with discriminatory action in a human group. Throughout these considerations, we argue that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception in which we share the set of invisible error signals, including the words and the consciousness of both worlds.
Keywords: signal prediction, pseudo inverse matrix, artificial intelligence, conditional optimization
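A loose finite-dimensional analogue of the optimisation above, not the paper's operator-theoretic construction: with an analysis matrix H, the Moore-Penrose pseudo-inverse (cf. the keywords) gives a synthesis matrix Z for which the error signals E = F − Y vanish whenever H has full column rank.

```python
import numpy as np

rng = np.random.default_rng(2)
H = rng.normal(size=(6, 4))   # analysis operator: 6 samples of a 4-dimensional signal
Z = np.linalg.pinv(H)         # synthesis operator via the pseudo-inverse
F = rng.normal(size=(4, 10))  # a batch of input signals
Y = Z @ (H @ F)               # analysis followed by synthesis
E = F - Y                     # error signals
```

Because H is tall and (almost surely) of full column rank, Z @ H is the identity and the error E is numerically zero; the paper's worst-case formulation generalises this to operator filter banks where perfect reconstruction is not attainable.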
Procedia PDF Downloads 1562312 A Framework on Data and Remote Sensing for Humanitarian Logistics
Authors: Vishnu Nagendra, Marten Van Der Veen, Stefania Giodini
Abstract:
Effective humanitarian logistics operations are a cornerstone in the success of disaster relief operations. However, for effectiveness, they need to be demand-driven and supported by adequate data for prioritization. Without this data, operations are carried out in an ad hoc manner and eventually become chaotic. The current availability of geospatial data helps in creating models for predictive damage and vulnerability assessment, helping logisticians gain an understanding of the nature and extent of the disaster damage. This translates into actionable information on the demand for relief goods, the state of the transport infrastructure, and subsequently the priority areas for relief delivery. However, due to the unpredictable nature of disasters, the accuracy of the models needs improvement, which can be achieved using remote sensing data from UAVs (Unmanned Aerial Vehicles) or satellite imagery, which again come with certain limitations. This research addresses the need for a framework to combine data from different sources to support humanitarian logistics operations and prediction models. The focus is on developing a workflow to combine data from satellites and UAVs after a disaster strikes. A three-step approach is followed: first, the data requirements for logistics activities are made explicit by carrying out semi-structured interviews with field logistics workers. Second, the limitations of current data collection tools are analyzed to develop workaround solutions by following a systems design approach. Third, the data requirements and the developed workaround solutions are combined into a coherent workflow. The outcome of this research will provide a new method for logisticians to have immediate access to accurate and reliable data to support data-driven decision making. Keywords: unmanned aerial vehicles, damage prediction models, remote sensing, data-driven decision making
Procedia PDF Downloads 3792311 Financial Fraud Prediction for Russian Non-Public Firms Using Relational Data
Authors: Natalia Feruleva
Abstract:
The goal of this paper is to develop a fraud risk assessment model based on both relational and financial data and to test the impact of relationships between Russian non-public companies on the likelihood of financial fraud. Relationships mean various linkages between companies, such as parent-subsidiary relationships and person-related relationships. These linkages may provide additional opportunities for committing fraud. Person-related relationships appear when firms share a director, or the director owns another firm. The number of companies owned by the CEO, the number managed by the CEO, and the number of subsidiaries were calculated to measure these relationships. Moreover, a dummy variable describing the existence of a parent company was also included in the model. Control variables such as financial leverage and return on assets were also implemented because they describe the motivating factors of fraud. To check the hypotheses about the influence of the chosen parameters on the likelihood of financial fraud, information about person-related relationships between companies, the existence of a parent company and subsidiaries, profitability, and the level of debt was collected. The resulting sample consists of 160 Russian non-public firms, including 80 fraudsters and 80 non-fraudsters operating in 2006-2017. The dependent variable is dichotomous, taking the value 1 if the firm is engaged in financial crime and 0 otherwise. Employing a probit model, it was revealed that the number of companies owned or managed by the firm's CEO has a significant impact on the likelihood of financial fraud. The results obtained indicate that the more companies are affiliated with the CEO, the higher the likelihood that the company will be involved in financial crime. The forecast accuracy of the model is about 80%.
Thus, the model based on both relational and financial data gives a high level of forecast accuracy. Keywords: financial fraud, fraud prediction, non-public companies, regression analysis, relational data
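The probit specification described above can be sketched in a few lines. The data below are synthetic stand-ins (the paper's firm-level sample is not public), and the coefficient values and variable ranges are invented for illustration; the fitting step is plain gradient ascent on the concave probit log-likelihood rather than the authors' statistical package.

```python
import math
import random

def probit_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Synthetic stand-in for the paper's sample of non-public firms.
random.seed(0)
n = 400
X, y = [], []
for _ in range(n):
    ceo_firms = random.randint(0, 5)      # companies affiliated with the CEO
    leverage = random.random()            # financial leverage (scaled 0-1)
    latent = -1.5 + 0.8 * ceo_firms + 1.0 * leverage + random.gauss(0, 1)
    X.append([1.0, ceo_firms, leverage])  # leading 1.0 is the intercept
    y.append(1 if latent > 0 else 0)

# Gradient ascent on the probit log-likelihood (concave, so this
# converges toward the maximum likelihood estimate).
beta = [0.0, 0.0, 0.0]
for _ in range(1500):
    grad = [0.0, 0.0, 0.0]
    for xi, yi in zip(X, y):
        z = sum(b * v for b, v in zip(beta, xi))
        p = min(max(probit_cdf(z), 1e-9), 1 - 1e-9)
        pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
        w = pdf / p if yi == 1 else -pdf / (1 - p)
        for j in range(3):
            grad[j] += w * xi[j]
    beta = [b + 0.1 * g / n for b, g in zip(beta, grad)]

acc = sum((probit_cdf(sum(b * v for b, v in zip(beta, xi))) > 0.5) == yi
          for xi, yi in zip(X, y)) / n
print("CEO-affiliation coefficient:", round(beta[1], 2),
      "in-sample accuracy:", round(acc, 2))
```

A positive fitted coefficient on the CEO-affiliation count mirrors the paper's finding that more CEO-affiliated companies raise the likelihood of fraud.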
Procedia PDF Downloads 1192310 Experimental Study of the Sound Absorption of a Geopolymer Panel with a Textile Component Designed for a Railway Corridor
Authors: Ludmila Fridrichová, Roman Knížek, Pavel Němeček, Katarzyna Ewa Buczkowska
Abstract:
The design of the sound absorption panel, which consists of three layers, is presented in this study. The first layer of the panel is perforated and transmits sound into the inner part of the panel. The second layer is composed of a bulk material whose purpose is to absorb as much noise as possible. The third layer of the panel has two functions: the first is to ensure the strength of the panel, and the second is to reflect sound back into the bulk layer. Experimental results have shown that the size of the holes in the perforated layer affects sound absorption at the target frequency, while the fraction of the area that is perforated affects the quantity of sound absorbed. Keywords: sound absorption, railway corridor, health, textile waste, natural fibres, concrete
Procedia PDF Downloads 162309 Predictability of Kiremt Rainfall Variability over the Northern Highlands of Ethiopia on Dekadal and Monthly Time Scales Using Global Sea Surface Temperature
Authors: Kibrom Hadush
Abstract:
Countries like Ethiopia, whose economy is mainly based on rain-fed agriculture, are highly vulnerable to climate variability and weather extremes. Sub-seasonal (monthly) and dekadal forecasts are hence critical for crop production and water resource management. Therefore, this paper studies the predictability and variability of Kiremt rainfall over the northern half of Ethiopia on monthly and dekadal time scales in association with global Sea Surface Temperature (SST) at different lag times. Trends in rainfall have been analyzed on annual, seasonal (Kiremt), monthly, and dekadal (June-September) time scales based on rainfall records of 36 meteorological stations distributed across four homogeneous zones of the northern half of Ethiopia for the period 1992-2017. The results from the progressive Mann-Kendall trend test and Sen's slope method show that there is no significant trend in the annual, Kiremt, monthly, and dekadal rainfall totals at most of the stations studied. Moreover, rainfall in the study area varies spatially and temporally, increasing from the northeastern Rift Valley toward the northwestern highlands. Graphical correlation and a multiple linear regression model are employed to investigate the association between global SSTs and Kiremt rainfall over the homogeneous rainfall zones and to predict monthly and dekadal (June-September) rainfall using SST predictors. The results of this study show that, in general, SST in the equatorial Pacific Ocean is the main source of predictive skill for Kiremt rainfall variability over the northern half of Ethiopia. Regional SSTs in the Atlantic and Indian Oceans contribute to Kiremt rainfall variability over the study area as well.
Moreover, the correlation analysis showed that the decline of monthly and dekadal Kiremt rainfall over most of the homogeneous zones of the study area is caused by the corresponding persistent warming of the SST in the eastern and central equatorial Pacific Ocean during the period 1992-2017. It is also found that monthly and dekadal Kiremt rainfall over the northern and northwestern highlands and northeastern lowlands of Ethiopia is positively correlated with the SST in the western equatorial Pacific and the eastern and tropical northern Atlantic Ocean. Furthermore, the SSTs in the western equatorial Pacific and Indian Oceans are positively correlated with Kiremt season rainfall in the northeastern highlands. Overall, the results showed that prediction models using combined SSTs from various ocean regions (equatorial and tropical) performed reasonably well (with R² ranging from 30% to 65%) in predicting monthly and dekadal rainfall, and they are recommended for efficient prediction of Kiremt rainfall over the study area to aid systematic and informed decision making within the agricultural sector. Keywords: dekadal, Kiremt rainfall, monthly, Northern Ethiopia, sea surface temperature
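The multiple-linear-regression step described above can be sketched as follows. The SST index series and rainfall values are synthetic stand-ins (the real station and SST records are not reproduced here), and the predictor names are illustrative assumptions; the solver is an ordinary least-squares fit via the normal equations.

```python
import random

def lstsq(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for c in range(k):                       # forward elimination with pivoting
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for j in range(c, k):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    coef = [0.0] * k
    for c in reversed(range(k)):             # back substitution
        coef[c] = (b[c] - sum(A[c][j] * coef[j] for j in range(c + 1, k))) / A[c][c]
    return coef

# Illustrative stand-in data: Nino3.4 and tropical Atlantic SST anomalies
# as predictors of dekadal Kiremt rainfall (invented relationship).
random.seed(1)
nino34 = [random.gauss(0, 1) for _ in range(120)]
atl = [random.gauss(0, 1) for _ in range(120)]
rain = [80 - 12 * n + 6 * a + random.gauss(0, 5) for n, a in zip(nino34, atl)]

X = [[1.0, n, a] for n, a in zip(nino34, atl)]
coef = lstsq(X, rain)
pred = [sum(c * v for c, v in zip(coef, row)) for row in X]
mean_r = sum(rain) / len(rain)
r2 = 1 - (sum((o - p) ** 2 for o, p in zip(rain, pred))
          / sum((o - mean_r) ** 2 for o in rain))
print("coefficients:", [round(c, 1) for c in coef], "R^2:", round(r2, 2))
```

The negative Pacific coefficient in this toy fit mirrors the abstract's finding that equatorial Pacific warming accompanies rainfall decline, while the Atlantic term contributes positively.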
Procedia PDF Downloads 1412308 Revolutionizing Traditional Farming Using Big Data/Cloud Computing: A Review on Vertical Farming
Authors: Milind Chaudhari, Suhail Balasinor
Abstract:
Due to massive deforestation and an ever-increasing population, the organic content of the soil is depleting at a much faster rate. Because of this, there is a high chance that worldwide food production will drop by 40% in the next two decades. Vertical farming can help sustain food production by leveraging big data and cloud computing to ensure plants are grown naturally, providing the optimum nutrients and sunlight by analyzing millions of data points. This paper outlines the most important parameters in vertical farming and how a combination of big data and AI helps in calculating and analyzing these millions of data points. Finally, the paper outlines how different organizations are controlling the indoor environment by leveraging big data to enhance food quantity and quality. Keywords: big data, IoT, vertical farming, indoor farming
Procedia PDF Downloads 1752307 Design of Sustainable Concrete Pavement by Incorporating RAP Aggregates
Authors: Selvam M., Vadthya Poornachandar, Surender Singh
Abstract:
Reclaimed Asphalt Pavement (RAP) aggregates are generally dumped in open areas after the demolition of asphalt pavements. The utilization of RAP aggregates in cement concrete pavements may provide several socio-economic-environmental benefits and could embrace the circular economy. Cross-recycling of RAP aggregates in concrete pavement could reduce the consumption of virgin aggregates and save fertile land. However, the structural as well as functional properties of RAP-concrete could be significantly lower than those of conventional Pavement Quality Concrete (PQC) pavements. This warrants judicious selection of the RAP fraction (coarse and fine aggregates) along with an accurate proportion of the same for PQC highways. Also, the selection of the RAP fraction and its proportion should not be based solely on the mechanical properties of RAP-concrete specimens but also governed by the structural and functional behavior of the pavement system. In this study, an effort has been made to predict the optimum RAP fraction and its corresponding proportion for cement concrete pavements, considering both low-volume and high-volume roads. Initially, the effect of RAP inclusion on the fresh and mechanical properties of concrete pavement mixes is mapped through an extensive literature survey; almost all studies available to date are considered. Indian Roads Congress (IRC) methods, the most widely used design methods for concrete pavements in India, are adopted for this study. Subsequently, fatigue damage analysis is performed to evaluate the required safe thickness of the pavement slab for different fractions of coarse RAP. Consequently, the performance of RAP-concrete is predicted by employing the AASHTO 1993 model for the following performance measures: faulting, cracking, and smoothness.
The performance prediction and total cost analysis of RAP aggregates indicate that the optimum proportions of coarse RAP aggregates in the PQC mix are 35% and 50% for high-volume and low-volume roads, respectively. Keywords: concrete pavement, RAP aggregate, performance prediction, pavement design
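The fatigue damage analysis mentioned above can be sketched with Miner's cumulative-damage rule. The fatigue law below is the classic PCA form, used here as a hedged stand-in for the IRC criteria the study actually applies, and the axle-load groups are invented numbers for illustration.

```python
import math

def allowable_repetitions(stress_ratio):
    """PCA-style fatigue law for concrete pavements: allowable load
    repetitions as a function of the flexural stress ratio (SR)."""
    sr = stress_ratio
    if sr >= 0.55:
        return 10 ** ((0.9718 - sr) / 0.0828)
    if sr > 0.45:
        return (4.2577 / (sr - 0.4325)) ** 3.268
    return math.inf          # below SR = 0.45, fatigue life is unlimited

# Miner's rule: cumulative damage is the sum of expected over allowable
# repetitions per axle-load group; the slab is safe when damage < 1.0.
axle_groups = [(0.60, 5_000), (0.50, 80_000), (0.40, 500_000)]  # (SR, reps)
damage = sum(n / allowable_repetitions(sr) for sr, n in axle_groups)
print("cumulative fatigue damage:", round(damage, 3))
```

In a design loop, the trial slab thickness (which sets each stress ratio) would be increased until this cumulative damage drops below 1.0.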
Procedia PDF Downloads 1582306 Machine Learning Techniques in Seismic Risk Assessment of Structures
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IMs) given source characteristics, source-to-site distance, and local site conditions for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The results indicate the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, with Random Forest in particular outperforming the other algorithms. However, the conventional method is a better tool when limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between structural demand responses (e.g., component deformations, accelerations, internal forces, etc.)
and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states, and they therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analysis. Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine
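The linear-regression versus Random Forest comparison described above can be sketched on synthetic records; the study's actual ground-motion database is not reproduced here, and the "true" functional form below (including the magnitude-distance interaction) is invented to make the nonlinearity visible. The sketch assumes scikit-learn is available.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic ground-motion records: magnitude M, distance R, site term Vs30.
rng = np.random.default_rng(0)
n = 2000
M = rng.uniform(3.0, 6.0, n)
R = rng.uniform(5.0, 200.0, n)
vs30 = rng.uniform(200.0, 800.0, n)

# Invented "true" model with a magnitude-dependent distance attenuation,
# a nonlinearity a fixed-coefficient linear model cannot represent.
ln_pga = (1.2 * M - (1.0 + 0.3 * (M - 4.5)) * np.log(R)
          - 0.002 * R + rng.normal(0.0, 0.3, n))

X = np.column_stack([M, np.log(R), vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, ln_pga, random_state=0)

lin = LinearRegression().fit(X_tr, y_tr)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("linear R^2:", round(lin.score(X_te, y_te), 3),
      "random forest R^2:", round(rf.score(X_te, y_te), 3))
```

With ample data, the tree ensemble recovers the interaction and scores higher on held-out records, matching the abstract's observation that Random Forest outperforms the linear baseline when sufficient data are available.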
Procedia PDF Downloads 1062305 A Computational Approach for the Prediction of Relevant Olfactory Receptors in Insects
Authors: Zaide Montes Ortiz, Jorge Alberto Molina, Alejandro Reyes
Abstract:
Insects are extremely successful organisms. A sophisticated olfactory system is in part responsible for their survival and reproduction. The detection of volatile organic compounds can positively or negatively affect many behaviors in insects. Compounds such as carbon dioxide (CO2), ammonium, indole, and lactic acid are essential for many species of mosquitoes, like Anopheles gambiae, in order to locate vertebrate hosts. For instance, in A. gambiae, the olfactory receptor AgOR2 is strongly activated by indole, which accounts for almost 30% of human sweat. On the other hand, in some insects of agricultural importance, the detection and identification of pheromone receptors (PRs) in lepidopteran species has become a promising field for integrated pest management. For example, disruption of the pheromone receptor BmOR1, mediated by transcription activator-like effector nucleases (TALENs), completely removed the sensitivity to bombykol, affecting the pheromone-source searching behavior of male moths. The detection and identification of olfactory receptors in insect genomes is therefore fundamental to improve our understanding of ecological interactions and to provide alternatives in integrated pest and vector management. Hence, the objective of this study is to propose a bioinformatic workflow to enhance the detection and identification of potential olfactory receptors in the genomes of relevant insects. Applying hidden Markov models (HMMs) and different computational tools, potential candidates for pheromone receptors in Tuta absoluta were obtained, as well as potential carbon dioxide receptors in Rhodnius prolixus, the main vector of Chagas disease. This study showed the validity of a bioinformatic workflow with the potential to improve the identification of certain olfactory receptors in different orders of insects. Keywords: bioinformatic workflow, insects, olfactory receptors, protein prediction
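The profile-based search at the heart of such a workflow can be illustrated with a position-specific scoring matrix (PSSM) scan. This is a deliberately simplified, position-independent stand-in for the profile-HMM search (e.g., HMMER) the study applies, and the seed "alignment" and candidate sequences below are invented, not real olfactory-receptor motifs.

```python
import math
from collections import Counter

AMINO = "ACDEFGHIKLMNPQRSTVWY"
# Invented seed alignment standing in for a curated receptor-domain motif.
seed_alignment = ["TYHWL", "TFHWL", "TYHWV", "SYHWL"]

def build_pssm(alignment, pseudocount=1.0):
    """Position-specific log-odds scores against a uniform background."""
    pssm = []
    for i in range(len(alignment[0])):
        counts = Counter(seq[i] for seq in alignment)
        total = len(alignment) + pseudocount * len(AMINO)
        pssm.append({a: math.log(((counts[a] + pseudocount) / total)
                                 / (1 / len(AMINO))) for a in AMINO})
    return pssm

def best_window_score(sequence, pssm):
    """Slide the profile along the sequence, keep the best-scoring window."""
    w = len(pssm)
    return max(sum(pssm[i][sequence[j + i]] for i in range(w))
               for j in range(len(sequence) - w + 1))

pssm = build_pssm(seed_alignment)
candidate = "MKLSTYHWLAGQR"   # contains a close match to the seed motif
unrelated = "MKAAAGGGPPPQR"
print(round(best_window_score(candidate, pssm), 2),
      round(best_window_score(unrelated, pssm), 2))
```

A profile HMM adds insert and delete states to this scoring idea, which is why HMM-based tools tolerate gaps that a fixed-width PSSM scan cannot.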
Procedia PDF Downloads 1492304 Use of GIS and Remote Sensing for Calculating the Installable Photovoltaic and Thermal Power on All the Roofs of the City of Aix-en-Provence, France
Authors: Sofiane Bourchak, Sébastien Bridier
Abstract:
The objective of this study is to show how to calculate and map the quantity of solar energy (instantaneous and accumulated global solar radiation during the year) available on roofs in the city of Aix-en-Provence, which has a population of 140,000 inhabitants. The result is a geographic information system (GIS) layer, which represents the hourly and monthly production of solar energy on roofs throughout the year. Solar energy professionals can use it to optimize installations and to size energy production systems. The results are presented as a set of maps, tables, and histograms in order to identify the most cost-effective roofs in Aix-en-Provence in terms of photovoltaic power (electricity) and thermal power (hot water). Keywords: geographic information system, photovoltaic, thermal, solar potential, solar radiation
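The per-roof calculation behind such a GIS layer rests on the standard incidence-angle formula for a tilted plane. The sketch below shows only the direct-beam term for a single roof and sun position; the sun-position values and DNI figure are illustrative, not measured Aix-en-Provence data, and a full layer would also add diffuse and reflected components and shading.

```python
import math

def beam_irradiance_on_roof(dni, sun_elev_deg, sun_azim_deg,
                            tilt_deg, roof_azim_deg):
    """Direct-beam irradiance (W/m^2) on a tilted roof plane, from the
    standard cosine-of-incidence formula for a tilted surface."""
    a = math.radians(sun_elev_deg)
    b = math.radians(tilt_deg)
    cos_inc = (math.sin(a) * math.cos(b)
               + math.cos(a) * math.sin(b)
               * math.cos(math.radians(sun_azim_deg - roof_azim_deg)))
    return max(0.0, dni * cos_inc)   # the plane receives nothing from behind

# A south-facing roof tilted at 30 degrees with the sun at 60 degrees
# elevation due south: the beam strikes the plane at normal incidence.
print(round(beam_irradiance_on_roof(900, 60, 180, 30, 180), 1))   # → 900.0
```

Summing this quantity over every hour of the year and every roof polygon yields exactly the hourly and monthly accumulated-radiation layer the abstract describes.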
Procedia PDF Downloads 4362303 Modified Weibull Approach for Bridge Deterioration Modelling
Authors: Niroshan K. Walgama Wellalage, Tieling Zhang, Richard Dwight
Abstract:
State-based Markov deterioration models (SMDMs) sometimes fail to find accurate transition probability matrix (TPM) values, and hence lead to invalid future condition predictions or incorrect average deterioration rates, mainly due to drawbacks of existing nonlinear optimization-based algorithms and/or the subjective function types used for regression analysis. Furthermore, a set of separate functions of age for each condition state cannot be directly derived using a Markov model for a given bridge element group, which however is of interest to industrial partners. This paper presents a new approach for generating homogeneous SMDM output, namely the Modified Weibull approach, which consists of a set of appropriate functions to describe the percentage condition prediction of bridge elements in each state. These functions are combined with a Bayesian approach and Metropolis-Hastings Algorithm (MHA) based Markov Chain Monte Carlo (MCMC) simulation for quantifying the uncertainty in model parameter estimates. In this study, factors contributing to rail bridge deterioration were identified. Inspection data for 1,000 Australian railway bridges over 15 years were reviewed and filtered accordingly based on real operational experience. A network-level deterioration model for a typical bridge element group was developed using the proposed Modified Weibull approach. The condition state predictions obtained from this method were validated using statistical hypothesis tests with a test data set. Results show that the proposed model is able not only to predict network-level conditions accurately but also to capture the model uncertainties within a given confidence interval. Keywords: bridge deterioration modelling, Modified Weibull approach, MCMC, Metropolis-Hastings algorithm, Bayesian approach, Markov deterioration models
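The Bayesian MCMC step can be sketched with a plain Metropolis-Hastings sampler fitting a two-parameter Weibull model. The "inspection" data below are simulated (the actual Australian bridge records are not public), flat priors are assumed on the log-parameters, and this is a minimal illustration of the algorithm rather than the paper's full Modified Weibull formulation.

```python
import math
import random

random.seed(42)

# Simulated times (years) for a bridge element to leave its best condition
# state, standing in for the filtered inspection records.
true_shape, true_scale = 2.0, 25.0
data = [random.weibullvariate(true_scale, true_shape) for _ in range(200)]

def log_likelihood(shape, scale):
    """Weibull log-likelihood of the observed transition times."""
    return sum(math.log(shape / scale) + (shape - 1) * math.log(t / scale)
               - (t / scale) ** shape for t in data)

# Metropolis-Hastings on (log shape, log scale): propose a Gaussian step,
# accept with probability min(1, likelihood ratio).
theta = [0.0, math.log(30.0)]             # start at shape = 1, scale = 30
cur_ll = log_likelihood(math.exp(theta[0]), math.exp(theta[1]))
samples = []
for it in range(4000):
    prop = [t + random.gauss(0, 0.05) for t in theta]
    prop_ll = log_likelihood(math.exp(prop[0]), math.exp(prop[1]))
    if math.log(random.random()) < prop_ll - cur_ll:
        theta, cur_ll = prop, prop_ll
    if it >= 1000:                        # discard burn-in
        samples.append((math.exp(theta[0]), math.exp(theta[1])))

mean_shape = sum(s for s, _ in samples) / len(samples)
mean_scale = sum(c for _, c in samples) / len(samples)
print("posterior means: shape", round(mean_shape, 2),
      "scale", round(mean_scale, 1))
```

The retained chain approximates the joint posterior, so credible intervals for the shape and scale parameters (and hence for the condition-versus-age curves) fall directly out of the sample quantiles.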
Procedia PDF Downloads 7272302 Design of a Standard Weather Data Acquisition Device for the Federal University of Technology, Akure Nigeria
Authors: Isaac Kayode Ogunlade
Abstract:
Data acquisition (DAQ) is the process by which physical phenomena from the real world are transformed into electrical signals that are measured and converted into a digital format for processing, analysis, and storage by a computer. The DAQ is designed using a PIC18F4550 microcontroller, communicating with a Personal Computer (PC) through USB (Universal Serial Bus). The research deployed knowledge of data acquisition systems and embedded systems to develop a weather data acquisition device using an LM35 sensor to measure weather parameters, and used Artificial Intelligence (an Artificial Neural Network, ANN) and a statistical approach (Autoregressive Integrated Moving Average, ARIMA) to predict precipitation (rainfall). The device was placed beside a standard device in the Department of Meteorology, Federal University of Technology, Akure (FUTA) to evaluate its performance. Both devices (standard and designed) were subjected to 180 days of the same atmospheric conditions for data mining (temperature, relative humidity, and pressure). The acquired data were trained in the MATLAB R2012b environment using ANN and ARIMA to predict precipitation (rainfall). Root Mean Square Error (RMSE), Mean Absolute Error (MAE), the coefficient of determination (R²), and Mean Percentage Error (MPE) were deployed as standardized evaluation metrics to assess the performance of the models in predicting precipitation. The results from the developed device show that it has an efficiency of 96% and is also compatible with Personal Computers (PCs) and laptops. The simulation results for the acquired data show that the ANN model's precipitation (rainfall) prediction for two months (May and June 2017) revealed a disparity error of 1.59%, while that of ARIMA is 2.63%. The device will be useful in research, practical laboratories, and industrial environments. Keywords: data acquisition system, device design, weather, precipitation prediction, FUTA standard device
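The evaluation metrics named above are straightforward to compute; a minimal sketch follows. The rainfall numbers are invented for illustration, not the FUTA measurements.

```python
import math

def rmse(obs, pred):
    """Root mean square error."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def mpe(obs, pred):
    """Mean percentage error; signed, so over- and under-prediction cancel."""
    return 100.0 * sum((o - p) / o for o, p in zip(obs, pred)) / len(obs)

# Illustrative daily rainfall (mm): observed vs. a model forecast.
observed = [12.0, 8.0, 20.0, 5.0]
forecast = [10.0, 9.0, 18.0, 5.0]
print("RMSE:", round(rmse(observed, forecast), 2),
      "MAE:", round(mae(observed, forecast), 2),
      "MPE:", round(mpe(observed, forecast), 2), "%")
# → RMSE: 1.5 MAE: 1.25 MPE: 3.54 %
```

Because MPE is signed, it reveals systematic over- or under-forecasting that RMSE and MAE, which are both non-negative, cannot.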
Procedia PDF Downloads 922301 A Novel Epitope Prediction for Vaccine Designing against Ebola Viral Envelope Proteins
Authors: Manju Kanu, Subrata Sinha, Surabhi Johari
Abstract:
The Ebola virus is one of the best-studied viruses; however, no effective prevention against EBOV has been developed. Epitope-based vaccines provide a new strategy for prophylactic and therapeutic application of pathogen-specific immunity. A critical requirement of this strategy is the identification and selection of T-cell epitopes that act as vaccine targets. This study describes current methodologies for the selection process, with Ebola virus as a model system. A great challenge in the field of Ebola virus research is hence to design a universal vaccine. A combination of publicly available bioinformatics algorithms and computational tools was used to screen and select antigen sequences as potential T-cell epitopes for supertype Human Leukocyte Antigen (HLA) alleles. The MUSCLE and MOTIF tools were used to find the most conserved peptide sequences of viral proteins. Immunoinformatics tools were used for prediction of immunogenic peptides of viral proteins in Zaire strains of Ebola virus. Putative epitopes for viral proteins (VPs) were predicted from their conserved peptide sequences. Three tools, NetCTL 1.2, BIMAS, and SYFPEITHI, were used to predict the Class I putative epitopes, while three tools, ProPred, IEDB-SMM-align, and NetMHCII 2.2, were used to predict the Class II putative epitopes. B-cell epitopes were predicted with BCPREDS 1.0. Immunogenic peptides were identified and selected manually from the putative epitopes predicted by the online tools, individually for both MHC classes. Finally, the sequences of the predicted peptides for both MHC classes were examined for a common region, which was selected as the common immunogenic peptide. Immunogenic peptides were found for the viral proteins of Ebola virus: the epitopes FLESGAVKY and SSLAKHGEY. These predicted peptides could be promising candidates to be used as targets for vaccine design. Keywords: epitope, B cell, immunogenicity, Ebola
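The final consensus step, keeping only peptides that every tool flags, amounts to a set intersection. The per-tool output sets below are hypothetical, except for the two peptides named in the abstract.

```python
# Hypothetical per-tool outputs: each set holds 9-mers a tool flagged as
# putative MHC class I epitopes (invented, apart from the two peptides
# reported in the abstract).
netctl = {"FLESGAVKY", "SSLAKHGEY", "LLDPANNSY"}
bimas = {"FLESGAVKY", "SSLAKHGEY", "YTTTAGKRL"}
syfpeithi = {"FLESGAVKY", "SSLAKHGEY", "GVPPFQASL"}

# A peptide is kept only if every tool predicts it, mirroring the manual
# "common region" selection described in the abstract.
consensus = netctl & bimas & syfpeithi
print(sorted(consensus))   # → ['FLESGAVKY', 'SSLAKHGEY']
```

Requiring agreement across independent predictors trades recall for precision: peptides surviving the intersection are fewer but more likely to be genuinely immunogenic.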
Procedia PDF Downloads 3142300 Thermo-Mechanical Analysis of Composite Structures Utilizing a Beam Finite Element Based on Global-Local Superposition
Authors: Andre S. de Lima, Alfredo R. de Faria, Jose J. R. Faria
Abstract:
Accurate prediction of thermal stresses is particularly important for laminated composite structures, as large temperature changes may occur during fabrication and field application. The transverse normal deformation plays an important role in the prediction of such stresses, especially for problems involving thick laminated plates subjected to uniform temperature loads. Bearing this in mind, the present study investigates the thermo-mechanical behavior of laminated composite structures using a new beam element based on global-local superposition, accounting for through-the-thickness effects. The element formulation is based on a global-local superposition in the thickness direction, utilizing a cubic global displacement field in combination with a linear layerwise local displacement distribution, which assures zig-zag behavior of the stresses and displacements. By enforcing interlaminar stress (normal and shear) and displacement continuity, as well as traction-free conditions at the upper and lower surfaces, the number of degrees of freedom in the model is kept independent of the number of layers. Moreover, the proposed formulation allows for the determination of transverse shear and normal stresses directly from the constitutive equations, without the need for post-processing. Numerical results obtained with the beam element were compared to analytical solutions, as well as to results obtained with commercial finite elements, rendering satisfactory results for a range of length-to-thickness ratios. The results confirm the need for an element with through-the-thickness capabilities and indicate that the present formulation is a promising alternative for such analysis. Keywords: composite beam element, global-local superposition, laminated composite structures, thermal stresses
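The kinematic assumption can be written compactly. The notation below is assumed for illustration, since the abstract does not give the element's equations explicitly; it shows only the general shape of a global-local (zig-zag) field.

```latex
% Axial displacement of layer k: a cubic global field in the thickness
% coordinate z, superposed with a linear layerwise (zig-zag) correction
% in the local layer coordinate \zeta_k \in [-1, 1].
u^{(k)}(x,z) \;=\;
\underbrace{u_0(x) + z\,u_1(x) + z^{2}u_2(x) + z^{3}u_3(x)}_{\text{global, cubic through the thickness}}
\;+\;
\underbrace{\zeta_k\,\bar{u}^{(k)}(x)}_{\text{local, linear per layer}}
```

Enforcing interlaminar continuity of displacement and transverse stress, plus the traction-free surface conditions, eliminates the layerwise unknowns \(\bar{u}^{(k)}\), which is what keeps the degree-of-freedom count independent of the number of layers.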
Procedia PDF Downloads 1552299 Practical Modelling of RC Structural Walls under Monotonic and Cyclic Loading
Authors: Reza E. Sedgh, Rajesh P. Dhakal
Abstract:
Shear walls have been used extensively as the main lateral force resisting systems in multi-storey buildings. Recent developments in performance-based design urge practicing engineers to conduct nonlinear static or dynamic analysis to evaluate the seismic performance of multi-storey shear wall buildings by employing the distinct analytical models suggested in the literature. For practical purposes, the applicability of macroscopic models to simulate the global and local nonlinear behavior of structural walls outweighs that of microscopic models. The required skill level, computational time, and limited access to specialized RC finite element packages prevent the general application of the latter in performance-based design or assessment of multi-storey shear wall buildings in design offices. Hence, this paper is organized to verify the capability of the nonlinear shell element in a commercially available package (SAP2000) to simulate the results of selected specimens under monotonic and cyclic loads, using the simplified cyclic material laws available in the analytical tool. The selection of constitutive models, the determination of the related parameters of the constituent materials, and an appropriate nonlinear shear model are presented in detail. Adoption of the proposed simple model demonstrated that the predicted results follow the overall trend of the experimental force-displacement curve. Although the predicted ultimate strength and the overall shape of the hysteresis loops agreed to some extent with experiment, predicting the ultimate displacement (the point of significant strength degradation) remains challenging in some cases. Keywords: analytical model, nonlinear shell element, structural wall, shear behavior
Procedia PDF Downloads 4042298 Trauma Scores and Outcome Prediction After Chest Trauma
Authors: Mohamed Abo El Nasr, Mohamed Shoeib, Abdelhamid Abdelkhalik, Amro Serag
Abstract:
Background: Early assessment of the severity of chest trauma, either blunt or penetrating, is of critical importance in predicting patient outcome. Different trauma scoring systems are widely available and are based on anatomical or physiological parameters to predict patient morbidity or mortality. To date, there is no ideal, universally accepted trauma score that can be applied in all trauma centers and is suitable for assessing the severity of chest trauma. Aim: Our aim was to compare various trauma scoring systems regarding their ability to predict morbidity and mortality in chest trauma patients. Patients and Methods: This was a prospective study including 400 patients with chest trauma who were managed at Tanta University Emergency Hospital, Egypt, during a period of 2 years (March 2014 until March 2016). The patients were divided into 2 groups according to the mode of trauma: blunt or penetrating. The collected data included age, sex, hemodynamic status on admission, intrathoracic injuries, and associated extra-thoracic injuries. Patient outcomes including mortality, need for thoracotomy, need for ICU admission, need for mechanical ventilation, length of hospital stay, and the development of acute respiratory distress syndrome were also recorded. The relevant data were used to calculate the following trauma scores: 1. Anatomical scores: abbreviated injury scale (AIS), injury severity score (ISS), new injury severity score (NISS), and chest wall injury scale (CWIS). 2. Physiological scores: revised trauma score (RTS) and Acute Physiology and Chronic Health Evaluation II (APACHE II) score. 3. Combined score: Trauma and Injury Severity Score (TRISS). 4. Chest-specific score: thoracic trauma severity score (TTSS). All these scores were analyzed statistically to determine their sensitivity and specificity and were compared regarding their power to predict mortality and morbidity in blunt and penetrating chest trauma patients.
Results: The incidence of mortality was 3.75% (15/400). Eleven patients (11/230) died in the blunt chest trauma group, while 4 patients (4/170) died in the penetrating trauma group. The mortality rate increased more than threefold, to 13% (13/100), in patients with severe chest trauma (ISS > 16). The physiological scores APACHE II and RTS had the highest predictive value for mortality in both blunt and penetrating chest injuries. The physiological score APACHE II, followed by the combined score TRISS, was more predictive of intensive care admission in penetrating injuries, while RTS was more predictive in blunt trauma. Also, RTS had a higher predictive value for the need for mechanical ventilation, followed by the combined score TRISS. The APACHE II score was more predictive of the need for thoracotomy in penetrating injuries, and the chest-specific score TTSS was more predictive in blunt injuries. The anatomical score ISS and the TTSS were more predictive of prolonged hospital stay in penetrating and blunt injuries, respectively. Conclusion: Trauma scores including physiological parameters have a higher predictive power for mortality in both blunt and penetrating chest trauma. They are more suitable for assessment of injury severity and prediction of patient outcomes. Keywords: chest trauma, trauma scores, blunt injuries, penetrating injuries
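Of the physiological scores compared above, the RTS is the simplest to compute; a sketch follows. The coefficients and banded codings are the conventional published RTS values, stated here from general knowledge rather than from the paper itself.

```python
# Revised Trauma Score (RTS) from coded Glasgow Coma Scale (GCS),
# systolic blood pressure (SBP, mmHg) and respiratory rate (RR, /min).
def code_gcs(gcs):
    return 4 if gcs >= 13 else 3 if gcs >= 9 else 2 if gcs >= 6 \
        else 1 if gcs >= 4 else 0

def code_sbp(sbp):
    return 4 if sbp > 89 else 3 if sbp >= 76 else 2 if sbp >= 50 \
        else 1 if sbp >= 1 else 0

def code_rr(rr):
    # RR 10-29 scores the full 4; tachypnea above 29 scores 3.
    return 3 if rr > 29 else 4 if rr >= 10 else 2 if rr >= 6 \
        else 1 if rr >= 1 else 0

def revised_trauma_score(gcs, sbp, rr):
    """Weighted sum of the three coded values (maximum about 7.84)."""
    return 0.9368 * code_gcs(gcs) + 0.7326 * code_sbp(sbp) + 0.2908 * code_rr(rr)

# A physiologically normal patient scores the maximum.
print(round(revised_trauma_score(gcs=15, sbp=120, rr=16), 4))   # → 7.8408
```

Because all three inputs are bedside physiological observations, the RTS can be scored on admission, which is consistent with its strong showing for early mortality prediction in this study.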
Procedia PDF Downloads 4212297 Smart Technology for Hygrothermal Performance of Low Carbon Material Using an Artificial Neural Network Model
Authors: Manal Bouasria, Mohammed-Hichem Benzaama, Valérie Pralong, Yassine El Mendili
Abstract:
Reducing the quantity of cement in cementitious composites can help to reduce the environmental effect of construction materials. By-products such as ferronickel slag (FNS), fly ash (FA), and Crepidula fornicata (CR) are promising options for cement replacement. In this work, we investigated the relevance of substituting cement with FNS-CR and FA-CR blends for the mechanical properties of mortar and the thermal properties of concrete. For ages ranging from 2 to 28 days, the mechanical properties were obtained by 3-point bending and compression tests. The chosen mix was used to construct a prototype in order to study the material's hygrothermal performance. The data collected by the sensors placed on the prototype were utilized to build an artificial neural network. Keywords: artificial neural network, cement, circular economy, concrete, by-products
Procedia PDF Downloads 114