Search results for: deep neural models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9152

5852 Estimation of Probabilistic Fatigue Crack Propagation Models of AZ31 Magnesium Alloys under Various Load Ratio Conditions by Using the Interpolation of a Random Variable

Authors: Seon Soon Choi

Abstract:

The essential purpose of this study is to present a fatigue crack propagation model that adequately describes the stochastic fatigue crack growth behavior of a rolled magnesium alloy, AZ31, under various load ratio conditions. Fatigue crack propagation experiments were carried out in laboratory air under four load ratio (R) conditions using AZ31 to investigate the crack growth behavior. The stochastic fatigue crack growth behavior was analyzed using an interpolation of a random variable, Z, introduced into an empirical fatigue crack propagation model. The empirical fatigue models used in this study are the Paris-Erdogan model, the Walker model, the Forman model, and the modified Forman model. It was found that the random variable is useful in describing the stochastic fatigue crack growth behavior under various load ratio conditions. A probabilistic model that adequately describes the stochastic fatigue crack growth behavior under various load ratio conditions is also proposed.
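
As a rough illustration of the idea, the sketch below applies a lognormally distributed random factor Z to the Paris-Erdogan growth law; the material constants, load amplitude, crack geometry, and distribution of Z are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Paris-Erdogan law scaled by a random variable Z (assumed lognormal here):
#   da/dN = Z * C * (dK)**m,   dK = delta_sigma * sqrt(pi * a)   (centre through-crack)
rng = np.random.default_rng(0)
C, m, delta_sigma = 1e-11, 3.0, 80.0            # illustrative constants
n_paths, n_cycles = 200, 5000
a = np.full(n_paths, 1e-3)                      # initial crack length, 1 mm
Z = rng.lognormal(mean=0.0, sigma=0.2, size=n_paths)   # one realisation per specimen

trajectories = np.empty((n_cycles + 1, n_paths))
trajectories[0] = a
for k in range(1, n_cycles + 1):                # cycle-by-cycle integration
    dK = delta_sigma * np.sqrt(np.pi * a)
    a = a + Z * C * dK ** m
    trajectories[k] = a
# The spread of 'trajectories' across paths reflects the stochastic growth behaviour.
```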

Keywords: magnesium alloys, fatigue crack propagation model, load ratio, interpolation of random variable

Procedia PDF Downloads 402
5851 6D Posture Estimation of Road Vehicles from Color Images

Authors: Yoshimoto Kurihara, Tad Gonsalves

Abstract:

In the field of object posture estimation, current research typically estimates the position and angle of an object by storing a 3D model of the object in advance on a computer and matching the observation against that model. In this research, however, we have succeeded in creating a module that is much simpler, smaller in scale, and faster in operation. Our 6D pose estimation model consists of two different networks: a classification network and a regression network. From a single RGB image, the trained model estimates the class of the object in the image, the coordinates of the object, and its rotation angle in 3D space. In addition, we compared the estimation accuracy for each camera position, i.e., the angle from which the object was captured. The highest accuracy was recorded when the camera position was 75°: the classification accuracy was about 87.3%, and the regression accuracy was about 98.9%.
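
The two-network structure described above can be sketched as a shared backbone with a classification head and a regression head. The paper uses an AlexNet-style network; the tiny backbone and layer sizes below are placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

class PoseNet(nn.Module):
    """Shared convolutional backbone with two heads: object class, and a
    6-vector of 3D coordinates plus rotation angles."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.cls_head = nn.Linear(64, n_classes)   # classification network
        self.reg_head = nn.Linear(64, 6)           # x, y, z and three rotation angles

    def forward(self, img):                        # img: (batch, 3, H, W)
        feat = self.backbone(img)
        return self.cls_head(feat), self.reg_head(feat)

# Training would typically combine cross-entropy on the class logits with an
# MSE (or similar) loss on the pose vector.
```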

Keywords: 6D posture estimation, image recognition, deep learning, AlexNet

Procedia PDF Downloads 139
5850 Knowledge Discovery from Production Databases for Hierarchical Process Control

Authors: Pavol Tanuska, Pavel Vazan, Michal Kebisek, Dominika Jurovata

Abstract:

The paper presents the results of a project oriented toward the use of knowledge discovered from production systems for the needs of hierarchical process control. One of the main project goals was the proposal of a knowledge discovery model for process control. Specific data mining methods and techniques were used for defined process control problems. The gained knowledge was applied to a real production system, and the proposed solution was thereby verified. The paper documents how newly discovered knowledge can be applied in real hierarchical process control and specifies the opportunities for applying the proposed knowledge discovery model to hierarchical process control.

Keywords: hierarchical process control, knowledge discovery from databases, neural network, process control

Procedia PDF Downloads 467
5849 Replacement of the Distorted Dentition of the Cone Beam Computed Tomography Scan Models for Orthognathic Surgery Planning

Authors: T. Almutairi, K. Naudi, N. Nairn, X. Ju, B. Eng, J. Whitters, A. Ayoub

Abstract:

Purpose: At present, Cone Beam Computed Tomography (CBCT) imaging does not record dental morphology accurately due to the scattering produced by metallic restorations and the reported magnification. The aim of this pilot study is the development and validation of a new method for the replacement of the distorted dentition of CBCT scans with the dental image captured by a digital intraoral camera. Materials and Method: Six dried skulls with orthodontic brackets on the teeth were used in this study. Three intra-oral markers made of dental stone were constructed and attached to the orthodontic brackets. The skulls were CBCT scanned, and the occlusal surface was captured using a TRIOS® 3D intraoral scanner. Marker-based and surface-based registrations were performed to fuse the digital intra-oral scan (IOS) into the CBCT models. This produced a new composite digital model of the skull and dentition. The skulls were scanned again using an accurate, commercial Faro® laser arm to produce the 'gold standard' model for the assessment of the accuracy of the developed method. The accuracy of the method was assessed by measuring the distance between the occlusal surfaces of the new composite model and the 'gold standard' 3D model of the skull and teeth. The procedure was repeated a week apart to measure the reproducibility of the method. Results: The results showed no statistically significant difference between the measurements on the first and second occasions. The absolute mean distance between the new composite model and the laser model ranged between 0.11 mm and 0.20 mm. Conclusion: The dentition of the CBCT scan can be accurately replaced with the dental image captured by the intra-oral scanner to create a composite model. This method will improve the accuracy of orthognathic surgical prediction planning, with the final goal of fabricating a physical occlusal wafer to guide orthognathic surgery without the need for dental impressions.
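
A minimal sketch of the accuracy assessment step, assuming both occlusal surfaces are available as point clouds; the file names and the nearest-neighbour distance metric are assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_absolute_surface_distance(test_pts, reference_pts):
    """Mean absolute distance from each vertex of the composite model's
    occlusal surface to the nearest point of the 'gold standard' surface."""
    tree = cKDTree(reference_pts)
    d, _ = tree.query(test_pts)                 # nearest-neighbour distances
    return d.mean()

# composite = np.loadtxt("composite_occlusal.xyz")   # hypothetical point files
# laser = np.loadtxt("laser_gold_standard.xyz")
# print(mean_absolute_surface_distance(composite, laser))
```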

Keywords: orthognathic surgery, superimposition, models, cone beam computed tomography

Procedia PDF Downloads 176
5848 Molecular Modeling and Prediction of the Physicochemical Properties of Polyols in Aqueous Solution

Authors: Maria Fontenele, Claude-Gilles Dussap, Vincent Dumouilla, Baptiste Boit

Abstract:

Roquette Frères is a producer of plant-based ingredients that employs many processes to extract relevant molecules and often transforms them through chemical and physical processes to create desired ingredients with specific functionalities. In this context, Roquette encounters numerous complex multi-component systems in its processes, including fibers, proteins, and carbohydrates, in an aqueous environment. To develop, control, and optimize both new and existing processes, Roquette aims to develop new in silico tools. Currently, Roquette uses process modelling tools which include specific thermodynamic models, and is seeking to develop computational methodologies such as molecular dynamics simulations to gain insight into the interactions in such complex media, especially hydrogen bonding interactions. The issue at hand concerns aqueous mixtures of polyols with high dry matter content. The polyols mannitol and sorbitol are diastereoisomers that have nearly identical chemical structures but very different physicochemical properties: for example, the solubility of sorbitol in water is 2.5 kg/kg of water, while mannitol has a solubility of 0.25 kg/kg of water at 25°C. Therefore, predicting liquid-solid equilibrium properties in this case requires sophisticated solution models that cannot be based solely on chemical group contributions, since the constitutive chemical groups of mannitol and sorbitol are the same. Recognizing the significance of solvation phenomena in polyols, the GePEB (Chemical Engineering, Applied Thermodynamics, and Biosystems) team at Institut Pascal has developed the COSMO-UCA model, which has the structural advantage of using quantum mechanics tools to predict formation and phase equilibrium properties. In this work, we use molecular dynamics simulations to elucidate the behavior of polyols in aqueous solution. Specifically, we employ simulations to compute essential metrics such as radial distribution functions and hydrogen bond autocorrelation functions. Our findings illuminate a fundamental contrast: sorbitol and mannitol exhibit different hydrogen bond lifetimes in aqueous environments. This observation serves as a cornerstone in elucidating the divergent physicochemical properties inherent to each compound, shedding light on the nuanced interplay between their molecular structures and water interactions. We also present a methodology to predict the physicochemical properties of complex solutions, taking as sole input the three-dimensional structure of the molecules in the medium. Finally, by developing knowledge models, we represent some physicochemical properties of aqueous solutions of sorbitol and mannitol.
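
As a hedged sketch of one of the metrics mentioned above, the function below computes a radial distribution function g(r) for a single frame of particle positions in a cubic periodic box; the bin width and cut-off are arbitrary placeholders, and production analyses would average over many trajectory frames.

```python
import numpy as np

def radial_distribution(positions, box, dr=0.1, r_max=10.0):
    """g(r) for one frame of an MD trajectory under cubic periodic boundaries."""
    n = len(positions)
    rho = n / box ** 3                                   # number density
    diff = positions[:, None, :] - positions[None, :, :]
    diff -= box * np.round(diff / box)                   # minimum-image convention
    r = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(n, k=1)]
    bins = np.arange(0.0, r_max + dr, dr)
    counts, edges = np.histogram(r, bins=bins)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal = rho * shell_vol * n / 2.0                    # expected pair count, ideal gas
    return 0.5 * (edges[1:] + edges[:-1]), counts / ideal
```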

Keywords: COSMO models, hydrogen bond, molecular dynamics, thermodynamics

Procedia PDF Downloads 27
5847 Application of Gamma Frailty Model in Survival of Liver Cirrhosis Patients

Authors: Elnaz Saeedi, Jamileh Abolaghasemi, Mohsen Nasiri Tousi, Saeedeh Khosravi

Abstract:

Goals and Objectives: A typical analysis of survival data involves the modeling of time-to-event data, such as the time until death. A frailty model is a random effects model for time-to-event data in which the random effect has a multiplicative influence on the baseline hazard function. This article aims to investigate the use of a gamma frailty model with concomitant variables in order to identify the prognostic factors that influence the survival times of liver cirrhosis patients. Methods: During the one-year study period (May 2008-May 2009), data were drawn from the recorded information of patients with liver cirrhosis who were scheduled for liver transplantation and were followed up for at least seven years at Imam Khomeini Hospital in Iran. In order to determine the effective factors for cirrhotic patients' survival in the presence of latent variables, the gamma frailty distribution was applied. Parametric models, namely the Exponential and Weibull distributions, were considered for survival time. Data analysis was performed using R software, and a significance level of 0.05 was used for all tests. Results: 305 patients with liver cirrhosis, including 180 (59%) men and 125 (41%) women, were studied. The average age of the patients was 39.8 years. At the end of the study, 82 (26%) patients had died, among them 48 (58%) men and 34 (42%) women. The main cause of liver cirrhosis was hepatitis B (23%), followed by cryptogenic cirrhosis (22.6%) as the second factor. Overall, the mean 7-year survival time was 28.44 months; for deceased and censored patients it was 19.33 and 31.79 months, respectively. Exponential and Weibull parametric survival models incorporating the gamma frailty distribution were fitted to the cirrhosis data. In both models, factors including age, serum bilirubin, serum albumin, and encephalopathy had a significant effect on the survival time of cirrhotic patients. Conclusion: To investigate the factors affecting the time of death of patients with liver cirrhosis in the presence of latent variables, a gamma frailty model with parametric distributions appears desirable.
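
The paper's analysis was carried out in R; purely as an illustration of the gamma frailty idea, the Python sketch below evaluates the population-averaged survival implied by a Weibull baseline hazard and a gamma frailty with mean 1 and variance theta. All parameter values are made up.

```python
import numpy as np

def marginal_survival(t, shape, scale, beta_x=0.0, theta=0.5):
    """Population-averaged survival under a Weibull baseline hazard and a
    gamma frailty with mean 1 and variance theta:
        S(t) = [1 + theta * H0(t) * exp(x'beta)] ** (-1 / theta)
    """
    H0 = (t / scale) ** shape                   # Weibull cumulative baseline hazard
    return (1.0 + theta * H0 * np.exp(beta_x)) ** (-1.0 / theta)

t = np.linspace(0, 84, 85)                      # months of follow-up
S = marginal_survival(t, shape=1.2, scale=60.0, beta_x=0.3, theta=0.8)
```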

Keywords: frailty model, latent variables, liver cirrhosis, parametric distribution

Procedia PDF Downloads 252
5846 Stability Assessment of Chamshir Dam Based on DEM, South West Zagros

Authors: Rezvan Khavari

Abstract:

The Zagros fold-thrust belt in SW Iran is part of the Alpine-Himalayan system and consists of a variety of structures with different sizes and geometries. The study area is Chamshir Dam, which is located on the Zohreh River, 20 km southeast of Gachsaran City (southwest Iran). Satellite images are a valuable means available to geologists for locating geological or geomorphological features that express regional fault or fracture systems; therefore, satellite images were used for the structural analysis of the Chamshir Dam area. In addition, 3D models of the area were constructed using the DEM and geological maps. Based on these models, all the acquired fracture trace data were integrated in a Geographic Information System (GIS) environment using ArcGIS software. Based on field investigation and the DEM model, the main structures in the area consist of the Chamshir syncline and two fault sets: the main thrust faults trending NW-SE and small normal faults trending NE-SW. There are three joint sets in the study area, two of which (J1 and J3) form the main large fractures around Chamshir Dam. These fractures coincide with the normal faults trending NE-SW. The third joint set, trending NW-SE, is normal to the others. In general, according to the topographic, geomorphological, and structural geology evidence, Chamshir Dam has a potential for sliding in some parts of the Gachsaran Formation.

Keywords: DEM, Chamshir Dam, Zohreh River, satellite images

Procedia PDF Downloads 469
5845 Numerical Pricing of Financial Options under Irrational Exercise Times and Regime-Switching Models

Authors: Mohammad Saber Rohi, Saghar Heidari

Abstract:

In this paper, we study the pricing problem of American options under a regime-switching model with the possibility of a non-optimal exercise policy (early or late exercise time), which is called an irrational strategy. For this, we consider a Markov-modulated model for the dynamics of the underlying asset as an alternative to the classical Black-Scholes-Merton (BSM) model, and an intensity-based model for the irrational strategy, to provide more realistic results for American option prices under irrational behavior in real financial markets. Applying a partial differential equation (PDE) approach, the pricing problem of American options under regime-switching models can be formulated as coupled PDEs. To solve the resulting system of PDEs, we apply a finite element method as the numerical procedure for the resulting variational inequality. Under some appropriate assumptions, we establish the stability of the method and compare its accuracy to some recent works to illustrate the suitability of the proposed model and the accuracy of the applied numerical method for the pricing problem of American options under the regime-switching model with irrational behaviors.
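
The paper solves coupled PDEs with finite elements; as a much simpler stand-in for the early-exercise (variational inequality) idea in a single regime, the sketch below prices an American put on a Cox-Ross-Rubinstein binomial tree, taking the maximum of continuation value and payoff at every node. Parameter values are illustrative, and this is not the authors' method.

```python
import numpy as np

def american_put_crr(S0, K, r, sigma, T, steps=500):
    """CRR binomial tree with an early-exercise check at every node, the
    discrete analogue of max(payoff, continuation)."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)          # risk-neutral up probability
    disc = np.exp(-r * dt)
    j = np.arange(steps + 1)                    # number of down moves
    S = S0 * u ** (steps - j) * d ** j
    V = np.maximum(K - S, 0.0)                  # payoff at maturity
    for i in range(steps - 1, -1, -1):
        j = np.arange(i + 1)
        S = S0 * u ** (i - j) * d ** j
        cont = disc * (p * V[:-1] + (1 - p) * V[1:])
        V = np.maximum(cont, K - S)             # early-exercise (obstacle) condition
    return V[0]

print(american_put_crr(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0))
```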

Keywords: irrational exercise strategy, rationality parameter, regime-switching model, American option, finite element method, variational inequality

Procedia PDF Downloads 60
5844 A Machine Learning Approach for Anomaly Detection in Environmental IoT-Driven Wastewater Purification Systems

Authors: Giovanni Cicceri, Roberta Maisano, Nathalie Morey, Salvatore Distefano

Abstract:

The main goal of this paper is to present a solution for a water purification system based on an Environmental Internet of Things (EIoT) platform to monitor and control water quality, together with machine learning (ML) models to support decision making and speed up the water purification processes. A real case study has been implemented by deploying an EIoT platform and a network of devices, called Gramb meters and belonging to the Gramb project, on wastewater purification systems located in Calabria, in the south of Italy. The data thus collected are used to control the wastewater quality, detect anomalies, and predict the behaviour of the purification system. To this end, three different statistical and machine learning models have been adopted and compared: Autoregressive Integrated Moving Average (ARIMA), a Long Short Term Memory (LSTM) autoencoder, and Facebook Prophet (FP). The results demonstrated that the ML solution (LSTM) outperforms the classical statistical approaches (ARIMA, FP) in terms of accuracy, efficiency, and effectiveness in monitoring and controlling the wastewater purification processes.
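
A minimal sketch of the LSTM autoencoder idea, assuming PyTorch and windowed sensor readings; layer sizes and window length are placeholders, and anomalies would be flagged by a large reconstruction error on new windows.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Reconstructs a window of sensor readings; a large reconstruction error
    on new data flags a potential anomaly."""
    def __init__(self, n_features=1, seq_len=24, hidden=32):
        super().__init__()
        self.seq_len = seq_len
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                       # x: (batch, seq_len, n_features)
        _, (h, _) = self.encoder(x)             # h: (1, batch, hidden)
        z = h[-1].unsqueeze(1).repeat(1, self.seq_len, 1)
        dec, _ = self.decoder(z)
        return self.out(dec)                    # reconstruction of x

# anomaly score per window: torch.mean((model(x) - x) ** 2, dim=(1, 2))
```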

Keywords: environmental internet of things, EIoT, machine learning, anomaly detection, environment monitoring

Procedia PDF Downloads 139
5843 Effect of Base Coarse Layer on Load-Settlement Characteristics of Sandy Subgrade Using Plate Load Test

Authors: A. Nazeri, R. Ziaie Moayed, H. Ghiasinejad

Abstract:

The present research was performed to investigate the effect of base course application on the load-settlement characteristics of a sandy subgrade using the plate load test. The main parameter investigated in this study was the subgrade reaction coefficient. The model tests were conducted in a 1.35 m long, 1 m wide, and 1 m deep steel test box at Imam Khomeini International University (IKIU Calibration Chamber). The base courses used in this research had three different thicknesses: 15 cm, 20 cm, and 30 cm. The test results indicated that, when a base course is placed over a loose sandy subgrade, the value of the subgrade reaction coefficient can be increased from 7 to 132, 224, and 396 in the presence of the 15 cm, 20 cm, and 30 cm base course, respectively.

Keywords: modulus of subgrade reaction, plate load test, base course, sandy subgrade

Procedia PDF Downloads 237
5842 Parallel Computing: Offloading Matrix Multiplication to GPU

Authors: Bharath R., Tharun Sai N., Bhuvan G.

Abstract:

This project focuses on developing a parallel computing method aimed at optimizing matrix multiplication through GPU acceleration. Addressing algorithmic challenges, GPU programming intricacies, and integration issues, the project aims to enhance efficiency and scalability. The methodology involves algorithm design, GPU programming, and optimization techniques. Future plans include advanced optimizations, extended functionality, and integration with high-level frameworks. User engagement is emphasized through user-friendly interfaces, open-source collaboration, and continuous refinement based on feedback. The project's impact extends to significantly improving matrix multiplication performance in scientific computing and machine learning applications.
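
The project targets CUDA directly; as a compact illustration of offloading the multiplication to a GPU from Python, the sketch below assumes the CuPy library and a CUDA-capable device are available. It is not the project's implementation.

```python
import time
import numpy as np
import cupy as cp                               # GPU array library (assumed installed)

n = 4096
a_cpu = np.random.rand(n, n).astype(np.float32)
b_cpu = np.random.rand(n, n).astype(np.float32)

a_gpu, b_gpu = cp.asarray(a_cpu), cp.asarray(b_cpu)   # offload operands to the GPU
start = time.perf_counter()
c_gpu = a_gpu @ b_gpu                           # cuBLAS-backed matrix multiplication
cp.cuda.Stream.null.synchronize()               # wait for the kernel to finish
print(f"GPU matmul: {time.perf_counter() - start:.3f} s")

c_cpu = cp.asnumpy(c_gpu)                       # copy the result back to host memory
```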

Keywords: matrix multiplication, parallel processing, cuda, performance boost, neural networks

Procedia PDF Downloads 35
5841 Prototyping a Portable, Affordable Sign Language Glove

Authors: Vidhi Jain

Abstract:

Communication between speakers and non-speakers of American Sign Language (ASL) can be problematic, inconvenient, and expensive. This project attempts to bridge the communication gap by designing a portable glove that captures the user’s ASL gestures and outputs the translated text on a smartphone. The glove is equipped with flex sensors, contact sensors, and a gyroscope to measure the flexion of the fingers, the contact between fingers, and the rotation of the hand. The glove’s Arduino UNO microcontroller analyzes the sensor readings to identify the gesture from a library of learned gestures. The Bluetooth module transmits the gesture to a smartphone. Using this device, one day speakers of ASL may be able to communicate with others in an affordable and convenient way.

Keywords: sign language, morse code, convolutional neural network, American sign language, gesture recognition

Procedia PDF Downloads 49
5840 Quantification of the Erosion Effect on Small Caliber Guns: Experimental and Numerical Analysis

Authors: Dhouibi Mohamed, Stirbu Bogdan, Chabotier André, Pirlot Marc

Abstract:

The effects of erosion and wear on the performance of small caliber guns have been analyzed through numerical and experimental studies, but mainly qualitative observations have been performed, and correlations between the volume change of the chamber and the maximum pressure are limited. This paper focuses on the development of a numerical model to predict the evolution of the maximum pressure as the interior shape of the chamber changes over the weapon's life phases. To fulfill this goal, an experimental campaign, followed by a numerical simulation study, was carried out. Two test barrels, '5.56x45mm NATO' and '7.62x51mm NATO', are considered. First, a Coordinate Measuring Machine (CMM) with a contact scanning probe is used to measure the interior profile of the barrels after each 300-shot cycle until they are worn out. Simultaneously, the EPVAT (Electronic Pressure Velocity and Action Time) method with a Weibel radar is used to measure (i) the chamber pressure, (ii) the action time, and (iii) the bullet velocity in each barrel. Second, a numerical simulation study is carried out: a coupled interior ballistics model is developed using the dynamic finite element program LS-DYNA. In this work, two different models are elaborated: (i) a coupled Eulerian-Lagrangian model using fluid-structure interaction (FSI) techniques, and (ii) a coupled thermo-mechanical finite element model using a lumped parameter model (LPM) as a subroutine. These numerical models are validated against three experimental results: (i) the muzzle velocity, (ii) the chamber pressure, and (iii) the surface morphology of fired projectiles. Results show good agreement between experiments and numerical simulations. Next, a comparison between the two models is conducted: the projectile motions, the dynamic engraving resistances, and the maximum pressures are compared and analyzed. Finally, using the obtained database, a statistical correlation between the muzzle velocity, the maximum pressure, and the chamber volume is established.

Keywords: engraving process, finite element analysis, gun barrel erosion, interior ballistics, statistical correlation

Procedia PDF Downloads 195
5839 The Kafrah Dam (The Oldest Dam in History)

Authors: Mohamed Bekhit Gad Khalil

Abstract:

This is the oldest dam in history. It was built by the ancient Egyptians around 2650 B.C. to control flooding and is believed to have been constructed between the Third and Fourth Dynasties. Many studies have been conducted on the dam. This report was prepared under my supervision and in cooperation with the Ministry of Tourism and Antiquities; the dam was re-documented and photographed again. On the northern side, the dam consists of irregularly shaped stones of varying sizes used randomly, with sand and soil filling the gaps between the stones and forming layers that make up the body of the dam. The eastern side of the dam consists of a series of regularly shaped stones that have been cut and constructed into a stepped, pyramid-like structure with a width of 15.7 meters and a height of 10 meters. The surface shows significant erosion and wear of the stones due to weather conditions, which has resulted in deep cavities in most of the stone blocks forming the surface.

Keywords: ministry of tourism and antiquities, excavations, registration, documentation

Procedia PDF Downloads 21
5838 Feasibility of an Extreme Wind Risk Assessment Software for Industrial Applications

Authors: Francesco Pandolfi, Georgios Baltzopoulos, Iunio Iervolino

Abstract:

The impact of extreme winds on industrial assets and the built environment is gaining increasing attention from stakeholders, including the corporate insurance industry. This has led to a progressively more in-depth study of building vulnerability and fragility to wind. Wind vulnerability models are used in probabilistic risk assessment to relate a loss metric to an intensity measure of the natural event, usually a gust or a mean wind speed. In fact, vulnerability models can be integrated with the wind hazard, which consists of associating a probability with each intensity level in a time interval (e.g., by means of return periods), to provide an assessment of future losses due to extreme wind. This has also given impetus to world- and regional-scale wind hazard studies. Another approach often adopted for the probabilistic description of building vulnerability to wind is the use of fragility functions, which provide the conditional probability that selected building components will exceed certain damage states, given the wind intensity. In fact, in the wind engineering literature, it is more common to find structural system- or component-level fragility functions than wind vulnerability models for an entire building. Loss assessment based on component fragilities requires logical combination rules that define the building's damage state given the damage state of each component, and the availability of a consequence model that provides the losses associated with each damage state. When risk calculations are based on numerical simulation of a structure's behavior during extreme wind scenarios, the interaction of component fragilities is intertwined with the computational procedure. However, simulation-based approaches are usually computationally demanding and case-specific. In this context, the present work introduces the ExtReMe wind risk assESsment prototype Software, ERMESS, which is being developed at the University of Naples Federico II. ERMESS is a wind risk assessment tool for insurance applications to industrial facilities, collecting a wide assortment of available wind vulnerability models and fragility functions to facilitate their incorporation into risk calculations based on built-in or user-defined wind hazard data. The software implements an alternative method for building-specific risk assessment based on existing component-level fragility functions and on a number of simplifying assumptions about their interactions. The applicability of this alternative procedure is explored by means of an illustrative proof-of-concept example, which considers four main building components, namely the roof covering, the roof structure, the envelope walls, and the envelope openings. The application shows that, despite the simplifying assumptions, the procedure can yield risk evaluations that are comparable to those obtained via more rigorous building-level simulation-based methods, at least in the considered example. The advantage of this approach lies in the fact that a database of building component fragility curves can be put to use for the development of new wind vulnerability models, covering building typologies not yet adequately addressed by existing works and whose rigorous development is usually beyond the budget of portfolio-related industrial applications.
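
As a hedged sketch of how a vulnerability model can be integrated with a wind hazard curve, the snippet below computes an expected annual loss ratio from an illustrative exceedance-rate curve and an illustrative mean damage ratio function; neither curve comes from ERMESS, and the functional forms are placeholders.

```python
import numpy as np

# Hazard: annual rate of exceeding gust speed v (illustrative functional form)
v = np.linspace(20.0, 90.0, 200)                        # gust speed grid, m/s
lam = 0.5 * np.exp(-((v - 20.0) / 15.0) ** 1.8)         # exceedance rate lambda(v)

# Vulnerability: mean damage ratio of the asset given gust speed v (illustrative)
mdr = 1.0 / (1.0 + np.exp(-(v - 55.0) / 5.0))

# Expected annual loss ratio: integrate MDR against the hazard density |d lambda / dv|
dlam_dv = np.gradient(lam, v)
eal = np.trapz(mdr * (-dlam_dv), v)
print(f"expected annual loss ratio = {eal:.4f}")
```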

Keywords: component wind fragility, probabilistic risk assessment, vulnerability model, wind-induced losses

Procedia PDF Downloads 173
5837 Radar-Based Classification of Pedestrian and Dog Using High-Resolution Raw Range-Doppler Signatures

Authors: C. Mayr, J. Periya, A. Kariminezhad

Abstract:

In this paper, we developed a learning framework for the classification of vulnerable road users (VRU) by their range-Doppler signatures. The frequency-modulated continuous-wave (FMCW) radar raw data is first pre-processed to obtain robust object range-Doppler maps per coherent time interval. The complex-valued range-Doppler maps captured from our outdoor measurements are then fed into a convolutional neural network (CNN) to learn the classification. This CNN has gone through a hyperparameter optimization process for improved learning. By learning VRU range-Doppler signatures, the three classes 'pedestrian', 'dog', and 'noise' are classified with an average accuracy of almost 95%. Interestingly, this classification accuracy holds for combined longitudinal and lateral object trajectories.
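
A minimal sketch of the classifier described above, assuming PyTorch and range-Doppler maps supplied as two channels (real and imaginary parts); the layer sizes are placeholders rather than the optimized hyperparameters of the paper.

```python
import torch
import torch.nn as nn

class RangeDopplerCNN(nn.Module):
    """Classifies a complex range-Doppler map (real/imag as two channels)
    into 'pedestrian', 'dog', or 'noise'."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, rd_map):                  # rd_map: (batch, 2, range_bins, doppler_bins)
        return self.net(rd_map)
```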

Keywords: machine learning, radar, signal processing, autonomous driving

Procedia PDF Downloads 225
5836 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning

Authors: Shayla He

Abstract:

Background and Purpose: According to Chamie (2017), it is estimated that no less than 150 million people, or about 2 percent of the world's population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population has increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend of the homeless population is crucial to helping states and cities make affordable housing plans and other community service plans ahead of time, to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on the homeless population and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and a Recurrent Neural Network (RNN), respectively, to predict the future trend of the homeless population. Each model was trained and tuned on the dataset from New York City, with accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using data from Seattle that was not part of the model training and tuning process in Phase 3. Results: Compared to the Linear Regression based model used by HUD et al (2019), HP-RNN significantly improved the Coefficient of Determination (R²) from -11.73 to 0.88 and reduced MSE by 99%. HP-RNN was then validated on the data from Seattle, WA, which showed a peak error of 14.5% between the actual and the predicted count. Finally, the modeling results were used to predict the trend during the COVID-19 pandemic. They show a good correlation between the actual and the predicted homeless population, with a peak error of less than 8.6%. Conclusions and Implications: This is the first work to apply an RNN to model time series of homelessness-related data. The model shows a close correlation between the actual and the predicted homeless population. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services to better prepare for the future. Moreover, this prediction can serve as a reference to policy makers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.
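
HP-RNN itself is not published here; the sketch below only shows the general shape of an LSTM-based one-step-ahead forecaster for a monthly series, assuming PyTorch, with the lookback window and hidden size as placeholders.

```python
import torch
import torch.nn as nn

class HPForecaster(nn.Module):
    """One-step-ahead forecaster for a monthly homeless-population series."""
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, window):                  # window: (batch, lookback, n_features)
        out, _ = self.rnn(window)
        return self.head(out[:, -1])            # predicted next monthly value

# Training would slide a fixed-length window over the historical series and
# minimise MSE between the prediction and the following month's count.
```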

Keywords: homeless, prediction, model, RNN

Procedia PDF Downloads 108
5835 Development of Evolutionary Algorithm by Combining Optimization and Imitation Approach for Machine Learning in Gaming

Authors: Rohit Mittal, Bright Keswani, Amit Mithal

Abstract:

This paper provides an overview of the application of computational intelligence techniques used to develop computer games, especially car racing. To give a deeper sense of the artificial intelligence involved, the paper is divided into sections on optimization, imitation, innovation, and a combined approach of optimization and imitation. The paper is mainly concerned with the combined approach, which covers different aspects of using fitness measures and supervised learning techniques to imitate aspects of behavior. The main achievement of this paper is based on modelling player behaviour and evolving new game content, such as racing tracks, for single-car racing on a single track.

Keywords: evolution algorithm, genetic, optimization, imitation, racing, innovation, gaming

Procedia PDF Downloads 630
5834 Fluid Catalytic Cracking: Zeolite Catalyzed Chemical Industry Processes

Authors: Mithil Pandey, Ragunathan Bala Subramanian

Abstract:

One of the major conversion technologies in the oil refinery industry is fluid catalytic cracking (FCC), which produces the majority of the world's gasoline. Useful products are generated from vacuum gas oil, heavy gas oil, and residue feedstocks by the FCC unit in an oil refinery. Moreover, zeolite catalysts (zeo-catalysts) have found widespread application and have proved substantial and paradigmatic in oil refining and petrochemical processes such as FCC because of their porous features. Several famous zeo-catalysts have been fabricated and applied in industrial processes as milestones in history and have brought about huge changes in petrochemicals. So far, more than twenty types of zeolites have been applied industrially, and their versatile porous architectures with their essential features have contributed to catalytic efficiency. This poster depicts the evolution of pore models in zeolite catalysts, which has been accompanied by increasing environmental and industrial demands. The crucial roles of modulating pore models are outlined for zeo-catalysts for the enhancement of their catalytic performance in various industrial processes. The development of industrial processes for FCC, aromatic conversions, and olefin production makes it obvious that the pore architecture plays a very important role in zeo-catalysis processes. Given the different requirements of industrial processes, rational construction of the pore model is critically essential. Besides, the pore structure of the zeolite has a substantial and direct effect on the utilization efficiency of the zeo-catalyst.

Keywords: catalysts, fluid catalytic cracking, industrial processes, zeolite

Procedia PDF Downloads 332
5833 Detection and Quantification of Active Pharmaceutical Ingredients as Adulterants in Garcinia cambogia Slimming Preparations Using NIR Spectroscopy Combined with Chemometrics

Authors: Dina Ahmed Selim, Eman Shawky Anwar, Rasha Mohamed Abu El-Khair

Abstract:

A rapid, simple, and efficient method with minimal sample treatment was developed for the authentication of Garcinia cambogia fruit peel powder and for determining undeclared active pharmaceutical ingredients (APIs) in its herbal slimming dietary supplements, using near infrared spectroscopy combined with chemometrics. Five featured adulterants, namely sibutramine, metformin, orlistat, ephedrine, and theophylline, were selected as target compounds. The near infrared spectral data matrix of authentic Garcinia cambogia fruit peel and of specimens degraded by intentional contamination with the five selected APIs was subjected to hierarchical cluster analysis to investigate their clustering pattern. SIMCA models were established to verify the genuineness of Garcinia cambogia fruit peel, which resulted in perfect classification of all tested specimens. Adulterated samples were used for the construction of PLSR models based on different API contents at minute levels of fraudulent practice (LOQ < 0.2% w/w). The suggested approach can be applied to enhance and guarantee the safety and quality of Garcinia fruit peel powder as a raw material and in dietary supplements.
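
A hedged sketch of the PLSR step, assuming scikit-learn, with hypothetical spectra and reference values loaded from placeholder files; the number of latent variables is an assumption to be chosen by cross-validation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# X: NIR spectra of spiked samples (n_samples x n_wavelengths); y: known adulterant
# content (% w/w) from the calibration design. Both files are hypothetical.
X = np.load("nir_spectra.npy")
y = np.load("adulterant_content.npy")

pls = PLSRegression(n_components=8)             # latent variables: placeholder value
y_cv = cross_val_predict(pls, X, y, cv=10)      # 10-fold cross-validated predictions
rmsecv = np.sqrt(np.mean((y - y_cv.ravel()) ** 2))
print(f"RMSECV = {rmsecv:.3f} % w/w")
```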

Keywords: Garcinia cambogia, quality control, NIR spectroscopy, chemometrics

Procedia PDF Downloads 68
5832 Suitability of Satellite-Based Data for Groundwater Modelling in Southwest Nigeria

Authors: O. O. Aiyelokun, O. A. Agbede

Abstract:

Numerical modelling of groundwater flow can be susceptible to calibration errors due to the lack of adequate ground-based hydro-meteorological stations in river basins. Groundwater resources management in Southwest Nigeria is currently challenged by overexploitation, lack of planning and monitoring, urbanization, and climate change; hence, to adopt models as decision support tools for sustainable groundwater management, they must be adequately calibrated. Since river basins in Southwest Nigeria are characterized by missing data and a lack of adequate ground-based hydro-meteorological stations, the need to adopt satellite-based data for constructing distributed models is crucial. This study seeks to evaluate the suitability of satellite-based data as a substitute for ground-based data in computing boundary conditions, by determining whether ground- and satellite-based meteorological data fit well in the Ogun and Oshun River basins. The Climate Forecast System Reanalysis (CFSR) global meteorological dataset was first obtained in daily form and converted to monthly form for a period of 432 months (January 1979 to June 2014). Afterwards, ground-based meteorological data for Ikeja (1981-2010), Abeokuta (1983-2010), and Oshogbo (1981-2010) were compared with the CFSR data using goodness-of-fit (GOF) statistics. The study revealed that, based on mean absolute error (MAE), coefficient of correlation (r), and coefficient of determination (R²), all meteorological variables except wind speed fit well. It was further revealed that maximum and minimum temperature, relative humidity, and rainfall had a high range of index of agreement (d) and ratio of standard deviations (rSD), implying that the CFSR dataset could be used to compute boundary conditions such as groundwater recharge and potential evapotranspiration. The study concluded that satellite-based data such as the CFSR should be used as input when constructing groundwater flow models in river basins in Southwest Nigeria, where the majority of the river basins are partially gauged and characterized by long gaps in hydro-meteorological records.
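
A small sketch of the goodness-of-fit statistics named above (MAE, r, R², Willmott's index of agreement d, and the ratio of standard deviations), assuming paired monthly series from a ground station and the corresponding CFSR grid cell; the exact formulations used by the authors may differ.

```python
import numpy as np

def goodness_of_fit(obs, sim):
    """GOF statistics between ground-based (obs) and satellite-based (sim) series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    mae = np.mean(np.abs(sim - obs))
    r = np.corrcoef(obs, sim)[0, 1]
    r2 = r ** 2                                  # squared correlation as R2
    d = 1 - np.sum((sim - obs) ** 2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    rsd = sim.std() / obs.std()
    return {"MAE": mae, "r": r, "R2": r2, "d": d, "rSD": rsd}
```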

Keywords: boundary condition, goodness of fit, groundwater, satellite-based data

Procedia PDF Downloads 113
5831 A Survey in Techniques for Imbalanced Intrusion Detection System Datasets

Authors: Najmeh Abedzadeh, Matthew Jacobs

Abstract:

An intrusion detection system (IDS) is a software application that monitors malicious activities and generates alerts if any are detected. However, most network activities in IDS datasets are normal, and the relatively small number of attacks makes the available data imbalanced. Consequently, cyber-attacks can hide inside a large number of normal activities, and machine learning algorithms have difficulty learning and classifying the data correctly. In this paper, a comprehensive literature review is conducted on different types of algorithms both for implementing the IDS and for correcting the imbalanced IDS dataset. The most prominent approaches are machine learning (ML), deep learning (DL), the synthetic minority over-sampling technique (SMOTE), and reinforcement learning (RL). Most of the research uses the CSE-CIC-IDS2017, CSE-CIC-IDS2018, and NSL-KDD datasets to evaluate the algorithms.
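
As a brief illustration of one of the resampling techniques surveyed, the sketch below applies SMOTE from the imbalanced-learn package to a hypothetical feature matrix and label vector; the file names are placeholders.

```python
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

# X: flow features, y: labels with very few attack samples (hypothetical files)
X = np.load("ids_features.npy")
y = np.load("ids_labels.npy")

# Synthesise minority-class samples until the classes are balanced
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
print("before:", Counter(y), "after:", Counter(y_res))
```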

Keywords: IDS, imbalanced datasets, sampling algorithms, big data

Procedia PDF Downloads 302
5830 Ecosystem Carbon Stocks Vary in Reference to the Models Used, Socioecological Factors and Agroforestry Practices in Central Ethiopia

Authors: Gadisa Demie, Mesele Negash, Zerihun Asrat, Lojka Bohdan

Abstract:

Deforestation and forest degradation in the tropics have led to significant carbon (C) emissions. Agroforestry (AF) is a suitable land-use option for tackling such declines in ecosystem services, including climate change mitigation. However, it is unclear how biomass models, AF practices, and socio-ecological factors determine these roles, which hinders the implementation of climate change mitigation initiatives. This study aimed to estimate the ecosystem C stocks of the studied AF practices in relation to socio-ecological variables in central Ethiopia. Out of 243 AF farms inventoried, 108 were chosen at random from three AF practices to estimate their biomass and soil organic carbon. A total of 432 soil samples were collected from the 0-30 and 30-60 cm soil depths; 216 samples were taken for each of the soil organic carbon fraction (%C) and bulk density computations. The study found that the recently developed allometric equations were the most accurate for estimating biomass C for trees growing in the landscape when compared to previous models. The study found higher overall biomass C in woodlots (165.62 Mg ha⁻¹) than in homegardens (134.07 Mg ha⁻¹) and parklands (19.98 Mg ha⁻¹). Conversely, overall SOC was higher for homegardens (143.88 Mg ha⁻¹) but lower for parklands (53.42 Mg ha⁻¹). The ecosystem C stock was comparable between homegardens (277.95 Mg ha⁻¹) and woodlots (275.44 Mg ha⁻¹). The study found that elevation, wealth level, AF farm age, and farm size had a positive and significant (P < 0.05) effect on overall biomass and ecosystem C stocks, but no significant relationship with slope (P > 0.05). Similarly, SOC increased with increasing elevation, AF farm age, and wealth status, but decreased with slope and showed no significant relationship with AF farm size. The study also showed that species diversity had a positive (P < 0.05) effect on overall biomass C stocks in homegardens. Overall, the study highlights that AF practices have a great potential to lock up more carbon in biomass and soils; however, these potentials are determined by socio-ecological variables. Thus, these factors should be considered in management strategies that preserve trees in agricultural landscapes in order to mitigate climate change and support the livelihoods of farmers.

Keywords: agricultural landscape, biomass, climate change, soil organic carbon

Procedia PDF Downloads 35
5829 An Inquiry into the Usage of Complex Systems Models to Examine the Effects of the Agent Interaction in a Political Economic Environment

Authors: Ujjwall Sai Sunder Uppuluri

Abstract:

Group theory is a powerful tool that researchers can use to provide a structural foundation for their agent-based models, which this paper argues are the future of the social science disciplines. More specifically, researchers can use them to apply evolutionary theory to the study of complex social systems. This paper illustrates one example of how an agent-based model can be formulated theoretically from the application of group theory, systems dynamics, and evolutionary biology to analyze the strategies pursued by states to mitigate risk and maximize the usage of resources in order to achieve the objective of economic growth. This example can be applied to other social phenomena, and this is what makes group theory so useful to the analysis of complex systems: the theory provides the mathematical, formulaic proof for validating the complex system models that researchers build, as discussed in the paper. The aim of this research is also to provide researchers with a framework that can be used to model political entities such as states on a 3-dimensional plane, with the x-axis representing the resources (tangible and intangible) available to them, y the risks, and z the objective. There also exist other states with different constraints pursuing different strategies to climb the mountain. This mountain's environment is made up of the risks the state faces and its resource endowments. The mountain is also layered, in the sense that it has multiple peaks that must be overcome to reach the tallest peak. A state that sticks to a single strategy, or pursues a strategy that is not conducive to climbing the specific peak it has reached, is unable to continue its advancement. To overcome the obstacle in its path, the state must innovate. Based on the definition of a group, we can categorize each state as its own group. Each state is a closed system, one made up of micro-level agents who have their own vectors and pursue strategies (actions) to achieve sub-objectives. The state also has an identity, the inverse being anarchy and/or inaction. Finally, the agents making up a state interact with each other through competition and collaboration to mitigate risks and achieve sub-objectives that fall within the primary objective. Thus, researchers can categorize the state as an organism that reflects the sum of the output of the interactions pursued by agents at the micro level. When states compete, they employ a strategy, and the state with the better strategy (reflected by the strategies pursued by its parts) is able to out-compete its counterpart to acquire some resource, mitigate some risk, or fulfil some objective. This paper attempts to illustrate how group theory, combined with evolutionary theory and systems dynamics, can allow researchers to model the long-run development, evolution, and growth of political entities through a bottom-up approach.

Keywords: complex systems, evolutionary theory, group theory, international political economy

Procedia PDF Downloads 118
5828 Simplified Modelling of Visco-Elastic Fluids for Use in Recoil Damping Systems

Authors: Prasad Pokkunuri

Abstract:

Visco-elastic materials combine the stress response properties of both solids and fluids and have found use in a variety of damping applications, both vibrational and acoustic. Defense and automotive applications, in particular, are subject to high impact and shock loading, for example aircraft landing gear, firearms, and shock absorbers. Field-responsive fluids, a class of smart materials, are the preferred choice of energy absorbents because of their controllability. These fluids' stress response can be controlled by the application of a magnetic or electric field in a closed loop. Their rheological properties, namely elasticity, plasticity, and viscosity, can be varied all the way from those of a liquid such as water to those of a hard solid. This work presents a simplified model to study the impulse response behavior of such fluids for use in recoil damping systems. The well-known Burgers equation, in conjunction with various visco-elastic constitutive models, is used to represent fluid behavior. The Kelvin-Voigt, Upper Convected Maxwell (UCM), and Oldroyd-B constitutive models are implemented in this study. Using these models in a one-dimensional framework eliminates additional complexities due to geometry, pressure, body forces, and other source terms. Using a finite difference formulation to numerically solve the governing equation(s), the response to an initial impulse is studied. The disturbance is confined within the problem domain with no-inflow, no-outflow boundary conditions, and its decay characteristics are studied. Visco-elastic fluids typically involve a time-dependent stress relaxation, which gives rise to interesting behavior when subjected to an impulsive load. For particular values of viscous damping and elastic modulus, the fluid settles into a stable oscillatory state, absorbing and releasing energy without much decay. The simplified formulation enables a comprehensive study of different modes of system response by varying the relevant parameters. Using the insights gained from this study, extension to a more detailed multi-dimensional model is considered.
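
The paper couples the Burgers equation with visco-elastic constitutive models; the sketch below covers only the plain viscous one-dimensional case with an impulse initial condition and zero boundaries, using explicit finite differences with illustrative grid and viscosity values and assuming non-negative velocities for the upwind term.

```python
import numpy as np

# Explicit 1-D viscous Burgers solver: u_t + u*u_x = nu*u_xx, impulse initial
# condition, no-inflow/no-outflow (zero) boundaries.
nx, L, nu = 401, 1.0, 0.01
dx = L / (nx - 1)
dt = 0.2 * dx ** 2 / nu                          # conservative explicit time step
x = np.linspace(0.0, L, nx)
u = np.exp(-((x - 0.5) / 0.02) ** 2)             # localized impulse at the domain centre

for _ in range(2000):
    un = u.copy()
    conv = un[1:-1] * (un[1:-1] - un[:-2]) / dx              # first-order upwind (u >= 0)
    diff = nu * (un[2:] - 2 * un[1:-1] + un[:-2]) / dx ** 2  # central diffusion
    u[1:-1] = un[1:-1] + dt * (diff - conv)
    u[0] = u[-1] = 0.0                                       # disturbance confined to the domain
```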

Keywords: Burgers Equation, Impulse Response, Recoil Damping Systems, Visco-elastic Fluids

Procedia PDF Downloads 280
5827 Analysis of Operating Speed on Four-Lane Divided Highways under Mixed Traffic Conditions

Authors: Chaitanya Varma, Arpan Mehar

Abstract:

The present study demonstrates the procedure used to analyse speed data collected on various four-lane divided sections in India. Field data for the study were collected at different straight and curved sections on rural highways with the help of a radar speed gun and a video camera. The data collected at the sections were analysed, and parameters pertaining to the speed distributions were estimated. Different statistical distributions were fitted to the vehicle-type speed data and to the mixed traffic speed data. It was found that the vehicle-type speed data followed either the normal or the log-normal distribution, whereas the mixed traffic speed data followed more than one type of statistical distribution; the most common fits observed for the mixed traffic speed data were the Beta and Weibull distributions. Separate operating speed models based on traffic and roadway geometric parameters were proposed in the present study, and operating speed models with traffic parameters and curve geometry parameters were established. Two different operating speed models were proposed, with the variables 1/R and ln(R), and were found to be realistic over different ranges of curve radius. The models developed in the present study are simple and realistic and can be used for forecasting operating speed on four-lane highways.

Keywords: highway, mixed traffic flow, modeling, operating speed

Procedia PDF Downloads 453
5826 Forecasting Future Demand for Energy Efficient Vehicles: A Review of Methodological Approaches

Authors: Dimitrios I. Tselentis, Simon P. Washington

Abstract:

Considerable literature has focused over the last few decades on forecasting consumer demand for Energy Efficient Vehicles (EEVs). The methodological issues range from how to capture recent purchase decisions in revealed preference (RP) studies, to how to set up experiments in stated preference (SP) studies, to the choice of analysis method for such data. This paper reviews the plethora of studies published since 1980 on forecasting demand for EEVs and provides a review and annotated bibliography of that literature as it pertains to this particular demand forecasting problem. This detailed review addresses the literature not only in relation to transportation studies, but specifically in relation to the problem and methodologies of forecasting over the time horizons of planning studies, which may represent 10- to 20-year forecasts. The objectives of the paper are to identify where gaps exist in the literature and to articulate where promising methodologies might guide longer-term forecasting. One of the key findings of this review is that there are many common techniques used both in the field of new-product demand forecasting and in the field of predicting future demand for EEVs. Apart from SP and RP methods, the newer techniques that have emerged in the literature over the last few decades include survey-related approaches, product diffusion models, time-series modelling, computational intelligence models, and other holistic approaches.

Keywords: demand forecasting, Energy Efficient Vehicles (EEVs), forecasting methodologies review, methodological approaches

Procedia PDF Downloads 473
5825 Competitive Adsorption of Al, Ga and In by Gamma Irradiation Induced Pectin-Acrylamide-(Vinyl Phosphonic Acid) Hydrogel

Authors: Md Murshed Bhuyan, Hirotaka Okabe, Yoshiki Hidaka, Kazuhiro Hara

Abstract:

Pectin-acrylamide-(vinyl phosphonic acid) hydrogels were prepared from their blend using gamma radiation of various doses. It was found that the gel fraction of the hydrogel increases with increasing radiation dose, reaches a maximum, and then decreases as the dose increases further. The optimum radiation dose and raw material composition were determined on the basis of equilibrium swelling, which resulted in a 20 kGy absorbed dose and a 1:2:4 (pectin:AAm:VPA) composition. Differential scanning calorimetry revealed sufficient gel strength for use as an adsorbent. The FTIR spectrum confirmed the grafting/crosslinking of the monomers on the backbone of the pectin chain. The hydrogels were applied in the adsorption of Al, Ga, and In from a multielement solution, where the adsorption capacity order for the three elements was found to be In > Ga > Al. SEM images of the hydrogels and the metal-adsorbed hydrogels indicate the gel network and the adherence of the metal ions in the interpenetrating network of the hydrogel, which is supported by the EDS spectra. Adsorption isotherm models were studied, and it was found that the Langmuir adsorption isotherm model fitted the data well. Adsorption data were also fitted to different adsorption kinetic and diffusion models. Desorption of the metal-adsorbed hydrogels was performed in 5% nitric acid, where the desorption efficiency was found to be around 90%.
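
A minimal sketch of fitting the Langmuir isotherm to equilibrium adsorption data with SciPy; the data points below are hypothetical, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: adsorbed amount q_e vs equilibrium concentration C_e."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Hypothetical equilibrium data for one metal ion (mg/L, mg/g)
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
qe = np.array([12.0, 21.0, 33.0, 45.0, 54.0, 59.0])

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=(60.0, 0.02))
ss_res = np.sum((qe - langmuir(Ce, qmax, KL)) ** 2)
r2 = 1 - ss_res / np.sum((qe - qe.mean()) ** 2)
print(f"qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg, R2 = {r2:.3f}")
```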

Keywords: hydrogel, gamma radiation, vinyl phosphonic acid, metal adsorption

Procedia PDF Downloads 142
5824 Evaluating Models Through Feature Selection Methods Using Data Driven Approach

Authors: Shital Patil, Surendra Bhosale

Abstract:

Cardiac diseases are the leading causes of mortality and morbidity in the world and, over the recent few decades, have emerged as the most life-threatening disorders globally, accounting for a large number of deaths. Machine learning and artificial intelligence have been playing a key role in predicting heart diseases. A relevant set of features can be very helpful in predicting the disease accurately. In this study, we propose a comparative analysis of four different feature selection methods and evaluate their performance with both the raw (unbalanced) dataset and a sampled (balanced) dataset. The publicly available Z-Alizadeh Sani dataset has been used for this study. Four feature selection methods are used: data analysis, minimum Redundancy Maximum Relevance (mRMR), Recursive Feature Elimination (RFE), and Chi-squared. These methods are tested with 8 different classification models to obtain the best accuracy possible. Using the balanced and unbalanced datasets, the study shows promising results in terms of various performance metrics in accurately predicting heart disease. Experimental results obtained by the proposed method with the raw data reach a maximum AUC of 100%, a maximum F1 score of 94%, a maximum recall of 98%, and a maximum precision of 93%, while with the balanced dataset the results are a maximum AUC of 100%, an F1 score of 95%, a maximum recall of 95%, and a maximum precision of 97%.
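
A brief sketch of two of the four feature selection methods (Chi-squared and RFE), assuming scikit-learn; the random arrays stand in for the Z-Alizadeh Sani features and labels, and the number of selected features is a placeholder.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2, RFE
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-ins: chi2 requires non-negative feature values
X = np.abs(np.random.rand(300, 50))
y = np.random.randint(0, 2, 300)

chi2_idx = SelectKBest(chi2, k=10).fit(X, y).get_support(indices=True)
rfe_idx = RFE(LogisticRegression(max_iter=1000),
              n_features_to_select=10).fit(X, y).get_support(indices=True)
print("chi2 picks:", chi2_idx)
print("RFE picks: ", rfe_idx)
```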

Keywords: cardio vascular diseases, machine learning, feature selection, SMOTE

Procedia PDF Downloads 104
5823 Fast Algorithm to Determine Initial Tsunami Wave Shape at Source

Authors: Alexander P. Vazhenin, Mikhail M. Lavrentiev, Alexey A. Romanenko, Pavel V. Tatarintsev

Abstract:

One of the problems obstructing effective tsunami modelling is the lack of information about the initial wave shape at the source. The existing methods (geological surveys, sea radars, satellite images) contain an important degree of uncertainty. Therefore, direct measurements of tsunami waves obtained at deep-water bottom pressure recorders are also used. In this paper, we propose a new method to reconstruct the initial sea surface displacement at the tsunami source by approximating the measured signal (marigram) with a linear combination of synthetic marigrams from a selected set of unit sources, calculated in advance. This method has demonstrated good precision and very high performance. The mathematical model and the results of numerical tests are described here.
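
A minimal sketch of the approximation step, assuming the precomputed synthetic marigrams are stored column-wise in a matrix and fitted to the observed marigram by least squares; the file names and array layout are assumptions, not the authors' implementation.

```python
import numpy as np

# G: synthetic marigrams, one column per unit source (n_time x n_sources),
# computed in advance; m_obs: measured marigram at the bottom pressure recorder.
G = np.load("unit_source_marigrams.npy")        # hypothetical precomputed database
m_obs = np.load("observed_marigram.npy")

coeffs, *_ = np.linalg.lstsq(G, m_obs, rcond=None)   # best-fit source weights
m_fit = G @ coeffs
misfit = np.linalg.norm(m_obs - m_fit) / np.linalg.norm(m_obs)
# The initial sea-surface displacement is then the same linear combination of
# the unit-source displacement patterns, using these coefficients.
```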

Keywords: numerical tests, orthogonal decomposition, tsunami initial sea surface displacement

Procedia PDF Downloads 454