Search results for: Grey prediction model
16934 Structural Strength Potentials of Nigerian Groundnut Husk Ash as Partial Cement Replacement in Mortar
Authors: F. A. Olutoge, O.R. Olulope, M. O. Odelola
Abstract:
This study investigates the strength potentials of groundnut husk ash (GHA) as a partial cement replacement in mortar and also develops a predictive model using an Artificial Neural Network (ANN). Groundnut husks sourced from Ogbomoso, Nigeria, were sun-dried, calcined to ash in a furnace at a controlled temperature of 600 °C for a period of 6 hours, and sieved through a 75 µm sieve. The ash was subjected to chemical analysis and a setting time test. Fine aggregate (sand) for the mortar was sourced from Ado Ekiti, Nigeria. The cement:GHA constituents were blended in the ratios 100:0, 95:5, 90:10, 85:15 and 80:20%. The sum of the SiO₂, Al₂O₃, and Fe₂O₃ contents in GHA is 26.98%. The compressive strengths of mortars PC, GHA5, GHA10, GHA15, and GHA20 ranged from 6.3-10.2 N/mm² at 7 days, 7.5-12.3 N/mm² at 14 days, 9.31-13.7 N/mm² at 28 days, 10.4-16.7 N/mm² at 56 days and 13.35-22.3 N/mm² at 90 days. PC, GHA5 and GHA10 had competitive values up to 28 days, but GHA10 gave the highest values at 56 and 90 days, while GHA20 had the lowest values at all ages due to the dilution effect. Flexural strength values at 28 days ranged from 1.08 to 1.87 N/mm² and increased to a range of 1.53-4.10 N/mm² at 90 days. The ANN model gave good predictions of the compressive strength of the mortars. This study has shown that groundnut husk ash as a partial cement replacement improves the strength properties of mortar.
Keywords: compressive strength, groundnut husk ash, mortar, pozzolanic index
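The abstract does not specify the network architecture or the exact training pairs; a minimal sketch of the kind of feed-forward regression described, assuming GHA replacement level and curing age as the only inputs (all values below are illustrative, not the study's data), might look like this:

    # Hypothetical sketch: predicting mortar compressive strength from
    # GHA replacement level (%) and curing age (days) with a small ANN.
    # Architecture and data are illustrative, not from the paper.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    # (replacement %, age in days) -> strength in N/mm²; invented pairings
    # drawn loosely from the ranges reported in the abstract.
    X = np.array([[0, 7], [10, 28], [10, 90], [20, 7], [20, 90]], dtype=float)
    y = np.array([6.3, 13.7, 22.3, 6.3, 13.35])

    scaler = StandardScaler().fit(X)
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    model.fit(scaler.transform(X), y)

    # Predicted strength at GHA5, 56 days (illustrative query only).
    print(model.predict(scaler.transform([[5, 56]])))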
Procedia PDF Downloads 155
16933 UBCSAND Model Calibration for Generic Liquefaction Triggering Curves
Authors: Jui-Ching Chou
Abstract:
Numerical simulation is a popular method used to evaluate the effects of soil liquefaction on a structure or the effectiveness of a mitigation plan. Many constitutive models (the UBCSAND model, PM4 model, SANISAND model, etc.) have been presented to model the liquefaction phenomenon. In general, the inputs of a constitutive model need to be calibrated against the soil cyclic resistance before being applied in the numerical simulation model. Then, simulation results can be compared with results from simplified liquefaction potential assessment methods. In this article, the inputs of the UBCSAND model, a simple elastic-plastic stress-strain model, are calibrated against several popular generic liquefaction triggering curves of simplified liquefaction potential assessment methods via the FLAC program. The calibrated inputs allow engineers to perform a preliminary evaluation of an existing structure or a new design project.
Keywords: calibration, liquefaction, numerical simulation, UBCSAND Model
Procedia PDF Downloads 174
16932 Optimization of Springback Prediction in U-Channel Process Using Response Surface Methodology
Authors: Muhamad Sani Buang, Shahrul Azam Abdullah, Juri Saedon
Abstract:
There are few effective guidelines on the selection of design parameters for springback of advanced high strength steel sheet metal in the U-channel process during cold forming. This paper presents the development of a predictive model for springback in the U-channel process on advanced high strength steel sheet employing Response Surface Methodology (RSM). The experiments were performed on dual-phase steel sheet, DP590, in a U-channel forming process, while a design of experiments (DoE) approach was used to investigate the effects of four factors, namely blank holder force (BHF), clearance (C), punch travel (Tp) and rolling direction (R), used as input parameters at two levels by applying a full factorial design (2⁴). From analysis of variance (ANOVA), results showed that blank holder force (BHF), clearance (C) and punch travel (Tp) had significant effects on the springback of the flange angle (β2) and wall opening angle (β1), while the rolling direction (R) factor was insignificant. The significant parameters were optimized in order to reduce the springback behavior using a Central Composite Design (CCD) in RSM, and the optimum parameters were determined. A regression model for springback was developed. The effects of the individual parameters and their responses were also evaluated. The results obtained from the optimum model are in agreement with the experimental values.
Keywords: advanced high strength steel, u-channel process, springback, design of experiment, optimization, response surface methodology (rsm)
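As a rough illustration of a 2⁴ full factorial design with main-effects estimation of the kind described above (coded ±1 levels; the response values are placeholders, not the DP590 measurements):

    # Hypothetical sketch of a 2^4 full factorial design for BHF, C, Tp, R.
    # Coded levels (-1/+1) stand in for the actual factor settings.
    import itertools
    import numpy as np

    factors = ["BHF", "C", "Tp", "R"]
    design = np.array(list(itertools.product([-1, 1], repeat=4)), dtype=float)

    # Placeholder springback responses (degrees), one per run.
    rng = np.random.default_rng(0)
    y = 5 + 1.2 * design[:, 0] - 0.8 * design[:, 1] + rng.normal(0, 0.1, 16)

    # Least-squares estimate of main effects: y ≈ b0 + sum(b_i * x_i)
    X = np.column_stack([np.ones(16), design])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    for name, b in zip(["intercept"] + factors, coeffs):
        print(f"{name}: {b:+.3f}")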
Procedia PDF Downloads 542
16931 Development of a Fire Analysis Drone for Smoke Toxicity Measurement for Fire Prediction and Management
Authors: Gabrielle Peck, Ryan Hayes
Abstract:
This research presents the design and creation of a drone gas analyser, aimed at addressing the need for independent data collection and analysis of gas emissions during large-scale fires, particularly wasteland fires. The analyser drone, comprising a lightweight gas analysis system attached to a remote-controlled drone, enables the real-time assessment of smoke toxicity and the monitoring of gases released into the atmosphere during such incidents. The key components of the analyser unit included two gas line inlets connected to glass wool filters, a pump with regulated flow controlled by a mass flow controller, and electrochemical cells for detecting nitrogen oxides, hydrogen cyanide, and oxygen levels. Additionally, a non-dispersive infrared (NDIR) analyser is employed to monitor carbon monoxide (CO), carbon dioxide (CO₂), and hydrocarbon concentrations. Thermocouples can be attached to the analyser to monitor temperature, as well as McCaffrey probes combined with pressure transducers to monitor air velocity and wind direction. These additions allow for monitoring of the large fire and can be used for predictions of fire spread. The innovative system not only provides crucial data for assessing smoke toxicity but also contributes to fire prediction and management. The remote-controlled drone's mobility allows for safe and efficient data collection in proximity to the fire source, reducing the need for human exposure to hazardous conditions. The data obtained from the gas analyser unit facilitates informed decision-making by emergency responders, aiding in the protection of both human health and the environment. This abstract highlights the successful development of a drone gas analyser, illustrating its potential for enhancing smoke toxicity analysis and fire prediction capabilities. The integration of this technology into fire management strategies offers a promising solution for addressing the challenges associated with wildfires and other large-scale fire incidents. The project's methodology and results contribute to the growing body of knowledge in the field of environmental monitoring and safety, emphasizing the practical utility of drones for critical applications.
Keywords: fire prediction, drone, smoke toxicity, analyser, fire management
Procedia PDF Downloads 90
16930 Floodplain Modeling of River Jhelum Using HEC-RAS: A Case Study
Authors: Kashif Hassan, M.A. Ahanger
Abstract:
Floods have become more frequent and severe due to the effects of global climate change and human alterations of the natural environment. Flood prediction/forecasting and control is one of the greatest challenges facing the world today. The forecasting of floods is achieved by the use of hydraulic models such as HEC-RAS, which are designed to simulate flow processes of surface water. Extreme flood events in the river Jhelum, lasting from a day to a few days, are a major disaster in the State of Jammu and Kashmir, India. In the present study, the HEC-RAS model was applied to two different reaches of the river Jhelum in order to estimate the flood levels corresponding to 25-, 50- and 100-year return period flood events at important locations and to deduce the flood vulnerability of important areas and structures. The flow rates for the two reaches were derived from flood-frequency analysis of 50 years of historic peak flow data. Manning's roughness coefficient n was selected using detailed analysis. Rating curves were also generated to serve as the basis for determining the boundary conditions. Calibration and validation procedures were applied in order to ensure the reliability of the model. Sensitivity analysis was also performed in order to ensure the accuracy of Manning's n in generating water surface profiles.
Keywords: flood plain, HEC-RAS, Jhelum, return period
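The abstract does not state which frequency distribution was fitted to the 50-year peak-flow record; as a hedged illustration of deriving return-period design flows, a Gumbel (EV1) fit by the method of moments can be sketched as follows (the peak-flow series is invented):

    # Hypothetical Gumbel (EV1) flood-frequency sketch: estimate the T-year
    # design flow from annual peak data using the method of moments.
    import numpy as np

    peaks = np.array([820, 1340, 960, 1500, 1120, 890, 1760, 1010])  # m³/s, invented
    mean, std = peaks.mean(), peaks.std(ddof=1)

    # Method-of-moments Gumbel parameters.
    beta = np.sqrt(6) / np.pi * std          # scale
    mu = mean - 0.5772 * beta                # location (Euler–Mascheroni constant)

    for T in (25, 50, 100):
        q_T = mu - beta * np.log(-np.log(1 - 1 / T))  # EV1 quantile
        print(f"{T}-year flow: {q_T:.0f} m³/s")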
Procedia PDF Downloads 427
16929 Analytical Authentication of Butter Using Fourier Transform Infrared Spectroscopy Coupled with Chemometrics
Authors: M. Bodner, M. Scampicchio
Abstract:
Fourier Transform Infrared (FT-IR) spectroscopy coupled with chemometrics was used to distinguish between butter samples and non-butter samples. Further, quantification of the content of margarine in adulterated butter samples was investigated. The fingerprint region (1400-800 cm⁻¹) was used to develop unsupervised pattern recognition (Principal Component Analysis, PCA), supervised modeling (Soft Independent Modelling by Class Analogy, SIMCA), classification (Partial Least Squares Discriminant Analysis, PLS-DA) and regression (Partial Least Squares Regression, PLS-R) models. PCA of the fingerprint region showed a clustering of the two sample types. All samples were classified in their rightful class by the SIMCA approach; however, nine adulterated samples (between 1% and 30% w/w of margarine) were classified as belonging both to the butter class and to the non-butter one. In the two-class PLS-DA model (R² = 0.73; RMSEP, Root Mean Square Error of Prediction = 0.26% w/w), sensitivity was 71.4% and the Positive Predictive Value (PPV) 100%. Its threshold was calculated at 7% w/w of margarine in adulterated butter samples. Finally, a PLS-R model (R² = 0.84, RMSEP = 16.54%) was developed. PLS-DA was a suitable classification tool and PLS-R a proper quantification approach. Results demonstrate that FT-IR spectroscopy combined with PLS-R can be used as a rapid, simple and safe method to identify pure butter samples from adulterated ones and to determine the grade of adulteration of margarine in butter samples.
Keywords: adulterated butter, margarine, PCA, PLS-DA, PLS-R, SIMCA
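A minimal sketch of the PLS-R step, assuming a matrix of fingerprint-region absorbances and margarine content as the response (the spectra below are random placeholders, not the FT-IR data):

    # Hypothetical sketch of PLS regression on FT-IR fingerprint spectra to
    # quantify margarine content; spectra here are random placeholders.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(30, 300))          # 30 spectra x 300 wavenumber points
    y = rng.uniform(0, 30, size=30)         # % w/w margarine, invented labels

    pls = PLSRegression(n_components=5)
    pls.fit(X, y)
    y_pred = pls.predict(X).ravel()
    rmse = np.sqrt(np.mean((y - y_pred) ** 2))
    print(f"RMSE of calibration: {rmse:.2f} % w/w")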
Procedia PDF Downloads 147
16928 Two-Sided Information Dissemination in Takeovers: Disclosure and Media
Authors: Eda Orhun
Abstract:
Purpose: This paper analyzes a target firm's decision to voluntarily disclose information during a takeover event and the effect of such disclosures on the outcome of the takeover. Such voluntary disclosures, especially in the form of earnings forecasts made around takeover events, may affect shareholders' decisions about the target firm's value and, in turn, the takeover result. This study aims to shed light on this question. Design/methodology/approach: The paper examines the role of voluntary disclosures by target firms during a takeover event in the likelihood of takeover success, both theoretically and empirically. A game-theoretical model is set up to analyze the voluntary disclosure decision of a target firm to inform shareholders about its real worth. The empirical implication of the model is tested by employing binary outcome models, where the disclosure variable is obtained by identifying the target firms in the sample that provide positive news by issuing increasing management earnings forecasts. Findings: The model predicts that a voluntary disclosure of positive information by the target decreases the likelihood that the takeover succeeds. The empirical analysis confirms this prediction by showing that positive earnings forecasts by target firms during takeover events increase the probability of takeover failure. Overall, it is shown that information dissemination through voluntary disclosures by target firms is an important factor affecting takeover outcomes. Originality/Value: To the author's knowledge, this is the first study of the impact of voluntary disclosures by the target firm during a takeover event on the likelihood of takeover success. The results contribute to the information economics, corporate finance and M&A literatures.
Keywords: takeovers, target firm, voluntary disclosures, earnings forecasts, takeover success
Procedia PDF Downloads 320
16927 Thermodynamics of Aqueous Solutions of Organic Molecule and Electrolyte: Use Cloud Point to Obtain Better Estimates of Thermodynamic Parameters
Authors: Jyoti Sahu, Vinay A. Juvekar
Abstract:
Electrolytes are often used to bring about salting-in and salting-out of organic molecules and polymers (e.g. polyethylene glycols/proteins) from aqueous solutions. For quantification of these phenomena, a thermodynamic model which can accurately predict the activity coefficient of the electrolyte as a function of temperature is needed. The thermodynamic models available in the literature contain a large number of empirical parameters. These parameters are estimated using the lower/upper critical solution temperature of the solution in the electrolyte/organic molecule at different temperatures. Since the number of parameters is large, inaccuracies can creep in during their estimation, which can affect the reliability of prediction beyond the range in which these parameters are estimated. The cloud point of a solution is related to its free energy through temperature and composition derivatives. Hence, cloud point measurements can be used for accurate estimation of the temperature and composition dependence of the parameters in the model for free energy. Hence, if we use a two-pronged procedure in which we first use the cloud point of the solution to estimate some of the parameters of the thermodynamic model and determine the rest using osmotic coefficient data, we gain on two counts. First, since the parameters estimated in each of the two steps are fewer, we achieve higher accuracy of estimation. The second and more important gain is that the resulting model parameters are more sensitive to temperature. This is crucial when we wish to use the model outside the temperature window within which the parameter estimation is sought. The focus of the present work is to prove this proposition. We have used electrolyte (NaCl/Na₂CO₃)-water-organic molecule (isopropanol/ethanol) as the model system. The model of Robinson-Stokes-Glueckauf is modified by incorporating temperature-dependent Flory-Huggins interaction parameters. The Helmholtz free energy expression contains, in addition to electrostatic and translational entropic contributions, three pairwise Flory-Huggins interaction contributions between water (w), polymer (p) and salt (s). These parameters depend both on temperature and concentrations. The concentration dependence is expressed in the form of a quadratic expression involving the volume fractions of the interacting species, and the temperature dependence through temperature-dependent coefficients. To obtain the temperature- and composition-dependent interaction parameters for the electrolyte-water-organic molecule system, the critical solution temperature of electrolyte-water-organic molecule solutions is measured using a cloud point measuring apparatus, and the parameters are estimated from these measurements. The model is used to estimate the critical solution temperature (CST) of electrolyte-water-organic molecule solutions. We have experimentally determined the critical solution temperatures of different compositions of electrolyte-water-organic molecule solutions and compared the results with estimates based on our model. The two sets of values show good agreement. On the other hand, when only osmotic coefficients are used for estimation of the free energy model, the CST predicted using the resulting model shows poor agreement with the experiments.
Thus, the importance of the CST data in the estimation of parameters of the thermodynamic model is confirmed through this work.
Keywords: concentrated electrolytes, Debye-Hückel theory, interaction parameters, Robinson-Stokes-Glueckauf model, Flory-Huggins model, critical solution temperature
Procedia PDF Downloads 393
16926 Forecasting the Volatility of Geophysical Time Series with Stochastic Volatility Models
Authors: Maria C. Mariani, Md Al Masum Bhuiyan, Osei K. Tweneboah, Hector G. Huizar
Abstract:
This work is devoted to the study of modeling geophysical time series. A stochastic technique with time-varying parameters is used to forecast the volatility of data arising in geophysics. In this study, the volatility is defined as a logarithmic first-order autoregressive process. We observe that the inclusion of log-volatility in the time-varying parameter estimation significantly improves forecasting, which is facilitated via maximum likelihood estimation. This allows us to conclude that the estimation algorithm for the corresponding one-step-ahead suggested volatility (with ±2 standard prediction errors) is very feasible since it possesses good convergence properties.
Keywords: Augmented Dickey Fuller Test, geophysical time series, maximum likelihood estimation, stochastic volatility model
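The authors' exact parameterization is not given in the abstract; a conventional statement of a stochastic volatility model with logarithmic first-order autoregressive volatility, consistent with the description above, is:

\[ y_t = e^{h_t/2}\,\varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0,1), \]
\[ h_t = \mu + \phi\,(h_{t-1}-\mu) + \sigma_\eta\,\eta_t, \qquad \eta_t \sim \mathcal{N}(0,1), \quad |\phi| < 1, \]

where y_t is the observed (mean-corrected) series and h_t is the log-volatility; the parameters (μ, φ, σ_η) are then estimated by maximum likelihood, and stationarity of the AR(1) log-volatility requires |φ| < 1.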
Procedia PDF Downloads 315
16925 Computational Fluid Dynamics (CFD) Modeling of Local with a Hot Temperature in Sahara
Authors: Selma Bouasria, Mahi Abdelkader, Abbès Azzi, Herouz Keltoum
Abstract:
This paper reports how a concept was implemented in the computational fluid dynamics (CFD) code CFX through user-defined functions to assess ventilation efficiency inside a forced-ventilation local (room). CFX is a simulation tool which uses powerful computing and applied mathematics to model fluid flow situations for the prediction of heat, mass and momentum transfer and optimal design in various heat transfer and fluid flow processes; here it is used to evaluate thermal comfort in a ventilated, highly-glazed room. The quality of the solutions obtained from CFD simulations makes them an effective tool for predicting indoor thermal and airflow comfort behavior and performance.
Keywords: ventilation, thermal comfort, CFD, indoor environment, solar air heater
Procedia PDF Downloads 635
16924 Small Scale Mobile Robot Auto-Parking Using Deep Learning, Image Processing, and Kinematics-Based Target Prediction
Authors: Mingxin Li, Liya Ni
Abstract:
Autonomous parking is a valuable feature applicable to many robotics applications, such as tour guide robots, UV sanitizing robots, food delivery robots, and warehouse robots. With auto-parking, the robot will be able to park at the charging zone and charge itself without human intervention. As compared to self-driving vehicles, auto-parking is more challenging for a small-scale mobile robot equipped only with a front camera, due to the camera view being limited by the robot's height and the narrow Field of View (FOV) of the inexpensive camera. In this research, auto-parking of a small-scale mobile robot with a front camera only was achieved in a four-step process: Firstly, transfer learning was performed on AlexNet, a popular pre-trained convolutional neural network (CNN). It was trained with 150 pictures of empty parking slots and 150 pictures of occupied parking slots from the view angle of a small-scale robot. The dataset of images was divided into a group of 70% of the images for training and the remaining 30% for validation. An average success rate of 95% was achieved. Secondly, the image of a detected empty parking space was processed with edge detection, followed by the computation of parametric representations of the boundary lines using the Hough Transform algorithm. Thirdly, the positions of the entrance point and center of the available parking space were predicted based on the robot kinematic model as the robot drove closer to the parking space, because the boundary lines disappeared partially or completely from its camera view due to the height and FOV limitations. The robot used its wheel speeds to compute the positions of the parking space with respect to its changing local frame as it moved along, based on its kinematic model. Lastly, the predicted entrance point of the parking space was used as the reference for the motion control of the robot until it was replaced by the actual center when it became visible again to the robot. The linear and angular velocities of the robot chassis center were computed based on the error between the current chassis center and the reference point. Then the left and right wheel speeds were obtained using inverse kinematics and sent to the motor driver. The above-mentioned four subtasks were all successfully accomplished, with the transfer learning, image processing, and target prediction performed in MATLAB, while the motion control and image capture were conducted on a self-built small-scale differential drive mobile robot. The small-scale robot employs a Raspberry Pi board, a Pi camera, an L298N dual H-bridge motor driver, a USB power module, a power bank, four wheels, and a chassis. Future research includes three areas: the integration of all four subsystems into one hardware/software platform with an upgrade to an Nvidia Jetson Nano board, which provides superior performance for deep learning and image processing; more testing and validation of the identification of available parking spaces and their boundary lines; and improvement of performance after the hardware/software integration is completed.
Keywords: autonomous parking, convolutional neural network, image processing, kinematics-based prediction, transfer learning
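The final control step maps the commanded chassis velocities to wheel speeds through standard differential-drive inverse kinematics (the study used MATLAB; the sketch below is Python, with assumed wheel radius and track width rather than the robot's actual dimensions):

    # Hypothetical differential-drive inverse kinematics sketch: convert the
    # commanded chassis velocities (v, omega) into left/right wheel speeds.
    # Wheel radius r and track width L are assumed values, not the robot's.

    def wheel_speeds(v: float, omega: float, r: float = 0.03, L: float = 0.15):
        """Return (left, right) wheel angular speeds in rad/s.

        v     : forward speed of the chassis center (m/s)
        omega : angular speed of the chassis (rad/s, CCW positive)
        """
        w_right = (v + omega * L / 2) / r
        w_left = (v - omega * L / 2) / r
        return w_left, w_right

    # e.g. velocities commanded from the position/heading error controller:
    v, omega = 0.2, 0.5
    print(wheel_speeds(v, omega))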
Procedia PDF Downloads 133
16923 A Crop Growth Subroutine for Watershed Resources Management (WRM) Model 1: Description
Authors: Kingsley Nnaemeka Ogbu, Constantine Mbajiorgu
Abstract:
Vegetation has a marked effect on runoff and has become an important component in hydrologic models. The Watershed Resources Management (WRM) model, a process-based, continuous, distributed parameter simulation model developed for hydrologic and soil erosion studies at the watershed scale, lacks a crop growth component. As such, this model assumes constant values for vegetation and hydraulic parameters throughout the duration of a hydrologic simulation. Our approach is to develop a crop growth algorithm based on the original plant growth model used in the Environmental Policy Integrated Climate (EPIC) model. This paper describes the development of a single crop growth model which has the capability of simulating all crops using unique parameter values for each crop. Simulated crop growth processes will reflect the vegetative seasonality of the natural watershed system. An existing model was employed for evaluating vegetative resistance through hydraulic and vegetative parameters incorporated into the WRM model. The improved WRM model will have the ability to evaluate the seasonal variation of the vegetative roughness coefficient with depth of flow and further enhance the hydrologic model's capability for accurate hydrologic studies.
Keywords: runoff, roughness coefficient, PAR, WRM model
Procedia PDF Downloads 378
16922 Deep Learning Prediction of Residential Radon Health Risk in Canada and Sweden to Prevent Lung Cancer Among Non-Smokers
Authors: Selim M. Khan, Aaron A. Goodarzi, Joshua M. Taron, Tryggve Rönnqvist
Abstract:
Indoor air quality, a prime determinant of health, is strongly influenced by the presence of hazardous radon gas within the built environment. As a health issue, dangerously high indoor radon arose within the 20th century to become the 2nd leading cause of lung cancer. While 21st-century building metrics and human behaviors have captured, contained, and concentrated radon to yet higher and more hazardous levels, the issue is rapidly worsening in Canada. It is established that Canadians in the Prairies are the 2nd most radon-exposed population in the world, with 1 in 6 residences experiencing 0.2-6.5 millisieverts (mSv) of radiation per week, whereas the Canadian Nuclear Safety Commission sets the maximum 5-year occupational limit for atomic workplace exposure at only 20 mSv. This situation is also deteriorating over time within newer housing stocks containing higher levels of radon. Deep machine learning (LSTM) algorithms were applied to analyze multiple quantitative and qualitative features, determine the most important contributory factors, and predict radon levels in the known past (1990-2020) and projected future (2021-2050). The findings showed gradual downward patterns in Sweden, whereas levels would continue to go from high to higher in Canada over time. The contributory factors were found to be basement porosity, roof insulation depth, R-factor, and the air dynamics of the indoor environment related to human window-opening behaviour. Building codes must consider including these factors to ensure adequate indoor ventilation and healthy living that can prevent lung cancer in non-smokers.
Keywords: radon, building metrics, deep learning, LSTM prediction model, lung cancer, Canada, Sweden
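The abstract does not disclose the LSTM architecture or window length; a minimal sketch of an LSTM regressor over yearly radon readings, with invented data and placeholder hyperparameters, might be:

    # Hypothetical sketch of an LSTM regressor for yearly radon levels; the
    # window length, architecture, and data are placeholders, not the study's.
    import numpy as np
    import tensorflow as tf

    rng = np.random.default_rng(2)
    series = rng.uniform(50, 200, size=60)      # invented yearly radon (Bq/m³)

    # Build (window of 5 years) -> (next year) training pairs.
    w = 5
    X = np.array([series[i:i + w] for i in range(len(series) - w)])[..., None]
    y = series[w:]

    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(16, input_shape=(w, 1)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=50, verbose=0)
    print(model.predict(series[-w:].reshape(1, w, 1), verbose=0))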
Procedia PDF Downloads 112
16921 Earthquake Identification to Predict Tsunami in Andalas Island, Indonesia Using Back Propagation Method and Fuzzy TOPSIS Decision Seconder
Authors: Muhamad Aris Burhanudin, Angga Firmansyas, Bagus Jaya Santosa
Abstract:
Earthquakes are a natural hazard that can trigger the most dangerous hazard, a tsunami. On 26 December 2004, a giant earthquake occurred north-west of Andalas Island. It generated a giant tsunami which struck Sumatra, Bangladesh, India, Sri Lanka, Malaysia and Singapore. More than twenty thousand people died. The occurrence of earthquakes and tsunamis cannot be avoided, but these hazards can be mitigated by earthquake forecasting. Early preparation is the key factor in reducing damages and consequences. We aim to investigate the pattern of earthquakes quantitatively, so that the trend can be known. We study earthquakes which have happened on Andalas Island, Indonesia, over the last decade. Andalas is an island with high seismicity; more than a thousand events occur in a year, because Andalas Island lies in the tectonic subduction zone of the Indian Ocean plate and the Eurasian plate. Tsunami forecasting is needed for mitigation action; thus, a tsunami forecasting method is presented in this work. Neural networks have been used widely in much research to estimate earthquakes, and it is established that by using the backpropagation method, earthquakes can be predicted. At first, the ANN is trained to predict the tsunami of 26 December 2004 by using earthquake data before it. Then, after we obtain the trained ANN, we apply it to predict the next earthquake. Not all earthquakes will trigger a tsunami; there are certain characteristics of earthquakes that can cause one. A wrong decision can cause other problems in society, so we need a method to reduce the possibility of wrong decisions. Fuzzy TOPSIS is a statistical method that is widely used as a decision seconder referring to given parameters. The fuzzy TOPSIS method can make the best decision on whether an earthquake causes a tsunami or not. This work combines earthquake prediction using the neural network method with fuzzy TOPSIS to determine the decision of whether an earthquake triggers a tsunami wave or not. The neural network model is capable of capturing non-linear relationships, and fuzzy TOPSIS is capable of determining the best decision better than other statistical methods in tsunami prediction.
Keywords: earthquake, fuzzy TOPSIS, neural network, tsunami
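As a hedged illustration of the TOPSIS ranking step, shown here in its crisp rather than fuzzy form, with invented criteria, weights and event data:

    # Hypothetical crisp TOPSIS sketch for ranking earthquake events by
    # tsunami potential; criteria, weights, and data are invented.
    import numpy as np

    # Rows: candidate events; columns: magnitude, depth (km), distance (km).
    M = np.array([[9.1, 30.0, 160.0],
                  [7.2, 80.0, 300.0],
                  [8.0, 15.0, 90.0]])
    weights = np.array([0.5, 0.3, 0.2])
    benefit = np.array([True, False, False])  # shallow/near events are riskier

    # 1) Vector-normalize and weight the decision matrix.
    V = weights * M / np.linalg.norm(M, axis=0)

    # 2) Ideal and anti-ideal solutions per criterion.
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

    # 3) Closeness coefficient: nearer to ideal => higher tsunami potential.
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    print(d_minus / (d_plus + d_minus))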
Procedia PDF Downloads 498
16920 Study of Slum Redevelopment Initiatives for Dharavi Slum, Mumbai and Its Effectiveness in Implementation in Other Cities
Authors: Anurag Jha
Abstract:
Dharavi is the largest slum in Asia, for which many redevelopment projects have been put forth to improve the housing conditions of the locals. And yet, these projects are met with much unexpected resistance from the locals. The research analyses why and how these projects face resistance, and it assesses the programs, pointing out the flaws and benefits of such projects by predicting their impact on the residents of Dharavi. The research aims to analyze the various aspects of Dharavi which affect its socio-cultural backdrop, such as its history and eventual growth into a mega slum. Through various surveys, the research aims to analyze the life of a slum dweller, the street life, and the effect of such settlements on the urban fabric. Various development projects, such as the Dharavi Museum Movement, are analyzed, and a feasibility and efficiency analysis of the proposals for redevelopment of the Dharavi slums has been theorized. Predicting the impact of such projects on the people of Dharavi has been the major approach of the research. Also, the implementation of these projects in another prominent slum area, Anand Nagar, Bhopal, is predicted with the use of a generated hypothetical model. The research provides a basic framework for a comparative analysis of various redevelopment projects and the effect of implementing such projects on the general populace. Secondly, it proposes a hypothetical model for the feasibility of such projects in certain slum areas.
Keywords: Anand Nagar, Bhopal slums, Dharavi, slum redevelopment programmes
Procedia PDF Downloads 331
16919 Study the Influence of the Type of Cast Iron Chips on the Quality of Briquettes Obtained with Controlled Impact
Authors: Dimitar N. Karastoianov, Stanislav D. Gyoshev, Todor N. Penchev
Abstract:
Preparation of briquettes from metal chips with good density and quality is of great importance for the efficiency of this process. This paper presents the results of impact briquetting of grey cast iron chips with a rectangular shape and dimensions of 15x25x1 mm. The density and quality of briquettes of these chips are compared with those obtained in another work of the authors using cast iron chips of smaller sizes. It has been found that using rectangular chips of large size produces briquettes with a very low density and poor quality. From the photographs taken by X-ray tomography, it is clear that the reason for this is the orientation of the chips at the peripheral wall of the briquettes, which does not allow the air to escape. It was concluded that in order to obtain briquettes from cast iron chips of large size, the chips must first be ground, for example in a small ball mill.
Keywords: briquetting, chips, impact, rocket engine
Procedia PDF Downloads 525
16918 Bioinformatic Design of a Non-toxic Modified Adjuvant from the Native A1 Structure of Cholera Toxin with Membrane Synthetic Peptide of Naegleria fowleri
Authors: Frida Carrillo Morales, Maria Maricela Carrasco Yépez, Saúl Rojas Hernández
Abstract:
Naegleria fowleri is the causative agent of primary amebic meningoencephalitis, an acute and fulminant disease that affects humans. It has been reported that, despite the existence of therapeutic options against this disease, its mortality rate is 97%. Therefore, the need arises for vaccines that confer protection against this disease and, in addition, for adjuvants to enhance the immune response. In this regard, our work group obtained a peptide designed from the membrane protein MP2CL5 of Naegleria fowleri, called Smp145, that was shown to be immunogenic; however, it would be of great importance to enhance its immunological response by co-administering it with a non-toxic adjuvant. Therefore, the objective of this work was to carry out the bioinformatic design of a peptide of the Naegleria fowleri membrane protein MP2CL5 conjugated with a non-toxic adjuvant modified from the native A1 structure of Cholera Toxin. Different bioinformatics tools were used to obtain a model with a modification at amino acid 61 of the A1 subunit of the CT (CTA1), to which the Smp145 peptide was added, and both molecules were joined with a 13-glycine linker. As for the results obtained, the modification in CTA1 bound to the peptide produces a reduction in the toxicity of the molecule in in silico experiments; likewise, the prediction of the binding of Smp145 to the B-cell receptor suggests that the molecule is directed specifically to the BCR, decreasing its native enzymatic activity. The stereochemical evaluation showed that the generated model has a high number of adequately predicted residues. The ERRAT test evaluated the confidence with which it is possible to reject regions that exceed the error values; the generated model obtained a high score, which indicates a good structural resolution. Therefore, the design of the conjugated peptide in this work will allow us to proceed with its chemical synthesis and subsequently to use it in the mouse model of protection against meningitis caused by N. fowleri.
Keywords: immunology, vaccines, pathogens, infectious disease
Procedia PDF Downloads 92
16917 Structural Equation Modeling Semiparametric Truncated Spline Using Simulation Data
Authors: Adji Achmad Rinaldo Fernandes
Abstract:
SEM analysis is a complex multivariate analysis because it involves a number of exogenous and endogenous variables that are interconnected to form a model. The measurement model is divided into two, namely the reflective model and the formative model. Before carrying out further tests on SEM, there are assumptions that must be met, namely the linearity assumption, to determine the form of the relationship. There are three modeling approaches to path analysis, including parametric, nonparametric and semiparametric approaches. The aim of this research is to develop semiparametric SEM and obtain the best model. The data used in the research are secondary data serving as the basis for generating the simulation data. Simulation data were generated with various sample sizes of 100, 300, and 500. In the semiparametric SEM analysis, the forms of the relationship studied, namely linear and quadratic, were determined, with one and two knot points and various levels of error variance (EV = 0.5; 1; 5). There are three levels of closeness of relationship in the analysis of the measurement model, consisting of low (0.1-0.3), medium (0.4-0.6) and high (0.7-0.9) levels. The best model was obtained for the linear form of the X1-Y1 relationship. In the measurement model, a characteristic of the reflective model is obtained, namely that the higher the closeness of the relationship, the better the model obtained. The originality of this research is the development of semiparametric SEM, which has not been widely studied by researchers.
Keywords: semiparametric SEM, measurement model, structural model, reflective model, formative model
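For reference, a degree-one truncated spline with knots k₁ < k₂, of the kind named in the title, takes the standard form

\[ f(x) = \beta_0 + \beta_1 x + \sum_{j=1}^{2} \gamma_j\,(x - k_j)_+ , \qquad (x - k)_+ = \max\{0,\; x - k\}, \]

so the one- and two-knot cases studied above differ only in the number of truncated terms in the sum.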
Procedia PDF Downloads 43
16916 Contribution in Fatigue Life Prediction of Composite Material
Authors: Mostefa Bendouba, Djebli Abdelkader, Abdelkrim Aid, Mohamed Benguediab
Abstract:
The damage evolution mechanism is one of the important focuses of fatigue behaviour investigation of composite materials and is also the foundation for predicting the fatigue life of composite structures for engineering applications. This paper is dedicated to a damage investigation of a composite material under two-block loading cycle fatigue conditions. The loading sequence effect and the influence of the cycle ratio of the first stage on the cumulative fatigue life were studied herein. Two loading sequences, i.e., high-to-low and low-to-high cases, are considered in this paper. The proposed damage indicator is connected cycle by cycle to the S-N curve, and the experimental results are in agreement with model expectations. Some experimental research is used to validate this proposition.
Keywords: fatigue, damage accumulation, composite, evolution
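The paper's own cycle-by-cycle indicator is not reproduced in the abstract; as the baseline against which such indicators are usually compared, the classical Palmgren-Miner linear accumulation over a two-block sequence can be sketched as follows (the Basquin S-N constants are invented):

    # Hypothetical Palmgren–Miner sketch for two-block loading: damage from
    # block 1 plus block 2 reaches 1.0 at failure. S-N constants are invented.

    def cycles_to_failure(stress: float, A: float = 1e17, m: float = 8.0) -> float:
        """Basquin-type S-N curve: N = A * S^(-m)."""
        return A * stress ** (-m)

    def miner_damage(blocks):
        """blocks: iterable of (stress amplitude, applied cycles)."""
        return sum(n / cycles_to_failure(s) for s, n in blocks)

    # High-to-low sequence: 10,000 cycles at 30 MPa, then 50,000 at 20 MPa.
    print(miner_damage([(30.0, 1e4), (20.0, 5e4)]))  # failure predicted at >= 1.0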
Procedia PDF Downloads 502
16915 Prediction of Compressive Strength Using Artificial Neural Network
Authors: Vijay Pal Singh, Yogesh Chandra Kotiyal
Abstract:
Structures are a combination of various load-carrying members which transfer the loads from the superstructure to the foundation safely. At the design stage, the loading of the structure is defined, and appropriate material choices are made based upon their properties, mainly related to strength. The strength of materials keeps reducing with time because of many factors, such as environmental exposure and deformation caused by unpredictable external loads. Hence, to predict the strength of materials used in structures, various techniques are used. Among these techniques, Non-Destructive Techniques (NDT) are the ones that can be used to predict the strength without damaging the structure. In the present study, the compressive strength of concrete has been predicted using an Artificial Neural Network (ANN). The predicted strength was compared with the experimentally obtained actual compressive strength of concrete, and equations were developed for different models. A good correlation has been obtained between the strength predicted by these models and the experimental values. Further, correlations have been developed using two NDT techniques for prediction of strength by regression analysis. It was found that the percentage error is reduced by using the combined techniques in place of single techniques.
Keywords: rebound, ultra-sonic pulse, penetration, ANN, NDT, regression
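A minimal sketch of combining two NDT readings (rebound number and ultrasonic pulse velocity) in one regression, as described above; the linear form and the data are illustrative, not the paper's model:

    # Hypothetical sketch of combining two NDT measurements (rebound number R
    # and ultrasonic pulse velocity V) in one regression for strength; the
    # data and the linear form are illustrative, not the paper's model.
    import numpy as np

    R = np.array([28, 32, 35, 40, 44], dtype=float)        # rebound number
    V = np.array([3.9, 4.1, 4.3, 4.5, 4.7])                # pulse velocity, km/s
    fc = np.array([22.0, 27.5, 31.0, 38.5, 44.0])          # strength, N/mm²

    X = np.column_stack([np.ones_like(R), R, V])           # fc ≈ a + b·R + c·V
    coef, *_ = np.linalg.lstsq(X, fc, rcond=None)
    pred = X @ coef
    print("coefficients:", coef)
    print("mean % error:", np.mean(np.abs(pred - fc) / fc) * 100)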
Procedia PDF Downloads 428
16914 Prediction of All-Beta Protein Secondary Structure Using Garnier-Osguthorpe-Robson Method
Authors: K. Tejasri, K. Suvarna Vani, S. Prathyusha, S. Ramya
Abstract:
Proteins are chains of amino acids joined by peptide bonds. Many varying formations of the chains are possible due to the multiple combinations of amino acids and rotation at numerous positions along the chain. Protein structure prediction is one of the crucial goals worked towards by members of the bioinformatics and theoretical chemistry communities. Among the four different structure levels in proteins, we emphasize mainly the secondary level structure. Generally, secondary structure comprises alpha-helices and beta-sheets. The multi-class classification of imbalanced data is truly a challenge to overcome and has to be addressed for the beta strands. An imbalanced data distribution is one in which a couple of the classes have very limited training samples compared with the other classes. The secondary structure data are extracted from the protein primary sequence, and the beta-strands are predicted using suitable machine learning algorithms.
Keywords: proteins, secondary structure elements, beta-sheets, beta-strands, alpha-helices, machine learning algorithms
Procedia PDF Downloads 94
16913 Ontology-Driven Knowledge Discovery and Validation from Admission Databases: A Structural Causal Model Approach for Polytechnic Education in Nigeria
Authors: Bernard Igoche Igoche, Olumuyiwa Matthew, Peter Bednar, Alexander Gegov
Abstract:
This study presents an ontology-driven approach for knowledge discovery and validation from admission databases in Nigerian polytechnic institutions. The research aims to address the challenges of extracting meaningful insights from vast amounts of admission data and utilizing them for decision-making and process improvement. The proposed methodology combines the knowledge discovery in databases (KDD) process with a structural causal model (SCM) ontological framework. The admission database of Benue State Polytechnic Ugbokolo (Benpoly) is used as a case study. The KDD process is employed to mine and distill knowledge from the database, while the SCM ontology is designed to identify and validate the important features of the admission process. The SCM validation is performed using the conditional independence test (CIT) criteria, and an algorithm is developed to implement the validation process. The identified features are then used for machine learning (ML) modeling and prediction of admission status. The results demonstrate the adequacy of the SCM ontological framework in representing the admission process and the high predictive accuracies achieved by the ML models, with k-nearest neighbors (KNN) and support vector machine (SVM) achieving 92% accuracy. The study concludes that the proposed ontology-driven approach contributes to the advancement of educational data mining and provides a foundation for future research in this domain.
Keywords: admission databases, educational data mining, machine learning, ontology-driven knowledge discovery, polytechnic education, structural causal model
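A minimal sketch of the ML step, predicting admission status from SCM-validated features with KNN and SVM; the features and data are synthetic stand-ins, not the Benpoly database:

    # Hypothetical sketch: admission-status classification with KNN and SVM
    # on SCM-validated features; data are synthetic placeholders.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 4))                  # 4 validated features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # admitted / not admitted

    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    for name, clf in [("KNN", KNeighborsClassifier(5)), ("SVM", SVC())]:
        clf.fit(Xtr, ytr)
        print(name, "accuracy:", clf.score(Xte, yte))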
Procedia PDF Downloads 66
16912 Forecasting Optimal Production Program Using Profitability Optimization by Genetic Algorithm and Neural Network
Authors: Galal H. Senussi, Muamar Benisa, Sanja Vasin
Abstract:
In our business field today, one of the most important issues for any enterprise is cost minimization and profit maximization. A second issue is how to develop a strong and capable model that is able to give the desired forecasting of these two quantities. Much research deals with these issues using different methods. In this study, we developed a model for multi-criteria production program optimization, integrated with an Artificial Neural Network. Predicting the production cost and profit per unit of a product, dealing with two opposing functions at the same time, can be extremely difficult, especially if there is a great amount of conflicting information about production parameters. Feed-forward neural networks are suitable for generalization, which means that the network will generate a proper output in response to input it has never seen. Therefore, with a small set of examples, the network will adjust its weight coefficients so the input will generate a proper output. This essential characteristic is one of the most important abilities enabling this network to be used in a variety of problems spreading from engineering to finance, etc. From our results, as we will see later, feed-forward neural networks have a strong ability and capability to map inputs into desired outputs.
Keywords: project profitability, multi-objective optimization, genetic algorithm, Pareto set, neural networks
Procedia PDF Downloads 446
16911 Outcome of Using Penpat Pinyowattanasilp Equation for Prediction of 24-Hour Uptake, First and Second Therapeutic Doses Calculation in Graves’ Disease Patient
Authors: Piyarat Parklug, Busaba Supawattanaobodee, Penpat Pinyowattanasilp
Abstract:
The radioactive iodine thyroid uptake (RAIU) has been widely used to differentiate the cause of thyrotoxicosis and to guide treatment. Twenty-four-hour RAIU is routinely used to calculate the dose of radioactive iodine (RAI) therapy; however, a 2-day protocol is required. This study aims to evaluate a modified application of the Penpat Pinyowattanasilp equation, with exclusion of outlier data (3-hour RAIU less than 20% or more than 80%), to improve prediction of the 24-hour uptake. The equation predicts the 24-hour RAIU as P24RAIU = 32.5 + 0.702 × (3-hour RAIU). Separate first and second therapeutic doses were then calculated for Graves' disease patients. Methods: This was a retrospective study at the Faculty of Medicine Vajira Hospital in Bangkok, Thailand. Included were Graves' disease patients who visited the RAI clinic between January 2014 and March 2019. We divided subjects into 2 groups according to first and second therapeutic doses. Results: Our study had a total of 151 patients; the analysis covered 115 patients given a first RAI dose and 36 patients given a second RAI dose. The P24RAIU values are highly correlated with the actual 24-hour RAIU for the first and second therapeutic doses (r = 0.913, 95% CI = 0.876 to 0.939 and r = 0.806, 95% CI = 0.649 to 0.897). Bland-Altman plots show that the mean differences between predicted and actual 24-hour RAIU for the first and second doses were 2.14% (95% CI 0.83-3.46) and 1.37% (95% CI -1.41-4.14). The mean first actual and predicted therapeutic doses are 8.33 ± 4.93 and 7.38 ± 3.43 millicuries (mCi), respectively. The mean second actual and predicted therapeutic doses are 6.51 ± 3.96 and 6.01 ± 3.11 mCi, respectively. The predicted therapeutic doses are highly correlated with the actual doses for the first and second therapeutic doses (r = 0.907, 95% CI = 0.868 to 0.935 and r = 0.953, 95% CI = 0.909 to 0.976). Bland-Altman plots show that the mean differences between predicted and actual doses for the first and second doses were less than 1 mCi (-0.94 and -0.5 mCi). This modified equation application is simple to use in clinical practice, especially for patients with a 3-hour RAIU in the range of 20-80%, in a Thai population. Before use in other populations, this equation should be tested for correlation.
Keywords: equation, Graves' disease, prediction, 24-hour uptake
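The stated equation can be applied directly; for example, for a patient with a 3-hour uptake of 50%:

    # Worked example of the paper's prediction equation for 24-hour uptake.
    def p24raiu(raiu_3h: float) -> float:
        """Predicted 24-hour RAIU (%) from the 3-hour RAIU (%); applied in
        the study to 3-hour uptakes between 20% and 80%."""
        return 32.5 + 0.702 * raiu_3h

    print(p24raiu(50.0))  # 32.5 + 0.702 * 50 = 67.6 %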
Procedia PDF Downloads 139
16910 The Prediction Mechanism of M. cajuputi Extract from Lampung-Indonesia, as an Anti-Inflammatory Agent for COVID-19 by NFκβ Pathway
Authors: Agustyas Tjiptaningrum, Intanri Kurniati, Fadilah Fadilah, Linda Erlina, Tiwuk Susantiningsih
Abstract:
Coronavirus disease-19 (COVID-19) is still one of the major health problems. It can become a severe condition caused by a cytokine storm, in which several proinflammatory cytokines are released massively, destroying epithelial cells, and subsequently it can cause death. Anti-inflammatory agents can be used to decrease the number of severe COVID-19 cases. Melaleuca cajuputi is a plant that has antiviral, antibiotic, antioxidant, and anti-inflammatory activities. This study was carried out to analyze the predicted mechanism of the M. cajuputi extract from Lampung, Indonesia, as an anti-inflammatory agent for COVID-19. This study constructed a database of host target proteins involved in the inflammation process of COVID-19, using data retrieved from GeneCards with the keywords "SARS-CoV2", "inflammation", "cytokine storm", and "acute respiratory distress syndrome". Subsequently, protein-protein interactions were generated using Cytoscape version 3.9.1, which can predict the significant target proteins. Then, analyses of the Gene Ontology (GO) and KEGG pathways were conducted to identify the genes and components that play a role in COVID-19. The result of this study was 30 nodes representing significant proteins, namely NF-κβ, IL-6, IL-6R, IL-2RA, IL-2, IFN2, C3, TRAF6, IFNAR1, and DOX58. From the KEGG pathway, we obtained the result that NF-κβ has a role in the production of the proinflammatory cytokines that drive the COVID-19 cytokine storm. It is an important factor for macrophage transcription; therefore, it will induce the expression of inflammatory genes that encode proinflammatory cytokines such as IL-6, TNF-α, and IL-1β. In conclusion, blocking of NF-κβ is the predicted mechanism of the M. cajuputi extract as an anti-inflammatory agent for COVID-19.
Keywords: anti-inflammation, COVID-19, cytokine storm, NF-κβ, M. cajuputi
Procedia PDF Downloads 88
16909 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning
Authors: Hossein Havaeji, Tony Wong, Thien-My Dao
Abstract:
1. Research Problems and Research Objectives: A Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) is a system using BT to drive SCS transparency, security, durability, and process integrity, as SCS data are not always visible, available, or trusted. The costs of operating BT in the SCS are a common problem in several organizations. The costs must be estimated, as they can impact existing cost control strategies. To account for system and deployment costs, it is necessary to overcome the following hurdle: the costs of developing and running a BT in an SCS are not yet clear in most cases. Many industries aiming to use BT pay special attention to the importance of the BT installation cost, which has a direct impact on the total costs of the SCS. Predicting the BT installation cost in an SCS may help managers decide whether BT offers an economic advantage. The purpose of this research is to identify the main BT installation cost components in SCS needed for deeper cost analysis. We then identify and categorize the main groups of cost components in more detail to utilize them in the prediction process. The second objective is to determine the suitable Supervised Learning technique in order to predict the costs of developing and running BT in an SCS in a particular case study. The last aim is to investigate how the running BT cost can be included in the total cost of the SCS. 2. Work Performed: Applied successfully in various fields, Supervised Learning is a method to frame the data, treat the data, and train the chosen model. It is a learning model directed at making predictions of an outcome measurement based on a set of unseen input data. The following steps must be conducted to meet the objectives of this study. The first step is a literature review to identify the different cost components of BT installation in SCS. Based on the literature review, we choose Supervised Learning methods which are suitable for BT installation cost prediction in SCS. According to the literature review, some Supervised Learning algorithms which provide us with a powerful tool to classify BT installation components and predict BT installation cost are the Support Vector Regression (SVR) algorithm, the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). Choosing a case study to feed data into the models is the third step. Finally, we will propose the best predictive performance to find the minimum BT installation costs in SCS. 3. Expected Results and Conclusion: This study intends to propose a cost prediction of BT installation in SCS with the help of Supervised Learning algorithms. In the first attempt, we will select a case study in the field of BT-enabled SCS and then use some Supervised Learning algorithms to predict BT installation cost in SCS. We continue to find the best predictive performance for developing and running BT in SCS. Finally, the paper will be presented at the conference.
Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning
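A minimal sketch of the SVR alternative named above, with placeholder cost components as features (not case-study data):

    # Hypothetical SVR sketch for BT installation cost prediction; the cost
    # components (features) and figures are placeholders, not case-study data.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(4)
    # Assumed features: e.g. node count, integration effort, licence tier.
    X = rng.uniform(0, 1, size=(50, 3))
    cost = 100 + 80 * X[:, 0] + 40 * X[:, 1] + rng.normal(0, 5, 50)  # k$

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0))
    model.fit(X, cost)
    print(model.predict(X[:3]))  # predicted installation costs (k$)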
Procedia PDF Downloads 122
16908 Improvement in Blast Furnace Performance Using Softening - Melting Zone Profile Prediction Model at G Blast Furnace, Tata Steel Jamshedpur
Authors: Shoumodip Roy, Ankit Singhania, K. R. K. Rao, Ravi Shankar, M. K. Agarwal, R. V. Ramna, Uttam Singh
Abstract:
The productivity of a blast furnace and the quality of the hot metal produced are significantly dependent on the smoothness and stability of furnace operation. The permeability of the furnace bed, as well as the gas flow pattern, influences the steady control of the process parameters. The softening-melting zone that is formed inside the furnace contributes largely to the distribution of the gas flow and the bed permeability. A better shape of the softening-melting zone enhances the performance of the blast furnace, thereby reducing fuel rates and improving furnace life. Therefore, a predictive model of the softening-melting zone profile can be utilized to control and improve furnace operation. The shape of the softening-melting zone depends upon the physical and chemical properties of the agglomerates and iron ore charged in the furnace. Variations in the agglomerate proportion in the burden at G Blast Furnace disturbed furnace stability. During such circumstances, it was found that a W-shaped softening-melting zone profile had formed inside the furnace. The formation of the W-shaped zone resulted in poor bed permeability and non-uniform gas flow. There was a significant increase in heat loss at the lower zone of the furnace, the fuel demand increased, and a huge production loss was incurred. Therefore, visibility of the softening-melting zone profile was necessary in order to proactively optimize the process parameters and thereby operate the furnace smoothly. Using stave temperatures, a model was developed that predicted the shape of the softening-melting zone inside the furnace. It was observed that the furnace operated smoothly with an inverse V-shaped zone and vice versa with a W-shaped one. This model helped to control heat loss, optimize burden distribution and lower the fuel rate at G Blast Furnace, TSL Jamshedpur. As a result of furnace stabilization, productivity increased by 10% and the fuel rate was reduced by 80 kg/thm. Details of the process are discussed in this paper.
Keywords: agglomerate, blast furnace, permeability, softening-melting
Procedia PDF Downloads 253
16907 Use of Real Time Ultrasound for the Prediction of Carcass Composition in Serrana Goats
Authors: Antonio Monteiro, Jorge Azevedo, Severiano Silva, Alfredo Teixeira
Abstract:
The objective of this study was to compare carcass and in vivo real-time ultrasound (RTU) measurements and assess their capacity to predict the composition of Serrana goats up to 40% of maturity. Twenty-one females (11.1 ± 3.97 kg) and twenty-one males (15.6 ± 5.38 kg) were used to make in vivo measurements with a 5 MHz probe (ALOKA 500V scanner) at the 9th-10th and 10th-11th thoracic vertebrae (uT910 and uT1011, respectively), at the 1st-2nd, 3rd-4th, and 4th-5th lumbar vertebrae (uL12, uL34 and uL45, respectively) and also at the 3rd-4th sternebrae (EEST). Images were recorded of the RTU measurements of Longissimus thoracis et lumborum muscle (LTL) depth (EM), width (LM), perimeter (PM), area (AM) and subcutaneous fat thickness (SFD) above the LTL, as well as the depth of tissues of the sternum (EEST) between the 3rd-4th sternebrae. All RTU images were analyzed using the ImageJ software. After slaughter, the carcasses were stored at 4 °C for 24 h. After this period, the carcasses were divided, and the left half was entirely dissected into muscle, dissected fat (subcutaneous fat plus intermuscular fat) and bone. Prior to the dissection, measurements equivalent to those obtained in vivo with RTU were recorded. Using Statistica 5, correlation and regression analyses were performed. The prediction of carcass composition was achieved by a stepwise regression procedure with live weight and RTU measurements, with and without transformation of the variables to the same dimension. The RTU and carcass measurements, except for the SFD measurements, showed high correlation (r > 0.60, P < 0.001). The RTU measurements and the live weight showed the ability to predict carcass composition for muscle (R² = 0.99, P < 0.001), subcutaneous fat (R² = 0.41, P < 0.001), intermuscular fat (R² = 0.84, P < 0.001), dissected fat (R² = 0.71, P < 0.001) and bone (R² = 0.94, P < 0.001). The transformation of variables allowed a slight increase in precision, but at the cost of an increase in the number of variables, with the exception of subcutaneous fat prediction. In vivo measurement by RTU can be applied to predict kid goat carcass composition from 5 RTU measurements and the live weight.
Keywords: carcass, goats, real time, ultrasound
Procedia PDF Downloads 261
16906 Towards a Measurement-Based E-Government Portals Maturity Model
Authors: Abdoullah Fath-Allah, Laila Cheikhi, Rafa E. Al-Qutaish, Ali Idri
Abstract:
The emerging e-government concept transforms the way in which citizens deal with their governments. Thus, citizens can execute the intended services online anytime and anywhere. This results in great benefits for both the governments (reducing the number of officers) and the citizens (more flexibility and time saving). Therefore, building a maturity model to assess e-government portals becomes desirable to help in the improvement process of such portals. This paper aims at proposing an e-government maturity model based on the measurement of the presence of best practices. The main benefit of such a maturity model is to provide a way to rank an e-government portal based on the best practices used, and also to give a set of recommendations for reaching the higher stages of the maturity model.
Keywords: best practices, e-government portal, maturity model, quality model
Procedia PDF Downloads 338
16905 An Empirical Evaluation of Performance of Machine Learning Techniques on Imbalanced Software Quality Data
Authors: Ruchika Malhotra, Megha Khanna
Abstract:
The development of change prediction models can help software practitioners in planning testing and inspection resources at early phases of software development. However, a major challenge faced during the training process of any classification model is the imbalanced nature of software quality data. A dataset with very few instances of the minority outcome categories leads to an inefficient learning process, and a classification model developed from imbalanced data generally does not predict these minority categories correctly. Thus, for a given dataset, a minority of the classes may be change-prone whereas a majority of the classes may be non-change-prone. This study explores various alternatives for adeptly handling imbalanced software quality data using different sampling methods and effective MetaCost learners. The study also analyzes and justifies the use of different performance metrics while dealing with imbalanced data. In order to empirically validate the different alternatives, the study uses change data from three application packages of an open-source Android data set and evaluates the performance of six different machine learning techniques. The results of the study indicate extensive improvement in the performance of the classification models when using a resampling method and robust performance measures.
Keywords: change proneness, empirical validation, imbalanced learning, machine learning techniques, object-oriented metrics
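A minimal sketch of one of the resampling alternatives discussed, random oversampling of the minority (change-prone) class, with synthetic stand-in data:

    # Hypothetical sketch of random oversampling for an imbalanced change
    # data set before training; the data are synthetic stand-ins.
    import numpy as np
    from sklearn.utils import resample

    rng = np.random.default_rng(5)
    X = rng.normal(size=(100, 6))                  # 6 object-oriented metrics
    y = (rng.uniform(size=100) < 0.1).astype(int)  # ~10% change-prone

    # Oversample the minority class up to the majority class size.
    X_min, X_maj = X[y == 1], X[y == 0]
    X_min_up = resample(X_min, replace=True, n_samples=len(X_maj), random_state=0)
    X_bal = np.vstack([X_maj, X_min_up])
    y_bal = np.array([0] * len(X_maj) + [1] * len(X_min_up))
    print("class counts after resampling:", np.bincount(y_bal))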
Procedia PDF Downloads 418