Search results for: predicting models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7254

3204 [Keynote Talk]: Machining Parameters Optimization with Genetic Algorithm

Authors: Dejan Tanikić, Miodrag Manić, Jelena Đoković, Saša Kalinović

Abstract:

This paper deals with the determination of the optimum machining parameters, according to the measured and modelled data of the cutting temperature and surface roughness, during the turning of AISI 4140 steel. High cutting temperatures are unwanted occurrences in the metal cutting process and negatively affect the quality of the machined part. The machining experiments were performed using different cutting regimes (cutting speed, feed rate and depth of cut) and different values of workpiece hardness, which yield different values of the measured cutting temperature as well as the measured surface roughness. The temperature and surface roughness data were then modelled using Response Surface Methodology (RSM). The obtained RSM models are used in the optimization of the cutting regimes with the Genetic Algorithm (GA) tool, which enables the metal cutting process to run under optimum conditions.
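The RSM-plus-GA workflow the abstract describes can be sketched as follows. The quadratic response surface and its coefficients below are invented placeholders (not the paper's fitted model), and the parameter bounds are assumed illustrative ranges:

```python
import random

# Assumed bounds: cutting speed [m/min], feed rate [mm/rev], depth of cut [mm].
BOUNDS = [(80.0, 250.0), (0.05, 0.4), (0.5, 3.0)]

def rsm_temperature(v, f, d):
    """Hypothetical second-order RSM response (placeholder coefficients)."""
    return 200 + 1.2*v + 350*f + 60*d + 0.002*v*v + 400*f*f + 8*d*d + 0.5*v*f

def genetic_minimise(objective, bounds, pop_size=40, generations=60, seed=1):
    """A minimal real-coded GA: truncation selection, blend crossover, mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    def clip(x):
        return [min(max(xi, lo), hi) for xi, (lo, hi) in zip(x, bounds)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda x: objective(*x))
        elite = scored[: pop_size // 2]             # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()                        # blend crossover
            child = [w*ai + (1 - w)*bi for ai, bi in zip(a, b)]
            if rng.random() < 0.2:                  # Gaussian mutation
                i = rng.randrange(len(child))
                lo, hi = bounds[i]
                child[i] += rng.gauss(0, 0.1 * (hi - lo))
            children.append(clip(child))
        pop = elite + children
    return min(pop, key=lambda x: objective(*x))

best = genetic_minimise(rsm_temperature, BOUNDS)
```

In the paper's setting, `rsm_temperature` would be replaced by the fitted RSM models for temperature and roughness (or a weighted combination of both).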

Keywords: genetic algorithms, machining parameters, response surface methodology, turning process

Procedia PDF Downloads 169
3203 Low-Cost, Portable Optical Sensor with Regression Algorithm Models for Accurate Monitoring of Nitrites in Environments

Authors: David X. Dong, Qingming Zhang, Meng Lu

Abstract:

Nitrites enter waterways as runoff from croplands and are discharged from many industrial sites. Excessive nitrite inputs to water bodies lead to eutrophication. On-site rapid detection of nitrite is of increasing interest for managing fertilizer application and monitoring water source quality. Existing methods for detecting nitrites use spectrophotometry, ion chromatography, electrochemical sensors, ion-selective electrodes, chemiluminescence, and colorimetric methods. However, these methods either suffer from high cost or provide low measurement accuracy due to their poor selectivity to nitrites. Therefore, it is desired to develop an accurate and economical method to monitor nitrites in environments. We report a low-cost optical sensor, in conjunction with a machine learning (ML) approach, to enable high-accuracy detection of nitrites in water sources. The sensor works on the principle of measuring the molecular absorption of nitrites at three narrowband wavelengths (295 nm, 310 nm, and 357 nm) in the ultraviolet (UV) region. These wavelengths are chosen because they have relatively high sensitivity to nitrites; low-cost light-emitting diodes (LEDs) and photodetectors are also available at these wavelengths. A regression model is built, trained, and utilized to minimize the cross-sensitivities of these wavelengths to the same analyte, thus achieving precise and reliable measurements with various interference ions. The measured absorbance data is input to the trained model, which provides a nitrite concentration prediction for the sample. The sensor is built with i) a miniature quartz cuvette as the test cell that contains a liquid sample under test, ii) three low-cost UV LEDs placed on one side of the cell as light sources, with each LED providing a narrowband light, and iii) a photodetector with a built-in amplifier and an analog-to-digital converter placed on the other side of the test cell to measure the power of transmitted light.
This simple optical design allows measuring the absorbance data of the sample at the three wavelengths. To train the regression model, absorbances of nitrite ions and their combination with various interference ions are first obtained at the three UV wavelengths using a conventional spectrophotometer. Then, the spectrophotometric data are inputs to different regression algorithm models for training and evaluating high-accuracy nitrite concentration prediction. Our experimental results show that the proposed approach enables instantaneous nitrite detection within several seconds. The sensor hardware costs about one hundred dollars, which is much cheaper than a commercial spectrophotometer. The ML algorithm helps to reduce the average relative errors to below 3.5% over a concentration range from 0.1 ppm to 100 ppm of nitrites. The sensor has been validated to measure nitrites at three sites in Ames, Iowa, USA. This work demonstrates an economical and effective approach to the rapid, reagent-free determination of nitrites with high accuracy. The integration of the low-cost optical sensor and ML data processing can find a wide range of applications in environmental monitoring and management.
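The regression step can be sketched as below: Beer-Lambert absorbances at the three wavelengths are simulated (the absorptivity coefficients and the single interfering ion are invented, not the paper's calibration data), and a linear model maps the three absorbance readings to nitrite concentration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical absorptivities at 295, 310 and 357 nm (arbitrary units).
EPS_NITRITE = np.array([0.052, 0.047, 0.021])
EPS_INTERF = np.array([0.030, 0.005, 0.001])   # one assumed interfering ion

# Simulated training set: absorbance = sum of species contributions + noise.
c_nitrite = rng.uniform(0.1, 100, 200)          # ppm, the reported range
c_interf = rng.uniform(0, 50, 200)
A = np.outer(c_nitrite, EPS_NITRITE) + np.outer(c_interf, EPS_INTERF)
A += rng.normal(0, 1e-3, A.shape)

# Least-squares regression from the three absorbances (plus intercept).
X = np.hstack([A, np.ones((len(A), 1))])
coef, *_ = np.linalg.lstsq(X, c_nitrite, rcond=None)

def predict(absorbances):
    """Predict nitrite concentration from a 3-wavelength absorbance reading."""
    return np.dot(absorbances, coef[:3]) + coef[3]
```

Because two species are resolved from three wavelengths, the interference term can be regressed away, which is the mechanism behind the reported selectivity gain.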

Keywords: optical sensor, regression model, nitrites, water quality

Procedia PDF Downloads 56
3202 FZP Design Considering Spherical Wave Incidence

Authors: Sergio Pérez-López, Daniel Tarrazó-Serrano, José M. Fuster, Pilar Candelas, Constanza Rubio

Abstract:

Fresnel Zone Plates (FZPs) are widely used in many areas, such as optics, microwaves or acoustics. In the design of FZPs, plane wave incidence is typically assumed, but that is not usually the case in ultrasound, especially in applications where a piston emitter is placed at a certain distance from the lens. In these cases, having control of the focal distance is very important, and with the usual Fresnel equation a focal displacement from the theoretical distance is observed due to the plane wave assumption. In this work, a comparison between an FZP designed for plane wave incidence and an FZP designed for a point source is presented for the case of a piston emitter. The influence of the main parameters of the piston on the final focusing profile has been studied. Numerical models and experimental results are shown, and they prove that when spherical wave incidence is considered for the piston case, it is possible to have fine control of the focal distance in comparison with the classical design method.
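The two design rules can be sketched numerically. Under plane-wave incidence the zone edges follow r_n = sqrt(n·λ·F + (n·λ/2)²); for a point source at finite distance z_s, the edge radius instead satisfies the path-length condition sqrt(r² + z_s²) + sqrt(r² + F²) = z_s + F + n·λ/2, solved here by bisection. The wavelength and distances in the check are arbitrary illustrative values, not the paper's experimental parameters:

```python
import math

def zone_radius_plane(n, wavelength, focal):
    """Fresnel zone edge radius assuming plane-wave incidence."""
    return math.sqrt(n * wavelength * focal + (n * wavelength / 2.0) ** 2)

def zone_radius_point(n, wavelength, focal, source_dist):
    """Zone edge radius for a point source at source_dist, by bisection on
    sqrt(r^2 + zs^2) + sqrt(r^2 + F^2) = zs + F + n*lambda/2."""
    target = source_dist + focal + n * wavelength / 2.0
    lo, hi = 0.0, 10 * zone_radius_plane(n, wavelength, focal)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        path = math.hypot(mid, source_dist) + math.hypot(mid, focal)
        if path < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a finite source distance the zones tighten (the point-source radii are smaller than the plane-wave ones), which is what shifts the focus when the plane-wave design is used with a nearby piston; as the source recedes, the two designs coincide.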

Keywords: focusing, Fresnel zone plates, FZP, ultrasound

Procedia PDF Downloads 229
3201 Natural Emergence of a Core Structure in Networks via Clique Percolation

Authors: A. Melka, N. Slater, A. Mualem, Y. Louzoun

Abstract:

Networks are often presented as containing a “core” and a “periphery.” The existence of a core suggests that some vertices are central and form the skeleton of the network, to which all other vertices are connected. An alternative view of graphs is through communities. Multiple measures have been proposed for dense communities in graphs, the most classical being k-cliques, k-cores, and k-plexes, all presenting groups of tightly connected vertices. We here show that the edge number thresholds for such communities to emerge and for their percolation into a single dense connectivity component are very close, in all networks studied. These percolating cliques produce a natural core and periphery structure. This result is generic and is tested in configuration models and in real-world networks. This is also true for k-cores and k-plexes. Thus, the emergence of this connectedness among communities leading to a core is not dependent on some specific mechanism but a direct result of the natural percolation of dense communities.
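The clique-percolation mechanism the abstract relies on can be sketched for k = 3 with the standard library: k-cliques (triangles) that share k − 1 = 2 vertices are merged into one community via union-find. This is a toy illustration, not the authors' code:

```python
from itertools import combinations

def triangles(edges):
    """All 3-cliques (triangles) in an undirected graph given as an edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return [frozenset(t) for t in combinations(sorted(adj), 3)
            if t[1] in adj[t[0]] and t[2] in adj[t[0]] and t[2] in adj[t[1]]]

def clique_percolation_3(edges):
    """k-clique communities for k = 3: adjacent triangles percolate."""
    tris = triangles(edges)
    parent = list(range(len(tris)))
    def find(i):                              # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(len(tris)), 2):
        if len(tris[i] & tris[j]) == 2:       # share k-1 vertices
            parent[find(i)] = find(j)
    comms = {}
    for i, t in enumerate(tris):
        comms.setdefault(find(i), set()).update(t)
    return [frozenset(c) for c in comms.values()]
```

Tracking when the communities returned here collapse into a single component, as edges are added, is the percolation transition the paper measures.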

Keywords: cliques, core structure, percolation, phase transition

Procedia PDF Downloads 151
3200 Prediction of Outcome after Endovascular Thrombectomy for Anterior and Posterior Ischemic Stroke: ASPECTS on CT

Authors: Angela T. H. Kwan, Wenjun Liang, Jack Wellington, Mohammad Mofatteh, Thanh N. Nguyen, Pingzhong Fu, Juanmei Chen, Zile Yan, Weijuan Wu, Yongting Zhou, Shuiquan Yang, Sijie Zhou, Yimin Chen

Abstract:

Background: Endovascular Therapy (EVT)—in the form of mechanical thrombectomy—following intravenous thrombolysis is the gold standard treatment for patients with acute ischemic stroke (AIS) due to large vessel occlusion (LVO). It is well established that an ASPECTS ≥ 7 is associated with an increased likelihood of positive post-EVT outcomes, as compared to an ASPECTS < 7. There is also prognostic utility in coupling posterior circulation ASPECTS (pc-ASPECTS) with magnetic resonance imaging for evaluating the post-EVT functional outcome. However, the value of pc-ASPECTS applied to CT must be explored further to determine its usefulness in predicting functional outcomes following EVT. Objective: In this study, we aimed to determine whether pc-ASPECTS on CT can predict post-EVT functional outcomes among patients with AIS due to LVO. Methods: A total of 247 consecutive patients aged 18 and over receiving EVT for LVO-related AIS were recruited into a prospective database. The data were retrospectively analyzed from March 2019 to February 2022 from two comprehensive tertiary care stroke centers: Foshan Sanshui District People’s Hospital and First People's Hospital of Foshan in China. Patient parameters included EVT within 24 hrs of symptom onset, premorbid modified Rankin Scale (mRS) ≤ 2, presence of distal and terminal cerebral blood vessel occlusion, and a subsequent 24–72-hour post-stroke onset CT scan. Univariate comparisons were performed using the Fisher exact test or χ2 test for categorical variables and the Mann–Whitney U test for continuous variables. A p-value of ≤ 0.05 was considered statistically significant. Results: A total of 247 patients met the inclusion criteria; however, 3 were excluded due to the absence of post-CTs and 8 for pre-EVT ASPECTS < 7. Overall, 236 individuals were examined: 196 anterior circulation ischemic strokes and 40 posterior strokes of basilar artery occlusion.
We found that both baseline post- and pc-ASPECTS ≥ 7 serve as strong positive markers of favorable outcomes at 90 days post-EVT. Moreover, lower rates of inpatient mortality/hospice discharge, 90-day mortality, and 90-day poor outcome were observed. Moreover, patients in the post-ASPECTS ≥ 7 anterior circulation group had shorter door-to-recanalization time (DRT), puncture-to-recanalization time (PRT), and last known normal-to-puncture-time (LKNPT). Conclusion: Patients of anterior and posterior circulation ischemic strokes with baseline post- and pc-ASPECTS ≥ 7 may benefit from EVT.
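As an illustration of the univariate testing step, below is a standard-library sketch of the two-sided Fisher exact test on a 2×2 outcome table (in practice one would use a statistics package such as scipy.stats.fisher_exact; the table values in the check are invented, not the study's counts):

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher exact test p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same margins
    that are no more probable than the observed table."""
    n = a + b + c + d
    r1, c1 = a + b, a + c                 # fixed margins
    def p_of(x):
        return comb(r1, x) * comb(n - r1, c1 - x) / comb(n, c1)
    p_obs = p_of(a)
    lo = max(0, c1 - (n - r1))
    hi = min(r1, c1)
    return sum(p_of(x) for x in range(lo, hi + 1) if p_of(x) <= p_obs + 1e-12)
```

For example, a perfectly balanced table yields p = 1, while a strongly diagonal table (good outcomes concentrated in the ASPECTS ≥ 7 group, say) yields a very small p.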

Keywords: endovascular therapy, thrombectomy, large vessel occlusion, cerebral ischemic stroke, ASPECTS

Procedia PDF Downloads 87
3199 Theoretical Calculation of Wingtip Devices for Agricultural Aircraft

Authors: Hashim Bashir

Abstract:

The vortices generated at the edges of an aircraft wing are called wingtip vortices. Wingtip vortices are associated with induced drag, which is responsible for nearly 50% of an aircraft's total drag and can be reduced through modifications to the wing tip. Some wingtip designs displace the vortices outwards, diminishing the induced drag. For agricultural aircraft, the wingtip vortex position is particularly important while spreading products over a plantation. In this work, theoretical calculations were made in order to study the influence of the following wingtip types on the aerodynamic characteristics and vortex position of a Sudanese agricultural aircraft: delta tip, winglet and down-curved tip. The down-curved tip was best for total drag reduction but poor with regard to vortex position. The delta tip gave moderate improvement in both the aerodynamic characteristics and the vortex position. The winglet had a better vortex position and lift increment, but caused an undesirable increase in the wing root bending moment. Overall, the winglet showed the best development potential for agricultural aircraft.

Keywords: wing tip device, wing tip vortex, agricultural aircraft, winglet

Procedia PDF Downloads 292
3198 Microwave-Assisted Inorganic Salt Pretreatment of Sugarcane Leaf Waste

Authors: Preshanthan Moodley, E. B. Gueguim-Kana

Abstract:

The objective of this study was to develop a method to pretreat sugarcane leaf waste using microwave-assisted (MA) inorganic salt. The effects of the process parameters of salt concentration, microwave power intensity and pretreatment time on the reducing sugar yield from enzymatically hydrolysed sugarcane leaf waste were investigated. Pretreatment models based on MA-NaCl, MA-ZnCl2 and MA-FeCl3 were developed. A maximum reducing sugar yield of 0.406 g/g was obtained with 2 M FeCl3 at 700 W for 3.5 min. Scanning electron microscopy (SEM) and Fourier transform infrared (FTIR) analysis showed major changes in the lignocellulosic structure after MA-FeCl3 pretreatment, with 71.5% hemicellulose solubilization. This pretreatment was further assessed on sorghum leaves and Napier grass under optimal MA-FeCl3 conditions. Increases in sugar yield of 2-fold and 3.1-fold, respectively, were observed compared to previous reports. This pretreatment was highly effective for enhancing the enzymatic saccharification of lignocellulosic biomass.

Keywords: acid, pretreatment, salt, sugarcane leaves

Procedia PDF Downloads 434
3197 Fractal Analysis of Polyacrylamide-Graphene Oxide Composite Gels

Authors: Gülşen Akın Evingür, Önder Pekcan

Abstract:

The fractal analysis is a bridge between the microstructure and the macroscopic properties of gels. Fractal structure is usually invoked to describe the complexity of crosslinked molecules, and the complexity of gel systems is characterized by the fractal dimension (Df). In this study, polyacrylamide-graphene oxide (GO) composite gels were prepared by free radical crosslinking copolymerization. The composite gels were investigated at various GO contents during gelation using the fluorescence technique, and the analysis was applied to estimate the Df values of the composite gels. The fractal dimensions of the polymer composite gels were estimated from the power-law exponent values using scaling models. In addition, we aimed to present the geometrical distribution of GO during gelation. We observed that as gelation proceeded, the GO plates first organized themselves into a 3D percolation cluster with Df = 2.52, then moved to diffusion-limited clusters with Df = 1.4, and finally lined up into a Von Koch curve with random interval with Df = 1.14. Our goal is to interpret the low conductivity and/or broad forbidden gap of GO-doped PAAm gels through the distribution of GO in the final form of the produced gel.
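The power-law estimation step amounts to a log-log slope fit; a minimal sketch with synthetic scaling data standing in for the fluorescence-derived measurements (the prefactor 3 is arbitrary, the exponent 2.52 is the Df reported for the percolation-cluster stage):

```python
import numpy as np

def fractal_dimension(scales, counts):
    """Estimate a fractal dimension Df as the power-law exponent in
    N(r) ~ r^(-Df), from the slope of a log-log least-squares fit."""
    slope, _intercept = np.polyfit(np.log(scales), np.log(counts), 1)
    return -slope

# Synthetic scaling data following N(r) = 3 * r^(-2.52).
r = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
N = 3.0 * r ** (-2.52)
```

With real data the scatter of the log-log points around the fitted line indicates how well a single scaling regime describes the cluster.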

Keywords: composite gels, fluorescence, fractal, scaling

Procedia PDF Downloads 287
3196 Thermodynamic Properties of Binary Mixtures of 1, 2-Dichloroethane with Some Polyethers: DISQUAC Calculations Compared with Dortmund UNIFAC Results

Authors: F. Amireche, I. Mokbel, J. Jose, B. F. Belaribi

Abstract:

The experimental vapour-liquid equilibria (VLE) at isothermal conditions and the excess molar Gibbs energies GE were determined for three binary mixtures: 1,2-dichloroethane + ethylene glycol dimethyl ether, + diethylene glycol dimethyl ether, or + diethylene glycol diethyl ether, at ten temperatures ranging from 273 to 353.15 K. An accurate static apparatus was employed for these measurements. The VLE data were reduced using the Redlich-Kister equation, taking into consideration the vapour-phase non-ideality in terms of the second molar virial coefficient. The experimental data were compared to the results predicted with the DISQUAC and Dortmund UNIFAC group contribution models for the total pressures P, the excess molar Gibbs energies GE, and the excess molar enthalpies HE.
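The Redlich-Kister reduction step is a linear least-squares fit of the excess-property polynomial GE = x1·x2·Σ A_i (x1 − x2)^i. A sketch with synthetic GE values generated from invented coefficients (not the measured data of this work):

```python
import numpy as np

def redlich_kister_eval(x1, coeffs):
    """G^E = x1*x2 * sum_i A_i (x1 - x2)^i."""
    x1 = np.asarray(x1)
    x2 = 1.0 - x1
    return x1 * x2 * sum(a * (x1 - x2) ** i for i, a in enumerate(coeffs))

def redlich_kister_fit(x1, ge, order=3):
    """Fit the Redlich-Kister coefficients A_i by linear least squares."""
    x1 = np.asarray(x1)
    x2 = 1.0 - x1
    basis = np.stack([(x1 - x2) ** i for i in range(order)], axis=1)
    design = (x1 * x2)[:, None] * basis
    coeffs, *_ = np.linalg.lstsq(design, ge, rcond=None)
    return coeffs

# Synthetic G^E data (J/mol) from invented coefficients, then refit.
x1_grid = np.linspace(0.05, 0.95, 19)
A_true = np.array([1200.0, -300.0, 150.0])
ge_data = redlich_kister_eval(x1_grid, A_true)
A_fit = redlich_kister_fit(x1_grid, ge_data)
```

In the actual data reduction the GE values come from the measured P-x data with the virial correction applied, but the polynomial fit itself has exactly this form.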

Keywords: DISQUAC model, Dortmund UNIFAC model, 1,2-dichloroethane, excess molar Gibbs energies GE, polyethers, VLE

Procedia PDF Downloads 256
3195 Influence of Major Axis on the Aerodynamic Characteristics of Elliptical Section

Authors: K. B. Rajasekarababu, J. Karthik, G. Vinayagamurthy

Abstract:

This paper is intended to explain the influence of the major axis on the aerodynamic characteristics of elliptical sections. Many engineering applications, such as offshore structures, bridge piers, civil structures and pipelines, can be modelled as circular cylinders, but flow over complex bodies such as submarines, elliptical wings, fuselages, missiles and rotor blades involves parameters such as the axis ratio that can influence the flow characteristics of the wake and the nature of separation. The influence of the major axis on the flow characteristics of elliptical sections is examined both experimentally and computationally in this study. For this research, four elliptical models with varying major axes (AR = 1, 4, 6, 10) are analysed. Experimental work was conducted in a subsonic wind tunnel. Furthermore, the flow characteristics of the elliptical models are predicted with the k-ε turbulence model using commercial CFD packages, employing a pressure-based transient solver with standard wall conditions. The analysis can be extended to the estimation and comparison of drag coefficients and the fatigue analysis of elliptical sections.

Keywords: elliptical section, major axis, aerodynamic characteristics, k-ε turbulence model

Procedia PDF Downloads 413
3194 Prediction of Rolling Forces and Real Exit Thickness of Strips in the Cold Rolling by Using Artificial Neural Networks

Authors: M. Heydari Vini

Abstract:

There is a complicated relation between the effective input parameters of cold rolling and the output rolling force and exit thickness of the strips. In many mathematical models, the effect of some rolling parameters has been ignored, so the outputs lack the desired accuracy. On the other hand, there is a particular relation among the input thickness of the strips, the width of the strips, the rolling speeds, the mandrel tensions and the required exit thickness of the strips, on one side, and the rolling force and the real exit thickness of the rolled strip on the other. In this paper, the effective parameters of the cold rolling process were modelled using an artificial neural network, based on the optimum network obtained with a program written in MATLAB. It is shown that predicting the rolling stand parameters for strips with different properties and new dimensions, from previously rolled strips, with an artificial neural network is feasible.
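The network described (inputs: entry thickness, width, speed, tension; output: rolling force) can be sketched as a small one-hidden-layer model trained by backpropagation. The synthetic force relation and parameter ranges below are invented stand-ins for the mill data, and the paper's MATLAB network is simply paraphrased in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand data (assumed ranges): entry thickness [mm], width [mm],
# speed [m/s], tension [kN]; target: a made-up smooth "rolling force".
X = rng.uniform([1.0, 500, 1.0, 10], [4.0, 1500, 15.0, 60], size=(300, 4))
y = (0.8 * X[:, 0] * np.sqrt(X[:, 1]) + 0.05 * X[:, 2] - 0.02 * X[:, 3])[:, None]

# Normalise inputs and outputs so the small network trains stably.
Xn = (X - X.mean(0)) / X.std(0)
yn = (y - y.mean()) / y.std()

W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)   # 4-8-1 tanh network
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

losses, lr = [], 0.05
for _ in range(500):
    h = np.tanh(Xn @ W1 + b1)                # forward pass
    pred = h @ W2 + b2
    err = pred - yn
    losses.append(float((err ** 2).mean()))
    gW2 = h.T @ err / len(Xn); gb2 = err.mean(0)     # backpropagation
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = Xn.T @ dh / len(Xn); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

The same structure extends to a second output neuron for the real exit thickness, as in the abstract.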

Keywords: cold rolling, artificial neural networks, rolling force, real rolled thickness of strips

Procedia PDF Downloads 483
3193 An ALM Matrix Completion Algorithm for Recovering Weather Monitoring Data

Authors: Yuqing Chen, Ying Xu, Renfa Li

Abstract:

The development of matrix completion theory provides new approaches for data gathering in Wireless Sensor Networks (WSN). The existing matrix completion algorithms for WSN mainly consider how to reduce the sampling number without considering the real-time performance when recovering the data matrix. In order to guarantee the recovery accuracy and simultaneously reduce the recovery time, we propose a new ALM algorithm to recover weather monitoring data. Extensive experiments have been carried out to investigate the performance of the proposed ALM algorithm using different parameter settings, sampling rates and sampling models. In addition, we compare the proposed ALM algorithm with some existing algorithms in the literature. Experimental results show that the ALM algorithm can obtain better overall recovery accuracy with less computing time, which demonstrates that the ALM algorithm is an effective and efficient approach for recovering real-world weather monitoring data in WSN.
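The singular-value-thresholding step that augmented-Lagrange-multiplier matrix completion methods build on can be sketched as follows. This is the classic SVT iteration, not the authors' ALM variant, and the "weather" matrix is a synthetic low-rank stand-in:

```python
import numpy as np

def svt_complete(M_obs, mask, tau, delta, iters=300):
    """Matrix completion by singular value thresholding: soft-threshold the
    singular values of the dual variable Y, then take a gradient step on
    the observed entries."""
    Y = np.zeros_like(M_obs)
    X = Y
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt     # low-rank estimate
        Y = Y + delta * mask * (M_obs - X)          # enforce observed entries
    return X

rng = np.random.default_rng(0)
M = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 20))  # rank-2 data matrix
mask = rng.random((20, 20)) < 0.6                        # 60% entries sampled
M_obs = M * mask
X_hat = svt_complete(M_obs, mask, tau=5 * 20, delta=1.2 / 0.6)
```

The `tau` and `delta` values follow the usual SVT heuristics (tau ≈ 5·n, delta ≈ 1.2/p for sampling rate p); an ALM formulation reorganizes this into primal/dual updates with a penalty term to converge in fewer iterations.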

Keywords: wireless sensor network, matrix completion, singular value thresholding, augmented Lagrange multiplier

Procedia PDF Downloads 363
3192 Kinetic Studies of Bioethanol Production from Salt-Pretreated Sugarcane Leaves

Authors: Preshanthan Moodley, E. B. Gueguim Kana

Abstract:

This study examines the kinetics of S. cerevisiae BY4743 growth and bioethanol production from sugarcane leaf waste (SLW), utilizing two different optimized pretreatment regimes under two fermentation modes: steam salt-alkali filtered enzymatic hydrolysate (SSA-F), steam salt-alkali unfiltered (SSA-U), microwave salt-alkali filtered (MSA-F) and microwave salt-alkali unfiltered (MSA-U). The kinetic coefficients were determined by fitting the Monod, modified Gompertz, and logistic models to the experimental data with high coefficients of determination, R² > 0.97. A maximum specific growth rate (µₘₐₓ) of 0.153 h⁻¹ was obtained under SSA-F and SSA-U, whereas 0.150 h⁻¹ was observed with MSA-F and MSA-U. SSA-U gave a potential maximum bioethanol concentration (Pₘ) of 31.06 g/L, compared to 30.49, 23.26 and 21.79 g/L for SSA-F, MSA-F and MSA-U, respectively. An insignificant difference was observed in the µₘₐₓ and Pₘ of the filtered and unfiltered enzymatic hydrolysates for both SSA and MSA pretreatments, thus potentially eliminating a unit operation. These findings provide significant insights for process scale-up.
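The modified Gompertz product model used here has the form P(t) = Pₘ·exp(−exp(Rₘ·e/Pₘ·(λ − t) + 1)). A sketch of fitting it, using a coarse grid search in place of nonlinear least squares; Pₘ = 31.06 g/L is the reported value, while Rₘ and λ (and the grids) are invented for illustration:

```python
import math

E = math.e

def modified_gompertz(t, Pm, Rm, lam):
    """Modified Gompertz ethanol curve:
    P(t) = Pm * exp(-exp(Rm*e/Pm * (lam - t) + 1))."""
    return Pm * math.exp(-math.exp(Rm * E / Pm * (lam - t) + 1.0))

def grid_fit(times, conc, Pm_grid, Rm_grid, lam_grid):
    """Coarse grid search for (Pm, Rm, lam); a stand-in for the nonlinear
    least-squares fitting used in practice."""
    best, best_sse = None, float("inf")
    for Pm in Pm_grid:
        for Rm in Rm_grid:
            for lam in lam_grid:
                sse = sum((modified_gompertz(t, Pm, Rm, lam) - c) ** 2
                          for t, c in zip(times, conc))
                if sse < best_sse:
                    best, best_sse = (Pm, Rm, lam), sse
    return best

# Synthetic fermentation profile: Pm = 31.06 g/L (reported), invented
# Rm = 1.5 g/L/h and lag time lam = 6 h.
times = list(range(0, 48, 3))
conc = [modified_gompertz(t, 31.06, 1.5, 6.0) for t in times]
fit = grid_fit(times, conc, [25.0, 31.06, 35.0], [0.5, 1.5, 2.5], [3.0, 6.0, 9.0])
```

Here Pₘ is the asymptotic product concentration, Rₘ the maximum production rate, and λ the lag time, which is how the kinetic coefficients in the abstract are defined.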

Keywords: lignocellulosic bioethanol, microwave pretreatment, sugarcane leaves, kinetics

Procedia PDF Downloads 103
3191 Civil-Military Relations in Turkey, Europe, and Middle East

Authors: Dorsa Bakhshandehgeyazdi

Abstract:

This article examines the transformation of Turkish civil-military relations from a comparative perspective. The investigation rests on two criteria: institutional and legal frameworks, and political oversight of the military's autonomy. Comparing European and Middle Eastern models of civil-military relations with the Turkish model reveals substantial differences between the Turkish and Middle Eastern cases. The Turkish model, in transition for at least ten years, is closer to the European model in both legal and political respects. However, the article underscores that Turkish civil-military relations are still in transition: although the EU accession process has gradually democratized the legal system of the country, democratic consolidation requires further steps in the political arena. As a result, stabilization in Turkey depends not only on the withdrawal of the military from the political domain, but also on proper civilian control of the administration in theory and practice.

Keywords: Turkish civil-military relations, institutional and legal frameworks, political oversight, Middle Eastern civil-military relations

Procedia PDF Downloads 448
3190 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing

Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan

Abstract:

This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality that is needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike the traditional mixture distributions where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions where the parameters of the distribution capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in terms of capturing the stylized facts known for stock returns, namely, volatility clustering, leverage effect, skewness, kurtosis and regime dependence. Second, heteroskedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are taken from a principal component analysis of various world indices, with an application to option pricing. The factors of the GARCHX model are extracted from a matrix of world indices by applying principal component analysis (PCA). The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state.
The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets (in our paper, a pool of international stock indices) and sorts them in order of relative importance. The PCA computes a historical cross-asset covariance matrix and identifies principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model, following the same PCA methodology, and the standard Black-Scholes model. We show that our model outperforms the benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model by capturing the stylized facts known for index returns, namely, volatility clustering, leverage effect, skewness, kurtosis and regime dependence.
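The factor-extraction step can be sketched as below: simulated returns of a few hypothetical world indices (driven by three latent factors, since real index data is not reproduced here) are decomposed via SVD-based PCA into uncorrelated factor time series:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated daily returns of 6 hypothetical indices driven by 3 factors.
T, n_assets, n_factors = 500, 6, 3
F = rng.normal(0, 0.01, (T, n_factors))           # latent common factors
L = rng.normal(0, 1.0, (n_factors, n_assets))     # factor loadings
R = F @ L + rng.normal(0, 0.001, (T, n_assets))   # returns = factors + noise

def pca_factors(returns, k):
    """Extract k principal-component factors from a return matrix."""
    X = returns - returns.mean(axis=0)             # center each asset
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    factors = U[:, :k] * s[:k]                     # factor time series (scores)
    loadings = Vt[:k]                              # eigenvector loadings
    explained = s ** 2 / (s ** 2).sum()            # variance shares
    return factors, loadings, explained[:k]

factors, loadings, explained = pca_factors(R, 3)
```

The extracted `factors` are mutually uncorrelated by construction, which is the property the abstract relies on when feeding them into the GARCHX variance equation.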

Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium

Procedia PDF Downloads 287
3189 A 15 Minute-Based Approach for Berth Allocation and Quay Crane Assignment

Authors: Hoi-Lam Ma, Sai-Ho Chung

Abstract:

In traditional integrated berth allocation with quay crane assignment models, the time dimension is usually assumed to be hour-based. However, nowadays transshipment has become the main business of many container terminals, especially in Southeast Asia (e.g. Hong Kong and Singapore). In these terminals, vessel arrivals are usually very frequent, with small handling volumes and very short staying times. Therefore, the traditional hourly-based modeling approach may cause significant berth and quay crane idling and consequently cannot meet their practical needs. In this connection, a 15-minute-based modeling approach is requested by industrial practitioners. Accordingly, a Three-level Genetic Algorithm (3LGA) with Quay Crane (QC) shifting heuristics is designed to fill the research gap. The objective function here is to minimize the total service time. Preliminary numerical results show that the proposed 15-minute-based approach can reduce berth and QC idling significantly.
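The gain from the finer discretization can be illustrated with a small slot-indexing sketch (the berth-window times are invented examples, and the full 3LGA optimization is not reproduced here):

```python
def to_slots(start_hhmm, end_hhmm, slot_minutes):
    """Convert a [start, end) berth window into discrete time-slot indices."""
    def minutes(hhmm):
        h, m = map(int, hhmm.split(":"))
        return h * 60 + m
    s, e = minutes(start_hhmm), minutes(end_hhmm)
    first = s // slot_minutes
    last = -(-e // slot_minutes)          # ceiling division
    return list(range(first, last))

# A 40-minute transshipment call starting on the hour: hourly slots reserve
# a full 60 minutes, 15-minute slots reserve only 45 minutes.
hourly = to_slots("08:00", "08:40", 60)
quarter = to_slots("08:00", "08:40", 15)
```

In the optimization model, each (berth, slot) cell can be occupied by at most one vessel, so shrinking the slot length directly shrinks the forced idle time per call.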

Keywords: transshipment, integrated berth allocation, variable-in-time quay crane assignment, quay crane assignment

Procedia PDF Downloads 154
3188 Hominin Niche in the Times of Climate Change

Authors: Emilia Hunt, Sally C. Reynolds, Fiona Coward, Fabio Parracho Silva, Philip Hopley

Abstract:

Ecological niche modeling is widely used in conservation studies, but application to the extinct hominin species is a relatively new approach. Being able to understand what ecological niches were occupied by respective hominin species provides a new perspective into influences on evolutionary processes. Niche separation or overlap can tell us more about specific requirements of the species within the given timeframe. Many of the ancestral species lived through enormous climate changes: glacial and interglacial periods, changes in rainfall, leading to desertification or flooding of regions and displayed impressive levels of adaptation necessary for their survival. This paper reviews niche modeling methodologies and their application to hominin studies. Traditional conservation methods might not be directly applicable to extinct species and are not comparable to hominins. Hominin niche also includes aspects of technologies, use of fire and extended communication, which are not traditionally used in building conservation models. Future perspectives on how to improve niche modeling for extinct hominin species will be discussed.

Keywords: hominin niche, climate change, evolution, adaptation, ecological niche modelling

Procedia PDF Downloads 172
3187 Low Cost LiDAR-GNSS-UAV Technology Development for PT Garam’s Three Dimensional Stockpile Modeling Needs

Authors: Mohkammad Nur Cahyadi, Imam Wahyu Farid, Ronny Mardianto, Agung Budi Cahyono, Eko Yuli Handoko, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan

Abstract:

Unmanned aerial vehicle (UAV) technology offers cost efficiency and data retrieval time advantages. Technologies such as UAV, GNSS, and LiDAR are combined here into a single system in which each covers the others' deficiencies. This integrated system aims to increase the accuracy of calculating the volume of the land stockpiles of PT. Garam (Salt Company). UAV applications are used to obtain geometric data and capture textures that characterize the structure of objects. This study uses the Taror 650 Iron Man drone with four propellers, which can fly for 15 minutes. The imagery is processed in software using photogrammetry and Structure-from-Motion (SfM) principles to produce point clouds. LiDAR enables data acquisition for the creation of point clouds, three-dimensional models, digital surface models, contours, and orthomosaics with high accuracy. LiDAR has a drawback in the form of coordinate data positions that have only local references. Therefore, the researchers use GNSS, LiDAR, and drone multi-sensor technology to map the stockpiles of salt on open land and in warehouses, a survey carried out by PT. Garam twice a year, where the previous process used terrestrial methods and manual calculations with sacks. LiDAR needs to be combined with the UAV to overcome data acquisition limitations, because on its own it only passes along the right and left sides of the object, especially when applied to a salt stockpile. The UAV is flown to assist data acquisition with wide coverage, with the help of an integrated 200-gram LiDAR system, so that the flying angle can be optimal during the flight. Using LiDAR for low-cost mapping surveys makes it easier for surveyors and academics to obtain fairly accurate data at a more economical price. As a survey tool, LiDAR is available at a low price, around 999 USD, and this device can produce detailed data.
Therefore, to minimize the operational costs of using LiDAR, surveyors can use a low-cost LiDAR, GNSS, and UAV combination at a price of around 638 USD. The data generated by this sensor is a visualization of object shapes in three dimensions. This study aims to combine low-cost GPS measurements with low-cost LiDAR, processed using free software. The low-cost GPS generates position data in the form of latitude and longitude coordinates, from which X, Y, and Z values are derived to georeference the detected objects. The LiDAR, in turn, detects objects, including the heights across the entire surveyed environment. The acquired data are calibrated with pitch, roll, and yaw to obtain the vertical heights of the existing contours. The experiment was conducted on the roof of a building with a radius of approximately 30 meters.
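The pitch/roll/yaw calibration step amounts to rotating the sensor-frame points by the platform attitude and translating them to the GNSS-derived position. A simplified sketch (no boresight or lever-arm corrections, which a real workflow would add):

```python
import numpy as np

def rpy_matrix(roll, pitch, yaw):
    """Rotation matrix from roll-pitch-yaw angles (radians), Z-Y-X convention."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def georeference(points, roll, pitch, yaw, origin):
    """Rotate sensor-frame LiDAR points by the attitude angles, then
    translate them to the GNSS-derived origin."""
    return points @ rpy_matrix(roll, pitch, yaw).T + origin
```

Once the points are in a common frame, the stockpile volume can be computed from the resulting surface model (e.g. by gridding the heights and integrating).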

Keywords: LiDAR, unmanned aerial vehicle, low-cost GNSS, contour

Procedia PDF Downloads 70
3186 Three-Stage Least Squared Models of a Station-Level Subway Ridership: Incorporating an Analysis on Integrated Transit Network Topology Measures

Authors: Jungyeol Hong, Dongjoo Park

Abstract:

The urban transit system is a critical part of a solution to economic, energy, and environmental challenges, and it ultimately contributes to the improvement of people's quality of life. To take advantage of these benefits, the city of Seoul has constructed an integrated transit system comprising both subway and buses, and approximately 6.9 million citizens now use the integrated transit system every day for their trips. Diagnosing the current transit network is a significant task in providing a more convenient and pleasant transit environment. Therefore, the critical objective of this study is to establish a methodological framework for the analysis of an integrated bus-subway network and to examine the relationship between subway ridership and parameters such as network topology measures, bus demand, and a variety of commercial business facilities. Regarding statistical approaches to estimating subway ridership at the station level, many previous studies relied on Ordinary Least Squares regression, but few considered the endogeneity issues that might arise in a subway ridership prediction model. This study focused both on discovering the impacts of integrated transit network topology measures and on the endogenous effect of bus demand on subway ridership. It could ultimately contribute to developing more accurate subway ridership estimation that accounts for statistical bias. The spatial scope of the study covers the city of Seoul in South Korea, including 243 subway stations and 10,120 bus stops, with a temporal scope of twenty-four hours in one-hour interval panels. Detailed subway and bus ridership information was collected from the Seoul Smart Card data of 2015 and 2016. First, integrated subway-bus network topology measures characterizing connectivity, centrality, transitivity, and reciprocity were estimated based on complex network theory.
The results of the integrated transit network topology analysis were compared to those of the subway-only network. A non-recursive approach, Three-Stage Least Squares (3SLS), was then applied to develop the daily subway ridership model while capturing the endogeneity between bus and subway demand. Independent variables included roadway geometry, commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for seasons and time zones. The network topology measures were found to have significant effects; in particular, the elasticity of subway ridership was 4.88% with respect to closeness centrality and 24.48% with respect to betweenness centrality, while the elasticity with respect to bus ridership was 8.85%. Moreover, bus demand and subway ridership were shown to be endogenous in a non-recursive manner: predicted bus ridership and predicted subway ridership are statistically significant in OLS regression models. The three-stage least squares model therefore appears to be a plausible model for efficient subway ridership estimation. The proposed approach is expected to provide a reliable guideline that can be used as part of the spectrum of tools for evaluating a city-wide integrated transit network.
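The endogeneity correction at the core of the three-stage least squares approach can be sketched with its simplest building block: a single-equation instrumental-variables estimator. This is a minimal illustration, not the authors' model; the variable roles (bus ridership as the endogenous regressor, a network centrality measure as the instrument) are illustrative assumptions.

```python
# Illustrative sketch: why instrumenting an endogenous regressor matters.
# OLS is biased when the regressor x correlates with the structural error u;
# the IV estimator (the first stages of 3SLS) recovers the true slope.
import random

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def iv_estimate(y, x, z):
    """Just-identified IV: project x onto the instrument z, then regress y
    on the fitted values. Closed form: beta = cov(z, y) / cov(z, x)."""
    return cov(z, y) / cov(z, x)

# Simulated data with endogeneity: x (say, bus ridership) is correlated
# with the error in y (say, subway ridership); the true slope is 2.0.
random.seed(0)
n = 20000
z = [random.gauss(0, 1) for _ in range(n)]   # instrument (e.g. centrality)
u = [random.gauss(0, 1) for _ in range(n)]   # structural error
x = [zi + 0.8 * ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
y = [2.0 * xi + ui for xi, ui in zip(x, u)]

beta_ols = cov(x, y) / cov(x, x)  # biased upward by the x-u correlation
beta_iv = iv_estimate(y, x, z)    # close to the true value 2.0
```

Full 3SLS extends this by estimating the bus and subway equations jointly, re-weighting with the cross-equation residual covariance in the third stage.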

Keywords: integrated transit system, network topology measures, three-stage least squares, endogeneity, subway ridership

Procedia PDF Downloads 158
3185 Shock Formation for Double Ramp Surface

Authors: Abdul Wajid Ali

Abstract:

Supersonic flight promises speed, but air inlet design faces an obstacle: shock waves. They disrupt airflow in mixed-compression inlets, which reduces engine performance. Our research investigates this using supersonic wind tunnels and schlieren imaging to reveal the complex interaction between shock waves and airflow. The findings show clear patterns of shock wave formation influenced by the internal and external compression surfaces. We examined the boundary layer, the slow-moving air near the inlet walls, and its interaction with shock waves. In addition, the study emphasizes the dependence of shock wave behaviour on the Mach number, which highlights the need for adaptive models. This knowledge is key to optimizing mixed-compression inlets, paving the way for more powerful and efficient supersonic vehicles. Future engineers can use it to improve existing designs and explore innovative configurations for next-generation supersonic applications.

Keywords: oblique shock formation, boundary layer interaction, schlieren images, double wedge surface

Procedia PDF Downloads 37
3184 Time Series Analysis of Radon Concentration at Different Depths in an Underground Goldmine

Authors: Theophilus Adjirackor, Frederic Sam, Irene Opoku-Ntim, David Okoh Kpeglo, Prince K. Gyekye, Frank K. Quashie, Kofi Ofori

Abstract:

Indoor radon concentrations were collected monthly over a period of one year at 10 different levels of an underground goldmine, and the data were analyzed using a four-point moving average time series to determine the relationship between the depth of the underground mine and the indoor radon concentration. The detectors were installed in batches over four quarters. The measurements were carried out using LR115 solid-state nuclear track detectors, and statistical models were applied to predict and analyze the radon concentration at various depths. The time series model predicted a positive relationship between the depth of the underground mine and the indoor radon concentration; thus, elevated radon concentrations are expected at the deeper levels of the mine. The relationship was, however, insignificant at the 5% level of significance, with a negative adjusted R² (R² = –0.021), owing to appropriate engineering and an adequate ventilation rate in the underground mine.
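The smoothing step described above can be sketched as a simple four-point moving average; the monthly values below are illustrative stand-ins, not the study's measurements.

```python
# Sketch of a four-point moving average applied to a year of monthly
# radon concentrations (illustrative values, arbitrary units).
def moving_average(series, window=4):
    """Return the list of running means over consecutive window-sized slices."""
    if len(series) < window:
        return []
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

monthly_radon = [120, 135, 128, 150, 160, 155, 170, 165, 180, 175, 190, 185]
smoothed = moving_average(monthly_radon, window=4)
# Each smoothed value averages four consecutive months, damping short-term
# fluctuations before the depth-concentration relationship is assessed.
```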

Keywords: LR115, radon concentration, time series, underground goldmine

Procedia PDF Downloads 20
3183 Numerical Model of Crude Glycerol Autothermal Reforming to Hydrogen-Rich Syngas

Authors: A. Odoom, A. Salama, H. Ibrahim

Abstract:

Hydrogen is a clean source of energy for power production and transportation. The hydrogen source considered in this research is a by-product of biodiesel production: glycerol, also called glycerine, produced by the transesterification of vegetable oils with methanol. It is a more reliable and environmentally friendly feedstock for hydrogen production than fossil fuels. Crude glycerol typically comprises glycerol, water, organic and inorganic salts, soap, methanol, and small amounts of glycerides. It has limited industrial application due to its low purity; thus, using crude glycerol can significantly enhance the sustainability of biodiesel production. Reforming is a common route to hydrogen, mainly Steam Reforming (SR), Autothermal Reforming (ATR), and Partial Oxidation Reforming (POR). SR produces high hydrogen conversion and yield but is highly endothermic, whereas POR is exothermic; on the downside, POR yields less hydrogen and produces a large number of side reactions. ATR, a fusion of partial oxidation and steam reforming, is thermally neutral because the net reactor heat duty is zero; it offers a relatively high hydrogen yield and selectivity and limits coke formation. The complex chemical processes that take place during production make it relatively difficult to construct a reliable and robust numerical model, yet a numerical model is a tool to mimic reality and provide insight into the influence of the parameters. In this work, we introduce a finite volume numerical study of an 'in-house' lab-scale ATR experiment. Previous numerical studies of this process have used either Comsol or nodal finite difference analysis; however, Comsol is a commercial package that is not readily available everywhere, and a lab-scale experiment can be considered well mixed in the radial direction, so one spatial dimension suffices to capture the essential features of ATR.
In this work, we develop our own numerical approach using MATLAB. A continuum fixed bed reactor is modelled in MATLAB with both pseudo-homogeneous and heterogeneous models. The drawback of a nodal finite difference formulation is that it is not locally conservative, which means that materials and momenta can be generated inside the domain as an artifact of the discretization. A control volume formulation, on the other hand, is locally conservative and suits very well problems where materials are generated and consumed inside the domain. Here, the species mass balances, Darcy's equation, and the energy equations are solved using an operator splitting technique: diffusion-like terms are discretized implicitly, while advection-like terms are discretized explicitly. An upwind scheme is adopted for the advection term to ensure accuracy and positivity. Comparisons with the experimental data show very good agreement, which builds confidence in our modeling approach. The models obtained were validated and optimized for better results.
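The operator-splitting idea (explicit upwind advection, implicit diffusion) can be sketched for a generic 1-D scalar; this is an illustrative toy, not the authors' reactor code, and the grid, velocity, and diffusivity values are assumptions.

```python
# Operator-splitting sketch: advance advection explicitly with first-order
# upwinding, then advance diffusion implicitly via a tridiagonal (Thomas) solve.
def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-, main-, and super-diagonals a, b, c."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def split_step(u, vel, diff, dx, dt):
    """One split step: explicit upwind advection, then implicit diffusion."""
    n = len(u)
    # Explicit upwind advection (vel > 0); the inflow boundary holds u[0].
    adv = [u[0]] + [u[i] - vel * dt / dx * (u[i] - u[i - 1]) for i in range(1, n)]
    # Implicit diffusion: (I - dt*diff*L) u_new = adv, zero-gradient boundaries.
    r = diff * dt / dx ** 2
    a = [0.0] + [-r] * (n - 1)
    b = [1.0 + r] + [1.0 + 2.0 * r] * (n - 2) + [1.0 + r]
    c = [-r] * (n - 1) + [0.0]
    return thomas(a, b, c, adv)

# A step profile advected downstream and smeared over a few steps
# (CFL = vel*dt/dx = 0.5, so the explicit advection step is stable).
u = [1.0] * 10 + [0.0] * 40
for _ in range(20):
    u = split_step(u, vel=1.0, diff=0.01, dx=1.0, dt=0.5)
```

The implicit treatment of the diffusion-like term removes its stiff time-step restriction, while the upwind advection step keeps the solution positive and monotone.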

Keywords: autothermal reforming, crude glycerol, hydrogen, numerical model

Procedia PDF Downloads 123
3182 Numerical Methods for Topological Optimization of Wooden Structural Elements

Authors: Daniela Tapusi, Adrian Andronic, Naomi Tufan, Ruxandra Erbașu, Ioana Teodorescu

Abstract:

The theme of this article falls within the policy of reducing carbon emissions promoted by the 'Green New Deal', through the replacement of structural elements made of energy-intensive materials with ecological materials. In this sense, wood has many qualities (high strength-to-mass and stiffness-to-mass ratios, low specific gravity, recovery/recycling) that make it competitive with classic building materials. The topological optimization of linear glulam elements, based on different types of analysis (Finite Element Method, simple regression on metamodels), on tests on models, or on Monte-Carlo simulation, leads to a material reduction of more than 10%. This article proposes a method for obtaining topologically optimized shapes for different types of glued laminated timber beams. The results obtained will constitute the database for AI training.

Keywords: timber, glued laminated timber, artificial intelligence, environment, carbon emissions

Procedia PDF Downloads 3
3181 Determinant Elements for Useful Life in Airports

Authors: Marcelo Müller Beuren, José Luis Duarte Ribeiro

Abstract:

Studies indicate that large Brazilian airports are not managing their assets efficiently, and these organizations are therefore seeking improvements to raise asset productivity. Hence, identifying the useful life of assets in airports becomes an important subject, since its accuracy leads to better maintenance plans and technological substitution, contributing to the management of airport services. However, current useful-life prediction models do not converge in terms of the determinant elements used, as each is particular to the situation studied. For that reason, the main objective of this paper is to identify the determinant elements for the useful life of major assets in airports. With that purpose, a case study was carried out in the key airport of southern Brazil through historical data analysis and specialist interviews. The paper concludes that the useful life of most assets is determined by technical elements, maintenance costs, and operational costs, while few assets showed the influence of technological obsolescence. As a highlight, it was possible to identify the determinant elements to be considered by a model whose objective is to identify the useful life of an airport's major assets.

Keywords: airports, asset management, asset useful life

Procedia PDF Downloads 502
3180 Probabilistic and Stochastic Analysis of a Retaining Wall for C-Φ Soil Backfill

Authors: André Luís Brasil Cavalcante, Juan Felix Rodriguez Rebolledo, Lucas Parreira de Faria Borges

Abstract:

A methodology for the probabilistic analysis of the active earth pressure on a retaining wall with c-Φ soil backfill is described in this paper. The Rosenblueth point estimate method is used to measure the failure probability of a gravity retaining wall. The basic principle of this methodology is to use two point estimates per variable, i.e., the mean value plus and minus the standard deviation, in the safety analysis. The simplicity of this framework assures its wide application; since the system is governed by n variables, the calculation requires 2ⁿ evaluations. In this study, a probabilistic model based on the Rosenblueth approach for computing the overturning probability of failure of a retaining wall is presented. The results obtained show the advantages of this kind of model in comparison with the deterministic solution: in a relatively easy way, the uncertainty in the wall and fill parameters is taken into account, and some practical results can be obtained for the design of the retaining structure.
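The 2ⁿ-evaluation scheme can be sketched as follows for uncorrelated inputs; the safety-factor function and its coefficients are illustrative assumptions, not the paper's overturning model.

```python
# Minimal sketch of the Rosenblueth two-point estimate method (uncorrelated
# inputs): evaluate the response at the 2^n combinations of mean +/- standard
# deviation, each with weight 1/2^n, and recover the response mean and std.
from itertools import product
from math import sqrt, tan, radians

def rosenblueth(func, means, stds):
    points = product(*[(m - s, m + s) for m, s in zip(means, stds)])
    values = [func(*p) for p in points]
    w = 1.0 / len(values)
    m1 = sum(w * v for v in values)        # first moment, E[F]
    m2 = sum(w * v * v for v in values)    # second moment, E[F^2]
    return m1, sqrt(max(m2 - m1 * m1, 0.0))

# Hypothetical overturning safety factor F(c, phi) for a c-phi backfill,
# with cohesion c in kPa and friction angle phi in degrees (illustrative).
def safety_factor(c, phi_deg):
    return 0.8 + 0.02 * c + 1.5 * tan(radians(phi_deg))

mean_F, std_F = rosenblueth(safety_factor, means=[10.0, 30.0], stds=[2.0, 3.0])
# With n = 2 variables, only 2^2 = 4 evaluations are needed; a failure
# probability then follows by assuming a distribution for F with these moments.
```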

Keywords: retaining wall, active earth pressure, backfill, probabilistic analysis

Procedia PDF Downloads 395
3179 Detecting Cyberbullying, Spam and Bot Behavior and Fake News in Social Media Accounts Using Machine Learning

Authors: M. D. D. Chathurangi, M. G. K. Nayanathara, K. M. H. M. M. Gunapala, G. M. R. G. Dayananda, Kavinga Yapa Abeywardena, Deemantha Siriwardana

Abstract:

Given the growing popularity of social media platforms, there are various concerns, most notably cyberbullying, spam, bot accounts, and the spread of incorrect information. To develop a risk score calculation system as a thorough method for deciphering and exposing unethical social media profiles, this research explores the most suitable algorithms for detecting these concerns. Multiple models, such as Naïve Bayes, CNN, KNN, Stochastic Gradient Descent, and Gradient Boosting classifiers, were examined, and the best-performing ones were incorporated into the risk score system. For cyberbullying, the Logistic Regression algorithm achieved an accuracy of 84.9%, while the spam-detecting MLP model reached 98.02% accuracy. A Random Forest algorithm identified bot accounts with 91.06% accuracy, and 84% accuracy was obtained for fake news detection using an SVM.
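One plausible way to combine the per-concern detector outputs into a single profile risk score is a weighted average; this is a hypothetical sketch of the idea, and the weights and probabilities below are illustrative assumptions, not the study's values.

```python
# Hypothetical risk-score aggregation: combine detector probabilities
# (one per concern) into a single 0-100 score via a weighted average.
def risk_score(probs, weights):
    """probs/weights: dicts keyed by concern name; returns a score in [0, 100]."""
    total_w = sum(weights.values())
    score = sum(weights[k] * probs[k] for k in probs) / total_w
    return round(100.0 * score, 1)

# Illustrative detector outputs for one profile (not real model outputs).
profile = {"cyberbullying": 0.12, "spam": 0.91, "bot": 0.35, "fake_news": 0.05}
weights = {"cyberbullying": 1.0, "spam": 1.0, "bot": 1.0, "fake_news": 1.0}
score = risk_score(profile, weights)  # equal-weight average of four detectors
```

In practice, the weights could reflect each detector's measured accuracy rather than being equal.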

Keywords: cyberbullying, spam behavior, bot accounts, fake news, machine learning

Procedia PDF Downloads 16
3178 [Keynote Speech]: Feature Selection and Predictive Modeling of Housing Data Using Random Forest

Authors: Bharatendra Rai

Abstract:

Predictive data analysis and modeling involving machine learning techniques become challenging in the presence of too many explanatory variables or features. Too many features are known not only to slow algorithms down but also to decrease model prediction accuracy. This study involves a housing dataset with 79 quantitative and qualitative features that describe various aspects people consider while buying a new house. The Boruta algorithm, which supports feature selection using a wrapper approach built around random forest, is used in this study. This feature selection process leads to 49 confirmed features, which are then used for developing predictive random forest models. The study also explores five different data partitioning ratios; their impact on model accuracy is captured using the coefficient of determination (r-squared) and the root mean square error (RMSE).
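The two evaluation metrics named above can be sketched in plain Python; the sale prices and predictions below are illustrative, not the study's data.

```python
# Sketch of the two accuracy metrics: root mean square error (RMSE) and
# coefficient of determination (r-squared) for a set of predictions.
from math import sqrt

def rmse(actual, predicted):
    return sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def r_squared(actual, predicted):
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

# Hypothetical sale prices (in $1000s) vs. random-forest predictions.
actual = [200.0, 155.0, 310.0, 180.0, 265.0]
predicted = [195.0, 160.0, 300.0, 185.0, 270.0]
err = rmse(actual, predicted)        # average prediction error, same units as y
fit = r_squared(actual, predicted)   # fraction of variance explained
```

Computing both metrics for each partitioning ratio makes the accuracy comparison across splits direct.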

Keywords: housing data, feature selection, random forest, Boruta algorithm, root mean square error

Procedia PDF Downloads 301
3177 Study of ANFIS and ARIMA Model for Weather Forecasting

Authors: Bandreddy Anand Babu, Srinivasa Rao Mandadi, C. Pradeep Reddy, N. Ramesh Babu

Abstract:

This paper presents a comparative investigation of Auto-Regressive Integrated Moving Average (ARIMA) and Adaptive Network-Based Fuzzy Inference System (ANFIS) models for weather forecasting. The weather data were obtained from the University of Waterloo and comprise relative humidity, ambient air temperature, barometric pressure, and wind direction. The analysis compares the performance of the ARIMA and ANFIS models using the sum of average errors. ANFIS modelling was carried out in the Fuzzy Logic Toolbox of MATLAB, while ARIMA modelling was performed using the XLSTAT software.

Keywords: ARIMA, ANFIS, fuzzy inference toolbox, weather forecasting, MATLAB

Procedia PDF Downloads 396
3176 Inferring Influenza Epidemics in the Presence of Stratified Immunity

Authors: Hsiang-Yu Yuan, Marc Baguelin, Kin O. Kwok, Nimalan Arinaminpathy, Edwin Leeuwen, Steven Riley

Abstract:

Traditional syndromic surveillance for influenza has substantial public health value in characterizing epidemics. Because the relationship between syndromic incidence and the true infection events can vary from one population to another and from one year to another, recent studies combine serological test results with syndromic data from traditional surveillance in epidemic models to make inferences about the epidemiological processes of influenza. However, despite the widespread availability of serological data, epidemic models have thus far not explicitly represented antibody titre levels and their correspondence with immunity. Most studies use dichotomized data with a threshold (typically a titre of 1:40) to classify individuals as likely recently infected or likely immune, and then estimate the cumulative incidence; this dichotomization can result in underestimation of the influenza attack rate. To improve the use of serosurveillance data, a refinement of the concept of stratified immunity within an epidemic model for influenza transmission is proposed here, such that all individual antibody titre levels are enumerated explicitly and mapped onto a variable scale of susceptibility in different age groups. Haemagglutination inhibition titres from 523 and 465 individuals, collected during the pre- and post-pandemic phases of the 2009 pandemic in Hong Kong, respectively, were used. The model was fitted to the serological data for the age-structured population in a Bayesian framework and was able to reproduce key features of the epidemics. The effects of age-specific antibody boosting and protection were explored in greater detail. RB was defined as the effective reproduction number in the presence of stratified immunity, and its temporal dynamics were compared to those of a traditional epidemic model that uses dichotomized seropositivity data.
The Deviance Information Criterion (DIC) was used to measure the fit of the model to the serological data under different mechanisms of serological response. The results demonstrated that a differential antibody response with age was present (ΔDIC = –7.0). Age-specific mixing patterns with child-specific transmissibility, rather than pre-existing immunity, were most likely to explain the high serological attack rates in children and the low serological attack rates in the elderly (ΔDIC = –38.5). Our results suggest that the disease dynamics and herd immunity of a population can be described more accurately for influenza when the distribution of immunity is explicitly represented, rather than relying only on the dichotomous states 'susceptible' and 'immune' defined by the threshold titre of 1:40 (ΔDIC = –11.5). During the outbreak, RB declined slowly from 1.22 [1.16–1.28] in the first four months after 1 May and then dropped rapidly below 1 during September and October, consistent with the observed epidemic peak in late September. One of the most important challenges for infectious disease control is to monitor disease transmissibility in real time with statistics such as the effective reproduction number. Once early estimates of antibody boosting and protection are obtained, the disease dynamics can be reconstructed, which is valuable for infectious disease prevention and control.
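The model-comparison statistic used above can be sketched directly from its definition, DIC = D̄ + p_D, where D̄ is the posterior mean deviance and p_D = D̄ − D(θ̄) penalizes effective model complexity. The posterior deviance samples below are illustrative stand-ins, not values from the study.

```python
# Sketch of the Deviance Information Criterion for comparing fitted models:
# DIC = mean posterior deviance + effective number of parameters p_D.
def dic(deviance_samples, deviance_at_posterior_mean):
    d_bar = sum(deviance_samples) / len(deviance_samples)   # D-bar
    p_d = d_bar - deviance_at_posterior_mean                # complexity penalty
    return d_bar + p_d

# Hypothetical deviances from MCMC draws for two candidate models;
# the model with the lower DIC (negative Delta-DIC) is preferred.
model_a = dic([102.0, 98.0, 101.0, 99.0], deviance_at_posterior_mean=97.0)
model_b = dic([110.0, 108.0, 111.0, 109.0], deviance_at_posterior_mean=107.5)
delta_dic = model_a - model_b   # negative: model_a fits better
```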

Keywords: effective reproductive number, epidemic model, influenza epidemic dynamics, stratified immunity

Procedia PDF Downloads 242
3175 Analysis of Wall Deformation of the Arterial Plaque Models: Effects of Viscoelasticity

Authors: Eun Kyung Kim, Kyehan Rhee

Abstract:

The viscoelastic wall properties of arterial plaques change as the disease progresses, and estimation of wall viscoelasticity can provide a valuable assessment tool for predicting plaque rupture. The cross section of a stenotic coronary artery was modeled based on an IVUS image, and finite element analysis was performed to obtain the wall deformation under pulsatile pressure. The effects of the viscoelastic parameters of the plaque on luminal diameter variations were explored. The results showed that decreasing the viscous effect reduced the phase angle between the pressure and displacement waveforms, and that the phase angle depended on the viscoelastic properties of the wall. Because the viscous effect of tissue components can be identified using the phase angle difference, wall deformation waveform analysis may be applied to predict changes in plaque wall composition and the progression of vascular wall disease.
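The phase-angle extraction described above can be sketched by reading the lag from the first Fourier harmonic of each waveform; the signals below are synthetic sinusoids with a known 0.3-rad lag standing in for finite element output, not the study's data.

```python
# Sketch: phase lag between a pulsatile pressure waveform and the diameter
# (displacement) waveform, from the fundamental Fourier component of each.
from math import sin, cos, pi, atan2

def first_harmonic_phase(signal):
    """Phase of the fundamental Fourier component of one period of `signal`."""
    n = len(signal)
    a = sum(s * cos(2 * pi * i / n) for i, s in enumerate(signal))
    b = sum(s * sin(2 * pi * i / n) for i, s in enumerate(signal))
    return atan2(b, a)

n = 512
pressure = [sin(2 * pi * i / n) for i in range(n)]
diameter = [0.8 * sin(2 * pi * i / n - 0.3) for i in range(n)]  # 0.3 rad lag

# Positive value: the diameter waveform lags the pressure waveform.
phase_lag = first_harmonic_phase(diameter) - first_harmonic_phase(pressure)
```

In the viscoelastic-wall picture, a larger viscous contribution widens this lag, which is why the phase angle serves as a marker of wall composition.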

Keywords: atherosclerotic plaque, diameter variation, finite element method, viscoelasticity

Procedia PDF Downloads 205