Search results for: geographic feature distribution
6667 An Investigation on Electric Field Distribution around 380 kV Transmission Line for Various Pylon Models
Authors: C. F. Kumru, C. Kocatepe, O. Arikan
Abstract:
In this study, electric field distribution analyses for three pylon models are carried out with Finite Element Method (FEM) based software. Analyses are performed in both stationary and time domains to observe instantaneous values along with the effective ones. The results show that different line geometries considerably affect the magnitude and distribution of the electric field even though the line voltages are the same. Furthermore, the maximum instantaneous electric field values obtained in the time-domain analysis are considerably higher than the effective values in the stationary mode. Consequently, electric field distribution analyses should be made individually for each line model, and the exposure limits or distances to residential buildings should be defined according to the results obtained.
Keywords: electric field, energy transmission line, finite element method, pylon
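Illustrative sketch (not the paper's FEM analysis): for a sinusoidal field at power frequency, the instantaneous peak exceeds the effective (RMS) value by a factor of √2, which is the kind of gap between time-domain and stationary results the abstract describes. The frequency and field magnitude below are assumed values for illustration only.

```python
import numpy as np

f = 50.0                      # assumed line frequency in Hz
E_rms = 4.2                   # assumed effective field magnitude in kV/m (illustrative only)
t = np.linspace(0, 1 / f, 1000)
e_t = np.sqrt(2) * E_rms * np.sin(2 * np.pi * f * t)   # instantaneous waveform

print(f"effective value  : {E_rms:.2f} kV/m")
print(f"instantaneous max: {e_t.max():.2f} kV/m")       # ~1.41x the RMS value
```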
Procedia PDF Downloads 728
6666 Analysis of Operating Speed on Four-Lane Divided Highways under Mixed Traffic Conditions
Authors: Chaitanya Varma, Arpan Mehar
Abstract:
The present study demonstrates the procedure used to analyse speed data collected on various four-lane divided sections in India. Field data were collected at different straight and curved sections on rural highways with the help of a radar speed gun and a video camera. The data collected at the sections were analysed, and parameters pertaining to speed distributions were estimated. Different statistical distributions were fitted to vehicle-type speed data and to mixed traffic speed data. It was found that vehicle-type speed data follow either the normal or the log-normal distribution, whereas mixed traffic speed data follow more than one type of statistical distribution, the most common fits being the Beta and Weibull distributions. Separate operating speed models based on traffic and roadway geometric parameters were proposed in the present study, and operating speed models with traffic parameters and curve geometry parameters were established. Two different operating speed models were proposed with the variables 1/R and Ln(R) and were found to be realistic over different ranges of curve radius. The models developed in the present study are simple and realistic and can be used for forecasting operating speed on four-lane highways.
Keywords: highway, mixed traffic flow, modeling, operating speed
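Illustrative sketch (not the authors' dataset): fitting and comparing the candidate distributions named in the abstract to a spot-speed sample with SciPy. The synthetic speeds and the use of log-likelihood and the Kolmogorov-Smirnov test as comparison criteria are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
speeds = rng.normal(78, 9, 400)          # placeholder spot-speed sample in km/h

candidates = {
    "normal": (stats.norm, {}),
    "lognormal": (stats.lognorm, {}),
    "weibull": (stats.weibull_min, {}),
    # bound the Beta support so it covers the observed speeds
    "beta": (stats.beta, {"floc": speeds.min() - 1, "fscale": speeds.ptp() + 2}),
}
for name, (dist, kw) in candidates.items():
    params = dist.fit(speeds, **kw)
    ll = np.sum(dist.logpdf(speeds, *params))
    ks = stats.kstest(speeds, dist.cdf, args=params)
    print(f"{name:10s} log-lik={ll:8.1f}  KS p-value={ks.pvalue:.3f}")
```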
Procedia PDF Downloads 460
6665 Developing an Out-of-Distribution Generalization Model Selection Framework through Impurity and Randomness Measurements and a Bias Index
Authors: Todd Zhou, Mikhail Yurochkin
Abstract:
Out-of-distribution (OOD) detection is receiving increasing attention in the machine learning research community, boosted by recent technologies such as autonomous driving and image processing. This newly burgeoning field has created a need for more effective and efficient out-of-distribution generalization methods. Without access to label information, deploying machine learning models to out-of-distribution domains becomes extremely challenging, since it is impossible to evaluate model performance on unseen domains. To tackle this difficulty, we designed a model selection pipeline algorithm and developed a model selection framework with different impurity and randomness measurements to evaluate and choose the best-performing models for out-of-distribution data. By exploring different randomness scores based on predicted probabilities, we adopted the out-of-distribution entropy and developed a custom-designed score, "CombinedScore," as the evaluation criterion. This proposed score was created by adding labeled source information into the judging space of the uncertainty entropy score using the harmonic mean. Furthermore, prediction bias was explored through the equality-of-opportunity violation measurement. We also improved machine learning model performance through model calibration. The effectiveness of the framework with the proposed evaluation criteria was validated on the Folktables American Community Survey (ACS) datasets.
Keywords: model selection, domain generalization, model fairness, randomness measurements, bias index
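Hedged sketch of one plausible reading of the "CombinedScore" idea: an entropy-based randomness term on unlabeled OOD predictions is combined with labeled source-domain information through a harmonic mean. The exact definition, normalization, and the `source_accuracy` term are assumptions, not the paper's formula.

```python
import numpy as np

def entropy_score(probs):
    """Mean predictive entropy on unlabeled OOD data (higher = more uncertain)."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(np.mean(-np.sum(p * np.log(p), axis=1)))

def combined_score(ood_probs, source_accuracy):
    """Hypothetical CombinedScore: harmonic mean of an OOD confidence term
    (1 - normalized entropy) and labeled in-distribution accuracy."""
    n_classes = ood_probs.shape[1]
    confidence = 1.0 - entropy_score(ood_probs) / np.log(n_classes)
    return 2 * confidence * source_accuracy / (confidence + source_accuracy + 1e-12)

# toy comparison of two candidate models on the same OOD batch
probs_a = np.array([[0.9, 0.05, 0.05], [0.8, 0.1, 0.1]])
probs_b = np.array([[0.4, 0.3, 0.3], [0.34, 0.33, 0.33]])
print(combined_score(probs_a, 0.86), combined_score(probs_b, 0.88))
```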
Procedia PDF Downloads 124
6664 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images
Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir
Abstract:
The landing phase of a UAV is very critical, as there are many uncertainties in this phase that can easily lead to a hard landing or even a crash. In this paper, the estimation of relative distance and velocity to the ground, one of the most important processes during the landing phase, is studied. Accurate measurement sensors can be very expensive, as with LIDAR, or have a limited operational range, as with ultrasonic sensors. Additionally, absolute positioning systems like GPS or IMU cannot provide the distance to the ground independently. The focus of this paper is to determine whether the relative distance and velocity between the UAV and the ground can be measured in the landing phase using only low-resolution images taken by a monocular camera. The Lucas-Kanade feature detection technique is employed to extract the most suitable feature in a series of images taken during the UAV landing. Two different approaches based on the Extended Kalman Filter (EKF) are proposed, and their performance in estimating the relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process and the calculated optical flow as the measurement; the second approach uses the feature's projection on the camera plane (pixel position) as the measurement while employing both the kinematics of the UAV and the dynamics of variation of the projected point as the process to estimate both relative distance and relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specifically developed testbed has been used to compare the performance of the proposed algorithms. The case studies show that the low quality of the images results in considerable noise, which reduces the performance of the first approach. On the other hand, using the projected feature position is much less sensitive to the noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
Keywords: altitude estimation, drone, image processing, trajectory planning
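Minimal sketch of the second approach's structure (not the paper's implementation): an EKF whose state is [distance, descent rate], with the feature's pixel position as the measurement through a perspective-projection model. The time step, focal length, lateral feature offset, and noise covariances are assumed values.

```python
import numpy as np

dt, f_px, X_lat = 0.05, 800.0, 1.0   # step [s], focal length [px], assumed lateral feature offset [m]

F = np.array([[1.0, -dt], [0.0, 1.0]])   # state = [distance d, descent rate v]
Q = np.diag([1e-4, 1e-3])                # process noise (assumed)
R = np.array([[4.0]])                    # pixel measurement noise (assumed)

def h(x):            # perspective projection of the tracked feature
    return np.array([f_px * X_lat / x[0]])

def H_jac(x):        # Jacobian of h with respect to the state
    return np.array([[-f_px * X_lat / x[0] ** 2, 0.0]])

def ekf_step(x, P, z):
    # predict with the descent kinematics
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the observed pixel position z
    H = H_jac(x)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - h(x))
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([20.0, 1.0]), np.eye(2)
x, P = ekf_step(x, P, np.array([42.0]))
print(x)   # updated [distance, velocity] estimate
```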
Procedia PDF Downloads 113
6663 Force Distribution and Muscles Activation for Ankle Instability Patients with Rigid and Kinesiotape while Standing
Authors: Norazlin Mohamad, Saiful Adli Bukry, Zarina Zahari, Haidzir Manaf, Hanafi Sawalludin
Abstract:
Background: Deficits in neuromuscular recruitment and decreased force distribution are common problems among ankle instability patients due to altered joint kinematics that lead to recurrent ankle injuries. Rigid Tape and KT Tape have been widely used as therapeutic and performance-enhancement tools in ankle stability; however, the difference in effect between these two tapes is still controversial. Objective: To investigate the difference in effect between Rigid Tape and KT Tape on force distribution and muscle activation among ankle instability patients while standing. Study design: Crossover trial. Participants: 27 patients, aged between 18 and 30 years, participated in this study. All subjects had KT Tape and Rigid Tape applied to their affected ankle, with a 3-day interval between interventions. The subjects were first tested barefoot (without tape) to provide a baseline, then with KT Tape, and then with Rigid Tape. Result: There was no significant difference in force distribution at the forefoot and back-foot for either tape while standing. However, the mean data show that Rigid Tape produced the highest force distribution at the back-foot rather than the forefoot, whereas KT Tape produced more force distribution at the forefoot while standing. Regarding muscle activation (Peroneus Longus), results showed a significant difference between Rigid Tape and KT Tape (p = 0.048). However, there was no significant difference in Tibialis Anterior muscle activation between the two tapes while standing. Conclusion: The results indicated that the Peroneus Longus muscle was more active when Rigid Tape was applied rather than KT Tape in ankle instability patients while standing.
Keywords: ankle instability, kinematic, muscle activation, force distribution, Rigid Tape, KT tape
Procedia PDF Downloads 417
6662 Conservativeness of Probabilistic Constrained Optimal Control Method for Unknown Probability Distribution
Authors: Tomoaki Hashimoto
Abstract:
In recent decades, probabilistic constrained optimal control problems have attracted much attention in many research fields. Although probabilistic constraints are generally intractable in an optimization problem, several tractable methods have been proposed to handle them. In most methods, probabilistic constraints are reduced to deterministic constraints that are tractable in an optimization problem. However, there is a gap between the transformed deterministic constraints in the cases of known and unknown probability distributions. This paper examines the conservativeness of the probabilistic constrained optimization method with an unknown probability distribution. The objective of this paper is to provide a quantitative assessment of the conservatism of tractable constraints in probabilistic constrained optimization with an unknown probability distribution.
Keywords: optimal control, stochastic systems, discrete time systems, probabilistic constraints
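Hedged illustration of the gap the abstract refers to, for a linear chance constraint on a random vector ξ with mean μ and covariance Σ (this is a standard textbook reformulation, not necessarily the paper's exact setting):

```latex
\Pr\{\xi^{\top} x \le b\} \ge 1-\varepsilon
\quad\Longrightarrow\quad
\begin{cases}
\mu^{\top} x + \Phi^{-1}(1-\varepsilon)\,\sqrt{x^{\top}\Sigma x} \le b, & \xi \text{ Gaussian (known distribution)},\\[4pt]
\mu^{\top} x + \sqrt{\tfrac{1-\varepsilon}{\varepsilon}}\,\sqrt{x^{\top}\Sigma x} \le b, & \text{distribution unknown (Cantelli bound)}.
\end{cases}
```

Since √((1−ε)/ε) ≥ Φ⁻¹(1−ε) for ε < 0.5, the distribution-free constraint is tighter, which quantifies the conservativeness introduced by not knowing the distribution.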
Procedia PDF Downloads 580
6661 An Extended Inverse Pareto Distribution, with Applications
Authors: Abdel Hadi Ebraheim
Abstract:
This paper introduces a new extension of the Inverse Pareto distribution in the framework of the Marshall-Olkin (1997) family of distributions. This model is capable of modeling various shapes of aging and failure data. The statistical properties of the new model are discussed, and several methods are used to estimate the parameters involved. Explicit expressions are derived for different types of moments of value in reliability analysis. In addition, the order statistics of samples from the new proposed model are studied. Finally, the usefulness of the new model for modeling reliability data is illustrated using two real data sets together with a simulation study.
Keywords: Pareto distribution, Marshall-Olkin, reliability, hazard functions, moments, estimation
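Sketch of the construction under assumed parameterizations: the baseline inverse Pareto CDF F(x) = (x/(x+θ))^τ is passed through the Marshall-Olkin transform G(x) = F(x) / (α + (1−α)F(x)). The parameterization of the baseline and the parameter names are assumptions, not necessarily those of the paper.

```python
import numpy as np

def inv_pareto_cdf(x, tau, theta):
    """Baseline inverse Pareto CDF, F(x) = (x / (x + theta))**tau for x > 0 (assumed form)."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, (x / (x + theta)) ** tau, 0.0)

def mo_extended_cdf(x, alpha, tau, theta):
    """Marshall-Olkin extension of a baseline CDF F:
       G(x) = F(x) / (alpha + (1 - alpha) * F(x)), with tilt parameter alpha > 0."""
    F = inv_pareto_cdf(x, tau, theta)
    return F / (alpha + (1.0 - alpha) * F)

x = np.array([0.5, 1.0, 2.0, 5.0])
print(mo_extended_cdf(x, alpha=0.5, tau=2.0, theta=1.0))
```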
Procedia PDF Downloads 82
6660 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources
Authors: Mustafa Alhamdi
Abstract:
An industrial application to classify gamma rays and neutron events is investigated in this study using deep machine learning. Identification using a convolutional neural network and a recursive neural network showed a significant improvement in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on the feature extraction method, followed by classification. The features extracted from the spectrum profiles aim to find patterns and relationships that represent the actual spectrum energy in a low-dimensional space. Increasing the level of separation between classes in feature space improves the possibility of enhancing classification accuracy. The nonlinear feature extraction performed by a neural network involves a variety of transformations and mathematical optimization, whereas principal component analysis depends on linear transformations to extract features and subsequently improve the classification accuracy. In this paper, the isotope spectrum information has been preprocessed by finding the frequency components relative to time and using them as a training dataset. The Fourier transform implementation used to extract the frequency components has been optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4. The readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, by combining the votes of many models, managed to improve the classification accuracy of the neural networks. The ability to discriminate gamma and neutron events in a single prediction approach using deep machine learning has shown high accuracy. The findings show that classification accuracy can be improved by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes. Tuning the deep machine learning models by hyperparameter optimization enhanced the separation in the latent space and provided the ability to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification
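Minimal sketch of the spectrogram preprocessing step (not the authors' pipeline): a windowed Fourier transform of a detector signal, where the window choice trades frequency resolution against leakage. The sampling rate, Hann window, segment lengths, and the placeholder signal are assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1.0e6                     # assumed sampling rate of the detector signal [Hz]
t = np.arange(0, 0.01, 1 / fs)
signal = np.random.default_rng(1).normal(0, 0.05, t.size)   # placeholder detector trace

# Windowed Fourier transform with a Hann window (windowing choice discussed in the abstract)
f, tt, Sxx = spectrogram(signal, fs=fs, window="hann",
                         nperseg=256, noverlap=128, scaling="density")

# Log-scaled spectrogram used as a training image for the CNN/RNN classifiers
features = np.log10(Sxx + 1e-12)
print(features.shape)          # (frequency bins, time frames)
```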
Procedia PDF Downloads 150
6659 Combination of Artificial Neural Network Model and Geographic Information System for Prediction Water Quality
Authors: Sirilak Areerachakul
Abstract:
Water quality has initiated serious management efforts in many countries. Artificial Neural Network (ANN) models have been developed as forecasting tools for predicting water quality trends based on historical data. This study endeavors to automatically classify water quality. The water quality classes are evaluated using six factor indices: pH value (pH), Dissolved Oxygen (DO), Biochemical Oxygen Demand (BOD), Nitrate Nitrogen (NO3N), Ammonia Nitrogen (NH3N), and Total Coliform (T-Coliform). The methodology involves applying data mining techniques using multilayer perceptron (MLP) neural network models. The data consist of 11 sites along the Saen Saep canal in Bangkok, Thailand, obtained from the Department of Drainage and Sewerage, Bangkok Metropolitan Administration, during 2007-2011. The multilayer perceptron neural network achieved a high classification accuracy of 94.23% for the water quality of the Saen Saep canal. This encouraging result could subsequently be combined with GIS data to improve the classification accuracy significantly.
Keywords: artificial neural network, geographic information system, water quality, computer science
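Minimal sketch of the classification step (placeholder data, not the Saen Saep measurements): a multilayer perceptron trained on the six indices named in the abstract. The hidden-layer size and cross-validation setup are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Placeholder data: columns = pH, DO, BOD, NO3N, NH3N, T-Coliform; y = water-quality class
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = rng.integers(0, 4, size=200)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
)
print(cross_val_score(clf, X, y, cv=5).mean())   # classification accuracy estimate
```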
Procedia PDF Downloads 343
6658 Using Geographic Information System and Analytic Hierarchy Process for Detecting Forest Degradation in Benslimane Forest, Morocco
Authors: Loubna Khalile, Hicham Lahlaoi, Hassan Rhinane, A. Kaoukaya, S. Fal
Abstract:
Green spaces are an essential element; they contribute to improving the quality of life of the towns around them. They are a place of relaxation, walking, and rest, and a playground for sport and for young people. According to the United Nations, forests cover 31% of the land; in Morocco in 2013 they covered 12.65% of the total land area, still a small proportion compared to the natural need for forests as a green lung of our planet. The Benslimane Forest is a large green area belonging to the Chaouia-Ouardigha Region and the Greater Casablanca Region; it is located geographically between Casablanca, considered the economic and business capital of Morocco, and Rabat, the national political capital, with an area of 12,261.80 hectares. The essential problem usually encountered in suburban forests is visitation and tourism pressure, that is, anthropogenic action, as well as other ecological and environmental factors. In recent decades, Morocco has experienced drought years that have affected the forest, and with increasing human pressure it suffers heavy losses every day, as well as over-exploitation. Moroccan forest ecosystems are weak, with intense ecological variation, domanial status, and usage rights granted to the population; the forests are experiencing significant deterioration due to neglect and immoderate use of forest resources, which can lead to the destruction of animal habitats and vegetation and disturb the water cycle and climate. The purpose of this study is to model the degree of degradation of the forest and identify its causes for prevention, using remote sensing and geographic information systems together with climate and ancillary data. The analytic hierarchy process was used to find the degree of influence and the weight of each parameter; in this case, it was found that anthropogenic activities have a fairly significant impact and have thus influenced the climate.
Keywords: analytic hierarchy process, degradation, forest, geographic information system
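Illustrative sketch of the analytic hierarchy process (AHP) weighting step: a pairwise comparison matrix is reduced to a priority vector (the factor weights) via its principal eigenvector, and a consistency ratio checks the judgments. The factors and comparison values below are hypothetical, not the study's.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for four degradation factors
# (e.g., human pressure, slope, land cover, rainfall) -- values are illustrative.
A = np.array([
    [1.0, 3.0, 5.0, 7.0],
    [1/3, 1.0, 3.0, 5.0],
    [1/5, 1/3, 1.0, 3.0],
    [1/7, 1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # AHP priority vector (factor weights)

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)          # consistency index
RI = 0.90                                     # Saaty's random index for n = 4
print("weights:", np.round(weights, 3), "CR:", round(CI / RI, 3))   # CR < 0.1 is acceptable
```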
Procedia PDF Downloads 325
6657 A Comparative Study of Generalized Autoregressive Conditional Heteroskedasticity (GARCH) and Extreme Value Theory (EVT) Model in Modeling Value-at-Risk (VaR)
Authors: Longqing Li
Abstract:
The paper addresses the inefficiency of the classical model in measuring Value-at-Risk (VaR) using a normal distribution or a Student's t distribution. Specifically, the paper focuses on the one-day-ahead Value-at-Risk (VaR) of major stock markets' daily returns in the US, UK, China, and Hong Kong over the most recent ten years at the 95% confidence level. To improve the predictive power and search for the best-performing model, the paper proposes two leading alternatives, Extreme Value Theory (EVT) and a family of GARCH models, and compares their relative performance. The main contribution can be summarized in two aspects. First, the paper extends the GARCH family by incorporating EGARCH and TGARCH to shed light on the differences between them in estimating one-day-ahead Value-at-Risk (VaR). Second, to account for the non-normality of the distribution of financial returns, the paper applies the Generalized Error Distribution (GED), instead of the normal distribution, to govern the innovation term. A dynamic back-testing procedure is employed to assess the performance of each model, the GARCH family and the conditional EVT. The conclusion is that Exponential GARCH yields the best estimate in out-of-sample one-day-ahead Value-at-Risk (VaR) forecasting. Moreover, the discrepancy in performance between the GARCH models and the conditional EVT is indistinguishable.
Keywords: Value-at-Risk, Extreme Value Theory, conditional EVT, backtesting
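A hedged sketch of a one-day-ahead EGARCH-GED VaR forecast, using the third-party `arch` package and SciPy's `gennorm` as a stand-in for the GED quantile; the synthetic return series, the parameter name `nu` for the GED shape, and the standardization of the quantile are assumptions rather than details from the paper.

```python
import numpy as np
from scipy.stats import gennorm
from arch import arch_model

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=2500) * 1.2       # placeholder daily % returns

# EGARCH(1,1) with Generalized Error Distribution innovations
am = arch_model(returns, mean="Constant", vol="EGARCH", p=1, o=1, q=1, dist="ged")
res = am.fit(disp="off")
fc = res.forecast(horizon=1)
mu = fc.mean.values[-1, 0]
sigma = np.sqrt(fc.variance.values[-1, 0])

# 5% quantile of a unit-variance GED (gennorm rescaled to unit standard deviation)
nu = res.params["nu"]
q = gennorm.ppf(0.05, nu) / gennorm.std(nu)
var_95 = -(mu + q * sigma)                            # one-day-ahead 95% VaR
print(round(var_95, 3))
```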
Procedia PDF Downloads 321
6656 Risk Indicators of Massive Removal Phenomena According to the Mora - Vahrson Method, Applied in Pitalito and Campoalegre Municipalities
Authors: Laura Fernanda Pedreros Araque, Sebastian Rivera Pardo
Abstract:
Mass removal phenomena have been among the most frequent natural disasters in the world, causing thousands of deaths and victims, damage to homes, and disease. In the municipalities of Pitalito and Campoalegre, in the Huila department of Colombia, disasters have occurred due to various events such as heavy rainfall and earthquakes, causing landslides and floods, among others, and affecting the economy, the community, and transportation. For this reason, a study was carried out on the areas most prone to these phenomena in order to take preventive measures for the protection of the population, the management of resources, and the planning of civil works. For the proposed objective, the Mora-Vahrson method was used, which classifies the degree of landslide susceptibility of the areas concerned. Various factors or parameters were evaluated, such as soil moisture, lithology, slope, seismicity, and rainfall; each of these indicators was obtained using information from IDEAM and the Servicio Geologico Colombiano (SGC), and geographic information was geoprocessed in the ArcGIS software to produce a map of landslide susceptibility, classifying the areas of the municipalities as very low, low, medium, moderate, high, or very high.
Keywords: geographic information system, landslide, mass removal phenomena, Mora-Vahrson method
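Hedged sketch of the Mora-Vahrson combination rule: a susceptibility product of rated intrinsic factors multiplied by the sum of rated trigger factors. The exact factor names, rating ranges, and values below are assumptions for illustration.

```python
def mora_vahrson_hazard(Sl, Sh, Sp, Ts, Tp):
    """Landslide hazard index in the spirit of Mora-Vahrson:
    intrinsic susceptibility (slope Sl, soil humidity Sh, lithology Sp, each rated 0-5)
    multiplied by the sum of trigger factors (seismicity Ts, rainfall Tp)."""
    return (Sl * Sh * Sp) * (Ts + Tp)

# Example raster cell with illustrative ratings
H = mora_vahrson_hazard(Sl=4, Sh=3, Sp=3, Ts=2, Tp=3)
print(H)   # higher values map to the 'high' / 'very high' susceptibility classes
```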
Procedia PDF Downloads 142
6655 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling
Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé
Abstract:
Large-size forged blocks made of medium-carbon high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. The manufacturing process of the large blocks starts with ingot casting, followed by open-die forging and a quench-and-temper heat treatment to achieve the desired mechanical properties, and numerical simulation is widely used nowadays to predict these properties before the experiment. However, the temperature gradient inside the specimen remains challenging: the temperature inside the material before loading is not uniform, yet a constant temperature is commonly used in the simulation because it is assumed that the temperature is homogenized after some holding time. Therefore, to be close to the experiment, the real distribution of the temperature through the specimen is needed before the mechanical loading. We present here a robust algorithm that calculates the temperature gradient within the specimen, thus representing a realistic temperature distribution before deformation. Indeed, most numerical simulations consider a uniform temperature, which is not really the case because the surface and core temperatures of the specimen are not identical. Another feature that influences the mechanical properties of the specimen is recrystallization, which strongly depends on the deformation conditions and the type of deformation, such as upsetting or cogging. Upsetting and cogging are the stages where the largest deformations are observed, and many microstructural phenomena can be observed, such as recrystallization, which requires in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size during the process and therefore helps to increase the mechanical properties of the final product. Thus, identifying the conditions for the initiation of dynamic recrystallization remains relevant. The temperature distribution within the sample and the strain rate also influence recrystallization initiation, so developing a technique to predict the initiation of this recrystallization remains challenging. In this perspective, we propose here, in addition to the algorithm that provides the temperature distribution before the loading stage, an analytical model to determine the initiation of this recrystallization. These two techniques are implemented in the Abaqus finite element software via the UAMP and VUHARD subroutines for comparison with a simulation where an isothermal temperature is imposed. An Artificial Neural Network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. From the simulation, the temperature distribution inside the material and the recrystallization initiation are properly predicted and compared to literature models.
Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation
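Minimal 1-D illustration (not the authors' UAMP algorithm) of why a non-uniform pre-loading temperature field arises: an explicit finite-difference conduction step produces a core-to-surface gradient after a finite holding time. The geometry, diffusivity, temperatures, and number of steps are assumed values.

```python
import numpy as np

L, n = 0.5, 51                      # half-thickness [m], grid points
alpha = 6.0e-6                      # assumed thermal diffusivity [m^2/s]
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha            # stable explicit time step
T = np.full(n, 1200.0)              # initial uniform temperature [C]
T_surf = 900.0                      # imposed surface temperature [C]

for _ in range(2000):               # limited holding time -> gradient persists
    T[0] = T_surf                   # surface node (Dirichlet condition)
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                   # symmetry at the core (zero flux)

print(f"surface {T[0]:.0f} C, core {T[-1]:.0f} C")   # non-uniform field passed to the FE model
```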
Procedia PDF Downloads 80
6654 Presenting a Model in the Analysis of Supply Chain Management Components by Using Statistical Distribution Functions
Authors: Ramin Rostamkhani, Thurasamy Ramayah
Abstract:
One of the most important topics for today's industrial organizations is the challenging issue of supply chain management. In this field, scientists and researchers have published numerous practical articles and models, especially in the last decade. In this research, to the best of our knowledge, the modeling of supply chain management components using well-known statistical distribution functions is considered for the first time; describing the behavior of supply chain data through the characteristics of statistical distribution functions is an innovative line of research that has not been published elsewhere. In an analytical process, describing different aspects of these functions, including the probability density, cumulative distribution, reliability, and failure functions, can identify the suitable statistical distribution function for each component of supply chain management, which can then be applied to predict the future behavior of the relevant component. Providing a model that fits the best statistical distribution function to each of the supply chain management components would be a major step forward in describing the behavior of supply chain elements in today's industrial organizations. As a final step, the results of the proposed model are demonstrated by comparing the process capability indices before and after its implementation, together with the relevant assessment as verification of the approach. The introduced approach can save the time and cost required to achieve organizational goals and can increase added value in the organization.
Keywords: analyzing, process capability indices, statistical distribution functions, supply chain management components
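Illustrative sketch of the two steps the abstract combines, on a placeholder supply chain variable: choose the best-fitting distribution for a component's data, then report process capability indices against specification limits. The data, candidate distributions, and specification limits are assumptions.

```python
import numpy as np
from scipy import stats

lead_times = np.random.default_rng(2).gamma(shape=4.0, scale=1.5, size=300)  # placeholder lead times [days]

# 1) choose the best-fitting distribution among a few candidates (by log-likelihood)
best = max(
    ((name, dist, dist.fit(lead_times)) for name, dist in
     [("gamma", stats.gamma), ("lognorm", stats.lognorm), ("weibull", stats.weibull_min)]),
    key=lambda t: np.sum(t[1].logpdf(lead_times, *t[2])),
)
print("best fit:", best[0])

# 2) process capability indices against assumed specification limits
LSL, USL = 1.0, 14.0
mu, sigma = lead_times.mean(), lead_times.std(ddof=1)
Cp = (USL - LSL) / (6 * sigma)
Cpk = min(USL - mu, mu - LSL) / (3 * sigma)
print(f"Cp={Cp:.2f}  Cpk={Cpk:.2f}")
```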
Procedia PDF Downloads 87
6653 Collaborative Energy Optimization for Multi-Microgrid Distribution System Based on Two-Stage Game Approach
Authors: Hanmei Peng, Yiqun Wang, Mao Tan, Zhuocen Dai, Yongxin Su
Abstract:
Efficient energy management in multi-microgrid distribution systems is of significant importance for enhancing the economic benefits of regional power grids. To better balance conflicts among the various stakeholders, a two-stage game-based collaborative optimization approach is proposed in this paper, effectively addressing the realistic scenario involving both competition and collaboration among stakeholders. The first stage, aimed at maximizing individual benefits, involves constructing a non-cooperative tariff game model for the distribution network and the surplus microgrid. In the second stage, considering power flow and physical line capacity constraints, we establish a cooperative peer-to-peer (P2P) game model for the multi-microgrid distribution system, and the optimization employs the method of Lagrange multipliers to handle the complex constraints. Simulation results demonstrate that the proposed approach can effectively improve system economics while harmonizing individual and collective rationality.
Keywords: cooperative game, collaborative optimization, multi-microgrid distribution system, non-cooperative game
Procedia PDF Downloads 70
6652 Bayesian Estimation under Different Loss Functions Using Gamma Prior for the Case of Exponential Distribution
Authors: Md. Rashidul Hasan, Atikur Rahman Baizid
Abstract:
The Bayesian estimation approach is a non-classical estimation technique in statistical inference and is very useful in real-world situations. The aim of this paper is to study the Bayes estimators of the parameter of the exponential distribution under different loss functions and to compare them with each other and with the classical maximum likelihood estimator (MLE). In real life, we always try to minimize the loss, and we also want to gather prior information (a prior distribution) about the problem in order to solve it accurately. Here the gamma prior is used as the prior distribution of the exponential parameter for finding the Bayes estimator. In our study, we also used different symmetric and asymmetric loss functions, such as the squared error loss function, the quadratic loss function, the modified linear exponential (MLINEX) loss function, and the non-linear exponential (NLINEX) loss function. Finally, the mean square error (MSE) of the estimators is obtained and presented graphically.
Keywords: Bayes estimator, maximum likelihood estimator (MLE), modified linear exponential (MLINEX) loss function, squared error (SE) loss function, non-linear exponential (NLINEX) loss function
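Sketch of the conjugate setup behind the comparison: with a Gamma(a, b) prior on the exponential rate θ and n observations, the posterior is Gamma(a + n, b + Σx), so the Bayes estimator under squared-error loss is the posterior mean and the quadratic-loss estimator follows from posterior inverse moments. The hyper-parameter values and sample are assumptions; the MLINEX/NLINEX estimators from the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
theta_true = 2.0
x = rng.exponential(scale=1 / theta_true, size=30)     # data from Exp(theta), theta = rate

a, b = 2.0, 1.0                                         # Gamma(a, b) prior on theta (assumed hyper-parameters)
a_post, b_post = a + x.size, b + x.sum()                # conjugacy: posterior is Gamma(a + n, b + sum(x))

theta_mle = x.size / x.sum()                            # classical MLE, 1 / sample mean
theta_sel = a_post / b_post                             # Bayes estimator under squared-error loss (posterior mean)
theta_qdl = (a_post - 2) / b_post                       # Bayes estimator under quadratic loss, E[1/t] / E[1/t^2]

for name, est in [("MLE", theta_mle), ("SE loss", theta_sel), ("quadratic loss", theta_qdl)]:
    print(f"{name:15s} {est:.3f}   squared error {(est - theta_true) ** 2:.4f}")
```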
Procedia PDF Downloads 383
6651 Impact of the Photovoltaic Integration in Power Distribution Network: Case Study in Badak Liquefied Natural Gas (LNG)
Authors: David Hasurungan
Abstract:
The objective of this paper is to analyze the impact of photovoltaic system integration on the power distribution network. A case study of the Badak Liquefied Natural Gas (LNG) plant is presented. The Badak LNG electricity network is operated in islanded mode, and the total power generation in the plant is significantly affected by the feed gas supply. Meanwhile, to support government regulation, Badak LNG has continuously implemented grid-connected photovoltaic systems in the existing power distribution network. The interaction between train operational mode changes in the Badak LNG plant and the growth of the photovoltaic system is also covered in the analysis. The analysis and calculations are performed using the software Power Factory 15.1.
Keywords: power quality, distribution network, grid-connected photovoltaic system, power management system
Procedia PDF Downloads 360
6650 Optimization Analysis of Controlled Cooling Process for H-Shape Steel Beams
Authors: Jiin-Yuh Jang, Yu-Feng Gan
Abstract:
In order to improve the comprehensive mechanical properties of the steel, the cooling rate and the temperature distribution must be controlled during the cooling process. A three-dimensional numerical model for predicting the heat transfer coefficient distribution of an H-beam in the controlled cooling process was developed in order to obtain a uniform temperature distribution and minimize the maximum stress and the maximum deformation after controlled cooling. An algorithm based on a simplified conjugate-gradient method was used as an optimizer to optimize the heat transfer coefficient distribution. The numerical results showed that, for the case of air cooling for 5 seconds followed by water cooling for 6 seconds with a uniform heat transfer coefficient, the cooling rate is 15.5 ℃/s, the maximum temperature difference is 85 ℃, the maximum stress is 125 MPa, and the maximum deformation is 1.280 mm. After optimizing the heat transfer coefficient distribution in the controlled cooling process with the same cooling time, the cooling rate increases to 20.5 ℃/s, the maximum temperature difference decreases to 52 ℃, the maximum stress decreases to 82 MPa, and the maximum deformation decreases to 1.167 mm.
Keywords: controlled cooling, H-beam, optimization, thermal stress
Procedia PDF Downloads 370
6649 Spatial Pattern and Predictors of Malaria in Ethiopia: Application of Auto Logistics Spatial Regression
Authors: Melkamu A. Zeru, Yamral M. Warkaw, Aweke A. Mitku, Muluwerk Ayele
Abstract:
Introduction: Malaria is a severe health threat worldwide, mainly in Africa. It is a major cause of health problems, and the risk of morbidity and mortality associated with malaria cases shows spatial variation across the country. This study aimed to investigate the spatial patterns and predictors of malaria distribution in Ethiopia. Methods: A weighted sample of 15,239 individuals with rapid diagnostic tests was obtained from the Central Statistical Agency and the Ethiopia Malaria Indicator Survey of 2015. Global Moran's I and Moran scatter plots were used to determine the distribution of malaria cases, whereas the local Moran's I statistic was used to identify exposed areas. In data manipulation, machine learning was used for variable reduction, and the statistical software R, Stata, and Python were used for data management and analysis. An autologistic spatial binary regression model was used to investigate the predictors of malaria. Results: The final autologistic regression model reported that male clients had a significantly higher risk of malaria compared to female clients [AOR = 2.401, 95% CI: (2.125-2.713)]. The distribution of malaria differed across the regions. The highest incidence of malaria was found in Gambela [AOR = 52.55, 95% CI: (40.54-68.12)], followed by Beneshangul [AOR = 34.95, 95% CI: (27.159-44.963)]. In contrast, individuals in Amhara [AOR = 0.243, 95% CI: (0.195-0.303)], Oromiya [AOR = 0.197, 95% CI: (0.158-0.244)], Dire Dawa [AOR = 0.064, 95% CI: (0.049-0.082)], Addis Ababa [AOR = 0.057, 95% CI: (0.044-0.075)], Somali [AOR = 0.077, 95% CI: (0.059-0.097)], SNNPR [AOR = 0.329, 95% CI: (0.261-0.413)], and Harari [AOR = 0.256, 95% CI: (0.201-0.325)] had a lower incidence of malaria compared with Tigray. Furthermore, for a one-meter increase in altitude, the odds of a positive rapid diagnostic test (RDT) decrease by 1.6% [AOR = 0.984, 95% CI: (0.984-0.984)]. The use of a shared toilet facility was found as a protective factor for malaria in Ethiopia [AOR = 1.671, 95% CI: (1.504-1.854)]. The spatial autocorrelation variable changes the constant from AOR = 0.471 for the logistic regression to AOR = 0.164 for the autologistic regression. Conclusions: This study found that the incidence of malaria in Ethiopia has a spatial pattern associated with socio-economic, demographic, and geographic risk factors. Spatial clustering of malaria cases occurred in all regions, and the risk of clustering differed across the regions. The risk of malaria was found to be higher for those who live in houses with soil floors compared to those with cement or ceramic floors. Similarly, households with thatched, metal, thin, or other roof types have a higher risk of malaria than houses with ceramic tile roofs. Moreover, using a protective anti-mosquito net reduced the risk of malaria incidence.
Keywords: malaria, Ethiopia, autologistic, spatial model, spatial clustering
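Minimal sketch of the global Moran's I statistic used to detect the spatial clustering described above, computed on a toy set of regions with an assumed contiguity weight matrix (not the survey data):

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I for a variable observed over regions with spatial weights W."""
    z = values - values.mean()
    n = values.size
    s0 = W.sum()
    return (n / s0) * (z @ W @ z) / (z @ z)

# toy example: four regions on a line, rook-style contiguity weights (assumed)
rdt_positivity = np.array([0.02, 0.03, 0.25, 0.22])          # malaria RDT positivity per region
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(round(morans_i(rdt_positivity, W), 3))   # > 0 indicates spatial clustering of cases
```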
Procedia PDF Downloads 34
6648 Prediction of Gully Erosion with Stochastic Modeling by using Geographic Information System and Remote Sensing Data in North of Iran
Authors: Reza Zakerinejad
Abstract:
Gully erosion is a serious problem threatening the sustainability of agricultural areas, rangelands, and water resources in a large part of Iran. This type of water erosion is the main source of sedimentation in many catchment areas in the north of Iran. Since many national assessment approaches have applied only qualitative models, the aim of this study is to predict the spatial distribution of gully erosion processes by means of detailed terrain analysis and GIS-based logistic regression in the loess deposits, in a case study in the Golestan Province. In this study, a DEM with 25-meter resolution derived from ASTER data has been used, and Landsat ETM data have been used for land use mapping. The TreeNet model, as a stochastic modeling approach, was applied to predict the areas susceptible to gully erosion. In this model, 20% of the data were used for learning and 20% for validation, and performance was evaluated using the ROC. GIS and satellite image analysis techniques were therefore applied to derive the input information for the stochastic model. The result of this study is a highly accurate map of gully erosion potential.
Keywords: TreeNet model, terrain analysis, Golestan Province, Iran
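Illustrative sketch, not the study's pipeline: TreeNet is a stochastic gradient boosting method, so scikit-learn's `GradientBoostingClassifier` is used here as a stand-in to map per-cell terrain attributes to gully-erosion susceptibility. The synthetic predictors and the ROC AUC evaluation are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder terrain attributes per raster cell: slope, curvature, NDVI, distance to stream, ...
rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 1000) > 1).astype(int)   # 1 = gully present

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier(random_state=0)    # boosted trees, in the spirit of TreeNet
model.fit(X_tr, y_tr)
p = model.predict_proba(X_te)[:, 1]                   # gully-erosion susceptibility per cell
print("ROC AUC:", round(roc_auc_score(y_te, p), 3))
```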
Procedia PDF Downloads 535
6647 The Modification of Convolutional Neural Network in Fin Whale Identification
Authors: Jiahao Cui
Abstract:
In past centuries, due to climate change and intense whaling, the global whale population has dramatically declined. Among the various whale species, the fin whale experienced the most drastic drop in numbers due to its popularity in whaling. Against this background, identifying fin whale calls could be immensely beneficial to the preservation of the species. This paper uses feature extraction to process the input audio signal; then a network based on AlexNet and three networks based on the ResNet model were constructed to classify fin whale calls. A mixture of the DOSITS database and the Watkins database was used during training. The results demonstrate that a modified ResNet network has the best performance considering precision and network complexity.
Keywords: convolutional neural network, ResNet, AlexNet, fin whale preservation, feature extraction
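Hedged sketch of one common way to modify a stock ResNet for this kind of task (the paper's exact architecture changes are not specified in the abstract): swap the classification head for a two-class output and adapt the first convolution to single-channel spectrogram inputs. The choice of ResNet-18 and the `weights=None` argument (recent torchvision versions) are assumptions.

```python
import torch.nn as nn
from torchvision import models

# Adapt a stock ResNet to binary fin-whale-call classification on spectrogram images.
model = models.resnet18(weights=None)             # architecture choice is an assumption
model.fc = nn.Linear(model.fc.in_features, 2)     # replace the 1000-class head with 2 classes

# Spectrograms are single-channel, so the first convolution can also be replaced.
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
```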
Procedia PDF Downloads 122
6646 A Heuristic for the Integrated Production and Distribution Scheduling Problem
Authors: Christian Meinecke, Bernd Scholz-Reiter
Abstract:
The integrated problem of production and distribution scheduling is relevant in many industrial applications, and many heuristics to solve this integrated problem have been developed in the last decade. Most of these heuristics use a sequential working principle or a single decomposition-and-integration approach to separate and solve sub-problems. A heuristic using a multi-step decomposition and integration approach is presented in this paper and evaluated in a case study. The results show significant improvements compared with sequential scheduling heuristics.
Keywords: production and outbound distribution, integrated planning, heuristic, decomposition, integration
Procedia PDF Downloads 429
6645 Attribute Analysis of Quick Response Code Payment Users Using Discriminant Non-negative Matrix Factorization
Authors: Hironori Karachi, Haruka Yamashita
Abstract:
Recently, quick response (QR) code payment systems have become popular. Many companies have introduced new QR code payment services, and these services compete with each other to increase their number of users. To increase the number of users, we should grasp the differences in the demographic information, usage information, and value of users between services. In this study, we analyze real-world data provided by Nomura Research Institute, including the demographic data of users and information on users' usage of two services, LINE Pay and PayPay. For analyzing such data and interpreting their features, Non-negative Matrix Factorization (NMF) is widely used; however, the target data suffer from missing values. We use EM-algorithm NMF (EMNMF) to complete the unknown values and understand the features of the data presented in matrix form. Moreover, to compare the results of the NMF analysis of the two matrices, Discriminant NMF (DNMF) shows the differences in users' features between the two matrices. In this study, we combine EMNMF and DNMF and analyze the target data. As the interpretation, we show the differences in user features between LINE Pay and PayPay.
Keywords: data science, non-negative matrix factorization, missing data, quality of services
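Hedged sketch of the missing-data idea behind EMNMF (not the authors' algorithm): multiplicative NMF updates restricted to observed entries, so the low-rank reconstruction fills in the unknown values. The toy matrix, rank, and iteration count are assumptions.

```python
import numpy as np

def nmf_with_missing(X, mask, k, n_iter=500, eps=1e-9, seed=0):
    """Multiplicative-update NMF that ignores missing entries (mask = False),
    an EM-flavoured way to complete unknown values in the spirit of EMNMF."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    Xm = np.where(mask, X, 0.0)
    for _ in range(n_iter):
        WH = W @ H
        H *= (W.T @ Xm) / (W.T @ (mask * WH) + eps)
        WH = W @ H
        W *= (Xm @ H.T) / ((mask * WH) @ H.T + eps)
    return W, H, W @ H          # W @ H fills in the missing entries

# toy user-by-attribute matrix with missing demographic entries
X = np.array([[5.0, 3.0, np.nan], [4.0, np.nan, 1.0], [1.0, 1.0, 5.0]])
mask = ~np.isnan(X)
W, H, X_hat = nmf_with_missing(np.nan_to_num(X), mask, k=2)
print(np.round(X_hat, 2))
```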
Procedia PDF Downloads 131
6644 A Hybrid Feature Selection and Deep Learning Algorithm for Cancer Disease Classification
Authors: Niousha Bagheri Khulenjani, Mohammad Saniee Abadeh
Abstract:
Learning from very big datasets is a significant problem for most present data mining and machine learning algorithms. MicroRNA (miRNA) is one of the important large genomic and non-coding datasets representing the genome sequences. In this paper, a hybrid method for the classification of miRNA data is proposed. Due to the variety of cancers and the high number of genes, analyzing the miRNA dataset has been a challenging problem for researchers. The number of features is high relative to the number of samples, and the data suffer from being imbalanced. A feature selection method is used to select the features with the greatest ability to distinguish classes and to eliminate obscure features. Afterward, a Convolutional Neural Network (CNN) classifier for the classification of cancer types is utilized, which employs a Genetic Algorithm to find optimized hyper-parameters of the CNN. In order to make the classification by the CNN faster, a Graphics Processing Unit (GPU) is recommended for computing the mathematical operations in parallel. The proposed method is tested on a real-world dataset with 8,129 patients, 29 different types of tumors, and 1,046 miRNA biomarkers, taken from The Cancer Genome Atlas (TCGA) database.
Keywords: cancer classification, feature selection, deep learning, genetic algorithm
Procedia PDF Downloads 111
6643 Electrical Tortuosity across Electrokinetically Remediated Soils
Authors: Waddah S. Abdullah, Khaled F. Al-Omari
Abstract:
Electrokinetic remediation is one of the most influential and effective methods to decontaminate contaminated soils. Electroosmosis and electromigration are the processes of electrochemical extraction of contaminants from soils, and the driving force behind contaminant removal in both processes is the voltage gradient. Therefore, the electric field distribution throughout the soil domain is extremely important to investigate, as are the factors that help to establish a uniform electric field distribution so that the clean-up process works properly and efficiently. In this study, small passive electrodes (made of graphite) were placed at predetermined locations within the soil specimen, and the voltage drop between these passive electrodes was measured in order to observe the electrical distribution throughout the tested soil specimens. The electrokinetic test was conducted on two types of soil: a sandy soil and a clayey soil. The electrical distribution throughout the soil domain was measured under different test conditions, and the electric field distribution was observed in a three-dimensional pattern in order to establish the electrical distribution within the soil domain. The effects of density, applied voltage, and degree of saturation on the electrical distribution within the remediated soil were investigated. The distributions of the moisture content, the concentration of sodium ions, and the concentration of calcium ions were determined and presented in a three-dimensional scheme. The study has shown that the electrical conductivity within the soil domain depends on the moisture content and the concentration of electrolytes present in the pore fluid. The distribution of the electric field in the saturated soil was found not to be affected by its density. The study has also shown that a high voltage gradient leads to a non-uniform electric field distribution within the electroremediated soil. Very importantly, it was found that even when the electric field distribution is uniform globally (i.e., between the passive electrodes), local non-uniformity can be established within the remediated soil mass. Cracks or air gaps formed due to temperature rise (because of electric flow in low-conductivity regions) promote electrical tortuosity. Thus, fracturing or cracking in the remediated soil mass disconnects the electric current, and hence no contaminant removal occurs within these areas.
Keywords: contaminant removal, electrical tortuosity, electromigration, electroosmosis, voltage distribution
Procedia PDF Downloads 419
6642 Alexa (Machine Learning) in Artificial Intelligence
Authors: Loulwah Bokhari, Jori Nazer, Hala Sultan
Abstract:
Nowadays, artificial intelligence (AI) is used as a foundation for many activities in modern computing applications at home, in vehicles, and in businesses. Many modern machines are built to carry out a specific activity or purpose. This is where the Amazon Alexa application comes in, as it is used as a virtual assistant. The purpose of this paper is to explore the use of Amazon Alexa among people and how it has improved and simplified daily tasks for many people. We asked our participants several questions regarding Amazon Alexa: whether they had recently used or heard of it, which tasks it provides, and whether it successfully satisfied their needs. Overall, we found that participants who have recently used Alexa found it helpful in their daily tasks.
Keywords: artificial intelligence, Echo system, machine learning, feature for feature match
Procedia PDF Downloads 121
6641 The Linear Combination of Kernels in the Estimation of the Cumulative Distribution Functions
Authors: Abdel-Razzaq Mugdadi, Ruqayyah Sani
Abstract:
The Kernel Distribution Function Estimator (KDFE) method is the most popular method for nonparametric estimation of the cumulative distribution function. The kernel and the bandwidth are the most important components of this estimator. In this investigation, we replace the kernel in the KDFE with a linear combination of kernels to obtain a new estimator based on the linear combination of kernels; the mean integrated squared error (MISE), the asymptotic mean integrated squared error (AMISE), and the asymptotically optimal bandwidth for the new estimator are derived. We propose a new data-based method to select the bandwidth for the new estimator. The new technique is based on the plug-in technique of density estimation. We evaluate the new estimator and the new technique using simulations and real-life data.
Keywords: estimation, bandwidth, mean square error, cumulative distribution function
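Illustrative sketch of a KDFE built from a linear combination of two integrated kernels (Gaussian and Epanechnikov). The particular kernels, the mixing weight, and the bandwidth are assumptions, not the paper's choices.

```python
import numpy as np
from scipy.stats import norm

def epanechnikov_cdf(u):
    """Integrated Epanechnikov kernel on [-1, 1]."""
    u = np.clip(u, -1.0, 1.0)
    return 0.25 * (2.0 + 3.0 * u - u ** 3)

def kdfe(x, data, h, lam=0.5):
    """Kernel distribution function estimate with a linear combination of two
    integrated kernels; lam is the assumed mixing weight."""
    u = (x[:, None] - data[None, :]) / h
    K = lam * norm.cdf(u) + (1.0 - lam) * epanechnikov_cdf(u)
    return K.mean(axis=1)

data = np.random.default_rng(5).normal(size=200)
grid = np.linspace(-3, 3, 7)
print(np.round(kdfe(grid, data, h=0.4), 3))       # estimates of F(x) on the grid
```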
Procedia PDF Downloads 580
6640 A Model for Analysis the Induced Voltage of 115 kV On-Line Acting on Neighboring 22 kV Off-Line
Authors: Sakhon Woothipatanapan, Surasit Prakobkit
Abstract:
This paper presents a model for the analysis of the induced voltage from energized transmission lines acting on neighboring de-energized distribution lines. Due to environmental restrictions, 22 kV distribution lines need to be installed under 115 kV transmission lines. With two parallel circuits installed in this way, an induced voltage appears that can cause harm to operators. This work was performed with ATP-EMTP modeling to analyze the phenomenon before field testing. Simulation results are used to find solutions that prevent danger to operators who are on the pole.
Keywords: transmission system, distribution system, induced voltage, off-line operation
Procedia PDF Downloads 606
6639 Closest Possible Neighbor of a Different Class: Explaining a Model Using a Neighbor Migrating Generator
Authors: Hassan Eshkiki, Benjamin Mora
Abstract:
The Neighbor Migrating Generator is a simple and efficient approach to finding the closest potential neighbor(s) with a different label for a given instance, without the need to calibrate any kernel settings at all. This allows determining and explaining the most important features that influence an AI model. It can be used either to migrate a specific sample to the class decision boundary of the original model within a close neighborhood of that sample or to identify global features that can help localise neighboring classes. The proposed technique works by minimizing a loss function that is divided into two components, which are independently weighted according to three parameters α, β, and ω, with α being self-adjusting. Results show that this approach is superior to past techniques when detecting the smallest changes in the feature space and may also point out issues in models, such as over-fitting.
Keywords: explainable AI, EX AI, feature importance, counterfactual explanations
Procedia PDF Downloads 190
6638 Prevalence of Obesity in Kuwait: A Case Study among Kuwait University Students
Authors: Mohammad Alnasrallah, Muhammad Almatar
Abstract:
This study seeks to understand the relationship between geography and obesity prevalence among Kuwait University students. The sample involved 735 participants, 231 males and 504 females, a high percentage of whom are overweight or obese. The percentage of overweight is 21% (BMI 25-30), while the percentage of obesity is 13.7% (BMI > 30); together, overweight and obese participants account for 34.7%. In the study area, there are 327 fast food restaurants located in different parts of the urban area. This study uses a Geographic Information System (GIS) to analyze the distribution of obesity and fast food restaurants. The study found that within half a kilometer of fast food outlets there are 33% of normal-weight participants (BMI < 25) and 30% of overweight participants, while for obese participants the figure is 43%, which shows that obesity is linked to the location of fast food restaurants. One of the significant tools used in this study was hot and cold spot analysis. The study found that hot spots of fast food restaurants tend to be located in areas with hot spots of obese people. In conclusion, studying the prevalence of obesity from a geographical perspective helps in understanding this public health issue and its relation to geography.
Keywords: obesity prevalence, GIS, fast food, Kuwait
Procedia PDF Downloads 214