Search results for: modified linear exponential (MLINEX) loss function

12653 A Particle Image Velocimetric (PIV) Experiment on Simplified Bottom Hole Flow Field

Authors: Heqian Zhao, Huaizhong Shi, Zhongwei Huang, Zhengliang Chen, Ziang Gu, Fei Gao

Abstract:

Hydraulic mechanics is of significant importance in the drilling process of oil and gas exploration, especially for the drill bit. The fluid flows through the nozzles on the bit and generates a water jet that removes cuttings at the bottom hole. In this paper, a simplified bottom hole model is established, and Particle Image Velocimetry (PIV) is used to capture the flow field of a single nozzle. Due to the confinement of the bottom and the wellbore, the potential core is shorter than that of a free water jet. The velocity magnitude attenuates rapidly once the fluid is closer than about 5 mm to the bottom. Besides, a vortex zone appears near the middle of the bottom, beside the water jet zone. A modified exponential function can be used to fit the centerline velocity well. On the one hand, the results of this paper can provide verification for numerical simulations of the bottom hole flow field; on the other hand, they provide an experimental basis for the hydraulic design of the drill bit.

Keywords: oil and gas, hydraulic mechanics of drilling, PIV, bottom hole

Procedia PDF Downloads 182
12652 Design and Simulation of Variable Air Volume Air Conditioning System Based on Improved Sliding Mode Control

Authors: Abbas Anser, Ahmad Irfan

Abstract:

The main purpose of Variable Air Volume (VAV) operation in a Heating, Ventilation, and Air Conditioning (HVAC) system is to reduce energy consumption and make buildings comfortable for the occupants. For better performance of the air conditioning system, different control techniques have been developed. In this paper, an Improved Sliding Mode Control (ISMC), based on a Power Rate Exponential Reaching Law (PRERL), is implemented on a VAV air conditioning system. The proposed technique achieves fast response and robustness. To verify the efficacy of ISMC, the suggested control technique is compared with an Exponential Reaching Law (ERL) based SMC. In addition, chattering has been alleviated; chattering is undesirable because the continuous switching it induces wears the mechanical parts of the air conditioning system and consequently increases its energy loss. MATLAB/SIMULINK results show the effectiveness of the proposed scheme, which enhances the energy efficiency of the VAV air conditioning system.
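The paper's ISMC design is not reproduced here; as a hedged illustration, the minimal Python sketch below integrates a generic power-rate exponential reaching law of the form ṡ = −k₁s − k₂|s|^α·sign(s) and compares it with a constant-rate law ṡ = −k·sign(s). The gains k₁, k₂, α and k are made-up values, not those tuned in the study.

```python
import numpy as np

def simulate(reaching_law, s0=2.0, dt=1e-3, steps=4000):
    """Euler-integrate the sliding variable s under a given reaching law."""
    s = np.empty(steps)
    s[0] = s0
    for i in range(1, steps):
        s[i] = s[i - 1] + dt * reaching_law(s[i - 1])
    return s

# Hypothetical gains -- not taken from the paper.
k1, k2, alpha, k = 4.0, 3.0, 0.5, 3.0

# Power-rate exponential reaching law: fast far from s = 0, smooth near it.
prerl = lambda s: -k1 * s - k2 * np.abs(s) ** alpha * np.sign(s)
# Constant-rate law: switches at full gain even near s = 0 (chattering).
constant_rate = lambda s: -k * np.sign(s)

s_prerl = simulate(prerl)
s_const = simulate(constant_rate)
print("final |s|, PRERL      :", abs(s_prerl[-1]))
print("final |s|, const. rate:", abs(s_const[-1]))  # keeps oscillating around 0
```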

Keywords: PID, SMC, HVAC, PRERL, feedback linearization, VAV, chattering

Procedia PDF Downloads 95
12651 Mathematical Modeling of Carotenoids and Polyphenols Content of Faba Beans (Vicia faba L.) during Microwave Treatments

Authors: Ridha Fethi Mechlouch, Ahlem Ayadi, Ammar Ben Brahim

Abstract:

Given the importance of preserving polyphenols and carotenoids during thermal processing, this study investigates the variation of these two parameters in faba beans during microwave treatment at different power densities (1, 2 and 3 W/g) and then performs mathematical modeling, using non-linear regression analysis, to evaluate the model constants. The models are tested against the measured variation of the carotenoid and polyphenol ratios of faba beans to validate the experimental results. Exponential models were found to be suitable to describe the variation of the carotenoid ratio (R² = 0.945, 0.927 and 0.946) and of the polyphenol ratio (R² = 0.931, 0.989 and 0.982) for power densities of 1, 2 and 3 W/g, respectively. The effect of the microwave power density Pd (W/g) on the coefficient k of the models was also investigated; the coefficient is highly correlated with power density (R² = 1) and can be expressed as a polynomial function.
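As a hedged illustration of the non-linear regression step, an exponential ratio model of the form C/C₀ = exp(−k·t) can be fitted with SciPy; the time vector and ratio values below are synthetic placeholders, not the measured faba bean data.

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential_ratio(t, k):
    """Exponential decay model for the carotenoid/polyphenol ratio."""
    return np.exp(-k * t)

# Synthetic example data (minutes, ratio) -- not the measured values.
t = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
ratio = np.array([1.00, 0.82, 0.66, 0.55, 0.44, 0.37])

(k_hat,), _ = curve_fit(exponential_ratio, t, ratio, p0=[0.05])
residuals = ratio - exponential_ratio(t, k_hat)
r_squared = 1.0 - np.sum(residuals**2) / np.sum((ratio - ratio.mean())**2)
print(f"fitted k = {k_hat:.4f} 1/min, R^2 = {r_squared:.3f}")
```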

Keywords: microwave treatment, power density, carotenoid, polyphenol, modeling

Procedia PDF Downloads 223
12650 Computational Aerodynamic Shape Optimisation Using a Concept of Control Nodes and Modified Cuckoo Search

Authors: D. S. Naumann, B. J. Evans, O. Hassan

Abstract:

This paper outlines the development of an automated aerodynamic optimisation algorithm using a novel method of parameterising a computational mesh by employing user-defined control nodes. The shape boundary movement is coupled to the movement of these control nodes via a quasi-1D linear deformation. Additionally, a second-order smoothing step has been integrated to act on the boundary during the mesh movement, based on the change in its second derivative. This allows for both linear and non-linear shape transformations, dependent on the preference of the user. The domain mesh movement is then coupled to the shape boundary movement via a Delaunay graph mapping. A Modified Cuckoo Search (MCS) algorithm is used for optimisation within the prescribed design space defined by the allowed range of control node displacement. A finite volume compressible Navier-Stokes solver is used for aerodynamic modelling to predict aerodynamic design fitness. The resulting coupled algorithm is applied to a range of two-dimensional test cases, including the design of subsonic, transonic and supersonic intakes, and the optimisation approach is compared with more conventional optimisation strategies. Ultimately, the algorithm is tested on a three-dimensional wing optimisation case.

Keywords: mesh movement, aerodynamic shape optimization, cuckoo search, shape parameterisation

Procedia PDF Downloads 299
12649 Forecasting Cancers Cases in Algeria Using Double Exponential Smoothing Method

Authors: Messis A., Adjebli A., Ayeche R., Talbi M., Tighilet K., Louardiane M.

Abstract:

Cancers are the second leading cause of death worldwide. The prevalence and incidence of cancers are increasing with aging and population growth. This study aims to predict and model the evolution of breast, colorectal, lung, bladder and prostate cancers over the period 2014-2019. Data were analyzed using time series analysis with the double exponential smoothing method to forecast the future pattern. To describe and fit the appropriate models, Minitab statistical software version 17 was used. Between 2014 and 2019, the overall trend in the raw number of newly registered cancer cases has been increasing over time. The forecast model is validated by its good prediction for 2020; data for 2021 and 2022 were not available. Time series analysis showed that double exponential smoothing is an efficient tool to model future data on the raw number of new cancer cases.
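For reference, a minimal implementation of Holt's double exponential smoothing (level plus trend) is sketched below; the smoothing constants and yearly counts are illustrative placeholders, not the registry data or the parameters fitted by Minitab in the study.

```python
def double_exponential_smoothing(y, alpha=0.5, beta=0.3, horizon=3):
    """Holt's linear-trend smoothing; returns in-sample fits and forecasts."""
    level, trend = y[0], y[1] - y[0]
    fitted = [level + trend]
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        fitted.append(level + trend)          # one-step-ahead forecast
    forecasts = [level + h * trend for h in range(1, horizon + 1)]
    return fitted, forecasts

# Illustrative yearly case counts (2014-2019), not the registry data.
cases = [1200, 1320, 1450, 1600, 1725, 1890]
_, forecast = double_exponential_smoothing(cases, horizon=3)
print("forecast 2020-2022:", [round(f) for f in forecast])
```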

Keywords: cancer, time series, prediction, double exponential smoothing

Procedia PDF Downloads 47
12648 Exponential Spline Solution for Singularly Perturbed Boundary Value Problems with an Uncertain-But-Bounded Parameter

Authors: Waheed Zahra, Mohamed El-Beltagy, Ashraf El Mhlawy, Reda Elkhadrawy

Abstract:

In this paper, we consider singularly perturbed reaction-diffusion boundary value problems that contain a small uncertain perturbation parameter. To solve these problems, we propose a numerical method based on an exponential spline and a Shishkin mesh discretization. The interval analysis principle is used to deal with the uncertain parameter, and sensitivity analysis has been conducted using different methods. Numerical results are provided to show the applicability and efficiency of our method, which exhibits ε-uniform convergence of almost second order.

Keywords: singular perturbation problem, shishkin mesh, two small parameters, exponential spline, interval analysis, sensitivity analysis

Procedia PDF Downloads 244
12647 Modeling and Simulation of a CMOS-Based Analog Function Generator

Authors: Madina Hamiane

Abstract:

Modelling and simulation of an analog function generator is presented based on a polynomial expansion model. The proposed function generator model is based on a 10th-order polynomial approximation of any of the required functions. The polynomial approximations of these functions can then be implemented using basic CMOS circuit blocks. In this paper, a circuit model is proposed that can simultaneously generate many different mathematical functions. The circuit model is designed and simulated with HSPICE, and its performance is demonstrated through the simulation of a number of non-linear functions.
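A quick numerical analogue of the approach (not the HSPICE circuit itself): approximating a target non-linear function with a 10th-order polynomial, here a sine over an assumed interval, using NumPy's polynomial fit.

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 401)
target = np.sin(x)                      # example non-linear function

coeffs = np.polynomial.polynomial.polyfit(x, target, deg=10)
approx = np.polynomial.polynomial.polyval(x, coeffs)

print("max |error| of 10th-order fit:", np.max(np.abs(approx - target)))
```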

Keywords: modelling and simulation, analog function generator, polynomial approximation, CMOS transistors

Procedia PDF Downloads 427
12646 Comparative Study of Equivalent Linear and Non-Linear Ground Response Analysis for Rapar District of Kutch, India

Authors: Kulin Dave, Kapil Mohan

Abstract:

Earthquakes are considered to be the most destructive rapid-onset disasters human beings are exposed to. The losses they bring are sufficient reason for careful consideration in the design of structures and facilities. Seismic hazard analysis is one tool that can be used for earthquake-resistant design, and ground response analysis is one of its most crucial and decisive steps. The Rapar district of Kutch, Gujarat falls in Zone 5 of the earthquake zone map of India and thus has high seismicity, which is why it was selected for analysis. In total, 8 bore-logs were studied at different locations in and around Rapar district. Soil engineering properties were analyzed, and relevant empirical correlations were used to calculate the maximum shear modulus (Gmax) and shear wave velocity (Vs) for the soil layers. The soil was modeled using the pressure-dependent Modified Kondner-Zelasko (MKZ) model, and the reference curves used for fitting were Seed and Idriss (1970) for sand and Darendeli (2001) for clay. Both equivalent linear (EL) and non-linear (NL) ground response analyses were carried out, with the Masing hysteretic re/unloading formulation, for comparison. The commercially available DEEPSOIL v. 7.0 software is used for this analysis. In this study, an attempt is made to quantify the ground response in terms of the generated acceleration time-history at the top of the soil column, the response spectra at 5% damping, and the Fourier amplitude spectrum. Moreover, the variation with depth of Peak Ground Acceleration (PGA), maximum displacement, maximum strain (in %), maximum stress ratio and mobilized shear stress is also calculated. From the study, PGA values estimated in rocky strata are nearly the same as the bedrock motion, and marginal amplification is observed in sandy silts and silty clays by both analyses. The NL analysis gives conservative results for maximum displacement compared with the EL analysis. The maximum strains predicted by the two analyses are very close to each other. Overall, the NL analysis is more efficient and realistic because it follows the actual hyperbolic stress-strain relationship, considers stiffness degradation, and mobilizes the stresses generated due to pore water pressure.

Keywords: DEEPSOIL v 7.0, ground response analysis, pressure-dependent modified Kondner-Zelasko model, MKZ model, response spectra, shear wave velocity

Procedia PDF Downloads 105
12645 Modified CUSUM Algorithm for Gradual Change Detection in a Time Series Data

Authors: Victoria Siriaki Jorry, I. S. Mbalawata, Hayong Shin

Abstract:

The main objective in a change detection problem is to develop algorithms for the efficient detection of gradual and/or abrupt changes in the parameter distribution of a process or time series data. In this paper, we present a modified cumulative sum (MCUSUM) algorithm to detect the start and end of a time-varying linear drift in the mean value of a time series, based on a likelihood ratio test procedure. The design, implementation and performance of the proposed algorithm for linear drift detection are evaluated and compared to the existing CUSUM algorithm using different performance measures. An approach to accurately approximate the threshold of the MCUSUM is also provided. The performance of the MCUSUM for gradual change-point detection is compared to that of the standard cumulative sum (CUSUM) control chart, which is designed for abrupt shift detection, using Monte Carlo simulations. In terms of the expected time to detection, the MCUSUM procedure is found to perform better than a standard CUSUM chart for detecting a gradual change in mean. The algorithm is then applied to randomly generated time series data with a gradual linear trend in mean to demonstrate its usefulness.
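The MCUSUM itself follows the likelihood-ratio derivation in the paper and is not reproduced here; for orientation, the sketch below implements the standard two-sided tabular CUSUM against which it is compared, with illustrative reference value k and threshold h, applied to a synthetic series with a gradual drift.

```python
import numpy as np

def cusum(x, target, k=0.5, h=5.0):
    """Standard tabular CUSUM; returns index of first alarm (or None)."""
    c_pos = c_neg = 0.0
    for i, xi in enumerate(x):
        c_pos = max(0.0, c_pos + (xi - target - k))
        c_neg = max(0.0, c_neg + (target - xi - k))
        if c_pos > h or c_neg > h:
            return i
    return None

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 100),
                       rng.normal(0, 1, 100) + 0.02 * np.arange(100)])  # gradual drift
print("first alarm at index:", cusum(data, target=0.0))
```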

Keywords: average run length, CUSUM control chart, gradual change detection, likelihood ratio test

Procedia PDF Downloads 261
12644 Modified Step Size Patch Array Antenna for UWB Wireless Applications

Authors: Hamid Aslani, Ahmed Radwan

Abstract:

In this paper, a single-element microstrip antenna is presented for UWB applications, using techniques such as a partial ground plane and a modified patch shape. The antenna is properly designed to have a compact size and a constant gain against frequency. The simulations were performed with two EM software packages and show good agreement with the measured results for the fabricated antenna. A two-element patch antenna array for UWB in the 3.1-10 GHz frequency band is then presented. The array is constructed by feeding two omni-directional modified circular patch elements with a modified power divider. Experimental results show that the array has a stable radiation pattern and low return loss over a broad bandwidth of 64% (3.1-10 GHz). Due to its planar profile, physically compact size, wide impedance bandwidth and directive performance over a wide bandwidth, the proposed antenna is a good candidate for portable UWB applications and other UWB integrated circuits.

Keywords: ultra wide band, radiation performance, microstrip antenna, size miniaturized antenna

Procedia PDF Downloads 228
12643 Forecasting Unemployment Rate in Selected European Countries Using Smoothing Methods

Authors: Ksenija Dumičić, Anita Čeh Časni, Berislav Žmuk

Abstract:

The aim of this paper is to select the most accurate forecasting method for predicting the future values of the unemployment rate in selected European countries. To do so, several forecasting techniques adequate for forecasting time series with a trend component were selected, namely double exponential smoothing (also known as Holt's method) and the Holt-Winters method, which accounts for trend and seasonality. The results of the empirical analysis showed that the optimal model for forecasting the unemployment rate in Greece was the Holt-Winters additive method. In the case of Spain, according to MAPE, the optimal model was the double exponential smoothing model. Furthermore, for Croatia and Italy the best forecasting model for the unemployment rate was the Holt-Winters multiplicative model, whereas in the case of Portugal the best model was the double exponential smoothing model. Our findings are in line with European Commission unemployment rate estimates.
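A minimal sketch of how such a model comparison can be run with statsmodels is given below; the quarterly series is synthetic, not the unemployment data used in the paper, and in-sample MAPE is used only as a stand-in accuracy measure.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic quarterly unemployment-rate-like series with trend and seasonality.
rng = np.random.default_rng(1)
t = np.arange(40)
series = 10 + 0.05 * t + 0.8 * np.sin(2 * np.pi * t / 4) + rng.normal(0, 0.1, 40)

candidates = {
    "Holt (double exponential)":   dict(trend="add", seasonal=None),
    "Holt-Winters additive":       dict(trend="add", seasonal="add", seasonal_periods=4),
    "Holt-Winters multiplicative": dict(trend="add", seasonal="mul", seasonal_periods=4),
}
for name, kwargs in candidates.items():
    fit = ExponentialSmoothing(series, **kwargs).fit()
    mape = np.mean(np.abs((series - fit.fittedvalues) / series)) * 100
    print(f"{name:30s} in-sample MAPE = {mape:.2f}%")
```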

Keywords: European Union countries, exponential smoothing methods, forecast accuracy unemployment rate

Procedia PDF Downloads 341
12642 2D Convolutional Networks for Automatic Segmentation of Knee Cartilage in 3D MRI

Authors: Ananya Ananya, Karthik Rao

Abstract:

Accurate segmentation of knee cartilage in 3-D magnetic resonance (MR) images for quantitative assessment of volume is crucial for studying and diagnosing osteoarthritis (OA) of the knee, one of the major causes of disability in elderly people. Radiologists generally perform this task in a slice-by-slice manner, taking 15-20 minutes per 3D image, which leads to high inter- and intra-observer variability. Hence, automatic methods for knee cartilage segmentation are desirable and are an active field of research. This paper presents the design and experimental evaluation of fully automated methods for knee cartilage segmentation in 3D MRI based on 2D convolutional neural networks. The architectures are validated on 40 test images and 60 training images from the SKI10 dataset. The proposed methods segment 2D slices one by one, which are then combined to give the segmentation of the whole 3D image. The proposed methods are modified versions of U-net and dilated convolutions, consisting of a single step that segments the given image into 5 labels: background, femoral cartilage, tibia cartilage, femoral bone and tibia bone, the cartilages being the primary components of interest. U-net consists of a contracting path and an expanding path, to capture context and localization respectively. Dilated convolutions lead to an exponential expansion of the receptive field with only a linear increase in the number of parameters. A combination of modified U-net and dilated convolutions has also been explored. These architectures segment one 3D image in 8-10 seconds, giving average volumetric Dice similarity coefficients (DSC) of 0.950-0.962 for femoral cartilage and 0.951-0.966 for tibia cartilage, with manual segmentation as the reference.
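The exponential receptive-field growth mentioned above can be checked with a short calculation: stacking 3x3, stride-1 convolutions whose dilation rates double at each layer (1, 2, 4, ...) grows the receptive field roughly as 2^(L+1), while the weight count grows only linearly with the number of layers. A framework-independent sketch (not the paper's network):

```python
def receptive_field(dilations, kernel_size=3):
    """Receptive field of stacked stride-1 convolutions with given dilations."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

for n_layers in range(1, 8):
    dilations = [2**i for i in range(n_layers)]   # 1, 2, 4, ...
    weights = n_layers * 3 * 3                    # per input/output channel pair
    print(f"{n_layers} layers: receptive field {receptive_field(dilations):4d}, "
          f"3x3 weights per channel pair {weights}")
```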

Keywords: convolutional neural networks, dilated convolutions, 3 dimensional, fully automated, knee cartilage, MRI, segmentation, U-net

Procedia PDF Downloads 225
12641 A Statistical Approach to Predict and Classify the Commercial Hatchability of Chickens Using Extrinsic Parameters of Breeders and Eggs

Authors: M. S. Wickramarachchi, L. S. Nawarathna, C. M. B. Dematawewa

Abstract:

Hatchery performance is critical for the profitability of poultry breeder operations. Some extrinsic parameters of eggs and breeders increase or decrease hatchability. This study aims to identify the extrinsic parameters affecting the commercial hatchability of local chickens' eggs and to determine the most efficient classification model, targeting a hatchability rate greater than 90%. Seven extrinsic parameters were considered: egg weight, moisture loss, breeder age, number of fertilised eggs, shell width, shell length, and shell thickness. Multiple linear regression was performed to determine the most influential variables on hatchability. First, the correlation between each parameter and hatchability was checked. Then a multiple regression model was developed, and the accuracy of the fitted model was evaluated. Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), k-Nearest Neighbors (kNN), Support Vector Machines (SVM) with a linear kernel, and Random Forest (RF) algorithms were applied to classify hatchability. This grouping process was conducted using binary classification techniques. Hatchability was negatively correlated with egg weight, breeder age, shell width and shell length, while positive correlations were identified with moisture loss, number of fertilised eggs, and shell thickness. Multiple linear regression models were more accurate than single linear models, with the highest coefficient of determination (R² = 94%) and the minimum AIC and BIC values. According to the classification results, RF, CART, and kNN achieved the highest accuracy values of 0.99, 0.975, and 0.972, respectively, for the commercial hatchery process. Therefore, RF is the most appropriate machine learning algorithm for classifying breeder outcomes as economically profitable or not in a commercial hatchery.
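A compact scikit-learn sketch of this kind of pipeline is shown below. The features, the relationship generating hatchability and the 90% cut-off used for the binary labels are all synthetic placeholders for illustration, not the study's measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 300
# Placeholder features: egg weight, moisture loss, breeder age, fertilised eggs,
# shell width, shell length, shell thickness (standardised, synthetic).
X = rng.normal(size=(n, 7))
hatchability = 92 - 1.5 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0, 1, n)

# Multiple linear regression on the continuous hatchability (%).
reg = LinearRegression().fit(X, hatchability)
print("R^2 of multiple linear regression:", round(reg.score(X, hatchability), 3))

# Binary classification: hatchability above/below a hypothetical 90% cut-off.
y = (hatchability > 90).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("random forest accuracy:", round(clf.score(X_te, y_te), 3))
```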

Keywords: classification models, egg weight, fertilised eggs, multiple linear regression

Procedia PDF Downloads 53
12640 The Classification Accuracy of Finance Data through Holder Functions

Authors: Yeliz Karaca, Carlo Cattani

Abstract:

This study focuses on the local Holder exponent as a measure of function regularity for time series related to finance data. The attributes of a finance dataset covering 13 countries (India, China, Japan, Sweden, France, Germany, Italy, Australia, Mexico, United Kingdom, Argentina, Brazil, USA) located on 5 different continents (Asia, Europe, Australia, North America and South America) have been examined. These are the countries most affected by the attributes related to financial development, over a period from 2012 to 2017. Our study is concerned with the most important attributes that have an impact on the financial development of the countries identified. Our method comprises the following stages: (a) among the multifractal methods and Brownian motion Holder regularity functions (polynomial, exponential), significant and self-similar attributes are identified; (b) the significant and self-similar attributes are applied to Artificial Neural Network (ANN) algorithms (Feed Forward Back Propagation (FFBP) and Cascade Forward Back Propagation (CFBP)); (c) the classification accuracies are compared with respect to the attributes that affect the countries' financial development. This study reveals, through the application of ANN algorithms, how the most significant attributes are identified within the relevant dataset via the Holder functions (polynomial and exponential).

Keywords: artificial neural networks, finance data, Holder regularity, multifractals

Procedia PDF Downloads 217
12639 On the Construction of Some Optimal Binary Linear Codes

Authors: Skezeer John B. Paz, Ederlina G. Nocon

Abstract:

Finding an optimal binary linear code is a central problem in coding theory. A binary linear code C = [n, k, d] is called optimal if there is no linear code with a higher minimum distance d for the given length n and dimension k. There are bounds giving limits on the minimum distance d of a linear code of fixed length n and dimension k. The lower bound, which can be obtained by a construction process, tells us that a linear code having this minimum distance is known. The upper bound is given by theoretical results such as the Griesmer bound. One way to find an optimal binary linear code is to make the lower bound of d equal to its upper bound, that is, to construct a binary linear code which achieves the highest possible value of its minimum distance d, given n and k. Some optimal binary linear codes were presented by Andries Brouwer in his published table on bounds of the minimum distance d of binary linear codes for 1 ≤ n ≤ 256 and k ≤ n. This was further improved by Markus Grassl, who gave a detailed construction process for each code exhibiting the lower bound. In this paper, we construct new optimal binary linear codes by applying construction processes to existing binary linear codes. In particular, we developed an algorithm, applied to the codes already constructed, to extend the list of optimal binary linear codes to 257 ≤ n ≤ 300 for k ≤ 7.
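The Griesmer bound referred to above states that a binary [n, k, d] code must satisfy n ≥ Σ_{i=0}^{k-1} ⌈d / 2^i⌉. A small helper for evaluating the bound (a sketch of the bound itself, not the authors' construction algorithm; the derived value of d is only an upper bound and need not be attained by an actual code):

```python
import math

def griesmer_length(k, d, q=2):
    """Minimum length n allowed by the Griesmer bound for an [n, k, d]_q code."""
    return sum(math.ceil(d / q**i) for i in range(k))

def griesmer_max_d(n, k, q=2):
    """Largest d whose Griesmer length fits within n (upper bound on d)."""
    d = 1
    while griesmer_length(k, d + 1, q) <= n:
        d += 1
    return d

# Example: the [7, 4, 3] Hamming code meets the bound with equality.
print(griesmer_length(4, 3))   # -> 7
print(griesmer_max_d(7, 4))    # -> 3
```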

Keywords: bounds of linear codes, Griesmer bound, construction of linear codes, optimal binary linear codes

Procedia PDF Downloads 714
12638 Licensing in a Hotelling Model with Quadratic Transportation Costs

Authors: Fehmi Bouguezzi

Abstract:

This paper studies optimal licensing regimes in a linear Hotelling model where firms are located at the end points of the city and where the transportation cost is not linear but quadratic. For this purpose, we study a more general cost function and compare the findings with the results for the linear cost. We find the same optimal licensing regimes: a per-unit royalty is optimal when the innovation is not drastic, and no licensing is better when the innovation is drastic. We also find that no licensing is always better than fixed-fee licensing.

Keywords: Hotelling model, technology transfer, patent licensing, quadratic transportation cost

Procedia PDF Downloads 317
12637 Hyperspectral Image Classification Using Tree Search Algorithm

Authors: Shreya Pare, Parvin Akhter

Abstract:

Remote sensing image classification becomes a very challenging task owing to the high dimensionality of hyperspectral images. Pixel-wise classification methods fail to take into account the spatial structure information of an image. Therefore, to improve classification performance, spatial information can be integrated into the classification process. In this paper, a multilevel thresholding algorithm based on a modified fuzzy entropy (MFE) function is used to perform the segmentation of hyperspectral images. The fuzzy parameters of the MFE function have been optimized by using a new meta-heuristic based on the tree search algorithm. The segmented image is classified by a large margin distribution machine (LDM) classifier. Experimental results are shown on a hyperspectral image dataset. The experimental outputs indicate that the proposed technique (MFE-TSA-LDM) achieves much higher classification accuracy for hyperspectral images when compared to state-of-the-art classification techniques. The proposed algorithm provides accurate segmentation and classification maps, and is thus more suitable for the classification of images with large spatial structures.

Keywords: classification, hyperspectral images, large margin distribution machine, modified fuzzy entropy function, multilevel thresholding, tree search algorithm

Procedia PDF Downloads 135
12636 Globally Convergent Sequential Linear Programming for Multi-Material Topology Optimization Using Ordered Solid Isotropic Material with Penalization Interpolation

Authors: Darwin Castillo Huamaní, Francisco A. M. Gomes

Abstract:

The aim of multi-material topology optimization (MTO) is to obtain the optimal topology of structures composed of many materials, according to a given set of constraints and cost criteria. In this work, we seek the optimal distribution of materials in a domain such that the flexibility of the structure is minimized, under certain boundary conditions and the intervention of external forces. In the single-material case, each element of the discretized domain is represented by one of two values of a function: 1 if the element belongs to the structure and 0 if the element is empty. A common way to avoid the high computational cost of solving integer-variable optimization problems is to adopt the Solid Isotropic Material with Penalization (SIMP) method. This method relies on a continuous interpolation function, a power function whose base variable represents a pseudo-density at each point of the domain. For suitable exponent values, the SIMP method penalizes intermediate densities, since values other than 0 or 1 usually do not have a physical meaning for the problem. Several extensions of the SIMP method have been proposed for the multi-material case. The one that we explore here is the ordered SIMP method, which has the advantage of not adding variables to represent material selection, so the computational cost is independent of the number of materials considered. Although the number of variables is not increased by this algorithm, the optimization subproblems generated at each iteration cannot be solved by methods that rely on second derivatives, due to the cost of calculating them. To overcome this, we apply a globally convergent version of the sequential linear programming method, which solves a sequence of linear approximations of the optimization problem.

Keywords: global convergence, multi-material design, ordered SIMP, sequential linear programming, topology optimization

Procedia PDF Downloads 272
12635 Effect of Monotonically Decreasing Parameters on Margin Softmax for Deep Face Recognition

Authors: Umair Rashid

Abstract:

Softmax loss is normally used as the supervision signal in face recognition (FR) systems, and it boosts the separability of features. In the last two years, a number of techniques have been proposed that reformulate the original softmax loss to enhance the discriminating power of Deep Convolutional Neural Networks (DCNNs) for FR. To learn angularly discriminative features, cosine-margin-based softmax has to be formulated as a monotonically decreasing angular function, which is the main challenge for angular-margin-based softmax. To address this issue, we propose a monotonically decreasing element for cosine-margin-based softmax, and we discuss the effect of different monotonically decreasing parameters on angular margin softmax for FR. We train the model on the publicly available CASIA-WebFace dataset with the proposed monotonically decreasing parameters for the cosine function, and tests on YouTube Faces (YTF), Labeled Faces in the Wild (LFW), VGGFace1 and VGGFace2 attain state-of-the-art performance.
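The paper's specific margin formulation is not reproduced here; for orientation, the NumPy sketch below implements a generic additive cosine-margin softmax loss (CosFace-style) on L2-normalised features, with the margin m and scale s standing in for the kind of tunable parameters discussed above. All inputs are random placeholders.

```python
import numpy as np

def cosine_margin_softmax_loss(features, weights, labels, s=30.0, m=0.35):
    """Additive cosine-margin softmax (CosFace-style) on L2-normalised inputs."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = f @ w                                    # cos(theta) logits
    cos[np.arange(len(labels)), labels] -= m       # subtract margin on true class
    logits = s * cos
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 128))                  # embeddings for 8 face images
W = rng.normal(size=(128, 10))                     # class weights for 10 identities
y = rng.integers(0, 10, size=8)
print("loss:", cosine_margin_softmax_loss(feats, W, y))
```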

Keywords: deep convolutional neural networks, cosine margin face recognition, softmax loss, monotonically decreasing parameter

Procedia PDF Downloads 55
12634 Estimation of Optimum Parameters of Non-Linear Muskingum Model of Routing Using Imperialist Competition Algorithm (ICA)

Authors: Davood Rajabi, Mojgan Yazdani

Abstract:

The non-linear Muskingum model is an efficient method for flood routing; however, its efficiency is influenced by the three applied parameters. Therefore, this study assesses the efficiency of the Imperialist Competition Algorithm (ICA) in evaluating the optimum parameters of the non-linear Muskingum model. In addition to ICA, a Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) were also used to provide benchmarks against which to judge ICA. ICA was first applied to routing the Wilson flood; then, the routing of two flood events of the Doab Samsami River was investigated. In the case of the Wilson flood, the target function was the sum of squared deviations (SSQ) between observed and calculated discharges. For routing the two other floods, in addition to SSQ, the sum of absolute deviations (SAD) between observed and calculated discharges was also considered as a target function. For the first flood, GA showed the best performance based on SSQ, whereas ICA ranked first based on SAD. For the second flood, ICA performed better on both target functions. According to the obtained results, ICA can be used as an appropriate method to evaluate the parameters of the non-linear Muskingum model.
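As context for the target functions mentioned above, a minimal sketch of non-linear Muskingum routing and the SSQ criterion is given below; the inflow hydrograph and the parameter values (K, x, m) are illustrative placeholders, not the Wilson or Doab Samsami data or calibrated values, and the metaheuristics in the paper would minimise ssq over (K, x, m).

```python
import numpy as np

def muskingum_route(inflow, K, x, m, dt=1.0):
    """Route an inflow hydrograph through the non-linear Muskingum model
    S = K * [x*I + (1 - x)*O]**m, with an explicit Euler storage update."""
    outflow = np.zeros_like(inflow, dtype=float)
    outflow[0] = inflow[0]                                # initial condition O1 = I1
    storage = K * (x * inflow[0] + (1 - x) * outflow[0]) ** m
    for t in range(1, len(inflow)):
        storage += dt * (inflow[t - 1] - outflow[t - 1])  # dS/dt = I - O
        outflow[t] = ((storage / K) ** (1.0 / m) - x * inflow[t]) / (1.0 - x)
    return outflow

def ssq(observed, computed):
    """Sum of squared deviations -- the target function minimised over (K, x, m)."""
    return float(np.sum((np.asarray(observed) - np.asarray(computed)) ** 2))

# Illustrative hydrograph and parameter values (placeholders, not calibrated).
I = np.array([10, 20, 50, 90, 120, 100, 80, 60, 45, 35, 28, 22, 18, 15, 12, 10], float)
O = muskingum_route(I, K=0.1, x=0.25, m=2.0, dt=6.0)
print("peak inflow:", I.max(), " routed peak outflow:", round(O.max(), 1))
```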

Keywords: Doab Samsami River, genetic algorithm, imperialist competition algorithm, meta-heuristic algorithms, particle swarm optimization, Wilson flood

Procedia PDF Downloads 472
12633 Pareto System of Optimal Placement and Sizing of Distributed Generation in Radial Distribution Networks Using Particle Swarm Optimization

Authors: Sani M. Lawal, Idris Musa, Aliyu D. Usman

Abstract:

The Pareto approach to optimal solutions, which arises in multi-objective optimization problems and stands for a set of non-dominated solutions in the search space, is adopted in this paper. The paper presents the optimal placement and sizing of Distributed Generation (DG) in radial distribution networks, with the aim of minimizing power loss and voltage deviation as well as maximizing the voltage profile of the networks. These problems are formulated as a constrained nonlinear optimization problem solved using particle swarm optimization (PSO), with both the locations and sizes of DG treated as continuous variables. The objective functions adopted are the total active power loss function and the voltage deviation function. The multiple nature of the problem made it necessary to form a multi-objective function whose solution consists of both the DG location and size. The proposed PSO algorithm is used to determine the optimal placement and size of DG in a distribution network. The output indicates that the PSO technique has an edge over other types of search methods due to its effectiveness and computational efficiency. The proposed method is tested on the standard IEEE 34-bus distribution network and validated on the 33-bus test system. Results indicate that the sizing and location of DG are system dependent and should be optimally selected before installing the distributed generators in the system; an improvement in the voltage profile and a reduction in power loss were also achieved.
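The full multi-objective DG placement formulation is not reproduced here; the sketch below shows the basic single-objective PSO update used as the search engine, with made-up box bounds and a toy quadratic objective standing in for the power-loss/voltage-deviation function that would require a load-flow solver.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise `objective` over a box; returns the best position and value found."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))     # positions (e.g. DG site, size)
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy stand-in for the weighted power-loss + voltage-deviation objective.
toy_objective = lambda p: (p[0] - 17.0) ** 2 + 4.0 * (p[1] - 1.2) ** 2
best, best_f = pso(toy_objective, bounds=(np.array([1.0, 0.0]), np.array([34.0, 3.0])))
print("best (bus-like, size-like):", np.round(best, 2), " objective:", round(best_f, 4))
```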

Keywords: distributed generation, pareto, particle swarm optimization, power loss, voltage deviation

Procedia PDF Downloads 326
12632 Robust Shrinkage Principal Component Parameter Estimator for Combating Multicollinearity and Outliers’ Problems in a Poisson Regression Model

Authors: Arum Kingsley Chinedu, Ugwuowo Fidelis Ifeanyi, Oranye Henrietta Ebele

Abstract:

The Poisson regression model (PRM) is a nonlinear model that belongs to the exponential family of distributions. The PRM is suitable for studying count variables with appropriate covariates, and it sometimes suffers from multicollinearity in the explanatory variables and outliers in the response variable. This study aims to address the problems of multicollinearity and outliers jointly in a Poisson regression model. We developed an estimator called the robust modified jackknife PCKL parameter estimator by combining the principal component estimator, the modified jackknife KL estimator and the transformed M-estimator to address both problems in a PRM. The superiority conditions for this estimator were established, and its properties were derived. The estimator inherits the characteristics of the combined estimators, making it efficient in addressing both problems; it will also be of immediate interest to the research community, advancing this line of work in terms of novelty compared with other studies in the area. The performance of the proposed estimator was compared with existing estimators using the mean squared error (MSE) as the evaluation criterion, through a Monte Carlo simulation study and real-life data. The results of the analytical study show that the estimator outperformed the other existing estimators considered, having the smallest MSE across all sample sizes, levels of correlation, percentages of outliers and numbers of explanatory variables.

Keywords: jackknife modified KL, outliers, multicollinearity, principal component, transformed M-estimator

Procedia PDF Downloads 15
12631 Chaotic Search Optimal Design and Modeling of Permanent Magnet Synchronous Linear Motor

Authors: Yang Yi-Fei, Luo Min-Zhou, Zhang Fu-Chun, He Nai-Bao, Xing Shao-Bang

Abstract:

This paper presents an electromagnetic finite element model of a permanent magnet synchronous linear motor, and the distortion rate of the air-gap flux density waveform is analyzed in detail. By designing the sample space of the parameters, nonlinear regression modeling based on an orthogonal experimental design is introduced, and a possible electromagnetic scheme for a sinusoidal air-gap flux density waveform is put forward. Parameter optimization of the permanent magnet synchronous linear motor, based on chaotic search and an adaptation function, is also introduced. Simulation results show that, for the obtained structural parameters, pole shifting does not affect the symmetry of the motor's back electromotive force; this provides a novel way for the optimum design of permanent magnet synchronous linear motors and other engineering applications.

Keywords: permanent magnet synchronous linear motor, finite element analysis, chaotic search, optimization design

Procedia PDF Downloads 382
12630 Soil Rehabilitation Using Modified Diatomite: Assessing Chemical Properties, Enzymatic Reactions and Heavy Metal Immobilization

Authors: Maryam Samani, Ahmad Golchin, Hosseinali Alikkani, Ahmad Baybordi

Abstract:

Natural diatomite was modified by grinding and acid treatment to increase its surface area and decrease its impurities. The surface area and pore volume of the modified diatomite were 67.45 m² g⁻¹ and 0.105 cm³ g⁻¹, respectively, and it was used to immobilize Pb, Zn and Cu in an urban soil. The modified diatomite was added to soil samples at rates of 2.5, 5, 7.5 and 10%, and the samples were incubated for 60 days. The addition of modified diatomite increased the specific surface area (SSA) of the soil: the SSAs of soils with 2.5, 5.0, 7.5 and 10% modified diatomite were 20.82, 22.02, 23.21 and 24.41 m² g⁻¹, respectively. Increasing the SSA of the soils by the application of modified diatomite reduced the DTPA-extractable concentrations of heavy metals compared with the unamended control: the concentrations of Pb, Zn and Cu were reduced by 91.1%, 82% and 91.1%, respectively. Modified diatomite reduced the concentrations of the exchangeable and carbonate-bound species of Pb, Zn and Cu compared with the control, and significantly increased the concentrations of the Fe-Mn oxide-bound (Fe-Mn OX) and organic matter-bound (OM) species and of the residual (Res) fraction. Modified diatomite also increased urease, dehydrogenase and alkaline phosphatase activity by 52%, 57% and 56.6%, respectively.

Keywords: modified diatomite, chemical specifications, specific surface area, enzyme activity, immobilization, heavy metal, soil remediation

Procedia PDF Downloads 20
12629 Analysis of the Relationship between the Unitary Impulse Response for the nth-Volterra Kernel of a Duffing Oscillator System

Authors: Guillermo Manuel Flores Figueroa, Juan Alejandro Vazquez Feijoo, Jose Navarro Antonio

Abstract:

A continuous nonlinear system response may be obtained as an infinite sum of the so-called Volterra operators. Each operator is obtained from an nth-order multidimensional convolution between the nth-order Volterra kernel and the system input. These operators can also be obtained from the Associated Linear Equations (ALEs), which are linear models of subsystems whose inputs and outputs are of the same nth order. Each ALE produces a particular nth-order Volterra operator. Since ALEs are linear models, a unitary impulse response can be obtained from each of them. This work shows the relationship between these unitary impulse responses and the corresponding nth-order Volterra kernels.

Keywords: Volterra series, frequency response functions (FRF), associated linear equations (ALEs), unitary response function, Volterra kernel

Procedia PDF Downloads 621
12628 Effects of Variable Viscosity on Radiative MHD Flow in a Porous Medium Between Two Vertical Wavy Walls

Authors: A. B. Disu, M. S. Dada

Abstract:

This study investigates the two-dimensional heat transfer of a free convective-radiative MHD (magnetohydrodynamic) flow of a viscous incompressible fluid with temperature-dependent viscosity and a heat source in a porous medium between two vertical wavy walls. The fluid viscosity is assumed to vary as an exponential function of temperature. The flow is assumed to consist of a mean part and a perturbed part, and the perturbed quantities are expressed in terms of a complex exponential series of plane waves. The resulting differential equations were solved by the Differential Transform Method (DTM). The numerical results are presented graphically to show the salient features of the fluid flow and heat transfer characteristics. The skin friction and Nusselt number were also analyzed for various governing parameters.

Keywords: differential transform method, MHD free convection, porous medium, two dimensional radiation, two wavy walls

Procedia PDF Downloads 417
12627 Improved Rare Species Identification Using Focal Loss Based Deep Learning Models

Authors: Chad Goldsworthy, B. Rajeswari Matam

Abstract:

The use of deep learning for species identification in camera trap images has revolutionised our ability to study, conserve and monitor species in a highly efficient and unobtrusive manner, with state-of-the-art models achieving accuracies that surpass manual human classification. The high class imbalance of camera trap datasets, however, results in poor accuracies for minority (rare or endangered) species due to their relative insignificance to the overall model accuracy. This paper investigates the use of focal loss, in comparison to the traditional cross entropy loss function, to improve the identification of minority species in the “255 Bird Species” dataset from Kaggle. The results show that, although focal loss slightly decreased the accuracy on the majority species, it increased the F1-score by 0.06 and improved the identification of the bottom two, five and ten (minority) species by 37.5%, 15.7% and 10.8%, respectively, as well as improving the overall accuracy by 2.96%.
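For reference, focal loss follows the standard definition FL(p_t) = −α(1 − p_t)^γ log(p_t); the minimal NumPy sketch below contrasts it with cross entropy on a few made-up predictions (the γ and α values are illustrative, not those tuned in the paper).

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0, alpha=1.0, eps=1e-12):
    """Multi-class focal loss: -alpha * (1 - p_t)**gamma * log(p_t), averaged."""
    p_t = np.clip(probs[np.arange(len(labels)), labels], eps, 1.0)
    return float(np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t)))

def cross_entropy(probs, labels, eps=1e-12):
    p_t = np.clip(probs[np.arange(len(labels)), labels], eps, 1.0)
    return float(np.mean(-np.log(p_t)))

# Well-classified (often majority-class) samples contribute far less to the focal
# loss, so training pressure concentrates on hard, often minority-species, examples.
probs = np.array([[0.95, 0.03, 0.02],    # easy, correct
                  [0.60, 0.30, 0.10],    # moderately confident, correct
                  [0.20, 0.70, 0.10]])   # hard: true class 0 predicted poorly
labels = np.array([0, 0, 0])
print("cross entropy:", round(cross_entropy(probs, labels), 4))
print("focal loss   :", round(focal_loss(probs, labels), 4))
```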

Keywords: convolutional neural networks, data imbalance, deep learning, focal loss, species classification, wildlife conservation

Procedia PDF Downloads 141
12626 Composite Forecasts Accuracy for Automobile Sales in Thailand

Authors: Watchareeporn Chaimongkol

Abstract:

In this paper, we compare the accuracy, in terms of several statistical measures, of a composite forecasting model for estimating automobile customer demand in Thailand. A modified simple exponential smoothing and autoregressive integrated moving average (ARIMA) forecasting model is built to estimate customer demand for passenger cars, rather than relying on historical sales data alone. Our model takes into account special characteristics of the Thai automobile market such as sales promotions, advertising and publicity, petrol price, and the interest rate for loans. We evaluate our forecasting model by comparing forecasts with actual data using six accuracy measures: mean absolute percentage error (MAPE), geometric mean absolute error (GMAE), symmetric mean absolute percentage error (sMAPE), mean absolute scaled error (MASE), median relative absolute error (MdRAE), and geometric mean relative absolute error (GMRAE).
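A minimal sketch of three of the accuracy measures listed above (MAPE, sMAPE and MASE), computed on placeholder actual/forecast series; the formulas follow their usual textbook definitions rather than anything specific to this paper.

```python
import numpy as np

def mape(actual, forecast):
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def smape(actual, forecast):
    return 100.0 * np.mean(2.0 * np.abs(forecast - actual)
                           / (np.abs(actual) + np.abs(forecast)))

def mase(actual, forecast):
    # Scale by the in-sample MAE of a one-step naive forecast.
    naive_mae = np.mean(np.abs(np.diff(actual)))
    return np.mean(np.abs(actual - forecast)) / naive_mae

# Placeholder monthly sales (units) and model forecasts.
actual = np.array([5200, 5400, 5100, 5800, 6000, 6300, 6100, 6500], float)
forecast = np.array([5000, 5350, 5250, 5600, 6100, 6200, 6250, 6400], float)
print(f"MAPE  = {mape(actual, forecast):.2f}%")
print(f"sMAPE = {smape(actual, forecast):.2f}%")
print(f"MASE  = {mase(actual, forecast):.3f}")
```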

Keywords: composite forecasting, simple exponential smoothing model, autoregressive integrate moving average model selection, accuracy measurements

Procedia PDF Downloads 330
12625 Dynamics of a Reaction-Diffusion Problems Modeling Two Predators Competing for a Prey

Authors: Owolabi Kolade Matthew

Abstract:

In this work, we investigate both analytically and numerically the dynamics of a model comprising a three-species system. We analyze the linear stability of stationary solutions in the one-dimensional multi-species system modeling the interactions of two predators and one prey species. The stability analysis has many implications for understanding the various spatiotemporal and chaotic behaviors of the species in the spatial domain. The results presented establish the possibility of the three interacting species coexisting harmoniously; this is achieved by combining the local and global analyses to determine the global dynamics of the system. In the presence of diffusion, a viable exponential time differencing method is applied to the multi-species nonlinear time-dependent partial differential equations to address the points and queries that may naturally arise. The scheme is described in detail and justified by a number of computational experiments.
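As a pointer to the time-stepping idea, the first-order exponential time differencing scheme (ETD1, exponential Euler) for u' = c·u + N(u) advances the solution as u_{n+1} = e^{ch} u_n + (e^{ch} − 1) N(u_n)/c, treating the linear part exactly. A scalar sketch with an illustrative logistic-type splitting (not the paper's predator-prey system):

```python
import numpy as np

def etd1(u0, c, nonlinear, h, steps):
    """Exponential Euler (ETD1) for u' = c*u + N(u); the linear part is exact."""
    e = np.exp(c * h)
    phi = (e - 1.0) / c                 # integrating-factor weight for N(u)
    u = u0
    for _ in range(steps):
        u = e * u + phi * nonlinear(u)
    return u

# Example: logistic growth u' = u - u**2, split as c*u + N(u) with c = 1, N(u) = -u**2.
u_final = etd1(u0=0.1, c=1.0, nonlinear=lambda u: -u**2, h=0.1, steps=200)
print("ETD1 solution at t = 20:", round(u_final, 6))   # approaches the steady state u = 1
```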

Keywords: asymptotically stable, coexistence, exponential time differencing method, global and local stability, predator-prey model, nonlinear, reaction-diffusion system

Procedia PDF Downloads 384
12624 Analytical Downlink Effective SINR Evaluation in LTE Networks

Authors: Marwane Ben Hcine, Ridha Bouallegue

Abstract:

The aim of this work is to provide an original analytical framework for downlink effective SINR evaluation in LTE networks. The classical single-carrier SINR performance evaluation is extended to multi-carrier systems operating over frequency-selective channels. The extension is achieved by expressing the link outage probability in terms of the statistics of the effective SINR. For effective SINR computation, the exponential effective SINR mapping (EESM) method is used in this work. A closed-form expression for the link outage probability is obtained assuming a log skew normal approximation in the single-carrier case. We then rely on the lognormal approximation to express the exponential effective SINR distribution as a function of the mean and standard deviation of the SINR of a generic subcarrier. The achieved formulas are easily computable and can be evaluated for a user equipment (UE) located at any distance from its serving eNodeB. Simulations show that the proposed framework provides results with accuracy within 0.5 dB.
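For reference, the exponential effective SINR mapping compresses the per-subcarrier SINRs γ_n into a single value γ_eff = −β ln( (1/N) Σ exp(−γ_n/β) ); a short sketch with an illustrative β (in practice β is calibrated per modulation and coding scheme) and randomly generated per-subcarrier SINRs.

```python
import numpy as np

def eesm(sinr_db, beta):
    """Exponential effective SINR mapping over subcarrier SINRs (input/output in dB)."""
    sinr_lin = 10.0 ** (np.asarray(sinr_db) / 10.0)
    eff_lin = -beta * np.log(np.mean(np.exp(-sinr_lin / beta)))
    return 10.0 * np.log10(eff_lin)

# Per-subcarrier SINRs of a frequency-selective channel (illustrative values, dB).
rng = np.random.default_rng(3)
sinr_db = 10.0 + 4.0 * rng.standard_normal(64)
print("mean SINR     :", round(sinr_db.mean(), 2), "dB")
print("EESM (beta=4) :", round(eesm(sinr_db, beta=4.0), 2), "dB")  # typically below the mean
```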

Keywords: LTE, OFDMA, effective SINR, log skew normal approximation

Procedia PDF Downloads 329