Search results for: estimation algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5169

4359 Digital Control Algorithm Based on Delta-Operator for High-Frequency DC-DC Switching Converters

Authors: Renkai Wang, Tingcun Wei

Abstract:

In this paper, a digital control algorithm based on the delta-operator is presented for high-frequency digitally-controlled DC-DC switching converters. The stability and control accuracy of the DC-DC switching converters are improved by using the delta-operator-based digital control algorithm without increasing the hardware circuit scale. A design method for the voltage compensator in the delta domain using PID (proportional-integral-derivative) control is given, and simulation results based on the Simulink platform are provided, which verify the theoretical analysis. It can be concluded that the presented delta-operator-based control algorithm has better stability, better control accuracy, and easier hardware implementation than existing z-operator-based control algorithms, and it can therefore be used for voltage compensator design in high-frequency digitally-controlled DC-DC switching converters.
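
To make the delta-operator realization concrete, the following is a minimal sketch of a delta-domain PID voltage compensator in closed loop with a crude first-order output-voltage model; the sampling period, gains, and plant time constant are hypothetical illustration values, not the converter design of the paper.

```python
# Minimal sketch of a PID voltage compensator realised with the delta operator
# (delta = (q - 1) / Ts).  Gains, sampling period and the first-order plant used
# for the closed-loop test are hypothetical illustration values.

Ts = 1e-6                           # sampling period [s], hypothetical 1 MHz loop
Kp, Ki, Kd = 0.5, 2.0e4, 1.0e-6     # hypothetical PID gains

class DeltaPID:
    def __init__(self, Kp, Ki, Kd, Ts):
        self.Kp, self.Ki, self.Kd, self.Ts = Kp, Ki, Kd, Ts
        self.integ = 0.0            # delta-domain integrator state
        self.prev_err = 0.0

    def update(self, err):
        # delta-operator integration: x[k+1] = x[k] + Ts * err[k]
        self.integ += self.Ts * err
        # delta-operator differentiation: (err[k] - err[k-1]) / Ts
        deriv = (err - self.prev_err) / self.Ts
        self.prev_err = err
        return self.Kp * err + self.Ki * self.integ + self.Kd * deriv

# Closed-loop test against a hypothetical first-order output-voltage model.
pid = DeltaPID(Kp, Ki, Kd, Ts)
v_out, v_ref, tau = 0.0, 1.8, 50e-6
for k in range(2000):
    u = pid.update(v_ref - v_out)
    v_out += Ts / tau * (u - v_out)   # crude Euler plant update
print(f"output voltage after 2 ms: {v_out:.3f} V")
```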

Keywords: digitally-controlled DC-DC switching converter, digital voltage compensator, delta-operator, finite word length, stability

Procedia PDF Downloads 400
4358 Application of Fourier Series Based Learning Control on Mechatronic Systems

Authors: Sandra Baßler, Peter Dünow, Mathias Marquardt

Abstract:

A Fourier series based learning control (FSBLC) algorithm for tracking trajectories of mechanical systems with unknown nonlinearities is presented. Two processes to which the FSBLC with a PD controller is applied are introduced: a simplified service robot capable of climbing stairs thanks to special wheels, and a propeller-driven pendulum with nearly the same control requirements. In addition to investigating how the feedforward for the desired trajectories is learned, some considerations are made on implementing such an algorithm on low-cost microcontroller hardware. Simulations of the service robot as well as practical experiments on the pendulum show the capability of the FSBLC algorithm to improve the control behavior for repetitive tasks of such mechanical systems.
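
The trial-to-trial learning mechanism can be sketched as below: a feedforward term parameterised by Fourier coefficients is updated after every repetition from the recorded tracking error, on top of a PD feedback loop. The toy first-order plant, gains, number of harmonics and learning gain are hypothetical and do not reproduce the authors' robot or pendulum models.

```python
import numpy as np

# Minimal sketch of Fourier-series-based learning control (FSBLC): the feedforward
# is parameterised by Fourier coefficients updated after every repetition of the
# same trajectory.  Plant, gains and harmonic count are hypothetical values.

T, dt = 2.0, 0.002
t = np.arange(0.0, T, dt)
ref = 0.5 * (1 - np.cos(2 * np.pi * t / T))           # repetitive reference
n_harm = 5
basis = np.vstack([np.ones_like(t)] +
                  [f(2 * np.pi * k * t / T) for k in range(1, n_harm + 1)
                   for f in (np.sin, np.cos)])        # (2*n_harm+1, N) basis functions

coeff = np.zeros(basis.shape[0])                      # Fourier coefficients of u_ff
Kp, Kd, gamma = 8.0, 0.5, 2.0                         # PD gains and learning gain

def run_trial(coeff):
    """Simulate one repetition of a toy first-order plant with an unknown nonlinearity."""
    y, y_prev = 0.0, 0.0
    err = np.zeros_like(t)
    for i in range(t.size):
        e = ref[i] - y
        err[i] = e
        u_fb = Kp * e + Kd * (e - (ref[max(i - 1, 0)] - y_prev)) / dt
        u_ff = coeff @ basis[:, i]
        y_prev = y
        y += dt * (-2.0 * y - 0.5 * np.sin(y) + u_fb + u_ff)
    return err

for trial in range(15):
    err = run_trial(coeff)
    # project the tracking error onto the Fourier basis and update the feedforward
    coeff += gamma * (basis @ err) * dt * (2.0 / T)
    print(f"trial {trial:2d}  RMS tracking error = {np.sqrt(np.mean(err**2)):.4f}")
```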

Keywords: climbing stairs, FSBLC, ILC, service robot

Procedia PDF Downloads 302
4357 A System Dynamics Approach to Technological Learning Impact for Cost Estimation of Solar Photovoltaics

Authors: Rong Wang, Sandra Hasanefendic, Elizabeth von Hauff, Bart Bossink

Abstract:

Technological learning and learning curve models have been used continuously to estimate the development of photovoltaic (PV) costs over time for climate mitigation targets. They can integrate a number of technological learning sources that influence the learning process, yet accurate and realistic cost estimations for PV development are still difficult to achieve. This paper develops four hypothetical alternative learning curve models by proposing different combinations of technological learning sources, including local and global technology experience and the knowledge stock. It focuses specifically on the non-linear relationship between costs and technological learning sources and their dynamic interaction, and it uses the system dynamics approach to produce a more accurate PV cost estimation for future development. As a case study, data from China are gathered to illustrate that the learning curve model incorporating both global and local experience is more accurate and realistic than the other three models for PV cost estimation. Furthermore, absorbing and integrating global experience into the local industry has a positive impact on PV cost reduction. Although the learning curve model incorporating the knowledge stock is not realistic for current PV deployment in China, it still plays an effective positive role in future PV cost reduction.
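
As a point of reference for the learning-source combinations, the sketch below fits a two-factor learning curve, cost = c0 * Q_local^(-a) * Q_global^(-b), by ordinary least squares in log space; the capacity series and the "true" parameters are synthetic placeholders, not the Chinese PV data or the system dynamics model of the paper.

```python
import numpy as np

# Minimal sketch of a two-factor learning curve fitted in log space by OLS.
# Capacity series and "true" parameters are synthetic placeholders.

rng = np.random.default_rng(0)
q_local  = np.array([0.1, 0.3, 0.8, 2.0, 5.0, 12.0, 30.0, 70.0])     # GW, cumulative local
q_global = np.array([5, 9, 16, 30, 55, 100, 180, 320], dtype=float)   # GW, cumulative global

c0_true, a_true, b_true = 4.0, 0.12, 0.25                  # hypothetical "true" parameters
cost = c0_true * q_local**(-a_true) * q_global**(-b_true)
cost *= np.exp(rng.normal(0, 0.01, size=cost.size))        # small multiplicative noise

# OLS in log space:  log cost = log c0 - a*log Q_local - b*log Q_global
X = np.column_stack([np.ones_like(cost), np.log(q_local), np.log(q_global)])
beta, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
c0_hat, a_hat, b_hat = np.exp(beta[0]), -beta[1], -beta[2]

print(f"recovered: c0={c0_hat:.2f} $/W, a={a_hat:.3f}, b={b_hat:.3f}")
print(f"learning rate per doubling of local capacity : {1 - 2**(-a_hat):.1%}")
print(f"learning rate per doubling of global capacity: {1 - 2**(-b_hat):.1%}")
```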

Keywords: photovoltaic, system dynamics, technological learning, learning curve

Procedia PDF Downloads 87
4356 Finding Data Envelopment Analysis Target Using the Multiple Objective Linear Programming Structure in Full Fuzzy Case

Authors: Raziyeh Shamsi

Abstract:

In this paper, we present a multiple objective linear programming (MOLP) problem in the full fuzzy case and find Data Envelopment Analysis (DEA) targets. In the presented model, we seek the least inputs and the most outputs in the production possibility set (PPS) under the variable returns to scale (VRS) assumption, so that the efficiency projection is obtained for all decision making units (DMUs). We then provide an algorithm for finding DEA targets interactively in the full fuzzy case, which solves the full fuzzy problem without defuzzification. Owing to the use of interactive methods, the targets obtained by our algorithm are more applicable, more realistic, and in accordance with the wishes of the decision maker. Finally, an application of the algorithm to 21 educational institutions is provided.
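
For orientation, the crisp (non-fuzzy) building block behind such target setting is the input-oriented VRS (BCC) envelopment model, which projects each DMU onto the efficient frontier. The sketch below solves it with scipy's linprog on a small synthetic data set; the full fuzzy MOLP and the interactive procedure of the paper are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# Minimal sketch: input-oriented VRS (BCC) envelopment model used to obtain crisp
# DEA targets for one DMU.  Data are synthetic; the fuzzy MOLP and the interactive
# procedure of the paper are not reproduced here.

X = np.array([[2.0, 3.0, 4.0, 6.0, 5.0],       # inputs  (m x n): rows = inputs, cols = DMUs
              [3.0, 2.0, 5.0, 7.0, 4.0]])
Y = np.array([[1.0, 2.0, 3.0, 4.0, 2.5]])       # outputs (s x n)

def dea_target(j0):
    m, n = X.shape
    s = Y.shape[0]
    # decision variables: [theta, lambda_1 ... lambda_n]
    c = np.zeros(1 + n); c[0] = 1.0             # minimise theta
    A_in = np.hstack([-X[:, [j0]], X])          # X @ lam - theta * x0 <= 0
    b_in = np.zeros(m)
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # -Y @ lam <= -y0  (Y @ lam >= y0)
    b_out = -Y[:, j0]
    A_eq = np.hstack([[0.0], np.ones(n)]).reshape(1, -1)   # VRS: sum(lambda) = 1
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.hstack([b_in, b_out]),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(None, None)] + [(0, None)] * n)
    theta, lam = res.x[0], res.x[1:]
    return theta, X @ lam, Y @ lam              # efficiency, input target, output target

for j in range(X.shape[1]):
    theta, x_t, y_t = dea_target(j)
    print(f"DMU {j}: efficiency = {theta:.3f}, input target = {np.round(x_t, 3)}, "
          f"output target = {np.round(y_t, 3)}")
```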

Keywords: DEA, MOLP, full fuzzy, target

Procedia PDF Downloads 294
4355 A New Method to Winner Determination for Economic Resource Allocation in Cloud Computing Systems

Authors: Ebrahim Behrouzian Nejad, Rezvan Alipoor Sabzevari

Abstract:

Cloud computing systems are large-scale distributed systems that focus on large-scale resource sharing, cooperation among several organizations, and their use in new applications. One of the main challenges in this realm is resource allocation, for which there are many different approaches. Among the economic methods, auction-based methods are more prominent than fixed-price methods, and the double combinatorial auction is one of the appropriate ways to allocate resources in cloud computing. This method includes two phases: winner determination and resource allocation. In this paper, a new method is presented for winner determination in double combinatorial auction-based resource allocation using the Imperialist Competitive Algorithm (ICA). The experimental results show that the number of winner users is higher with the proposed algorithm than with the genetic algorithm, whereas the number of winner providers is higher with the genetic algorithm.

Keywords: cloud computing, resource allocation, double auction, winner determination

Procedia PDF Downloads 352
4354 Estimation of a Finite Population Mean under Random Non Response Using Improved Nadaraya and Watson Kernel Weights

Authors: Nelson Bii, Christopher Ouma, John Odhiambo

Abstract:

Non-response is a potential source of errors in sample surveys. It introduces bias and large variance in the estimation of finite population parameters. Regression models have been recognized as one of the techniques for reducing bias and variance due to random non-response using auxiliary data. In this study, it is assumed that random non-response occurs in the survey variable in the second stage of cluster sampling, with full auxiliary information available throughout. Auxiliary information is used at the estimation stage via a regression model to address the problem of random non-response; in particular, it is used via an improved Nadaraya-Watson kernel regression technique to compensate for random non-response. The asymptotic bias and mean squared error of the proposed estimator are derived. In addition, a simulation study indicates that the proposed estimator has smaller bias and smaller mean squared error than existing estimators of the finite population mean. The proposed estimator is also shown to have tighter confidence interval lengths at a 95% coverage rate. The results obtained in this study are useful, for instance, in choosing efficient estimators of the finite population mean in demographic sample surveys.
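
The smoother underlying the proposed weights is the classical Nadaraya-Watson kernel regression estimator; a minimal sketch with a Gaussian kernel on synthetic data is given below (the improved weights, the two-stage cluster design and the non-response mechanism of the paper are not reproduced).

```python
import numpy as np

# Minimal sketch of the classical Nadaraya-Watson kernel regression estimator,
#   m_hat(x) = sum_i K((x - x_i)/h) * y_i / sum_i K((x - x_i)/h),
# with a Gaussian kernel on synthetic data.

def nadaraya_watson(x_eval, x, y, h):
    u = (x_eval[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * u**2)                     # Gaussian kernel (constants cancel)
    return (K @ y) / K.sum(axis=1)

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)                     # auxiliary variable
y = np.sin(x) + 0.3 * rng.normal(size=x.size)   # survey variable
grid = np.linspace(0, 10, 11)
print(np.round(nadaraya_watson(grid, x, y, h=0.8), 3))
```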

Keywords: mean squared error, random non-response, two-stage cluster sampling, confidence interval lengths

Procedia PDF Downloads 127
4353 RP-HPLC Method Development and Its Validation for Simultaneous Estimation of Metoprolol Succinate and Olmesartan Medoxomil Combination in Bulk and Tablet Dosage Form

Authors: S. Jain, R. Savalia, V. Saini

Abstract:

A simple, accurate, precise, sensitive and specific RP-HPLC method was developed and validated for the simultaneous estimation of Metoprolol Succinate and Olmesartan Medoxomil in bulk and tablet dosage form. The RP-HPLC method showed adequate separation of Metoprolol Succinate and Olmesartan Medoxomil from their degradation products. The separation was achieved on a Phenomenex Luna ODS C18 column (250 mm x 4.6 mm i.d., 5 μm particle size) with an isocratic mixture of acetonitrile and 50 mM phosphate buffer (pH 4.0, adjusted with glacial acetic acid) in the ratio of 55:45 v/v. The mobile phase flow rate was 1.0 ml/min, the injection volume was 20 μl, and the detection wavelength was 225 nm. The retention times for Metoprolol Succinate and Olmesartan Medoxomil were 2.451±0.1 min and 6.167±0.1 min, respectively. The linearity of the proposed method was investigated in the range of 5-50 μg/ml and 2-20 μg/ml for Metoprolol Succinate and Olmesartan Medoxomil, respectively, with correlation coefficients of 0.999 and 0.9996. The limits of detection were 0.2847 μg/ml and 0.1251 μg/ml, and the limits of quantification were 0.8630 μg/ml and 0.3793 μg/ml, for Metoprolol Succinate and Olmesartan Medoxomil, respectively. The proposed method was validated as per ICH guidelines for linearity, accuracy, precision, specificity and robustness for the estimation of Metoprolol Succinate and Olmesartan Medoxomil in a commercially available tablet dosage form, and the results were found to be satisfactory. Thus, the developed and validated stability-indicating method can be used successfully for marketed formulations.

Keywords: metoprolol succinate, olmesartan medoxomil, RP-HPLC method, validation, ICH

Procedia PDF Downloads 302
4352 Sparse Unmixing of Hyperspectral Data by Exploiting Joint-Sparsity and Rank-Deficiency

Authors: Fanqiang Kong, Chending Bian

Abstract:

In this work, we exploit two assumed properties of the abundances of the observed signatures (endmembers) in order to reconstruct the abundances from hyperspectral data. The first property is joint sparsity, which assumes that adjacent pixels can be expressed as different linear combinations of the same materials. The second property is rank deficiency: the number of endmembers participating in the hyperspectral data is very small compared with the dimensionality of the spectral library, which means that the abundance matrix of the endmembers is a low-rank matrix. These assumptions lead to an optimization problem for the sparse unmixing model that requires minimizing a combined l2,p-norm and nuclear norm. We propose a variable splitting and augmented Lagrangian algorithm to solve this optimization problem. Experimental evaluation carried out on synthetic and real hyperspectral data shows that the proposed method outperforms state-of-the-art algorithms with better spectral unmixing accuracy.
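
Inside a variable splitting and augmented Lagrangian scheme, the two regularizers typically translate into two proximal steps: row-wise shrinkage for the joint-sparsity term (the l2,1 special case of the l2,p norm) and singular value thresholding for the nuclear norm. The sketch below shows only these two building blocks on a random matrix; it is not the full unmixing solver of the paper.

```python
import numpy as np

# Minimal sketch of the two proximal operators used inside a variable-splitting /
# augmented Lagrangian sparse unmixing solver:
#   - row-wise soft-thresholding  -> joint sparsity (l2,1 case of l2,p)
#   - singular value thresholding -> rank deficiency (nuclear norm)

def prox_l21(A, tau):
    """Row-wise shrinkage: keeps or kills whole rows (abundances of one endmember)."""
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * A

def prox_nuclear(A, tau):
    """Singular value thresholding: shrinks singular values, lowering the rank."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 50))                   # rows: library endmembers, cols: pixels
A_sparse   = prox_l21(A, tau=5.0)
A_low_rank = prox_nuclear(A, tau=5.0)
print("nonzero rows after l2,1 prox :", int(np.sum(np.any(A_sparse != 0, axis=1))))
print("rank after nuclear-norm prox :", np.linalg.matrix_rank(A_low_rank))
```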

Keywords: hyperspectral unmixing, joint-sparse, low-rank representation, abundance estimation

Procedia PDF Downloads 243
4351 Dynamic Measurement System Modeling with Machine Learning Algorithms

Authors: Changqiao Wu, Guoqing Ding, Xin Chen

Abstract:

In this paper, ways of modeling dynamic measurement systems are discussed. Specifically, a linear single-input single-output system can be modeled with a shallow neural network, and gradient-based optimization algorithms are used to search for the proper coefficients. In addition, methods based on the normal equation and on second-order gradient descent are proposed to accelerate the modeling process, and ways of obtaining better gradient estimates are discussed. It is shown that the mathematical essence of the learning objective is maximum likelihood estimation under Gaussian noise. For conventional gradient descent, mini-batch learning and gradient descent with momentum contribute to faster convergence and enhance model ability. Lastly, experimental results prove the effectiveness of the second-order gradient descent algorithm and indicate that optimization with the normal equation is the most suitable for linear dynamic models.
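
A minimal sketch of the normal-equation step for a linear single-input single-output model is given below, using an ARX-style regression on lagged inputs and outputs; the model order, coefficients and data are hypothetical illustration values.

```python
import numpy as np

# Minimal sketch: identify a linear SISO dynamic model
#   y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
# directly with the normal equation theta = (X^T X)^{-1} X^T y.
# Data are simulated from hypothetical "true" coefficients plus Gaussian noise.

rng = np.random.default_rng(0)
N = 500
u = rng.normal(size=N)
a1, a2, b1, b2 = 1.2, -0.4, 0.8, 0.3            # hypothetical true parameters (stable system)
y = np.zeros(N)
for k in range(2, N):
    y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + 0.05*rng.normal()

# build the regression matrix from lagged signals
X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
t = y[2:]
theta = np.linalg.solve(X.T @ X, X.T @ t)        # normal equation
print("estimated [a1, a2, b1, b2] =", np.round(theta, 3))
```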

Keywords: dynamic system modeling, neural network, normal equation, second order gradient descent

Procedia PDF Downloads 116
4350 Stature and Gender Estimation Using Foot Measurements in South Indian Population

Authors: Jagadish Rao Padubidri, Mehak Bhandary, Sowmya J. Rao

Abstract:

Introduction: The significance of the human foot and its measurements in identifying an individual has been demonstrated many times by studies in different geographical areas, and its association with the stature and gender of the individual has been justified by many researchers. In our study, we used different foot measurements, including length, width, malleolar height and navicular height, to establish their association with stature and gender and to determine the accuracy of the resulting estimates. The purpose of this study is to show the relation of foot measurements to stature and gender and to derive multiple and logistic regression equations for stature and gender estimation in a South Indian population. Materials and Methods: The subjects for this study were 200 South Indian students, 100 females and 100 males, aged between 18 and 24 years. The data for the present study included the stature, foot length, foot breadth, foot malleolar height and foot navicular height of both the right and left foot. Descriptive statistics, t-tests and Pearson correlation coefficients were computed between stature, gender and foot measurements. Stature was estimated from right and left foot measurements for both the male and female South Indian population using multiple regression analysis, and logistic regression analysis was used for gender estimation. Results: The mean values of stature and of the right and left foot measurements were higher in the male population than in the female population. LFL (left foot length) is greater than RFL (right foot length) in the male group, but in the female group the lengths of both feet are almost equal [RFL=226.6, LFL=227.1]. There is not much difference in the means of RFW (right foot width) and LFW (left foot width) in either gender. Significant differences were seen in the mean values of the malleolar and navicular heights of the right and left feet in males; no such difference was seen in female subjects. Conclusions: The study successfully demonstrated the correlation of foot length with stature in all three study groups for both the right and left foot. Foot width and malleolar height follow in their usefulness for estimating stature in the male and female groups. The navicular height of both the right and left foot showed a poor relationship with stature in both groups. Multiple regression equations for both right and left foot measurements to estimate stature were derived with standard errors ranging from 11-12 cm in males and 10-11 cm in females; the SEE was 5.8 when the male and female groups were pooled. The logistic regression model derived to determine gender showed 85% and 92.5% accuracy using right and left foot measurements, respectively. We believe that stature and gender can be estimated from foot measurements in the South Indian population.
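
As a schematic of the statistical machinery only (synthetic measurements, not the South Indian sample or the published equations), the sketch below fits a multiple regression for stature and a logistic regression for sex from foot measurements.

```python
import numpy as np

# Minimal sketch: multiple linear regression for stature and logistic regression
# for sex, both from foot measurements.  All data below are synthetic placeholders.

rng = np.random.default_rng(0)
n = 200
sex = rng.integers(0, 2, n)                                  # 1 = male, 0 = female
foot_len   = 228 + 14*sex + rng.normal(0, 8, n)              # mm
foot_width = 88 + 5*sex + rng.normal(0, 4, n)                # mm
malleol_h  = 58 + 4*sex + rng.normal(0, 3, n)                # mm
stature    = 400 + 5.2*foot_len + 1.5*foot_width + rng.normal(0, 40, n)   # mm

X = np.column_stack([np.ones(n), foot_len, foot_width, malleol_h])

# multiple regression for stature (least squares)
beta, *_ = np.linalg.lstsq(X, stature, rcond=None)
resid = stature - X @ beta
print("stature coefficients:", np.round(beta, 2), " SEE =", round(resid.std(ddof=4), 1), "mm")

# logistic regression for sex via Newton-Raphson (iteratively reweighted least squares)
w = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ w))
    W = p * (1 - p)
    w += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (sex - p))
pred = (1 / (1 + np.exp(-X @ w)) > 0.5).astype(int)
print("sex classification accuracy:", round(np.mean(pred == sex), 3))
```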

Keywords: foot length, gender, stature, South Indian

Procedia PDF Downloads 327
4349 State Estimation Based on Unscented Kalman Filter for Burgers’ Equation

Authors: Takashi Shimizu, Tomoaki Hashimoto

Abstract:

Controlling the flow of fluids is a challenging problem that arises in many fields. Burgers' equation is a fundamental equation for several flow phenomena such as traffic, shock waves, and turbulence. The optimal feedback control method known as model predictive control has been proposed for Burgers' equation. However, the model predictive control method is inapplicable to systems whose state variables are not all exactly known. From a practical point of view, it is unusual for all the state variables of a system to be known exactly, because the states are measured through output sensors and only a limited part of them is available. In fact, the flow velocities of fluid systems usually cannot be measured over the whole spatial domain. Hence, any practical feedback controller for fluid systems must incorporate some type of state estimator. To apply model predictive control to fluid systems described by Burgers' equation, a state estimation method for Burgers' equation with limited measurable state variables must be established. For this purpose, we apply the unscented Kalman filter to estimate the state variables of fluid systems described by Burgers' equation. The objective of this study is to establish a state estimation method based on the unscented Kalman filter for Burgers' equation. The effectiveness of the proposed method is verified by numerical simulations.
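
A minimal sketch of the approach is given below: a coarsely discretized viscous Burgers' equation is propagated, only a few grid points are measured, and an unscented Kalman filter reconstructs the full velocity field. Grid size, viscosity, noise levels and sensor locations are hypothetical illustration values, not the simulation setup of the paper.

```python
import numpy as np

# Minimal sketch: unscented Kalman filter estimating the full velocity field of a
# coarsely discretised viscous Burgers' equation from a few measured grid points.

nx, dx, dt, nu = 20, 1.0 / 20, 0.001, 0.05
obs_idx = [4, 9, 14]                                   # measured grid points

def burgers_step(u):
    """One explicit Euler step of viscous Burgers' equation, periodic boundaries."""
    up, um = np.roll(u, -1), np.roll(u, 1)
    return u + dt * (-u * (up - um) / (2 * dx) + nu * (up - 2 * u + um) / dx**2)

def ukf_step(x, P, z, Q, R, kappa=0.0):
    n = x.size
    S = np.linalg.cholesky((n + kappa) * P)
    sigmas = np.vstack([x, x + S.T, x - S.T])          # 2n+1 sigma points
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    sig_f = np.array([burgers_step(s) for s in sigmas])   # propagate through dynamics
    x_pred = w @ sig_f
    P_pred = (sig_f - x_pred).T @ (w[:, None] * (sig_f - x_pred)) + Q
    sig_h = sig_f[:, obs_idx]                          # measurement = selected grid points
    z_pred = w @ sig_h
    Pzz = (sig_h - z_pred).T @ (w[:, None] * (sig_h - z_pred)) + R
    Pxz = (sig_f - x_pred).T @ (w[:, None] * (sig_h - z_pred))
    K = Pxz @ np.linalg.inv(Pzz)
    P_new = P_pred - K @ Pzz @ K.T
    P_new = 0.5 * (P_new + P_new.T) + 1e-9 * np.eye(n)  # keep numerically symmetric PD
    return x_pred + K @ (z - z_pred), P_new

rng = np.random.default_rng(0)
x_true = np.sin(2 * np.pi * np.arange(nx) / nx)        # true initial velocity field
x_est, P = np.zeros(nx), np.eye(nx)                    # filter starts with no knowledge
Q, R = 1e-4 * np.eye(nx), 1e-3 * np.eye(len(obs_idx))
for _ in range(500):
    x_true = burgers_step(x_true)
    z = x_true[obs_idx] + rng.normal(0.0, np.sqrt(1e-3), len(obs_idx))
    x_est, P = ukf_step(x_est, P, z, Q, R)
print("final RMS estimation error:", round(float(np.sqrt(np.mean((x_est - x_true) ** 2))), 4))
```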

Keywords: observer systems, unscented Kalman filter, nonlinear systems, Burgers' equation

Procedia PDF Downloads 144
4348 Anisotropic Total Fractional Order Variation Model in Seismic Data Denoising

Authors: Jianwei Ma, Diriba Gemechu

Abstract:

In seismic data processing, attenuation of random noise is the basic step for improving the quality of data for further use in exploration and development in the gas and oil industries. The signal-to-noise ratio largely determines the quality of seismic data and affects the reliability and accuracy of the seismic signal during interpretation. To use seismic data for further application and interpretation, we need to improve the signal-to-noise ratio while attenuating random noise effectively. To improve the signal-to-noise ratio and attenuate seismic random noise while preserving important features and information in the seismic signal, we introduce an anisotropic total fractional-order variation denoising algorithm. The anisotropic total fractional-order variation model, defined in the fractional-order bounded variation space, is proposed as a regularization in seismic denoising. The split Bregman algorithm is employed to solve the minimization problem of the anisotropic total fractional-order variation model, and the corresponding denoising algorithm for the proposed method is derived. We test the effectiveness of the proposed method on synthetic and real seismic data sets, and the denoised result is compared with F-X deconvolution and the non-local means denoising algorithm.

Keywords: anisotropic total fractional order variation, fractional order bounded variation, seismic random noise attenuation, split Bregman algorithm

Procedia PDF Downloads 199
4347 A Bathtub Curve from Nonparametric Model

Authors: Eduardo C. Guardia, Jose W. M. Lima, Afonso H. M. Santos

Abstract:

This paper presents a nonparametric method to obtain the hazard rate "bathtub curve" for power system components. The model is a mixture of the three known phases of a component's life, the decreasing failure rate (DFR), the constant failure rate (CFR) and the increasing failure rate (IFR), represented by three parametric Weibull models. The parameters are obtained from a simultaneous fit of the model to the kernel nonparametric hazard rate curve. From the Weibull parameters and failure rate curves, the useful lifetime and the characteristic lifetime are defined. To demonstrate the model, historical time-to-failure data of distribution transformers are used as an example. The resulting bathtub curve gives the failure rate over the equipment lifetime, which can be applied in economic and replacement decision models.
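
To illustrate the parametric side of the procedure, the sketch below fits a hazard curve composed of three Weibull hazard terms (DFR, CFR, IFR) to a synthetic noisy hazard estimate by nonlinear least squares; the data and starting values are placeholders, not the transformer records or the kernel estimate used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: a "bathtub" hazard built from three Weibull hazard terms
#   h(t) = sum_i (beta_i/eta_i) * (t/eta_i)**(beta_i - 1)
# with beta < 1 (DFR), beta ~ 1 (CFR) and beta > 1 (IFR), fitted to a synthetic
# noisy hazard curve by nonlinear least squares.

def weibull_hazard(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1.0)

def bathtub(t, b1, e1, b2, e2, b3, e3):
    return (weibull_hazard(t, b1, e1) + weibull_hazard(t, b2, e2)
            + weibull_hazard(t, b3, e3))

t = np.linspace(0.5, 40.0, 80)                         # years in service
true = bathtub(t, 0.5, 3.0, 1.0, 25.0, 4.0, 35.0)      # hypothetical "true" hazard
rng = np.random.default_rng(0)
h_emp = true * (1 + 0.05 * rng.normal(size=t.size))    # noisy "nonparametric" estimate

p0 = (0.7, 2.0, 1.0, 20.0, 3.0, 30.0)                  # starting values
popt, _ = curve_fit(bathtub, t, h_emp, p0=p0, bounds=(1e-2, 200.0))
print("fitted (beta, eta) triples:", np.round(popt, 2))
```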

Keywords: bathtub curve, failure analysis, lifetime estimation, parameter estimation, Weibull distribution

Procedia PDF Downloads 433
4346 Flame Volume Prediction and Validation for Lean Blowout of Gas Turbine Combustor

Authors: Ejaz Ahmed, Huang Yong

Abstract:

The operation of aero engines is critically important in the vicinity of lean blowout (LBO) limits. Lefebvre's model of LBO, based on empirical correlation, has been extended by the authors to a flame volume concept. The flame volume takes into account the effects of the geometric configuration and the complex spatial interaction of mixing, turbulence, heat transfer and combustion processes inside the gas turbine combustion chamber. For these reasons, flame-volume-based LBO predictions are more accurate. Although the LBO prediction accuracy has improved, estimating the flame volume Vf in real gas turbine combustors remains a challenge. This work extends the flame volume prediction approach, previously based on a fuel-iterative approximation with cold flow simulations, to reactive flow simulations. The flame volume for 11 combustor configurations has been simulated and validated against experimental data. To make the prediction methodology robust, as required in the preliminary design stage, reactive flow simulations were carried out with a combination of the probability density function (PDF) and discrete phase model (DPM) in FLUENT 15.0. A criterion for flame identification was defined, two important parameters, the critical injection diameter (Dp,crit) and the critical temperature (Tcrit), were identified, and their influence on the reactive flow simulation was studied for Vf estimation. The obtained results exhibit ±15% error in Vf estimation with respect to the experimental data.

Keywords: CFD, combustion, gas turbine combustor, lean blowout

Procedia PDF Downloads 258
4345 On Modeling Data Sets by Means of a Modified Saddlepoint Approximation

Authors: Serge B. Provost, Yishan Zhang

Abstract:

A moment-based adjustment to the saddlepoint approximation is introduced in the context of density estimation. First applied to univariate distributions, this methodology is extended to the bivariate case. It then entails estimating the density function associated with each marginal distribution by means of the saddlepoint approximation and applying a bivariate adjustment to the product of the resulting density estimates. The connection to the distribution of empirical copulas is pointed out, and a novel approach is proposed for estimating the support of a distribution. As these results rely solely on sample moments and empirical cumulant-generating functions, they are particularly well suited for modeling massive data sets. Several illustrative applications are presented.
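
For the univariate building block, the sketch below computes the (unadjusted) saddlepoint density approximation directly from the empirical cumulant-generating function: solve K'(s) = x by Newton's method and evaluate f(x) ≈ exp(K(s) - sx) / sqrt(2π K''(s)). The moment-based adjustment and the bivariate extension of the paper are omitted, and the data are synthetic.

```python
import numpy as np
from scipy.stats import gamma

# Minimal sketch: univariate saddlepoint density approximation built from the
# empirical cumulant-generating function K(s) = log( mean_i exp(s * x_i) ).

rng = np.random.default_rng(0)
data = rng.gamma(shape=3.0, scale=1.0, size=5000)      # synthetic sample

def K(s):  return np.log(np.mean(np.exp(s * data)))
def K1(s):
    w = np.exp(s * data); return np.sum(w * data) / np.sum(w)
def K2(s):
    w = np.exp(s * data); m = np.sum(w * data) / np.sum(w)
    return np.sum(w * (data - m) ** 2) / np.sum(w)

def saddlepoint_density(x, s0=0.0):
    s = s0
    for _ in range(50):                                # Newton iterations for K'(s) = x
        s -= (K1(s) - x) / K2(s)
    return np.exp(K(s) - s * x) / np.sqrt(2 * np.pi * K2(s))

for x in (1.0, 2.0, 4.0, 6.0):
    print(f"x={x}: saddlepoint ~ {saddlepoint_density(x):.4f}, "
          f"true gamma pdf = {gamma.pdf(x, 3.0):.4f}")
```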

Keywords: empirical cumulant-generating function, endpoints identification, saddlepoint approximation, sample moments, density estimation

Procedia PDF Downloads 155
4344 An Efficient Propensity Score Method for Causal Analysis With Application to Case-Control Study in Breast Cancer Research

Authors: Ms Azam Najafkouchak, David Todem, Dorothy Pathak, Pramod Pathak, Joseph Gardiner

Abstract:

Propensity score (PS) methods have recently become the standard tool for causal inference in observational studies, where exposure is not randomly assigned and confounding can therefore affect the estimation of the treatment effect on the outcome. For a binary outcome, the effect of treatment on the outcome can be estimated by odds ratios, relative risks, or risk differences; however, different PS methods may give different estimates of the treatment effect. The main PS analysis methods include matching, inverse probability weighting, stratification, and covariate adjustment on the PS. Because of the dangers of discretizing continuous variables (exposure, covariates), the focus of this paper is on how variation in the cut-points or boundaries affects the average treatment effect (ATE) when using the stratification PS method. We therefore try to avoid choosing arbitrary cut-points; instead, we continuously discretize the PS and accumulate information across all cut-points for inference. We use Monte Carlo simulation to evaluate the ATE, focusing on two PS methods, stratification and covariate adjustment on the PS. We then show how this behaves in an analysis of data from a case-control study of breast cancer, the Polish Women's Health Study.
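
As a reference point for the conventional estimator whose sensitivity to cut-points motivates the paper, the sketch below estimates the ATE by stratifying on an estimated propensity score with fixed quintile cut-points; the data-generating model and the number of strata are illustrative assumptions, and the continuous-discretization estimator of the paper is not implemented.

```python
import numpy as np

# Minimal sketch of ATE estimation by propensity-score stratification with fixed
# quintile cut-points.  Data, PS model and number of strata are synthetic choices.

rng = np.random.default_rng(0)
n = 5000
x1, x2 = rng.normal(size=n), rng.normal(size=n)               # confounders
ps_true = 1 / (1 + np.exp(-(0.8 * x1 - 0.5 * x2)))            # true propensity score
treat = rng.binomial(1, ps_true)
y = 2.0 * treat + 1.5 * x1 + 1.0 * x2 + rng.normal(size=n)    # true ATE = 2.0

# step 1: estimate the propensity score with logistic regression (Newton-Raphson)
X = np.column_stack([np.ones(n), x1, x2])
w = np.zeros(3)
for _ in range(20):
    p = 1 / (1 + np.exp(-X @ w))
    w += np.linalg.solve(X.T @ ((p * (1 - p))[:, None] * X), X.T @ (treat - p))
ps_hat = 1 / (1 + np.exp(-X @ w))

# step 2: stratify on quintiles of the estimated PS, average within-stratum differences
edges = np.quantile(ps_hat, [0, 0.2, 0.4, 0.6, 0.8, 1.0])
stratum = np.clip(np.searchsorted(edges, ps_hat, side="right") - 1, 0, 4)
ate = 0.0
for s in range(5):
    m = stratum == s
    diff = y[m & (treat == 1)].mean() - y[m & (treat == 0)].mean()
    ate += diff * m.mean()                                     # weight by stratum size
print("stratified ATE estimate:", round(ate, 3), "(true ATE = 2.0)")
```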

Keywords: average treatment effect, propensity score, stratification, covariate adjustment, Monte Carlo estimation, breast cancer, case-control study

Procedia PDF Downloads 93
4343 On Confidence Intervals for the Difference between Inverse of Normal Means with Known Coefficients of Variation

Authors: Arunee Wongkhao, Suparat Niwitpong, Sa-aat Niwitpong

Abstract:

In this paper, we propose two new confidence intervals for the difference between the inverses of normal means with known coefficients of variation. One confidence interval is constructed based on the generalized confidence interval, and the other is constructed based on the closed-form method of variance estimation. We examine the performance of these confidence intervals in terms of coverage probabilities and expected lengths via Monte Carlo simulation.

Keywords: coverage probability, expected length, inverse of normal mean, coefficient of variation, generalized confidence interval, closed form method of variance estimation

Procedia PDF Downloads 301
4342 Solving the Wireless Mesh Network Design Problem Using Genetic Algorithm and Simulated Annealing Optimization Methods

Authors: Moheb R. Girgis, Tarek M. Mahmoud, Bahgat A. Abdullatif, Ahmed M. Rabie

Abstract:

Mesh clients, mesh routers and gateways are the components of a Wireless Mesh Network (WMN). In a WMN, gateways connect to the Internet using wireline links and supply Internet access services for users. Multiple gateways are usually needed, which takes time and costs a great deal of money to set up, due to the limited wireless channel bit rate. WMN is a highly developed technology that offers end users wireless broadband access. It offers a high degree of flexibility compared with conventional networks; however, this attribute comes at the expense of a more complex construction. Therefore, the planning and optimization of WMNs is a challenge. In this paper, we address this challenge using a genetic algorithm and simulated annealing. The genetic algorithm and simulated annealing enable searching for a low-cost WMN configuration subject to constraints and determine the number of gateways used. Experimental results demonstrate the performance of the genetic algorithm and simulated annealing in minimizing WMN network costs while satisfying quality of service, and the proposed models are shown to significantly outperform the existing solutions.

Keywords: wireless mesh networks, genetic algorithms, simulated annealing, topology design

Procedia PDF Downloads 447
4341 Data Driven Infrastructure Planning for Offshore Wind farms

Authors: Isha Saxena, Behzad Kazemtabrizi, Matthias C. M. Troffaes, Christopher Crabtree

Abstract:

The calculations made at the beginning of the life of a wind farm are rarely reliable, which makes it important to study the failure and repair rates of wind turbines under various conditions. This miscalculation happens because current models make the simplifying assumption that the failure/repair rate remains constant over time, which means that the reliability function is exponential in nature. This research aims to create a more accurate model using sensory data and a data-driven approach. Data cleaning and processing are done by comparing the power curve data of the wind turbines with SCADA data; the data are then converted to time-to-repair and time-to-failure time series. Several different mathematical functions are fitted to the times-to-failure and times-to-repair of the wind turbine components using maximum likelihood estimation and the posterior expectation method for Bayesian parameter estimation. Initial results indicate that the two-parameter Weibull function and the exponential function produce almost identical results. Further analysis is being done using complex system analysis, considering the failures of each electrical and mechanical component of the wind turbine. The aim of this project is to perform a more accurate reliability analysis that can help engineers schedule maintenance and repairs to decrease the downtime of the turbine.
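
The core fitting step can be illustrated with a short sketch that estimates two-parameter Weibull shape and scale from synthetic times-to-failure by maximum likelihood and compares the fit with an exponential distribution; the failure times below are placeholders, not the SCADA-derived series of the study.

```python
import numpy as np
from scipy.stats import weibull_min, expon

# Minimal sketch: fit a two-parameter Weibull and an exponential distribution to
# synthetic times-to-failure by maximum likelihood and compare log-likelihoods.

rng = np.random.default_rng(0)
ttf = weibull_min.rvs(c=1.3, scale=900.0, size=300, random_state=rng)   # hours, synthetic

shape, loc, scale = weibull_min.fit(ttf, floc=0)           # MLE with location fixed at 0
loc_e, scale_e = expon.fit(ttf, floc=0)

ll_weibull = np.sum(weibull_min.logpdf(ttf, shape, loc, scale))
ll_expon   = np.sum(expon.logpdf(ttf, loc_e, scale_e))
print(f"Weibull MLE: shape={shape:.2f}, scale={scale:.0f} h, log-likelihood={ll_weibull:.1f}")
print(f"Exponential MLE: mean={scale_e:.0f} h, log-likelihood={ll_expon:.1f}")
```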

Keywords: reliability, Bayesian parameter inference, maximum likelihood estimation, Weibull function, SCADA data

Procedia PDF Downloads 70
4340 Applying Genetic Algorithm in Exchange Rate Models Determination

Authors: Mehdi Rostamzadeh

Abstract:

Genetic algorithms (GAs) are adaptive heuristic search algorithms premised on the evolutionary ideas of natural selection and genetics. In this study, we apply GAs to fundamental and technical models of exchange rate determination in the exchange rate market. In this framework, we estimated absolute and relative purchasing power parity, Mundell-Fleming, sticky and flexible prices (monetary models), equilibrium exchange rate and portfolio balance models as fundamental models, and autoregressive (AR), moving average (MA), autoregressive moving average (ARMA) and mean reversion (MR) models as technical models, for the Iranian Rial against the European Union's Euro using monthly data from January 1992 to December 2014. We then fed these models into the genetic algorithm system to measure the optimal weight for each model. These optimal weights were measured according to four criteria: R-squared (R2), mean square error (MSE), mean absolute percentage error (MAPE) and root mean square error (RMSE). Based on the obtained results, it seems that fundamental models explain the behavior of the Iranian Rial against the EU Euro exchange rate better than technical models.

Keywords: exchange rate, genetic algorithm, fundamental models, technical models

Procedia PDF Downloads 262
4339 Signal Processing of the Blood Pressure and Characterization

Authors: Hadj Abd El Kader Benghenia, Fethi Bereksi Reguig

Abstract:

In clinical medicine, blood pressure and hemodynamic monitoring provide rich pathophysiological information about the cardiovascular system, described through factors such as blood volume, arterial compliance and peripheral resistance. In this work, we are interested in analyzing these signals in order to propose a detection algorithm that delineates the different sequences, in particular the systolic blood pressure (SBP), the diastolic blood pressure (DBP) and the dicrotic wave, and analyzes them in order to extract the cardiovascular parameters.
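
A minimal sketch of the delineation task is shown below: systolic peaks and diastolic troughs are located on a synthetic pressure-like waveform with simple peak picking. The waveform and thresholds are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.signal import find_peaks

# Minimal sketch: locate systolic peaks (SBP) and diastolic troughs (DBP) on a
# synthetic arterial-pressure-like waveform with simple peak picking.

fs = 200                                            # sampling frequency [Hz]
t = np.arange(0, 10, 1 / fs)
hr = 1.2                                            # heart rate [Hz] (72 bpm)
# crude pressure-like waveform: 90 mmHg mean + pulsatile component + noise
bp = (90 + 25 * np.sin(2 * np.pi * hr * t) + 5 * np.sin(4 * np.pi * hr * t)
      + np.random.default_rng(0).normal(0, 0.5, t.size))

sys_idx, _ = find_peaks(bp, distance=int(0.5 * fs / hr), prominence=10)    # systolic peaks
dia_idx, _ = find_peaks(-bp, distance=int(0.5 * fs / hr), prominence=10)   # diastolic troughs

print(f"beats detected: {len(sys_idx)}")
print(f"mean SBP ~ {bp[sys_idx].mean():.1f} mmHg, mean DBP ~ {bp[dia_idx].mean():.1f} mmHg")
```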

Keywords: blood pressure, SBP, DBP, detection algorithm

Procedia PDF Downloads 425
4338 Optimal Emergency Shipment Policy for a Single-Echelon Periodic Review Inventory System

Authors: Saeed Poormoaied, Zumbul Atan

Abstract:

Emergency shipments provide a powerful mechanism to alleviate the risk of imminent stock-outs and can result in substantial benefits in an inventory system; customer satisfaction and a high service level are immediate consequences of utilizing them. In this paper, we consider a single-echelon periodic review inventory system consisting of a single local warehouse, replenished from a central warehouse with ample capacity, in an infinite horizon setting. Since the structure of the optimal policy appears to be complicated, we analyze this problem under an order-up-to-S inventory control policy framework, the (S, T) policy, with the emergency shipment consideration. In each period of the periodic review policy, there is a single opportunity at any point in time for an emergency shipment, so that in case of stock-outs, an emergency shipment is requested. The goal is to determine the timing and amount of the emergency shipment during a period (the emergency shipment policy) as well as the base stock periodic review policy parameters (the replenishment policy). We show how taking advantage of an emergency shipment during a period improves the performance of the classical (S, T) policy, especially when the fixed and unit emergency shipment costs are small. By investigating the structure of the objective function, we develop an exact algorithm for finding the optimal solution. We also provide a heuristic and an approximation algorithm for the periodic review inventory system problem. The experimental analyses indicate that the heuristic algorithm is computationally more efficient than the approximation algorithm, but in terms of solution quality, the approximation algorithm performs very well. We achieve up to 13% cost savings in the (S, T) policy if we apply the proposed emergency shipment policy. Moreover, our computational results reveal that the approximated solution is often within 0.21% of the globally optimal solution.

Keywords: emergency shipment, inventory, periodic review policy, approximation algorithm

Procedia PDF Downloads 131
4337 New Two-Way Map-Reduce Join Algorithm: Hash Semi Join

Authors: Marwa Hussein Mohamed, Mohamed Helmy Khafagy, Samah Ahmed Senbel

Abstract:

MapReduce is a programming model used to handle and support massive data sets. The rapid increase in data size and big data make analyzing these data the most important issue today. MapReduce is used to analyze data and extract more useful information using two simple programmer-written functions, map and reduce, and it provides load balancing, fault tolerance and high scalability. The most important operation in data analysis is the join, but MapReduce does not support joins directly. This paper explains two two-way MapReduce join algorithms, semi-join and per-split semi-join, and proposes a new algorithm, hash semi-join, which uses a hash table to increase performance by eliminating unused records as early as possible and applies the join using a hash table, rather than the map function, to match the join key with the other table in the second phase. Using hash tables does not affect the memory size because only the matched records from the second table are saved. Our experimental results show that the hash semi-join algorithm gives higher performance than the other two algorithms as the data size increases from 10 million to 500 million records, with running time increasing according to the number of joined records between the two tables.
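
The idea of the hash semi-join (build a hash table of the join keys from one table and use it to drop non-matching records of the other table as early as possible) can be sketched outside Hadoop in a few lines; the record layout and key field below are hypothetical.

```python
# Minimal sketch of the hash semi-join idea outside Hadoop: build a hash table of
# join keys appearing in table R, use it to discard non-matching records of table S
# as early as possible, and keep only the matched S records for the final join.
# Record layout and key field are hypothetical illustration values.

R = [(1, "alice"), (2, "bob"), (4, "dana")]             # (key, payload)
S = [(1, "order-17"), (3, "order-21"), (4, "order-32"), (9, "order-40")]

# phase 1: hash table of keys from R (the semi-join filter)
r_keys = {key for key, _ in R}

# phase 2: probe S against the hash table; unmatched records are dropped early,
# so only matched S records need to be kept in memory for the join
s_matched = [rec for rec in S if rec[0] in r_keys]

# final join using a hash table on the reduced S side
s_by_key = {key: payload for key, payload in s_matched}
joined = [(key, name, s_by_key[key]) for key, name in R if key in s_by_key]
print(joined)    # [(1, 'alice', 'order-17'), (4, 'dana', 'order-32')]
```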

Keywords: MapReduce, Hadoop, semi-join, two-way join

Procedia PDF Downloads 503
4336 Investigation on Mesh Sensitivity of a Transient Model for Nozzle Clogging

Authors: H. Barati, M. Wu, A. Kharicha, A. Ludwig

Abstract:

A transient model for nozzle clogging has been developed and successfully validated against a laboratory experiment. Key steps of clogging are considered: transport of particles by the turbulent flow towards the nozzle wall; interactions between the fluid flow and the nozzle wall, and the adhesion of particles on the wall; and the growth of the clog layer and its interaction with the flow. The current paper investigates the mesh (size and type) sensitivity of the model in both two and three dimensions. It is found that the algorithm for clog growth alone, excluding the flow effect, is insensitive to the mesh type and size, but the calculation including the flow becomes sensitive to the mesh quality. The use of 2D meshes leads to overestimation of the clog growth because the 3D nature of the flow in the boundary layer cannot be properly resolved by a 2D calculation. 3D simulation with a tetrahedral mesh can also lead to erroneous estimation of the clog growth. A mesh-independent result can be achieved with a hexahedral mesh, or at least with triangular prisms (an inflation layer) in the near-wall regions.

Keywords: clogging, continuous casting, inclusion, simulation, submerged entry nozzle

Procedia PDF Downloads 274
4335 Solving Flowshop Scheduling Problems with Ant Colony Optimization Heuristic

Authors: Arshad Mehmood Ch, Riaz Ahmad, Imran Ali Ch, Waqas Durrani

Abstract:

This study deals with the application of the Ant Colony Optimization (ACO) approach to the no-wait flowshop scheduling problem (NW-FSSP). The ACO algorithm developed has been coded in Matlab. The paper covers the detailed steps for applying ACO and focuses on judging the strength of ACO in relation to other solution techniques previously applied to the no-wait flowshop problem. The general-purpose approach was able to find reasonably accurate solutions for almost all the problems under consideration and was able to handle a fairly large spectrum of problems with much reduced CPU effort. Careful scrutiny of the results reveals that the presented algorithm gives better results than other approaches, such as the genetic algorithm and tabu search heuristics, previously applied to the NW-FSSP data sets.

Keywords: no-wait, flowshop, scheduling, ant colony optimization (ACO), makespan

Procedia PDF Downloads 425
4334 Taguchi Method for Analyzing a Flexible Integrated Logistics Network

Authors: E. Behmanesh, J. Pannek

Abstract:

Logistics network design is known as one of the strategic decision problems. As these kinds of problems belong to the category of NP-hard problems, traditional methods fail to find an optimal solution in a short time. In this study, we incorporate reverse flow through an integrated design of a forward/reverse supply chain network, formulated as a mixed integer linear program. This integrated, multi-stage model is enriched by three different delivery paths, which makes the problem more complex. To tackle such an NP-hard problem, a memetic algorithm based on a revised random-path direct encoding method is considered as the solution methodology. Each algorithm has some parameters that need to be investigated to reveal its best performance. In this regard, the Taguchi method is adapted to identify the optimum operating conditions of the proposed memetic algorithm and improve the results. In this study, four factors, namely population size, crossover rate, local search iterations and number of iterations, are considered. Analyzing these parameters and the improvement in results is the focus of this research.

Keywords: integrated logistics network, flexible path, memetic algorithm, Taguchi method

Procedia PDF Downloads 182
4333 The Use of a Rabbit Model to Evaluate the Influence of Age on Excision Wound Healing

Authors: S. Bilal, S. A. Bhat, I. Hussain, J. D. Parrah, S. P. Ahmad, M. R. Mir

Abstract:

Background: Wound healing involves a highly coordinated cascade of cellular and immunological responses over a period of time, including coagulation, inflammation, granulation tissue formation, epithelialization, collagen synthesis and tissue remodeling. Wounds in aged animals heal more slowly than those in younger ones, mainly because of comorbidities that occur with age. The present study examines the influence of age on wound healing. Wounds of 1x1 cm² (100 mm²) were created on the back of each animal. The animals were divided into two groups; one group had animals in the age group of 3-9 months, while the other had animals in the age group of 15-21 months. Materials and Methods: 24 clinically healthy rabbits in the age group of 3-21 months were used as experimental animals and divided into two groups, viz. A and B. All experimental parameters, i.e., the excision wound model, measurement of wound area, protein extraction and estimation, and DNA extraction and estimation, were carried out by standard methods. Results: The parameters studied were wound contraction, hydroxyproline, glucosamine, protein and DNA. A significant increase (p<0.005) in hydroxyproline, glucosamine, protein and DNA and a significant decrease in wound area (p<0.005) were observed in the 3-9 month age group compared to animals in the 15-21 month age group. Wound contraction, together with the hydroxyproline, glucosamine, protein and DNA estimations, suggests that advanced age results in retarded wound healing. Conclusion: The decreased wound contraction and accumulation of hydroxyproline, glucosamine, protein and DNA in group B animals may be associated with the reduction or delay of growth factors because of advancing age.

Keywords: age, wound healing, excision wound, hydroxyproline, glucosamine

Procedia PDF Downloads 648
4332 In Agile Projects - Arithmetic Sequence is More Effective than Fibonacci Sequence to Use for Estimating the Implementation Effort of User Stories

Authors: Khaled Jaber

Abstract:

The estimation of effort in software development is a complex task. The traditional Waterfall approach to developing software systems requires a lot of time to estimate the effort needed to implement user requirements. The Agile approach, however, is currently used more in industry than Waterfall for developing software systems. In Agile, a user requirement is referred to as a user story. Agile teams mostly use the Fibonacci sequence 1, 2, 3, 5, 8, 13, etc. in estimating the effort needed to implement a user story. This work shows through analysis that the arithmetic sequence, e.g., 3, 6, 9, 12, etc., is more effective than the Fibonacci sequence for estimating user stories. The paper proves the effectiveness of the arithmetic sequence over the Fibonacci sequence mathematically and visually.

Keywords: agile, scrum, estimation, Fibonacci sequence

Procedia PDF Downloads 188
4331 Item Response Calibration/Estimation: An Approach to Adaptive E-Learning System Development

Authors: Adeniran Adetunji, Babalola M. Florence, Akande Ademola

Abstract:

In this paper, we give an overview of the concept of an adaptive e-learning system and enumerate the elements of adaptive learning, e.g., a pedagogical framework, multiple learning strategies and pathways, continuous monitoring of and feedback on student performance, and statistical inference to reach a final learning strategy that works for an individual learner through "mass-customization". We briefly highlight the motivation for this new system, proposed for effective learning and teaching. We review the literature on the concept of adaptive e-learning systems and emphasize item response calibration, which is an important approach to developing an adaptive e-learning system. The paper concludes with a justification of item response calibration/estimation for designing a successful and effective adaptive e-learning system.
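
To make the calibration idea concrete, the sketch below uses the two-parameter logistic (2PL) item response function, estimates a learner's ability by maximum likelihood with Newton's method, and reports the Fisher information each item would contribute at that ability (the quantity an adaptive system could use to pick the next item). Item parameters and responses are hypothetical illustration values; the paper does not prescribe this particular model.

```python
import numpy as np

# Minimal sketch of item response machinery behind adaptive e-learning: the
# two-parameter logistic (2PL) item response function and maximum-likelihood
# estimation of a learner's ability theta from a response pattern.

a = np.array([1.2, 0.8, 1.5, 1.0, 0.6])     # item discriminations (hypothetical)
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])   # item difficulties (hypothetical)
u = np.array([1, 1, 1, 0, 0])               # learner's responses (1 = correct)

def p_correct(theta):
    """2PL probability of a correct response to each item."""
    return 1 / (1 + np.exp(-a * (theta - b)))

theta = 0.0
for _ in range(20):                          # Newton-Raphson on the log-likelihood
    p = p_correct(theta)
    grad = np.sum(a * (u - p))
    hess = -np.sum(a**2 * p * (1 - p))
    theta -= grad / hess

p = p_correct(theta)
print(f"estimated ability: theta = {theta:.2f}")
print("item information at this ability:", np.round(a**2 * p * (1 - p), 3))
```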

Keywords: adaptive e-learning system, pedagogical framework, item response, computer applications

Procedia PDF Downloads 580
4330 An Android Application for ECG Monitoring and Evaluation Using Pan-Tompkins Algorithm

Authors: Cebrail Çiflikli, Emre Öner Tartan

Abstract:

In parallel with the rapid worldwide increase of the elderly population and the spread of unhealthy lifestyle habits, there is a significant rise in the number of patients and health problems. The supervision of people who have health problems, and the detection of people who are at potential risk, bring considerable cost to the health system and increase the workload of physicians. To provide an efficient solution to this problem, mobile applications have in recent years shown their potential for wide usage in health monitoring. In this paper, we present an Android mobile application that records and evaluates the ECG signal using the Pan-Tompkins algorithm for QRS detection. The application model includes an alarm mechanism proposed for sending a message containing abnormality and location information to a health supervisor.
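
The Pan-Tompkins processing chain (band-pass filtering, differentiation, squaring, moving-window integration, thresholding) can be sketched as follows on a synthetic ECG-like signal; the fixed threshold used here is a simplification of the adaptive thresholds in the original algorithm and in the application.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

# Minimal sketch of the Pan-Tompkins QRS detection chain: band-pass filter,
# derivative, squaring, moving-window integration, thresholding.  The ECG here is
# a synthetic impulse train and the threshold is a simplified fixed one.

fs = 360                                               # sampling frequency [Hz]
t = np.arange(0, 10, 1 / fs)
ecg = np.zeros_like(t)
ecg[np.arange(12) * int(0.8 * fs) + 50] = 1.0          # R peaks every 0.8 s
ecg = np.convolve(ecg, np.hanning(int(0.08 * fs)), mode="same")   # QRS-like bumps
ecg += 0.1 * np.sin(2 * np.pi * 0.3 * t) + 0.02 * np.random.default_rng(0).normal(size=t.size)

# 1) band-pass filter (5-15 Hz) to emphasise the QRS complex
b, a = butter(2, [5, 15], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, ecg)
# 2) derivative, 3) squaring, 4) moving-window integration (~150 ms)
diff = np.diff(filtered, prepend=filtered[0])
squared = diff ** 2
window = int(0.15 * fs)
mwi = np.convolve(squared, np.ones(window) / window, mode="same")
# 5) simplified thresholding with a 200 ms refractory period
peaks, _ = find_peaks(mwi, height=0.5 * mwi.max(), distance=int(0.2 * fs))
print("detected QRS complexes:", len(peaks), "(12 beats were generated)")
```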

Keywords: Android mobile application, ECG monitoring, QRS detection, Pan-Tompkins Algorithm

Procedia PDF Downloads 223