Search results for: optimal overbooking level

4279 The Mechanistic and Oxidative Study of Methomyl and Parathion Degradation by Fenton Process

Authors: Chihhao Fan, Ming-Chu Liao

Abstract:

The purpose of this study is to investigate the chemical degradation of the organophosphorus pesticide parathion and the carbamate insecticide methomyl in the aqueous phase through the Fenton process. Using a batch Fenton process, the degradation of the two selected pesticides was explored at different pH values, initial concentrations, humic acid concentrations, and Fenton reagent dosages. The Fenton process was found to be effective in degrading both parathion and methomyl. The optimal dosage of Fenton reagents (i.e., the molar concentration ratio of H2O2 to Fe2+) at pH 7 for parathion degradation was 3, which resulted in 50% removal of parathion. Similarly, the optimal dosage for methomyl degradation was 1, resulting in 80% removal of methomyl. This study also found that the presence of humic substances significantly enhanced pesticide degradation by the Fenton process. The mass spectroscopy results showed that the hydroxyl free radical may attack the single bonds with the least energy in the investigated pesticides, forming smaller molecules that are more easily degraded through either physico-chemical or biological processes.

Keywords: Fenton Process, humic acid, methomyl, parathion, pesticides

4278 Self-Cured Alkali Activated Slag Concrete Mixes - An Experimental Study

Authors: Mithun B. M., Mattur C. Narasimhan

Abstract:

Alkali Activated Slag Concrete (AASC) mixes are manufactured by activating ground granulated blast furnace slag (GGBFS) using sodium hydroxide and sodium silicate solutions. The aim of the present experimental research was to investigate the effect of increasing the dosage of sodium oxide (Na2O, in the range of 4 to 8%) and the activator modulus (Ms) (i.e., the SiO2/Na2O ratio, in the range of 0.5 to 1.5) of the alkaline solutions on the workability and strength characteristics of self-cured (air-cured) alkali activated Indian slag concrete mixes. Further, the split tensile and flexural strengths of the optimal mixes were studied for each dosage of Na2O. It is observed that an increase in Na2O concentration increases the compressive, split-tensile and flexural strengths, both at early and later ages, while an increase in Ms decreases the workability of the mixes. An optimal Ms of 1.25 is found at the various Na2O dosages. No significant differences in strength performance were observed between AASCs manufactured with alkali solutions prepared using either potable or de-ionized water.

Keywords: Alkali activated slag, self-curing, strength characteristics.

4277 Performance Assessment of Multi-Level Ensemble for Multi-Class Problems

Authors: Rodolfo Lorbieski, Silvia Modesto Nassar

Abstract:

Many supervised machine learning tasks require decision making across numerous different classes. Multi-class classification has several applications, such as face recognition, text recognition and medical diagnostics. The objective of this article is to analyze an adaptation of Stacking to multi-class problems which combines ensembles within the ensemble itself. For this purpose, a training scheme similar to Stacking was used, but with three levels: the final decision-maker (level 2) is trained by combining the outputs of tree-based pairs of meta-classifiers (level 1) from Bayesian families, which are in turn trained by pairs of base classifiers (level 0) of the same family. This strategy seeks to promote diversity among the ensembles forming the level-2 meta-classifier. Three performance measures were used: (1) accuracy, (2) area under the ROC curve, and (3) time, for three factors: (a) datasets, (b) experiments and (c) levels. To compare the factors, a three-way ANOVA test was executed for each performance measure, considering 5 datasets by 25 experiments by 3 levels. A triple interaction between factors was observed only for time. Accuracy and area under the ROC curve presented similar results, showing a double interaction between level and experiment, as well as with the dataset factor. It was concluded that level 2 had an average performance above the other levels and that the proposed method is especially efficient for multi-class problems when compared to binary problems.

Keywords: Stacking, multi-layers, ensemble, multi-class.
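
A minimal sketch of such a three-level stacking ensemble using scikit-learn; the particular base and meta classifier families, pairings and CV settings are illustrative assumptions, not the paper's exact configuration, and iris stands in for the benchmark datasets.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier, ExtraTreeClassifier

X, y = load_iris(return_X_y=True)            # a small multi-class dataset

# Level 0: pairs of base classifiers of the same family, each pair feeding
# one level-1 meta-classifier from a Bayesian family.
meta1 = StackingClassifier(
    estimators=[("dt", DecisionTreeClassifier(max_depth=3)),
                ("et", ExtraTreeClassifier(max_depth=3))],
    final_estimator=GaussianNB())
meta2 = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50)),
                ("dt2", DecisionTreeClassifier(max_depth=5))],
    final_estimator=GaussianNB())

# Level 2: the final decision-maker combines the level-1 ensembles.
level2 = StackingClassifier(
    estimators=[("m1", meta1), ("m2", meta2)],
    final_estimator=LogisticRegression(max_iter=1000))

print("accuracy: %.3f" % cross_val_score(level2, X, y, cv=5).mean())
```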

4276 Dynamic State Estimation with Optimal PMU and Conventional Measurements for Complete Observability

Authors: M. Ravindra, R. Srinivasa Rao

Abstract:

This paper presents a Generalized Binary Integer Linear Programming (GBILP) method for the optimal allocation of Phasor Measurement Units (PMUs) and for generating a Dynamic State Estimation (DSE) solution with complete observability. The GBILP method is formulated with Zero Injection Bus (ZIB) constraints to reduce the number of PMU placement locations in cases of normal operation and single line contingency. The integration of PMU and conventional measurements is modeled in the DSE process to estimate accurate states of the system. To estimate the dynamic behavior of the power system with the proposed method, a load change of up to 40% is considered at a bus in the power system network. The proposed DSE method is compared with the traditional Weighted Least Squares (WLS) state estimation method in the presence of load changes to show the impact of PMU measurements. MATLAB simulations are carried out on the IEEE 14, 30, 57, and 118 bus systems to prove the validity of the proposed approach.

Keywords: Observability, phasor measurement units, PMU, state estimation, dynamic state estimation, SCADA measurements, zero injection bus.
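
For orientation, a minimal sketch of the classical PMU-placement integer program (minimize the PMU count subject to every bus being observed by a PMU at itself or at a neighbor); the ZIB and contingency constraints of the GBILP formulation are omitted, and the small 7-bus topology is hypothetical.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

edges = [(0, 1), (0, 4), (1, 2), (1, 5), (2, 3), (4, 5), (5, 6)]  # toy grid
n = 7
A = np.eye(n)                      # a PMU at bus i observes bus i ...
for i, j in edges:                 # ... and every adjacent bus
    A[i, j] = A[j, i] = 1

res = milp(c=np.ones(n),           # minimize the number of PMUs
           constraints=LinearConstraint(A, lb=1, ub=np.inf),
           integrality=np.ones(n),  # binary placement variables
           bounds=Bounds(0, 1))
print("PMU buses:", np.flatnonzero(res.x > 0.5))
```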

4275 Generalized Maximal Ratio Combining as a Supra-optimal Receiver Diversity Scheme

Authors: Jean-Pierre Dubois, Rania Minkara, Rafic Ayoubi

Abstract:

Maximal Ratio Combining (MRC) is considered the most complex combining technique as it requires channel coefficients estimation. It results in the lowest bit error rate (BER) compared to all other combining techniques. However the BER starts to deteriorate as errors are introduced in the channel coefficients estimation. A novel combining technique, termed Generalized Maximal Ratio Combining (GMRC) with a polynomial kernel, yields an identical BER as MRC with perfect channel estimation and a lower BER in the presence of channel estimation errors. We show that GMRC outperforms the optimal MRC scheme in general and we hereinafter introduce it to the scientific community as a new “supraoptimal" algorithm. Since diversity combining is especially effective in small femto- and pico-cells, internet-associated wireless peripheral systems are to benefit most from GMRC. As a result, many spinoff applications can be made to IP-based 4th generation networks.

Keywords: Bit error rate, femto-internet cells, generalized maximal ratio combining, signal-to-scattering noise ratio.
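
For reference, a minimal simulation of conventional MRC, the baseline that GMRC generalizes; the polynomial-kernel weighting of GMRC itself is not reproduced here. BPSK over flat Rayleigh fading with perfect channel estimates is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
L, n, snr = 4, 100_000, 10 ** (8 / 10)         # branches, symbols, 8 dB

bits = rng.integers(0, 2, n)
s = 2.0 * bits - 1.0                           # BPSK symbols
h = (rng.normal(size=(L, n)) + 1j * rng.normal(size=(L, n))) / np.sqrt(2)
w = (rng.normal(size=(L, n)) + 1j * rng.normal(size=(L, n))) / np.sqrt(2 * snr)
r = h * s + w                                  # received branch signals

# MRC: weight each branch by the conjugate channel estimate and sum.
z = np.sum(np.conj(h) * r, axis=0)
ber = np.mean((z.real > 0).astype(int) != bits)
print(f"BER with perfect channel estimates: {ber:.5f}")
```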

4274 Segmentation of Cardiac Images by the Force Field Driven Speed Term

Authors: Renato Dedic, Madjid Allili, Roger Lecomte, Abdelhamid Benchakroun

Abstract:

The class of geometric deformable models, so-called level sets, has brought tremendous impact to medical imagery. In this paper we present yet another application of level sets to medical imaging. The method given here modifies the speed term in the standard level-set equation of motion. To do so, we build a potential based on the distance and the gradient of the image under study. In turn, the potential gives rise to the force field F(x, y) = Σ_{(p,q)∈I} ((x, y) − (p, q)) |∇I(p, q)| / |(x, y) − (p, q)|². The direction and intensity of the force field at each point determine the direction of the contour's evolution. The images used to test our method were produced by the Université de Sherbrooke's PET scanners.

Keywords: PET, Cardiac, Heart, Mouse, Geodesic, Geometric, Level Sets, Deformable Models, Edge Detection, Segmentation.
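
A small numpy sketch of the force field defined in the abstract: every pixel (p, q) contributes a vector along (x, y) − (p, q), weighted by the image gradient magnitude and attenuated by the squared distance. The toy binary image stands in for a PET slice.

```python
import numpy as np

img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0                       # toy "ventricle" region

gy, gx = np.gradient(img)
gmag = np.hypot(gx, gy)                       # |grad I(p, q)|

ys, xs = np.nonzero(gmag > 0)                 # only edge pixels contribute

def force(x, y):
    """Force field at point (x, y); x is the column, y the row coordinate."""
    dx, dy = x - xs, y - ys                   # (x, y) - (p, q)
    d2 = dx ** 2 + dy ** 2
    d2[d2 == 0] = np.inf                      # a point exerts no force on itself
    w = gmag[ys, xs] / d2                     # gradient weight over squared distance
    return np.array([np.sum(w * dx), np.sum(w * dy)])

print(force(10.0, 32.0))                      # field at a sample point
```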

4273 Optimal Placement and Sizing of Distributed Generation in Microgrid for Power Loss Reduction and Voltage Profile Improvement

Authors: Ferinar Moaidi, Mahdi Moaidi

Abstract:

Environmental issues and the ever-increasing demand for electrical energy make it necessary to have distributed generation (DG) resources in the power system. In this research, the allocation and sizing of DGs are used to realize the goals of reducing losses and improving the voltage profile in a microgrid. A Genetic Algorithm (GA), from the array of artificial intelligence methods, is proposed for solving the problem, and the algorithm is implemented on the IEEE 33-bus network. The study is presented in two scenarios: first, the placement and sizing of DGs is carried out to reduce losses and improve the voltage profile. On the other hand, decisions made under a single-level load assumption are not universally valid for all load levels. Therefore, in this study, load modelling is also performed and the results are presented for the multi-level load state.

Keywords: Distributed generation, genetic algorithm, microgrid, load modelling, loss reduction, voltage improvement.
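
A minimal GA sketch of the placement-and-sizing search; the loss_kw function below is a hypothetical stand-in for a real load-flow loss calculation on the IEEE 33-bus feeder, and the bounds, operators and rates are illustrative rather than the paper's.

```python
import random

N_BUS, N_DG, POP, GEN = 33, 2, 40, 100
random.seed(1)

def loss_kw(plan):                        # plan = [(bus, size_MW), ...]
    # Placeholder objective: real code would run a load flow here.
    return sum(0.1 * (b - 16) ** 2 + 50 * (s - 1.0) ** 2 for b, s in plan)

def individual():
    return [(random.randrange(1, N_BUS), random.uniform(0.1, 2.0))
            for _ in range(N_DG)]

def mutate(ind):
    i = random.randrange(N_DG)            # perturb one DG's bus and size
    _, s = ind[i]
    ind[i] = (random.randrange(1, N_BUS),
              min(2.0, max(0.1, s + random.gauss(0.0, 0.2))))

pop = [individual() for _ in range(POP)]
for _ in range(GEN):
    pop.sort(key=loss_kw)                 # elitist selection of the best half
    pop = pop[:POP // 2]
    kids = [random.choice(pop)[:1] + random.choice(pop)[1:]  # 1-point crossover
            for _ in range(POP // 2)]
    for kid in kids:
        if random.random() < 0.3:
            mutate(kid)
    pop += kids

print("best plan (bus, MW):", min(pop, key=loss_kw))
```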

4272 Optimization of Machining Parametric Study on Electrical Discharge Machining

Authors: Rakesh Prajapati, Purvik Patel, Hardik Patel

Abstract:

Productivity and quality are two important aspects that have become great concerns in today's competitive global market. Every production/manufacturing unit mainly focuses on these areas in relation to the process, as well as the product developed. The electrical discharge machining (EDM) process is still largely an experience-driven process, wherein the selected parameters are often far from optimal, while selecting optimal parameters experimentally is costly and time consuming. The Material Removal Rate (MRR) during the process has been considered a productivity estimate, with the aim to maximize it, while minimizing surface roughness is taken as the most important output parameter. These two inherently conflicting requirements have been satisfied simultaneously by selecting an optimal process environment (optimal parameter setting). The objective function is obtained by regression analysis and analysis of variance, and is then optimized using a Genetic Algorithm technique. The model is shown to be effective; MRR and surface roughness are improved using the optimized machining parameters.

Keywords: Material removal rate, TWR, OC, DOE, ANOVA, MINITAB.
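
A hedged sketch of the optimization step: quadratic regression-style models for MRR and roughness (the coefficients are invented placeholders, not the paper's fitted ones), combined into one weighted objective. SciPy's differential evolution stands in here for the paper's genetic algorithm as a readily available evolutionary optimizer.

```python
from scipy.optimize import differential_evolution

def mrr(x):                 # x = [discharge current (A), pulse-on time (us)]
    I, Ton = x
    return 2.0 * I + 0.05 * Ton - 0.02 * I ** 2

def roughness(x):
    I, Ton = x
    return 1.0 + 0.15 * I + 0.002 * Ton

# Maximize MRR while minimizing roughness via a weighted scalar objective.
objective = lambda x: -mrr(x) + 5.0 * roughness(x)
res = differential_evolution(objective, bounds=[(2, 20), (10, 200)], seed=0)
print("optimal [I, Ton]:", res.x.round(2),
      "-> MRR %.1f, Ra %.2f" % (mrr(res.x), roughness(res.x)))
```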

4271 Optimal Design of Multimachine Power System Stabilizers Using Improved Multi-Objective Particle Swarm Optimization Algorithm

Authors: Badr M. Alshammari, T. Guesmi

Abstract:

In this paper, the concept of non-dominated sorting multi-objective particle swarm optimization with local search (NSPSO-LS) is presented for the optimal design of multimachine power system stabilizers (PSSs). The controller design is formulated as an optimization problem in order to shift the system electromechanical modes into a pre-specified region of the s-plane. A composite set of objective functions comprising the damping factor and the damping ratio of the undamped and lightly damped electromechanical modes is considered. The performance of the proposed optimization algorithm is verified on the 3-machine 9-bus system. Simulation results based on eigenvalue analysis and nonlinear time-domain simulation show the potential and superiority of the NSPSO-LS algorithm in tuning PSSs over a wide range of loading conditions and large disturbances, compared to the classic PSO technique and genetic algorithms.

Keywords: Multi-objective optimization, particle swarm optimization, power system stabilizer, low frequency oscillations.
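
A minimal sketch of the non-dominated sorting step at the heart of NSPSO-type algorithms; the objective vectors below are toy stand-ins for per-particle scores derived from the damping factor and damping ratio (both to be minimized in this toy convention). The PSO update and local search of the paper are not reproduced.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(front):
    return [p for p in front
            if not any(dominates(q, p) for q in front if q is not p)]

swarm_scores = [(0.8, 0.30), (0.5, 0.45), (0.9, 0.10), (0.6, 0.35), (0.5, 0.50)]
print(non_dominated(swarm_scores))   # the Pareto-optimal particles
```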

4270 A Comparative Study of Fine Grained Security Techniques Based on Data Accessibility and Inference

Authors: Azhar Rauf, Sareer Badshah, Shah Khusro

Abstract:

This paper analyzes different techniques for the fine grained security of relational databases with respect to two variables: data accessibility and inference. Data accessibility measures the amount of data available to users after applying a security technique to a table. Inference is the proportion of information leakage after suppressing a cell containing secret data. A row containing a suppressed secret cell can become a security threat if an intruder generates useful information from the related visible information in the same row. This paper measures the data accessibility and inference associated with row, cell, and column level security techniques. Cell level security offers the greatest data accessibility, as it suppresses secret data only; on the other hand, it carries a high probability of inference. Row and column level security techniques have the least data accessibility and inference. This paper introduces the cell plus innocent security technique, which applies cell level security but also suppresses some innocent data to mislead an intruder, so that a suppressed cell does not necessarily contain secret data. Four variations of the technique, namely cell plus innocent 1/4, 2/4, 3/4, and 4/4, have been introduced to suppress innocent data equal to 1/4, 2/4, 3/4, and 4/4 of the amount of true secret data inside the database. Results show that the new technique offers better control over data accessibility and inference as compared to the state-of-the-art security techniques. This paper further discusses which combinations of techniques can be used together, and shows that the cell plus innocent 1/4, 2/4, and 3/4 techniques can be used as a replacement for cell level security.

Keywords: Fine Grained Security, Data Accessibility, Inference, Row, Cell, Column Level Security.
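
A toy sketch of the "cell plus innocent" idea: suppress every secret cell, then suppress additional innocent cells amounting to a chosen fraction of the secret-cell count, so that suppression no longer marks a cell as secret. The table layout and the set of secret cells are illustrative.

```python
import random

random.seed(7)
table = [[f"r{r}c{c}" for c in range(4)] for r in range(6)]
secret = {(0, 2), (3, 1), (4, 3)}                 # hypothetical secret cells

def suppress(table, secret, innocent_fraction):
    masked = [row[:] for row in table]
    for r, c in secret:                           # plain cell level security
        masked[r][c] = "***"
    pool = [(r, c) for r in range(len(table))
            for c in range(len(table[0])) if (r, c) not in secret]
    k = round(innocent_fraction * len(secret))    # e.g. 1/4, 2/4, 3/4, 4/4
    for r, c in random.sample(pool, k):           # decoy innocent cells
        masked[r][c] = "***"
    return masked

for row in suppress(table, secret, 2 / 4):        # "cell plus innocent 2/4"
    print(row)
```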

4269 Architectural Acoustic Modeling for Predicting Reverberation Time in Room Acoustic Design Using Multiple Criteria Decision Making Analysis

Authors: C. Ardil

Abstract:

This paper presents architectural acoustic modeling to estimate reverberation time in room acoustic design using multiple criteria decision making analysis. First, fundamental decision criteria were determined for evaluating reverberation time in the room acoustic design problem. Then, the proposed model was applied to a practical decision problem to evaluate and select the optimal room acoustic design. Finally, the optimal room acoustic designs were analyzed and ranked using a multiple criteria decision making analysis method.

Keywords: Architectural acoustics, room acoustics, architectural acoustic modeling, reverberation time, room acoustic design, multiple criteria decision making analysis, decision analysis, MCDMA

4268 Robust Camera Calibration using Discrete Optimization

Authors: Stephan Rupp, Matthias Elter, Michael Breitung, Walter Zink, Christian Küblbeck

Abstract:

Camera calibration is an indispensable step for augmented reality or image guided applications where quantitative information is to be derived from the images. Usually, a camera calibration is obtained by taking images of a special calibration object and extracting the image coordinates of the projected calibration marks, enabling the calculation of the projection from the 3D world coordinates to the 2D image coordinates. Such a procedure exhibits typical steps, including feature point localization in the acquired images, camera model fitting, correction of the distortion introduced by the optics, and finally an optimization of the model's parameters. In this paper we propose to extend this list by a further step concerning the identification of the optimal subset of images yielding the smallest overall calibration error. For this, we present a Monte Carlo based algorithm, along with a deterministic extension, that automatically determines the images yielding an optimal calibration. Finally, we present results proving that the calibration can be significantly improved by automated image selection.

Keywords: Camera Calibration, Discrete Optimization, Monte Carlo Method.
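
A hedged sketch of the Monte Carlo image-subset search: repeatedly calibrate on random subsets of the captured views and keep the subset with the lowest RMS reprojection error. The names obj_pts and img_pts are assumptions, standing for the per-image 3D calibration-mark coordinates and their detected 2D projections; the deterministic extension of the paper is not reproduced.

```python
import random

import cv2

def best_subset(obj_pts, img_pts, image_size, subset_size=10, trials=200):
    """Return the indices of the images giving the lowest calibration error."""
    best_err, best_idx = float("inf"), None
    for _ in range(trials):
        idx = random.sample(range(len(img_pts)), subset_size)
        rms, K, dist, _, _ = cv2.calibrateCamera(
            [obj_pts[i] for i in idx], [img_pts[i] for i in idx],
            image_size, None, None)
        if rms < best_err:            # smaller overall reprojection error
            best_err, best_idx = rms, idx
    return best_idx, best_err
```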

4267 Artificial Neural Network Approach for Inventory Management Problem

Authors: Govind Shay Sharma, Randhir Singh Baghel

Abstract:

The stock management of raw materials and finished goods is a significant issue for industries in fulfilling customer demand. Optimization of inventory strategies is crucial to enhancing customer service, reducing lead times and costs, and meeting market demand. This paper suggests an approach to predicting the optimum stock level by utilizing past stock data and forecasting the required quantities. An Artificial Neural Network (ANN) is utilized to determine the optimal value; the objective of this paper is to discuss an optimized ANN that can find the best solution for the inventory model. A k-means algorithm is employed to create homogeneous groups of items; these groups exhibit similar characteristics or attributes that make them suitable for being managed under uniform inventory control policies. The paper proposes a method that uses the neural fit algorithm to control the cost of inventory.

Keywords: Artificial Neural Network, inventory management, optimization, distributor center.
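
A minimal sketch of the described pipeline: k-means forms homogeneous item groups, then a small neural network predicts next-period stock from recent demand history. All data here are synthetic placeholders, and scikit-learn stands in for the paper's neural fit tooling.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
item_features = rng.normal(size=(50, 3))       # e.g. cost, volume, lead time
groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(item_features)
print("items per group:", np.bincount(groups))

# Predict next-period demand (a proxy for required stock) from a 4-period window.
demand = np.abs(rng.normal(100, 20, size=(50, 60)))
X = np.array([demand[i, t:t + 4] for i in range(50) for t in range(55)])
y = np.array([demand[i, t + 4] for i in range(50) for t in range(55)])

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)
print("suggested stock for history [90, 110, 95, 105]:",
      round(net.predict([[90, 110, 95, 105]])[0], 1))
```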

4266 Stability of Electrical Motor Supplied by a Five Level Inverter

Authors: Kelaiaia Mounia Samira, Labar Hocine, Bounaya Kamel, Kelaiaia Samia

Abstract:

The development of power electronics has allowed increasing the precision and reliability of electric drives, thanks to adjustable inverters such as the Pulse Width Modulation (PWM) five-level inverter, which is the object of study in this article. The authors treat the relation between the control law adopted for a given system and the oscillations of the electrical and mechanical parameters, whose tolerance depends on the process in which they are integrated (paper factories, lifting of heavy loads, etc.). Thus, the best choice of the regulation indices allows us to achieve stable and safe drive operation without additional investment (management of existing equipment).

Keywords: multi level inverter, PWM, Harmonics, oscillation, control
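
As a generic illustration (not the paper's modulation scheme), a minimal sketch of one standard way to generate five-level PWM: level-shifted (phase-disposition) carriers, where the sinusoidal reference is compared against four stacked triangular carriers to yield output levels -2 to +2, later scaled by the DC-link voltage. Frequencies and the modulation index are illustrative.

```python
import numpy as np

t = np.linspace(0, 0.02, 20000)                    # one 50 Hz fundamental cycle
ref = 0.9 * np.sin(2 * np.pi * 50 * t)             # modulation index 0.9
fc = 2000                                          # 2 kHz carriers
tri = 2 * np.abs((t * fc) % 1 - 0.5) * 0.5         # triangle wave in [0, 0.5]

# Four carriers stacked across [-1, 1] in contiguous bands of width 0.5.
carriers = [tri + off for off in (0.5, 0.0, -0.5, -1.0)]
level = sum((ref > c).astype(int) for c in carriers) - 2
print("output levels used:", sorted(set(level)))   # expect -2, -1, 0, 1, 2
```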

4265 Quantifying the Second-Level Digital Divide on Sub-National Level

Authors: Vladimir Korovkin, Albert Park, Evgeny Kaganer

Abstract:

The digital divide, the gap in access to the world of digital technologies and the socio-economic opportunities that they create, is an important phenomenon of the 21st century. This gap may exist between countries, regions within a country, or socio-demographic groups, creating classes of digital "haves" and "have-nots". While the first-level divide (the difference in opportunities to access digital networks) has been shown to diminish with time, the issues of the second-level divide (the difference in skills and usage of digital systems) and the third-level divide (the difference in effects obtained from digital technology) may grow. The paper offers a systematic review of the literature on the measurement of the digital divide, noting a certain conceptual stagnation due to the lack of effective instruments that would capture the complex nature of the phenomenon. As a result, many important concepts do not receive the empirical exploration they deserve. As a solution, the paper suggests a composite Digital Life Index that studies digital supply and demand separately across seven independent dimensions providing for 14 subindices. The Index is based on Internet-borne data, a distinction from traditional research approaches that rely on official statistics or surveys. The application of the model to the study of the digital divide between Russian regions and between cities in China has brought promising results. The paper advances the existing methodological literature on the second-level digital divide and can also inform practical decision-making regarding strategies of national and regional digital development.

Keywords: Digital transformation, second-level digital divide, composite index, digital policy.

4264 Discrete Wavelet Transform Decomposition Level Determination Exploiting Sparseness Measurement

Authors: Lei Lei, Chao Wang, Xin Liu

Abstract:

Discrete wavelet transform (DWT) has been widely adopted in biomedical signal processing for denoising, compression and so on. Choosing a suitable decomposition level (DL) for the DWT is of paramount importance to its performance. In this paper, we propose to exploit the sparseness of the transformed signals to determine the appropriate DL. Simulation results show that the sparseness of the transformed signals increases with increasing DL, and additional Monte Carlo simulation results verify the effectiveness of the sparseness measure in determining the DL.

Keywords: Sparseness, DWT, decomposition level, ECG.
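
A hedged sketch of the idea: decompose a signal at increasing DWT levels and track a sparseness measure of the transform coefficients. The Hoyer-style measure used here (based on the l1/l2 norm ratio) is one common choice; the paper's exact measure may differ, and the test signal is synthetic rather than a real ECG.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
ecg_like = np.sin(2 * np.pi * 5 * t) + 0.2 * rng.normal(size=t.size)

def sparseness(coeff_arrays):
    """Hoyer sparseness in [0, 1]: 1 means a single nonzero coefficient."""
    c = np.concatenate([np.abs(a).ravel() for a in coeff_arrays])
    n = c.size
    return (np.sqrt(n) - c.sum() / np.sqrt((c ** 2).sum())) / (np.sqrt(n) - 1)

for level in range(1, 7):
    coeffs = pywt.wavedec(ecg_like, "db4", level=level)
    print(level, round(sparseness(coeffs), 4))   # tends to rise with level
```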

4263 Optimal Transmission Network Usage and Loss Allocation Using Matrices Methodology and Cooperative Game Theory

Authors: Baseem Khan, Ganga Agnihotri

Abstract:

Restructuring of the electricity supply industry introduced many issues, such as transmission pricing, transmission loss allocation and congestion management, and many methodologies and algorithms have been proposed to address them. In this paper a power flow tracing based method is proposed, which uses a matrices methodology for transmission usage and loss allocation among generators and demands. The method provides loss allocation in a direct way because all the computation has already been done for usage allocation. The proposed method is simple and easy to implement in a large power system, and it is computationally light because it requires matrix inversion only a single time. After usage and loss allocation, cooperative game theory is applied to the results to find efficient economic signals; the nucleolus and Shapley value approaches are used for optimal allocation of the results. Results are shown for the IEEE 6-bus and IEEE 14-bus systems.

Keywords: Modified Kirchhoff Matrix, Power flow tracing, Transmission Pricing, Transmission Loss Allocation.
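
A minimal sketch of a Shapley-value cost allocation, the game-theoretic step mentioned above; the 3-player characteristic function v below is invented for illustration, not taken from the paper's IEEE test systems.

```python
from itertools import permutations
from math import factorial

players = ["G1", "G2", "G3"]
v = {frozenset(): 0, frozenset({"G1"}): 10, frozenset({"G2"}): 14,
     frozenset({"G3"}): 6, frozenset({"G1", "G2"}): 20,
     frozenset({"G1", "G3"}): 15, frozenset({"G2", "G3"}): 18,
     frozenset({"G1", "G2", "G3"}): 24}

shapley = {p: 0.0 for p in players}
for order in permutations(players):       # average marginal contributions
    seen = set()
    for p in order:
        shapley[p] += v[frozenset(seen | {p})] - v[frozenset(seen)]
        seen.add(p)
shapley = {p: s / factorial(len(players)) for p, s in shapley.items()}
print(shapley)                            # allocations sum to v(N) = 24
```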

4262 Performance Analysis of OQSMS and MDDR Scheduling Algorithms for IQ Switches

Authors: K. Navaz, Kannan Balasubramanian

Abstract:

Due to the increasing growth in internet users, emerging multicast applications are growing day by day, and there is a requisite for the design of high-speed switches/routers. A huge amount of effort has gone into the research area of multicast switch fabric design and algorithms. Different traffic scenarios are the influencing factors which affect the throughput and delay of a switch, and pointer-based multicast scheduling algorithms do not perform well under non-uniform traffic conditions. In this work, the performance of the switch is analyzed by applying the advanced multicast scheduling algorithms OQSMS (Optimal Queue Selection Based Multicast Scheduling Algorithm), MDDR (Multicast Due Date Round-Robin Scheduling Algorithm) and MDRR (Multicast Dual Round-Robin Scheduling Algorithm). The results show that OQSMS achieves better switching performance than the other algorithms under uniform, non-uniform and bursty traffic conditions, as it estimates the optimal queue in each time slot and thereby achieves the maximum possible throughput.

Keywords: Multicast, Switch, Delay, Scheduling.

4261 Analysis of Surface Hardness, Surface Roughness, and Near Surface Microstructure of AISI 4140 Steel Worked with Turn-Assisted Deep Cold Rolling Process

Authors: P. R. Prabhu, S. M. Kulkarni, S. S. Sharma, K. Jagannath, Achutha Kini U.

Abstract:

In the present study, response surface methodology has been used to optimize the turn-assisted deep cold rolling process of AISI 4140 steel. A regression model is developed to predict surface hardness and surface roughness using response surface methodology and a central composite design. In developing the predictive model, the deep cold rolling force, ball diameter, initial roughness of the workpiece, and number of tool passes are considered as model variables. The rolling force and ball diameter are the significant factors for surface hardness, while the ball diameter and number of tool passes are found to be significant for surface roughness. The predicted surface hardness and surface roughness values, and the subsequent verification experiments under the optimal operating conditions, confirmed the validity of the predicted model. The absolute average error between the experimental and predicted values at the optimal combination of parameter settings is calculated as 0.16% for surface hardness and 1.58% for surface roughness. Using the optimal processing parameters, the surface hardness is improved from 225 to 306 HV, an increase in near-surface hardness of about 36%, and the surface roughness is improved from 4.84 µm to 0.252 µm, a decrease of about 95%. The depth of compression is found to be more than 300 µm from the microstructure analysis, which correlates with the results obtained from the microhardness measurements. A Taylor Hobson Talysurf tester, a micro Vickers hardness tester, optical microscopy and an X-ray diffractometer are used to characterize the modified surface layer.

Keywords: Surface hardness, response surface methodology, microstructure, central composite design, deep cold rolling, surface roughness.
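
A hedged sketch of the RSM step: fit a second-order (quadratic) response surface to central-composite-design-style runs and search it for an optimum. The factor names follow the paper, but the data points and coefficients are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Coded factors: [rolling force, ball diameter, initial roughness, passes]
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 4))                       # CCD-like runs
hardness = (260 + 25 * X[:, 0] + 10 * X[:, 1] - 6 * X[:, 0] ** 2
            + rng.normal(0, 2, 30))                        # synthetic response

poly = PolynomialFeatures(2)                               # quadratic surface
model = LinearRegression().fit(poly.fit_transform(X), hardness)

grid = rng.uniform(-1, 1, size=(5000, 4))                  # crude optimum search
pred = model.predict(poly.fit_transform(grid))
print("predicted max hardness: %.1f HV at %s"
      % (pred.max(), grid[pred.argmax()].round(2)))
```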

4260 H∞ Fuzzy Integral Power Control for DFIG Wind Energy System

Authors: N. Chayaopas, W. Assawinchaichote

Abstract:

In order to maximize the energy captured from the wind, controlling the doubly fed induction generator (DFIG) to extract optimal power from the wind, and controlling the generator speed and output electrical power, are of great importance in wind energy systems due to the nonlinear behavior of wind velocities. This paper proposes the design of a control scheme for power control of a wind energy system via an H∞ fuzzy integral controller. First, the nonlinear system is represented as a TS fuzzy model, and the control design uses a linear matrix inequality approach to derive the optimal controller achieving an H∞ performance. The proposed control method extracts the maximum energy from the wind and overcomes the nonlinearity and disturbance problems of the wind energy system, giving good tracking performance and high-efficiency power output from the DFIG.

Keywords: H∞ fuzzy integral control, linear matrix inequality, wind energy system, doubly fed induction generator (DFIG).

4259 A New Approach to Image Segmentation via Fuzzification of Rényi Entropy of Generalized Distributions

Authors: Samy Sadek, Ayoub Al-Hamadi, Axel Panning, Bernd Michaelis, Usama Sayed

Abstract:

In this paper, we propose a novel approach to image segmentation via fuzzification of the Rényi Entropy of Generalized Distributions (REGD). The fuzzy REGD is used to precisely measure the structural information of the image and to locate the optimal threshold desired by segmentation. The proposed approach draws upon the postulation that the optimal threshold concurs with the maximum information content of the distribution. The contributions of the paper are as follows: initially, the fuzzy REGD is introduced as a measure of the spatial structure of the image; then, we propose an efficient entropic segmentation approach using the fuzzy REGD. Although the proposed approach belongs to the family of entropic segmentation approaches (which are commonly applied to grayscale images), it is adapted to be viable for segmenting color images. Lastly, diverse experiments on real images that show the superior performance of the proposed method are carried out.

Keywords: Entropy of generalized distributions, entropy fuzzification, entropic image segmentation.
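
A minimal sketch of plain (non-fuzzy) Rényi entropic thresholding, the family this work builds on: choose the gray level that maximizes the summed Rényi entropies of the background and foreground distributions. The fuzzification and generalized-distribution aspects of the paper are not reproduced, and the bimodal histogram is synthetic.

```python
import numpy as np

def renyi_threshold(hist, alpha=0.5):
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p1, p2 = p[:t].sum(), p[t:].sum()
        if p1 <= 0 or p2 <= 0:
            continue
        # Renyi entropy: H_alpha = log(sum(q_i ** alpha)) / (1 - alpha)
        h1 = np.log(np.sum((p[:t] / p1) ** alpha)) / (1 - alpha)
        h2 = np.log(np.sum((p[t:] / p2) ** alpha)) / (1 - alpha)
        if h1 + h2 > best_h:
            best_t, best_h = t, h1 + h2
    return best_t

rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(70, 12, 6000), rng.normal(170, 15, 4000)])
hist, _ = np.histogram(img, bins=256, range=(0, 256))
print("optimal threshold:", renyi_threshold(hist))
```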

4258 Forecasting Unemployment Rate in Selected European Countries Using Smoothing Methods

Authors: Ksenija Dumičić, Anita Čeh Časni, Berislav Žmuk

Abstract:

The aim of this paper is to select the most accurate forecasting method for predicting future values of the unemployment rate in selected European countries. To this end, several forecasting techniques adequate for forecasting time series with a trend component were selected, namely double exponential smoothing (also known as Holt's method) and the Holt-Winters' method, which accounts for trend and seasonality. The results of the empirical analysis showed that the optimal model for forecasting the unemployment rate in Greece was the Holt-Winters' additive method. In the case of Spain, according to MAPE, the optimal model was the double exponential smoothing model. Furthermore, for Croatia and Italy the best forecasting model for the unemployment rate was the Holt-Winters' multiplicative model, whereas in the case of Portugal the best model was again double exponential smoothing. Our findings are in line with the European Commission's unemployment rate estimates.

Keywords: European Union countries, exponential smoothing methods, forecast accuracy, unemployment rate.
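
A minimal sketch of the two model families compared, fitted with statsmodels on a synthetic monthly series (the paper's Eurostat data are not reproduced) and scored by MAPE on a holdout, mirroring the comparison described above.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing, Holt

rng = np.random.default_rng(0)
months = np.arange(120)
series = (12 + 0.03 * months                       # trend
          + 1.5 * np.sin(2 * np.pi * months / 12)  # seasonality
          + rng.normal(0, 0.3, 120))
train, test = series[:108], series[108:]

holt = Holt(train).fit()                           # double exponential smoothing
hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                          seasonal_periods=12).fit()

mape = lambda f: np.mean(np.abs((test - f) / test)) * 100
print("Holt MAPE: %.2f%%" % mape(holt.forecast(12)))
print("Holt-Winters MAPE: %.2f%%" % mape(hw.forecast(12)))
```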

4257 Analysis of Noise Level Effects on Signal-Averaged Electrocardiograms

Authors: Chun-Cheng Lin

Abstract:

Noise level has critical effects on the diagnostic performance of the signal-averaged electrocardiogram (SAECG), because the true starting and end points of the QRS complex can be masked by residual noise and are sensitive to the noise level. Several studies and commercial machines have used a fixed number of heart beats (typically between 200 and 600 beats) or a predefined noise level (typically between 0.3 and 1.0 μV) in each of the X, Y and Z leads to perform SAECG analysis. However, different criteria or methods used to perform SAECG cause discrepancies in the noise levels among study subjects. According to the recommendations of the 1991 ESC, AHA and ACC Task Force Consensus Document on the use of SAECG, the determination of QRS onset and offset is related closely to the mean and standard deviation of the noise sample. Hence, this study performs SAECG using consistent root-mean-square (RMS) noise levels among study subjects and analyzes the noise level effects on the SAECG. The study also evaluates the differences between normal subjects and chronic renal failure (CRF) patients in the time-domain SAECG parameters. The study subjects comprised 50 normal Taiwanese subjects and 20 CRF patients. During signal-averaged processing, different RMS noise levels were adjusted to evaluate their effects on three time-domain parameters: (1) filtered total QRS duration (fQRSD), (2) RMS voltage of the last 40 ms of the QRS (RMS40), and (3) duration of the low-amplitude signals below 40 μV (LAS40). The results demonstrate that reducing the RMS noise level increases fQRSD and LAS40, decreases RMS40, and further increases the differences in fQRSD and RMS40 between normal subjects and CRF patients. The SAECG may also become abnormal due to the reduction of the RMS noise level. In conclusion, it is essential to establish diagnostic criteria for SAECG using consistent RMS noise levels to reduce noise level effects.

Keywords: Signal-averaged electrocardiogram, ventricular late potentials, chronic renal failure, noise level effects.
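
A toy sketch of two of the time-domain parameters defined above, computed on an already filtered, signal-averaged QRS complex: RMS40 is the RMS voltage of the terminal 40 ms, and LAS40 the duration of the terminal signal below 40 μV. The sampling rate and waveform are synthetic stand-ins.

```python
import numpy as np

fs = 1000                                        # Hz
t = np.arange(0, 0.120, 1 / fs)                  # a 120 ms "filtered QRS"
qrs_uv = 1200 * np.exp(-((t - 0.05) / 0.012) ** 2) + 25 * (t > 0.09)

n40 = int(0.040 * fs)
rms40 = np.sqrt(np.mean(qrs_uv[-n40:] ** 2))     # RMS of the terminal 40 ms

below = np.abs(qrs_uv) < 40                      # low-amplitude samples
las40_ms = 0.0
for v in below[::-1]:                            # count back from QRS offset
    if not v:                                    # stop at the first sample >= 40 uV
        break
    las40_ms += 1000 / fs
print(f"RMS40 = {rms40:.1f} uV, LAS40 = {las40_ms:.0f} ms")
```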

4256 Exploiting Two Intelligent Models to Predict Water Level: A Field Study of Urmia Lake, Iran

Authors: Shahab Kavehkar, Mohammad Ali Ghorbani, Valeriy Khokhlov, Afshin Ashrafzadeh, Sabereh Darbandi

Abstract:

Water level forecasting using records of past time series is of importance in water resources engineering and management. For example, water level affects groundwater tables in low-lying coastal areas, as well as the hydrological regimes of some coastal rivers; a reliable prediction of sea-level variations is therefore required in coastal engineering and hydrologic studies. During the past two decades, approaches based on Genetic Programming (GP) and Artificial Neural Networks (ANN) have been developed. In the present study, GP is used to forecast daily water level variations for a set of time intervals using observed water levels. The measurements from a single tide gauge at Urmia Lake, Northwest Iran, were used to train and validate the GP approach for the period from January 1997 to July 2008. Two statistics, the root mean square error and the correlation coefficient, are used to verify the model by comparing its outputs with the corresponding outputs of an Artificial Neural Network model. The results show that both artificial intelligence methodologies are satisfactory and can be considered as alternatives to conventional harmonic analysis.

Keywords: Water-Level variation, forecasting, artificial neural networks, genetic programming, comparative analysis.
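
A minimal sketch of the verification statistics named above, comparing a forecast series against observed water levels; the values are placeholders, not the Urmia Lake gauge data.

```python
import numpy as np

observed = np.array([1273.9, 1274.1, 1274.0, 1273.8, 1273.7])  # m a.s.l.
gp_pred = np.array([1273.8, 1274.2, 1274.0, 1273.9, 1273.6])

rmse = np.sqrt(np.mean((observed - gp_pred) ** 2))
r = np.corrcoef(observed, gp_pred)[0, 1]
print(f"RMSE = {rmse:.3f} m, correlation = {r:.3f}")
```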

4255 Optimum Parameter of a Viscous Damper for Seismic and Wind Vibration

Authors: Soltani Amir, Hu Jiaxin

Abstract:

Determination of the optimal parameters of a passive control device is the primary objective of this study. Expanding the use of control devices in wind and earthquake hazard reduction has led to the development of various control systems. The advantage of the nonlinear characteristics of a passive control device and the optimal control method using the LQR algorithm are explained in this study. Finally, this paper introduces a simple approach to determine the optimum parameters of a nonlinear viscous damper for vibration control of structures. A MATLAB program is used to produce the dynamic motion of the structure, considering the stiffness matrix of the SDOF frame and the nonlinear damping effect. This study concludes that the proposed variable damping system performs better in controlling the system response than a linear damping system. Also, according to the energy dissipation graph, the total energy loss is greater in the nonlinear damping system than in the other systems.

Keywords: Passive Control System, Damping Devices, Viscous Dampers, Control Algorithm.
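
A hedged sketch of the LQR step mentioned above for an SDOF structure: solve the continuous-time algebraic Riccati equation for the optimal state-feedback gain. The mass, damping and stiffness values are illustrative, and the paper's nonlinear viscous-damper model is not reproduced.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

m, c, k = 1000.0, 200.0, 4.0e5                  # kg, N*s/m, N/m
A = np.array([[0.0, 1.0], [-k / m, -c / m]])    # states: [displacement, velocity]
B = np.array([[0.0], [1.0 / m]])                # control force input

Q = np.diag([1e6, 1e2])                         # penalize drift most
R = np.array([[1e-6]])                          # control effort weight

P = solve_continuous_are(A, B, Q, R)            # Riccati solution
K = np.linalg.solve(R, B.T @ P)                 # optimal feedback u = -K x
print("LQR gain:", K)
```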

4254 Model Development for Allocation of Raw Material in Timber Processing Industry in Indonesia

Authors: Muh. Hisjam, Nancy Oktyajati, Wakhid A. Jauhari, Wahyudi Sutopo

Abstract:

This research develops a raw material allocation model for the timber processing industry at Perum Perhutani Unit I, Central Java, Indonesia. The model can be used to determine the quantity of timber allocated between chains in the supply chain and to select suppliers, considering log price and distance. In determining the allocated quantities, the model considers the optimal inventory in each chain, where the optimal inventory is determined based on the demand forecast, the capacity and the safety stock. The allocation problem is solved by developing a linear programming model that minimizes the total purchase, transportation and storage costs at each chain. The results of numerical examples show that the proposed model can generate savings of 20.84% in purchase cost and selects suppliers at closer distances.

Keywords: Allocation model, linear programming, purchase costs, storage costs, suppliers, transportation costs.
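
A minimal PuLP sketch of the allocation idea: choose shipment quantities from suppliers to a processing plant to minimize purchase plus transportation plus storage cost while meeting an inventory target. The supplier names, prices, distances and the target are illustrative, not the paper's data.

```python
import pulp

suppliers = {"KPH_A": {"price": 95, "dist": 40, "cap": 600},
             "KPH_B": {"price": 90, "dist": 120, "cap": 500},
             "KPH_C": {"price": 99, "dist": 25, "cap": 400}}
target_m3, rate_per_km, storage = 900, 0.05, 2.0     # demand, cost rates

x = {s: pulp.LpVariable(f"x_{s}", 0, d["cap"]) for s, d in suppliers.items()}
prob = pulp.LpProblem("timber_allocation", pulp.LpMinimize)
prob += pulp.lpSum(x[s] * (d["price"] + rate_per_km * d["dist"] + storage)
                   for s, d in suppliers.items())    # total cost objective
prob += pulp.lpSum(x.values()) == target_m3          # meet the inventory target

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: x[s].value() for s in suppliers})          # optimal allocation in m3
```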

4253 Proposal of a Model Supporting Decision-Making Based On Multi-Objective Optimization Analysis on Information Security Risk Treatment

Authors: Ritsuko Kawasaki (Aiba), Takeshi Hiromatsu

Abstract:

Management is required to understand all information security risks within an organization and to decide which risks should be treated, at what level, and with how much cost allocated. However, such decision-making is not usually easy, because various measures for risk treatment must be selected with suitable application levels, and some measures may have objectives that conflict with each other, which makes the selection difficult. Moreover, risks generally have trends, and these should also be considered in risk treatment. Therefore, this paper provides an extension of the model proposed in the previous study. The original model supports the selection of measures by applying a combination of the weighted average method and the goal programming method for multi-objective analysis to find an optimal solution. The extended model adds the notion of weights on the risks, where a larger weight means a higher priority for the risk.

Keywords: Information security risk treatment, selection of risk measures, risk acceptance, multi-objective optimization.

4252 Level of Concentration in Banking Markets and Length of EU Membership

Authors: Ivan Pavic, Fran Galetic, Tomislava Pavic Kramaric

Abstract:

The purpose of this article is to analyze the degree of concentration in the banking markets of EU member states and to determine the impact of the length of EU membership on the degree of concentration. Several analyses were conducted: a panel analysis, a calculation of correlation coefficients, and a regression analysis of the impact of the length of EU membership on the degree of concentration. The panel analysis was conducted to determine whether there is a similar trend of concentration in three groups of countries: those with low, moderate and high levels of concentration. It showed that in EU countries with a moderate level of concentration, the level of concentration decreases. The correlation calculation showed that, together with other influential factors, the length of EU membership to some extent negatively affects concentration in the banking market. The regression analysis of the influence of the length of EU membership on the level of concentration in a country's banking sector reveals a negative effect of the length of EU membership on market concentration, although it is not a significantly influential variable.

Keywords: Banking sector, concentration, EU

4251 Optimal Feature Extraction Dimension in Finger Vein Recognition Using Kernel Principal Component Analysis

Authors: Amir Hajian, Sepehr Damavandinejadmonfared

Abstract:

In this paper, the issue of dimensionality reduction in finger vein recognition systems is investigated using kernel Principal Component Analysis (KPCA). One aspect of KPCA is finding the most appropriate kernel function for finger vein recognition, as several kernel functions can be used within PCA-based algorithms. In this paper, however, another side of PCA-based algorithms, and KPCA in particular, is investigated: the dimension of the feature vector. This aspect is of importance especially when it comes to real-world applications and usage of such algorithms, since a fixed dimension of the feature vector has to be set to reduce the dimension of the input and output data and extract the features from them; a classifier is then applied to classify the data and make the final decision. We analyze KPCA with Polynomial, Gaussian, and Laplacian kernels in detail and investigate the optimal feature extraction dimension in finger vein recognition using KPCA.

Keywords: Biometrics, finger vein recognition, Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA).
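
A minimal sketch of the dimension sweep described: project features with kernel PCA under the three kernels at several output dimensions, then score a simple classifier at each setting. A toy dataset stands in for the finger-vein features, and the Laplacian kernel is supplied as a precomputed kernel matrix.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import laplacian_kernel
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=1)

for dim in (10, 20, 40, 80):
    Z_poly = KernelPCA(n_components=dim, kernel="poly").fit_transform(X)
    Z_rbf = KernelPCA(n_components=dim, kernel="rbf").fit_transform(X)
    Z_lap = KernelPCA(n_components=dim, kernel="precomputed") \
        .fit_transform(laplacian_kernel(X))
    scores = [cross_val_score(knn, Z, y, cv=3).mean()
              for Z in (Z_poly, Z_rbf, Z_lap)]
    print(dim, ["%.3f" % s for s in scores])   # pick the best dimension/kernel
```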

4250 Blood Glucose Level Measurement from Breath Analysis

Authors: Tayyab Hassan, Talha Rehman, Qasim Abdul Aziz, Ahmad Salman

Abstract:

Constant monitoring of the blood glucose level is necessary for maintaining the health of patients and for alerting medical specialists to take preemptive measures before the onset of complications resulting from diabetes. Current clinical monitoring of blood glucose repeatedly uses invasive methods, which are uncomfortable and may cause infections in diabetic patients. Several attempts have been made to develop non-invasive techniques for blood glucose measurement; however, the existing methods are not reliable and are less accurate, and other approaches claiming high accuracy have not been tested on extended datasets, so their results are not statistically significant. It is a well-known fact that the acetone concentration in breath has a direct relation with the blood glucose level. In this paper, we develop a reliable, high-accuracy breath analyzer for non-invasive blood glucose measurement, the first of its kind. The acetone concentration in breath was measured using an MQ 138 sensor in samples collected from local hospitals in Pakistan, involving one hundred patients, whose blood glucose levels were determined using the conventional invasive clinical method. We propose a linear regression model that is trained to map the breath acetone level to the collected blood glucose level, achieving high accuracy.

Keywords: Blood glucose level, breath acetone concentration, diabetes, linear regression.
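
A minimal sketch of the final mapping step: fit a linear regression from breath acetone readings to clinically measured glucose. The numbers are synthetic placeholders, not the collected patient data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

acetone_ppm = np.array([[0.4], [0.8], [1.1], [1.6], [2.2], [2.9]])
glucose_mgdl = np.array([92, 110, 128, 150, 183, 220])

model = LinearRegression().fit(acetone_ppm, glucose_mgdl)
print("predicted glucose at 1.9 ppm acetone: %.0f mg/dL"
      % model.predict([[1.9]])[0])
```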
