Search results for: font distribution algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5128

3808 Extreme Temperature Forecast in Mbonge, Cameroon through Return Level Analysis of the Generalized Extreme Value (GEV) Distribution

Authors: Nkongho Ayuketang Arreyndip, Ebobenow Joseph

Abstract:

In this paper, temperature extremes are forecast by applying the block maxima method of the generalized extreme value (GEV) distribution to temperature data from the Cameroon Development Corporation (C.D.C). Two data sets (raw and simulated) and two GEV models (stationary and non-stationary) are considered, and a return level analysis is carried out. In the stationary model, the return levels of the raw data are constant over time, while those of the simulated data show an increasing trend with an upper bound. In the non-stationary model, the return levels of both the raw and the simulated data show an increasing trend, again with an upper bound. This shows that although temperatures in the tropics are likely to increase in the future, there is a maximum temperature that will not be exceeded. These results are of particular value to agricultural and environmental research.
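
The following is a minimal sketch (not the authors' code) of the block maxima workflow the abstract describes: a stationary GEV is fitted to annual maximum temperatures with SciPy and return levels are read off as upper quantiles. The data array is a hypothetical placeholder for the C.D.C series.

import numpy as np
from scipy.stats import genextreme

# Placeholder block maxima (annual maximum temperatures, deg C) - illustrative values only.
annual_maxima = np.array([33.1, 34.0, 32.8, 35.2, 33.9, 34.6, 33.4, 35.0, 34.2, 35.5])

# Fit the three GEV parameters (SciPy's shape c corresponds to -xi in the usual convention).
c, loc, scale = genextreme.fit(annual_maxima)

# The T-year return level is the (1 - 1/T) quantile of the fitted distribution.
for T in (10, 50, 100):
    level = genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
    print(f"{T}-year return level: {level:.2f} deg C")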

Keywords: Return level, Generalized extreme value (GEV), Meteorology, Forecasting.

Downloads: 2104
3807 Induction Motor Efficiency Estimation using Genetic Algorithm

Authors: Khalil Banan, Mohammad B.B. Sharifian, Jafar Mohammadi

Abstract:

Due to the high percentage of induction motors in the industrial market, there is a large opportunity for energy savings. Replacing working induction motors with more efficient ones can be an important source of such savings. However, calculating energy savings and payback periods for such a replacement from nameplate motor efficiency or manufacturer's data can lead to large errors [1]. The efficiency of induction motors (IMs) can be extracted using procedures based on no-load test results, but when the efficiency must be estimated on-line, some of these procedures are not practical. In other cases the efficiency is estimated from the rated values of the motor, which can introduce errors because the motor may operate under different working conditions. In this paper, the efficiency of an IM is estimated using a genetic algorithm, and the results are compared with measured values of torque and power. The results show smaller errors than the conventional procedures; hence the cost of the required equipment is reduced and on-line estimation of the efficiency becomes possible.

Keywords: Genetic algorithm, induction motor, efficiency.

Downloads: 2600
3806 Modelling Hydrological Time Series Using Wakeby Distribution

Authors: Ilaria Lucrezia Amerise

Abstract:

The statistical modelling of precipitation data for a given portion of territory is fundamental for monitoring climatic conditions and for Hydrogeological Management Plans (HMP). This modelling is made particularly complex by changes in the frequency and intensity of precipitation, presumably attributable to global climate change. This paper adopts the five-parameter Wakeby distribution as the theoretical reference model. The number and quality of its parameters suggest that this distribution is an appropriate choice for interpolating hydrological variables; moreover, the Wakeby is particularly suitable for describing phenomena with heavy tails. The estimation methods proposed for determining the Wakeby parameters are those used for density functions with heavy tails. The commonly used procedure is the classic method of probability weighted moments (PWM), although this often shows difficulty of convergence, or rather, convergence to an inappropriate configuration of parameters. In this paper, we analyze the likelihood estimation of a random variable expressed through its quantile function. The method of maximum likelihood is, in this case, more demanding than in more usual estimation settings. The justification for this extra effort lies in the sampling and asymptotic properties of the maximum likelihood estimators, which improve the estimates by providing indications of their variability and, therefore, of their accuracy and reliability. These features are highly appreciated in contexts where poor decisions, attributable to an inefficient or incomplete information base, can cause serious damage.
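
Below is a minimal sketch (illustrative parameter values, not the authors' estimator) of the five-parameter Wakeby distribution defined directly through its quantile function, together with inverse-transform sampling, which is the natural way to simulate from a distribution specified this way.

import numpy as np

def wakeby_quantile(F, xi, alpha, beta, gamma, delta):
    """Wakeby quantile: x(F) = xi + (alpha/beta)[1-(1-F)^beta] - (gamma/delta)[1-(1-F)^(-delta)]."""
    F = np.asarray(F, dtype=float)
    return (xi
            + (alpha / beta) * (1.0 - (1.0 - F) ** beta)
            - (gamma / delta) * (1.0 - (1.0 - F) ** (-delta)))

# Since the distribution is given by its quantile function, sampling is just
# plugging uniform variates into it (inverse-transform sampling).
rng = np.random.default_rng(0)
u = rng.uniform(size=10_000)
sample = wakeby_quantile(u, xi=0.0, alpha=5.0, beta=0.5, gamma=1.0, delta=0.3)  # delta > 0 gives a heavy upper tail
print(round(sample.mean(), 2), round(np.quantile(sample, 0.99), 2))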

Keywords: Generalized extreme values (GEV), likelihood estimation, precipitation data, Wakeby distribution.

Downloads: 673
3805 Application of Reliability Prediction Model Adapted for the Analysis of the ERP System

Authors: F. Urem, K. Fertalj, Ž. Mikulić

Abstract:

This paper presents the possibilities of using the Weibull statistical distribution to model the distribution of defects in ERP systems. A case study follows, which examines helpdesk records of defects that were reported as a result of one ERP subsystem upgrade. The result of the applied modeling is a model of the reliability of the ERP system from a user perspective, with estimated parameters such as the expected maximum number of defects in one day or the predicted minimum number of defects between two upgrades. The applied measurement-based analysis framework proves suitable for predicting future states of the reliability of the observed ERP subsystems.
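
A minimal sketch of the idea, with hypothetical numbers in place of the helpdesk records: a two-parameter Weibull is fitted to the times between reported defects, and reliability-style quantities are read off the fitted distribution.

import numpy as np
from scipy.stats import weibull_min

inter_defect_days = np.array([0.5, 1.2, 0.8, 2.5, 3.1, 0.9, 4.0, 1.7, 2.2, 0.6])  # placeholder gaps between defect reports

shape, loc, scale = weibull_min.fit(inter_defect_days, floc=0)  # location fixed at zero

# Probability that more than t days pass without a new defect (survival function),
# and a rough estimate of the number of defects expected in a 30-day window.
t = 2.0
print("P(no defect for %.1f days) = %.3f" % (t, weibull_min.sf(t, shape, scale=scale)))
print("expected defects in 30 days ~ %.1f" % (30.0 / weibull_min.mean(shape, scale=scale)))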

Keywords: ERP, reliability, Weibull

Downloads: 1313
3804 Fault Localization and Alarm Correlation in Optical WDM Networks

Authors: G. Ramesh, S. Sundara Vadivelu

Abstract:

For several high-speed networks, providing resilience against failures is an essential requirement. A key concern in designing next-generation optical networks is protecting and restoring high-capacity WDM networks from failures. Quick detection, identification and restoration make networks more robust and reliable, even though failures cannot be avoided. Hence, it is necessary to develop fast, efficient and dependable fault localization and detection mechanisms. In this paper we propose a new fault localization algorithm for WDM networks which can identify the location of a failure on a failed lightpath. Our algorithm detects the failed connection and then attempts to reroute the data stream through an alternate path. In addition, we develop an algorithm that analyzes the alarms generated by the components of an optical network in the presence of a fault; it uses alarm correlation to reduce the list of suspected components shown to the network operators. Simulation results show that the proposed algorithms achieve lower blocking probability and delay while obtaining higher throughput.
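
A minimal sketch (toy topology, not the authors' algorithm) of the alarm-correlation step described above: a fault on a shared component raises alarms on every lightpath that traverses it, so the suspect list is the intersection of the components of the alarmed lightpaths, with components on healthy lightpaths exonerated.

# Hypothetical lightpaths and the components each one traverses.
lightpath_components = {
    "LP1": {"node_A", "link_AB", "node_B", "link_BC", "node_C"},
    "LP2": {"node_D", "link_DB", "node_B", "link_BC", "node_C"},
    "LP3": {"node_A", "link_AE", "node_E"},
}
alarmed = {"LP1", "LP2"}   # lightpaths whose receivers reported loss of signal

suspects = set.intersection(*(lightpath_components[lp] for lp in alarmed))
for lp, components in lightpath_components.items():
    if lp not in alarmed:
        suspects -= components        # components on healthy paths are exonerated
print(suspects)                       # components shared by LP1 and LP2 only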

Keywords: Alarm correlation, blocking probability, delay, fault localization, WDM networks.

Downloads: 2067
3803 Temperature Distribution in Friction Stir Welding Using Finite Element Method

Authors: Armansyah, I. P. Almanar, M. Saiful Bahari Shaari, M. Shamil Jaffarullah, Nur’amirah Busu, M. Arif Fadzleen Zainal Abidin, M. Amlie A. Kasim

Abstract:

During welding, the amount of heat present in the weld zone determines the quality of the weldment produced. Thus, the heat distribution characteristics and magnitude in the weld zone are analyzed with respect to process variables such as tool pin-shoulder rotational speed and traveling speed. For this purpose, transient thermal finite element analyses are performed, using the commercially available software Altair HyperWorks to model the three-dimensional tool pin-shoulder and workpiece and to simulate the friction stir process. The results show that increasing the tool rotational speed at a constant traveling speed increases the amount of heat generated in the weld zone. In contrast, increasing the traveling speed at a constant tool pin-shoulder rotational speed reduces the amount of heat generated in the weld zone.

Keywords: Friction Stir Welding, Temperature Distribution, Finite Element Method, Altair HyperWorks.

Downloads: 3957
3802 A Proposed High-Resolution Time-Frequency Distribution for the Analysis of Multicomponent and Speech Signals

Authors: D. Boutana, B. Barkat, F. Marir

Abstract:

In this paper, we propose a novel time-frequency distribution (TFD) for the analysis of multicomponent signals. In particular, we use synthetic as well as real-life speech signals to demonstrate the superiority of the proposed TFD over some existing ones. In the comparison, we consider cross-term suppression and the concentration of signal energy around the instantaneous frequency (IF).

Keywords: Cohen's Class, Multicomponent signal, Separable Kernel, Speech signal, Time-frequency resolution.

Downloads: 1868
3801 Effects of Thread Dimensions of Functionally Graded Dental Implants on Stress Distribution

Authors: Kaman M. O., Celik N.

Abstract:

In this study, stress distributions on dental implants made of functionally graded biomaterials (FGBM) are investigated numerically. The implant body is considered to be subjected to axial compression loads. The numerical problem is assumed to be 2D, and the commercial software ANSYS is used for the analysis. The cross section of the implant thread is varied by changing the height (H) and the width (t) of the thread. According to the thread dimensions of the implant and the material properties of the FGBM, the equivalent stress distribution on the implant is determined and presented with contour plots along with the maximum equivalent stress values. As a result, with increasing material gradient parameter (n) the equivalent stress decreases, but the minimum stress distribution increases. Maximum stress values decrease with decreasing implant radius (r). Maximum von Mises stresses increase with decreasing H when t is constant; on the other hand, the stress values are not affected by variation of t when H is constant.

Keywords: Functionally graded biomaterials, dental implant, finite element method.

Downloads: 3072
3800 Simulation of Roughness Shape and Distribution Effects on Rarefied and Compressible Flows at Slip Flow Regime

Authors: M. Hakak Khadem, S. Hossainpour, M. Shams

Abstract:

A numerical simulation of micro Poiseuille flow has been performed for rarefied and compressible flow in the slip flow regime. The wall roughness is simulated in two cases, with triangular microelements and with random micro peaks distributed on the wall surfaces, to study the effects of roughness shape and distribution on the flow field. Two values of the Mach and Knudsen numbers are used to investigate the effects of rarefaction as well as compressibility. The numerical results have also been checked against available theoretical and experimental relations, and good agreement has been achieved. A strong influence of roughness shape is seen for both compressible and incompressible rarefied flows. In addition, it is found that rarefaction has a more significant effect on the flow field in microchannels with higher relative roughness, and that compressibility has a more significant effect on the Poiseuille number as the relative roughness increases.

Keywords: Relative roughness, slip flow, Poiseuille number, roughness distribution.

Downloads: 1164
3799 An Algorithm for Preventing the Irregular Operation Modes of the Drive Synchronous Motor Providing the Ore Grinding

Authors: Baghdasaryan Marinka

Abstract:

The scientific and engineering interest in preventing emergency conditions in the drive synchronous motors that support the ore-grinding process is justified. An analysis of the known works devoted to abnormal operation modes of synchronous motors, and to the possibilities of protection against them, shows that the existing approaches are not suitable for preventing the impermissible conditions arising in the electrical drive synchronous motors used in ore grinding. The main energy and technological factors affecting the technical condition of synchronous motors are evaluated. An algorithm for preventing the irregular operation modes of the electrical drive synchronous motor applied in the ore-grinding process has been developed and is proposed for further application; by comprehensively considering the energy and technological factors, it enables smart solutions that ensure the safe operation of the drive synchronous motor.

Keywords: Synchronous motor, abnormal operating mode, electric drive, algorithm, energy factor, technological factor.

Downloads: 561
3798 2D-Modeling with Lego Mindstorms

Authors: Miroslav Popelka, Jakub Nožička

Abstract:

This work is based on the possibility of using Lego Mindstorms robotics systems to reduce costs. Lego Mindstorms consists of a wide variety of hardware components necessary to simulate, program and test robotics systems in practice. The algorithm that maps the space using the ultrasonic sensor was programmed in the development environment supplied with the kit, and Matlab was used to render the values measured by the ultrasonic sensor. The algorithm created for this paper uses theoretical knowledge from the area of signal processing. The data processed by the algorithm are collected by the ultrasonic sensor, which scans the 2D space in front of it. The sensor is mounted on a moving arm of the robot, which provides the horizontal movement of the sensor, while the vertical movement is provided by the wheel drive. The robot follows a map in order to position the measured data correctly. Based on these findings, Lego Mindstorms can be considered a low-cost and capable kit for real-time modelling.
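
A minimal sketch (assumed geometry) of the basic step behind the scanning described above: each ultrasonic reading, taken at a known arm angle, is converted into a 2D point in front of the sensor.

import math

def polar_scan_to_points(readings, sensor_x=0.0, sensor_y=0.0):
    """readings: iterable of (angle_deg, distance_cm) pairs measured by the sensor."""
    points = []
    for angle_deg, distance in readings:
        theta = math.radians(angle_deg)
        points.append((sensor_x + distance * math.cos(theta),
                       sensor_y + distance * math.sin(theta)))
    return points

scan = [(0, 35.0), (15, 34.2), (30, 40.1), (45, 55.3)]  # hypothetical sweep
print(polar_scan_to_points(scan))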

Keywords: LEGO Mindstorms, ultrasonic sensor, Real-time modeling, 2D object, low-cost robotics systems, sensors, Matlab, EV3 Home Edition Software.

Downloads: 3108
3797 Novel Method for Elliptic Curve Multi-Scalar Multiplication

Authors: Raveen R. Goundar, Ken-ichi Shiota, Masahiko Toyonaga

Abstract:

The major building block of most elliptic curve cryptosystems is the computation of multi-scalar multiplication. This paper proposes a novel algorithm for simultaneous multi-scalar multiplication based on addition chains. Previously known methods utilize the double-and-add algorithm with binary representations. To accomplish our purpose, an efficient empirical method for finding addition chains for multi-exponents is proposed.
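
For reference, a minimal sketch of the baseline the abstract compares against: simultaneous (Shamir/Straus) double-and-add for k1*P + k2*Q, written for any additive group supplied through an add callable. Integers stand in for curve points here; the authors' addition-chain method itself is not reproduced.

def simultaneous_scalar_mult(k1, P, k2, Q, add, zero):
    # Precompute the three nonzero joint digits.
    table = {(0, 1): Q, (1, 0): P, (1, 1): add(P, Q)}
    result = zero
    for i in reversed(range(max(k1.bit_length(), k2.bit_length()))):
        result = add(result, result)            # "double"
        bits = ((k1 >> i) & 1, (k2 >> i) & 1)
        if bits != (0, 0):
            result = add(result, table[bits])   # one joint "add" per bit pair
    return result

# Stand-in group: integers under addition, so the result should equal 13*5 + 7*9 = 128.
print(simultaneous_scalar_mult(13, 5, 7, 9, add=lambda a, b: a + b, zero=0))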

Keywords: elliptic curve cryptosystems, multi-scalar multiplication, addition chains, Fibonacci sequence.

Downloads: 1609
3796 Using Memetic Algorithms for the Solution of Technical Problems

Authors: Ulrike Völlinger, Erik Lehmann, Rainer Stark

Abstract:

The intention of this paper is to help users of evolutionary algorithms adapt them more easily to the problem at hand. For many problems in the technical field it is not necessary to reach an optimal solution, only a good solution in time. In many cases the solution is undetermined, or there exists no method to determine it; for these cases an evolutionary algorithm can be useful. This paper gives the user rules of thumb that make it easier to decide whether a problem is suitable for an evolutionary algorithm and how to design one.

Keywords: Multi-criteria optimization, memetic algorithms.

Downloads: 1406
3795 Robust Semi-Blind Digital Image Watermarking Technique in DT-CWT Domain

Authors: Samira Mabtoul, Elhassan Ibn Elhaj, Driss Aboutajdine

Abstract:

In this paper, a new robust digital image watermarking algorithm based on the complex wavelet transform is proposed. This technique embeds different parts of a watermark into different blocks of an image in the complex wavelet domain. To increase the security of the method, two chaotic maps are employed: one map determines the blocks of the host image used for watermark embedding, and the other encrypts the watermark image. Simulation results are presented to demonstrate the effectiveness of the proposed algorithm.
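
A minimal sketch (assumed keys and parameters, not the authors' exact construction) of how two logistic chaotic maps can play the two roles mentioned above: one selects the host-image blocks that receive watermark bits, the other produces a keystream that scrambles the watermark.

import numpy as np

def logistic_sequence(x0, r, n):
    """Iterate the logistic map x <- r * x * (1 - x)."""
    seq, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

num_blocks, num_wm_bits = 256, 64
key1, key2, r = 0.3712, 0.6143, 3.99                  # assumed secret seeds and map parameter

# Map 1: order the blocks by their chaotic values and embed in the first num_wm_bits of them.
block_order = np.argsort(logistic_sequence(key1, r, num_blocks))[:num_wm_bits]

# Map 2: threshold a second chaotic sequence into a binary keystream and XOR-encrypt the watermark bits.
keystream = (logistic_sequence(key2, r, num_wm_bits) > 0.5).astype(np.uint8)
watermark_bits = np.random.default_rng(1).integers(0, 2, num_wm_bits).astype(np.uint8)
encrypted_watermark = watermark_bits ^ keystream
print(block_order[:8], encrypted_watermark[:8])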

Keywords: Image watermarking, Chaotic map, DT-CWT.

Downloads: 1691
3794 Application of Neuro-Fuzzy Dynamic Programming to Improve the Reactive Power and Voltage Profile of a Distribution Substation

Authors: M. Tarafdar Haque, S. Najafi

Abstract:

Improving the reactive power and voltage profile of a distribution substation is investigated in this paper. The purpose is to properly determine the on/off status of the shunt capacitors and a suitable tap changer (TC) position for the substation transformer. In addition, the limits on the secondary bus voltage, the maximum allowable number of daily switching operations of the on-load tap changer, and the on/off status of the capacitors are taken into account. To achieve these goals, an artificial neural network (ANN) is designed to provide a preliminary schedule. The inputs of the ANN are the active and reactive powers of the transformer and its primary and secondary bus voltages; the outputs are the capacitor on/off statuses and the TC position. The preliminary schedule is further refined by fuzzy dynamic programming in order to reach the final schedule. The performance of the proposed method in improving Q/V is compared with the results obtained by operator control in a distribution substation.

Keywords: Neuro-fuzzy, Dynamic programming, Reactive power, Voltage profile.

Downloads: 1627
3793 Simulation of Heat Transfer in the Multi-Layer Door of the Furnace

Authors: U. Prasopchingchana

Abstract:

The temperature distribution and the heat transfer rates through a multi-layer door of a furnace were investigated. The inside of the door was in contact with hot air and the other side of the door was in contact with room air. Radiation heat transfer from the walls of the furnace to the door and from the door to the surrounding area was included in the problem. This work is a two-dimensional steady-state problem. The Churchill and Chu correlation was used to find local convection heat transfer coefficients at the surfaces of the furnace door. The thermophysical properties of air were treated as functions of temperature, and polynomial curve fitting for the fluid properties was carried out. The finite difference method was used to discretize the conduction heat transfer within the furnace door, and Gauss-Seidel iteration was employed to compute the temperature distribution in the door. The temperature distribution in the horizontal mid-plane of the furnace door in the two-dimensional problem agrees with that of the one-dimensional problem. The local convection heat transfer coefficients at the inside and outside surfaces of the furnace door are also presented.
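
A minimal sketch (uniform grid, constant conductivity, fixed boundary temperatures) of the Gauss-Seidel solution strategy described above, without the multi-layer properties, radiation terms or Churchill-Chu coefficients of the full problem.

import numpy as np

nx, ny = 30, 30
T = np.full((ny, nx), 350.0)
T[:, 0] = 400.0      # furnace-side boundary, K
T[:, -1] = 300.0     # room-side boundary, K

for sweep in range(5000):                    # Gauss-Seidel sweeps
    max_change = 0.0
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            new = 0.25 * (T[j, i - 1] + T[j, i + 1] + T[j - 1, i] + T[j + 1, i])
            max_change = max(max_change, abs(new - T[j, i]))
            T[j, i] = new                    # in-place update is what makes it Gauss-Seidel
    if max_change < 1e-4:
        break

print("mid-plane temperature profile:", np.round(T[ny // 2, ::5], 1))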

Keywords: Conduction, heat transfer, multi-layer door, natural convection

Downloads: 2096
3792 Attacks Classification in Adaptive Intrusion Detection using Decision Tree

Authors: Dewan Md. Farid, Nouria Harbi, Emna Bahri, Mohammad Zahidur Rahman, Chowdhury Mofizur Rahman

Abstract:

Recently, information security has become a key issue in information technology, as computers and networks are exposed to an increasing number of security threats. Over the last decades, a variety of intrusion detection systems (IDS), based on approaches ranging from traditional statistical methods to new data mining techniques, have been employed to protect computers and networks from malicious network-based or host-based attacks. However, today's commercially available intrusion detection systems are signature-based and are not capable of detecting unknown attacks. In this paper, we present a new learning algorithm for anomaly-based network intrusion detection using a decision tree algorithm that distinguishes attacks from normal behaviors and identifies different types of intrusions. Experimental results on the KDD99 benchmark network intrusion detection dataset demonstrate that the proposed learning algorithm achieves a 98% detection rate (DR), in comparison with other existing methods.
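
A minimal sketch (scikit-learn stand-in with random placeholder data, not the authors' algorithm or the KDD99 set) of the basic idea: train a decision tree to separate attack records from normal ones and report the detection rate on held-out data.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))                      # placeholder connection features
y = (X[:, 0] + 0.5 * X[:, 3] > 0.7).astype(int)      # placeholder attack (1) / normal (0) labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=6).fit(X_tr, y_tr)

pred = clf.predict(X_te)
attacks = y_te == 1
detection_rate = (pred[attacks] == 1).mean()         # fraction of true attacks flagged as attacks
print(f"detection rate: {detection_rate:.2%}")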

Keywords: Detection rate, decision tree, intrusion detection system, network security.

Downloads: 3628
3791 A Simplified Distribution for Nonlinear Seas

Authors: M. A. Tayfun, M. A. Alkhalidi

Abstract:

The exact theoretical expression describing the probability distribution of nonlinear sea-surface elevations derived from the second-order narrowband model has a cumbersome form that requires numerical computation and is not well suited to theoretical or practical applications. Here, the same narrowband model is re-examined to develop a simpler closed-form approximation suitable for theoretical and practical applications. The salient features of the approximate form are explored, and its validity is verified through comparisons with other readily available approximations and with oceanic data.

Keywords: Ocean waves, probability distributions, second-order nonlinearities, skewness coefficient, wave steepness.

Downloads: 2095
3790 Series-Parallel Systems Reliability Optimization Using Genetic Algorithm and Statistical Analysis

Authors: Essa Abrahim Abdulgader Saleem, Thien-My Dao

Abstract:

The main objective of this paper is to optimize series-parallel system reliability using a Genetic Algorithm (GA) and statistical analysis, considering system reliability constraints that involve the redundancy levels of the selected components, total cost, and total weight. To perform this work, firstly, the mathematical model which maximizes system reliability subject to maximum system cost and maximum system weight constraints is presented; secondly, a statistical analysis is used to optimize the GA parameters; and thirdly, the GA is used to optimize series-parallel system reliability. The objective is to determine the strategy for choosing the redundancy level of each subsystem so as to maximize the overall system reliability subject to the total cost and total weight constraints. Finally, the reliability optimization results for the series-parallel system case study are shown, and comparisons with previous results are presented to demonstrate the performance of our GA.
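
A minimal sketch of the objective a GA would search over, with illustrative numbers: the reliability of a series system of parallel subsystems, where subsystem i carries n_i identical redundant components of reliability r_i, subject to cost and weight caps. Only the evaluation of a candidate redundancy vector is shown; the GA itself is not reproduced.

import numpy as np

r = np.array([0.90, 0.85, 0.95])        # component reliabilities per subsystem (assumed)
cost = np.array([4.0, 6.0, 3.0])        # cost per component (assumed)
weight = np.array([2.0, 1.5, 3.0])      # weight per component (assumed)
max_cost, max_weight = 40.0, 25.0

def system_reliability(n):
    """Series of parallel subsystems: R = prod_i [1 - (1 - r_i)^n_i]."""
    return float(np.prod(1.0 - (1.0 - r) ** n))

def feasible(n):
    return (cost @ n) <= max_cost and (weight @ n) <= max_weight

n = np.array([2, 3, 2])                  # one candidate redundancy allocation
print(round(system_reliability(n), 4), feasible(n))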

Keywords: Genetic algorithm, optimization, reliability, statistical analysis.

Downloads: 1155
3789 Direct Simulation Monte Carlo (DSMC) Algorithm – A Comparison of Mathematica Code with FLUENT 6.2 for Low Knudsen Number

Authors: Nabeel A. Qazi, Absaar ul Jabbar, Khalid Parvez

Abstract:

A code has been developed in Mathematica using the Direct Simulation Monte Carlo (DSMC) technique. The code was tested for 2-D air flow around a circular cylinder, and the same geometry and flow properties were used in FLUENT 6.2 for comparison. The results obtained from the Mathematica simulation showed good agreement with the FLUENT calculations, providing insight into the particle nature of fluid flows.

Keywords: DSMC algorithm, non-continuum gas flows, Monte Carlo methods

Downloads: 3420
3788 The Inverse Eigenvalue Problem via Orthogonal Matrices

Authors: A. M. Nazari, B. Sepehrian, M. Jabari

Abstract:

In this paper we study the inverse eigenvalue problem for special symmetric matrices and introduce sufficient conditions for obtaining nonnegative matrices. We take the HROU algorithm from [1] and introduce an extension of it: given some eigenvectors and their associated eigenvalues, the extension finds a symmetric matrix having exactly those eigenvalues and eigenvectors. Finally, we study special cases and obtain some remarkable results.
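
A minimal sketch of the standard construction behind this kind of problem (not the HROU algorithm itself): given a full set of eigenvalues and prescribed eigenvector directions, a symmetric matrix with exactly that spectrum is built as A = Q diag(lambda) Q^T after orthonormalizing the directions.

import numpy as np

eigenvalues = np.array([5.0, 2.0, 1.0])
directions = np.array([[1.0, 1.0, 0.0],
                       [1.0, -1.0, 1.0],
                       [0.0, 1.0, 2.0]]).T       # prescribed eigenvector directions as columns

Q, _ = np.linalg.qr(directions)                  # orthonormalize the directions
A = Q @ np.diag(eigenvalues) @ Q.T               # symmetric by construction

# The eigenvectors of A are the columns of Q (they equal the prescribed directions
# only if those were already orthogonal); the spectrum matches exactly.
print(np.allclose(A, A.T), np.round(np.linalg.eigvalsh(A), 6))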

Keywords: Householder matrix, nonnegative matrix, Inverse eigenvalue problem.

Downloads: 1582
3787 A Simplified Adaptive Decision Feedback Equalization Technique for π/4-DQPSK Signals

Authors: V. Prapulla, A. Mitra, R. Bhattacharjee, S. Nandi

Abstract:

We present a simplified equalization technique for a π/4 differential quadrature phase shift keying (π/4-DQPSK) modulated signal in a multipath fading environment. The proposed equalizer is realized as a fractionally spaced adaptive decision feedback equalizer (FS-ADFE), employing the exponential step-size least mean square (LMS) algorithm as the adaptation technique. The main advantage of the scheme stems from the use of the exponential step-size LMS algorithm in the equalizer, which achieves convergence behavior similar to that of a recursive least squares (RLS) algorithm at significantly reduced computational complexity. To investigate the finite-precision performance of the proposed equalizer together with the π/4-DQPSK modem, the entire system is evaluated in a 16-bit fixed point digital signal processor (DSP) environment. The proposed scheme is found to be attractive even in cases where equalization has to be performed within a restricted number of training samples.
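
A minimal sketch of an LMS-adapted transversal equalizer on a toy complex channel (a generic fixed-step LMS, not the exponential step-size variant or the decision feedback structure of the paper), showing the tap update w <- w + mu * x * conj(e) that the adaptation is built on.

import numpy as np

rng = np.random.default_rng(0)
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=2000) / np.sqrt(2)   # QPSK-like training symbols
channel = np.array([1.0, 0.4 + 0.2j, 0.1])                                         # toy multipath channel
received = np.convolve(symbols, channel)[:len(symbols)] + 0.01 * rng.standard_normal(len(symbols))

num_taps, mu = 7, 0.02
w = np.zeros(num_taps, dtype=complex)
for n in range(num_taps, len(symbols)):
    x = received[n - num_taps:n][::-1]      # regressor, most recent sample first
    y = np.vdot(w, x)                       # equalizer output (vdot conjugates w)
    e = symbols[n - num_taps // 2] - y      # error against the delayed training symbol
    w += mu * np.conj(e) * x                # LMS tap update

print("final error magnitude:", round(abs(e), 4))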

Keywords: Adaptive decision feedback equalizer, Fractionally spaced equalizer, π/4 DQPSK signal, Digital signal processor.

Downloads: 5736
3786 Application of Heuristic Integration Ant Colony Optimization in Path Planning

Authors: Zeyu Zhang, Guisheng Yin, Ziying Zhang, Liguo Zhang

Abstract:

This paper studies path planning methods based on ant colony optimization (ACO) and proposes heuristic integration ant colony optimization (HIACO). The paper not only analyzes and optimizes the underlying principle, but also simulates and analyzes the parameters related to the application of HIACO in path planning. Compared with the original algorithm, the improved algorithm optimizes the probability formula, the tabu table mechanism and the updating mechanism, and introduces more reasonable heuristic factors. The optimized HIACO not only draws on the key ideas of the original algorithm, but also alleviates the problems of premature convergence, convergence to sub-optimal solutions and inadequate exploration. HIACO can be used to achieve better simulation results and the desired optimization. Combining the probability formula and the update formula, several parameters of HIACO are tested. The paper demonstrates the principle of HIACO and gives the best parameter ranges for path planning.
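
A minimal sketch of the standard ACO ingredients the abstract builds on, with assumed parameter values: the transition probability proportional to tau_ij^alpha * eta_ij^beta over nodes outside the tabu list, and the evaporation/deposit pheromone update. The HIACO-specific modifications are not reproduced.

import numpy as np

def choose_next(current, tau, eta, tabu, alpha=1.0, beta=2.0, rng=np.random.default_rng()):
    candidates = [j for j in range(tau.shape[0]) if j not in tabu]
    weights = np.array([(tau[current, j] ** alpha) * (eta[current, j] ** beta) for j in candidates])
    return rng.choice(candidates, p=weights / weights.sum())

def update_pheromone(tau, path, path_length, rho=0.1, Q=1.0):
    tau *= (1.0 - rho)                        # evaporation
    for i, j in zip(path[:-1], path[1:]):
        tau[i, j] += Q / path_length          # deposit proportional to path quality
    return tau

n = 5
tau = np.ones((n, n))
eta = 1.0 / np.random.default_rng(1).uniform(1, 10, (n, n))   # heuristic = inverse distance
print(choose_next(0, tau, eta, tabu={0}))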

Keywords: Ant colony optimization, heuristic integration, path planning

Downloads: 678
3785 Passenger Flow Characteristics of Seoul Metropolitan Subway Network

Authors: Kang Won Lee, Jung Won Lee

Abstract:

Characterizing network flow is of fundamental importance for understanding the complex dynamics of networks, and the passenger flow characteristics of a subway network are highly relevant for effective transportation management in urban areas. In this study, the passenger flow of the Seoul metropolitan subway network is investigated and characterized through statistical analysis. The traditional betweenness centrality measure considers only the topological structure of the network and ignores transportation factors. This paper proposes a weighted betweenness centrality measure that incorporates monthly passenger flow volume. We apply the proposed measure to the Seoul metropolitan subway network, which comprises 493 stations and 16 lines. Several interesting insights about the network are derived from the new measure. Using the Kolmogorov-Smirnov test, we also find that the monthly passenger flow between any two stations follows a power-law distribution, and that other traffic characteristics such as congestion level and through-flow traffic follow exponential distributions.
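
A minimal sketch (toy network, assumed flows, not the authors' exact formula) of a flow-weighted betweenness: each origin-destination pair contributes its monthly passenger volume to every intermediate station on its shortest path.

import itertools
import networkx as nx

G = nx.path_graph(6)                                   # toy line with stations 0..5
flow = {(s, t): 100 * abs(s - t)                       # placeholder monthly OD volumes
        for s, t in itertools.permutations(G.nodes, 2)}

weighted_bc = dict.fromkeys(G.nodes, 0.0)
for (s, t), volume in flow.items():
    path = nx.shortest_path(G, s, t)
    for v in path[1:-1]:                               # intermediate stations only
        weighted_bc[v] += volume

print(weighted_bc)                                     # central stations accumulate the most flow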

Keywords: Betweenness centrality, correlation coefficient, power-law distribution, Korea traffic data base.

Downloads: 1000
3784 Resilience Assessment for Power Distribution Systems

Authors: Berna Eren Tokgoz, Mahdi Safa, Seokyon Hwang

Abstract:

Power distribution systems are essential and crucial infrastructures for the development and maintenance of a sustainable society. These systems are extremely vulnerable to various types of natural and man-made disasters. The assessment of resilience focuses on preparedness and mitigation actions under pre-disaster conditions. It also concentrates on response and recovery actions under post-disaster situations. The aim of this study is to present a methodology to assess the resilience of electric power distribution poles against wind-related events. The proposed methodology can improve the accuracy and rapidity of the evaluation of the conditions and the assessment of the resilience of poles. The methodology provides a metric for the evaluation of the resilience of poles under pre-disaster and post-disaster conditions. The metric was developed using mathematical expressions for physical forces that involve various variables, such as physical dimensions of the pole, the inclination of the pole, and wind speed. A three-dimensional imaging technology (photogrammetry) was used to determine the inclination of poles. Based on expert opinion, the proposed metric was used to define zones to visualize resilience. Visual representation of resilience is helpful for decision makers to prioritize their resources before and after experiencing a wind-related disaster. Multiple electric poles in the City of Beaumont, TX were used in a case study to evaluate the proposed methodology.  
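
A minimal sketch of the kind of physical expression such a metric can build on (assumed drag-type loading and an assumed inclination penalty, not the authors' calibrated formula): the wind force on a pole, the resulting base moment, and a capacity-to-demand ratio reduced by the photogrammetry-derived inclination.

import math

def wind_load_moment(wind_speed, pole_height, pole_diameter, air_density=1.225, drag_coeff=1.2):
    """Drag force F = 0.5 * rho * v^2 * Cd * A on the projected area, applied at mid-height."""
    area = pole_height * pole_diameter
    force = 0.5 * air_density * wind_speed ** 2 * drag_coeff * area
    return force * pole_height / 2.0               # overturning moment at the base, N*m

def resilience_ratio(capacity_moment, wind_speed, pole_height, pole_diameter, inclination_deg):
    demand = wind_load_moment(wind_speed, pole_height, pole_diameter)
    effective_capacity = capacity_moment * math.cos(math.radians(inclination_deg))  # assumed penalty for leaning poles
    return effective_capacity / demand             # ratios above 1 suggest the pole should survive

print(round(resilience_ratio(60e3, wind_speed=40.0, pole_height=10.0, pole_diameter=0.3, inclination_deg=5.0), 2))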

Keywords: Photogrammetry, power distribution systems, resilience metric, system resilience, wind-related disasters.

Downloads: 1420
3783 Frequent Itemset Mining Using Rough-Sets

Authors: Usman Qamar, Younus Javed

Abstract:

Frequent pattern mining is the process of finding a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set. It was proposed in the context of frequent itemsets and association rule mining. Frequent pattern mining is used to find inherent regularities in data, for example which products are often purchased together. Its applications include basket data analysis, cross-marketing, catalog design, sale campaign analysis, Web log (click stream) analysis, and DNA sequence analysis. However, one of the bottlenecks of frequent itemset mining is that, as the data grow, the amount of time and resources required to mine them increases at an exponential rate. In this investigation a new algorithm is proposed which can be used as a pre-processor for frequent itemset mining. FASTER (FeAture SelecTion using Entropy and Rough sets) is a hybrid pre-processing algorithm which utilizes entropy and rough sets to carry out record reduction and feature (attribute) selection, respectively. FASTER can produce a speed-up of 3.1 times over the original frequent itemset mining algorithm while maintaining an accuracy of 71%.
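
A minimal sketch of the entropy side of the idea (synthetic data, not the FASTER algorithm itself): attributes are ranked by information gain so that uninformative ones can be dropped before the expensive frequent itemset mining step.

import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_gain(feature_values, labels):
    gain = entropy(labels)
    for v in np.unique(feature_values):
        mask = feature_values == v
        gain -= mask.mean() * entropy(labels[mask])
    return gain

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 500)
informative = labels ^ (rng.random(500) < 0.1)        # mostly tracks the class label
noise = rng.integers(0, 2, 500)                       # unrelated attribute
print(round(information_gain(informative, labels), 3), round(information_gain(noise, labels), 3))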

Keywords: Rough-sets, Classification, Feature Selection, Entropy, Outliers, Frequent itemset mining.

Downloads: 2433
3782 Improving the Analytical Power of Dynamic DEA Models, by the Consideration of the Shape of the Distribution of Inputs/Outputs Data: A Linear Piecewise Decomposition Approach

Authors: Elias K. Maragos, Petros E. Maravelakis

Abstract:

In Dynamic Data Envelopment Analysis (DDEA), which is a subfield of Data Envelopment Analysis (DEA), the productivity of Decision Making Units (DMUs) is considered in relation to time. In this case, as accepted by most researchers, there are outputs produced by a DMU in one period that are used as inputs in a future period; those outputs are known as intermediates. The common DDEA models do not take into account the shape of the distribution of the input, output or intermediate data, assuming that the distribution of their virtual values does not deviate from linearity. This weakness limits the accuracy and analytical power of the traditional DDEA models. In this paper, the authors, using the concept of piecewise linear inputs and outputs, propose an extended DDEA model. The proposed model increases the flexibility of the traditional DDEA models and improves the measurement of the dynamic performance of DMUs.

Keywords: Data envelopment analysis, Dynamic DEA, Piecewise linear inputs, Piecewise linear outputs.

Downloads: 654
3781 Validation on 3D Surface Roughness Algorithm for Measuring Roughness of Psoriasis Lesion

Authors: M.H. Ahmad Fadzil, Esa Prakasa, Hurriyatul Fitriyah, Hermawan Nugroho, Azura Mohd Affandi, S.H. Hussein

Abstract:

Psoriasis is a widespread skin disease affecting up to 2% of the population, with plaque psoriasis accounting for about 80% of cases. It can be identified as a red lesion and, at higher severity, the lesion is usually covered with rough scale. Psoriasis Area Severity Index (PASI) scoring is the gold standard method for measuring psoriasis severity, and scaliness is one of the PASI parameters that must be quantified. The surface roughness of a lesion can be used as a scaliness feature, since scale on the lesion surface makes the lesion rougher. Dermatologists usually assess severity through their tactile sense, so direct contact between doctor and patient is required, and the assessment may not be objective. In this paper, a digital image analysis technique is developed to objectively determine the scaliness of the psoriasis lesion and provide the PASI scaliness score. The psoriasis lesion is modelled by a rough surface, created by superimposing a triangular waveform on a smooth average (curved) surface. For roughness determination, polynomial surface fitting is used to estimate the average surface, and the average surface is subtracted from the rough surface to give the elevation surface (surface deviations). The roughness index is calculated by applying the average roughness equation to the height map matrix. The roughness algorithm has been tested on 444 lesion models; only 6 models could not be accepted (percentage error greater than 10%), and these errors are due to the scanned image quality. The algorithm is also validated by roughness measurements on flat abrasive papers. The Pearson correlation coefficient between the grade value (G) of the abrasive paper and Ra is -0.9488, which shows a strong relation between G and Ra. The algorithm needs to be improved by surface filtering, especially to overcome problems with noisy data.
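
A minimal sketch of the roughness pipeline described above, on a synthetic height map: fit a low-order polynomial "average" surface by least squares, subtract it to obtain the elevation (deviation) surface, and take the mean absolute deviation as the average roughness Ra.

import numpy as np

ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx].astype(float)
height = 0.002 * (x - nx / 2) ** 2 + 0.3 * np.abs((x % 8) - 4)   # smooth base plus a triangular texture

# Least-squares fit of a 2nd-order polynomial surface z ~ a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2.
X, Y, Z = x.ravel(), y.ravel(), height.ravel()
basis = np.column_stack([np.ones_like(X), X, Y, X ** 2, X * Y, Y ** 2])
coeffs, *_ = np.linalg.lstsq(basis, Z, rcond=None)
average_surface = (basis @ coeffs).reshape(height.shape)

elevation = height - average_surface        # surface deviations
Ra = np.mean(np.abs(elevation))             # average roughness
print(f"Ra = {Ra:.4f}")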

Keywords: psoriasis, roughness algorithm, polynomial surface fitting.

Downloads: 2490
3780 Robust Face Recognition using AAM and Gabor Features

Authors: Sanghoon Kim, Sun-Tae Chung, Souhwan Jung, Seoungseon Jeon, Jaemin Kim, Seongwon Cho

Abstract:

In this paper, we propose a face recognition algorithm using AAM and Gabor features. Gabor feature vectors, which are well known to be robust to small variations of shape, scaling, rotation, distortion, illumination and pose, are popularly employed as feature vectors in many object detection and recognition algorithms. EBGM, which is prominent among face recognition algorithms employing Gabor feature vectors, requires localization of the facial feature points at which Gabor feature vectors are extracted. However, the localization method employed in EBGM is based on Gabor jet similarity and is sensitive to initial values, and wrong localization of facial feature points degrades the face recognition rate. AAM is known to be successful at localizing facial feature points. In this paper, we devise a facial feature point localization method which first roughly estimates the facial feature points using AAM and then refines them using Gabor jet similarity-based localization initialized with the rough estimates, and we propose a face recognition algorithm that combines this localization method with Gabor feature vectors. It is observed through experiments that such a cascaded localization method based on both AAM and Gabor jet similarity is more robust than localization based on Gabor jet similarity alone. It is also shown that the proposed face recognition algorithm performs better than conventional algorithms such as EBGM, which use Gabor jet similarity-based localization and Gabor feature vectors.
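
A minimal sketch (standard Gabor formulation, not the paper's full AAM pipeline): a real Gabor kernel and the small feature vector obtained by filtering an image patch at several orientations, the kind of descriptor extracted at facial feature points once they have been localized.

import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=8.0, gamma=0.5):
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(x_t ** 2 + gamma ** 2 * y_t ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * x_t / lambd)

patch = np.random.default_rng(0).random((21, 21))     # placeholder patch around a facial landmark
features = [float(np.sum(patch * gabor_kernel(theta=t)))
            for t in np.linspace(0, np.pi, 8, endpoint=False)]
print(np.round(features, 3))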

Keywords: Face Recognition, AAM, Gabor features, EBGM.

Downloads: 2205
3779 Multinomial Dirichlet Gaussian Process Model for Classification of Multidimensional Data

Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park

Abstract:

We present a probabilistic multinomial Dirichlet classification model for multidimensional data with Gaussian process priors. We consider an efficient computational method that can be used to obtain the approximate posteriors of the latent variables and parameters needed to define the multiclass Gaussian process classification model. We first investigate the process of inducing a posterior distribution for the various parameters and the latent function by using variational Bayesian approximation and importance sampling, and then derive the predictive distribution of the latent function needed to classify new samples. The proposed model is applied to a synthetic multivariate dataset in order to verify its performance. Experimental results show that our model is more accurate than the other approximation methods.

Keywords: Multinomial Dirichlet classification model, Gaussian process priors, variational Bayesian approximation, importance sampling, approximate posterior distribution, marginal likelihood evidence.

Downloads: 1613