Search results for: multi-objective evolutionary algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3836

3476 Modern Imputation Technique for Missing Data in Linear Functional Relationship Model

Authors: Adilah Abdul Ghapor, Yong Zulina Zubairi, Rahmatullah Imon

Abstract:

The missing value problem is common in statistics and has been of interest to researchers for years. This article considers two modern techniques for handling missing data in the linear functional relationship model (LFRM), namely the Expectation-Maximization (EM) algorithm and the Expectation-Maximization with Bootstrapping (EMB) algorithm, using three performance indicators: the mean absolute error (MAE), root mean square error (RMSE) and estimated bias (EB). In this study, we apply these methods to impute missing values in the LFRM. Results of the simulation study suggest that the EMB algorithm performs much better than the EM algorithm in both models. We also illustrate the applicability of the approach on a real data set.
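
As a rough illustration of EM-style imputation (not the authors' LFRM implementation), the sketch below iterates between estimating a multivariate Gaussian and filling missing entries with their conditional means; the EMB variant would additionally wrap this loop in bootstrap resampling. The function name and the Gaussian model are assumptions for illustration only.

```python
import numpy as np

def em_impute(X, n_iter=50, tol=1e-6):
    """EM-style imputation sketch under a multivariate Gaussian model."""
    X = X.astype(float).copy()
    miss = np.isnan(X)
    # initialise missing entries with column means
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # ridge for stability
        X_old = X.copy()
        for i in range(X.shape[0]):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            # E-step: conditional mean of missing given observed entries
            coef = cov[np.ix_(m, o)] @ np.linalg.inv(cov[np.ix_(o, o)])
            X[i, m] = mu[m] + coef @ (X[i, o] - mu[o])
        if np.max(np.abs(X - X_old)) < tol:
            break
    return X
```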

Keywords: expectation-maximization, expectation-maximization with bootstrapping, linear functional relationship model, performance indicators

Procedia PDF Downloads 395
3475 Tank Barrel Surface Damage Detection Algorithm

Authors: Tomáš Dyk, Stanislav Procházka, Martin Drahanský

Abstract:

The article proposes a new algorithm for detecting damaged areas of a tank barrel based on an image of the inner surface of the barrel. The damage position is calculated using image processing techniques such as edge detection, discrete wavelet transformation and image segmentation for accurate contour detection. The algorithm can detect surface damage in smoothbore and even in rifled tank barrels. The algorithm also calculates the volume of the detected damage from a depth map generated, for example, by a distance measurement unit. The proposed method was tested on data obtained by a tank barrel scanning device, which generates both surface image data and a depth map. The article also discusses tank barrel scanning devices and how surface damage impacts material resistance.
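
The contour-detection step can be sketched with standard image-processing primitives. The snippet below is a minimal stand-in (Canny edges plus contour extraction, OpenCV 4.x) rather than the authors' wavelet-based pipeline; detect_damage, the thresholds, and the unit-pixel-area volume approximation are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_damage(surface_img, depth_map, min_area=50):
    """Locate damage contours and estimate their volume from a depth map."""
    blur = cv2.GaussianBlur(surface_img, (5, 5), 0)
    edges = cv2.Canny(blur, 50, 150)
    edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    results = []
    for c in contours:
        if cv2.contourArea(c) < min_area:        # ignore tiny noise contours
            continue
        mask = np.zeros(surface_img.shape[:2], np.uint8)
        cv2.drawContours(mask, [c], -1, 255, -1)
        # crude volume estimate: sum of depths inside the contour,
        # assuming unit pixel area (a simplification)
        volume = float(depth_map[mask == 255].sum())
        x, y, w, h = cv2.boundingRect(c)
        results.append({"bbox": (x, y, w, h), "volume": volume})
    return results
```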

Keywords: barrel, barrel diagnostic, image processing, surface damage detection, tank

Procedia PDF Downloads 135
3474 DCT and Stream Ciphers for Improved Image Encryption Mechanism

Authors: T. R. Sharika, Ashwini Kumar, Kamal Bijlani

Abstract:

Encryption is the process of rendering crucial information unreadable to unauthorized persons. Image encryption is an important branch of security that protects all types of images from cryptanalysis. A stream cipher is a fast symmetric-key algorithm which is used to convert plaintext to ciphertext. In this paper we propose an image encryption algorithm combining the Discrete Cosine Transform (DCT) and stream ciphers that can improve compression of images and enhance security. The paper also explains the use of a shuffling algorithm for further enhancing security.
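
A minimal sketch of the core idea, assuming RC4 as the stream cipher (it appears in the keywords) and a 2-D DCT for the transform stage; quantisation and key handling are simplified, and the shuffling step is omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn

def rc4_keystream(key, n):
    """Standard RC4 key scheduling plus keystream generation."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = np.empty(n, dtype=np.uint8)
    for k in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out[k] = S[(S[i] + S[j]) % 256]
    return out

def encrypt_image(img, key=b"secret"):
    # DCT concentrates energy in few coefficients (the compression angle);
    # the coefficient bytes are then XORed with the RC4 keystream
    coeffs = dctn(img.astype(float), norm="ortho")
    q = np.round(coeffs).astype(np.int32)       # lossy quantisation step
    flat = q.view(np.uint8).ravel()             # raw bytes of the coefficients
    ks = rc4_keystream(list(key), flat.size)
    return flat ^ ks, q.shape

def decrypt_image(enc_bytes, shape, key=b"secret"):
    ks = rc4_keystream(list(key), enc_bytes.size)
    q = (enc_bytes ^ ks).view(np.int32).reshape(shape)
    return idctn(q.astype(float), norm="ortho")
```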

Keywords: decryption, DCT, encryption, RC4 cipher, stream cipher

Procedia PDF Downloads 357
3473 A Neural Network Based Clustering Approach for Imputing Multivariate Values in Big Data

Authors: S. Nickolas, Shobha K.

Abstract:

The treatment of incomplete data is an important step in data pre-processing. Missing values create a noisy environment in all applications and are an unavoidable problem in big data management and analysis. Numerous techniques like discarding rows with missing values, mean imputation, expectation maximization, neural networks with evolutionary algorithms or optimized techniques, and hot deck imputation have been introduced by researchers for handling missing data. Among these, imputation techniques play a positive role in filling in missing values when it is necessary to use all records in the data rather than discard records with missing values. In this paper we propose a novel artificial neural network based clustering algorithm, Adaptive Resonance Theory-2 (ART2), for imputation of missing values in mixed-attribute data sets. ART2 can recognize learned models quickly and adapt to new objects rapidly. It carries out model-based clustering by using competitive learning and a self-stabilizing mechanism in dynamic environments without supervision. The proposed approach not only imputes the missing values but also provides information about handling outliers.
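
A hedged sketch of the cluster-then-impute idea: k-means stands in for ART2 here (real ART2 grows its cluster count adaptively via a vigilance test and handles mixed attributes, which this toy version does not). cluster_impute and its defaults are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_impute(X, n_clusters=3):
    """Cluster-based imputation sketch (k-means as a stand-in for ART2)."""
    X = X.astype(float)
    miss = np.isnan(X)
    # provisional fill with column means so clustering can run
    filled = np.where(miss, np.nanmean(X, axis=0), X)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(filled)
    out = filled.copy()
    for c in range(n_clusters):
        idx = labels == c
        # replace each missing entry with its cluster's mean
        out[idx] = np.where(miss[idx], filled[idx].mean(axis=0), X[idx])
    return out
```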

Keywords: ART2, data imputation, clustering, missing data, neural network, pre-processing

Procedia PDF Downloads 272
3472 Dual-Channel Multi-Band Spectral Subtraction Algorithm Dedicated to a Bilateral Cochlear Implant

Authors: Fathi Kallel, Ahmed Ben Hamida, Christian Berger-Vachon

Abstract:

In this paper, a Speech Enhancement Algorithm based on the Multi-Band Spectral Subtraction (MBSS) principle is evaluated for Bilateral Cochlear Implant (BCI) users. Specifically, a dual-channel noise power spectral estimation algorithm using the Power Spectral Densities (PSD) and Cross Power Spectral Densities (CPSD) of the observed signals is studied. The enhanced speech signal is obtained using the Dual-Channel Multi-Band Spectral Subtraction (DC-MBSS) algorithm. For performance evaluation, an objective speech assessment test relying on the Perceptual Evaluation of Speech Quality (PESQ) score is performed to determine the optimal number of frequency bands needed in the DC-MBSS algorithm. In order to evaluate speech intelligibility, subjective listening tests are conducted with 3 deafened BCI patients. Experimental results obtained using the French Lafon database corrupted by additive babble noise at different Signal-to-Noise Ratios (SNR) showed that the DC-MBSS algorithm improves speech understanding for single and multiple interfering noise sources.
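
A single-channel, single-frame sketch of multi-band spectral subtraction; the dual-channel PSD/CPSD noise estimator and the exact band and over-subtraction schedule of DC-MBSS are not reproduced, and the SNR-dependent factor below is illustrative.

```python
import numpy as np

def mbss_frame(frame_fft, noise_psd, n_bands=4, alpha=4.0, beta=0.01):
    """One-frame multi-band spectral subtraction sketch."""
    mag2 = np.abs(frame_fft) ** 2
    out = mag2.copy()
    edges = np.linspace(0, len(mag2), n_bands + 1, dtype=int)
    for b in range(n_bands):
        s = slice(edges[b], edges[b + 1])
        snr = 10 * np.log10(mag2[s].sum() / (noise_psd[s].sum() + 1e-12))
        # band-dependent over-subtraction: subtract more at low SNR
        delta = np.clip(alpha - 0.15 * snr, 1.0, 5.0)
        # spectral floor beta prevents negative (musical-noise) values
        out[s] = np.maximum(mag2[s] - delta * noise_psd[s], beta * mag2[s])
    # recombine enhanced magnitude with the noisy phase
    return np.sqrt(out) * np.exp(1j * np.angle(frame_fft))
```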

Keywords: speech enhancement, spectral subtraction, noise estimation, cochlear implant

Procedia PDF Downloads 544
3471 Recognition of Tifinagh Characters with Missing Parts Using Neural Network

Authors: El Mahdi Barrah, Said Safi, Abdessamad Malaoui

Abstract:

In this paper, we present an algorithm for reconstructing Tifinagh characters from incomplete 2D scans. This algorithm is based on using the correlation between the lost block and its neighbors. The proposed system contains three main parts: pre-processing, feature extraction and recognition. In the first step, we construct a database of Tifinagh characters. In the second step, we apply a shape analysis algorithm. In the classification part, we use a neural network. The simulation results demonstrate that the proposed method gives good results.

Keywords: Tifinagh character recognition, neural networks, local cost computation, ANN

Procedia PDF Downloads 329
3470 Reliability Improvement of Power System Networks Using Adaptive Genetic Algorithm

Authors: Alireza Alesaadi

Abstract:

Reliability analysis is a powerful method for determining the weak points of electrical networks. In designing an electrical network, the aim is to obtain the most reliable network with minimal system downtime, but this is usually associated with increased cost. In this paper, using an adaptive genetic algorithm, a method is presented that provides the most reliable system at a given economical cost. Finally, the proposed method is applied to a sample network and the results are analyzed.
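
A generic sketch of a genetic algorithm whose mutation rate adapts to population convergence, which is one common reading of "adaptive GA"; the fitness function, encoding, and adaptation rule are assumptions, not the paper's formulation.

```python
import random

def adaptive_ga(fitness, n_genes, pop_size=40, gens=100):
    """GA sketch: mutation rate rises when the population has converged."""
    pop = [[random.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        diversity = fitness(scored[0]) - fitness(scored[-1])
        # adapt mutation: explore more once fitness values have collapsed
        p_mut = 0.01 if diversity > 1e-3 else 0.1
        parents = scored[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_genes)       # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([min(1.0, max(0.0, x + random.gauss(0, 0.1)))
                             if random.random() < p_mut else x for x in child])
        pop = parents + children
    return max(pop, key=fitness)
```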

Keywords: reliability, adaptive genetic algorithm, electrical network, communication engineering

Procedia PDF Downloads 500
3469 Design Data Sorter Circuit Using Insertion Sorting Algorithm

Authors: Hoda Abugharsa

Abstract:

In this paper we propose the design of a sorter circuit using the insertion sort algorithm. The circuit will be designed using the Algorithmic State Machine (ASM) method, which means converting the insertion sort flowchart into an ASM chart. The ASM chart will then be used to design the sorter circuit and the control unit.
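
The circuit's behaviour follows the classic insertion sort flowchart; as a software reference for that flowchart (the paper itself targets an ASM-based hardware design), the algorithm is:

```python
def insertion_sort(a):
    """Software reference for the sorter circuit's behaviour."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                 # drop the key into its slot
    return a
```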

Keywords: insertion sort algorithm, ASM chart, sorter circuit, state machine, control unit

Procedia PDF Downloads 441
3468 Product Development in Company

Authors: Giorgi Methodishvili, Iuliia Methodishvili

Abstract:

In this paper, a product development algorithm is used to determine the optimal management of financial resources in a company. Aspects of financial management considered include the initial investment, the examination of all possible ways to solve the problem, and the optimal rotation length of profit. The software for the given problems is based on a greedy algorithm. The obtained model and program enable us to define the optimal management of the relevant financial flows by using a visual diagram at each level of investment.

Keywords: management, software, optimal, greedy algorithm, graph-diagram

Procedia PDF Downloads 52
3467 Automatic Classification Using Dynamic Fuzzy C Means Algorithm and Mathematical Morphology: Application in 3D MRI Image

Authors: Abdelkhalek Bakkari

Abstract:

Image segmentation is a critical step in image processing and pattern recognition. In this paper, we propose a new robust automatic image classification method based on a dynamic fuzzy c-means algorithm and mathematical morphology. The proposed segmentation algorithm (DFCM_MM) has been applied to MR perfusion images. The obtained results show the validity and robustness of the proposed approach.
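
For reference, the core fuzzy c-means iteration can be sketched as follows (with a fixed number of clusters; the paper's dynamic variant adapts the cluster count, and the mathematical-morphology post-processing is omitted):

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, tol=1e-5):
    """Plain fuzzy c-means on feature vectors X (n_samples x n_features)."""
    n = X.shape[0]
    U = np.random.dirichlet(np.ones(c), size=n)        # soft memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = d_ik^(-2/(m-1)) / sum_j d_ij^(-2/(m-1))
        e = 2 / (m - 1)
        U_new = 1.0 / (d ** e * (1.0 / d ** e).sum(axis=1, keepdims=True))
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```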

Keywords: segmentation, classification, dynamic, fuzzy c-means, MR image

Procedia PDF Downloads 472
3466 Autonomous Control of Ultrasonic Transducer Drive System

Authors: Dong-Keun Jeong, Jong-Hyun Kim, Woon-Ha Yoon, Hee-Je Kim

Abstract:

In order to automatically operate an ultrasonic transducer drive system for sonicating aluminum, this paper proposes a sensorless control algorithm for the ultrasonic transducer. Resonance frequency shift and electrical impedance change are common phenomena in the operation of an ultrasonic transducer. The proposed control algorithm makes use of the change in transducer impedance between the air state and the aluminum alloy state, and controls the ultrasonic transducer drive system autonomously without a sensor. The proposed sensorless autonomous control algorithm was experimentally verified using a 3 kW prototype ultrasonic transducer drive system.

Keywords: ultrasonic transducer drive system, impedance change, sensorless, autonomous control algorithm

Procedia PDF Downloads 358
3465 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes

Authors: Igor A. Krichtafovitch

Abstract:

The evolutionary processes are not linear. Long periods of quiet and slow development turn into rather rapid emergences of new species and even phyla. During the Cambrian explosion, 22 new phyla were added to the previously existing 3 phyla. Contrary to common belief, natural selection or survival of the fittest cannot account for the dominant evolution vector, which is the steady and accelerating advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, propose a satisfactory explanation for these phenomena. The proposed hypothesis offers a logical and plausible explanation of the evolutionary processes in general. It is based on two postulates: a) the Biosphere is a single living organism, all parts of which are interconnected, and b) the Biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not survival of the fittest but the accelerated growth of the computational complexity of living organisms. The following postulates summarize the proposed hypothesis: biological evolution as a natural origin and development of life is a reality. Evolution is a coordinated and controlled process. One of evolution's main development vectors is the growing computational complexity of living organisms and of the biosphere's intelligence. The intelligent matter which conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth. The information acts like software stored in and controlled by the biosphere. Random mutations trigger this software, as stipulated by Darwinian evolution theories, and it is further stimulated by the growing demand for the Biosphere's global memory storage and computational complexity. A greater memory volume requires a greater number of more intellectually advanced organisms for storing and handling it. More intricate organisms require greater computational complexity of the biosphere in order to keep control over the living world. This is an endless recursive endeavor with accelerating evolutionary dynamics. New species emerge when two conditions are met: a) crucial environmental changes occur and/or the global memory storage volume reaches its limit, and b) the biosphere's computational complexity reaches the critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of the creation and evolution of life. The hypothesis logically resolves many puzzling problems of the current state of evolution theory, such as speciation (as a result of GM purposeful design), the evolutionary development vector (as a need for growing global intelligence), punctuated equilibrium (happening when the two conditions a) and b) above are met), the Cambrian explosion, and mass extinctions (happening when more intelligent species should replace outdated creatures).

Keywords: supercomputer, biological evolution, Darwinism, speciation

Procedia PDF Downloads 162
3464 A Hybrid-Evolutionary Optimizer for Modeling the Process of Obtaining Bricks

Authors: Marius Gavrilescu, Sabina-Adriana Floria, Florin Leon, Silvia Curteanu, Costel Anton

Abstract:

Natural sciences provide a wide range of experimental data whose related problems require study and modeling beyond the capabilities of conventional methodologies. Such problems have solution spaces whose complexity and high dimensionality require correspondingly complex regression methods for proper characterization. In this context, we propose an optimization method which consists of a hybrid dual-optimizer setup: a global optimizer based on a modified variant of the popular Imperialist Competitive Algorithm (ICA), and a local optimizer based on a gradient descent approach. The ICA is modified such that intermediate solution populations are more quickly and efficiently pruned of low-fitness individuals by appropriately altering the assimilation, revolution and competition phases, which, combined with an initialization strategy based on low-discrepancy sampling, allows for a more effective exploration of the corresponding solution space. Subsequently, gradient-based optimization is used locally to seek the optimal solution in the neighborhoods of the solutions found through the modified ICA. We use this combined approach to find the optimal configuration and weights of a fully-connected neural network, resulting in regression models used to characterize the process of obtaining bricks using silicon-based materials. Installations in the raw ceramics industry, i.e., brick-making, are characterized by significant energy consumption and large quantities of emissions. Thus, the purpose of our approach is to determine by simulation the working conditions, including the manufacturing mix recipe with the addition of different materials, that minimize the emissions represented by CO and CH4. Our approach determines regression models which perform significantly better than those found using the traditional ICA for the aforementioned problem, resulting in better convergence and a substantially lower error.
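
A toy sketch of the dual-optimizer idea: a global sampling phase (standing in for the modified ICA) followed by local gradient descent around the best candidate. The sampling scheme, step size, and function names are assumptions; the paper's assimilation, revolution and competition mechanics are not reproduced.

```python
import numpy as np

def hybrid_optimize(f, grad_f, dim, n_global=200, n_local=100, lr=0.01):
    """Global search followed by local gradient refinement (ICA simplified
    to a one-shot sampling phase for illustration)."""
    rng = np.random.default_rng(0)
    candidates = rng.uniform(-1, 1, size=(n_global, dim))
    best = min(candidates, key=f)            # global phase: best sample wins
    x = best.copy()
    for _ in range(n_local):                 # local phase: gradient descent
        x -= lr * grad_f(x)
    return x if f(x) < f(best) else best
```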

Keywords: optimization, biologically inspired algorithm, regression models, bricks, emissions

Procedia PDF Downloads 77
3463 Detect Circles in Image: Using Statistical Image Analysis

Authors: Fathi M. O. Hamed, Salma F. Elkofhaifee

Abstract:

The aim of this work is to detect geometrical shape objects in an image. In this paper, the object is considered to be a circle. The identification requires finding three characteristics: the number, size, and location of the objects. To achieve the goal of this work, the paper presents an algorithm that combines several statistical approaches with image analysis techniques. The algorithm has been implemented and evaluated using simulated data, where it yields good results, and has then been applied to real data.
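
As a stand-in for the statistical approach (which the abstract does not fully specify), a median filter followed by a Hough transform recovers the same three characteristics; the parameters below are illustrative, and the input is assumed to be an 8-bit grayscale image.

```python
import cv2
import numpy as np

def find_circles(img):
    """Median filter + Hough transform sketch for circle detection."""
    den = cv2.medianBlur(img, 5)                      # noise suppression
    circles = cv2.HoughCircles(den, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=100, param2=30, minRadius=5, maxRadius=0)
    if circles is None:
        return []
    # each row is (x, y, r): location and size; the row count is the number
    return [tuple(c) for c in np.round(circles[0]).astype(int)]
```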

Keywords: image processing, median filter, projection, scale-space, segmentation, threshold

Procedia PDF Downloads 427
3462 Using Self Organizing Feature Maps for Automatic Prostate Segmentation in TRUS Images

Authors: Ahad Salimi, Hassan Masoumi

Abstract:

Prostate cancer is one of the most commonly diagnosed cancers in men and one of the most important causes of cancer mortality in this group. Determining the prostate boundary in TRUS (Transrectal Ultrasound) images is essential for prostate cancer treatment. Weak edges and speckle noise make ultrasound images inherently difficult to segment. In this paper, a new automatic algorithm for prostate segmentation in TRUS images is proposed that includes three main stages. First, morphological smoothing and stick filtering are used for noise removal. In the second step, the SOFM algorithm is used to find a point inside the prostate region, and in the last step, the prostate boundary is extracted using an active contour. For validation of the proposed method, a number of experiments are conducted. The results obtained by our algorithm show the promise of the proposed method.

Keywords: SOFM, preprocessing, GVF contour, segmentation

Procedia PDF Downloads 323
3461 One-Dimension Model for Positive Displacement Pump with Cavitation Algorithm

Authors: Francesco Rizzuto, Matthew Stickland, Stephan Hannot

Abstract:

Simulating a positive displacement pump system with commercial Computational Fluid Dynamics (CFD) software results in an enormous computational effort due to the complexity of the pump system. This drawback restricts its use to a specific part of the pump in any one simulation. This research focuses on developing an algorithm that provides a suitable result in agreement with experimental data, without that computational effort. The compressible equations are solved with an explicit algorithm. A comparison is presented between the FV method with the Monotonic Upwind Scheme for Conservation Laws (MUSCL) with slope limiter and experimental results. The source terms for cavitation and friction are introduced into the algorithm with a splitting strategy and solved with a 4th-order Runge-Kutta scheme (RK4). Different pumps are modeled and analyzed to evaluate the flexibility of the code. The simulation required minimal computation time and resources without compromising the accuracy of the simulation results. Therefore, this algorithm highlights the feasibility of pressure pulsation simulation as a design tool for industrial purposes.
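
The RK4 scheme used for the source terms is standard; a generic single step looks like this (the cavitation and friction source functions themselves are pump-specific and not shown):

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One explicit 4th-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```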

Keywords: cavitation, diaphragm, DVCM, finite volume, MUSCL, positive displacement pump

Procedia PDF Downloads 151
3460 Improving Temporal Correlations in Empirical Orthogonal Function Expansions for Data Interpolating Empirical Orthogonal Function Algorithm

Authors: Ping Bo, Meng Yunshan

Abstract:

Satellite-derived sea surface temperature (SST) is a key parameter for many operational and scientific applications. However, a disadvantage of SST data is the high percentage of missing data, which is mainly caused by cloud coverage. The Data Interpolating Empirical Orthogonal Function (DINEOF) algorithm is an EOF-based technique for reconstructing missing data and has been widely used in the oceanographic field. The reconstruction of SST images within a long time series using DINEOF can cause large discontinuities, and one solution to this problem is to filter the temporal covariance matrix to reduce the spurious variability. Building on previous research, an algorithm is presented in this paper to improve the temporal correlations in the EOF expansion. As in previous research, a filter, such as a Laplacian filter, is applied to the temporal covariance matrix, but the presented algorithm takes into account the temporal relationship between the two consecutive images used in the filter; for example, two images in the same season are more likely to be correlated than those in different seasons, so the latter are weighted less in the filter. The presented approach is tested on the monthly nighttime 4-km Advanced Very High Resolution Radiometer (AVHRR) Pathfinder SST for the long-term period spanning from 1989 to 2006. The results obtained from the presented algorithm are compared to those from the original DINEOF algorithm without filtering and from the DINEOF algorithm with filtering but without taking the temporal relationship into account.
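
A hedged sketch of the key idea, seasonally weighted temporal filtering: neighbouring time slices of the covariance matrix are smoothed, with cross-season neighbours down-weighted. The weighting scheme and alpha below are illustrative assumptions, not the paper's exact filter.

```python
import numpy as np

def seasonal_laplacian_filter(C, months, alpha=0.1):
    """Laplacian-style smoothing of a temporal covariance matrix C, where
    neighbours from a different season (month) get half the weight."""
    n = C.shape[0]
    F = C.copy()
    for i in range(1, n - 1):
        wl = 1.0 if months[i - 1] == months[i] else 0.5
        wr = 1.0 if months[i + 1] == months[i] else 0.5
        # discrete Laplacian with season-dependent neighbour weights
        F[i] = C[i] + alpha * (wl * (C[i - 1] - C[i]) + wr * (C[i + 1] - C[i]))
    return F
```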

Keywords: data interpolating empirical orthogonal function, image reconstruction, sea surface temperature, temporal filter

Procedia PDF Downloads 321
3459 Multi-Level Priority Based Task Scheduling Algorithm for Workflows in Cloud Environment

Authors: Anju Bala, Inderveer Chana

Abstract:

Task scheduling is the key concern for the execution of performance-driven workflow applications. As efficient scheduling can have a major impact on the performance of the system, task scheduling is often chosen for assigning requests to resources in an efficient way based on cloud resource characteristics. In this paper, a priority-based task scheduling algorithm is proposed that prioritizes tasks based on the length of their instructions. The proposed scheduling approach prioritizes the tasks of cloud applications according to limits set by six sigma control charts based on dynamic threshold values. Further, the proposed algorithm has been validated through the CloudSim toolkit. The experimental results demonstrate that the proposed algorithm is effective for handling multiple task lists from workflows and considerably reduces makespan and execution time.
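
A minimal sketch of length-based prioritisation with control-chart limits; mean ± 3σ stands in for the six sigma thresholds (the paper derives dynamic thresholds, which are not reproduced here), and the level boundaries are illustrative.

```python
import statistics

def prioritize(tasks):
    """Assign priority levels from control-chart limits on task length.

    tasks: list of (name, length_in_instructions) tuples."""
    lengths = [t[1] for t in tasks]
    mu, sigma = statistics.mean(lengths), statistics.pstdev(lengths)
    ucl, lcl = mu + 3 * sigma, mu - 3 * sigma   # control-chart limits

    def level(n):
        if n < lcl:
            return 0          # shortest tasks scheduled first
        if n < mu:
            return 1
        if n <= ucl:
            return 2
        return 3              # longest tasks scheduled last

    return sorted(tasks, key=lambda t: (level(t[1]), t[1]))
```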

Keywords: cloud computing, priority based scheduling, task scheduling, VM allocation

Procedia PDF Downloads 514
3458 A Coordinate-Based Heuristic Route Search Algorithm for Delivery Truck Routing Problem

Authors: Ahmed Tarek, Ahmed Alveed

Abstract:

The vehicle routing problem is a well-known research avenue in computing. Modern vehicle routing is more focused on GPS-based coordinate systems, as state-of-the-art vehicle and trucking systems are equipped with digital navigation. In this paper, a new two-dimensional coordinate-based algorithm for addressing the vehicle routing problem for a supply chain network is proposed and explored, and the algorithm is compared with other available and recently devised heuristics. For the algorithms discussed, which include the proposed coordinate-based search heuristic, the advantages and disadvantages associated with each heuristic are explored. The proposed algorithm is studied from the standpoint of a small supermarket chain delivery network that supplies its stores in four different states around the East Coast area and is trying to optimize its trucking delivery cost. Minimizing the delivery cost for the supply network of a supermarket chain is important to ensure its business success.
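
For context on coordinate-based heuristics in general, a simple nearest-neighbour route construction over 2-D coordinates looks as follows; this is a generic baseline, not the paper's proposed heuristic.

```python
import math

def nearest_neighbour_route(depot, stops):
    """Greedy route sketch: always drive to the closest unvisited stop."""
    route, current = [depot], depot
    remaining = list(stops)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(depot)            # return to depot, closing the circuit
    return route
```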

Keywords: coordinate-based optimal routing, Hamiltonian Circuit, heuristic algorithm, traveling salesman problem, vehicle routing problem

Procedia PDF Downloads 145
3457 A Weighted K-Medoids Clustering Algorithm for Effective Stability in Vehicular Ad Hoc Networks

Authors: Rejab Hajlaoui, Tarek Moulahi, Hervé Guyennet

Abstract:

In a highway scenario, vehicle speed can exceed 120 km/h. Therefore, any vehicle can enter or leave the network within a very short time. This mobility adversely affects network connectivity and decreases the lifetime of all established links. To ensure effective stability in vehicular ad hoc networks with minimum broadcast storm, we have developed a weighted algorithm based on the k-medoids clustering algorithm (WKCA). Indeed, the number of clusters and the initial cluster heads are not selected randomly as usual, but by considering the available transmission range and the environment size. Then, to ensure optimal assignment of nodes to clusters in both k-medoids phases, the combined weight of each node is computed according to additional metrics including direction, relative speed and proximity. Empirical results prove that, in addition to the convergence speed that characterizes the k-medoids algorithm, our proposed model outperforms both the AODV-Clustering and OLSR-Clustering protocols under different densities and velocities in terms of end-to-end delay, packet delivery ratio, and throughput.
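
A sketch of k-medoids in which medoid selection is biased by a per-node weight, standing in for the combined direction/speed/proximity metric; the initialisation by transmission range and environment size described above is not reproduced, and weighted_kmedoids is an illustrative name.

```python
import numpy as np

def weighted_kmedoids(X, w, k, n_iter=20):
    """k-medoids sketch: candidate medoids are scored by distance to
    their cluster members, discounted by a node weight w (higher is better)."""
    rng = np.random.default_rng(0)
    medoids = rng.choice(len(X), size=k, replace=False)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None] - X[medoids][None], axis=2)
        labels = d.argmin(axis=1)                 # assign nodes to clusters
        new = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if members.size == 0:
                continue
            costs = [np.linalg.norm(X[members] - X[m], axis=1).sum() / w[m]
                     for m in members]
            new[c] = members[int(np.argmin(costs))]
        if np.array_equal(new, medoids):          # converged
            break
        medoids = new
    return medoids, labels
```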

Keywords: communication, clustering algorithm, k-medoids, sensor, vehicular ad hoc network

Procedia PDF Downloads 235
3456 Sweepline Algorithm for Voronoi Diagram of Polygonal Sites

Authors: Dmitry A. Koptelov, Leonid M. Mestetskiy

Abstract:

The Voronoi Diagram (VD) of a finite set of disjoint simple polygons, called sites, is a partition of the plane into loci (one locus for each site) – regions consisting of the points that are closer to a given site than to all others. A set of polygons is a universal model for many applications in engineering, geoinformatics, design, computer vision, and graphics. Construction of the VD of polygons is usually done by reduction to the task of constructing the VD of segments, for which effective O(n log n) algorithms exist for n segments. The reduction also includes preprocessing – constructing segments from the polygons' sides – and postprocessing – constructing each polygon's locus by merging the loci of its sides. This approach doesn't take into account two specific properties of the resulting segment sites. Firstly, all these segments are connected in pairs at the vertices of the polygons. Secondly, on one side of each segment lies the interior of the polygon. The polygon is obviously included in its locus. Using these properties in the VD construction algorithm is a resource for reducing computation. The article proposes an algorithm for the direct construction of the VD of polygonal sites. The algorithm is based on the sweepline paradigm, which allows these properties to be taken into account effectively. The solution is performed via reduction. Preprocessing constructs the set of sites from the vertices and edges of the polygons. Each site has an orientation such that the interior of the polygon lies to the left of it. The proposed algorithm constructs the VD for the set of oriented sites with the sweepline paradigm. Postprocessing selects the edges of this VD formed by the centers of empty circles touching different polygons. Improved efficiency of the proposed sweepline algorithm in comparison with the general Fortune algorithm is achieved due to the following fundamental solutions: 1. The algorithm constructs only those VD edges which are on the outside of the polygons. The concept of oriented sites avoids the construction of VD edges located inside the polygons. 2. The list of events in the sweepline algorithm has a special property: the majority of events are connected with 'medium' polygon vertices, where one incident polygon side lies behind the sweepline and the other in front of it. The proposed algorithm processes such events in constant time and not in logarithmic time, as in the general Fortune algorithm. The proposed algorithm is fully implemented and tested on a large number of examples. The high reliability and efficiency of the algorithm are also confirmed by computational experiments with complex sets of several thousand polygons. It should be noted that, despite the considerable time that has passed since the publication of Fortune's algorithm in 1986, a full-scale implementation of this algorithm for an arbitrary set of segment sites has not been made. The proposed algorithm fills this gap for an important special case – a set of sites formed by polygons.

Keywords: Voronoi diagram, sweepline, polygon sites, Fortune's algorithm, segment sites

Procedia PDF Downloads 173
3455 Real Time Video Based Smoke Detection Using Double Optical Flow Estimation

Authors: Anton Stadler, Thorsten Ike

Abstract:

In this paper, we present a video-based smoke detection algorithm based on TVL1 optical flow estimation. The main part of the algorithm is an accumulating system for motion angles and upward motion speed of the flow field. We optimized the usage of TVL1 flow estimation for the detection of smoke with very low smoke density. Therefore, we use adapted flow parameters and estimate the flow field on difference images. We show in theory and in evaluation that this improves the performance of smoke detection significantly. We evaluate the smoke algorithm using videos with different smoke densities and different backgrounds. We show that smoke detection is very reliable in varying scenarios. Furthermore, we verify that our algorithm is very robust towards disturbances such as crowded scenes.
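
The accumulation idea can be sketched with dense optical flow; Farneback flow is used below as a stand-in for TVL1 on difference images, and the angle and magnitude thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

def upward_motion_score(prev_gray, cur_gray, angle_tol=np.deg2rad(45)):
    """Accumulate upward-motion evidence from dense optical flow."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    fx, fy = flow[..., 0], flow[..., 1]
    mag = np.hypot(fx, fy)
    ang = np.arctan2(-fy, fx)          # image y-axis points down, so negate
    # pixels moving roughly upward with non-trivial speed
    upward = (np.abs(ang - np.pi / 2) < angle_tol) & (mag > 0.5)
    return upward.mean(), (mag[upward].mean() if upward.any() else 0.0)
```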

Keywords: low density, optical flow, upward smoke motion, video based smoke detection

Procedia PDF Downloads 350
3454 Anomaly Detection Based Fuzzy K-Mode Clustering for Categorical Data

Authors: Murat Yazici

Abstract:

Anomalies are irregularities found in data that do not adhere to a well-defined standard of normal behavior. The identification of outliers or anomalies in data has been a subject of study within the statistics field since the 1800s. Over time, a variety of anomaly detection techniques have been developed in several research communities. Cluster analysis can be used to detect anomalies; it is the process of grouping data into clusters whose members are as similar as possible while the clusters themselves are as dissimilar as possible. Many of the traditional clustering algorithms have limitations in dealing with data sets containing categorical attributes. To detect anomalies in categorical data, a fuzzy clustering approach can be used to advantage. The fuzzy k-modes (FKM) clustering algorithm, one of the fuzzy clustering approaches and an extension of the k-means algorithm, has been reported for clustering datasets with categorical values. It is a form of soft clustering: each point can be associated with more than one cluster. In this paper, anomaly detection is performed on two simulated data sets by using the FKM clustering algorithm. As a significant feature of the study, the FKM clustering algorithm allows anomalies to be determined together with their degree of abnormality, in contrast to numerous other anomaly detection algorithms. According to the results, the FKM clustering algorithm shows good performance in detecting anomalies, in data containing both a single anomaly and multiple anomalies.
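
A minimal fuzzy k-modes sketch for categorical arrays: matching dissimilarity replaces Euclidean distance, and a point whose maximum membership is low can be flagged as anomalous. The membership exponent and update details follow the standard fuzzy k-modes formulation, not necessarily the paper's exact variant.

```python
import numpy as np

def fuzzy_k_modes(X, k=2, m=1.5, n_iter=30):
    """Fuzzy k-modes for a categorical array X (n_samples x n_attrs)."""
    rng = np.random.default_rng(0)
    modes = X[rng.choice(len(X), k, replace=False)].copy()
    U = np.full((len(X), k), 1.0 / k)
    for _ in range(n_iter):
        # matching dissimilarity: count of attribute mismatches
        d = (X[:, None, :] != modes[None, :, :]).sum(axis=2) + 1e-9
        # u_ic = 1 / sum_j (d_ic / d_ij)^(1/(m-1))
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (1 / (m - 1))).sum(axis=2)
        for c in range(k):          # mode update: membership-weighted vote
            for j in range(X.shape[1]):
                vals, idx = np.unique(X[:, j], return_inverse=True)
                w = np.zeros(len(vals))
                np.add.at(w, idx, U[:, c] ** m)
                modes[c, j] = vals[w.argmax()]
    return modes, U   # low max(U, axis=1) marks a potential anomaly
```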

Keywords: fuzzy k-mode clustering, anomaly detection, noise, categorical data

Procedia PDF Downloads 48
3453 An Image Segmentation Algorithm for Gradient Target Based on Mean-Shift and Dictionary Learning

Authors: Yanwen Li, Shuguo Xie

Abstract:

In electromagnetic imaging, because of the diffraction-limited system, pixel values can change slowly near the edges of image targets, and they also change with location within the same target. Using traditional digital image segmentation methods to segment electromagnetic gradient images can result in many errors because of this change in pixel values. To address this issue, this paper proposes a novel image segmentation and extraction algorithm based on Mean-Shift and dictionary learning. Firstly, the preliminary segmentation results from the adaptive-bandwidth Mean-Shift algorithm are expanded, merged and extracted. Then the overlap rate of the extracted image block is detected before determining a segmentation region with a single complete target. Finally, the gradient edge of the extracted targets is recovered and reconstructed by using a dictionary-learning algorithm, yielding final segmentation results that are very close to the gradient target in the original image. Both the experimental results and the simulated results show that the segmentation results are very accurate. The Dice coefficients are improved by 70% to 80% compared with the Mean-Shift-only method.

Keywords: gradient image, segmentation and extraction, mean-shift algorithm, dictionary learning

Procedia PDF Downloads 260
3452 Seismic Retrofitting of Structures Using Steel Plate Slit Dampers Based on Genetic Algorithm

Authors: Mohamed Noureldin, Jinkoo Kim

Abstract:

In this study, a genetic algorithm was used to find the optimum locations of slit dampers satisfying a target displacement. A seismic retrofit scheme for a building structure was presented using steel plate slit dampers. A cyclic loading test was used to verify the energy dissipation capacity of the slit damper. The seismic retrofit of the model structure using the slit dampers was compared with a retrofit based on enlarging shear walls. The capacity spectrum method was used to propose a simple damper distribution scheme proportional to the inter-story drifts. The validity of the simple story-wise damper distribution procedure was verified by comparing it with the results of the genetic algorithm. It was observed that the proposed simple damper distribution pattern was in good agreement with the optimum distribution obtained from the genetic algorithm. Acknowledgment: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1D1A1B03032809).

Keywords: slit dampers, seismic retrofit, genetic algorithm, optimum design

Procedia PDF Downloads 219
3451 Sequential Covering Algorithm for Nondifferentiable Global Optimization Problem and Applications

Authors: Mohamed Rahal, Djaouida Guetta

Abstract:

In this paper, the one-dimensional unconstrained global optimization problem for continuous functions satisfying a Hölder condition is considered. We extend the sequential covering algorithm (SCA) for Lipschitz functions to the larger class of Hölder functions. The convergence of the method is studied, and the algorithm can be applied to systems of nonlinear equations. Finally, some numerical examples are presented to illustrate the efficiency of the present approach.
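
For the Lipschitz base case that the paper extends, a Piyavskii-style sequential covering scheme can be sketched as follows (L must bound the Lipschitz constant; the Hölder generalisation replaces the linear lower-bounding cones with Hölder ones):

```python
def sequential_cover_min(f, a, b, L, tol=1e-4):
    """Sequential covering minimisation of a Lipschitz f on [a, b]."""
    pts = [(a, f(a)), (b, f(b))]
    while True:
        pts.sort()
        best_lb, best_x = float("inf"), None
        for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
            # lowest point of the two Lipschitz cones over [x1, x2]
            x = (x1 + x2) / 2 + (y1 - y2) / (2 * L)
            lb = (y1 + y2) / 2 - L * (x2 - x1) / 2
            if lb < best_lb:
                best_lb, best_x = lb, x
        best_y = min(y for _, y in pts)
        if best_y - best_lb < tol:     # bound is tight enough: stop
            return best_y
        pts.append((best_x, f(best_x)))
```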

Keywords: global optimization, Hölder functions, sequential covering method, systems of nonlinear equations

Procedia PDF Downloads 367
3450 Capuchin Monkeys Sharing Their Food at a Cost of Themselves

Authors: Benoît Bucher, Hika Kuroshima, Kazuo Fujita

Abstract:

Although altruism is commonly observed in humans and is considered one of the most important factors in the survival of our species, its cognitive mechanisms and evolutionary roots are yet to be explained. This study builds on previous findings that bonobos (Pan paniscus) preferred to share a limited amount of food with others regardless of their relationships with them. Findings such as these suggest that humans' propensity for altruistic food-sharing may be shared among apes and may have evolved much longer ago than previously thought. We thus adapted the previous experimental design for tufted capuchins (Cebus apella), New World monkeys that diverged from the human lineage about 40 million years ago. To achieve this, 12 pairs of capuchins (each consisting of a benefactor and a partner) were tested in a row of two adjacent cages separated by a swinging door locked by a key (Fig. 1). We observed whether the monkeys in possession of food (the benefactors) would allow their partner to enter their cage by unlocking the door between them. Results showed that the monkeys clearly preferred to monopolize the food for themselves, even though in a few cases they unlocked the door after eating the preferred food. This suggests that this species, which has been shown to be sensitive to others' welfare, will not actively share food at a cost to itself. Although further studies are needed, our results suggest the existence of significant differences in the evolutionary development of prosocial tendencies between bonobos and capuchin monkeys.

Keywords: altruism, capuchin monkeys, food sharing, prosocial behaviors

Procedia PDF Downloads 447
3449 Study of Adaptive Filtering Algorithms and the Equalization of Radio Mobile Channel

Authors: Said Elkassimi, Said Safi, B. Manaut

Abstract:

This paper presents a study of three algorithms: an equalization algorithm to equalize the transmission channel under the ZF and MMSE criteria, applied to the Bran A channel, and the adaptive filtering algorithms LMS and RLS to estimate the parameters of the equalizer filter, i.e., to track the channel estimate and thereby reflect the temporal variations of the channel and reduce the error in the transmitted signal. We evaluate the performance of the equalizer under the ZF and MMSE criteria in the noise-free case, and compare the performance of the LMS and RLS algorithms.
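
A minimal LMS equalizer sketch, driven by a training (desired) sequence; RLS would replace the stochastic-gradient update with a recursive least-squares gain. The tap count and step size are illustrative.

```python
import numpy as np

def lms_equalizer(received, desired, n_taps=11, mu=0.01):
    """LMS adaptive filter estimating equalizer taps from training data."""
    w = np.zeros(n_taps)
    out = np.zeros(len(received))
    for n in range(n_taps, len(received)):
        x = received[n - n_taps:n][::-1]       # tap-delay-line input vector
        out[n] = w @ x
        e = desired[n] - out[n]                # error drives the update
        w += mu * e * x                        # stochastic gradient step
    return w, out
```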

Keywords: adaptive filtering, equalizer, LMS, RLS, Bran A, Proakis (B), MMSE, ZF

Procedia PDF Downloads 309
3448 Sync Consensus Algorithm: Trying to Reach an Agreement at Full Speed

Authors: Yuri Zinchenko

Abstract:

Recently, distributed storage systems have been used more and more in various aspects of everyday life. They provide such necessary properties as scalability, fault tolerance, durability, and others. At the same time, not only reliable but also fast data storage remains one of the most pressing issues in this area. That brings us to the consensus algorithm as one of the most important components, one that has a great impact on the functionality of a distributed system. This paper is the result of an analysis of several well-known consensus algorithms, such as Paxos and Raft. The algorithm it offers, called Sync, promotes, but does not insist on, simultaneous writing to the nodes (which positively affects the overall write speed) and tries to minimize the system's inactive time. This allows nodes to reach agreement on the system state in a shorter period, which is a critical factor for distributed systems. Also, when developing Sync, a lot of attention was paid to such criteria as simplicity and intuitiveness, the importance of which is difficult to overestimate.

Keywords: sync, consensus algorithm, distributed system, leader-based, synchronization

Procedia PDF Downloads 57
3448 Kinoform Optimisation Using Gerchberg-Saxton Iterative Algorithm

Authors: M. Al-Shamery, R. Young, P. Birch, C. Chatwin

Abstract:

Computer Generated Holography (CGH) is employed to create digitally defined coherent wavefronts. A CGH can be created using different techniques, such as the detour-phase technique or direct phase modulation to create a kinoform. The detour-phase technique was one of the first techniques used to generate holograms digitally. The disadvantage of this technique is that the reconstructed image often has poor quality due to the limited dynamic range it is possible to record using a medium with reasonable spatial resolution. The kinoform (phase-only hologram) is an alternative technique. In this method, the phase of the original wavefront is recorded but the amplitude is constrained to be constant. The original object does not need to exist physically, and so the kinoform can be used to reconstruct an almost arbitrary wavefront. However, the image reconstructed by this technique contains high levels of noise and is not identical to the reference image. To improve the reconstruction quality of the kinoform, iterative techniques such as the Gerchberg-Saxton (GS) algorithm are employed. In this paper, the GS algorithm is described for the optimisation of a kinoform used for the reconstruction of a complex wavefront. Iterations of the GS algorithm are applied to determine the phase at a plane (with a known amplitude distribution, often taken as uniform) that satisfies given phase and amplitude constraints in a corresponding Fourier plane. The GS algorithm can be used in this way to enhance the reconstruction quality of the kinoform. Different images are employed as the reference object and their kinoforms are synthesised using the GS algorithm. The quality of the reconstructed images is quantified to demonstrate the enhanced reconstruction quality achieved by using this method.
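
The GS loop for a kinoform is compact enough to sketch directly: constrain the hologram plane to unit amplitude and the Fourier plane to the target amplitude, iterating between the two. Function name and iteration count are illustrative.

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=100):
    """Find a phase-only hologram whose Fourier transform has the
    target amplitude (GS iteration for a kinoform)."""
    phase = np.exp(2j * np.pi * np.random.rand(*target_amp.shape))
    field = target_amp * phase                 # start in the Fourier plane
    for _ in range(n_iter):
        holo = np.fft.ifft2(field)
        holo = np.exp(1j * np.angle(holo))     # kinoform: unit amplitude
        field = np.fft.fft2(holo)
        field = target_amp * np.exp(1j * np.angle(field))  # enforce target
    return np.angle(holo)                      # the optimised phase profile
```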

Keywords: computer generated holography, digital holography, Gerchberg-Saxton algorithm, kinoform

Procedia PDF Downloads 528