Search results for: Optimization Algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3057

1887 Adaptive Non-linear Filtering Technique for Image Restoration

Authors: S. K. Satpathy, S. Panda, K. K. Nagwanshi, S. K. Nayak, C. Ardil

Abstract:

Removing noise from processed images is very important, and noise should be removed in such a way that the important information in the image is preserved. A decision-based nonlinear algorithm for the elimination of band lines, drop lines, marks, band loss and impulses in images is presented in this paper. The algorithm performs two simultaneous operations, namely, detection of corrupted pixels and evaluation of new pixels for replacing the corrupted pixels. Removal of these artifacts is achieved without damaging edges and details. However, the restricted window size renders the median operation less effective whenever noise is excessive; in that case the proposed algorithm automatically switches to mean filtering. The performance of the algorithm is analyzed in terms of Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Signal-to-Noise Ratio Improvement (SNRI), Percentage of Noise Attenuated (PONA), and Percentage of Spoiled Pixels (POSP), and is compared with standard algorithms already in use; the improved performance of the proposed algorithm is presented. The advantage of the proposed algorithm is that a single algorithm can replace the several independent algorithms otherwise required for the removal of different artifacts.
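
As a rough illustration of the decision-based idea described above (not the authors' exact rules), the sketch below detects corrupted pixels as extreme values, replaces them with the median of the uncorrupted neighbours, and switches to the mean of the window when the noise is excessive; the thresholds and window size are assumptions.

    # Illustrative decision-based filter: median of good neighbours, mean fallback.
    import numpy as np

    def decision_based_filter(img, low=0, high=255, win=3):
        out = img.astype(float).copy()
        pad = win // 2
        padded = np.pad(out, pad, mode='edge')
        corrupted = (img <= low) | (img >= high)          # simple impulse/line detector
        for i, j in zip(*np.where(corrupted)):
            window = padded[i:i + win, j:j + win]
            good = window[(window > low) & (window < high)]
            if good.size > 0:
                out[i, j] = np.median(good)               # replace with median of clean pixels
            else:
                out[i, j] = np.mean(window)               # excessive noise: switch to mean filtering
        return out.astype(img.dtype)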

Keywords: Filtering, decision-based algorithm, noise, image restoration.

Downloads: 2158
1886 FPGA Based Parallel Architecture for the Computation of Third-Order Cross Moments

Authors: Syed Manzoor Qasim, Shuja Abbasi, Saleh Alshebeili, Bandar Almashary, Ateeq Ahmad Khan

Abstract:

Higher-Order Statistics (HOS), also known as cumulants and cross moments, and their frequency-domain counterparts, known as polyspectra, have emerged as powerful signal processing tools for the synthesis and analysis of signals and systems. Algorithms used for the computation of cross moments are computationally intensive and require high computational speed for real-time applications. For efficiency and high speed, it is often advantageous to realize computation-intensive algorithms in hardware. A promising solution that combines high flexibility with the speed of traditional hardware is the Field Programmable Gate Array (FPGA). In this paper, we present an FPGA-based parallel architecture for the computation of third-order cross moments. The proposed design is coded in Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL) and functionally verified by implementing it on a Xilinx Spartan-3 XC3S2000FG900-4 FPGA. Implementation results are presented and show that the proposed design can operate at a maximum frequency of 86.618 MHz.
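
For readers unfamiliar with the quantity being accelerated, the following software reference (an assumption about notation, not the paper's VHDL design) computes the third-order cross moment m3(t1, t2) = E[x(n)·y(n+t1)·z(n+t2)] that the FPGA architecture evaluates in parallel over a range of lags.

    # Software reference for the third-order cross moment m3(t1, t2).
    import numpy as np

    def third_order_cross_moment(x, y, z, t1, t2):
        n = len(x)
        lo = max(0, -t1, -t2)
        hi = min(n, n - t1, n - t2)
        idx = np.arange(lo, hi)
        return np.mean(x[idx] * y[idx + t1] * z[idx + t2])

    rng = np.random.default_rng(0)
    x = rng.standard_normal(4096)
    print(third_order_cross_moment(x, x, x, 1, 2))        # third-order auto-moment of x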

Keywords: Cross moments, Cumulants, FPGA, Hardware Implementation.

Downloads: 1735
1885 Optimization Study of Adsorption of Nickel(II) on Bentonite

Authors: B. Medjahed, M. A. Didi, B. Guezzen

Abstract:

This work concerns the experimental study of the adsorption of Ni(II) on bentonite. The effects of various parameters such as contact time, stirring rate, initial Ni(II) concentration, mass of clay, initial pH of the aqueous solution and temperature on the adsorption yield were investigated. The effect of the ionic strength on the adsorption yield was examined by identifying and quantifying the chemical species present in the aqueous phase containing the metallic ion Ni(II). The adsorbed species were investigated with a calculation program using CHEAQS V. L20.1 in order to determine the relation between the percentages of the adsorbed species and the adsorption yield. The optimization process was carried out using a 2³ factorial design. The individual and combined effects of three process parameters, i.e. initial Ni(II) concentration in the aqueous solution (2×10⁻³ and 5×10⁻³ mol/L), initial pH of the solution (2 and 6.5), and mass of bentonite (0.03 and 0.3 g), on Ni(II) adsorption were studied.
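
As a side note on the 2³ factorial design mentioned above, the short sketch below shows how main and interaction effects are estimated from the eight runs with coded -1/+1 levels for concentration, pH and bentonite mass; the yield values are placeholders, not the measured data.

    # Effect estimation in a 2^3 factorial design (hypothetical yields).
    import itertools
    import numpy as np

    levels = np.array(list(itertools.product([-1, 1], repeat=3)))        # 8 runs: C, pH, m
    yields = np.array([35.0, 60.0, 42.0, 71.0, 30.0, 55.0, 40.0, 68.0])  # placeholder yields (%)

    for k, name in enumerate(['C', 'pH', 'm']):
        effect = yields[levels[:, k] == 1].mean() - yields[levels[:, k] == -1].mean()
        print(f'main effect of {name}: {effect:+.1f}')

    c_x_ph = levels[:, 0] * levels[:, 1]                  # coded column for the C x pH interaction
    print('C x pH interaction:', yields[c_x_ph == 1].mean() - yields[c_x_ph == -1].mean())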

Keywords: Adsorption, bentonite, factorial design, Nickel(II).

Downloads: 932
1884 WiPoD Wireless Positioning System based on 802.11 WLAN Infrastructure

Authors: Haluk Gümüskaya, Hüseyin Hakkoymaz

Abstract:

This paper describes WiPoD (Wireless Position Detector), a purely software-based location determination and tracking (positioning) system. It uses empirical signal strength measurements from different wireless access points for mobile user positioning. It is designed to determine the location of users with 802.11-enabled mobile devices in an 802.11 WLAN infrastructure and track them in real time. WiPoD is the first main module in our LBS (Location Based Services) framework. We tested K-Nearest Neighbor and Triangulation algorithms to estimate the position of a mobile user, and we also give the analysis results of these algorithms for real-time operation. In this paper, we propose a supportable, i.e. understandable, maintainable, scalable and portable, wireless positioning system architecture for an LBS framework. The WiPoD software has a multithreaded structure and was designed and implemented with attention to supportability features and real-time constraints, using object-oriented design principles. We also describe the real-time software design issues of a wireless positioning system that will be part of an LBS framework.
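
A minimal sketch of the K-Nearest Neighbor fingerprinting idea behind such positioning is given below: the live RSSI vector is compared against a calibration database and the positions of the k closest fingerprints are averaged. The database values, access-point count and k are invented for illustration and are not WiPoD's parameters.

    # K-Nearest Neighbor positioning over an RSSI fingerprint database.
    import numpy as np

    fingerprints = np.array([            # RSSI (dBm) from 3 access points at known points
        [-40, -70, -80],
        [-55, -60, -75],
        [-70, -50, -65],
        [-80, -45, -55],
    ])
    positions = np.array([[0, 0], [5, 0], [5, 5], [0, 5]])   # (x, y) in metres

    def knn_locate(rssi, k=2):
        d = np.linalg.norm(fingerprints - np.asarray(rssi), axis=1)
        nearest = np.argsort(d)[:k]
        return positions[nearest].mean(axis=0)               # average position of the k nearest

    print(knn_locate([-58, -58, -72]))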

Keywords: Indoor location determination and tracking, positioning in Wireless LAN.

Downloads: 1994
1883 Production and Remanufacturing of Returned Products in Supply Chain using Modified Genetic Algorithm

Authors: Siva Prasad Darla, C. D. Naiju, K. Annamalai, Y. Upendra Sravan

Abstract:

In recent years, environmental regulations have been forcing manufacturers to consider the recovery of end-of-life and/or returned products for refurbishing, recycling, remanufacturing/repair and disposal in supply chain management. In this paper, a mathematical model is formulated for a single-product production-inventory system considering remanufacturing/reuse of returned products, where the rate of returned products follows a demand-like function dependent on purchasing price and acceptance quality level. The model is useful in deciding whether to remanufacture or dispose of returned products, along with newly produced products, to satisfy a stationary demand. In addition, a modified genetic algorithm approach is proposed, inspired by the particle swarm optimization method. A numerical analysis of the case study is carried out to validate the model.

Keywords: Genetic Algorithm, Particle Swarm Optimization, Production, Remanufacturing.

Downloads: 1710
1882 Application of Turbulence Modeling in Computational Fluid Dynamics for Airfoil Simulations

Authors: Mohammed Bilal

Abstract:

The precise prediction of aerodynamic behavior is necessary for the design and optimization of airfoils for a variety of applications. Turbulence, a phenomenon of complex and irregular flow, significantly affects the aerodynamic properties of airfoils. Therefore, turbulence modeling is essential for accurately predicting the behavior of airfoils in simulations. This study investigates five commonly employed turbulence models: Spalart-Allmaras (SA) model, k-epsilon model, k-omega model, Reynolds Stress Model (RSM), and Large Eddy Simulation (LES) model. The paper includes a comparison of the models' precision, computational expense, and applicability to various flow conditions. The strengths and weaknesses of each model are highlighted, allowing researchers and engineers to make informed decisions regarding simulations of specific airfoils. Unquestionably, the continuous development of turbulence modeling will contribute to further improvements in airfoil design and optimization, which will be advantageous to numerous industries.

Keywords: Computational fluid dynamics, airfoil, turbulence, aircraft.

Downloads: 281
1881 Meta-Learning for Hierarchical Classification and Applications in Bioinformatics

Authors: Fabio Fabris, Alex A. Freitas

Abstract:

Hierarchical classification is a special type of classification task where the class labels are organised into a hierarchy, with more generic class labels being ancestors of more specific ones. Meta-learning for classification-algorithm recommendation consists of recommending to the user a classification algorithm, from a pool of candidate algorithms, for a dataset, based on the past performance of the candidate algorithms on other datasets. Meta-learning is normally used in conventional, non-hierarchical classification. By contrast, this paper proposes a meta-learning approach for the more challenging task of hierarchical classification, and evaluates it on a large number of bioinformatics datasets. Hierarchical classification is especially relevant for bioinformatics problems, as protein and gene functions tend to be organised into a hierarchy of class labels. This work proposes a meta-learning approach for recommending the best hierarchical classification algorithm for a hierarchical classification dataset. This work’s contributions are: 1) proposing an algorithm for splitting hierarchical datasets into new datasets to increase the number of meta-instances, 2) proposing meta-features for hierarchical classification, and 3) interpreting decision-tree meta-models for hierarchical classification algorithm recommendation.

Keywords: Algorithm recommendation, meta-learning, bioinformatics, hierarchical classification.

Downloads: 1370
1880 Economic Evaluations Using Genetic Algorithms to Determine the Territorial Impact Caused by High Speed Railways

Authors: Gianluigi De Mare, Tony Leopoldo Luigi Lenza, Rino Conte

Abstract:

The evolution of technology and construction techniques has enabled the upgrading of transport networks. In particular, high-speed rail networks allow trains to reach peak speeds above 300 km/h. These structures, however, often significantly impact the surrounding environment. Among the most important effects are those caused by the sound waves generated by train transit. The wave propagation affects the quality of life in areas surrounding the tracks, often for several hundred metres, and there are substantial damages to properties (buildings and land) in terms of market depreciation. The present study, integrating expertise in the acoustics, computing and evaluation fields, outlines a useful model for selecting project paths so as to minimize the noise impact and reduce the causes of possible litigation. It also facilitates the rational selection of initiatives to contain the environmental damage along already existing railway tracks. The research is developed with reference to the Italian regulatory framework (usually more stringent than European and international standards) and refers to a case study concerning the high-speed network in Italy.

Keywords: Impact, compensation for financial loss, depreciation of property, railway network design, genetic algorithms.

Downloads: 1764
1879 Design Application Procedures of 15 Storied 3D Reinforced Concrete Shear Wall-Frame Structure

Authors: H. Nikzad, S. Yoshitomi

Abstract:

This paper presents the design application and reinforcement detailing of a 15-storey reinforced concrete shear wall-frame structure based on linear static analysis. Databases of section sizes are generated using an automated structural optimization method utilizing the active-set algorithm on the MATLAB platform. The design constraints of allowable section sizes, capacity criteria and seismic provisions for static loads and the combination of gravity and lateral loads are checked and determined based on the ASCE 7-10 documents and the ACI 318-14 design provisions. The results of this study illustrate the efficiency of the proposed method and are expected to provide a useful reference for the design of RC shear wall-frame structures.

Keywords: Structural optimization, linear static analysis, ETABS, MATLAB, RC shear wall-frame structures.

Downloads: 948
1878 Experimental Analysis and Optimization of Process Parameters in Plasma Arc Cutting Machine of EN-45A Material Using Taguchi and ANOVA Method

Authors: Sahil Sharma, Mukesh Gupta, Raj Kumar, N. S. Bindra

Abstract:

This paper presents an experimental investigation of the optimization and the effect of cutting parameters on the Material Removal Rate (MRR) in Plasma Arc Cutting (PAC) of EN-45A material using the Taguchi L16 orthogonal array method. Four process variables, viz. cutting speed, current, stand-off distance and plasma gas pressure, have been considered for this experimental work. Analysis of variance (ANOVA) has been performed to obtain the percentage contribution of each process parameter to the response variable, i.e. MRR. Based on the ANOVA, it has been observed that the cutting speed, current and plasma gas pressure are the major influencing factors that affect the response variable. A confirmation test based on the optimal setting shows good agreement with the predicted values.
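
To make the analysis concrete, the sketch below shows two of the calculations involved: the larger-the-better signal-to-noise ratio used in Taguchi analysis of MRR, and a factor's percentage contribution from sums of squares as reported by ANOVA. The MRR values and level assignment are illustrative, not the paper's data.

    # Taguchi larger-the-better S/N ratio and an ANOVA percentage contribution.
    import numpy as np

    def sn_larger_the_better(y):
        y = np.asarray(y, dtype=float)
        return -10 * np.log10(np.mean(1.0 / y ** 2))

    mrr = np.array([1.8, 2.1, 2.6, 3.0, 2.0, 2.4, 2.9, 3.3])   # hypothetical MRR values (g/min)
    speed_level = np.array([1, 1, 2, 2, 1, 1, 2, 2])            # cutting-speed level of each run

    grand = mrr.mean()
    ss_total = np.sum((mrr - grand) ** 2)
    ss_speed = sum(np.sum(speed_level == lv) * (mrr[speed_level == lv].mean() - grand) ** 2
                   for lv in (1, 2))
    print('S/N over all runs: %.2f dB' % sn_larger_the_better(mrr))
    print('contribution of cutting speed: %.1f %%' % (100 * ss_speed / ss_total))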

Keywords: Analysis of variance, Material removal rate, plasma arc cutting, Taguchi method.

Downloads: 1253
1877 An Optimization Model for the Arrangement of Assembly Areas Considering Time Dynamic Area Requirements

Authors: Michael Zenker, Henrik Prinzhorn, Christian Böning, Tom Strating

Abstract:

Large-scale products are often assembled according to the job-site principle, meaning that during assembly the product remains at a fixed position while the area requirements are constantly changing. On the one hand, the product itself grows with each assembly step; on the other hand, varying areas for storage, machines or working are temporarily required. This is an important factor when arranging the products to be assembled within the factory. Currently, it is common to reserve a fixed area for each product to avoid overlaps or collisions with the other assemblies. Intended to be large enough to include the product and all adjacent areas, this reserved area corresponds to the superposition of the maximum extents of all required areas of the product. In this procedure, the reserved area is usually poorly utilized over the course of the entire assembly process; instead, a large part of it remains unused. If the available area is a limited resource, a systematic arrangement of the products that complies with the dynamic area requirements will lead to increased area utilization and productivity. This paper presents the results of a study on the arrangement of assembly objects assuming dynamic, competing area requirements. First, the problem situation is extensively explained, and existing research on associated topics is described and evaluated regarding the possibility of an adaptation. Then, a newly developed mathematical optimization model is introduced. This model allows an optimal arrangement of dynamic areas, considering logical and practical constraints. Finally, in order to quantify the potential of the developed method, some test series results are presented, showing the possible increase in area utilization.

Keywords: Dynamic area requirements, facility layout problem, optimization model, product assembly.

Downloads: 1050
1876 Dynamic Traffic Simulation for Traffic Congestion Problem Using an Enhanced Algorithm

Authors: Wong Poh Lee, Mohd. Azam Osman, Abdullah Zawawi Talib, Ahmad Izani Md. Ismail

Abstract:

Traffic congestion has become a major problem in many countries, and one of its main causes is road merges: vehicles tend to move more slowly when they reach the merging point. In this paper, an enhanced algorithm for traffic simulation based on the fluid-dynamic algorithm and kinematic wave theory is proposed. The enhanced algorithm is used to study traffic congestion at a road merge. This paper also describes the development of a dynamic traffic simulation tool which is used for scenario planning and to forecast the traffic congestion level at a given time based on defined parameter values. The tool incorporates the enhanced algorithm as well as the two original algorithms. Outputs from the three above-mentioned algorithms are measured in terms of traffic queue length, travel time and the total number of vehicles passing through the merging point. This paper also suggests an efficient way of reducing traffic congestion at a road merge by analyzing the traffic queue length and travel time.
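
As background to the kinematic wave part of the approach, the toy sketch below evolves cell densities on a single road using demand/supply fluxes from a Greenshields fundamental diagram; a merge shows up as a point where upstream demand exceeds downstream supply and a queue grows. All parameters are illustrative, and this is not the enhanced algorithm itself.

    # Kinematic wave (LWR) cell update with Greenshields demand/supply fluxes.
    import numpy as np

    vf, rho_max = 25.0, 0.15                    # free-flow speed (m/s), jam density (veh/m)
    rho_c, q_max = rho_max / 2, vf * rho_max / 4

    def demand(rho):
        return np.where(rho <= rho_c, vf * rho * (1 - rho / rho_max), q_max)

    def supply(rho):
        return np.where(rho <= rho_c, q_max, vf * rho * (1 - rho / rho_max))

    dx, dt = 50.0, 1.0                          # cell length (m), time step (s); dt <= dx / vf
    rho = np.full(40, 0.03)                     # light traffic upstream
    rho[25:] = 0.13                             # congestion downstream of a merge

    for _ in range(300):
        flux = np.minimum(demand(rho[:-1]), supply(rho[1:]))   # interface fluxes
        rho[1:-1] += dt / dx * (flux[:-1] - flux[1:])          # conservation update

    print('congested cells (queue):', int(np.sum(rho > rho_c)))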

Keywords: Dynamic, fluid-dynamic, kinematic wave theory, simulation, traffic congestion.

Downloads: 3142
1875 Neural Network Optimal Power Flow (NN-OPF) based on IPSO with Developed Load Cluster Method

Authors: Mat Syai'in, Adi Soeprijanto

Abstract:

An Optimal Power Flow based on Improved Particle Swarm Optimization (OPF-IPSO) with a generator capability curve constraint is used by the NN-OPF as a reference to obtain the pattern of generator scheduling. There are three stages in designing the NN-OPF. The first stage is the design of the OPF-IPSO with the generator capability curve constraint. The second stage is clustering the load into specific ranges and calculating its index. The third stage is training the NN-OPF using the constructive back-propagation method. In the training process, the total load and the load index are used as inputs, and the pattern of generator scheduling is used as the output. The data used in this paper are from the Java-Bali power system, and the software used in this simulation is MATLAB.

Keywords: Optimal Power Flow, Generator Capability Curve, Improved Particle Swarm Optimization, Neural Network

Downloads: 1951
1874 Reduction of Differential Column Shortening in Tall Buildings

Authors: Hansoo Kim, Seunghak Shin

Abstract:

The differential column shortening in tall buildings can be reduced by improving material and structural characteristics of the structural systems. This paper proposes structural methods to reduce differential column shortening in reinforced concrete tall buildings; connecting columns with rigidly jointed horizontal members, using outriggers, and placing additional reinforcement at the columns. The rigidly connected horizontal members including outriggers reduce the differential shortening between adjacent vertical members. The axial stiffness of columns with greater shortening can be effectively increased by placing additional reinforcement at the columns, thus the differential column shortening can be reduced in the design stage. The optimum distribution of additional reinforcement can be determined by applying a gradient based optimization technique.

Keywords: Column shortening, long-term behavior, optimization, tall building.

Downloads: 4011
1873 An Improved C-Means Model for MRI Segmentation

Authors: Ying Shen, Weihua Zhu

Abstract:

Medical images are important for identifying different diseases; for example, magnetic resonance imaging (MRI) can be used to investigate the brain, spinal cord, bones, joints, breasts, blood vessels, and heart. Image segmentation, in medical image analysis, is usually the first step in finding characteristics with similar color, intensity or texture so that diagnosis can be further carried out based on these features. This paper introduces an improved C-means model to segment MRI images. The model is based on information entropy to evaluate the segmentation results by achieving global optimization. The contributions are significant. First, a Genetic Algorithm (GA) is used to achieve global optimization in this model, which the fuzzy C-means clustering algorithm (FCMA) alone is not capable of. Second, the information entropy after segmentation is used to measure the effectiveness of the MRI image processing. Experimental results show that the proposed model outperforms traditional approaches.
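
A rough sketch of the evaluation step is given below: a candidate set of cluster centres (which, in the proposed model, a GA would evolve rather than relying on FCM's local search alone) is scored by the information entropy of the resulting segmentation. The toy image and centres are illustrative.

    # Scoring a candidate segmentation by information entropy.
    import numpy as np

    def segment(image, centres):
        return np.argmin(np.abs(image[..., None] - np.asarray(centres)), axis=-1)

    def segmentation_entropy(labels, n_clusters):
        p = np.bincount(labels.ravel(), minlength=n_clusters) / labels.size
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    rng = np.random.default_rng(1)
    image = np.concatenate([rng.normal(50, 5, 500), rng.normal(150, 5, 500)]).reshape(25, 40)
    labels = segment(image, centres=[60, 140])      # one GA individual = a set of intensity centres
    print('entropy of this segmentation:', segmentation_entropy(labels, 2))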

Keywords: Magnetic Resonance Image, C-means model, image segmentation, information entropy.

Downloads: 918
1872 Decision Maturity Framework: Introducing Maturity In Heuristic Search

Authors: Ayed Salman, Fawaz Al-Anzi, Aseel Al-Minayes

Abstract:

Heuristics-based search methodologies normally work by searching a problem space of possible solutions toward finding a "satisfactory" solution based on "hints" estimated from problem-specific knowledge. Research communities use different types of such methodologies. Unfortunately, most of the time these hints are immature and can hinder these methodologies through premature convergence. This is due to a decrease of diversity in the search space that leads to a total implosion and, ultimately, fitness stagnation of the population. In this paper, a novel Decision Maturity Framework (DMF) is introduced as a solution to this problem. The framework improves the decision on the direction of the search by letting hints mature sufficiently before using them. Ideas from this framework are injected into the particle swarm optimization methodology. Results were obtained under both static and dynamic environments, and they show that decision maturity prevents premature convergence to a high degree.

Keywords: Heuristic Search, hints, Particle Swarm Optimization, Decision Maturity Framework.

Downloads: 1355
1871 Developing Damage Assessment Model for Bridge Surroundings: A Study of Disaster by Typhoon Morakot in Taiwan

Authors: Jieh-Haur Chen, Pei-Fen Huang

Abstract:

This paper presents an integrated model that automatically measures changes in rivers, the damaged area around bridges, and changes in vegetation. The proposed model is based on a neuro-fuzzy mechanism enhanced by a SOM optimization algorithm, and it also includes three functions to deal with river imagery. High-resolution imagery from the FORMOSAT-2 satellite, taken before and after the typhoon, is adopted. For a bridge randomly selected out of the 129 destroyed bridges, the recognition results show that the average river width increased by 66%. The ruined segment of the bridge is located exactly at the region of greatest scour. The vegetation coverage has also been reduced to nearly 90% of the original. The results yielded by the proposed model demonstrate an accuracy rate of 99.94%. This study provides a successful tool not only for large-scale damage assessment but also for precise measurement of disasters.

Keywords: remote sensing image, damage assessment, typhoon disaster, bridge, ANN, fuzzy, SOM, optimization.

Downloads: 1682
1870 Cost Optimization of Concentric Braced Steel Building Structures

Authors: T. Balogh, L. G. Vigh

Abstract:

Seismic design may require a non-conventional concept, due to the fact that the stiffness and layout of the structure have a great effect on the overall structural behaviour, on the seismic load intensity, and on the internal force distribution. To find an economical and optimal structural configuration, the key issue is the optimal design of the lateral load-resisting system. This paper focuses on the optimal design of regular, concentric braced frame (CBF) multi-storey steel building structures. The optimal configurations are determined by a numerical method using a genetic algorithm approach developed by the authors. The aim is to find structural configurations with minimum structural cost. The design constraints and the objective function are assigned in accordance with the Eurocode 3 and Eurocode 8 guidelines. In this paper, results are presented for various building geometries, different seismic intensities, and levels of energy dissipation.

Keywords: Dissipative Structures, Genetic Algorithm, Seismic Effects, Structural Optimization.

Downloads: 3013
1869 Application of Mutual Information based Least dependent Component Analysis (MILCA) for Removal of Ocular Artifacts from Electroencephalogram

Authors: V Krishnaveni, S Jayaraman, K Ramadoss

Abstract:

The electrical potentials generated during eye movements and blinks are one of the main sources of artifacts in electroencephalogram (EEG) recordings and can propagate widely across the scalp, masking and distorting brain signals. In recent times, signal separation algorithms have been widely used for removing artifacts from observed EEG data. In this paper, a recently introduced signal separation algorithm, Mutual Information based Least dependent Component Analysis (MILCA), is employed to separate ocular artifacts from the EEG. The aim of MILCA is to minimize the Mutual Information (MI) between the independent components (estimated sources) under a pure rotation. The performance of this algorithm is compared with eleven popular algorithms (Infomax, Extended Infomax, Fast ICA, SOBI, TDSEP, JADE, OGWE, MS-ICA, SHIBBS, Kernel-ICA, and RADICAL) with respect to the actual independence and uniqueness of the estimated source components, obtained for different sets of EEG data with ocular artifacts, using a reliable MI estimator. Results show that MILCA performs best in separating the ocular artifacts from the EEG and is recommended for further analysis.
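
MILCA itself is not available in common Python libraries, so the sketch below uses FastICA (one of the eleven comparison algorithms) purely to illustrate the shared workflow: unmix the channels, identify the ocular component, zero it out, and reconstruct. The synthetic signals and the correlation-based component selection are assumptions for illustration.

    # ICA-based ocular artifact removal on synthetic two-channel EEG.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    t = np.arange(2000) / 200.0
    eog = (rng.random(t.size) < 0.005).astype(float)            # blink-like spikes
    eog = np.convolve(eog, np.hanning(80), mode='same')
    brain = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
    eeg = np.c_[brain + 2.0 * eog, 0.7 * brain + 1.5 * eog]     # two contaminated channels

    ica = FastICA(n_components=2, random_state=0)
    sources = ica.fit_transform(eeg)                            # estimated independent components
    artifact = np.argmax([abs(np.corrcoef(s, eog)[0, 1]) for s in sources.T])
    sources[:, artifact] = 0.0                                  # discard the ocular component
    cleaned = ica.inverse_transform(sources)
    print('correlation with EOG before/after: %.2f / %.2f'
          % (np.corrcoef(eeg[:, 0], eog)[0, 1], np.corrcoef(cleaned[:, 0], eog)[0, 1]))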

Keywords: Electroencephalogram, Ocular Artifacts (OA), Independent Component Analysis (ICA), Mutual Information (MI), Mutual Information based Least dependent Component Analysis(MILCA)

Downloads: 2193
1868 A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators

Authors: Wei Zhang

Abstract:

With the rapid development of deep learning, neural networks and deep learning algorithms play a significant role in various practical applications. Due to their high accuracy and good performance, Convolutional Neural Networks (CNNs) in particular have become a research hotspot in the past few years. However, the size of these networks is becoming increasingly large due to the demands of practical applications, which poses a significant challenge for constructing high-performance implementations of deep learning neural networks. Meanwhile, many of these application scenarios also place strict requirements on the performance and power consumption of the hardware devices. Therefore, it is particularly critical to choose an appropriate computing platform for the hardware acceleration of CNNs. This article surveys recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of FPGA-based accelerators for different devices and network models are reviewed, and they are compared with Graphics Processing Unit (GPU), Application-Specific Integrated Circuit (ASIC) and Digital Signal Processor (DSP) counterparts to present our own critical analysis and comments. Finally, we discuss these acceleration and optimization methods on FPGA platforms from different perspectives in order to further explore the opportunities and challenges for future research, and we give an outlook on the future development of FPGA-based accelerators.

Keywords: Deep learning, field programmable gate array, FPGA, hardware acceleration, convolutional neural networks, CNN.

Downloads: 895
1867 Fractional Delay FIR Filters Design with Enhanced Differential Evolution

Authors: Krzysztof Walczak

Abstract:

A design method for fractional delay FIR filters based on the differential evolution algorithm is presented. Differential evolution is an evolutionary algorithm for solving global optimization problems in a continuous search space. In the proposed approach, the evolutionary algorithm is used to determine the coefficients of a fractional delay FIR filter based on the Farrow structure. Basic differential evolution is enhanced with a restricted mating technique, which improves the algorithm's performance in terms of convergence speed and the quality of the obtained solution. Evolutionary optimization is carried out by minimizing an objective function based on the amplitude response and phase delay errors. Experimental results show that the proposed algorithm leads to a reduction in the amplitude response and phase delay errors relative to those achieved with the least-squares method.
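
To show the kind of quantities involved, the sketch below builds a Lagrange-interpolation fractional delay FIR filter as one candidate and evaluates the sort of objective (amplitude-response and phase-delay errors over a frequency grid) that a differential evolution search would minimize; the band edge and error weighting are assumptions, not the paper's settings.

    # Fractional delay FIR candidate and an amplitude/phase-delay error objective.
    import numpy as np

    def lagrange_fd(order, delay):
        n = np.arange(order + 1)
        h = np.ones(order + 1)
        for k in range(order + 1):
            mask = n != k
            h[mask] *= (delay - k) / (n[mask] - k)
        return h

    def objective(h, delay, band=0.8, points=256):
        w = np.linspace(1e-3, band * np.pi, points)
        H = np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h
        amp_err = np.abs(np.abs(H) - 1.0)
        phase_delay = -np.unwrap(np.angle(H)) / w
        return np.max(amp_err) + np.max(np.abs(phase_delay - delay))

    h = lagrange_fd(order=7, delay=3.4)            # a delay of 3.4 samples
    print('objective value of the Lagrange candidate:', objective(h, 3.4))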

Keywords: Fractional Delay Filters, Farrow Structure, Evolutionary Computation, Differential Evolution

Downloads: 1860
1866 Particle Swarm Optimization Algorithm vs. Genetic Algorithm for Image Watermarking Based Discrete Wavelet Transform

Authors: Omaima N. Ahmad AL-Allaf

Abstract:

Over communication networks, images can easily be copied and distributed illegally, so copyright protection for authors and owners is necessary. Digital watermarking techniques therefore play an important role as a valid solution to ownership problems; they are used to hide watermarks in images to achieve copyright protection and prevent illegal copying. Watermarks need to be robust to attacks and maintain data quality. In this paper, we discuss two approaches for image watermarking: the first is based on Particle Swarm Optimization (PSO) and the second on a Genetic Algorithm (GA). The Discrete Wavelet Transform (DWT) is used separately with the two approaches in the embedding process to transform the cover image. Both PSO and GA use the correlation coefficient to detect the high-energy coefficients of the original image in which to hide the watermark bits. Many experiments were conducted for the two approaches with different values of the PSO and GA parameters. In the experiments, the PSO approach obtained better results, with a PSNR of 53 and an MSE of 0.0039, whereas the GA approach obtained a PSNR of 50.5 and an MSE of 0.0048 when using a population size of 100, 150 iterations and a 3×3 block. According to the results, a small block size can affect the quality of PSO/GA-based image watermarking because a small block size increases the search area in the watermarked image. Better PSO results were obtained when using a swarm size of 100.
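
The sketch below illustrates the DWT embedding step shared by both approaches: transform the cover image, add watermark bits to selected high-energy detail coefficients (in the paper this selection is what PSO/GA optimize), and invert the transform. The embedding strength and the simple energy-based selection here are assumptions.

    # Additive watermark embedding in DWT detail coefficients (PyWavelets).
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    cover = rng.integers(0, 256, (64, 64)).astype(float)    # stand-in for a cover image
    watermark = rng.integers(0, 2, 32) * 2 - 1              # 32 bits mapped to +/-1
    alpha = 8.0                                             # embedding strength (assumed)

    cA, (cH, cV, cD) = pywt.dwt2(cover, 'haar')
    flat = cH.ravel()
    idx = np.argsort(np.abs(flat))[::-1][:watermark.size]   # highest-energy coefficients
    flat[idx] += alpha * watermark                          # additive embedding
    watermarked = pywt.idwt2((cA, (cH, cV, cD)), 'haar')

    mse = np.mean((cover - watermarked) ** 2)
    print('PSNR of watermarked image: %.1f dB' % (10 * np.log10(255.0 ** 2 / mse)))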

Keywords: Image watermarking, genetic algorithm, particle swarm optimization, discrete wavelet transform.

Downloads: 1160
1865 Relay Node Placement for Connectivity Restoration in Wireless Sensor Networks Using Genetic Algorithms

Authors: Hanieh Tarbiat Khosrowshahi, Mojtaba Shakeri

Abstract:

Wireless Sensor Networks (WSNs) consist of a set of sensor nodes with limited capabilities. WSNs may suffer from multiple node failures when they are exposed to harsh environments such as military zones or disaster locations, and they lose connectivity by getting partitioned into disjoint segments. Relay nodes (RNs) are then introduced to restore connectivity. They cost more than sensors as they benefit from mobility, more power and a longer transmission range, so a minimum number of them should be used. This paper addresses the problem of RN placement in a network with multiple disjoint segments by developing a genetic algorithm (GA). The problem is recast as the Steiner tree problem (which is known to be NP-hard), with the aim of finding the minimum number of Steiner points where RNs are to be placed to restore connectivity. An upper bound on the number of RNs is first computed to set the length of the initial chromosomes. The GA then iteratively reduces the number of RNs and determines their locations at the same time. Experimental results indicate that the proposed GA is capable of establishing network connectivity using a reasonable number of RNs compared to the best existing work.

Keywords: Connectivity restoration, genetic algorithms, multiple-node failure, relay nodes, wireless sensor networks.

Downloads: 1104
1864 Secure Socket Layer in the Network and Web Security

Authors: Roza Dastres, Mohsen Soori

Abstract:

In order to exchange information electronically between network users in the web of data, different software such as Outlook has been introduced. The traffic of users on a site, or even between the floors of a building, can thus be decreased by applying secure and reliable data-sharing software. It is essential to provide a fast, secure and reliable network system for data sharing in order to create advanced communication systems among network users. In the present research work, different encoding methods and algorithms in data-sharing systems are studied in order to increase the security of these systems by preventing hackers from accessing the transferred data. To increase security in networks, the possibility of textual conversation between customers of a local network is studied. The application of encryption and decryption algorithms is studied in order to increase security in networks by preventing hackers from infiltrating them. As a result, a reliable and secure communication system between members of a network can be provided by preventing additional traffic in the website environment, in order to increase the speed, accuracy and security of network and web data-sharing systems.

Keywords: Secure Socket Layer, Security of networks.

Downloads: 510
1863 A Cooperative Multi-Robot Control Using Ad Hoc Wireless Network

Authors: Amira Elsonbaty, Rawya Rizk, Mohamed Elksas, Mofreh Salem

Abstract:

In this paper, a Cooperative Multi-robot for Carrying Targets (CMCT) algorithm is proposed. The multi-robot team consists of three robots: one is a supervisor and the others are workers that carry boxes in a store of 100×100 m². Each robot has a self-recharging mechanism. The CMCT minimizes the robots' working time for carrying many boxes during the day by working in parallel; that is, the supervisor detects the required variables at the same time as the other robots work with the previous variables. It works with straightforward mechanical models using simple cosine laws. It finds the robot's shortest path for reaching the target position while avoiding obstacles by using a proposed CMCT path planning (CMCT-PP) algorithm, and it prevents collisions between robots while they are moving. The robots interact over an ad hoc wireless network. Simulation results show that the proposed system, which consists of the CMCT algorithm and its accompanying CMCT-PP algorithm, achieves a large improvement in time and distance while performing the required tasks, compared with existing algorithms.

Keywords: Ad hoc network, Computer vision based positioning, Dynamic collision avoidance, Multi-robot, Path planning algorithms, Self recharging.

Downloads: 1787
1862 Generalized Rough Sets Applied to Graphs Related to Urban Problems

Authors: Mihai Rebenciuc, Simona Mihaela Bibic

Abstract:

As a branch of modern mathematics, graphs are instruments for optimization and for solving practical problems in various fields such as economic networks, engineering, network optimization, the geometry of social action and, more generally, complex systems, including contemporary urban problems (path or transport efficiency, biourbanism, etc.). In this paper, the interconnection of urban networks is studied, which leads to the problem of simulating one digraph by another. The simulation may be univocal or, more generally, multivocal. The concepts of fragment and atom are very useful in the study of connectivity in the simulating digraph, including an alternative evaluation of k-connectivity. The rough set approach to (bi)digraphs, which is proposed here for the first time, contributes to a significantly improved evaluation of k-connectivity. This rough set approach is based on generalized rough sets, whose basic facts are presented in this paper.

Keywords: (Bi)digraphs, rough set theory, systems of interacting agents, complex systems.

Downloads: 1190
1861 Bin Bloom Filter Using Heuristic Optimization Techniques for Spam Detection

Authors: N. Arulanand, K. Premalatha

Abstract:

A Bloom filter is a probabilistic and memory-efficient data structure designed to answer rapidly whether an element is present in a set. It can tell that an element is definitely not in the set, whereas its presence can only be asserted with a certain probability. The trade-off of using a Bloom filter is a configurable risk of false positives, and the odds of a false positive can be made very low if the number of hash functions is sufficiently large. For spam detection, a weight is attached to each set of elements: the spam weight of a word is a measure used to rate the e-mail, and each word is assigned to a Bloom filter based on its weight. The proposed work introduces an enhanced concept of the Bloom filter called the Bin Bloom Filter (BBF). The performance of the BBF over the conventional Bloom filter is evaluated under various optimization techniques. Real-time and synthetic data sets are used for the experimental analysis, and results are demonstrated for bin sizes 4, 5, 6 and 7. Finally, analyzing the results, it is found that the BBF using heuristic techniques performs better than the traditional Bloom filter in spam detection.
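
A minimal sketch of the Bin Bloom Filter idea is given below: each word is inserted into one of several Bloom filters (bins) chosen by its spam weight, so a membership query also returns a weight band. Bin boundaries, filter sizes, hash counts and the word weights are illustrative; in the paper they are tuned with heuristic optimization techniques.

    # Bin Bloom Filter sketch: one Bloom filter per spam-weight band.
    import hashlib

    class BloomFilter:
        def __init__(self, m=1024, k=3):
            self.m, self.k, self.bits = m, k, bytearray(m)

        def _positions(self, word):
            for i in range(self.k):
                digest = hashlib.sha256(f'{i}:{word}'.encode()).hexdigest()
                yield int(digest, 16) % self.m

        def add(self, word):
            for p in self._positions(word):
                self.bits[p] = 1

        def __contains__(self, word):
            return all(self.bits[p] for p in self._positions(word))

    bins = [BloomFilter() for _ in range(4)]                      # four weight bands
    spam_weights = {'free': 0.9, 'winner': 0.8, 'meeting': 0.1}   # hypothetical word weights
    for word, w in spam_weights.items():
        bins[min(int(w * 4), 3)].add(word)

    query = 'winner'
    print(f'"{query}" found in bin(s):', [i for i, bf in enumerate(bins) if query in bf])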

Keywords: Cuckoo search algorithm, levy’s flight, metaheuristic, optimal weight.

Downloads: 2262
1860 Optimization of New 25A-size Metal Gasket Design Based on Contact Width Considering Forming and Contact Stress Effect

Authors: Didik Nurhadiyanto, Moch Agus Choiron, Ken Kaminishi, Shigeyuki Haruyama

Abstract:

In a previous study of the new metal gasket, contact width and contact stress were identified as important design parameters for optimizing metal gasket performance. However, the range of contact stress had not been investigated thoroughly. In this study, we conducted a gasket design optimization based on elastic and plastic contact stress analyses considering the forming effect, using FEM. The gasket model was simulated in two stages, namely forming and tightening simulations. The optimum designs based on the elastic and the plastic contact stress were found. The final evaluation was determined by the helium leak quantity in order to check the leakage performance of both types of gasket. The helium leak test shows that a gasket based on the plastic contact stress design performs better than one based on the elastic stress design.

Keywords: Contact stress, metal gasket, plastic, elastic

Downloads: 1758
1859 Optimization of Loudspeaker Part Design Parameters by Air Viscosity Damping Effect

Authors: Yue Hu, Xilu Zhao, Takao Yamaguchi, Manabu Sasajima, Yoshio Koike, Akira Hara

Abstract:

This study optimized the design parameters of a cone loudspeaker as an example of highly flexible product design. We developed an acoustic analysis software program that considers the impact of damping caused by air viscosity. In sound reproduction, it is difficult to optimize each parameter of the loudspeaker design. To overcome this limitation of the design problem in practice, this study presents an acoustic analysis algorithm to optimize the design parameters of the loudspeaker. The material characteristics of the cone paper and the loudspeaker edge were the design parameters, and the vibration displacement of the cone paper was the objective function. The results of the analysis showed that the design had high accuracy compared to the predicted values. These results suggest that although the parameter design is difficult, the design can be performed easily, with experience and intuition, using the optimized design found with the acoustic analysis software.

Keywords: Air viscosity, design parameters, loudspeaker, optimization.

Downloads: 1194
1858 Analyzing the Factors that Cause Parallel Performance Degradation in Parallel Graph-Based Computations Using Graph500

Authors: Mustafa Elfituri, Jonathan Cook

Abstract:

Recently, graph-based computations have become more important in large-scale scientific computing, as they provide a methodology for modelling many types of relations between independent objects. They are being actively used in fields as varied as biology, social networks, cybersecurity, and computer networks. At the same time, graph problems have properties such as irregularity and poor locality that make their performance different from that of regular applications. Therefore, parallelizing graph algorithms is a hard and challenging task. Initial evidence is that standard computer architectures do not perform very well on graph algorithms, but little is known about exactly what causes this. The Graph500 benchmark is a representative application for parallel graph-based computations, which have highly irregular data access and are driven more by traversing connected data than by computation. In this paper, we present results from analyzing the performance of several example implementations of Graph500, including a shared-memory (OpenMP) version, a distributed (MPI) version, and a hybrid version. We measured and analyzed all the factors that affect performance in order to identify possible changes that would improve it. The results are discussed in relation to the factors that contribute to performance degradation.
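
For context, the toy sketch below implements the level-synchronous BFS at the heart of Graph500; each frontier expansion touches neighbour lists scattered across memory, which is why data access rather than arithmetic dominates. The random graph and sizes are illustrative; the real benchmark generates Kronecker graphs and reports traversed edges per second.

    # Level-synchronous BFS, the core Graph500 kernel, on a small random graph.
    import numpy as np
    from collections import deque

    rng = np.random.default_rng(0)
    n, deg = 1000, 8
    adj = [list(rng.integers(0, n, deg)) for _ in range(n)]   # random directed graph

    def bfs(source):
        parent = np.full(n, -1)
        parent[source] = source
        frontier = deque([source])
        while frontier:
            u = frontier.popleft()
            for v in adj[u]:                       # irregular, pointer-chasing access pattern
                if parent[v] == -1:
                    parent[v] = u
                    frontier.append(v)
        return parent

    parent = bfs(0)
    print('vertices reached:', int(np.sum(parent >= 0)))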

Keywords: Graph computation, Graph500 benchmark, parallel architectures, parallel programming, workload characterization.

Downloads: 548