Search results for: branch and bound algorithm.
3922 Modeling and Optimization of Micro-Grid Using Genetic Algorithm
Authors: Mehrdad Rezaei, Reza Haghmaram, Nima Amjadi
Abstract:
This paper proposes an operating and cost optimization model for a micro-grid (MG). The model takes into account the emission costs of NOx, SO2, and CO2, together with operation and maintenance costs. Wind turbines (WT), photovoltaic (PV) arrays, micro turbines (MT), fuel cells (FC), and diesel engine generators (DEG) with different capacities are considered in this model. The aim of the optimization is to minimize the operation cost subject to constraints on supply-demand balance and system safety. The proposed genetic algorithm (GA), with the ability to fine-tune its own settings, is used to optimize the micro-grid operation.
Keywords: micro-grid, optimization, genetic algorithm, MG
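As a rough illustration of how a GA can search for a least-cost micro-grid dispatch, the following minimal Python sketch evolves the power set-points of five hypothetical units under a supply-demand penalty. The unit limits, cost coefficients, and demand value are illustrative assumptions, not data from the paper, and the self-tuning of GA settings described in the abstract is not modeled.

```python
import random

# Hypothetical per-unit data: (name, P_min_kW, P_max_kW, cost_per_kWh) -- illustrative only
UNITS = [("WT", 0, 150, 0.02), ("PV", 0, 100, 0.01),
         ("MT", 20, 200, 0.09), ("FC", 10, 120, 0.07), ("DEG", 30, 250, 0.12)]
DEMAND = 400.0  # kW, assumed load for this example

def cost(p):
    # operating cost plus a large penalty for violating the supply-demand balance
    operating = sum(pi * c for pi, (_, _, _, c) in zip(p, UNITS))
    return operating + 1000.0 * abs(sum(p) - DEMAND)

def random_individual():
    return [random.uniform(lo, hi) for (_, lo, hi, _) in UNITS]

def crossover(a, b):
    return [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]

def mutate(p, rate=0.2):
    return [min(hi, max(lo, pi + random.gauss(0, 5))) if random.random() < rate else pi
            for pi, (_, lo, hi, _) in zip(p, UNITS)]

def ga(pop_size=60, generations=300):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[:pop_size // 4]                  # keep the best quarter
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=cost)

best = ga()
print([round(x, 1) for x in best], round(cost(best), 2))
```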
Procedia PDF Downloads 509
3921 Fast and Scale-Adaptive Target Tracking via PCA-SIFT
Authors: Yawen Wang, Hongchang Chen, Shaomei Li, Chao Gao, Jiangpeng Zhang
Abstract:
The main challenges in target tracking are handling target scale change and meeting real-time requirements, so we combine the Mean-Shift and PCA-SIFT algorithms to address them. We introduce a similarity comparison method to determine how the target scale changes and adopt different strategies for different situations. Because a growing target scale causes location error, we employ backward tracking to reduce this error. The Mean-Shift algorithm performs poorly when tracking a scale-changing target due to the fixed bandwidth of its kernel function. In order to overcome this problem, we introduce PCA-SIFT matching: through keypoint matching between the target and the template, the scale of the tracking window can be adjusted adaptively. Because this matching is sensitive to wrong matches, we introduce RANSAC to reduce mismatches as far as possible; furthermore, target relocation is triggered when the number of matches is too small. In addition, we take target deformation and error accumulation into comprehensive consideration to put forward a new template update method. Experiments on five image sequences and comparison with six other algorithms demonstrate the favorable performance of the proposed tracking algorithm.
Keywords: target tracking, PCA-SIFT, mean-shift, scale-adaptive
Procedia PDF Downloads 431
3920 Comparative Analysis of Two Different Ant Colony Optimization Algorithm for Solving Travelling Salesman Problem
Authors: Sourabh Joshi, Tarun Sharma, Anurag Sharma
Abstract:
Ant Colony Optimization is a heuristic algorithm that has proven to be a successful technique for a number of combinatorial optimization problems. Two variants of the Ant Colony Optimization algorithm, Ant System and Max-Min Ant System, are implemented in MATLAB to solve the Travelling Salesman Problem, and the results are compared. In this paper, both systems are analyzed by solving several Travelling Salesman Problem instances, and we show which system solves the problem better in terms of cost and time.
Keywords: Ant Colony Optimization, Travelling Salesman Problem, Ant System, Max-Min Ant System
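The comparison in the abstract is implemented in MATLAB; purely as a hedged illustration, the compact Python sketch below shows the classic Ant System transition rule and pheromone update on a random TSP instance. The parameter values (alpha, beta, rho, Q, colony size) are common textbook defaults rather than the paper's settings, and the Max-Min variant (pheromone bounds, best-ant-only deposit) is not shown.

```python
import random, math

def ant_system(coords, n_ants=20, n_iter=200, alpha=1.0, beta=3.0, rho=0.5, Q=100.0):
    n = len(coords)
    d = [[math.dist(coords[i], coords[j]) or 1e-9 for j in range(n)] for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]                 # pheromone matrix
    best_tour, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            start = random.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                # probabilistic transition: pheromone^alpha * (1/distance)^beta
                weights = [(j, (tau[i][j] ** alpha) * ((1.0 / d[i][j]) ** beta)) for j in unvisited]
                r, acc = random.uniform(0, sum(w for _, w in weights)), 0.0
                for j, w in weights:
                    acc += w
                    if acc >= r:
                        break
                tour.append(j); unvisited.remove(j)
            length = sum(d[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporation followed by a deposit from every ant (classic Ant System rule)
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for tour, length in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += Q / length; tau[b][a] += Q / length
    return best_tour, best_len

cities = [(random.random() * 100, random.random() * 100) for _ in range(15)]
print(ant_system(cities))
```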
Procedia PDF Downloads 481
3919 Identification of the Parameters of a AC Servomotor Using Genetic Algorithm
Authors: J. G. Batista, K. N. Sousa, J. L. Nunes, R. L. S. Sousa, G. A. P. Thé
Abstract:
This work deals with parameter identification of permanent magnet motors, a class of AC motor that is particularly important in industrial automation due to its high performance. Such motors are very attractive for applications with limited space because of their reduced size and volume, and they can operate over a wide speed range without independent ventilation. By using experimental data and a genetic algorithm, we have been able to extract values for both the motor inductance and the electromechanical coupling constant, which are then compared to measured and/or expected values.
Keywords: modeling, AC servomotor, permanent magnet synchronous motor (PMSM), genetic algorithm, vector control, robotic manipulator, control
Procedia PDF Downloads 469
3918 Mass Polarization in Three-Body System with Two Identical Particles
Authors: Igor Filikhin, Vladimir M. Suslov, Roman Ya. Kezerashvili, Branislav Vlahivic
Abstract:
The mass-polarization term of the three-body kinetic energy operator is evaluated for different systems that include two identical particles: A+A+B. The term has to be taken into account in the analysis of AB and AA interactions based on experimental data for two- and three-body ground-state energies. In this study, we present three-body calculations within the framework of a potential model for the kaonic clusters K−K−p and ppK−, the nucleus 3H, and the hypernucleus 6ΛΛHe. The systems cluster well as A + (A+B), with a ground-state energy E2 for the pair A+B. The calculations are performed using the method of the Faddeev equations in configuration space, with phenomenological pair potentials. We show a correlation between the mass ratio mA/mB and the value δB of the mass-polarization term. For bosonic-like systems, this value is defined as δB = 2E2 − E3, where E3 is the three-body energy when VAA = 0. For systems of three particles with spin (isospin), models with averaged AB potentials are used; in this case, the Faddeev equations become scalar ones, as for the bosonic-like system αΛΛ. We show that the additional energy connected with the mass-polarization term can be decomposed into a sum of two parts: an exchange-related part and a reduced-mass-related part. The state of the system can be described as follows: the particle A1 is bound within the A+B pair with the energy E2, and the second particle A2 is bound to that pair with the energy E3 − E2. Due to the identity of the A particles, A1 and A2 are interchangeable in the pair A+B. Using the system αΛΛ as an example, we show that the mass polarization δB correlates with the type of AB potential.
Keywords: three-body systems, mass polarization, Faddeev equations, nuclear interactions
Procedia PDF Downloads 375
3917 An Efficient Algorithm for Global Alignment of Protein-Protein Interaction Networks
Authors: Duc Dong Do, Ngoc Ha Tran, Thanh Hai Dang, Cao Cuong Dang, Xuan Huan Hoang
Abstract:
Globally aligning two protein-protein interaction networks is an essential task in the bioinformatics/computational biology field and has been a challenging and widely studied research topic in recent years. Accurately aligned networks allow us to identify functional modules of proteins and/or orthologous proteins from which unknown functions of a protein can be inferred. We introduce a novel, efficient heuristic global network alignment algorithm called FASTAn, consisting of two phases: the first constructs an initial alignment, and the second improves this alignment through a repeated local optimization procedure. The experimental results demonstrate that FASTAn outperforms SPINAL, the state-of-the-art global network alignment algorithm, in terms of both the commonly used objective scores and run-time.
Keywords: FASTAn, heuristic algorithm, biological network alignment, protein-protein interaction networks
Procedia PDF Downloads 602
3916 On the convergence of the Mixed Integer Randomized Pattern Search Algorithm
Authors: Ebert Brea
Abstract:
We propose a novel direct search algorithm for identifying at least a local minimum of mixed integer nonlinear unconstrained optimization problems. The Mixed Integer Randomized Pattern Search Algorithm (MIRPSA), so called by the author, is based on a randomized pattern search, which is modified by the MIRPSA for finding at least a local minimum of our problem. The MIRPSA has two main operations over the randomized pattern search: a moving operation and a shrinking operation. Each operation is carried out by the algorithm when a set of conditions holds. The convergence properties of the MIRPSA are analyzed using a Markov chain approach, represented by a countably infinite state space λ, where each state d(q) is defined by a measure of the qth randomized pattern search Hq, for all q in N. According to the algorithm, when a moving operation is carried out on the qth randomized pattern search Hq, the MIRPSA holds its state. Meanwhile, if the MIRPSA carries out a shrinking operation over the qth randomized pattern search Hq, the algorithm visits the next state; that is, a shrinking operation at the qth state causes a change from the qth state to the (q+1)th state. It is worth pointing out that the MIRPSA never returns to a visited state, because it only leaves a given state through shrinking operations. In this article, we describe the MIRPSA for mixed integer nonlinear unconstrained optimization problems in order to study its convergence properties in depth from a Markov chain viewpoint. We include a low-dimensional case to show more details of the MIRPSA when the algorithm is used for identifying the minimum of a mixed integer quadratic function. Numerical examples are also shown in order to measure the performance of the MIRPSA.
Keywords: direct search, mixed integer optimization, random search, convergence, Markov chain
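To make the moving/shrinking terminology concrete, the following Python sketch implements a generic randomized pattern search on a small mixed integer quadratic function: a better sampled point triggers a move, otherwise the pattern shrinks until a tolerance is reached. It is a simplified stand-in under assumed parameters, not the MIRPSA itself, and the Markov chain machinery used for the convergence analysis is not reproduced.

```python
import random

def f(x, y):
    # mixed integer quadratic test function: x is integer, y is continuous
    return (x - 3) ** 2 + (y + 1.5) ** 2 + 0.1 * x * y

def randomized_pattern_search(n_points=20, step=4.0, shrink=0.5, tol=1e-3, max_iter=500):
    x, y = 0, 0.0                                   # current incumbent
    best = f(x, y)
    for _ in range(max_iter):
        # randomized pattern: sample candidate points around the incumbent
        cands = [(x + random.randint(-round(step), round(step)),
                  y + random.uniform(-step, step)) for _ in range(n_points)]
        v, cx, cy = min((f(cx, cy), cx, cy) for cx, cy in cands)
        if v < best:                                # moving operation: accept the better point
            best, x, y = v, cx, cy
        else:                                       # shrinking operation: reduce the pattern size
            step *= shrink
            if step < tol:
                break
    return (x, y), best

print(randomized_pattern_search())
```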
Procedia PDF Downloads 467
3915 Arithmetic Operations Based on Double Base Number Systems
Authors: K. Sanjayani, C. Saraswathy, S. Sreenivasan, S. Sudhahar, D. Suganya, K. S. Neelukumari, N. Vijayarangan
Abstract:
The Double Base Number System (DBNS) is an emerging system for representing a number using two bases, namely 2 and 3, and it has applications in Elliptic Curve Cryptography (ECC) and the Digital Signature Algorithm (DSA). The previous binary representation method included only base 2. DBNS uses an approximation algorithm, namely the greedy algorithm. With this algorithm, the number of digits required to represent a large number is smaller than with the standard binary method based on base 2; hence, computational speed is increased and computation time is reduced. The standard binary method uses the binary digits 0 and 1 to represent a number, whereas the DBNS method uses the binary digit 1 alone to represent any number (canonical form). The greedy algorithm can represent a number in two ways: using only positive summands, or using both positive and negative summands. In this paper, these arithmetic operations are used for elliptic curve cryptography. The elliptic curve discrete logarithm problem is the foundation of most everyday elliptic curve cryptography and appears to be considerably harder than the ordinary discrete logarithm problem. In the elliptic curve digital signature algorithm, key generation requires 160 bits of data when the standard binary representation is used, whereas the number of bits required to generate the key can be reduced with the help of the double base number representation. In this paper, a new technique is proposed to generate the key during encryption and to extract the key during decryption.
Keywords: cryptography, double base number system, elliptic curve cryptography, elliptic curve digital signature algorithm
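As a small illustration of the greedy double-base idea (positive summands only), the Python sketch below repeatedly subtracts the largest number of the form 2^a·3^b that still fits, then compares the number of resulting terms with the binary digit count. This is the generic textbook greedy algorithm, not the paper's key-generation technique.

```python
def greedy_dbns(n):
    """Greedy double-base representation: write n as a sum of terms 2^a * 3^b."""
    terms = []
    while n > 0:
        best = 1
        pow3 = 1
        while pow3 <= n:                       # enumerate powers of 3 that still fit
            term = pow3
            while term * 2 <= n:               # multiply by 2 while the term still fits
                term *= 2
            best = max(best, term)
            pow3 *= 3
        terms.append(best)                     # largest 2^a * 3^b <= n
        n -= best
    return terms

n = 841232
terms = greedy_dbns(n)
# typically far fewer terms than the number of binary digits of n
print(terms, sum(terms) == n, len(terms), n.bit_length())
```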
Procedia PDF Downloads 394
3914 Improved Artificial Bee Colony Algorithm for Non-Convex Economic Power Dispatch Problem
Authors: Badr M. Alshammari, T. Guesmi
Abstract:
This study presents a modified version of the artificial bee colony (ABC) algorithm that includes a local search technique for solving the non-convex economic power dispatch problem. The local search step is incorporated at the end of each iteration. Total system losses, valve-point loading effects, and prohibited operating zones have been incorporated in the problem formulation; thus, the problem becomes highly nonlinear, with a discontinuous objective function. The proposed technique is validated using an IEEE benchmark system with ten thermal units. Simulation results demonstrate that the proposed optimization algorithm has better convergence characteristics than the original ABC algorithm.
Keywords: economic power dispatch, artificial bee colony, valve-point loading effects, prohibited operating zones
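The non-convexity comes mainly from the valve-point term and the prohibited operating zones. The sketch below shows one typical way to evaluate such a dispatch objective in Python, with a rectified-sine valve-point term, zone penalties, and a power-balance penalty; all unit coefficients, limits, zones, and penalty weights are made-up illustrative values rather than the IEEE ten-unit benchmark data, and the ABC search itself is not shown.

```python
import math

# Hypothetical unit data (a, b, c, e, f, Pmin, Pmax) and prohibited operating zones (MW)
UNITS = [
    {"a": 150, "b": 8.0, "c": 0.012, "e": 300, "f": 0.035, "pmin": 50, "pmax": 300,
     "zones": [(90, 110), (180, 200)]},
    {"a": 120, "b": 7.5, "c": 0.010, "e": 200, "f": 0.040, "pmin": 40, "pmax": 250,
     "zones": [(120, 140)]},
]

def fuel_cost(unit, p):
    # quadratic cost plus the rectified-sine valve-point loading term
    return (unit["a"] + unit["b"] * p + unit["c"] * p ** 2
            + abs(unit["e"] * math.sin(unit["f"] * (unit["pmin"] - p))))

def penalty(unit, p):
    # penalize operating points inside a prohibited zone or outside the unit limits
    pen = 0.0
    if p < unit["pmin"] or p > unit["pmax"]:
        pen += 1e6
    for lo, hi in unit["zones"]:
        if lo < p < hi:
            pen += 1e4 * min(p - lo, hi - p)
    return pen

def dispatch_cost(powers, demand, loss=0.0):
    total = sum(fuel_cost(u, p) + penalty(u, p) for u, p in zip(UNITS, powers))
    return total + 1e5 * abs(sum(powers) - demand - loss)   # power balance penalty

print(dispatch_cost([200.0, 150.0], demand=350.0))
```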
Procedia PDF Downloads 255
3913 An Application of Integrated Multi-Objective Particles Swarm Optimization and Genetic Algorithm Metaheuristic through Fuzzy Logic for Optimization of Vehicle Routing Problems in Sugar Industry
Authors: Mukhtiar Singh, Sumeet Nagar
Abstract:
The vehicle routing problem (VRP) is a combinatorial optimization and nonlinear programming problem that aims to optimize decisions regarding a given set of routes for a fleet of vehicles in order to provide cost-effective and efficient delivery of both services and goods to the intended customers. This paper proposes the application of integrated particle swarm optimization (PSO) and genetic algorithm (GA) optimization to address the vehicle routing problem in the sugarcane industry in India. The sugar industry is a very prominent agro-based industry in India due to its impact on rural livelihoods, and it is estimated to employ around 5 lakh (500,000) workers directly in sugar mills. Various inadequacies, inefficiencies, and inappropriate practices in the current vehicle routing model cause large financial losses to the industry, which need to be addressed in the proper context. The proposed algorithm utilizes the crossover operation that originally appears in the genetic algorithm (GA) to improve its flexibility and manipulation and to avoid being trapped in local optima; simultaneously, level set theory is added to improve the convergence speed of the algorithm. We apply the hybrid approach to an example VRP and compare its results with those generated by PSO, GA, and parallel PSO algorithms. The experimental comparison indicates that the performance of the hybrid algorithm is superior to the others, and it can become an effective approach for solving discrete combinatorial problems.
Keywords: fuzzy logic, genetic algorithm, particle swarm optimization, vehicle routing problem
Procedia PDF Downloads 393
3912 Predictive Analysis for Big Data: Extension of Classification and Regression Trees Algorithm
Authors: Ameur Abdelkader, Abed Bouarfa Hafida
Abstract:
Since its inception, predictive analysis has revolutionized the IT industry through its robustness and decision-making facilities. It involves the application of a set of data processing techniques and algorithms in order to create predictive models. Its principle is based on finding relationships between explanatory variables and the predicted variables: past occurrences are exploited to predict and derive the unknown outcome. With the advent of big data, many studies have suggested the use of predictive analytics to process and analyze big data; nevertheless, they have been curbed by the limits of classical methods of predictive analysis when the amount of data is large. In fact, because of their volume, their nature (semi-structured or unstructured), and their variety, it is impossible to analyze big data efficiently via classical methods of predictive analysis. The authors attribute this weakness to the fact that predictive analysis algorithms do not allow the parallelization and distribution of computation. In this paper, we propose to extend the predictive analysis algorithm Classification And Regression Trees (CART) in order to adapt it for big data analysis. The major changes to this algorithm are presented, and a version of the extended algorithm is then defined in order to make it applicable to huge quantities of data.
Keywords: predictive analysis, big data, predictive analysis algorithms, CART algorithm
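At the heart of CART is the search for the best split, and it is this search that is usually parallelized or distributed when the algorithm is adapted to big data, since each feature can be scored independently. The following Python sketch shows a plain Gini-based split search on a synthetic data set; it is a generic illustration of the split step, not the authors' extended algorithm.

```python
import numpy as np

def gini(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split_for_feature(x, y):
    """Best threshold for one feature by weighted Gini impurity of the two children."""
    best = (np.inf, np.nan)
    for t in np.unique(x)[:-1]:
        left, right = y[x <= t], y[x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best[0]:
            best = (score, t)
    return best

def best_split(X, y):
    # each feature is scored independently, which is what makes the search
    # straightforward to parallelize or distribute across workers
    scored = [(*best_split_for_feature(X[:, j], y), j) for j in range(X.shape[1])]
    return min(scored)  # (impurity, threshold, feature index)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 2] > 0.3).astype(int)
print(best_split(X, y))
```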
Procedia PDF Downloads 139
3911 Quantum Decision Making with Small Sample for Network Monitoring and Control
Authors: Tatsuya Otoshi, Masayuki Murata
Abstract:
With the development and diversification of applications on the Internet, applications that require high responsiveness, such as video streaming, are becoming mainstream. Application responsiveness is not only a matter of communication delay but also a matter of the time required to grasp changes in network conditions, and the tradeoff between accuracy and measurement time is a challenge in network control. People make countless decisions all the time, and these decisions seem to resolve tradeoffs between time and accuracy: when making decisions, people are known to make appropriate choices based on relatively small samples. Although there have been various studies on models of human decision-making, a model that integrates various cognitive biases, called "quantum decision-making," has recently attracted much attention; however, decision-making from small samples has not been examined much so far. In this paper, we extend the quantum decision-making model to decision-making with a small sample. In the proposed model, the state is updated by value-based probability amplitude amplification. By analytically obtaining a lower bound on the number of samples required for decision-making, we show that decision-making with a small number of samples is feasible.
Keywords: quantum decision making, small sample, MPEG-DASH, Grover's algorithm
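The paper's value-based amplitude amplification is its own construction; purely as background for the Grover's-algorithm keyword, the short Python sketch below numerically simulates standard Grover amplitude amplification (oracle sign flip plus inversion about the mean) and shows the marked item's probability approaching one after about pi/4*sqrt(N) iterations. It does not reproduce the proposed decision-making update.

```python
import math
import numpy as np

def grover_probability(n_items, marked, n_iters=None):
    """Numerically simulate standard Grover amplitude amplification."""
    psi = np.full(n_items, 1.0 / math.sqrt(n_items))      # uniform superposition
    if n_iters is None:
        n_iters = int(round(math.pi / 4 * math.sqrt(n_items)))
    for _ in range(n_iters):
        psi[marked] *= -1.0                                # oracle: flip the marked amplitude
        psi = 2.0 * psi.mean() - psi                       # diffusion: inversion about the mean
    return psi[marked] ** 2                                # probability of measuring the marked item

print(grover_probability(64, marked=17))   # close to 1 after ~pi/4*sqrt(64) iterations
```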
Procedia PDF Downloads 78
3910 Collocation Method Using Quartic B-Splines for Solving the Modified RLW Equation
Authors: A. A. Soliman
Abstract:
The Modified Regularized Long Wave (MRLW) equation is solved numerically by a new algorithm based on the collocation method, using quartic B-splines at the mid-knot points as element shape functions. In addition, we use the fourth-order Runge-Kutta method, instead of a finite difference method, to solve the resulting system of first-order ordinary differential equations. Our test problems, including the migration and interaction of solitary waves, are used to validate the algorithm, which is found to be accurate and efficient. The three invariants of the motion are evaluated to determine the conservation properties of the algorithm. The temporal evolution of a Maxwellian initial pulse is then studied.
Keywords: collocation method, MRLW equation, quartic B-splines, solitons
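For readers unfamiliar with the time-integration step, the Python sketch below shows the classical fourth-order Runge-Kutta update used for a system of first-order ODEs. It is applied here to a simple harmonic oscillator standing in for the B-spline coefficient system, since the actual MRLW semi-discretization is not reproduced from the paper.

```python
import numpy as np

def rk4_step(f, t, u, dt):
    """One classical fourth-order Runge-Kutta step for u' = f(t, u)."""
    k1 = f(t, u)
    k2 = f(t + dt / 2, u + dt / 2 * k1)
    k3 = f(t + dt / 2, u + dt / 2 * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# simple test system (harmonic oscillator) standing in for the B-spline coefficient ODEs
def f(t, u):
    return np.array([u[1], -u[0]])

u, t, dt = np.array([1.0, 0.0]), 0.0, 0.01
for _ in range(int(2 * np.pi / dt)):
    u = rk4_step(f, t, u, dt)
    t += dt
print(u)   # returns close to the initial state [1, 0] after one period
```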
Procedia PDF Downloads 302
3909 Influence of Parameters of Modeling and Data Distribution for Optimal Condition on Locally Weighted Projection Regression Method
Authors: Farhad Asadi, Mohammad Javad Mollakazemi, Aref Ghafouri
Abstract:
Recent research in neural network science and neuroscience on modeling complex time series data and on statistical learning has focused mostly on learning from high-dimensional input spaces and signals. Local linear models are a strong choice for modeling local nonlinearity in data series, and locally weighted projection regression is a flexible and powerful algorithm for nonlinear approximation in high-dimensional signal spaces. In this paper, different learning scenarios for one- and two-dimensional data series with different distributions are investigated by simulation; noise is then added to the data distributions to create differently disordered time series and to evaluate the algorithm's ability to predict nonlinearity locally. The performance of the algorithm is simulated, and its sensitivity to the data distribution, together with the influence of the important local-validity parameter, is explained for different data distributions, in particular when the data are widely spread or the number of data points is small.
Keywords: local nonlinear estimation, LWPR algorithm, online training method, locally weighted projection regression method
Procedia PDF Downloads 501
3908 Adsorptive Media Selection for Bilirubin Removal: An Adsorption Equilibrium Study
Authors: Vincenzo Piemonte
Abstract:
The liver is a complex, large-scale biochemical reactor that plays a unique role in human physiology. When the liver ceases to perform its physiological activity, a functional replacement is required. At present, liver transplantation is the only clinically effective method of treating severe liver disease; however, this therapeutic approach is hampered by the disparity between organ availability and the number of patients on the waiting list. In order to overcome this critical issue, research activities have focused on liver support device systems (LSDs) designed to bridge patients to transplantation or to keep them alive until the recovery of native liver function. In recirculating albumin dialysis devices, such as MARS (Molecular Adsorbent Recirculating System), adsorption is one of the fundamental steps in albumin-dialysate regeneration. Among the albumin-bound toxins that must be removed from blood during liver-failure therapy, bilirubin and tryptophan can be considered representative of two different toxin classes: the first is not water soluble at physiological blood pH and is strongly bound to albumin, while the second is loosely bound to albumin and partially water soluble at pH 7.4. Fixed-bed units are normally used for this task, and the design of such units requires information on both toxin adsorption equilibrium and kinetics. The most common adsorptive media used in LSDs are activated carbon, non-ionic polymeric resins, and anionic resins. In this paper, bilirubin adsorption isotherms on different adsorptive media, such as polymeric resin, albumin-coated resin, anionic resin, activated carbon, and alginate beads with entrapped albumin, are presented. By comparing all the results, it can be stated that the adsorption capacity for bilirubin of the five media increases in the following order: alginate beads < polymeric resin < albumin-coated resin < activated carbon < anionic resin. The main focus of this paper is to provide useful guidelines for the optimization of liver support devices that implement adsorption columns to remove albumin-bound toxins from albumin dialysate solutions.
Keywords: adsorptive media, adsorption equilibrium, artificial liver devices, bilirubin, mathematical modelling
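Adsorption-equilibrium data of this kind are commonly summarized by fitting an isotherm model. The Python sketch below fits the linearized Langmuir isotherm, Ce/qe = Ce/qmax + 1/(KL·qmax), to a small set of made-up equilibrium points in order to estimate the capacity qmax and affinity KL; the numbers are purely illustrative, not the paper's measurements, and the paper may well use a different isotherm model.

```python
import numpy as np

# hypothetical equilibrium data: Ce = liquid-phase bilirubin concentration (mg/L),
# qe = adsorbed amount per gram of medium (mg/g) -- illustrative numbers only
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 120.0])
qe = np.array([8.1, 13.9, 21.5, 29.3, 35.2, 37.0])

# linearized Langmuir isotherm: Ce/qe = Ce/qmax + 1/(KL*qmax)
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qmax = 1.0 / slope                 # maximum adsorption capacity (mg/g)
KL = slope / intercept             # Langmuir affinity constant (L/mg)
print(f"qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg")
```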
Procedia PDF Downloads 254
3907 Information Technology Approaches to Literature Text Analysis
Authors: Ayse Tarhan, Mustafa Ilkan, Mohammad Karimzadeh
Abstract:
Science was considered part of philosophy in ancient Greece. By the nineteenth century, it was understood that philosophy was very inclusive and that social and human sciences such as literature, history, and psychology should be separated and perceived as autonomous branches of science. The computer, too, was first seen as a tool of mathematical science. Over time, computer science has grown to encompass every area in which technology exists, and its growth compelled the division of computer science into different disciplines, just as philosophy had been divided into different branches of science. Now there is almost no branch of science in which computers are not used. One of the newer autonomous disciplines of computer science is digital humanities, and one of the areas of digital humanities is literature. The material of literature is words, and thanks to software tools created with computer programming languages, analyses that would take a literature researcher months to complete can be carried out quickly and objectively. In this article, three different tools that literary researchers can use in their work are introduced. These tools were created with the computer programming languages Python and R and brought to the world of literature. The purpose of introducing them is to set an example for the development of special tools or programs for Ottoman language and literature in the future and to support such initiatives. The first example to be introduced is the stylometry tool developed with the R language. The second is The Metrical Tool, which is used to measure data in poems and was developed with Python. The last literature analysis tool in this article is Voyant Tools, which is multifunctional and easy to use.
Keywords: DH, literature, information technologies, stylometry, the metrical tool, voyant tools
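To give a flavor of what such tools compute, the Python sketch below builds a very small stylometric profile (relative frequencies of the most common words) for two texts and compares them with cosine similarity. Dedicated stylometry tools compute far richer feature sets (function-word frequencies, n-grams, distance measures); the two short sentences here are invented examples, not material from the article.

```python
from collections import Counter
import math, re

def profile(text, n_words=50):
    """Relative frequencies of the most common words -- a basic stylometric profile."""
    tokens = re.findall(r"[a-zğüşöçıâîû']+", text.lower())
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.most_common(n_words)}

def cosine_similarity(p1, p2):
    words = set(p1) | set(p2)
    dot = sum(p1.get(w, 0) * p2.get(w, 0) for w in words)
    norm = (math.sqrt(sum(v * v for v in p1.values()))
            * math.sqrt(sum(v * v for v in p2.values())))
    return dot / norm

text_a = "The sea was calm and the night was quiet, and the old man rowed on."
text_b = "The night was calm; the sea carried the old boat quietly toward the shore."
print(cosine_similarity(profile(text_a), profile(text_b)))
```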
Procedia PDF Downloads 150
3906 A Hybrid Distributed Algorithm for Solving Job Shop Scheduling Problem
Authors: Aydin Teymourifar, Gurkan Ozturk
Abstract:
In this paper, a distributed hybrid algorithm is proposed for solving the job shop scheduling problem. The suggested method executes different artificial neural networks, heuristics, and meta-heuristics simultaneously on more than one machine. The neural networks are used to handle the constraints of the problem, the meta-heuristics search the global space, and the heuristics are used to prevent premature convergence. To obtain an efficient distributed intelligent method for solving large and distributed job shop scheduling problems, the Apache Spark and Hadoop frameworks are used, and new approaches are applied in the algorithm design and implementation steps. Comparison between the proposed algorithm and other efficient algorithms from the literature shows its efficiency: it is able to solve large instances in a short time.
Keywords: distributed algorithms, Apache Spark, Hadoop, job shop scheduling, neural network
Procedia PDF Downloads 386
3905 Robust Data Image Watermarking for Data Security
Authors: Harsh Vikram Singh, Ankur Rai, Anand Mohan
Abstract:
In this paper, we propose a secure and robust data hiding algorithm based on the DCT, the Arnold transform, and a chaotic sequence. The watermark image is scrambled by the Arnold cat map to increase its security, and a chaotic map is then used to spread the watermark signal in the middle band of the DCT coefficients of the cover image. The chaotic map serves as a pseudo-random generator for digital data hiding, increasing security and robustness. Performance evaluation of the robustness and imperceptibility of the proposed algorithm has been carried out using the bit error rate (BER), normalized correlation (NC), and peak signal-to-noise ratio (PSNR) for different watermark and cover images, such as the Lena, Girl, and Tank images, and for different gain factors. We use a binary logo image and a text image as watermarks. The experimental results demonstrate that the proposed algorithm achieves higher security and robustness against JPEG compression, as well as other attacks such as noise addition, low-pass filtering, and cropping, compared to other existing algorithms that use DCT coefficients. Moreover, the proposed algorithm does not require the original cover image to recover the watermark.
Keywords: data hiding, watermarking, DCT, chaotic sequence, Arnold transform
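The scrambling step mentioned in the abstract can be illustrated on its own: the Python sketch below applies the standard Arnold cat map (x, y) → (x + y, x + 2y) mod N to a square binary "logo" and then inverts it, which is how a scrambled watermark is typically descrambled after extraction. The DCT embedding and the chaotic spreading are not shown, and the 32×32 random logo is just a stand-in.

```python
import numpy as np

def arnold_scramble(img, iterations):
    """Arnold cat map scrambling of a square N x N image (or watermark)."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def arnold_unscramble(img, iterations):
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[x, y] = out[(x + y) % n, (x + 2 * y) % n]   # apply the inverse map
        out = nxt
    return out

w = (np.random.default_rng(1).random((32, 32)) > 0.5).astype(np.uint8)  # binary logo stand-in
scrambled = arnold_scramble(w, 7)
assert np.array_equal(arnold_unscramble(scrambled, 7), w)
```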
Procedia PDF Downloads 513
3904 Wait-Optimized Scheduler Algorithm for Efficient Process Scheduling in Computer Systems
Authors: Md Habibur Rahman, Jaeho Kim
Abstract:
Efficient process scheduling is a crucial factor in ensuring optimal system performance and resource utilization in computer systems. While various algorithms have been proposed over the years, there are still limits to their effectiveness. This paper introduces a new Wait-Optimized Scheduler (WOS) algorithm that aims to minimize process waiting time by dividing processes into two layers and considering both processing time and waiting time. The WOS algorithm is non-preemptive and prioritizes processes with the shortest WOS value. In the first layer, each process runs for a predetermined duration, and any unfinished process is subsequently moved to the second layer, resulting in a decrease in response time. Whenever the first layer is free, or the number of processes in the second layer is twice that of the first layer, the algorithm sorts all processes in the second layer by their remaining time minus waiting time and sends one process to the first layer to run. This ensures that all processes eventually run, optimizing waiting time. To evaluate the performance of the WOS algorithm, we conducted experiments comparing it with traditional scheduling algorithms such as First-Come-First-Serve (FCFS) and Shortest-Job-First (SJF). The results showed that the WOS algorithm outperformed the traditional algorithms in reducing process waiting time, particularly in scenarios with a large number of short tasks and long wait times. Our study highlights the effectiveness of the WOS algorithm in improving process scheduling efficiency in computer systems. By reducing process waiting time, the WOS algorithm can improve system performance and resource utilization. The findings of this study provide valuable insights for researchers and practitioners developing and implementing efficient process scheduling algorithms.
Keywords: process scheduling, wait-optimized scheduler, response time, non-preemptive, waiting time, traditional scheduling algorithms, first-come-first-serve, shortest-job-first, system performance, resource utilization
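The following Python sketch is a simplified single-CPU simulation of the two-layer idea as we read it from the abstract: first-layer processes run for a fixed slice, unfinished work drops to the second layer, and one second-layer process is promoted (smallest remaining-minus-waiting time) whenever the first layer is empty or outnumbered two to one. The slice length, promotion details, and burst times are our own assumptions for illustration, not the authors' exact rules or test workload.

```python
from collections import deque

QUANTUM = 4  # assumed predetermined first-layer time slice (time units)

def wos_schedule(bursts):
    """Simplified single-CPU simulation of the two-layer wait-optimized idea."""
    procs = [{"id": i, "remaining": b, "waiting": 0, "done_at": None} for i, b in enumerate(bursts)]
    layer1, layer2, clock = deque(procs), [], 0
    while layer1 or layer2:
        # promote one layer-2 process when layer 1 is empty or layer 2 is twice as large,
        # choosing the smallest (remaining time - waiting time) as described in the abstract
        if layer2 and (not layer1 or len(layer2) >= 2 * len(layer1)):
            layer2.sort(key=lambda p: p["remaining"] - p["waiting"])
            layer1.appendleft(layer2.pop(0))
        p = layer1.popleft()
        run = min(QUANTUM, p["remaining"])
        clock += run
        p["remaining"] -= run
        for q in list(layer1) + layer2:           # everyone else accumulates waiting time
            q["waiting"] += run
        if p["remaining"] == 0:
            p["done_at"] = clock
        else:
            layer2.append(p)                      # unfinished work drops to the second layer
    return sorted((p["id"], p["done_at"], p["waiting"]) for p in procs)

print(wos_schedule([3, 12, 5, 1, 20, 2]))   # (id, completion time, total waiting time)
```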
Procedia PDF Downloads 90
3903 Comparison of ANFIS Update Methods Using Genetic Algorithm, Particle Swarm Optimization, and Artificial Bee Colony
Authors: Michael R. Phangtriastu, Herriyandi Herriyandi, Diaz D. Santika
Abstract:
This paper presents a comparison of metaheuristic algorithms implemented to train the antecedent and consequent parameters of the adaptive network-based fuzzy inference system (ANFIS). The algorithms compared are the genetic algorithm (GA), particle swarm optimization (PSO), and the artificial bee colony (ABC). The objective of this paper is to benchmark these well-known metaheuristic algorithms. The algorithms are applied to several data sets of a different nature, and various combinations of the algorithms' parameters are tested: different population sizes in all algorithms, different velocity settings in PSO, and different abandonment limits in ABC. The experiments show that ABC is more reliable than the other algorithms, achieving a better mean square error (MSE) on all data sets.
Keywords: ANFIS, artificial bee colony, genetic algorithm, metaheuristic algorithm, particle swarm optimization
Procedia PDF Downloads 351
3902 An Efficient Strategy for Relay Selection in Multi-Hop Communication
Authors: Jung-In Baik, Seung-Jun Yu, Young-Min Ko, Hyoung-Kyu Song
Abstract:
This paper proposes an efficient relaying algorithm that obtains diversity to improve the reliability of a signal. The algorithm achieves time or space diversity gain by delivering multiple versions of the same signal through two routes. Relays are placed between the source and destination, and the routes between them are set adaptively in order to deal with different channels and noise conditions. Each route consists of one or more relays, and the source transmits its signal to the destination through these routes. The signals from the relays are combined and detected at the destination. The proposed algorithm provides better performance than conventional algorithms in terms of bit error rate (BER).
Keywords: multi-hop, OFDM, relay, relaying selection
Procedia PDF Downloads 443
3901 An Automated Optimal Robotic Assembly Sequence Planning Using Artificial Bee Colony Algorithm
Authors: Balamurali Gunji, B. B. V. L. Deepak, B. B. Biswal, Amrutha Rout, Golak Bihari Mohanta
Abstract:
Robots play an important role in manufacturing operations such as pick and place, assembly, spot welding, and much more. Among these, assembly is a very important process: roughly 20% of manufacturing cost is accounted for by assembly. To perform the assembly task effectively, Assembly Sequence Planning (ASP) is required. ASP is a multi-objective, non-deterministic optimization problem; achieving the optimal assembly sequence involves a huge search space and is highly complex in nature. Many researchers have applied different algorithms to the ASP problem, but these approaches have several limitations, such as convergence to local optima, huge search spaces, long execution times, and complexity in applying the algorithm. Keeping these limitations in mind, this paper proposes a new automated optimal robotic assembly sequence planning method using the Artificial Bee Colony (ABC) algorithm. In this method, assembly predicates are extracted automatically through a Computer Aided Design (CAD) interface instead of manually, which reduces the time needed to obtain a feasible assembly sequence. The fitness evaluation of the obtained feasible sequences is carried out using the ABC algorithm to generate the optimal assembly sequence. The proposed methodology is applied to different industrial products, and the results are compared with past literature.
Keywords: assembly sequence planning, CAD, artificial bee colony algorithm, assembly predicates
Procedia PDF Downloads 235
3900 A New Optimization Algorithm for Operation of a Microgrid
Authors: Sirus Mohammadi, Rohala Moghimi
Abstract:
The main advantages of microgrids are high energy efficiency through the application of combined heat and power (CHP), high quality and reliability of the delivered electric energy, and environmental and economic benefits. This study presents an energy management system (EMS) to optimize the operation of a microgrid (MG). An Adaptive Modified Firefly Algorithm (AMFA) is presented for the optimal operation of a typical MG with renewable energy sources (RESs), accompanied by a back-up micro-turbine/fuel cell/battery hybrid power source to level the power mismatch or to store surplus energy when needed. The problem is formulated as a nonlinear constrained problem to minimize the total operating cost. The management of the energy storage system (ESS), economic load dispatch, and operation optimization of distributed generation (DG) are combined into a single-objective optimization problem in the EMS. The proposed algorithm is tested on a typical grid-connected MG including WT/PV/micro-turbine/fuel cell units and energy storage devices (ESDs), and its superior performance is compared with that of other evolutionary algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Fuzzy Self-Adaptive PSO (FSAPSO), Chaotic Particle PSO (CPSO), Adaptive Modified PSO (AMPSO), and the Firefly Algorithm (FA).
Keywords: microgrid, operation management, optimization, firefly algorithm (AMFA)
Procedia PDF Downloads 339
3899 A Fast Parallel and Distributed Type-2 Fuzzy Algorithm Based on Cooperative Mobile Agents Model for High Performance Image Processing
Authors: Fatéma Zahra Benchara, Mohamed Youssfi, Omar Bouattane, Hassan Ouajji, Mohamed Ouadi Bensalah
Abstract:
The aim of this paper is to present a distributed implementation of the type-2 fuzzy algorithm in a parallel and distributed computing environment based on mobile agents. The proposed algorithm is implemented on an SPMD (Single Program Multiple Data) architecture based on cooperative mobile agents, following the AVPE (Agent Virtual Processing Element) model, in order to provide the processing resources needed for big data image segmentation. In this work, we focus on applying the algorithm to a large MRI (Magnetic Resonance Imaging) image of size n x m. The image is encapsulated in the mobile agent team leader and split into m x n pixels, one per AVPE. Each AVPE performs and exchanges its segmentation results and maintains asynchronous communication with the team leader until the algorithm converges. Interesting experimental results are obtained in terms of accuracy and efficiency of the proposed implementation, thanks to the several useful capabilities that mobile agents bring to this distributed computational model.
Keywords: distributed type-2 fuzzy algorithm, image processing, mobile agents, parallel and distributed computing
Procedia PDF Downloads 426
3898 Novel Algorithm for Restoration of Retina Images
Authors: P. Subbuthai, S. Muruganand
Abstract:
Diabetic retinopathy is a complicated disease caused by changes in the blood vessels of the retina. Retina images captured with a fundus camera sometimes have poor contrast and noise, and because of this noise, detection of the blood vessels in the retina is very difficult; preprocessing is therefore needed. In this paper, a novel algorithm is implemented to remove noisy pixels in the retina image. The proposed algorithm is an Extended Median Filter, and it is applied to the green channel of the retina image because green-channel vessels are brighter than the background. The proposed Extended Median Filter is compared with the existing standard median filter using performance metrics such as PSNR, MSE, and RMSE. Experimental results show that the proposed Extended Median Filter gives better results than the existing standard median filter in terms of noise suppression and detail preservation.
Keywords: fundus retina image, diabetic retinopathy, median filter, microaneurysms, exudates
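The paper's Extended Median Filter is not specified in the abstract, so the Python sketch below only shows the general idea it builds on: a windowed median applied to the green channel, here with an optional threshold so that only pixels deviating strongly from the local median are replaced. The window size, threshold, and demo image are assumptions for illustration only.

```python
import numpy as np

def median_filter_green(rgb, k=3, noise_threshold=None):
    """Median-filter the green channel of an RGB retina image (numpy array, H x W x 3).
    If noise_threshold is given, only pixels that deviate strongly from the local median
    are replaced -- the general spirit of conditional ('extended') median filtering."""
    g = rgb[:, :, 1].astype(float)
    pad = k // 2
    padded = np.pad(g, pad, mode="edge")
    out = g.copy()
    for i in range(g.shape[0]):
        for j in range(g.shape[1]):
            window = padded[i:i + k, j:j + k]
            med = np.median(window)
            if noise_threshold is None or abs(g[i, j] - med) > noise_threshold:
                out[i, j] = med
    return out

demo = np.random.default_rng(0).integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
filtered = median_filter_green(demo, k=3, noise_threshold=40)
print(filtered.shape, filtered.dtype)
```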
Procedia PDF Downloads 340
3897 A Research and Application of Feature Selection Based on IWO and Tabu Search
Authors: Laicheng Cao, Xiangqian Su, Youxiao Wu
Abstract:
Feature selection is one of the important problems in network security, pattern recognition, data mining, and other fields. In order to remove redundant features and effectively improve the detection speed of an intrusion detection system, this paper proposes a new feature selection method based on the invasive weed optimization (IWO) algorithm and the tabu search (TS) algorithm. IWO is used for global search and tabu search for local search, improving the results of the IWO algorithm. The experimental results show that the feature selection method can effectively remove redundant features from network data, reduce processing time, guarantee an accurate detection rate, and effectively improve the speed of the detection system.
Keywords: intrusion detection, feature selection, IWO, tabu search
Procedia PDF Downloads 528
3896 Evolving Convolutional Filter Using Genetic Algorithm for Image Classification
Authors: Rujia Chen, Ajit Narayanan
Abstract:
Convolutional neural networks (CNNs), as typically applied in deep learning, use layer-wise backpropagation (BP) to construct filters and kernels for feature extraction. Such filters are 2D or 3D groups of weights for constructing feature maps at subsequent layers of the CNN and are shared across the entire input. BP, as a gradient descent algorithm, has the well-known problem of getting stuck at local optima. The use of genetic algorithms (GAs) for evolving weights between layers of standard artificial neural networks (ANNs) is a well-established area of neuroevolution; in particular, the use of crossover when optimizing weights can help to overcome problems of local optima. However, the application of GAs for evolving the weights of filters and kernels in CNNs is not yet an established area of neuroevolution. In this paper, a GA-based filter development algorithm is proposed. The results of the proof-of-concept experiments described in this paper show that the proposed GA-based algorithm can find filter weights through evolutionary techniques rather than BP learning. For some simple classification tasks, like geometric shape recognition, the proposed algorithm can achieve 100% accuracy. The results for MNIST classification, while not as good as those obtainable through standard filter learning with BP, show that filter and kernel evolution warrants further investigation as a new subarea of neuroevolution for deep architectures.
Keywords: neuroevolution, convolutional neural network, genetic algorithm, filters, kernels
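As a toy demonstration of evolving filter weights without backpropagation, the Python sketch below uses a small GA (uniform crossover plus Gaussian mutation) to evolve a 3x3 kernel whose feature map approximates that of a fixed Sobel target on a random image. This fitness function and setup are our own illustrative assumptions; the paper evaluates evolved filters by classification accuracy inside a CNN, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
IMG = rng.random((12, 12))
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # target filter

def conv2d_valid(img, k):
    h, w = img.shape[0] - 2, img.shape[1] - 2
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

TARGET = conv2d_valid(IMG, SOBEL_X)

def fitness(flat):
    # negative MSE between the evolved filter's feature map and the target feature map
    return -np.mean((conv2d_valid(IMG, flat.reshape(3, 3)) - TARGET) ** 2)

def evolve(pop_size=30, generations=80, sigma=0.2):
    pop = [rng.normal(0, 1, 9) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 4]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            mask = rng.random(9) < 0.5                      # uniform crossover
            child = np.where(mask, parents[a], parents[b]) + rng.normal(0, sigma, 9)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best.reshape(3, 3).round(2), fitness(best))
```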
Procedia PDF Downloads 185
3895 An Accurate Method for Phylogeny Tree Reconstruction Based on a Modified Wild Dog Algorithm
Authors: Essam Al Daoud
Abstract:
This study solves a phylogeny problem by using modified wild dog pack optimization. The least-squares error is considered as the cost function that needs to be minimized; therefore, in each iteration, new distance matrices based on the constructed trees are calculated and used to select the alpha dog. To test the suggested algorithm, ten homologous genes are selected and collected from the National Center for Biotechnology Information (NCBI) databanks (i.e., 16S, 18S, 28S, Cox 1, ITS1, ITS2, ETS, ATPB, Hsp90, and STN). The data are divided into three categories: 50 taxa, 100 taxa, and 500 taxa. The empirical results show that the proposed algorithm is more reliable and accurate than other implemented methods.
Keywords: least squares, neighbor joining, phylogenetic tree, wild dog pack
Procedia PDF Downloads 318
3894 Identification of Soft Faults in Branched Wire Networks by Distributed Reflectometry and Multi-Objective Genetic Algorithm
Authors: Soumaya Sallem, Marc Olivas
Abstract:
This contribution presents a method for detecting, locating, and characterizing soft faults in a complex wired network. The proposed method is based on multi-carrier reflectometry MCTDR (Multi-Carrier Time Domain Reflectometry) combined with a multi-objective genetic algorithm. In order to ensure complete network coverage and eliminate diagnosis ambiguities, the MCTDR test signal is injected at several points of the network, and the data are merged between the different reflectometers (sensors) distributed over the network. An adapted multi-objective genetic algorithm is used to merge the data in order to obtain more accurate fault location and characterization. The performance of the proposed method is evaluated on numerical and experimental results.
Keywords: wired network, reflectometry, network distributed diagnosis, multi-objective genetic algorithm
Procedia PDF Downloads 193
3893 The Interdisciplinary Synergy Between Computer Engineering and Mathematics
Authors: Mitat Uysal, Aynur Uysal
Abstract:
Computer engineering and mathematics share a deep and symbiotic relationship, with mathematics providing the foundational theories and models for advances in computer engineering. From algorithm development to optimization techniques, mathematics plays a pivotal role in solving complex computational problems. This paper explores key mathematical principles that underpin computer engineering, illustrating their significance through a case study that demonstrates the application of optimization techniques using Python code. The case study addresses the well-known vehicle routing problem (VRP), an extension of the traveling salesman problem (TSP), and solves it using a genetic algorithm.
Keywords: VRP, TSP, genetic algorithm, computer engineering, optimization
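The abstract states that the case study solves a VRP with a genetic algorithm in Python; the sketch below is our own minimal version of that idea, not the paper's code. It runs a GA over customer permutations with order crossover and swap mutation, splitting each permutation into capacity-feasible routes from a single depot; the customer locations, demands, capacity, and GA parameters are synthetic assumptions.

```python
import math, random

DEPOT = (0.0, 0.0)
CUSTOMERS = [((random.uniform(-50, 50), random.uniform(-50, 50)), random.randint(1, 8))
             for _ in range(12)]                        # (location, demand) -- synthetic data
CAPACITY = 20

def route_length(points):
    path = [DEPOT] + points + [DEPOT]
    return sum(math.dist(path[i], path[i + 1]) for i in range(len(path) - 1))

def total_distance(perm):
    # split the customer permutation into feasible routes by vehicle capacity
    dist, load, route = 0.0, 0, []
    for idx in perm:
        loc, demand = CUSTOMERS[idx]
        if load + demand > CAPACITY:
            dist += route_length(route); load, route = 0, []
        route.append(loc); load += demand
    return dist + route_length(route)

def order_crossover(a, b):
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    fill = [g for g in b if g not in child]
    for k in range(len(a)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def ga(pop_size=60, generations=400):
    pop = [random.sample(range(len(CUSTOMERS)), len(CUSTOMERS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_distance)
        elite = pop[:pop_size // 4]
        children = []
        while len(children) < pop_size - len(elite):
            child = order_crossover(*random.sample(elite, 2))
            if random.random() < 0.3:                        # swap mutation
                x, y = random.sample(range(len(child)), 2)
                child[x], child[y] = child[y], child[x]
            children.append(child)
        pop = elite + children
    return min(pop, key=total_distance)

best = ga()
print(best, round(total_distance(best), 1))
```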
Procedia PDF Downloads 12