Search results for: algorithm model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18811

18241 Improvement of Central Composite Design in Modeling and Optimization of Simulation Experiments

Authors: A. Nuchitprasittichai, N. Lerdritsirikoon, T. Khamsing

Abstract:

Simulation modeling can be used to solve real-world problems and provides an understanding of a complex system. To develop a simplified model of a process simulation, a suitable experimental design is required that can capture the surface characteristics. This paper presents the experimental design and algorithm used to model a process simulation for an optimization problem. CO2 liquefaction based on external refrigeration with two refrigeration circuits was used as the simulation case study. Latin Hypercube Sampling (LHS) was proposed to be combined with existing Central Composite Design (CCD) samples to improve the performance of CCD in generating the second-order model of the system. The second-order model was then used as the objective function of the optimization problem. The results showed that adding LHS samples to CCD samples helps capture surface curvature characteristics. A suitable number of LHS sample points should be chosen in order to obtain an accurate nonlinear model with a minimum number of simulation experiments.
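
As a rough illustration of the sampling-plus-surrogate idea described above, the sketch below combines a two-factor CCD with a few LHS points and fits a full second-order model by least squares. The simulator run_simulation and all numeric values are hypothetical placeholders, not the CO2 liquefaction flowsheet from the paper.

# Minimal sketch: augment CCD points with LHS samples, fit a quadratic surrogate.
import numpy as np
from scipy.stats import qmc

def run_simulation(x):
    # Hypothetical expensive simulation: returns a scalar response.
    return 1.0 + 2.0 * x[0] - 0.5 * x[1] + 0.3 * x[0] * x[1] + x[0] ** 2

# Central Composite Design for 2 factors (factorial, axial, and center points).
alpha = np.sqrt(2.0)
ccd = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                [-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha],
                [0, 0]])

# Extra space-filling points from Latin Hypercube Sampling, scaled to the same range.
sampler = qmc.LatinHypercube(d=2, seed=0)
lhs = qmc.scale(sampler.random(8), [-alpha, -alpha], [alpha, alpha])

X = np.vstack([ccd, lhs])
y = np.array([run_simulation(x) for x in X])

# Design matrix of a full second-order model: 1, x1, x2, x1*x2, x1^2, x2^2.
def quad_features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

coef, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
print("fitted second-order coefficients:", np.round(coef, 3))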

Keywords: central composite design, CO2 liquefaction, latin hypercube sampling, simulation-based optimization

Procedia PDF Downloads 156
18240 Inverse Mapping of Weld Bead Geometry in Shielded Metal Arc-Welding: Genetic Algorithm Approach

Authors: D. S. Nagesh, G. L. Datta

Abstract:

In the field of welding, various studies have been made by previous investigators to predict as well as optimize weld bead geometric descriptors. Modeling of weld bead shape is important for predicting the quality of welds. In most cases, design-of-experiments techniques have been used to postulate multiple linear regression equations. In this work, the Genetic Algorithm (GA), an intelligent information-processing technique able to handle the complex relationships seen in welding processes, is attempted as a tool for inverse mapping and optimization of the process.
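
The sketch below illustrates one way such an inverse mapping could be set up: a simple GA searches the welding parameters so that a forward regression model reproduces a target bead geometry. The forward-model coefficients, parameter bounds, and GA settings are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

def bead_geometry(params):
    # Hypothetical forward regression model: welding parameters -> (width, penetration).
    current, voltage, speed = params
    width = 0.05 * current + 0.20 * voltage - 0.8 * speed
    penetration = 0.03 * current + 0.05 * voltage - 0.3 * speed
    return np.array([width, penetration])

target = np.array([8.0, 3.0])                              # desired bead geometry (mm)
lo, hi = np.array([80, 18, 2]), np.array([160, 30, 6])     # parameter bounds

def fitness(pop):
    # Negative squared deviation from the target geometry (to be maximized).
    return np.array([-np.sum((bead_geometry(p) - target) ** 2) for p in pop])

pop = rng.uniform(lo, hi, size=(40, 3))
for gen in range(200):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-20:]]                     # truncation selection
    children = parents[rng.integers(0, 20, 40)] + rng.normal(0, 0.5, (40, 3))
    pop = np.clip(children, lo, hi)                        # mutation within bounds

best = pop[np.argmax(fitness(pop))]
print("welding parameters reproducing the target geometry:", np.round(best, 2))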

Keywords: SMAW, genetic algorithm, bead geometry, optimization/inverse mapping

Procedia PDF Downloads 442
18239 Genetic Algorithm Approach for Inverse Mapping of Weld Bead Geometry in Shielded Metal Arc-Welding

Authors: D. S. Nagesh, G. L. Datta

Abstract:

In the field of welding, various studies have been made by previous investigators to predict as well as optimize weld bead geometric descriptors. Modeling of weld bead shape is important for predicting the quality of welds. In most cases, design-of-experiments techniques have been used to postulate multiple linear regression equations. In this work, the Genetic Algorithm (GA), an intelligent information-processing technique able to handle the complex relationships seen in welding processes, is attempted as a tool for inverse mapping and optimization of the process.

Keywords: SMAW, genetic algorithm, bead geometry, optimization/inverse mapping

Procedia PDF Downloads 413
18238 Maximum Efficiency of the Photovoltaic Cells Using a Genetic Algorithm

Authors: Latifa Sabri, Mohammed Benzirar, Mimoun Zazoui

Abstract:

The installation of photovoltaic systems is one of the future ways to generate electricity without emitting pollutants. The photovoltaic cells used in these systems have demonstrated considerable efficiency and many advantages. Several studies have discussed the maximum efficiency of these technologies, but only a few experiments have had the right weather conditions to reach these results. In this paper, two types of cells were selected: crystalline and amorphous silicon. Using a genetic algorithm, the results show that for an ambient temperature of 25°C and direct irradiation of 625 W/m², the efficiency of crystalline silicon is 12% and that of amorphous silicon is 5%.

Keywords: PV, maximum efficiency, solar cell, genetic algorithm

Procedia PDF Downloads 411
18237 A Comparative Analysis on QRS Peak Detection Using BIOPAC and MATLAB Software

Authors: Chandra Mukherjee

Abstract:

This paper presents work in the field of ECG signal analysis using the MATLAB 7.1 platform. An accurate and simple ECG feature extraction algorithm is presented, and the developed algorithm is validated using BIOPAC software. To detect the QRS peak, the ECG signal is processed through the following stages: first derivative, second derivative, and then squaring of the second derivative. The efficiency of the developed algorithm is tested on ECG samples from different databases and on real-time ECG signals acquired using the BIOPAC system. First, a lead-wise threshold value is specified; the samples above that value are marked, and the points in the original signal where these marked samples undergo a change of slope are spotted as R-peaks. The changes of slope on the left and right sides of the R-peak are identified as the Q and S peaks, respectively. The built-in detection algorithm of the BIOPAC software is then run on the same samples and the two outputs are compared. ECG baseline modulation correction is done after detecting the characteristic points. The efficiency of the algorithm is tested using validation parameters such as sensitivity and positive predictivity, and satisfactory values of these parameters were obtained.
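
The sketch below illustrates the derivative-squaring-threshold idea in NumPy on a synthetic signal; it is a simplified reading of the stages listed above, not the validated MATLAB/BIOPAC pipeline, and the threshold ratio and refractory gap are assumed values.

import numpy as np

def detect_r_peaks(ecg, threshold_ratio=0.4):
    d1 = np.gradient(ecg)                          # first derivative
    d2 = np.gradient(d1)                           # second derivative
    feature = d2 ** 2                              # squaring emphasizes QRS slopes
    threshold = threshold_ratio * feature.max()    # lead-wise threshold (assumed ratio)
    marked = np.where(feature > threshold)[0]

    peaks = []
    for i in marked:
        # A change of slope in the original signal marks a local maximum (R-peak).
        if 0 < i < len(ecg) - 1 and ecg[i - 1] < ecg[i] >= ecg[i + 1]:
            if not peaks or i - peaks[-1] > 50:    # refractory gap in samples
                peaks.append(i)
    return np.array(peaks)

# Toy usage with a synthetic "ECG": four sharp bumps on a slightly noisy baseline.
t = np.arange(0, 5, 0.004)
ecg = sum(np.exp(-((t - c) ** 2) / 0.0004) for c in [0.8, 1.8, 2.8, 3.8])
ecg += 0.005 * np.random.default_rng(1).standard_normal(t.size)
print("detected R-peak sample indices:", detect_r_peaks(ecg))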

Keywords: first derivative, variable threshold, slope reversal, baseline modulation correction

Procedia PDF Downloads 400
18236 Graph-Based Semantical Extractive Text Analysis

Authors: Mina Samizadeh

Abstract:

In the past few decades, there has been an explosion in the amount of available data produced from various sources on different topics. The availability of this enormous data necessitates effective computational tools to explore it, which has led to intense and growing interest in the research community in developing computational methods for processing text data. One line of study focuses on condensing the text so that we can reach a higher level of understanding in a shorter time. The two important tasks for doing this are keyword extraction and text summarization. In keyword extraction, we are interested in finding the key important words of a text; this makes us familiar with its general topic. In text summarization, we are interested in producing a short text that includes the important information of the document. The TextRank algorithm, an unsupervised learning method that extends PageRank (the algorithm underlying the Google search engine for ranking pages), has shown its efficacy in large-scale text mining, especially for text summarization and keyword extraction. This algorithm can automatically extract the important parts of a text (keywords or sentences) and return them as a result. However, it neglects the semantic similarity between the different parts. In this work, we improved the results of the TextRank algorithm by incorporating the semantic similarity between parts of the text. Aside from keyword extraction and text summarization, we develop a topic clustering algorithm based on our framework, which can be used individually or as part of generating the summary to overcome coverage problems.
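
As a toy illustration of the graph-based idea, the sketch below builds a word co-occurrence graph and ranks candidates with PageRank via networkx. The character-overlap "similarity" used as an extra edge weight is only a stand-in for a real semantic similarity measure, and none of this reproduces the paper's exact model.

import itertools
import networkx as nx

def keywords(text, window=3, top_k=5):
    words = [w.strip(".,").lower() for w in text.split() if len(w) > 3]
    g = nx.Graph()
    for i, w in enumerate(words):
        for u, v in itertools.combinations(words[i:i + window], 2):
            if u != v:
                # Toy "semantic" weight: Jaccard overlap of character sets.
                sim = len(set(u) & set(v)) / len(set(u) | set(v))
                g.add_edge(u, v, weight=1.0 + sim)
    scores = nx.pagerank(g, weight="weight")      # TextRank-style ranking
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

sample = ("Keyword extraction finds the important words of a text, while "
          "text summarization produces a short text with the key information.")
print(keywords(sample))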

Keywords: keyword extraction, n-gram extraction, text summarization, topic clustering, semantic analysis

Procedia PDF Downloads 55
18235 A Gene Selection Algorithm for Microarray Cancer Classification Using an Improved Particle Swarm Optimization

Authors: Arfan Ali Nagra, Tariq Shahzad, Meshal Alharbi, Khalid Masood Khan, Muhammad Mugees Asif, Taher M. Ghazal, Khmaies Ouahada

Abstract:

Gene selection is an essential step for the classification of microarray cancer data. Gene expression cancer data (DNA microarray) facilitate computing the robust and concurrent expression of various genes. Particle swarm optimization (PSO) requires simple operators and fewer parameters for tuning the model in gene selection. The selection of prognostic genes with small redundancy is a great challenge for the researcher, as there are a few complications in PSO-based selection methods. In this research, a new variant of PSO (self-inertia weight adaptive PSO) is proposed. In the proposed algorithm, SIW-APSO-ELM is explored to achieve high gene-selection prediction accuracy. The new algorithm balances the exploration capabilities of the improved inertia-weight adaptive particle swarm optimization with exploitation. The self-inertia weight adaptive particle swarm optimization (SIW-APSO) is used to search for the solution; it is updated with an evolutionary process in such a way that each particle iteratively improves its velocity and position. An extreme learning machine (ELM) is designed for the selection procedure. The proposed method has been applied to identify a number of genes in the cancer datasets. The classification stage contains ELM, K-centroid nearest neighbor (KCNN), and support vector machine (SVM) classifiers to attain high forecast accuracy compared to state-of-the-art methods on microarray cancer datasets, which shows the effectiveness of the proposed method.
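
The sketch below shows one plausible form of a per-particle self-adaptive inertia weight inside a plain PSO loop; the adaptation rule, coefficients, and toy objective are assumptions for illustration and do not reproduce the SIW-APSO-ELM pipeline.

import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    return np.sum(x ** 2)                  # stand-in for a classifier error measure

dim, n = 10, 20
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
w = np.full(n, 0.9)                        # one inertia weight per particle
pbest, pbest_f = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)]

for it in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w[:, None] * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([objective(p) for p in pos])
    improved = f < pbest_f
    # Self-adaptation (assumed rule): improving particles keep exploring with a
    # larger inertia weight, stagnating particles shrink theirs to exploit locally.
    w = np.clip(np.where(improved, w * 1.05, w * 0.95), 0.4, 0.9)
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("best objective found:", objective(gbest))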

Keywords: microarray cancer, improved PSO, ELM, SVM, evolutionary algorithms

Procedia PDF Downloads 72
18234 Channels Splitting Strategy for Optical Local Area Networks of Passive Star Topology

Authors: Peristera Baziana

Abstract:

In this paper, we present a network configuration for WDM LANs of passive star topology that assumes the set of data WDM channels is split into two separate sets of channels with different access rights over them. In particular, a synchronous-transmission WDMA access algorithm is adopted in order to increase the probability of successful transmission over the data channels and, consequently, to reduce the probability that data packet transmissions are cancelled in order to avoid collisions on the data channels. Thus, a control pre-transmission access scheme is followed over a separate control channel. An analytical Markovian model is studied, and the average throughput is mathematically derived. The performance is studied for several numbers of data channels and various values of the control phase duration.

Keywords: access algorithm, channels division, collisions avoidance, wavelength division multiplexing

Procedia PDF Downloads 282
18233 Knowledge-Driven Decision Support System Based on Knowledge Warehouse and Data Mining by Improving Apriori Algorithm with Fuzzy Logic

Authors: Pejman Hosseinioun, Hasan Shakeri, Ghasem Ghorbanirostam

Abstract:

In recent years, research and study on knowledge sources, decision support systems, data mining, and the procedure of knowledge discovery in databases have become increasingly important, and each of these aspects is considered to affect the others. In this article, we merge the information source and the knowledge source to suggest a knowledge-based system, within the limits of management, based on the storing and restoring of knowledge to manage information and improve decision making and resources. We use data mining and the Apriori algorithm in the procedure of knowledge discovery. One of the problems of the Apriori algorithm is that the user should specify the minimum support threshold for the regularities. Imagine that a user wants to apply the Apriori algorithm to a database with millions of transactions. The user certainly does not have the necessary knowledge of all existing transactions in that database and therefore cannot specify a suitable threshold. Our purpose in this article is to improve the Apriori algorithm. To achieve our goal, we use fuzzy logic to put the data into different clusters before applying the Apriori algorithm to the data in the database, and we also try to suggest the most suitable threshold to the user automatically.
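
The sketch below illustrates the threshold problem on a tiny transaction set: a minimum support count is suggested automatically from the single-item support distribution, and one Apriori-style pass is run with it. The percentile heuristic is a simple stand-in chosen for illustration; the paper's fuzzy-logic clustering step is not reproduced here.

from itertools import combinations
from collections import Counter

transactions = [
    {"bread", "milk"}, {"bread", "diaper", "beer"}, {"milk", "diaper", "beer"},
    {"bread", "milk", "diaper"}, {"bread", "milk", "beer"},
]

# Single-item supports.
item_support = Counter(i for t in transactions for i in t)

# Suggested threshold: lower-quartile single-item support (illustrative heuristic,
# standing in for the fuzzy-clustering-based suggestion).
supports = sorted(item_support.values())
min_count = supports[len(supports) // 4]
print("suggested minimum support count:", min_count)

# One Apriori-style pass: frequent items, then frequent pairs built from them.
frequent_items = {i for i, c in item_support.items() if c >= min_count}
pair_support = Counter(
    pair for t in transactions
    for pair in combinations(sorted(t & frequent_items), 2)
)
frequent_pairs = {p: c for p, c in pair_support.items() if c >= min_count}
print("frequent pairs:", frequent_pairs)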

Keywords: decision support system, data mining, knowledge discovery, data discovery, fuzzy logic

Procedia PDF Downloads 321
18232 Blind Watermarking Using Discrete Wavelet Transform Algorithm with Patchwork

Authors: Toni Maristela C. Estabillo, Michaela V. Matienzo, Mikaela L. Sabangan, Rosette M. Tienzo, Justine L. Bahinting

Abstract:

This study is about blind watermarking of images with different categories and properties using two algorithms, namely the Discrete Wavelet Transform and the Patchwork algorithm. A program was created to perform watermark embedding, extraction, and evaluation. The evaluation is based on three watermarking criteria: image quality degradation, perceptual transparency, and security. Image quality is measured by comparing the original properties with those of the processed image. Perceptual transparency is measured by visual inspection in a survey. Security is measured by implementing geometrical and non-geometrical attacks through pass/fail testing. The values used to measure these criteria are mostly based on the Mean Squared Error (MSE) and the Peak Signal-to-Noise Ratio (PSNR). The results are based on statistical methods used to collect and interpret data, such as averaging, the z-test, and surveys. The study concluded that the combined DWT and Patchwork algorithms were less efficient and less capable of watermarking than the DWT algorithm alone.
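
For reference, the sketch below computes the two image-quality metrics named above (MSE and PSNR) between an original and a slightly perturbed image; the random arrays stand in for real test images and do not implement the DWT or Patchwork embedding itself.

import numpy as np

def mse(original, processed):
    return np.mean((original.astype(np.float64) - processed.astype(np.float64)) ** 2)

def psnr(original, processed, max_value=255.0):
    m = mse(original, processed)
    return float("inf") if m == 0 else 10.0 * np.log10(max_value ** 2 / m)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
# A slightly perturbed copy stands in for a watermarked image.
watermarked = np.clip(img.astype(int) + rng.integers(-3, 4, img.shape), 0, 255)

print(f"MSE: {mse(img, watermarked):.2f}, PSNR: {psnr(img, watermarked):.2f} dB")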

Keywords: blind watermarking, discrete wavelet transform algorithm, patchwork algorithm, digital watermark

Procedia PDF Downloads 258
18231 An Optimal Bayesian Maintenance Policy for a Partially Observable System Subject to Two Failure Modes

Authors: Akram Khaleghei Ghosheh Balagh, Viliam Makis, Leila Jafari

Abstract:

In this paper, we present a new maintenance model for a partially observable system subject to two failure modes, namely a catastrophic failure and a failure due to the system degradation. The system is subject to condition monitoring and the degradation process is described by a hidden Markov model. A cost-optimal Bayesian control policy is developed for maintaining the system. The control problem is formulated in the semi-Markov decision process framework. An effective computational algorithm is developed and illustrated by a numerical example.

Keywords: partially observable system, hidden Markov model, competing risks, multivariate Bayesian control

Procedia PDF Downloads 441
18230 Artificial Neural Network Approach for Modeling Very Short-Term Wind Speed Prediction

Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Juan C. Seck-Tuoh-Mora, Norberto Hernandez-Romero, Irving Barragán-Vite

Abstract:

Wind speed forecasting is an important issue for planning wind power generation facilities. Accurate wind speed prediction allows good performance of wind turbines for electricity generation. A model based on artificial neural networks is presented in this work. A dataset with atmospheric information about air temperature, atmospheric pressure, wind direction, and wind speed in Pachuca, Hidalgo, México, was used to train the artificial neural network. The data were downloaded from the web page of the National Meteorological Service of the Mexican government. The records were gathered over three months, at time intervals of ten minutes. This dataset was used in an iterative algorithm to create 1,110 ANNs with different configurations, ranging from one to three hidden layers and from 1 to 10 neurons per hidden layer. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which is used to learn the relationship between input and output values. The model with the best performance contains three hidden layers with 9, 6, and 5 neurons, respectively; the coefficient of determination obtained was r² = 0.9414, and the root mean squared error was 1.0559. In summary, the ANN approach is suitable for predicting the wind speed in Pachuca City because the r² value denotes a good fit to the gathered records, and the obtained ANN model can be used in the planning of wind power generation grids.
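
A hedged scikit-learn sketch of this kind of model is shown below, using the reported 9-6-5 hidden-layer sizes. Note that MLPRegressor trains with Adam or L-BFGS rather than Levenberg-Marquardt backpropagation, and the synthetic data only stand in for the ten-minute meteorological records.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(5, 30, n),       # air temperature (degrees C)
    rng.uniform(750, 780, n),    # atmospheric pressure (hPa)
    rng.uniform(0, 360, n),      # wind direction (degrees)
])
# Synthetic wind speed loosely tied to the predictors plus noise.
y = 2 + 0.1 * X[:, 0] + 0.02 * (X[:, 1] - 760) + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Three hidden layers with 9, 6, and 5 neurons, as in the best model reported.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(9, 6, 5), max_iter=5000, random_state=0),
)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("r2:", round(r2_score(y_te, pred), 3),
      "RMSE:", round(mean_squared_error(y_te, pred) ** 0.5, 3))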

Keywords: wind power generation, artificial neural networks, wind speed, coefficient of determination

Procedia PDF Downloads 104
18229 Fast and Scale-Adaptive Target Tracking via PCA-SIFT

Authors: Yawen Wang, Hongchang Chen, Shaomei Li, Chao Gao, Jiangpeng Zhang

Abstract:

As the main challenges for target tracking are accounting for target scale change and running in real time, we combine the Mean-Shift and PCA-SIFT algorithms to solve the problem. We introduce a similarity-comparison method to determine how the target scale changes and take different strategies in different situations. Because a growing target scale causes location error, we employ backward tracking to reduce the error. The Mean-Shift algorithm has poor performance when tracking a scale-changing target due to the fixed bandwidth of its kernel function. In order to overcome this problem, we introduce PCA-SIFT matching: through keypoint matching between target and template, the scale of the tracking window can be adjusted adaptively. Because this step is sensitive to wrong matches, we introduce RANSAC to reduce mismatches as far as possible; furthermore, target relocation is triggered when the number of matches is too small. In addition, we take comprehensive account of target deformation and error accumulation to put forward a new template-update method. Experiments on five image sequences and comparison with six other algorithms demonstrate the favorable performance of the proposed tracking algorithm.
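
The OpenCV sketch below illustrates the matching-plus-RANSAC step and how a scale factor for the tracking window could be read off the estimated transform. Plain SIFT is used as a stand-in for PCA-SIFT (which OpenCV does not provide), and the ratio-test and match-count thresholds are assumed values, so this is not the paper's implementation.

import cv2
import numpy as np

def estimate_scale(template_gray, frame_gray):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(template_gray, None)
    kp2, des2 = sift.detectAndCompute(frame_gray, None)
    if des1 is None or des2 is None:
        return None

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]  # ratio test
    if len(good) < 4:
        return None   # too few matches: trigger target relocation instead

    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])
    # RANSAC rejects wrong matches while estimating a similarity transform.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if M is None:
        return None
    return float(np.hypot(M[0, 0], M[0, 1]))   # isotropic scale factor

# Usage idea: scale = estimate_scale(template, frame); if scale: window_size *= scale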

Keywords: target tracking, PCA-SIFT, mean-shift, scale-adaptive

Procedia PDF Downloads 422
18228 Random Subspace Ensemble of CMAC Classifiers

Authors: Somaiyeh Dehghan, Mohammad Reza Kheirkhahan Haghighi

Abstract:

The rapid growth of domains whose data have a large number of features while the number of samples is limited has caused difficulty in constructing strong classifiers. Reducing the dimensionality of the feature space therefore becomes an essential step in the classification task. The random subspace method (or attribute bagging) is an ensemble classifier that consists of several classifiers, where each base learner in the ensemble uses a subset of the features. In the present paper, we introduce the Random Subspace Ensemble of CMAC neural networks (RSE-CMAC), in which each CMAC is trained on a subset of the features, and we use this model for the classification task. To evaluate the performance of our model, we compare it with the bagging algorithm on 36 UCI datasets. The results reveal that the new model has better performance.
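
The scikit-learn sketch below shows the random subspace construction itself (feature subsets per base learner, no sample bootstrapping). A small MLP stands in for the CMAC network, which scikit-learn does not provide, and the dataset and hyperparameters are illustrative, not those of the paper.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Base learner: a small MLP standing in for the CMAC network.
base = make_pipeline(StandardScaler(),
                     MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0))

# Random subspace: every base learner sees all samples but only half the features.
ensemble = BaggingClassifier(
    estimator=base,            # named base_estimator in scikit-learn < 1.2
    n_estimators=10,
    max_features=0.5,
    bootstrap=False,           # no sample bootstrapping: subspace only
    bootstrap_features=False,  # feature subsets drawn without replacement
    random_state=0,
)
print("5-fold CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean().round(3))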

Keywords: classification, random subspace, ensemble, CMAC neural network

Procedia PDF Downloads 318
18227 Enhancing Project Management Performance in Prefabricated Building Construction under Uncertainty: A Comprehensive Approach

Authors: Niyongabo Elyse

Abstract:

Prefabricated building construction is a pioneering approach that combines design, production, and assembly to attain energy efficiency, environmental sustainability, and economic feasibility. Despite continuous development of the industry in China, the low technical maturity of standardized design, factory production, and construction assembly introduces uncertainties affecting prefabricated component production and on-site assembly processes. This research focuses on enhancing project management performance under uncertainty to help enterprises navigate these challenges and optimize project resources. The study introduces a perspective on how uncertain factors influence the implementation of prefabricated building construction projects. It proposes a theoretical model considering project process management ability, adaptability to uncertain environments, and the collaboration ability of project participants. The impact of uncertain factors is demonstrated through case studies and quantitative analysis, revealing constraints on implementation time, cost, quality, and safety. To address uncertainties in prefabricated component production scheduling, a fuzzy model is presented, expressing processing times as interval values. The model utilizes a cooperative co-evolutionary algorithm (CCEA) to optimize scheduling, demonstrated through a real case study showcasing reduced project duration and minimized effects of processing time disturbances. Additionally, the research addresses on-site assembly construction scheduling, considering the relationship between task processing times and assigned resources. A multi-objective model with fuzzy activity durations is proposed, employing a hybrid cooperative co-evolutionary algorithm (HCCEA) to optimize project scheduling. Results from real case studies indicate improved project performance in terms of duration, cost, and resilience to processing time delays and resource changes. The study also introduces a multistage dynamic process control model, utilizing IoT technology for real-time monitoring during component production and construction assembly. This approach dynamically adjusts schedules when constraints arise, leading to enhanced project management performance, as demonstrated in a real prefabricated housing project. Key contributions include a fuzzy prefabricated component production scheduling model, a multi-objective multi-mode resource-constrained construction project scheduling model with fuzzy activity durations, a multi-stage dynamic process control model, and a cooperative co-evolutionary algorithm. The integrated mathematical model addresses the complexity of prefabricated building construction project management, providing a theoretical foundation for practical decision-making in the field.

Keywords: prefabricated construction, project management performance, uncertainty, fuzzy scheduling

Procedia PDF Downloads 38
18226 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability

Authors: Chin-Chia Jane

Abstract:

In a transportation network, travel time refers to the transmission time from the source node to the destination node, whereas reliability refers to the probability of a successful connection from source to destination. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight, which is the travel time per unit of commodity going through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the demand requirement to the destination while the total transmission time stays under the travel time limitation. This work is pioneering in that, whereas the existing literature evaluates travel time reliability via a single optimal path, the proposed QoS focuses on the performance of the whole network system. To compute the QoS of transportation networks, we first transfer the extended network model into an equivalent min-cost max-flow network model. In the transferred network, each arc has a new travel time weight which takes the value 0. Each intermediate node is replaced by two nodes u and v and an arc directed from u to v; the newly generated nodes u and v are perfect nodes. The new direct arc has three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left. The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, the QoS can be obtained directly by summing the probabilities of these reliable subsets. Computational experiments are conducted on a benchmark network which has 11 nodes and 21 arcs. Five travel time limitations and five demand requirements are set to compute the QoS value. For comparison, we test the exhaustive complete enumeration method. Computational results reveal that the proposed algorithm is much more efficient than the complete enumeration method. In this work, a transportation network is analyzed by an extended flow network model where each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demands, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently. Computational experiments conducted on a prototype network show that the proposed algorithm is superior to existing complete enumeration methods.
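
A small networkx sketch of the node-splitting transformation described above is given below: each intermediate node with a time weight becomes an arc carrying that time as a cost, after which a standard min-cost max-flow computation applies. The tiny example network, capacities, and time weights are made up for illustration, and the probabilistic state-space decomposition itself is not reproduced.

import networkx as nx

# Original network: arcs with capacities; intermediate nodes carry time weights.
arcs = [("s", "a", 5), ("s", "b", 4), ("a", "t", 3), ("b", "t", 6), ("a", "b", 2)]
node_time = {"a": 2, "b": 1}     # travel time per unit of commodity through the node

G = nx.DiGraph()
for u, v, cap in arcs:
    # In the transferred network every original arc keeps a zero time weight.
    G.add_edge(u + "_out" if u in node_time else u,
               v + "_in" if v in node_time else v,
               capacity=cap, weight=0)
for node, t in node_time.items():
    # Node splitting: the arc node_in -> node_out carries the node's time weight as cost.
    G.add_edge(node + "_in", node + "_out", capacity=100, weight=t)

flow = nx.max_flow_min_cost(G, "s", "t")
print("maximum flow value:", sum(flow["s"].values()))
print("minimum total travel time at that flow:", nx.cost_of_flow(G, flow))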

Keywords: quality of service, reliability, transportation network, travel time

Procedia PDF Downloads 211
18225 Comparative Analysis of Two Different Ant Colony Optimization Algorithms for Solving the Travelling Salesman Problem

Authors: Sourabh Joshi, Tarun Sharma, Anurag Sharma

Abstract:

Ant Colony Optimization is a heuristic algorithm which has proven to be a successful technique applied to a number of combinatorial optimization problems. Two variants of the Ant Colony Optimization algorithm, Ant System and Max-Min Ant System, are implemented in MATLAB to solve the Travelling Salesman Problem, and the results are compared. In this paper, both systems are analyzed by solving some Travelling Salesman Problem instances to show which system solves the problem better in terms of cost and time.
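
For orientation, the sketch below implements a plain Ant System loop on a small random TSP instance in Python (the paper's experiments are in MATLAB); Max-Min Ant System would additionally bound the pheromone values and let only the best ant deposit pheromone. All parameter settings here are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n = 12
cities = rng.random((n, 2))
dist = np.linalg.norm(cities[:, None] - cities[None, :], axis=2) + np.eye(n)
tau = np.ones((n, n))                        # pheromone trails
alpha, beta, rho, ants, iters = 1.0, 3.0, 0.5, 20, 100

best_len, best_tour = np.inf, None
for _ in range(iters):
    tours = []
    for _ in range(ants):
        tour = [rng.integers(n)]
        while len(tour) < n:
            i = tour[-1]
            mask = np.ones(n, bool)
            mask[tour] = False
            # Transition probabilities from pheromone and inverse distance.
            p = (tau[i] ** alpha) * ((1.0 / dist[i]) ** beta) * mask
            tour.append(rng.choice(n, p=p / p.sum()))
        length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
        tours.append((length, tour))
        if length < best_len:
            best_len, best_tour = length, tour
    tau *= (1 - rho)                         # evaporation
    for length, tour in tours:               # Ant System: every ant deposits pheromone
        for k in range(n):
            tau[tour[k], tour[(k + 1) % n]] += 1.0 / length

print("best tour length:", round(best_len, 3))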

Keywords: Ant Colony Optimization, Travelling Salesman Problem, Ant System, Max-Min Ant System

Procedia PDF Downloads 467
18224 Identification of the Parameters of an AC Servomotor Using a Genetic Algorithm

Authors: J. G. Batista, K. N. Sousa, J. L. Nunes, R. L. S. Sousa, G. A. P. Thé

Abstract:

This work deals with parameter identification of permanent magnet motors, a class of AC motor which is particularly important in industrial automation due to its high performance. These motors are very attractive for applications with limited space because they have reduced size and volume and can operate over a wide speed range without independent ventilation. By using experimental data and a genetic algorithm, we have been able to extract values for both the motor inductance and the electromechanical coupling constant, which are then compared to measured and/or expected values.

Keywords: modeling, AC servomotor, permanent magnet synchronous motor-PMSM, genetic algorithm, vector control, robotic manipulator, control

Procedia PDF Downloads 455
18223 Continuous Differential Evolution Based Parameter Estimation Framework for Signal Models

Authors: Ammara Mehmood, Aneela Zameer, Muhammad Asif Zahoor Raja, Muhammad Faisal Fateh

Abstract:

In this work, the strength of a bio-inspired computational intelligence technique is exploited for parameter estimation of periodic signals using Continuous Differential Evolution (CDE), by defining an error function in the mean square sense. The multidimensional and nonlinear nature of the problem arising in sinusoidal signal models, along with noise, makes it a challenging optimization task, which is dealt with through the robustness and effectiveness of CDE to ensure convergence and avoid trapping in local minima. In the proposed scheme of Continuous Differential Evolution based Signal Parameter Estimation (CDESPE), the unknown adjustable weights of the signal system identification model are optimized using the CDE algorithm. The performance of the CDESPE model is validated through various statistics-based performance indices over a sufficiently large number of runs, in terms of estimation error, mean squared error, and Theil's inequality coefficient. The efficacy of CDESPE is examined by comparison with the actual parameters of the system, with Genetic Algorithm based outcomes, and with various deterministic approaches at different signal-to-noise ratio (SNR) levels.
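
As a rough illustration, the sketch below uses SciPy's differential_evolution as a stand-in for the paper's CDE variant to estimate the amplitude, frequency, and phase of a noisy sinusoid by minimizing the mean squared error; the signal parameters, noise level, and bounds are assumed values.

import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 400)
true_amp, true_freq, true_phase = 1.5, 7.0, 0.6
signal = true_amp * np.sin(2 * np.pi * true_freq * t + true_phase)
noisy = signal + rng.normal(0, 0.2, t.size)        # moderate SNR

def mse(params):
    # Error function in the mean square sense, as in the CDESPE scheme.
    amp, freq, phase = params
    model = amp * np.sin(2 * np.pi * freq * t + phase)
    return np.mean((noisy - model) ** 2)

result = differential_evolution(mse, bounds=[(0, 5), (1, 20), (0, 2 * np.pi)],
                                seed=0, tol=1e-8)
print("estimated (amplitude, frequency, phase):", np.round(result.x, 3))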

Keywords: parameter estimation, bio-inspired computing, continuous differential evolution (CDE), periodic signals

Procedia PDF Downloads 290
18222 An Efficient Algorithm for Global Alignment of Protein-Protein Interaction Networks

Authors: Duc Dong Do, Ngoc Ha Tran, Thanh Hai Dang, Cao Cuong Dang, Xuan Huan Hoang

Abstract:

Globally aligning two protein-protein interaction networks is an essential and important task in the bioinformatics/computational biology field of study. It has been a challenging and widely studied research topic in recent years. Accurately aligned networks allow us to identify functional modules of proteins and/or orthologous proteins from which unknown functions of a protein can be inferred. We here introduce a novel, efficient heuristic global network alignment algorithm called FASTAn, consisting of two phases: the first constructs an initial alignment, and the second improves this alignment by a repeated local optimization procedure. The experimental results demonstrated that FASTAn outperformed the state-of-the-art global network alignment algorithm SPINAL in terms of both commonly used objective scores and run-time.

Keywords: FASTAn, Heuristic algorithm, biological network alignment, protein-protein interaction networks

Procedia PDF Downloads 590
18221 On the Convergence of the Mixed Integer Randomized Pattern Search Algorithm

Authors: Ebert Brea

Abstract:

We propose a novel direct search algorithm for identifying at least a local minimum of mixed integer nonlinear unconstrained optimization problems. The Mixed Integer Randomized Pattern Search Algorithm (MIRPSA), so called by the author, is based on a randomized pattern search, which is modified by the MIRPSA for finding at least a local minimum of our problem. The MIRPSA has two main operations over the randomized pattern search: a moving operation and a shrinking operation. Each operation is carried out by the algorithm when a set of conditions holds. The convergence properties of the MIRPSA are analyzed using a Markov chain approach, represented by a countably infinite state space λ, where each state d(q) is defined by a measure of the qth randomized pattern search Hq, for all q in N. According to the algorithm, when a moving operation is carried out on the qth randomized pattern search Hq, the MIRPSA holds its state. Meanwhile, if the MIRPSA carries out a shrinking operation over the qth randomized pattern search Hq, the algorithm visits the next state; that is, a shrinking operation at the qth state causes a change from the qth state into the (q+1)th state. It is worth pointing out that the MIRPSA never goes back to any visited state, because it only leaves a state through shrinking operations. In this article, we describe the MIRPSA for mixed integer nonlinear unconstrained optimization problems in order to carry out a deep study of its convergence properties from a Markov chain viewpoint. We include a low-dimensional case to show more details of the MIRPSA when the algorithm is used for identifying the minimum of a mixed integer quadratic function. In addition, numerical examples are shown in order to measure the performance of the MIRPSA.

Keywords: direct search, mixed integer optimization, random search, convergence, Markov chain

Procedia PDF Downloads 456
18220 Robust Speed Sensorless Control to Estimated Error for PMa-SynRM

Authors: Kyoung-Jin Joo, In-Gun Kim, Hyun-Seok Hong, Dong-Woo Kang, Ju Lee

Abstract:

Recently, the permanent magnet-assisted synchronous reluctance motor (PMa-SynRM), which can be substituted for the induction motor, has been studied because of the need to develop premium high-efficiency motors for the minimum energy performance standard (MEPS). The PMa-SynRM requires speed and position information for motor speed and torque control. However, applying sensors raises many problems, such as a shortage of sensor mounting space and additional cost. Therefore, in this paper, speed-sensorless control based on a model reference adaptive system (MRAS) is introduced to eliminate the sensor. The sensorless method is constructed with a reference model as the standard and an adaptive model as the state observer. The proposed algorithm is verified by simulation.

Keywords: PMa-SynRM, sensorless control, robust estimation, MRAS method

Procedia PDF Downloads 388
18219 Inventory Decisions for Perishable Products with Age and Stock Dependent Demand Rate

Authors: Maher Agi, Hardik Soni

Abstract:

This paper presents a deterministic model for optimized control of the inventory of a perishable product subject to both physical deterioration and degradation of its freshness condition. The demand for the product depends on its current inventory level and freshness condition. Our model allows for any positive amount of end of cycle inventory. Some useful conditions that characterize the optimal solution of the model are derived and an algorithm is presented for finding the optimal values of the price, the inventory cycle, the end of cycle inventory level and the order quantity. Numerical examples are then given. Our work shows how the product freshness in conjunction with the inventory deterioration affects the inventory management decisions.

Keywords: inventory management, lot sizing, perishable products, deteriorating inventory, age-dependent demand, stock-dependent demand

Procedia PDF Downloads 225
18218 Arithmetic Operations Based on Double Base Number Systems

Authors: K. Sanjayani, C. Saraswathy, S. Sreenivasan, S. Sudhahar, D. Suganya, K. S. Neelukumari, N. Vijayarangan

Abstract:

The Double Base Number System (DBNS) is an emerging system for representing a number using two bases, namely 2 and 3, which has applications in Elliptic Curve Cryptography (ECC) and the Digital Signature Algorithm (DSA). The previous binary representation method included only base 2. DBNS uses an approximation algorithm, namely the greedy algorithm. By using this algorithm, the number of digits required to represent a large number is smaller than with the standard binary method based on base 2; hence, the computational speed is increased and computation time is reduced. The standard binary method uses the binary digits 0 and 1 to represent a number, whereas the DBNS method uses the digit 1 alone to represent any number (canonical form). The greedy algorithm represents the number in two ways: one using only positive summands, and the other using both positive and negative summands. In this paper, these arithmetic operations are used for elliptic curve cryptography. The elliptic curve discrete logarithm problem is the foundation of most day-to-day elliptic curve cryptography and appears to be considerably harder than the discrete logarithm problem. In the elliptic curve digital signature algorithm, key generation requires 160 bits of data when the standard binary representation is used, whereas the number of bits required to generate the key can be reduced with the help of the double base number representation. In this paper, a new technique is proposed to generate the key during encryption and to extract the key during decryption.
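
A short sketch of the greedy double-base representation mentioned above is given below: it repeatedly subtracts the largest term of the form 2^a * 3^b not exceeding the remainder, so every digit used is 1 (the canonical form). The example value is arbitrary.

def greedy_dbns(n):
    # Represent n as a sum of terms 2^a * 3^b chosen greedily.
    terms = []
    while n > 0:
        best = 1
        b_pow = 1
        while b_pow <= n:                      # enumerate powers of 3
            t = b_pow
            while t * 2 <= n:                  # multiply by 2 while it still fits
                t *= 2
            best = max(best, t)
            b_pow *= 3
        terms.append(best)
        n -= best
    return terms

n = 841232
terms = greedy_dbns(n)
print(terms, "sum =", sum(terms))              # far fewer terms than bits of n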

Keywords: cryptography, double base number system, elliptic curve cryptography, elliptic curve digital signature algorithm

Procedia PDF Downloads 388
18217 Improved Artificial Bee Colony Algorithm for Non-Convex Economic Power Dispatch Problem

Authors: Badr M. Alshammari, T. Guesmi

Abstract:

This study presents a modified version of the artificial bee colony (ABC) algorithm that includes a local search technique for solving the non-convex economic power dispatch problem. The local search step is incorporated at the end of each iteration. Total system losses, valve-point loading effects, and prohibited operating zones have been incorporated in the problem formulation. Thus, the problem becomes highly nonlinear, with a discontinuous objective function. The proposed technique is validated using an IEEE benchmark system with ten thermal units. Simulation results demonstrate that the proposed optimization algorithm has better convergence characteristics in comparison with the original ABC algorithm.

Keywords: economic power dispatch, artificial bee colony, valve-point loading effects, prohibited operating zones

Procedia PDF Downloads 245
18216 Predictive Analysis for Big Data: Extension of Classification and Regression Trees Algorithm

Authors: Ameur Abdelkader, Abed Bouarfa Hafida

Abstract:

Since its inception, predictive analysis has revolutionized the IT industry through its robustness and decision-making facilities. It involves the application of a set of data processing techniques and algorithms in order to create predictive models. Its principle is based on finding relationships between explanatory variables and the predicted variables: past occurrences are exploited to predict and to derive the unknown outcome. With the advent of big data, many studies have suggested the use of predictive analytics in order to process and analyze big data. Nevertheless, they have been curbed by the limits of classical methods of predictive analysis when faced with large amounts of data. In fact, because of its volume, its nature (semi-structured or unstructured), and its variety, big data cannot be analyzed efficiently via classical methods of predictive analysis. The authors attribute this weakness to the fact that predictive analysis algorithms do not allow the parallelization and distribution of computation. In this paper, we propose to extend the predictive analysis algorithm Classification And Regression Trees (CART) in order to adapt it for big data analysis. The major changes to this algorithm are presented, and a version of the extended algorithm is then defined in order to make it applicable to huge quantities of data.

Keywords: predictive analysis, big data, predictive analysis algorithms, CART algorithm

Procedia PDF Downloads 131
18215 An Adaptive Hybrid Surrogate-Assisted Particle Swarm Optimization Algorithm for Expensive Structural Optimization

Authors: Xiongxiong You, Zhanwen Niu

Abstract:

Choosing an appropriate surrogate model plays an important role in surrogate-assisted evolutionary algorithms (SAEAs), since there are many model types and different kernel functions to choose from. In this paper, an adaptive method for selecting the most suitable surrogate model is proposed to solve different kinds of expensive optimization problems. Firstly, according to the prediction residual error sum of squares (PRESS) and different model selection strategies, the best individual surrogate models are integrated into multiple ensemble models in each generation. Then, based on the minimum root mean square error (RMSE), the most suitable surrogate model is selected dynamically. Secondly, two methods with a dynamic number of models and different selection strategies are designed, which are used to show the influence of the number of individual models and of the selection strategy. Finally, comparative studies are carried out on several commonly used benchmark problems, as well as a rotor system optimization problem. The results demonstrate the accuracy and robustness of the proposed method.

Keywords: adaptive selection, expensive optimization, rotor system, surrogates assisted evolutionary algorithms

Procedia PDF Downloads 131
18214 Forecasting Optimal Production Program Using Profitability Optimization by Genetic Algorithm and Neural Network

Authors: Galal H. Senussi, Muamar Benisa, Sanja Vasin

Abstract:

In business today, one of the most important issues for any enterprise is cost minimization and profit maximization. A second issue is how to develop a strong and capable model that is able to give us the desired forecasts of these two quantities. Many studies deal with these issues using different methods. In this study, we developed a model for multi-criteria production program optimization, integrated with an artificial neural network. Predicting the production cost and profit per unit of a product, dealing with two opposing functions at the same time, can be extremely difficult, especially if there is a great amount of conflicting information about production parameters. Feed-forward neural networks are suitable for generalization, which means that the network will generate a proper output for input it has never seen. Therefore, with a small set of examples the network will adjust its weight coefficients so that the inputs generate proper outputs. This essential characteristic is one of the most important abilities enabling this network to be used in a variety of problems ranging from engineering to finance. As our results will show later, feed-forward neural networks have a strong ability to map inputs into desired outputs.

Keywords: project profitability, multi-objective optimization, genetic algorithm, Pareto set, neural networks

Procedia PDF Downloads 430
18213 Collocation Method Using Quartic B-Splines for Solving the Modified RLW Equation

Authors: A. A. Soliman

Abstract:

The Modified Regularized Long Wave (MRLW) equation is solved numerically by a new algorithm based on a collocation method using quartic B-splines at the mid-knot points as the element shape. In addition, we use the fourth-order Runge-Kutta method for solving the resulting system of first-order ordinary differential equations instead of a finite difference method. Our test problems, including the migration and interaction of solitary waves, are used to validate the algorithm, which is found to be accurate and efficient. The three invariants of the motion are evaluated to determine the conservation properties of the algorithm. The temporal evolution of a Maxwellian initial pulse is then studied.

Keywords: collocation method, MRLW equation, Quartic B-splines, solitons

Procedia PDF Downloads 293
18212 Influence of Parameters of Modeling and Data Distribution for Optimal Condition on Locally Weighted Projection Regression Method

Authors: Farhad Asadi, Mohammad Javad Mollakazemi, Aref Ghafouri

Abstract:

Recent research in neural network science and neuroscience on modeling complex time series data and statistical learning has focused mostly on learning from high-dimensional input spaces and signals. Local linear models are a strong choice for modeling local nonlinearity in data series. Locally weighted projection regression is a flexible and powerful algorithm for nonlinear approximation in high-dimensional signal spaces. In this paper, different learning scenarios for one- and two-dimensional data series with different distributions are investigated in simulation; noise is then added to the data distributions to create differently disordered distributions in the time series data and to evaluate the algorithm's local prediction of nonlinearity. The performance of the algorithm is then simulated, and its sensitivity to the data distribution, together with the influence of the important local-validity parameter under different data distributions, is explained for cases where the data are widely spread or the number of data points is small.

Keywords: local nonlinear estimation, LWPR algorithm, online training method, locally weighted projection regression method

Procedia PDF Downloads 487