Search results for: ant colony optimization algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6241

5281 Optimal Location of Unified Power Flow Controller (UPFC) for Transient Stability Improvement Using Genetic Algorithm (GA)

Authors: Basheer Idrees Balarabe, Aminu Hamisu Kura, Nabila Shehu

Abstract:

As power demand rapidly increases, the generation and transmission systems are affected by inadequate resources, environmental restrictions and other losses. The role of transient stability control in maintaining steady-state operation in the presence of large disturbances and faults is to describe the ability of the power system to survive serious contingencies in time. A Unified Power Flow Controller (UPFC) plays a vital role in controlling the active and reactive power flows in a transmission line. In this research, a genetic algorithm (GA) method is applied to determine the optimal location of the UPFC device in a power system network for the enhancement of power system transient stability. The optimal location of the UPFC significantly improved the transient stability, damped the oscillations and reduced the peak overshoot. The proposed GA optimization technique iteratively searches for the optimal location of the UPFC and maintains the bus voltages within satisfactory limits. The results indicate that transient stability is improved and steady state is reached faster. Simulations were performed on the IEEE 14-bus test system using the MATLAB/Simulink platform.
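
To fix ideas, here is a minimal Python sketch of a GA searching over candidate bus locations. It is an illustration under stated assumptions, not the authors' code: the chromosome is a single bus index (so crossover degenerates and only selection and mutation are kept), and the MATLAB/Simulink transient-stability evaluation is replaced by a deterministic placeholder fitness.

    import random

    N_BUSES = 14          # IEEE 14-bus test system
    POP_SIZE, GENERATIONS = 20, 50

    def fitness(bus: int) -> float:
        """Hypothetical stand-in: in the paper, each candidate UPFC location
        is scored by a transient-stability simulation (damping, peak
        overshoot, bus-voltage limits). Lower is better here."""
        return ((bus * 2654435761) % 97) / 97.0   # deterministic placeholder

    def genetic_search() -> int:
        population = [random.randrange(1, N_BUSES + 1) for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            # tournament selection: keep the better of two random candidates
            parents = [min(random.sample(population, 2), key=fitness)
                       for _ in range(POP_SIZE)]
            # mutation: occasionally move the UPFC to a neighbouring bus
            population = [min(N_BUSES, max(1, p + random.choice((-1, 0, 1))))
                          for p in parents]
        return min(population, key=fitness)

    print("best UPFC bus:", genetic_search())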

Keywords: UPFC, transient stability, GA, IEEE, MATLAB/Simulink

Procedia PDF Downloads 11
5280 Conception of a Regulated, Dynamic and Intelligent Sewerage in Ostrevent

Authors: Rabaa Tlili Yaakoubi, Hind Nakouri, Olivier Blanpain

Abstract:

The current tools for real-time management of sewer systems are based on two software tools: weather forecasting software and hydraulic simulation software. The first is a major source of imprecision and uncertainty; the second imposes long decision time steps because of its computation time. As a result, the outcomes obtained generally differ from those expected. The major idea of the CARDIO project is to change the basic paradigm by approaching the problem from the "automatic control" side rather than the "hydrology" side. The objective is to make it possible to run a large number of simulations in a very short time (a few seconds), allowing weather forecasts to be replaced by directly using real-time measured rainfall data. The aim is to reach a system where decision-making is based on reliable data and where error correction is permanent. A first model of control laws was built and tested with rainfalls of different return periods. The gains obtained in discharged volume vary from 40 to 100%. A new algorithm was then developed to optimize calculation time and thus to overcome the combinatorial problem encountered in our first approach. Finally, this new algorithm was tested with a 16-year rainfall series. The gains obtained are 60% in the total volume discharged to the natural environment and 80% in the number of discharge events.

Keywords: RTC, paradigm, optimization, automation

Procedia PDF Downloads 282
5279 Arithmetic Operations Based on Double Base Number Systems

Authors: K. Sanjayani, C. Saraswathy, S. Sreenivasan, S. Sudhahar, D. Suganya, K. S. Neelukumari, N. Vijayarangan

Abstract:

Double Base Number System (DBNS) is an emerging system for representing a number using two bases, namely 2 and 3, which has applications in Elliptic Curve Cryptography (ECC) and the Digital Signature Algorithm (DSA). The previous binary representation method included only base 2. DBNS uses an approximation algorithm, namely the greedy algorithm. With this algorithm, the number of digits required to represent a large number is smaller than with the standard binary method using base 2. Hence, computational speed is increased and time is reduced. The standard binary method uses the binary digits 0 and 1 to represent a number, whereas the DBNS method uses the digit 1 alone to represent any number (canonical form). The greedy algorithm can represent the number in two ways: one using only positive summands, the other using both positive and negative summands. In this paper, these arithmetic operations are used for elliptic curve cryptography. The elliptic curve discrete logarithm problem is the foundation of most day-to-day elliptic curve cryptography, and it appears significantly harder than the ordinary discrete logarithm problem. In the elliptic curve digital signature algorithm, key generation requires 160 bits of data when the standard binary representation is used, whereas the number of bits required to generate the key can be reduced with the double base number representation. In this paper, a new technique is proposed to generate the key during encryption and extract it during decryption.
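
As a concrete illustration of the greedy decomposition described above, here is a minimal Python sketch (not from the paper) that expresses an integer as a sum of {2,3}-integers of the form 2^a * 3^b, i.e. a double-base representation using only the digit 1 and positive summands:

    def greedy_dbns(n: int):
        """Greedily decompose n into a sum of {2,3}-integers 2^a * 3^b."""
        terms = []
        while n > 0:
            # largest number of the form 2^a * 3^b not exceeding n
            # (b capped at 40, ample for any n below 3**40)
            best = max(2**a * 3**b
                       for a in range(n.bit_length())
                       for b in range(40)
                       if 2**a * 3**b <= n)
            terms.append(best)
            n -= best
        return terms

    print(greedy_dbns(127))   # [108, 18, 1], i.e. 2^2*3^3 + 2*3^2 + 1

Three summands suffice for 127, versus seven set bits in its binary representation, which is the digit-count saving the abstract refers to.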

Keywords: cryptography, double base number system, elliptic curve cryptography, elliptic curve digital signature algorithm

Procedia PDF Downloads 394
5278 Integrated Simulation and Optimization for Carbon Capture and Storage System

Authors: Taekyoon Park, Seokgoo Lee, Sungho Kim, Ung Lee, Jong Min Lee, Chonghun Han

Abstract:

CO2 capture and storage/sequestration (CCS) is a key technology for addressing the global warming issue. This paper proposes an integrated model for the whole chain of CCS, from a power plant to a reservoir. The integrated model is further utilized to determine optimal operating conditions and study responses to various changes in input variables.

Keywords: CCS, carbon dioxide, carbon capture and storage, simulation, optimization

Procedia PDF Downloads 348
5277 Interval Bilevel Linear Fractional Programming

Authors: F. Hamidi, N. Amiri, H. Mishmast Nehi

Abstract:

The Bilevel Programming (BP) model has been presented for a decision-making process that consists of two decision makers in a hierarchical structure. In fact, BP is a model for a static two-person game (the leader player in the upper level and the follower player in the lower level) wherein each player tries to optimize his/her personal objective function under dependent constraints; this game is sequential and non-cooperative. The decision-making variables are divided between the two players, and one's choice affects the other's benefit and choices. In other words, BP consists of two nested optimization problems with two objective functions (upper and lower) where the constraint region of the upper-level problem is implicitly determined by the lower-level problem. In real cases, the coefficients of an optimization problem may not be precise, i.e., they may be intervals. In this paper, we develop an algorithm for solving interval bilevel linear fractional programming problems, that is, bilevel problems in which both objective functions are linear fractional, the coefficients are intervals and the common constraint region is a polyhedron. From the original problem, the best and the worst bilevel linear fractional problems are derived; then, using the extended Charnes and Cooper transformation, each fractional problem can be reduced to a linear problem. The best and the worst optimal values of the leader's objective function can then be found by two algorithms.
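
For reference, a sketch of the classical Charnes-Cooper transformation that underlies the reduction mentioned above (generic notation, assumed rather than taken from the paper). A linear fractional program

    \max_{x}\ \frac{c^{T}x+\alpha}{d^{T}x+\beta}
    \quad \text{s.t.}\quad Ax \le b,\; x \ge 0,\; d^{T}x+\beta > 0

becomes, after the substitution t = 1/(d^T x + \beta) and y = t x, the linear program

    \max_{y,t}\ c^{T}y+\alpha t
    \quad \text{s.t.}\quad Ay - bt \le 0,\; d^{T}y+\beta t = 1,\; y \ge 0,\; t > 0,

from which the original optimum is recovered as x = y/t.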

Keywords: best and worst optimal solutions, bilevel programming, fractional, interval coefficients

Procedia PDF Downloads 444
5276 Predictive Analysis for Big Data: Extension of Classification and Regression Trees Algorithm

Authors: Ameur Abdelkader, Abed Bouarfa Hafida

Abstract:

Since its inception, predictive analysis has revolutionized the IT industry through its robustness and decision-making facilities. It involves the application of a set of data processing techniques and algorithms in order to create predictive models. Its principle is based on finding relationships between the explanatory variables and the predicted variables: past occurrences are exploited to predict and derive the unknown outcome. With the advent of big data, many studies have suggested the use of predictive analytics to process and analyze big data. Nevertheless, they have been curbed by the limits of classical methods of predictive analysis when the amount of data is large. In fact, because of its volume, its nature (semi-structured or unstructured) and its variety, it is impossible to analyze big data efficiently via classical methods of predictive analysis. The authors attribute this weakness to the fact that predictive analysis algorithms do not allow the parallelization and distribution of computation. In this paper, we propose to extend the predictive analysis algorithm Classification And Regression Trees (CART) in order to adapt it for big data analysis. The major changes to this algorithm are presented, and a version of the extended algorithm is then defined to make it applicable to huge quantities of data.

Keywords: predictive analysis, big data, predictive analysis algorithms, CART algorithm

Procedia PDF Downloads 139
5275 An Efficient Process Analysis and Control Method for Tire Mixing Operation

Authors: Hwang Ho Kim, Do Gyun Kim, Jin Young Choi, Sang Chul Park

Abstract:

Since the tire production process is very complicated, company-wide management of it is difficult, necessitating considerable amounts of capital and labor. Thus, productivity should be enhanced and kept competitive by developing and applying effective production plans. Among the major processes for tire manufacturing (mixing, component preparation, building and curing), the mixing process is an essential and important step because the main component of the tire, called the compound, is formed at this step. The compound, a rubber synthesis with various characteristics, plays its own role required for the tire as a finished product. Meanwhile, scheduling the tire mixing process is similar to the flexible job shop scheduling problem (FJSSP) because various kinds of compounds have their own orders of operations, and a set of alternative machines can be used to process each operation. In addition, the setup time required may differ between operations due to the alteration of additives. In other words, each operation of the mixing process requires a different setup time depending on the previous one; this feature, called sequence dependent setup time (SDST), is a very important issue in traditional scheduling problems such as flexible job shop scheduling. However, despite its importance, few research works deal with the tire mixing process. Thus, in this paper, we consider the scheduling problem for the tire mixing process and suggest an efficient particle swarm optimization (PSO) algorithm to minimize the makespan for completing all the required jobs belonging to the process. Specifically, we design a particle encoding scheme for the considered scheduling problem, including a processing sequence for compounds and machine allocation information for each job operation, and a method for generating a tire mixing schedule from a given particle. At each iteration, the coordinates and velocities of the particles are updated, and the current solution is compared with the new solution. This procedure is repeated until a stopping condition is satisfied. The performance of the proposed algorithm is validated through a numerical experiment using some small-sized problem instances representing the tire mixing process. Furthermore, we compare the solution of the proposed algorithm with that obtained by solving a mixed integer linear programming (MILP) model developed in previous research. As the performance measure, we define an error rate which evaluates the difference between the two solutions. As a result, we show that the PSO algorithm proposed in this paper outperforms the MILP model with respect to effectiveness and efficiency. As a direction for future work, we plan to consider scheduling problems in other processes such as building and curing. We can also extend the current work by considering other performance measures, such as weighted makespan, or processing times affected by aging or learning effects.
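
To make the particle-based search concrete, below is a minimal random-key PSO sketch in Python. It is an illustration under stated assumptions, not the paper's implementation: the decoder that would map a particle to machine allocations and sequence-dependent setup times is reduced to a stub with fabricated processing and setup times.

    import numpy as np

    rng = np.random.default_rng(0)
    N_JOBS, SWARM, ITERS = 8, 30, 200
    W, C1, C2 = 0.7, 1.5, 1.5            # inertia and acceleration weights

    def makespan(priority: np.ndarray) -> float:
        """Stub decoder: sort jobs by priority (random-key decoding) and
        evaluate the schedule. A real decoder would allocate machines and
        add sequence-dependent setup times before computing the makespan."""
        order = np.argsort(priority)
        proc = 1.0 + (order + 1) % 3                              # fake processing times
        setup = 0.2 * np.abs(np.diff(order, prepend=order[0]))    # fake SDST
        return float(np.sum(proc + setup))

    x = rng.random((SWARM, N_JOBS))       # particle positions (random keys)
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([makespan(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]

    for _ in range(ITERS):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = W * v + C1 * r1 * (pbest - x) + C2 * r2 * (gbest - x)
        x = x + v
        f = np.array([makespan(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]

    print("best makespan:", pbest_f.min(), "sequence:", np.argsort(gbest))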

Keywords: compound, error rate, flexible job shop scheduling problem, makespan, particle encoding scheme, particle swarm optimization, sequence dependent setup time, tire mixing process

Procedia PDF Downloads 265
5274 Model Updating Based on Modal Parameters Using Hybrid Pattern Search Technique

Authors: N. Guo, C. Xu, Z. C. Yang

Abstract:

In order to ensure the high reliability of an aircraft, accurate structural dynamics analysis has become an indispensable part of aircraft structure design. Therefore, a structural finite element model that can accurately calculate the structural dynamics and their transfer relations is the prerequisite of structural dynamic design. A dynamic finite element model updating method is presented to correct the uncertain parameters of the finite element model of a structure using measured modal parameters. The coordinate modal assurance criterion is used to evaluate the correlation level at each coordinate between the experimental and the analytical mode shapes. The weighted sum of the natural frequency residual and the coordinate modal assurance criterion residual is then used as the objective function. Moreover, the hybrid pattern search (HPS) optimization technique, which synthesizes the advantages of the pattern search (PS) optimization technique and the genetic algorithm (GA), is introduced to solve the dynamic FE model updating problem. A numerical simulation and a model updating experiment for the GARTEUR aircraft model are performed to validate the feasibility and effectiveness of the present dynamic model updating method. The updated results show that the proposed method can successfully correct the erroneous parameters with good robustness.
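
A small Python sketch of an objective of this form is given below. The exact residual definitions and weights are assumptions for illustration, since the abstract does not spell them out:

    import numpy as np

    def comac(phi_exp: np.ndarray, phi_ana: np.ndarray) -> np.ndarray:
        """Coordinate MAC per degree of freedom (rows = DOFs, cols = modes):
        COMAC_i = (sum_j |phi_exp[i,j] * phi_ana[i,j]|)^2 /
                  (sum_j phi_exp[i,j]^2 * sum_j phi_ana[i,j]^2)."""
        num = np.sum(np.abs(phi_exp * phi_ana), axis=1) ** 2
        den = np.sum(phi_exp**2, axis=1) * np.sum(phi_ana**2, axis=1)
        return num / den

    def objective(f_exp, f_ana, phi_exp, phi_ana, w1=1.0, w2=1.0):
        """Weighted sum of relative frequency residuals and COMAC residuals
        (assumed form; the paper gives no explicit formula here)."""
        freq_res = np.sum(((f_exp - f_ana) / f_exp) ** 2)
        comac_res = np.sum(1.0 - comac(phi_exp, phi_ana))
        return w1 * freq_res + w2 * comac_res

The HPS optimizer would then minimize this objective over the uncertain finite element parameters, re-running the analytical modal solution at each evaluation.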

Keywords: model updating, modal parameter, coordinate modal assurance criterion, hybrid genetic/pattern search

Procedia PDF Downloads 159
5273 Stimuli Responsiveness of Crosslinked Poly(2-Hydroxyethyl Methacrylate): Optimization of Parameters by Experimental Design

Authors: Tewfik Bouchaour, Salah Hamri, Yasmina Houda Bendahma, Ulrich Maschke

Abstract:

Stimuli-responsive materials based on UV-crosslinked acrylic polymer networks are fabricated. Among the various kinds of polymeric systems, hydrophilic polymers based on 2-hydroxyethyl methacrylate have been widely studied because of their ability to simulate biological tissues, which leads to many applications. The acrylic polymer network PHEMA, developed by UV photopolymerization, has been used for dye retention. For these so-called smart materials, the properties change in response to an external stimulus. In this contribution, we report the influence of some parameters (initial composition, temperature, and nature of the components) on the properties of the final materials. The optimization of the different parameters is examined by experimental design.

Keywords: UV photo-polymerization, PHEMA, external stimulus, optimization

Procedia PDF Downloads 255
5272 Unknown Groundwater Pollution Source Characterization in Contaminated Mine Sites Using Optimal Monitoring Network Design

Authors: H. K. Esfahani, B. Datta

Abstract:

Groundwater is one of the most important natural resources in many parts of the world; however, it is widely polluted due to human activities. Currently, effective and reliable groundwater management and remediation strategies are obtained using characterization of groundwater pollution sources, where the data measured at monitoring locations are utilized to estimate the unknown pollutant source location and magnitude. However, accurately identifying the characteristics of contaminant sources is a challenging task due to uncertainties in predicting the source flux injection, the hydro-geological and geo-chemical parameters, and the concentration field measurements. Reactive transport of chemical species in contaminated groundwater systems, especially with multiple species, is a complex and highly non-linear geochemical process. Although sufficient concentration measurement data are essential to accurately identify source characteristics, available data are often sparse and limited in quantity. Therefore, this inverse problem of characterizing unknown groundwater pollution sources is often considered ill-posed, complex and non-unique. Different methods have been utilized to identify pollution sources; however, the linked simulation-optimization approach is one effective method for obtaining acceptable results under uncertainties in complex real-life scenarios. In this approach, the numerical flow and contaminant transport simulation models are externally linked to an optimization algorithm, with the objective of minimizing the difference between the measured concentrations and the estimated pollutant concentrations at the observation locations. Concentration measurement data are very important for accurately estimating pollution source properties; therefore, optimal design of the monitoring network is essential to gather adequate measured data at the desired times and locations. Due to budget and physical restrictions, an efficient and effective approach for groundwater pollutant source characterization is to design an optimal monitoring network, especially when only inadequate and arbitrary concentration measurement data are initially available. In this approach, preliminary concentration observation data are utilized for preliminary identification of the source location, magnitude and duration of activity, and these results are utilized for the monitoring network design. Further, feedback information from the monitoring network is used as input for sequential monitoring network design, to improve the identification of the unknown source characteristics. To design an effective monitoring network of observation wells, optimization and interpolation techniques are used. A simulation model should be utilized to accurately describe the aquifer properties in terms of hydro-geochemical parameters and boundary conditions. However, the simulation of the transport processes becomes complex when the pollutants are chemically reactive. A three-dimensional transient flow and reactive contaminant transport process is considered. The proposed methodology uses HYDROGEOCHEM 5.0 (HGCH) as the simulation model for flow and transport processes with multiple chemically reactive species, and Adaptive Simulated Annealing (ASA) as the optimization algorithm in the linked simulation-optimization methodology to identify the unknown source characteristics. Therefore, the aim of the present study is to develop a methodology to optimally design an effective monitoring network for pollution source characterization with reactive species in polluted aquifers. The performance of the developed methodology will be evaluated for an illustrative polluted aquifer site, for example, an abandoned mine site in Queensland, Australia.

Keywords: monitoring network design, source characterization, chemical reactive transport process, contaminated mine site

Procedia PDF Downloads 230
5271 A Study on Improvement of the Torque Ripple and Demagnetization Characteristics of a PMSM

Authors: Yong Min You

Abstract:

Research on the torque ripple of Permanent Magnet Synchronous Motors (PMSMs), which affects the noise and vibration of electric vehicles, has progressed rapidly. There are several ways to reduce torque ripple, such as increasing the number of slots and poles, notching the rotor and stator teeth, and skewing the rotor and stator. However, these conventional methods have disadvantages in terms of material cost and productivity. A sufficient demagnetization characteristic of the PMSM must also be attained for electric vehicle application. Due to the rare-earth supply issue, the demand for Dy-free permanent magnets has been increasing; such magnets can be applied to PMSMs for electric vehicles. Because Dy-free permanent magnets have lower coercivity, the demagnetization characteristic becomes more significant. To improve the torque ripple as well as the demagnetization characteristic, which are significant parameters for electric vehicle application, an unequal air-gap model is proposed for a PMSM. A shape optimization is performed to optimize the design variables of the unequal air-gap model. The optimal design variables are the shape of the unequal air-gap and the angle between the V-shape magnets. The optimization process is performed with Latin Hypercube Sampling (LHS), the Kriging method, and a Genetic Algorithm (GA). Finite element analysis (FEA) is utilized to analyze the torque and demagnetization characteristics. The torque ripple and the demagnetization temperature of the initial 45 kW PMSM model with unequal air-gap are 10% and 146.8 degrees, respectively, which are reaching a critical level for electric vehicle application. Therefore, the unequal air-gap model is proposed, and an optimization process is conducted. Compared to the initial model, the torque ripple of the optimized unequal air-gap model was reduced by 7.7%. In addition, the demagnetization temperature of the optimized model was increased by 1.8% while maintaining the efficiency. These results show the usefulness of a shape-optimized unequal air-gap PMSM in improving the torque ripple and demagnetization temperature for electric vehicles.
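
The LHS-Kriging-GA workflow described above can be sketched in a few lines of Python. This is a hedged illustration: the finite element evaluation is replaced by an analytic stand-in, the two design variables are assumed and normalised to [0, 1], and SciPy's differential evolution stands in for the genetic algorithm.

    import numpy as np
    from scipy.stats import qmc
    from scipy.optimize import differential_evolution
    from sklearn.gaussian_process import GaussianProcessRegressor

    def fea_stand_in(x):
        """Stand-in for a finite element evaluation of torque ripple over two
        assumed design variables (air-gap shape factor, magnet angle)."""
        return (x[0] - 0.6) ** 2 + 0.5 * (x[1] - 0.3) ** 2

    # 1) Latin Hypercube Sampling of the design space
    sampler = qmc.LatinHypercube(d=2, seed=1)
    X = sampler.random(n=20)
    y = np.array([fea_stand_in(x) for x in X])

    # 2) Kriging (Gaussian process) surrogate of the expensive FEA
    surrogate = GaussianProcessRegressor().fit(X, y)

    # 3) Evolutionary search on the cheap surrogate (differential evolution
    #    here stands in for the paper's genetic algorithm)
    res = differential_evolution(
        lambda x: surrogate.predict(x.reshape(1, -1))[0],
        bounds=[(0, 1), (0, 1)], seed=1)
    print("surrogate optimum:", res.x)

In practice the surrogate optimum would be verified with a confirmation FEA run, and the sample set enriched if the surrogate proves inaccurate there.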

Keywords: permanent magnet synchronous motor, optimal design, finite element method, torque ripple

Procedia PDF Downloads 273
5270 Collocation Method Using Quartic B-Splines for Solving the Modified RLW Equation

Authors: A. A. Soliman

Abstract:

The Modified Regularized Long Wave (MRLW) equation is solved numerically using a new algorithm based on the collocation method with quartic B-splines at the mid-knot points as shape functions. In addition, the fourth-order Runge-Kutta method is used to solve the resulting system of first-order ordinary differential equations instead of a finite difference method. Our test problems, including the migration and interaction of solitary waves, are used to validate the algorithm, which is found to be accurate and efficient. The three invariants of the motion are evaluated to determine the conservation properties of the algorithm. The temporal evolution of a Maxwellian initial pulse is then studied.
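
For reference, one common form of the MRLW equation and of the three conserved quantities monitored in such studies is given below; the coefficients vary across the literature, so this is a standard form rather than one taken from the abstract:

    u_t + u_x + 6u^2 u_x - \mu u_{xxt} = 0,
    \qquad
    I_1 = \int u \, dx, \quad
    I_2 = \int \left( u^2 + \mu u_x^2 \right) dx, \quad
    I_3 = \int \left( u^4 - \mu u_x^2 \right) dx .

Tracking I1, I2 and I3 over time is the conservation check the abstract refers to.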

Keywords: collocation method, MRLW equation, quartic B-splines, solitons

Procedia PDF Downloads 302
5269 Influence of Parameters of Modeling and Data Distribution for Optimal Condition on Locally Weighted Projection Regression Method

Authors: Farhad Asadi, Mohammad Javad Mollakazemi, Aref Ghafouri

Abstract:

Recent research in neural network science and neuroscience on modeling complex time series data and statistical learning has focused mostly on learning from high-dimensional input spaces and signals. Local linear models are a strong choice for modeling local nonlinearity in data series. Locally weighted projection regression (LWPR) is a flexible and powerful algorithm for nonlinear approximation in high-dimensional signal spaces. In this paper, different learning scenarios for one- and two-dimensional data series with different distributions are investigated by simulation; noise is then added to the data to produce differently disordered distributions in the time series and to evaluate the algorithm's local prediction of nonlinearity. The performance of the algorithm is then simulated, and its sensitivity to the data distribution, as well as the influence of the algorithm's important local-validity parameter, is explained for sparse data and for different data distributions.
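
The "local validity" parameter discussed above corresponds, in standard LWPR formulations, to the distance metric of each Gaussian receptive field. The Python sketch below illustrates that mechanism only; the incremental projection-regression updates of full LWPR are omitted, and all names and values are illustrative:

    import numpy as np

    def rf_weight(x, c, D):
        """Gaussian receptive field: w = exp(-0.5 (x-c)^T D (x-c)).
        The metric D controls the region of local validity."""
        d = x - c
        return np.exp(-0.5 * d @ D @ d)

    def lwpr_predict(x, centers, models, D):
        """Blend local linear models, weighted by receptive-field activation
        (the projection-regression step of full LWPR is omitted here)."""
        w = np.array([rf_weight(x, c, D) for c in centers])
        y = np.array([b0 + b @ (x - c) for (b0, b), c in zip(models, centers)])
        return np.sum(w * y) / np.sum(w)

    # toy usage: two local models on a 1-D input
    centers = [np.array([0.0]), np.array([1.0])]
    models = [(0.0, np.array([1.0])), (1.0, np.array([-1.0]))]
    D = np.array([[25.0]])
    print(lwpr_predict(np.array([0.4]), centers, models, D))

A larger D narrows each receptive field, making predictions more local but more sensitive to sparse or disordered data, which is exactly the trade-off the paper studies.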

Keywords: local nonlinear estimation, LWPR algorithm, online training method, locally weighted projection regression method

Procedia PDF Downloads 501
5268 Analysis of Diabetes Patients Using Pearson, Cost Optimization, Control Chart Methods

Authors: Devatha Kalyan Kumar, R. Poovarasan

Abstract:

In this paper, we consider certain important factors and health parameters of diabetes patients, especially children with congenital (pediatric) diabetes. Using the three methods above, we assess the importance of each attribute in the dataset, thereby determining the attribute most responsible for, and most highly correlated with, diabetes among young patients. We use the cost optimization, control chart and Spearman methodologies in a real-time application for assessing data efficiency in this diabetes dataset. The Spearman methodology is a correlation methodology, used for example in the software development process to identify the complexity between the various modules of the software; identifying the complexity is important because higher complexity means a higher chance of risk occurring in the software. With the use of control charts, the mean, variance and standard deviation of the data are calculated. With the cost optimization model, we optimize the variables. Hence, we choose the Spearman, control chart and cost optimization methods to assess the data efficiency in diabetes datasets.
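
A short Python sketch of two of the statistical ingredients named above, Spearman rank correlation and control-chart statistics, is shown here on fabricated attribute values (the paper's dataset is not reproduced):

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    # hypothetical patient attributes, for illustration only
    glucose = rng.normal(140, 25, 200)
    hba1c = 0.03 * glucose + rng.normal(0, 0.6, 200)

    # Spearman rank correlation between two attributes
    rho, p = spearmanr(glucose, hba1c)
    print(f"Spearman rho={rho:.2f}, p={p:.3g}")

    # control-chart statistics: mean, variance, and 3-sigma limits
    mean, var, sd = glucose.mean(), glucose.var(), glucose.std()
    ucl, lcl = mean + 3 * sd, mean - 3 * sd
    print(f"mean={mean:.1f}, variance={var:.1f}, UCL={ucl:.1f}, LCL={lcl:.1f}")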

Keywords: correlation, congenital diabetics, linear relationship, monotonic function, ranking samples, pediatric

Procedia PDF Downloads 254
5267 A Hybrid Distributed Algorithm for Solving Job Shop Scheduling Problem

Authors: Aydin Teymourifar, Gurkan Ozturk

Abstract:

In this paper, a distributed hybrid algorithm is proposed for solving the job shop scheduling problem. The suggested method executes different artificial neural networks, heuristics and meta-heuristics simultaneously on more than one machine. The neural networks are used to control the constraints of the problem, the meta-heuristics search the global space, and the heuristics are used to prevent premature convergence. To attain an efficient distributed intelligent method for solving big and distributed job shop scheduling problems, the Apache Spark and Hadoop frameworks are used. New approaches are applied in the algorithm design and implementation steps. A comparison between the proposed algorithm and other efficient algorithms from the literature shows its efficiency: it is able to solve large-size problems in a short time.

Keywords: distributed algorithms, Apache Spark, Hadoop, job shop scheduling, neural network

Procedia PDF Downloads 386
5266 A Simulation Modeling Approach for Optimization of Storage Space Allocation in Container Terminal

Authors: Gamal Abd El-Nasser A. Said, El-Sayed M. El-Horbaty

Abstract:

Container handling problems at container terminals are NP-hard problems. This paper presents an approach using discrete-event simulation modeling to optimize the solution of the storage space allocation problem, taking into account all the various interrelated container terminal handling activities. The proposed approach is applied to real case-study data from the container terminal at Alexandria port. The computational results show the effectiveness of the proposed model for the optimization of storage space allocation in the container terminal, where a 54% reduction in container handling time in the port is achieved.

Keywords: container terminal, discrete-event simulation, optimization, storage space allocation

Procedia PDF Downloads 323
5265 Robust Data Image Watermarking for Data Security

Authors: Harsh Vikram Singh, Ankur Rai, Anand Mohan

Abstract:

In this paper, we propose a secure and robust data hiding algorithm based on the DCT, the Arnold transform and a chaotic sequence. The watermark image is scrambled by the Arnold cat map to increase its security, and the chaotic map is then used to spread the watermark signal in the middle band of the DCT coefficients of the cover image. The chaotic map can be used as a pseudo-random generator for digital data hiding, to increase security and robustness. The robustness and imperceptibility of the proposed algorithm have been evaluated using the bit error rate (BER), normalized correlation (NC) and peak signal-to-noise ratio (PSNR) for different watermark and cover images, such as Lena, Girl and Tank, and for different gain factors. We use a binary logo image and a text image as watermarks. The experimental results demonstrate that the proposed algorithm achieves higher security and robustness against JPEG compression, as well as other attacks such as the addition of noise, low-pass filtering and cropping, compared to other existing algorithms using DCT coefficients. Moreover, the original cover image is not needed to recover the watermark in the proposed algorithm.
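
The scrambling step relies on the Arnold cat map, an area-preserving permutation of pixel coordinates. A minimal NumPy sketch (illustrative, not the paper's code) is:

    import numpy as np

    def arnold_scramble(img: np.ndarray, iterations: int) -> np.ndarray:
        """Arnold cat map on a square N x N image:
        (x, y) -> (x + y, x + 2y) mod N. The map is periodic, so the
        watermark can be recovered by iterating up to the period, or by
        applying the inverse map (x, y) -> (2x - y, y - x) mod N."""
        n = img.shape[0]
        assert img.shape[0] == img.shape[1], "cat map needs a square image"
        out = img.copy()
        for _ in range(iterations):
            xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            nxt = np.empty_like(out)
            nxt[(xs + ys) % n, (xs + 2 * ys) % n] = out[xs, ys]
            out = nxt
        return out

    wm = (np.arange(64 * 64).reshape(64, 64) % 2).astype(np.uint8)  # toy logo
    scrambled = arnold_scramble(wm, 10)

Because the map is a bijection of the pixel grid, scrambling loses no information; it only destroys spatial coherence before the watermark is spread over the DCT coefficients.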

Keywords: data hiding, watermarking, DCT, chaotic sequence, Arnold transform

Procedia PDF Downloads 513
5264 Wait-Optimized Scheduler Algorithm for Efficient Process Scheduling in Computer Systems

Authors: Md Habibur Rahman, Jaeho Kim

Abstract:

Efficient process scheduling is a crucial factor in ensuring optimal system performance and resource utilization in computer systems. While various algorithms have been proposed over the years, there are still limits to their effectiveness. This paper introduces a new Wait-Optimized Scheduler (WOS) algorithm that aims to minimize process waiting time by dividing processes into two layers and considering both processing time and waiting time. The WOS algorithm is non-preemptive and prioritizes processes with the shortest WOS value. In the first layer, each process runs for a predetermined duration, and any unfinished process is subsequently moved to the second layer, resulting in a decrease in response time. Whenever the first layer is free, or the number of processes in the second layer is twice that of the first layer, the algorithm sorts all the processes in the second layer by their remaining time minus waiting time and sends one process to the first layer to run. This ensures that all processes eventually run, optimizing waiting time. To evaluate the performance of the WOS algorithm, we conducted experiments comparing it with traditional scheduling algorithms such as First-Come-First-Serve (FCFS) and Shortest-Job-First (SJF). The results showed that the WOS algorithm outperformed the traditional algorithms in reducing the waiting time of processes, particularly in scenarios with a large number of short tasks with long wait times. Our study highlights the effectiveness of the WOS algorithm in improving process scheduling efficiency in computer systems. By reducing process waiting time, the WOS algorithm can improve system performance and resource utilization. The findings of this study provide valuable insights for researchers and practitioners in developing and implementing efficient process scheduling algorithms.
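
Below is a simplified Python reading of the two-layer mechanism described above. It is an interpretation, not the authors' code: the quantum value is assumed, all processes are taken to arrive at time zero, and the "twice the first layer" trigger is reduced to draining layer 2 whenever layer 1 is empty.

    from collections import deque

    QUANTUM = 4   # predetermined first-layer run duration (assumed value)

    def wos_schedule(bursts):
        """Simplified two-layer scheme: layer 1 runs each process
        non-preemptively for QUANTUM time units; unfinished processes drop
        to layer 2, which is drained in order of (remaining - waiting)."""
        clock, completion = 0, {}
        layer1 = deque((pid, t) for pid, t in enumerate(bursts))
        layer2 = []   # entries: (pid, remaining, time it entered layer 2)
        while layer1 or layer2:
            if not layer1:
                layer2.sort(key=lambda p: p[1] - (clock - p[2]))
                pid, rem, _ = layer2.pop(0)
                layer1.append((pid, rem))
            pid, rem = layer1.popleft()
            run = min(QUANTUM, rem)
            clock += run
            if rem > run:
                layer2.append((pid, rem - run, clock))
            else:
                completion[pid] = clock
        # waiting time = completion - burst, assuming all arrive at t = 0
        return {pid: completion[pid] - bursts[pid] for pid in completion}

    print(wos_schedule([3, 17, 5, 9]))   # per-process waiting times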

Keywords: process scheduling, wait-optimized scheduler, response time, non-preemptive, waiting time, traditional scheduling algorithms, first-come-first-serve, shortest-job-first, system performance, resource utilization

Procedia PDF Downloads 90
5263 Optimization of Agricultural Water Demand Using a Hybrid Model of Dynamic Programming and Neural Networks: A Case Study of Algeria

Authors: M. Boudjerda, B. Touaibia, M. K. Mihoubi

Abstract:

In Algeria, agricultural irrigation is the primary water-consuming sector, followed by the domestic and industrial sectors. Economic development over the last decade has weighed heavily on water resources, which are relatively limited and gradually decreasing, to the detriment of agriculture. The research presented in this paper focuses on the optimization of irrigation water demand. The Dynamic Programming-Neural Network (DPNN) method is applied to investigate reservoir optimization. The optimal operation rule is formulated to minimize the gap between water release and irrigation water demand. As a case study, the reservoir system of the Foum El-Gherza dam in the south of Algeria has been selected to examine our proposed optimization model. The application of the DPNN method allowed the satisfaction rate (SR) to be increased from 12.32% to 55%. In addition, the generated operation rule showed more reliable and resilient operation for the examined case study.

Keywords: water management, agricultural demand, dam and reservoir operation, Foum el-Gherza dam, dynamic programming, artificial neural network

Procedia PDF Downloads 114
5262 Multiple Query Optimization in Wireless Sensor Networks Using Data Correlation

Authors: Elaheh Vaezpour

Abstract:

Data sensing in wireless sensor networks is performed by users declaring queries over the network. In many applications of wireless sensor networks, many users send queries to the network simultaneously. If the queries are processed separately, the network's energy consumption increases significantly. Therefore, it is very important to aggregate the queries before sending them to the network. In this paper, we propose a multiple query optimization framework based on the physical and temporal correlation of the sensors. In the proposed method, queries are merged and sent to the network by considering the correlation among the sensors, in order to reduce the communication cost between the sensors and the base station.

Keywords: wireless sensor networks, multiple query optimization, data correlation, reducing energy consumption

Procedia PDF Downloads 334
5261 An Efficient Strategy for Relay Selection in Multi-Hop Communication

Authors: Jung-In Baik, Seung-Jun Yu, Young-Min Ko, Hyoung-Kyu Song

Abstract:

This paper proposes an efficient relaying algorithm that obtains diversity to improve the reliability of a signal. The algorithm achieves time or space diversity gain by transmitting multiple versions of the same signal through two routes. Relays are placed between a source and a destination. The routes between the source and the destination are set adaptively in order to deal with different channels and noise. The routes consist of one or more relays, and the source transmits its signal to the destination through them. The signals from the relays are combined and detected at the destination. The proposed algorithm provides better performance than conventional algorithms in terms of bit error rate (BER).

Keywords: multi-hop, OFDM, relay, relay selection

Procedia PDF Downloads 443
5260 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore

Authors: Ronal Muresano, Andrea Pagano

Abstract:

Nowadays, mathematical and statistical applications are developed with ever more complexity and accuracy. However, this precision and complexity mean that applications need more computational power in order to be executed faster. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications, as they allow more parallelism inside a node. However, taking advantage of this parallelism is not an easy task, because one has to deal with problems such as core communication, data locality, memory sizes (cache and RAM), synchronization, data dependencies in the model, etc. These issues become more important when we wish to improve the application's performance and scalability. Hence, this paper describes an optimization method developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool developed by the European Commission, based on analyzing the application's weaknesses in order to exploit the advantages of the multicore. All these improvements are done in an automatic and transparent manner, with the aim of improving the performance metrics of the tool. Finally, experimental evaluations show the effectiveness of the new optimized version, in which a considerable improvement in execution time has been achieved: in the best case tested, the time was reduced by around 96% between the original serial version and the automatic parallel version.

Keywords: algorithm optimization, bank failures, OpenMP, parallel techniques, statistical tool

Procedia PDF Downloads 366
5259 An Integration of Life Cycle Assessment and Techno-Economic Optimization in the Supply Chains

Authors: Yohanes Kristianto

Abstract:

The objective of this paper is to compose a sustainable supply chain that integrates product, process and network design. An integrated life cycle assessment and techno-economic optimization is proposed that might deliver more economically feasible operations, minimize environmental impacts and maximize social contributions. A closed-loop economy of the supply chain is achieved by reusing waste as raw material for final products. The supply chain provides societal benefit by absorbing waste as a source of raw material and by opening new work opportunities. A case study of an ethanol supply chain based on rice straw is considered. The modeling results show that optimization within the scope of LCA is capable of minimizing CO₂ emissions as well as energy and utility consumption, thus enhancing raw material utilization. Furthermore, the supply chain is capable of contributing to the local economy through job creation. While the model is quite comprehensive, future research on energy integration and global sustainability is recommended.

Keywords: life cycle assessment, techno-economic optimization, sustainable supply chains, closed loop economy

Procedia PDF Downloads 149
5258 Design of PSS and SVC to Improve Power System Stability

Authors: Mahmoud Samkan

Abstract:

In this paper, the design and assessment of a new coordination between Power System Stabilizers (PSSs) and a Static Var Compensator (SVC) in a multimachine power system via a statistical method are proposed. The coordinated design problem of PSSs and SVC over a wide range of loading conditions is handled as an optimization problem. Bacterial Swarming Optimization (BSO), which synergistically couples Bacterial Foraging (BF) with Particle Swarm Optimization (PSO), is employed to seek the optimal controller parameters. By minimizing the proposed objective function, in which the speed deviations between generators are involved, the stability performance of the system is enhanced. To compare the capabilities of the PSS and the SVC, both are designed independently, and then in a coordinated manner. Simultaneous tuning of the BSO-based coordinated controller gives robust damping performance over a wide range of operating conditions and large disturbances, compared to the optimized PSS controller based on BSO (BSOPSS) and the optimized SVC controller based on BSO (BSOSVC). Moreover, a statistical T-test is executed to validate the robustness of the coordinated controller versus the uncoordinated ones.

Keywords: SVC, PSSs, multimachine power system, coordinated design, bacterial swarming optimization, statistical assessment

Procedia PDF Downloads 374
5257 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation Using Physics-Informed Neural Network

Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy

Abstract:

The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast calculation speed and high precision of modern computing systems. We construct the PINN based on a strong universal approximation theorem, apply the initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and choose the optimization algorithms adaptive moment estimation (ADAM) and Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) to optimize the learnable parameters of the neural network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of adjustable weight coefficients in the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving the partial differential equations appearing in nonlinear optics will be useful in studying various optical phenomena.
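
The weighted-loss idea can be sketched compactly in PyTorch. The snippet below is an illustration under stated assumptions, not the authors' code: the Fokas-Lenells residual (a complex-valued equation) is replaced by a simple placeholder PDE residual, the weights and network sizes are arbitrary, and only the ADAM stage is shown.

    import torch

    net = torch.nn.Sequential(
        torch.nn.Linear(2, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1))

    def pde_residual(xt):
        """Placeholder residual (here: u_t + u * u_x = 0); the true FLE
        residual is complex-valued and more involved. Column 0 of xt is
        x, column 1 is t."""
        xt = xt.requires_grad_(True)
        u = net(xt)
        grads = torch.autograd.grad(u, xt, torch.ones_like(u),
                                    create_graph=True)[0]
        u_x, u_t = grads[:, :1], grads[:, 1:]
        return u_t + u * u_x

    # collocation and initial/boundary points (toy data)
    xt_f = torch.rand(256, 2)                                # interior points
    xt_b = torch.rand(64, 2); u_b = torch.sin(xt_b[:, :1])   # toy IC/BC data

    w_f, w_b = 1.0, 10.0          # adjustable loss weights, as in the paper
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(2000):
        opt.zero_grad()
        loss = (w_f * pde_residual(xt_f).pow(2).mean()
                + w_b * (net(xt_b) - u_b).pow(2).mean())
        loss.backward()
        opt.step()
    # a second stage with torch.optim.LBFGS typically refines the ADAM result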

Keywords: deep learning, optical soliton, physics informed neural network, partial differential equation

Procedia PDF Downloads 69
5256 Optimization Studies on Biosorption of Ni(II) and Cd(II) from Wastewater Using Pseudomonas putida in a Packed Bed Bioreactor

Authors: K. Narasimhulu, Y. Pydi Setty

Abstract:

The objective of the present study is the optimization of process parameters in the biosorption of Ni(II) and Cd(II) ions by Pseudomonas putida, using Response Surface Methodology (RSM) in a packed bed bioreactor. The experimental data were also tested against theoretical models to find the best fit. The present paper elucidates RSM as an efficient approach for predictive model building and for the optimization of Ni(II) and Cd(II) biosorption by Pseudomonas putida. In the packed bed biosorption studies, comparing the breakthrough curves of Ni(II) and Cd(II) for agar-immobilized and PAA-immobilized Pseudomonas putida at the optimum conditions of a flow rate of 300 mL/h, an initial metal ion concentration of 100 mg/L and a bed height of 20 cm with 12 g of biosorbent, it was found that the agar-immobilized Pseudomonas putida showed the maximum percent biosorption, with bed saturation occurring at 20 minutes. The optimization results for Ni(II) and Cd(II) obtained from the Design Expert software were a bed height of 19.93 cm, an initial metal ion concentration of 103.85 mg/L and a flow rate of 310.57 mL/h. The percent biosorption of Ni(II) and Cd(II) is 87.2% and 88.2%, respectively. The predicted optimized parameters are in agreement with the experimental results.

Keywords: packed bed bioreactor, response surface methodology, Pseudomonas putida, biosorption, wastewater

Procedia PDF Downloads 451
5255 A Fast Parallel and Distributed Type-2 Fuzzy Algorithm Based on Cooperative Mobile Agents Model for High Performance Image Processing

Authors: Fatéma Zahra Benchara, Mohamed Youssfi, Omar Bouattane, Hassan Ouajji, Mohamed Ouadi Bensalah

Abstract:

The aim of this paper is to present a distributed implementation of the Type-2 Fuzzy algorithm in a parallel and distributed computing environment based on mobile agents. The proposed algorithm is implemented on an SPMD (Single Program Multiple Data) architecture based on cooperative mobile agents following the AVPE (Agent Virtual Processing Element) model, in order to provide the processing resources needed for big data image segmentation. In this work, we focus on applying the algorithm to process large MRI (Magnetic Resonance Imaging) images of size (m x n). An image is encapsulated in the mobile agent team leader and split into (m x n) pixels, one per AVPE. Each AVPE performs and exchanges segmentation results and maintains asynchronous communication with its team leader until the algorithm converges. Some interesting experimental results are obtained in terms of the accuracy and efficiency of the proposed implementation, thanks to the several useful capabilities of mobile agents introduced in this distributed computational model.

Keywords: distributed type-2 fuzzy algorithm, image processing, mobile agents, parallel and distributed computing

Procedia PDF Downloads 426
5254 Novel Algorithm for Restoration of Retina Images

Authors: P. Subbuthai, S. Muruganand

Abstract:

Diabetic retinopathy is a complicated disease caused by changes in the blood vessels of the retina. Retina images acquired through a fundus camera sometimes exhibit poor contrast and noise, and because of this noise, the detection of blood vessels in the retina is very complicated. Preprocessing is therefore needed; in this paper, a novel algorithm is implemented to remove noisy pixels from the retina image. The proposed algorithm is an extended median filter, and it is applied to the green channel of the retina image because vessels in the green channel are brighter than the background. The proposed extended median filter is compared with the existing standard median filter using performance metrics such as PSNR, MSE and RMSE. Experimental results show that the proposed extended median filter algorithm gives better results than the existing standard median filter in terms of noise suppression and detail preservation.
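
For orientation, the sketch below shows the standard-median-filter baseline the paper compares against, applied to the green channel with PSNR as the metric. The image and noise are fabricated, and the authors' extended median filter itself is not reproduced, since the abstract does not define its exact rule:

    import numpy as np
    from scipy.ndimage import median_filter

    def psnr(a: np.ndarray, b: np.ndarray) -> float:
        mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
        return 10 * np.log10(255.0**2 / mse) if mse > 0 else float("inf")

    # toy stand-in for a fundus image: (H, W, 3) uint8 array
    rgb = np.random.default_rng(0).integers(0, 256, (256, 256, 3),
                                            dtype=np.uint8)
    green = rgb[:, :, 1]              # channel with the best vessel contrast

    noisy = green.copy()              # add salt-and-pepper noise
    mask = np.random.default_rng(1).random(green.shape)
    noisy[mask < 0.05] = 0
    noisy[mask > 0.95] = 255

    denoised = median_filter(noisy, size=3)   # standard 3x3 median filter
    print("PSNR noisy   :", round(psnr(green, noisy), 2))
    print("PSNR denoised:", round(psnr(green, denoised), 2))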

Keywords: fundus retina image, diabetic retinopathy, median filter, microaneurysms, exudates

Procedia PDF Downloads 340
5253 Evolving Convolutional Filter Using Genetic Algorithm for Image Classification

Authors: Rujia Chen, Ajit Narayanan

Abstract:

Convolutional neural networks (CNNs), as typically applied in deep learning, use layer-wise backpropagation (BP) to construct filters and kernels for feature extraction. Such filters are 2D or 3D groups of weights for constructing feature maps at subsequent layers of the CNN and are shared across the entire input. BP, as a gradient descent algorithm, has well-known problems of getting stuck at local optima. The use of genetic algorithms (GAs) for evolving weights between layers of standard artificial neural networks (ANNs) is a well-established area of neuroevolution. In particular, the use of crossover techniques when optimizing weights can help to overcome problems of local optima. However, the application of GAs to evolving the weights of filters and kernels in CNNs is not yet an established area of neuroevolution. In this paper, a GA-based filter development algorithm is proposed. The results of the proof-of-concept experiments described in this paper show that the proposed GA algorithm can find filter weights through evolutionary techniques rather than BP learning. For some simple classification tasks like geometric shape recognition, the proposed algorithm can achieve 100% accuracy. The results for MNIST classification, while not as good as those achievable through standard filter learning via BP, show that filter and kernel evolution warrants further investigation as a new subarea of neuroevolution for deep architectures.
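
A toy sketch of evolving a convolutional filter with selection, crossover and mutation (no BP) is given below. Everything here is illustrative: the fitness simply matches a Sobel target's responses on random images, rather than performing the paper's shape-recognition or MNIST classification.

    import numpy as np

    rng = np.random.default_rng(0)
    target = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # Sobel
    images = rng.random((8, 8, 8))

    def conv2d_valid(img, k):
        h, w = img.shape[0] - 2, img.shape[1] - 2
        return np.array([[np.sum(img[i:i+3, j:j+3] * k) for j in range(w)]
                         for i in range(h)])

    target_resp = [conv2d_valid(im, target) for im in images]

    def fitness(k):
        """Negative mean squared difference from the target responses."""
        err = [np.mean((conv2d_valid(im, k) - tr) ** 2)
               for im, tr in zip(images, target_resp)]
        return -float(np.mean(err))

    pop = rng.normal(0, 1, (20, 3, 3))          # population of 3x3 filters
    for gen in range(40):
        scores = np.array([fitness(k) for k in pop])
        elite = pop[np.argsort(scores)[-10:]]   # keep the 10 best
        # uniform crossover of random elite parents, plus Gaussian mutation
        pa = elite[rng.integers(0, 10, 20)]
        pb = elite[rng.integers(0, 10, 20)]
        mask = rng.random((20, 3, 3)) < 0.5
        pop = np.where(mask, pa, pb) + rng.normal(0, 0.05, (20, 3, 3))

    print("best fitness:", max(fitness(k) for k in pop))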

Keywords: neuroevolution, convolutional neural network, genetic algorithm, filters, kernels

Procedia PDF Downloads 185
5252 Two-stage Robust Optimization for Collaborative Distribution Network Design Under Uncertainty

Authors: Reza Alikhani

Abstract:

This research focuses on the establishment of horizontal cooperation among companies to enhance their operational efficiency and competitiveness. The study proposes an approach to horizontal collaboration, called coalition configuration, which involves partnering companies sharing distribution centers in a network design problem. The paper investigates which coalition should be formed in each distribution center to minimize the total cost of the network. Moreover, potential uncertainties, such as operational and disruption risks, are considered during the collaborative design phase. To address this problem, a two-stage robust optimization model for collaborative distribution network design under surging demand and facility disruptions is presented, along with a column-and-constraint generation algorithm to obtain exact solutions tailored to the proposed formulation. Extensive numerical experiments are conducted to analyze solutions obtained by the model in various scenarios, including decisions ranging from fully centralized to fully decentralized settings, collaborative versus non-collaborative approaches, and different amounts of uncertainty budgets. The results show that the coalition formation mechanism proposes some solutions that are competitive with the savings of the grand coalition. The research also highlights that collaboration increases network flexibility and resilience while reducing costs associated with demand and capacity uncertainties.

Keywords: logistics, warehouse sharing, robust facility location, collaboration for resilience

Procedia PDF Downloads 68