Search results for: fruit fly optimization algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6694

4894 Flexible Design Solutions for Complex Free-Form Geometries Aimed at Optimizing Performance and Resource Consumption

Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu

Abstract:

By using smart digital tools such as generative design (GD) and digital fabrication (DF), pressing problems of resource optimization (materials, energy, time) can be solved and free-form applications or products can be created. In the new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps of the design procedure for a free-form architectural object - a column-type object with connections forming an adaptive 3D surface - using the parametric design methodology and exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying the parameter values, and the relationships between the forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems, cellular automata, genetic algorithms or swarm intelligence, each of which has limitations that make it applicable only in certain cases. The paper presents the design process stages and the shape-grammar-type algorithm. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process that creates many 3D spatial forms, using an algorithm conceived to apply its generating logic to different input geometries. Once the algorithm is realized, it can be applied repeatedly to generate the geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected. The virtually endless generative capacity of the codes and algorithms used in digital design offers various conceptual possibilities and optimal solutions for the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process) and unique geometric models of high performance.
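
The generative principle described above (apply a generating algorithm to an input geometry, produce many candidate forms, then filter them by a selection criterion) can be sketched in a few lines of code. The snippet below is a minimal illustration, not the authors' shape-grammar implementation; the parametric column profile, its parameters and the "technical criterion" (surface area per unit of enclosed volume) are assumptions made only for the example.

```python
import math
import random

def generate_column(radius, twist, taper, n_levels=20):
    """Generate a simple parametric 'column' as a list of ring radii/rotations.
    This stands in for the shape-grammar rules applied to an input geometry."""
    return [(radius * (1.0 - taper * k / n_levels),   # ring radius
             twist * k / n_levels)                    # ring rotation (rad)
            for k in range(n_levels + 1)]

def evaluate(column, level_height=0.5):
    """Illustrative selection criterion: lateral surface area per enclosed volume."""
    area = sum(2 * math.pi * r * level_height for r, _ in column)
    volume = sum(math.pi * r ** 2 * level_height for r, _ in column)
    return area / volume

# Generative loop: create many candidate forms from random parameter values,
# then keep the one that optimises the (technical) selection criterion.
random.seed(1)
candidates = [generate_column(radius=random.uniform(0.5, 2.0),
                              twist=random.uniform(0.0, math.pi),
                              taper=random.uniform(0.0, 0.8))
              for _ in range(200)]
best = min(candidates, key=evaluate)
print("best area/volume ratio:", round(evaluate(best), 3))
```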

Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture

Procedia PDF Downloads 374
4893 Preparation of Chemically Activated Carbon from Waste Tire Char for Lead Ions Adsorption and Optimization Using Response Surface Methodology

Authors: Lucky Malise, Hilary Rutto, Tumisang Seodigeng

Abstract:

Tires are essential to the automobile industry; however, the disposal of worn-out rubber tires poses a serious environmental problem. The main aim of this study was to prepare activated carbon from waste tire pyrolysis char by impregnating KOH on the pyrolytic char. Adsorption studies of lead onto the chemically activated carbon were carried out using response surface methodology. The effects of process parameters such as temperature (°C), adsorbent dosage (g/1000 ml), pH, contact time (minutes) and initial lead concentration (mg/l) on the adsorption capacity were investigated. It was found that the adsorption capacity increases with an increase in contact time, pH and temperature, and decreases with an increase in lead concentration. Optimization of the process variables was done using a numerical optimization method. Fourier transform infrared (FTIR) spectroscopy, X-ray diffraction (XRD), thermogravimetric analysis (TGA) and scanning electron microscopy were used to characterize the pyrolytic carbon char before and after activation. The optimum points of 1 g/100 ml adsorbent dosage, solution pH of 7, 115.2 min contact time, 100 mg/l initial metal concentration and 25°C temperature were obtained to achieve the highest adsorption capacity of 93.176 mg/g with a desirability of 0.994. The FTIR and TGA analyses show the presence of oxygen-containing functional groups on the surface of the activated carbon produced and that the weight loss taking place during the activation step is small.
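
Response surface methodology of the kind used here fits a second-order polynomial to the responses of a central composite design and then optimizes that polynomial numerically. The sketch below shows the principle for two of the factors only (pH and contact time); the data points and fitted coefficients are invented for illustration and are not the measured adsorption results.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical CCD-style runs: (pH, contact time [min]) -> adsorption capacity [mg/g]
X = np.array([[4, 60], [4, 120], [7, 60], [7, 120], [10, 60],
              [10, 120], [5.5, 90], [8.5, 90], [7, 90], [7, 90]])
y = np.array([55.0, 63.0, 80.0, 90.0, 70.0, 76.0, 72.0, 82.0, 88.0, 87.0])

def quad_terms(x):
    """Second-order RSM model terms: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

# Fit the quadratic response surface by ordinary least squares.
A = np.array([quad_terms(x) for x in X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Numerical optimisation of the fitted surface within the experimental region.
res = minimize(lambda x: -quad_terms(x) @ coeffs, x0=[7.0, 90.0],
               bounds=[(4, 10), (60, 120)])
print("predicted optimum (pH, time):", res.x, "capacity:", -res.fun)
```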

Keywords: waste tire pyrolysis char, chemical activation, central composite design (CCD), adsorption capacity, numerical optimization

Procedia PDF Downloads 224
4892 Control of a Quadcopter Using Genetic Algorithm Methods

Authors: Mostafa Mjahed

Abstract:

This paper concerns the control of a nonlinear system using two different methods: a reference model and a genetic algorithm. The quadcopter is a nonlinear unstable system belonging to the family of aerial robots. It consists of four rotors placed at the ends of a cross, with the control circuit occupying the center of the cross. Its motion is governed by six degrees of freedom: three rotations around the three axes (roll, pitch and yaw) and three spatial translations. The control of such a system is complex because of the nonlinearity of its dynamic representation and the number of parameters it involves. Numerous studies have been developed to model and stabilize such systems. The classical PID and LQ correction methods are widely used. While these have the advantage of simplicity because they are linear, they have the drawback of requiring a linear model for synthesis. They also lead to complex control laws, because the latter must be extended over the entire flight envelope of the quadcopter. Note that, while classical design methods are widely used to control aeronautical systems, artificial intelligence methods such as genetic algorithms receive little attention. In this paper, we compare two PID design methods. First, the parameters of the PID are calculated according to the reference model. In a second phase, these parameters are established using genetic algorithms. By reference model, we mean that the corrected system behaves according to a reference system imposed by certain specifications: settling time, zero overshoot, etc. Inspired by Darwin's theory of natural evolution and the survival of the fittest, John Holland developed this evolutionary algorithm. A genetic algorithm (GA) possesses three basic operators: selection, crossover and mutation. We start the iterations with an initial population, and each member of this population is evaluated through a fitness function. Our purpose is to correct the behavior of the quadcopter around the three axes (roll, pitch and yaw) with three PD controllers; for the altitude, we adopt a PID controller.
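
As a rough illustration of the second design phase, the snippet below tunes PD gains with a small genetic algorithm on a simplified, linearised single-axis model (a double integrator standing in for the roll dynamics). The plant model, the fitness (integral of time-weighted absolute error on a step response) and the GA settings are assumptions for the example, not the model or specifications used in the paper.

```python
import random

def simulate_step(kp, kd, dt=0.01, t_end=5.0):
    """Step response of a PD-controlled double integrator (theta'' = u)."""
    theta, omega, itae, t = 0.0, 0.0, 0.0, 0.0
    ref = 1.0
    while t < t_end:
        error = ref - theta
        u = kp * error - kd * omega          # PD control law
        omega += u * dt
        theta += omega * dt
        t += dt
        itae += t * abs(error) * dt          # time-weighted absolute error
    return itae

def fitness(gains):
    return 1.0 / (1e-6 + simulate_step(*gains))

random.seed(0)
POP, GEN = 30, 40
pop = [[random.uniform(0, 20), random.uniform(0, 10)] for _ in range(POP)]
for _ in range(GEN):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                 # selection: keep the best half
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        child = [(x + y) / 2 for x, y in zip(a, b)]          # crossover
        if random.random() < 0.2:                            # mutation
            child[random.randrange(2)] *= random.uniform(0.5, 1.5)
        children.append(child)
    pop = parents + children
best = max(pop, key=fitness)
print("best PD gains (kp, kd):", [round(g, 2) for g in best])
```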

Keywords: quadcopter, genetic algorithm, PID, fitness, model, control, nonlinear system

Procedia PDF Downloads 431
4891 Optimal Portfolio Selection under Treynor Ratio Using Genetic Algorithms

Authors: Imad Zeyad Ramadan

Abstract:

In this paper, a genetic algorithm was developed to construct the optimal portfolio based on the Treynor method. The GA maximizes the Treynor ratio under a budget constraint to select the best allocation of the budget among the companies in the portfolio. The results show that the GA was able to construct a conservative portfolio which includes companies from the three sectors. This indicates that the GA reduced the risk on the investor, as it chose some companies with positive risks (moving with the market) and some with negative risks (moving against the market).
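
The Treynor ratio of a portfolio is (R_p − R_f)/β_p, i.e., excess return per unit of systematic risk. A minimal GA of the kind described, maximising this ratio over budget weights that sum to one, might look like the sketch below; the expected returns, betas and risk-free rate are illustrative placeholders, not the study's data.

```python
import random

returns = [0.12, 0.08, 0.15, 0.05, 0.10]   # hypothetical expected returns
betas   = [1.20, 0.60, 1.50, -0.30, 0.90]  # hypothetical betas (one negative)
rf = 0.03                                   # hypothetical risk-free rate

def normalize(w):
    s = sum(w)
    return [x / s for x in w]               # budget constraint: weights sum to 1

def treynor(w):
    rp = sum(wi * ri for wi, ri in zip(w, returns))
    bp = sum(wi * bi for wi, bi in zip(w, betas))
    return (rp - rf) / bp if abs(bp) > 1e-9 else -1e9

random.seed(42)
POP, GEN, N = 50, 100, len(returns)
pop = [normalize([random.random() for _ in range(N)]) for _ in range(POP)]
for _ in range(GEN):
    pop.sort(key=treynor, reverse=True)
    parents = pop[:POP // 2]                                 # selection
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        child = [(x + y) / 2 for x, y in zip(a, b)]          # crossover
        i = random.randrange(N)
        child[i] = max(0.0, child[i] + random.gauss(0, 0.05))  # mutation
        children.append(normalize(child))
    pop = parents + children
best = max(pop, key=treynor)
print("weights:", [round(w, 3) for w in best], "Treynor:", round(treynor(best), 4))
```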

Keywords: optimization, genetic algorithm, portfolio selection, Treynor method

Procedia PDF Downloads 445
4890 Design and Analysis of Solar Powered Plane

Authors: Malarvizhi, Venkatesan

Abstract:

This paper summarizes the design and optimization of a solar-powered unmanned aerial vehicle. The purpose of this research is to increase the range and endurance. It can be used for environmental research, aerial photography, search and rescue missions, and surveillance on other planets. The ultimate aim of this research is to design and analyze the solar-powered plane in order to determine lift, drag and other parameters using CFD analysis. Similarly, a numerical investigation has been done to compare the results for the Earth's atmosphere with those for the Martian atmosphere. This approach is made to check whether the solar-powered plane can glide on the planet Mars by using renewable energy (i.e., solar energy).

Keywords: optimization, range, endurance, surveillance, lift and drag parameters

Procedia PDF Downloads 458
4889 Optimization and Feasibility Analysis of a PV/Wind/Battery Hybrid Energy Conversion

Authors: Doaa M. Atia, Faten H. Fahmy, Ninet M. A. El-Rahman, Hassan T. Dorra

Abstract:

In this paper, the optimum design of a renewable energy system powering an aquaculture pond was determined. The Hybrid Optimization Model for Electric Renewables (HOMER) software, developed by the U.S. National Renewable Energy Laboratory (NREL), is used for analyzing the feasibility of the stand-alone and hybrid systems in this study. The HOMER program determines whether the renewable energy resources satisfy the hourly electric demand. The program calculates the energy balance for each of the 8760 hours in a year to simulate the operation of the system. This optimization compares the demand for electrical energy for each hour of the year with the energy supplied by the system for that hour and calculates the relevant energy flow for each component in the model. The essential principle is to minimize the total system cost while HOMER ensures control of the system. Moreover, the feasibility analysis of the energy system is also studied. Wind speed, solar irradiance, interest rate and capacity shortage are the parameters taken into consideration. The simulation results indicate that the hybrid system is the best choice in this study, yielding a lower net present cost. Thus, it provides higher system performance than PV or wind stand-alone systems.
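
The hourly energy-balance principle applied here (compare the load of each of the 8760 hours with the renewable supply, and charge or discharge the battery with the difference) can be sketched as follows. This is only an illustration of the balance logic with made-up profiles; it is not HOMER's dispatch algorithm or its cost model.

```python
import math
import random

random.seed(0)
HOURS = 8760
battery_capacity = 50.0            # kWh, assumed
soc = 0.5 * battery_capacity       # initial battery state of charge [kWh]
unmet, total_load = 0.0, 0.0

for h in range(HOURS):
    hour_of_day = h % 24
    # Made-up hourly profiles standing in for measured resource and load data.
    pv   = max(0.0, 8.0 * math.sin(math.pi * (hour_of_day - 6) / 12))  # kW
    wind = 3.0 + 2.0 * random.random()                                  # kW
    load = 6.0 + (3.0 if 12 <= hour_of_day <= 20 else 0.0)              # kW
    total_load += load

    surplus = pv + wind - load          # hourly energy balance [kWh]
    if surplus >= 0:
        soc = min(battery_capacity, soc + surplus)      # charge the battery
    else:
        discharge = min(soc, -surplus)                  # discharge the battery
        soc -= discharge
        unmet += -surplus - discharge                   # capacity shortage

print("unmet load: %.1f kWh (%.2f%% of annual demand)"
      % (unmet, 100.0 * unmet / total_load))
```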

Keywords: wind stand-alone system, photovoltaic stand-alone system, hybrid system, optimum system sizing, feasibility, cost analysis

Procedia PDF Downloads 338
4888 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink

Authors: Sanjay Rathee, Arti Kashyap

Abstract:

Extraction of useful information from large datasets is one of the most important research problems, and association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. Many algorithms exist to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amount of data available these days exceeds the capacity of a single machine. Therefore, to meet the demands of this ever-growing data, there is a need for a multi-machine Apriori algorithm. For these types of distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks with a MapReduce approach for distributed storage and distributed processing of huge datasets using clusters built from commodity hardware. However, the heavy disk I/O operation at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years; among them, two platforms, namely Spark and Flink, have attracted a lot of attention because of their inbuilt support for distributed computations. Earlier, we proposed a reduced Apriori algorithm on the Spark platform which outperforms parallel Apriori, firstly because of the use of Spark and secondly because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel of our work and targets implementing, testing and benchmarking Apriori, Reduced-Apriori and our new algorithm, ReducedAll-Apriori, on Apache Flink, and compares them with the Spark implementation. Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks of MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelining-based structure allows a next iteration to start as soon as partial results of the earlier iteration are available; there is no need to wait for all reducers' results to start the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency and scalability of the Apriori and RA-Apriori algorithms on Flink.
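
For reference, the core Apriori iteration (generate candidate k-itemsets from the frequent (k−1)-itemsets, then count them against the transactions) is sketched below as a plain single-machine Python implementation. It is meant only to clarify what is being parallelised; the MapReduce/Flink versions discussed in the paper distribute exactly this counting step.

```python
def apriori(transactions, min_support):
    """Return all frequent itemsets (as frozensets) with support >= min_support."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)
    # L1: frequent single items
    items = {i for t in transactions for i in t}
    current = {frozenset([i]) for i in items
               if sum(i in t for t in transactions) / n >= min_support}
    frequent = set(current)
    k = 2
    while current:
        # Candidate generation: join frequent (k-1)-itemsets
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        # Support counting (the step that MapReduce-style algorithms distribute)
        current = {c for c in candidates
                   if sum(c <= t for t in transactions) / n >= min_support}
        frequent |= current
        k += 1
    return frequent

transactions = [{"bread", "milk"}, {"bread", "butter"},
                {"milk", "butter", "bread"}, {"milk", "beer"}]
for itemset in sorted(apriori(transactions, 0.5), key=len):
    print(set(itemset))
```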

Keywords: Apriori, Apache Flink, MapReduce, Spark, Hadoop, R-Apriori, frequent itemset mining

Procedia PDF Downloads 294
4887 Modelling Fluoride Pollution of Groundwater Using Artificial Neural Network in the Western Parts of Jharkhand

Authors: Neeta Kumari, Gopal Pathak

Abstract:

The artificial neural network has proved to be an efficient tool for non-parametric modeling of data in various applications where the output is non-linearly associated with the input. It is a preferred tool for many predictive data mining applications because of its power, flexibility and ease of use. A standard feed-forward network (FFN) is used to predict the groundwater fluoride content. The ANN model is trained using the backpropagation algorithm with Tansig and Logsig activation functions and varying numbers of neurons. The models are evaluated on the basis of statistical performance criteria such as root mean squared error (RMSE), regression coefficient (R²), bias (mean error), coefficient of variation (CV), Nash-Sutcliffe efficiency (NSE) and the index of agreement (IOA). The results of the study indicate that an artificial neural network (ANN) can be used for groundwater fluoride prediction with sufficiently good accuracy in limited-data situations in hard-rock regions such as the western parts of Jharkhand.
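
The statistical performance criteria listed above have standard definitions, summarised in the short sketch below (observed values o, predicted values p). The sample arrays are placeholders; only the formulas matter here.

```python
import numpy as np

def evaluate(o, p):
    """Standard goodness-of-fit criteria for observed o and predicted p."""
    o, p = np.asarray(o, float), np.asarray(p, float)
    rmse = np.sqrt(np.mean((o - p) ** 2))
    bias = np.mean(p - o)                                   # mean error
    cv = np.std(o) / np.mean(o)                             # coefficient of variation
    ss_res = np.sum((o - p) ** 2)
    ss_tot = np.sum((o - o.mean()) ** 2)
    nse = 1 - ss_res / ss_tot                               # Nash-Sutcliffe efficiency
    r2 = np.corrcoef(o, p)[0, 1] ** 2                       # regression coefficient
    ioa = 1 - ss_res / np.sum((np.abs(p - o.mean()) + np.abs(o - o.mean())) ** 2)
    return dict(RMSE=rmse, bias=bias, CV=cv, NSE=nse, R2=r2, IOA=ioa)

observed  = [0.8, 1.2, 1.5, 0.6, 1.1, 0.9]   # e.g. fluoride in mg/L (made up)
predicted = [0.9, 1.1, 1.4, 0.7, 1.0, 1.0]
for name, value in evaluate(observed, predicted).items():
    print(f"{name}: {value:.3f}")
```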

Keywords: artificial neural network (ANN), feed-forward network (FFN), backpropagation algorithm, Levenberg-Marquardt algorithm, groundwater fluoride contamination

Procedia PDF Downloads 548
4886 Probabilistic Gathering of Agents with Simple Sensors: Distributed Algorithm for Aggregation of Robots Equipped with Binary On-Board Detectors

Authors: Ariel Barel, Rotem Manor, Alfred M. Bruckstein

Abstract:

We present a probabilistic gathering algorithm for agents that can only detect the presence of other agents in front of or behind them. The agents act in the plane and are identical and indistinguishable, oblivious, and lack any means of direct communication. They do not have a common frame of reference in the plane and choose their orientation (direction of possible motion) at random. The analysis of the gathering process assumes that the agents act synchronously in selecting random orientations that remain fixed during each unit time-interval. Two algorithms are discussed. The first one assumes discrete jumps based on the sensing results given the randomly selected motion direction, and in this case, extensive experimental results exhibit probabilistic clustering into a circular region with radius equal to the step-size in time proportional to the number of agents. The second algorithm assumes agents with continuous sensing and motion, and in this case, we can prove gathering into a very small circular region in finite expected time.
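
A toy simulation of the discrete-jump variant can clarify the setting. The exact jump rule of the paper is not reproduced here; the rule coded below (step forward one unit only if some agent is sensed ahead and none behind, along the randomly drawn orientation) is an assumption made for illustration, as are the population size and number of steps.

```python
import numpy as np

rng = np.random.default_rng(0)
N, STEPS, STEP_SIZE = 30, 2000, 1.0
pos = rng.uniform(-50.0, 50.0, size=(N, 2))   # agents scattered in the plane

for _ in range(STEPS):
    # Each agent draws a random orientation that stays fixed for this time unit.
    angles = rng.uniform(0.0, 2.0 * np.pi, N)
    headings = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    new_pos = pos.copy()
    for i in range(N):
        proj = (pos - pos[i]) @ headings[i]    # signed distances along the heading
        ahead = np.any(proj > 1e-9)            # binary sensor: someone in front?
        behind = np.any(proj < -1e-9)          # binary sensor: someone behind?
        # Assumed jump rule (illustrative): step forward only if all others are ahead.
        if ahead and not behind:
            new_pos[i] = pos[i] + STEP_SIZE * headings[i]
    pos = new_pos                              # synchronous update of all agents

radius = np.max(np.linalg.norm(pos - pos.mean(axis=0), axis=1))
print("final cluster radius: %.2f (step size %.1f)" % (radius, STEP_SIZE))
```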

Keywords: control, decentralized, gathering, multi-agent, simple sensors

Procedia PDF Downloads 162
4885 A Fully Interpretable Deep Reinforcement Learning-Based Motion Control for Legged Robots

Authors: Haodong Huang, Zida Zhao, Shilong Sun, Chiyao Li, Wenfu Xu

Abstract:

The control methods for legged robots based on deep reinforcement learning have seen widespread application; however, the inherent black-box nature of neural networks presents challenges in understanding the decision-making motives of the robots. To address this issue, we propose a fully interpretable deep reinforcement learning training method to elucidate the underlying principles of legged robot motion. We incorporate the dynamics of legged robots into the policy, where observations serve as inputs and actions as outputs of the dynamics model. By embedding the dynamics equations within the multi-layer perceptron (MLP) computation process and making the parameters trainable, we enhance interpretability. Additionally, Bayesian optimization is introduced to train these parameters. We validate the proposed fully interpretable motion control algorithm on a legged robot, opening new research avenues for motion control and learning algorithms for legged robots within the deep learning framework.

Keywords: deep reinforcement learning, interpretation, motion control, legged robots

Procedia PDF Downloads 19
4884 Frequent Itemset Mining Using Rough-Sets

Authors: Usman Qamar, Younus Javed

Abstract:

Frequent pattern mining is the process of finding a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set. It was proposed in the context of frequent itemsets and association rule mining. Frequent pattern mining is used to find inherent regularities in data, for example which products were often purchased together. Its applications include basket data analysis, cross-marketing, catalog design, sale campaign analysis, web log (click stream) analysis and DNA sequence analysis. However, one of the bottlenecks of frequent itemset mining is that as the data grow, the amount of time and resources required to mine them increases at an exponential rate. In this investigation, a new algorithm is proposed which can be used as a pre-processor for frequent itemset mining. FASTER (FeAture SelecTion using Entropy and Rough sets) is a hybrid pre-processor algorithm which utilizes entropy and rough sets to carry out record reduction and feature (attribute) selection, respectively. FASTER for frequent itemset mining can produce a speed-up of 3.1 times compared to the original algorithm while maintaining an accuracy of 71%.
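
As a flavour of the entropy part of such a pre-processor, the sketch below ranks attributes of a small categorical data set by information gain (reduction in Shannon entropy of the class) and keeps the most informative ones. It is a generic entropy-based feature selection illustration, not the FASTER algorithm itself, and the data set is invented.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Entropy reduction of the class when the data are split by one attribute."""
    base = entropy(labels)
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[attr_index], []).append(y)
    remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return base - remainder

# Invented categorical records: (outlook, temperature, windy) -> class label
rows = [("sunny", "hot", "no"), ("sunny", "hot", "yes"), ("overcast", "hot", "no"),
        ("rain", "mild", "no"), ("rain", "cool", "no"), ("rain", "cool", "yes"),
        ("overcast", "cool", "yes"), ("sunny", "mild", "no")]
labels = ["skip", "skip", "buy", "buy", "buy", "skip", "buy", "skip"]

gains = {i: information_gain(rows, labels, i) for i in range(len(rows[0]))}
selected = [i for i, g in sorted(gains.items(), key=lambda kv: -kv[1])[:2]]
print("information gain per attribute:", {i: round(g, 3) for i, g in gains.items()})
print("selected attribute indices:", selected)
```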

Keywords: rough-sets, classification, feature selection, entropy, outliers, frequent itemset mining

Procedia PDF Downloads 435
4883 Seamless Mobility in Heterogeneous Mobile Networks

Authors: Mohab Magdy Mostafa Mohamed

Abstract:

The objective of this paper is to introduce a vertical handover (VHO) algorithm between wireless LANs (WLANs) and LTE mobile networks. The proposed algorithm is based on the fuzzy control theory and takes into consideration power level, subscriber velocity, and target cell load instead of only power level in traditional algorithms. Simulation results show that network performance in terms of number of handovers and handover occurrence distance is improved.
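
The fuzzy decision logic can be illustrated with a hand-rolled Mamdani-style evaluation over the three inputs named above. The membership functions, rule base and threshold below are invented for the example; the paper's actual fuzzy controller is not reproduced.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def handover_score(rss_dbm, speed_kmh, cell_load):
    """Aggregate a 'hand over to LTE' score from three fuzzy rules (illustrative)."""
    weak_wlan  = tri(rss_dbm, -95, -85, -70)     # WLAN power level is weak
    fast_user  = tri(speed_kmh, 20, 60, 120)     # subscriber moves fast
    light_cell = tri(cell_load, -0.2, 0.0, 0.6)  # target LTE cell lightly loaded
    # Rule strengths (min = AND), aggregated by max (OR):
    r1 = min(weak_wlan, light_cell)              # weak WLAN and free LTE cell
    r2 = min(fast_user, light_cell)              # fast user and free LTE cell
    r3 = weak_wlan                               # weak WLAN alone, weaker vote
    return max(r1, r2, 0.5 * r3)

score = handover_score(rss_dbm=-88, speed_kmh=70, cell_load=0.3)
print("handover score:", round(score, 2),
      "-> handover" if score > 0.5 else "-> stay on WLAN")
```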

Keywords: vertical handover, fuzzy control theory, power level, speed, target cell load

Procedia PDF Downloads 351
4882 Development of Computational Approach for Calculation of Hydrogen Solubility in Hydrocarbons for Treatment of Petroleum

Authors: Abdulrahman Sumayli, Saad M. AlShahrani

Abstract:

For the hydrogenation process, knowing the solubility of hydrogen (H2) in hydrocarbons is critical to improving the efficiency of the process. We investigated the computation of H2 solubility in four heavy crude oil feedstocks using machine learning techniques. Temperature, pressure and feedstock type were considered as the inputs to the models, while the hydrogen solubility was the sole response. Specifically, we employed three different models: support vector regression (SVR), Gaussian process regression (GPR) and Bayesian ridge regression (BRR). To achieve the best performance, the hyper-parameters of these models are optimized using the whale optimization algorithm (WOA). We evaluated the models using a dataset of solubility measurements in various feedstocks and compared their performance based on several metrics. Our results show that the SVR model tuned with WOA (WOA-SVR) achieves the best performance overall, with an RMSE of 1.38 × 10⁻² and an R-squared of 0.991. These findings suggest that machine learning techniques can provide accurate predictions of hydrogen solubility in different feedstocks, which could be useful in the development of hydrogen-related technologies. In addition, the solubility of hydrogen in the four heavy oil fractions is estimated over temperature and pressure ranges of 150°C-350°C and 1.2 MPa-10.8 MPa, respectively.
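
The modelling pipeline (an SVR whose hyper-parameters are chosen by a metaheuristic on a temperature/pressure/feedstock data set) can be sketched as below. A plain random search stands in for the whale optimization algorithm, and the tiny synthetic data set replaces the real solubility measurements; both are assumptions made for illustration only.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Synthetic stand-in data: [temperature degC, pressure MPa, feedstock id] -> H2 solubility
X = np.column_stack([rng.uniform(150, 350, 80),
                     rng.uniform(1.2, 10.8, 80),
                     rng.integers(0, 4, 80)])
y = 0.002 * X[:, 0] + 0.05 * X[:, 1] + 0.03 * X[:, 2] + rng.normal(0, 0.02, 80)

def score(params):
    """Cross-validated R^2 of an RBF-kernel SVR with the given hyper-parameters."""
    c, gamma, eps = params
    model = SVR(kernel="rbf", C=c, gamma=gamma, epsilon=eps)
    return cross_val_score(model, X, y, cv=5, scoring="r2").mean()

# Random search standing in for WOA: sample hyper-parameters, keep the best.
best_params, best_score = None, -np.inf
for _ in range(60):
    params = (10 ** rng.uniform(-1, 3),    # C
              10 ** rng.uniform(-4, 0),    # gamma
              10 ** rng.uniform(-3, -1))   # epsilon
    s = score(params)
    if s > best_score:
        best_params, best_score = params, s

print("best (C, gamma, epsilon):", tuple(round(p, 4) for p in best_params))
print("cross-validated R^2:", round(best_score, 3))
```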

Keywords: temperature, pressure variations, machine learning, oil treatment

Procedia PDF Downloads 67
4881 Supplier Selection and Order Allocation Using a Stochastic Multi-Objective Programming Model and Genetic Algorithm

Authors: Rouhallah Bagheri, Morteza Mahmoudi, Hadi Moheb-Alizadeh

Abstract:

In this paper, we develop a multi-objective supplier selection and order allocation model in a stochastic environment in which the purchasing cost, the percentage of items delivered late and the percentage of rejected items provided by each supplier are stochastic parameters following an arbitrary probability distribution. To do so, we use dependent chance programming (DCP), which maximizes the probability of the event that the total purchasing cost, the total number of items delivered late and the total number of rejected items are less than or equal to pre-determined values given by the decision maker. After transforming the above-mentioned stochastic multi-objective programming problem into a stochastic single-objective problem using the minimum deviation method, we apply a genetic algorithm to solve the latter. The employed genetic algorithm performs a simulation process in order to calculate the stochastic objective function as its fitness function. Finally, we explore the impact of the stochastic parameters on the obtained solution via a sensitivity analysis exploiting the coefficient of variation. The results show that as the stochastic parameters have greater coefficients of variation, the value of the objective function in the stochastic single-objective programming problem worsens.

Keywords: dependent chance programming, genetic algorithm, minimum deviation method, order allocation, supplier selection

Procedia PDF Downloads 254
4880 A Review on Control of a Grid Connected Permanent Magnet Synchronous Generator Based Variable Speed Wind Turbine

Authors: Eman M. Eissa, Hany M. Hasanin, Mahmoud Abd-Elhamid, S. M. Muyeen, T. Fernando, H. H. C. Iu

Abstract:

Among all available wind energy conversion systems (WECS), the direct-driven permanent magnet synchronous generator integrated with power electronic interfaces is becoming popular due to its capability of optimal energy capture, reduced mechanical stresses, no need for an external excitation current (meaning fewer losses), more compact size, simple structure and low maintenance cost; in addition, its decoupled control performance is much less sensitive to the parameter variations of the generator. This paper attempts to present a review of the control and optimization strategies of WECS based on the permanent magnet synchronous generator (PMSG) and an overview of the most recent research trends in this field. The main aims of this review include the generalized overall WECS, starting from turbines and generators, and the control strategies including converters, maximum power point tracking (MPPT) and, finally, DC-link control. The optimization methods for the controller parameters necessary to guarantee efficient and safe operation of the system, especially when connected to the power grid, are also presented.

Keywords: control and optimization techniques, permanent magnet synchronous generator, variable speed wind turbines, wind energy conversion system

Procedia PDF Downloads 221
4879 Evolved Bat Algorithm Based Adaptive Fuzzy Sliding Mode Control with LMI Criterion

Authors: P.-W. Tsai, C.-Y. Chen, C.-W. Chen

Abstract:

In this paper, the stability analysis of a GA-based adaptive fuzzy sliding mode controller for a nonlinear system is discussed. First, a nonlinear plant is well-approximated and described with a reference model and a fuzzy model, both involving FLC rules. Then, the FLC rules and the consequent parameters are decided via an Evolved Bat Algorithm (EBA). After this, we guarantee a new tracking performance inequality for the control system. The tracking problem is characterized as solving an eigenvalue problem (EVP). Next, an adaptive fuzzy sliding mode controller (AFSMC) is proposed to stabilize the system so as to achieve good control performance. Lyapunov's direct method can be used to ensure the stability of the nonlinear system. It is shown that the stability analysis can reduce nonlinear systems to a linear matrix inequality (LMI) problem. Finally, a numerical simulation is provided to demonstrate the control methodology.

Keywords: adaptive fuzzy sliding mode control, Lyapunov direct method, swarm intelligence, evolved bat algorithm

Procedia PDF Downloads 444
4878 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization

Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller

Abstract:

The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures for this include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires a restriction of the solution space to discrete choices of modernization measures, such as the sizing of heating systems. After calculating the operation of different energy systems in terms of the resulting final energy demands in simulation models in a first stage, the results serve as input for a second-stage MILP optimization in which the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures thanks to the efficiency of MILP solvers, but it necessitates simplifying the building energy system operation. Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions for building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from those of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.
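
To make the MILP formulation concrete, the toy model below selects modernization measures for a small portfolio under a budget, minimizing the remaining emissions. The buildings, measures, costs and emission factors are invented, and the model omits the multi-year pathway and operational detail discussed in the paper; it merely illustrates the kind of binary decision structure a MILP solver handles efficiently.

```python
import pulp

# Invented portfolio data: annual CO2 emissions per building [t/a]
buildings = {"A": 40.0, "B": 25.0, "C": 60.0}
# Invented measures: (cost [kEUR], fraction of emissions avoided)
measures = {"envelope": (120.0, 0.30), "heat_pump": (80.0, 0.45), "pv": (30.0, 0.10)}
budget = 250.0  # kEUR available in the planning period

prob = pulp.LpProblem("modernization", pulp.LpMinimize)
x = {(b, m): pulp.LpVariable(f"x_{b}_{m}", cat="Binary")
     for b in buildings for m in measures}

# Objective: remaining emissions after the selected measures (additive savings assumed).
prob += pulp.lpSum(buildings[b] * (1 - pulp.lpSum(measures[m][1] * x[b, m]
                                                  for m in measures))
                   for b in buildings)
# Budget constraint over all selected measures.
prob += pulp.lpSum(measures[m][0] * x[b, m]
                   for b in buildings for m in measures) <= budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [(b, m) for (b, m), var in x.items() if var.value() == 1]
print("selected measures:", chosen)
print("remaining emissions [t/a]:", round(pulp.value(prob.objective), 1))
```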

Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization

Procedia PDF Downloads 32
4877 Configuration Design and Optimization of the Movable Leg-Foot Lunar Soft-Landing Device

Authors: Shan Jia, Jinbao Chen, Jinhua Zhou, Jiacheng Qian

Abstract:

Lunar exploration is a necessary foundation for deep-space exploration. To overcome the functional limitations of the fixed landers that are currently in wide use, which can only expand their detection range through wheeled rovers with unavoidable path repeatability, a movable lunar soft-landing device based on a cantilever-type buffer mechanism and a leg-foot-type walking mechanism is presented. Firstly, a 20-DoF quadruped configuration based on pushrods is proposed. The configuration has bionic characteristics such as hip, knee and ankle joints, and keeps the kinematics of the whole mechanism unchanged before and after buffering. Secondly, the multi-function main/auxiliary buffers, based on crumple energy absorption and a screw-nut mechanism, as well as the telescopic device used to protect the plantar force sensors during the buffering process, are designed. Finally, the kinematic model of the whole mechanism is established, and the configuration optimization of the whole mechanism is completed based on the performance requirements for slope adaptation and obstacle crossing. This research can provide a technical solution integrating soft landing, large-scale inspection and material transfer for future lunar exploration and even Mars exploration, and can also serve as the technical basis for developing reusable landers.

Keywords: configuration design, lunar soft-landing device, movable, optimization

Procedia PDF Downloads 156
4876 Optimal Design of Friction Dampers for Seismic Retrofit of a Moment Frame

Authors: Hyungoo Kang, Jinkoo Kim

Abstract:

This study investigated the determination of the optimal location and friction force of friction dampers to effectively reduce the seismic response of a reinforced concrete structure designed without considering seismic load. To this end, the genetic algorithm process was applied and the results were compared with those obtained by simplified methods such as distribution of dampers based on the story shear or the inter-story drift ratio. The seismic performance of the model structure with optimally positioned friction dampers was evaluated by nonlinear static and dynamic analyses. The analysis results showed that compared with the system without friction dampers, the maximum roof displacement and the inter-story drift ratio were reduced by about 30% and 40%, respectively. After installation of the dampers about 70% of the earthquake input energy was dissipated by the dampers and the energy dissipated in the structural elements was reduced by about 50%. In comparison with the simplified methods of installation, the genetic algorithm provided more efficient solutions for seismic retrofit of the model structure.

Keywords: friction dampers, genetic algorithm, optimal design, RC buildings

Procedia PDF Downloads 243
4875 Development of a Few-View Computed Tomographic Reconstruction Algorithm Using Multi-Directional Total Variation

Authors: Chia Jui Hsieh, Jyh Cheng Chen, Chih Wei Kuo, Ruei Teng Wang, Woei Chyn Chu

Abstract:

A compressed sensing (CS)-based computed tomographic (CT) reconstruction algorithm utilizes total variation (TV) to transform the CT image into a sparse domain and minimizes the L1-norm of the sparse image for reconstruction. Unlike traditional CS-based reconstruction, which calculates only the x-direction and y-direction TV to transform CT images into the sparse domain, we propose a multi-directional TV to transform the tomographic image into the sparse domain for low-dose reconstruction. Our method considers all possible directions of TV calculation around a pixel, so the sparse transform for CS-based reconstruction is more accurate. In 2D CT reconstruction, we use an eight-directional TV to transform the CT image into the sparse domain; furthermore, we use a 26-directional TV for 3D reconstruction. This multi-directional sparse transform makes the CS-based reconstruction algorithm more powerful in reducing noise and increasing image quality. To validate and evaluate the performance of this multi-directional sparse transform, we use both the Shepp-Logan phantom and a head phantom as the targets for reconstruction with the corresponding simulated sparse projection data (angular sampling intervals of 5 deg and 6 deg, respectively). From the results, the multi-directional TV method can reconstruct images with relatively fewer artifacts compared with the traditional CS-based reconstruction algorithm, which calculates only the x-direction and y-direction TV. We also chose RMSE, PSNR and UQI as the parameters for quantitative analysis; no matter which parameter is calculated, the proposed multi-directional TV method performs better.
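
The multi-directional sparsifying transform amounts to summing the absolute finite differences of the image along every direction to a pixel's neighbours. An eight-direction 2D version might be computed as in the sketch below; the weighting and boundary handling are assumptions, and the paper's exact operator may differ.

```python
import numpy as np

def tv_8dir(img):
    """Sum of absolute differences to all 8 neighbours of each pixel
    (a multi-directional total variation of a 2D image)."""
    shifts = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
              if (di, dj) != (0, 0)]
    tv = 0.0
    for di, dj in shifts:
        shifted = np.roll(np.roll(img, di, axis=0), dj, axis=1)
        diff = img - shifted
        # Crop the wrapped-around border so periodic edges do not contribute.
        if di: diff = diff[1:, :] if di > 0 else diff[:-1, :]
        if dj: diff = diff[:, 1:] if dj > 0 else diff[:, :-1]
        tv += np.abs(diff).sum()
    return tv

# Tiny example: a piecewise-constant image has low TV, a noisy one a higher TV.
rng = np.random.default_rng(0)
flat = np.zeros((64, 64)); flat[16:48, 16:48] = 1.0
noisy = flat + 0.1 * rng.standard_normal(flat.shape)
print("TV (piecewise constant):", round(tv_8dir(flat), 1))
print("TV (noisy):", round(tv_8dir(noisy), 1))
```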

Keywords: compressed sensing (CS), low-dose CT reconstruction, total variation (TV), multi-directional gradient operator

Procedia PDF Downloads 255
4874 Optimization of Fourth Order Discrete-Approximation Inclusions

Authors: Elimhan N. Mahmudov

Abstract:

The paper concerns the necessary and sufficient conditions of optimality for the Cauchy problem of fourth-order discrete (PD) and discrete-approximate (PDA) inclusions. The main problem is the formulation of the fourth-order adjoint discrete and discrete-approximate inclusions and transversality conditions, which are peculiar to problems involving fourth-order derivatives and approximate derivatives. Thus, the necessary and sufficient conditions of optimality are obtained incorporating the Euler-Lagrange and Hamiltonian forms of inclusions. The derivation of the optimality conditions is based on the apparatus of the locally adjoint mapping (LAM). Moreover, as an application of these results, we consider fourth-order linear discrete and discrete-approximate inclusions.

Keywords: difference, optimization, fourth, approximation, transversality

Procedia PDF Downloads 374
4873 Hydraulic Characteristics of Mine Tailings by Metaheuristics Approach

Authors: Akhila Vasudev, Himanshu Kaushik, Tadikonda Venkata Bharat

Abstract:

A large number of mine tailings are produced every year as part of the extraction process of phosphates, gold, copper and other materials. Mine tailings are high in water content and exhibit very slow dewatering behavior. The efficient design of tailings dams and the economical disposal of these slurries require knowledge of the tailings' consolidation behavior. Large-strain consolidation theory closely predicts the self-weight consolidation of these slurries, as the theory considers the conservation of mass and momentum and treats the hydraulic conductivity as a function of the void ratio. Classical laboratory techniques, such as the settling column test, the seepage consolidation test, etc., are expensive and time-consuming for estimating the variation of hydraulic conductivity with void ratio. Inverse estimation of the constitutive relationships from the measured settlement versus time curves is therefore explored. In this work, inverse analysis based on metaheuristic techniques is explored for predicting the hydraulic conductivity parameters of mine tailings from the base excess pore water pressure dissipation curve and the initial conditions of the tailings. The proposed inverse model uses the particle swarm optimization (PSO) algorithm, which is based on the social behavior of animals searching for food sources. The finite-difference numerical solution of the forward analytical model is integrated with the PSO algorithm to solve the inverse problem. The method is tested on synthetic base excess pore pressure dissipation curves generated using the finite difference method. The effectiveness of the method is verified using a base excess pore pressure dissipation curve obtained from a settling column experiment and is further ensured through comparison with available predicted hydraulic conductivity parameters.
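
The inverse-analysis idea (treat the unknown parameters as decision variables, run the forward model, and let PSO minimise the misfit to the observed pore-pressure dissipation curve) is illustrated below with a deliberately simple exponential-decay forward model in place of the finite-difference consolidation solver. The forward model, the "observed" data and the PSO settings are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 100, 50)                          # time [days]

def forward(params, t):
    """Placeholder forward model: excess pore pressure u(t) = u0 * exp(-k * t)."""
    u0, k = params
    return u0 * np.exp(-k * t)

observed = forward([80.0, 0.05], t) + rng.normal(0, 1.0, t.size)   # synthetic data

def misfit(params):
    return np.sum((forward(params, t) - observed) ** 2)

# Minimal particle swarm optimisation over the two parameters.
N_PART, ITERS = 20, 200
lo, hi = np.array([1.0, 0.001]), np.array([200.0, 0.5])
pos = rng.uniform(lo, hi, (N_PART, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([misfit(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                            # inertia and acceleration weights
for _ in range(ITERS):
    r1, r2 = rng.random((N_PART, 2)), rng.random((N_PART, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([misfit(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("estimated (u0, k):", np.round(gbest, 4))
```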

Keywords: base excess pore pressure, hydraulic conductivity, large strain consolidation, mine tailings

Procedia PDF Downloads 130
4872 Non-Population Search Algorithms for Capacitated Material Requirement Planning in Multi-Stage Assembly Flow Shop with Alternative Machines

Authors: Watcharapan Sukkerd, Teeradej Wuttipornpun

Abstract:

This paper presents non-population search algorithms, namely tabu search (TS), simulated annealing (SA) and variable neighborhood search (VNS), to minimize the total cost of the capacitated MRP problem in a multi-stage assembly flow shop with two alternative machines. The algorithm has three main steps. Firstly, an initial sequence of orders is constructed by a simple due-date-based dispatching rule. Secondly, the sequence of orders is repeatedly improved to reduce the total cost by applying TS, SA and VNS separately. Finally, the total cost is further reduced by optimizing the start time of each operation using a linear programming (LP) model. The parameters of the algorithm are tuned using real data from automotive companies. The results show that VNS significantly outperforms TS, SA and the existing algorithm.
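
Of the three non-population algorithms, simulated annealing is the shortest to sketch: start from a due-date-ordered sequence, repeatedly swap two orders, and accept worse sequences with a temperature-dependent probability. The cost function below is a stand-in (total tardiness on a single machine), not the capacitated MRP cost model of the paper.

```python
import math
import random

# Hypothetical orders: (processing time, due date)
orders = [(4, 10), (3, 6), (7, 25), (2, 8), (5, 20), (6, 15)]

def cost(seq):
    """Stand-in objective: total tardiness when orders run back-to-back."""
    t, total = 0, 0
    for i in seq:
        t += orders[i][0]
        total += max(0, t - orders[i][1])
    return total

random.seed(3)
# Initial sequence from a simple earliest-due-date dispatching rule.
current = sorted(range(len(orders)), key=lambda i: orders[i][1])
best, best_cost = current[:], cost(current)
temperature = 10.0
while temperature > 0.01:
    i, j = random.sample(range(len(current)), 2)
    candidate = current[:]
    candidate[i], candidate[j] = candidate[j], candidate[i]       # swap move
    delta = cost(candidate) - cost(current)
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        current = candidate                                       # accept move
        if cost(current) < best_cost:
            best, best_cost = current[:], cost(current)
    temperature *= 0.99                                           # cooling schedule
print("best sequence:", best, "total tardiness:", best_cost)
```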

Keywords: capacitated MRP, tabu search, simulated annealing, variable neighborhood search, linear programming, assembly flow shop, application in industry

Procedia PDF Downloads 232
4871 Evaluation of Dual Polarization Rainfall Estimation Algorithm Applicability in Korea: A Case Study on Biseulsan Radar

Authors: Chulsang Yoo, Gildo Kim

Abstract:

Dual polarization radar provides comprehensive information about rainfall by measuring multiple parameters. In Korea, the JPOLE and CSU-HIDRO algorithms are generally used for rainfall estimation. This study evaluated the local applicability of the JPOLE and CSU-HIDRO algorithms in Korea by using the observed rainfall data collected in August 2014 by the Biseulsan dual polarization radar and the KMA AWS. A total of 11,372 pairs of radar-ground rain rate data were classified according to the thresholds of the synthetic algorithms into suitable and unsuitable data. Then, evaluation criteria were derived by comparing radar rain rate and ground rain rate for the entire, suitable and unsuitable data, respectively. The results are as follows: (1) The radar rain rate equation including KDP was found to be better for rainfall estimation than the other equations for both the JPOLE and CSU-HIDRO algorithms, and the thresholds were found to be adequately applied for both algorithms including the specific differential phase. (2) The radar rain rate equation including horizontal reflectivity and differential reflectivity was found to be poor compared to the others, and the result was not improved even when only the suitable data were applied. Acknowledgments: This work was supported by the Basic Science Research Program through the National Research Foundation of Korea, funded by the Ministry of Education (NRF-2013R1A1A2011012).

Keywords: CSU-HIDRO algorithm, dual polarization radar, JPOLE algorithm, radar rainfall estimation algorithm

Procedia PDF Downloads 210
4870 Riesz Mixture Model for Brain Tumor Detection

Authors: Mouna Zitouni, Mariem Tounsi

Abstract:

This research introduces an application of the Riesz mixture model for medical image segmentation for accurate diagnosis and treatment of brain tumors. We propose a pixel classification technique based on the Riesz distribution, derived from an extended Bartlett decomposition. To our knowledge, this is the first study addressing this approach. The Expectation-Maximization algorithm is implemented for parameter estimation. A comparative analysis, using both synthetic and real brain images, demonstrates the superiority of the Riesz model over a recent method based on the Wishart distribution.

Keywords: EM algorithm, segmentation, Riesz probability distribution, Wishart probability distribution

Procedia PDF Downloads 16
4869 Design and Fabrication of Stiffness Reduced Metallic Locking Compression Plates through Topology Optimization and Additive Manufacturing

Authors: Abdulsalam A. Al-Tamimi, Chris Peach, Paulo Rui Fernandes, Paulo J. Bartolo

Abstract:

Bone fixation implants currently used to treat traumatic bone fractures and to promote fracture healing are built with biocompatible metallic materials such as stainless steel, cobalt chromium and titanium and their alloys (e.g., CoCrMo and Ti6Al4V). The noticeable stiffness mismatch between current metallic implants and the host bone is associated with negative outcomes such as stress shielding, which causes bone loss and implant loosening, leading to deficient fracture treatment. This paper, part of a major research program to design the next generation of bone fixation implants, describes the combined use of three-dimensional (3D) topology optimization (TO) and additive manufacturing powder bed technology (electron beam melting) to redesign and fabricate plates based on the current standard one (i.e., the locking compression plate). Topology optimization is applied with an objective function to maximize the stiffness, constrained by volume reductions (i.e., 25-75%), in order to obtain optimized implant designs with a reduced stress shielding phenomenon under different boundary conditions (i.e., tension, bending, torsion and combined loads). The stiffness of the original and optimized plates is assessed through a finite-element study. The TO results showed an actual reduction in stiffness for most of the plates due to the critical values of volume reduction. Additionally, the optimized plates fabricated using powder bed techniques proved that the integration of TO and additive manufacturing offers the capability of producing stiffness-reduced plates with acceptable tolerances.

Keywords: additive manufacturing, locking compression plate, finite element, topology optimization

Procedia PDF Downloads 196
4868 Deterministic Random Number Generator Algorithm for Cryptosystem Keys

Authors: Adi A. Maaita, Hamza A. A. Al Sewadi

Abstract:

One of the crucial parameters of digital cryptographic systems is the selection of the keys used and their distribution. The randomness of the keys has a strong impact on the system's security strength, as random keys are difficult to predict, guess, reproduce or discover by a cryptanalyst. Therefore, adequate key randomness generation is still sought for the benefit of stronger cryptosystems. This paper suggests an algorithm designed to generate and test pseudorandom number sequences intended for cryptographic applications. The algorithm is based on mathematically manipulating publicly agreed-upon information shared between sender and receiver over a public channel. This information is used as a seed for performing some mathematical functions in order to generate a sequence of pseudorandom numbers that will be used for encryption/decryption purposes. The manipulation involves permutations and substitutions that fulfill Shannon's principles of confusion and diffusion. ASCII code characters were utilized in the generation process instead of bit strings initially, which adds more flexibility in testing different seed values. Finally, the obtained results indicate that it would be soundly difficult for attackers to guess the keys.
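
The general idea (derive a reproducible pseudorandom key stream deterministically from a shared seed) can be shown with a simple hash-counter construction. This is an illustration of the concept only, not the algorithm proposed in the paper, and it should not be used as-is in place of a vetted cryptographic generator.

```python
import hashlib

def keystream(seed: str, n_bytes: int) -> bytes:
    """Deterministic pseudorandom bytes derived from a shared seed string
    (hash-in-counter-mode sketch; illustrative only)."""
    out = bytearray()
    counter = 0
    while len(out) < n_bytes:
        block = hashlib.sha256(f"{seed}:{counter}".encode("ascii")).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:n_bytes])

# Sender and receiver both derive the same key material from the agreed seed.
shared_seed = "publicly-agreed-information-2024"
key = keystream(shared_seed, 32)
print(key.hex())

# A simple monobit sanity check on a longer stream (not a full randomness test suite).
stream = keystream(shared_seed, 10_000)
ones = sum(bin(b).count("1") for b in stream)
print("fraction of 1 bits:", round(ones / (len(stream) * 8), 4))
```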

Keywords: cryptosystems, information security agreement, key distribution, random numbers

Procedia PDF Downloads 268
4867 Study of the Best Algorithm to Estimate Sunshine Duration from Global Radiation on Horizontal Surface for Tropical Region

Authors: Tovondahiniriko Fanjirindratovo, Olga Ramiarinjanahary, Paulisimone Rasoavonjy

Abstract:

The sunshine duration, which is the sum of all the periods during which the solar beam radiation is at or above a minimum value, is an important parameter for climatology, tourism, agriculture and solar energy. It is usually measured by a pyrheliometer installed on a two-axis solar tracker. Because of the high cost of this device and, on the other hand, the wide availability of global radiation measurements on a horizontal surface, several studies have been carried out to correlate global radiation with sunshine duration. Most of these studies are fitted for the northern hemisphere using a pyrheliometric database. The aim of the present work is to list and assess all the existing methods and apply them to Reunion Island, a tropical region in the southern hemisphere. Using a ten-year database, global, diffuse and beam radiation on a horizontal surface are employed in order to evaluate the uncertainty of the existing algorithms for a tropical region. The methodology is based on indirect comparison, because the solar beam radiation is not measured directly but calculated from the beam radiation on a horizontal surface and the sun elevation angle.
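
For reference, the direct (pyrheliometric) definition works as in the sketch below: convert beam radiation on the horizontal surface to direct normal irradiance using the sun elevation angle, and count the time during which it reaches the WMO threshold of 120 W/m². The hourly sample values are invented; the indirect algorithms compared in the paper estimate the same quantity from global radiation instead.

```python
import math

THRESHOLD = 120.0   # W/m2, WMO threshold defining "sunshine"

def sunshine_hours(beam_horizontal, elevation_deg, step_hours=1.0):
    """Sum the periods during which direct normal irradiance >= 120 W/m2.
    beam_horizontal: beam irradiance on a horizontal surface [W/m2]
    elevation_deg:   sun elevation angle [degrees] for the same instants."""
    total = 0.0
    for bh, elev in zip(beam_horizontal, elevation_deg):
        if elev <= 0:
            continue                                  # sun below the horizon
        dni = bh / math.sin(math.radians(elev))       # direct normal irradiance
        if dni >= THRESHOLD:
            total += step_hours
    return total

# Invented hourly values for one morning-to-evening sequence.
beam_h = [0, 20, 150, 380, 520, 600, 610, 540, 400, 180, 40, 0]
elev   = [-5, 5, 15, 30, 45, 55, 60, 55, 40, 25, 10, -2]
print("sunshine duration:", sunshine_hours(beam_h, elev), "hours")
```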

Keywords: Carpentras method, data fitting, global radiation, sunshine duration, Slob and Monna algorithm, step algorithm

Procedia PDF Downloads 123
4866 Medium Design and Optimization for High β-Galactosidase-Producing Microbial Strains from Dairy Waste through Fermentation

Authors: Ashish Shukla, K. P. Mishra, Pushplata Tripathi

Abstract:

This paper investigates the production and optimization of the β-galactosidase enzyme in a synthetic medium by isolated wild strains (S1, S2) and mutated strains (M1, M2) through solid-state fermentation (SSF) and submerged fermentation (SmF). Among the different cell disintegration methods used, the highest specific activity was obtained when the cells were permeabilized using isoamyl alcohol. Wet-lab experiments were performed to investigate the effects of the carbon and nitrogen substrates present in Vogel's medium on β-galactosidase activity using the S1, S2 and M1, M2 strains through SSF. SmF experiments were performed to study the effects of carbon and nitrogen sources in YLK2Mg medium on β-galactosidase activity using the S1, S2 and M1, M2 strains. The effect of pH on β-galactosidase production was also studied using the S1, S2 and M1, M2 strains. The results were found to be very appreciable in all cases.

Keywords: β-galactosidase, cell disintegration, permeabilized, SSF, SmF

Procedia PDF Downloads 269
4865 Optimization of a Method of Total RNA Extraction from Mentha piperita

Authors: Soheila Afkar

Abstract:

Mentha piperita is a medicinal plant that contains large amounts of secondary metabolites that have an adverse effect on RNA extraction. Since high-quality RNA is the first requirement for real-time PCR, this study evaluated the optimization of total RNA isolation from leaf tissues of Mentha piperita. From this point of view, we tested two different total RNA extraction methods on leaves of Mentha piperita to find the one that yields the highest quality: RNX-plus and modified RNX-plus (numbers 1-5). RNA quality was analyzed on a 1.5% agarose gel. The RNA integrity was also assessed by visualization of the ribosomal RNA bands on 1.5% agarose gels. In the modified RNX-plus method (number 2), the integrity of the 28S and 18S rRNA was highly satisfactory when analyzed in a denaturing agarose gel, so this method is suitable for RNA isolation from Mentha piperita.

Keywords: Mentha piperita, polyphenol, polysaccharide, RNA extraction

Procedia PDF Downloads 189