Search results for: optimization algorithms
3650 Updating Stochastic Hosting Capacity Algorithm for Voltage Optimization Programs and Interconnect Standards
Authors: Nicholas Burica, Nina Selak
Abstract:
The ADHCAT (Automated Distribution Hosting Capacity Assessment Tool) was designed to run Hosting Capacity Analysis on the ComEd system via stochastic DER (Distributed Energy Resource) placement across multiple power flow simulations evaluated against a set of violation criteria. The violation criteria in the initial version of the tool captured only a limited number of the issues that individual departments design against for DER interconnections. Enhancements were made to the tool to align it more closely with individual department violation and operation criteria, and new modules were added for future load profile analysis. A reporting engine was created for future analytical use based on the simulations and observations in the tool.
Keywords: distributed energy resources, hosting capacity, interconnect, voltage optimization
Procedia PDF Downloads 180
3649 Machine Learning Techniques for Estimating Ground Motion Parameters
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site condition. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing them as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitude 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The choice of this database stems from the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states. The accuracy of the models in predicting intensity measures, the generalization capability of the models for future data, as well as the usability of the models are discussed in the evaluation process. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method and, in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data are available.
Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine
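For illustration, a minimal sketch of the Random Forest variant of such a ground-motion model on synthetic data (the feature names, data, and synthetic attenuation relation below are assumptions for the example, not the study's database or code):

```python
# Minimal sketch (not the authors' code): fit a Random Forest ground-motion model
# on hypothetical records with magnitude, hypocentral distance, and Vs30 as
# predictors and log-PGA as the target. Column names and data are illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "magnitude": rng.uniform(3.0, 5.8, n),
    "r_hyp_km": rng.uniform(4.0, 500.0, n),
    "vs30": rng.uniform(200.0, 800.0, n),
})
# Synthetic target with magnitude scaling and geometric attenuation, for illustration only.
df["log_pga"] = (1.2 * df["magnitude"]
                 - 1.5 * np.log(df["r_hyp_km"])
                 - 0.3 * np.log(df["vs30"] / 760.0)
                 + rng.normal(0.0, 0.4, n))

X_train, X_test, y_train, y_test = train_test_split(
    df[["magnitude", "r_hyp_km", "vs30"]], df["log_pga"], random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```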
Procedia PDF Downloads 117
3648 Design Optimization and Thermoacoustic Analysis of Pulse Tube Cryocooler Components
Authors: K. Aravinth, C. T. Vignesh
Abstract:
The use of pulse tube cryocoolers has increased significantly, mainly owing to the advantage of having no moving parts. The underlying idea of this project is to optimize the design of the pulse tube, regenerator, and resonator of the cryocooler and to analyze the thermoacoustic oscillations with respect to the design parameters. A Computational Fluid Dynamics (CFD) model with time-dependent validation is used to predict performance. The continuity, momentum, and energy equations are solved for the various porous media regions. The effect of changing the geometries and orientation on performance will be investigated and validated. The pressure, temperature and velocity fields in the regenerator and pulse tube are evaluated. The performance results of this optimized design will be compared with the existing pulse tube cryocooler design. The sinusoidal behavior of the cryocooler and the acoustic streaming patterns in the pulse tube will also be evaluated.
Keywords: acoustics, cryogenics, design, optimization
Procedia PDF Downloads 162
3647 Optimization of Flexible Job Shop Scheduling Problem with Sequence-Dependent Setup Times Using Genetic Algorithm Approach
Authors: Sanjay Kumar Parjapati, Ajai Jain
Abstract:
This paper presents makespan optimization for an ‘n’-job, ‘m’-machine flexible job shop scheduling problem with sequence-dependent setup times using a genetic algorithm (GA) approach. A restart scheme has also been applied to prevent premature convergence. Two case studies are taken into consideration. Results are obtained by considering a crossover probability pc = 0.85 and a mutation probability pm = 0.15. Five simulation runs are performed for each case study, and the minimum value among them is taken as the optimal makespan. Results indicate that the optimal makespan can be achieved with more than one sequence of jobs in a production order.
Keywords: flexible job shop, genetic algorithm, makespan, sequence dependent setup times
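A rough sketch of the GA-with-restart idea, using the crossover and mutation probabilities quoted in the abstract. This is a deliberate simplification: a single-machine sequencing problem with sequence-dependent setups stands in for the flexible job shop encoding (machine assignment plus operation sequencing) used in the paper; the processing and setup data are made up.

```python
# Illustrative sketch only: GA with ordered crossover, swap mutation, and a
# restart-on-stagnation scheme on a simplified single-machine problem with
# sequence-dependent setup times. pc = 0.85, pm = 0.15 follow the abstract.
import random

random.seed(1)
N_JOBS = 8
proc = [random.randint(5, 20) for _ in range(N_JOBS)]
setup = [[random.randint(1, 6) for _ in range(N_JOBS)] for _ in range(N_JOBS)]

def makespan(seq):
    t, prev = 0, None
    for j in seq:
        t += (setup[prev][j] if prev is not None else 0) + proc[j]
        prev = j
    return t

def ox_crossover(p1, p2):
    a, b = sorted(random.sample(range(N_JOBS), 2))
    child = [None] * N_JOBS
    child[a:b] = p1[a:b]
    fill = [j for j in p2 if j not in child]
    for i in range(N_JOBS):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def evolve(pop_size=30, generations=200, pc=0.85, pm=0.15, stall_limit=30):
    pop = [random.sample(range(N_JOBS), N_JOBS) for _ in range(pop_size)]
    best, stall = min(pop, key=makespan), 0
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop, 2)
            child = ox_crossover(p1, p2) if random.random() < pc else p1[:]
            if random.random() < pm:
                i, j = random.sample(range(N_JOBS), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = sorted(nxt + [best], key=makespan)[:pop_size]   # elitism
        if makespan(pop[0]) < makespan(best):
            best, stall = pop[0], 0
        else:
            stall += 1
        if stall >= stall_limit:   # restart scheme against premature convergence
            pop = [random.sample(range(N_JOBS), N_JOBS) for _ in range(pop_size)]
            stall = 0
    return best, makespan(best)

print(evolve())
```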
Procedia PDF Downloads 321
3646 Optimization of Black Grass Jelly Formulation to Reduce Leaching and Increase Floating Rate
Authors: M. M. Nor, H. I. Sheikh, M. F. H. Hassan, S. Mokhtar, A. Suganthi, A. Fadhlina
Abstract:
Black grass jelly (BGJ) is a popular black jelly used in preparing various drinks and desserts. Food industries often use preservatives to maintain the physicochemical properties of foods, such as color and texture. These preservatives (e.g., phosphoric acid) are linked with deleterious health effects such as kidney disease. Using the gelling agents carrageenan and gelatin to make BGJ could improve its physicochemical and textural properties. This study was designed to optimize selected physicochemical and textural properties of BGJ using carrageenan and gelatin. Various black grass jelly formulations (BGJF) were designed using an I-optimal mixture design in Design Expert® software. Data from commercial BGJ were used as a reference during the optimization process. The combined amount of carrageenan and gelatin added to the formulations was up to 14.38 g (~5%). The results showed that adding 2.5 g carrageenan and 2.5 g gelatin, approximately 5 g in total (~5%), effectively maintained most of the physicochemical properties, with an overall desirability function of 0.81. This formulation was selected as the optimum black grass jelly formulation (OBGJF). The leaching properties and floating duration were measured on the OBGJF and commercial grass jelly for 20 min and 40 min, respectively. The results indicated that the OBGJF showed a significantly (p<0.0001) lower leaching rate and a lower floating time (p<0.05). Hence, further optimization is needed to increase the floating duration of carrageenan- and gelatin-based BGJ.
Keywords: cincau, Mesona chinensis, black grass jelly, carrageenan, gelatin
Procedia PDF Downloads 74
3645 Iterative Replanning of Diesel Generator and Energy Storage System for Stable Operation of an Isolated Microgrid
Authors: Jiin Jeong, Taekwang Kim, Kwang Ryel Ryu
Abstract:
The target microgrid in this paper is isolated from the large central power system and is assumed to consist of wind generators, photovoltaic power generators, an energy storage system (ESS), a diesel power generator, the community load, and a dump load. The operation of such a microgrid can be hazardous because of the uncertainty in predicting power supply and demand, and especially because of the high fluctuation of the output from the wind generators. In this paper, we propose an iterative replanning method for determining the appropriate level of diesel generation and the charging/discharging cycles of the ESS for the upcoming one-hour horizon. To cope with the uncertainty in the estimation of supply and demand, the one-hour plan is rebuilt repeatedly at regular one-minute intervals by rolling the one-hour horizon. Since the plan should be built with a sufficiently large safety margin to avoid any possible black-out, some energy waste through the dump load is inevitable. In our approach, the level of the safety margin is optimized through learning from past experience. The simulation experiments show that our method, combined with the margin optimization, can reduce the dump load compared to the method without such optimization.
Keywords: microgrid, operation planning, power efficiency optimization, supply and demand prediction
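A toy sketch of the rolling-horizon replanning loop with an experience-adapted safety margin. Everything here (system sizes, forecast model, balance rules, the margin update) is an assumption made up for illustration; it only mirrors the structure described in the abstract, not the paper's model.

```python
# Minimal sketch: a one-hour plan rebuilt every minute on a rolling horizon, with
# a diesel safety margin that is raised after near black-out events.
import random

random.seed(0)
ESS_CAP, soc = 100.0, 50.0           # kWh capacity, current state of charge
margin = 5.0                         # kW safety margin, adapted from experience
dump_total = 0.0

def forecast_net_load(minute, horizon=60):
    # placeholder forecast of (demand - renewables) for the next `horizon` minutes
    return [20.0 + 10.0 * random.random() for _ in range(horizon)]

for minute in range(240):            # simulate 4 hours
    plan = forecast_net_load(minute)
    diesel_setpoint = max(0.0, plan[0] - (soc > 10.0) * 10.0) + margin
    actual_net_load = plan[0] + random.uniform(-8.0, 8.0)     # realization differs from forecast
    balance = diesel_setpoint - actual_net_load               # positive -> surplus
    if balance >= 0:
        charge = min(balance / 60.0, ESS_CAP - soc)           # kWh charged this minute
        soc += charge
        dump_total += balance / 60.0 - charge                 # remainder burned in the dump load
    else:
        discharge = min(-balance / 60.0, soc)
        soc -= discharge                                       # shortfall covered by ESS
        if discharge < -balance / 60.0:
            margin += 0.5                                      # learn: raise margin after a near black-out
print(f"energy wasted in dump load: {dump_total:.1f} kWh, final margin: {margin:.1f} kW")
```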
Procedia PDF Downloads 426
3644 The Impact of Transaction Costs on Rebalancing an Investment Portfolio in Portfolio Optimization
Authors: B. Marasović, S. Pivac, S. V. Vukasović
Abstract:
Constructing a portfolio of investments is one of the most significant financial decisions facing individuals and institutions. In accordance with modern portfolio theory, maximization of return at minimal risk should be the investment goal of any successful investor. In addition, the costs incurred when setting up a new portfolio or rebalancing an existing portfolio must be included in any realistic analysis. In this paper, rebalancing an investment portfolio in the presence of transaction costs on the Croatian capital market is analyzed. The model applied in the paper is an extension of the standard portfolio mean-variance optimization model in which transaction costs are incurred to rebalance an investment portfolio. This model allows different costs for different securities, and different costs for buying and selling. In order to find an efficient portfolio using this model, first the solution of a quadratic programming problem of similar size to the Markowitz model, and then the solution of a linear programming problem, have to be found. Furthermore, the impact of transaction costs on the efficient frontier is investigated. Moreover, it is shown that the global minimum variance portfolio on the efficient frontier always has the same level of risk regardless of the amount of transaction costs. Although the position of the efficient frontier depends on both the amount of transaction costs and the initial portfolio, it can be concluded that the extreme right portfolio on the efficient frontier always contains only one stock, the one with the highest expected return and the highest risk.
Keywords: Croatian capital market, Markowitz model, fractional quadratic programming, portfolio optimization, transaction costs
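One common way to write such an extension of the mean-variance model is sketched below; this is a generic formulation under the stated assumptions (proportional, security- and direction-specific costs paid from the budget), and the paper's exact model, solved via fractional quadratic programming, may differ.

```latex
\begin{align*}
\min_{x,\,b,\,s}\quad & x^{\top}\Sigma\,x \\
\text{s.t.}\quad
  & x = x^{0} + b - s, \\
  & \mathbf{1}^{\top}x + c_{b}^{\top}b + c_{s}^{\top}s = 1, \\
  & \mu^{\top}x \ge R, \qquad b \ge 0,\; s \ge 0,
\end{align*}
```

Here x is the rebalanced portfolio, x0 the initial portfolio, b and s the amounts bought and sold, Sigma and mu the covariance matrix and expected returns, R the required return, and c_b, c_s the per-security transaction-cost rates for buying and selling.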
Procedia PDF Downloads 375
3643 Process Optimization for Albanian Crude Oil Characterization
Authors: Xhaklina Cani, Ilirjan Malollari, Ismet Beqiraj, Lorina Lici
Abstract:
Oil characterization is an essential step in the design, simulation, and optimization of refining facilities. To achieve optimal crude selection and processing decisions, a refiner must have exact information regarding crude oil quality. This includes the crude oil TBP curve, the main data required for correct operation of refinery crude oil atmospheric distillation plants. Crude oil is typically characterized based on a distillation assay. This procedure is reasonably well-defined and is based on the representation of the mixture of actual components that boil within a boiling point interval by hypothetical components that boil at the average boiling temperature of the interval. The crude oil assay typically includes TBP distillation according to ASTM D-2892, which can characterize the part of the oil that boils up to a 400 °C atmospheric equivalent boiling point. To model the yield curves obtained by physical distillation, it is necessary to compare the differences between the modelled and the experimental data. Most commercial simulators use a different number of components and pseudo-components to represent crude oil. Laboratory tests include distillations, vapor pressures, flash points, pour points, cetane numbers, octane numbers, densities, and viscosities. The aim of the study is to draw true boiling point curves for different crude oil resources in Albania and to compare the differences between the modelled and the experimental data for optimal characterization of crude oil.
Keywords: TBP distillation curves, crude oil, optimization, simulation
Procedia PDF Downloads 292
3642 Multi-Objective Optimization of an Aerodynamic Feeding System Using Genetic Algorithm
Authors: Jan Busch, Peter Nyhuis
Abstract:
Considering the challenges of short product life cycles and growing variant diversity, cost minimization and manufacturing flexibility increasingly gain importance to maintain a competitive edge in today’s global and dynamic markets. In this context, an aerodynamic part feeding system for high-speed industrial assembly applications has been developed at the Institute of Production Systems and Logistics (IFA), Leibniz Universitaet Hannover. The aerodynamic part feeding system outperforms conventional systems with respect to its process safety, reliability, and operating speed. In this paper, a multi-objective optimisation of the aerodynamic feeding system regarding the orientation rate, the feeding velocity and the required nozzle pressure is presented.
Keywords: aerodynamic feeding system, genetic algorithm, multi-objective optimization, workpiece orientation
Procedia PDF Downloads 568
3641 Optimization of Technical and Technological Solutions for the Development of Offshore Hydrocarbon Fields in the Kaliningrad Region
Authors: Pavel Shcherban, Viktoria Ivanova, Alexander Neprokin, Vladislav Golovanov
Abstract:
Currently, LLC «Lukoil-Kaliningradmorneft» is implementing a comprehensive program for the development of offshore fields of the Kaliningrad region. This is largely associated with the depletion of the onshore resource base of the region, as well as with the positive results of geological investigation of the surrounding Baltic Sea area and the data on the volume of hydrocarbon recovery from the single offshore field currently operating in the Kaliningrad region – D-6 «Kravtsovskoye». The article analyzes the main stages of LLC «Lukoil-Kaliningradmorneft»’s program for the development of the hydrocarbon resources of the region's shelf and suggests an optimization algorithm that allows managing the multi-criteria process of developing shelf deposits. The algorithm is formed on the basis of the problem of sequential decision making, which is a branch of dynamic programming. Application of the algorithm during the consolidation of the initial data, the elaboration of project documentation, and the further exploration and development of offshore fields will allow the complex of technical and technological solutions to be optimized and will increase the economic efficiency of the field development project implemented by LLC «Lukoil-Kaliningradmorneft».
Keywords: offshore fields of hydrocarbons of the Baltic Sea, development of offshore oil and gas fields, optimization of the field development scheme, solution of multicriteria tasks in oil and gas complex, quality management in oil and gas complex
Procedia PDF Downloads 191
3640 Web Development in Information Technology with Javascript, Machine Learning and Artificial Intelligence
Authors: Abdul Basit Kiani, Maryam Kiani
Abstract:
Online developers now have the tools necessary to create online apps that are not only reliable but also highly interactive, thanks to the introduction of JavaScript frameworks and APIs. The objective is to give a broad overview of the recent advances in the area. The fusion of machine learning (ML) and artificial intelligence (AI) has expanded the possibilities for web development. Modern websites now include chatbots, clever recommendation systems, and customization algorithms built in. In the rapidly evolving landscape of modern websites, it has become increasingly apparent that user engagement and personalization are key factors for success. To meet these demands, websites now incorporate a range of innovative technologies. One such technology is chatbots, which provide users with instant assistance and support, enhancing their overall browsing experience. These intelligent bots are capable of understanding natural language and can answer frequently asked questions, offer product recommendations, and even help with troubleshooting. Moreover, clever recommendation systems have emerged as a powerful tool on modern websites. By analyzing user behavior, preferences, and historical data, these systems can intelligently suggest relevant products, articles, or services tailored to each user's unique interests. This not only saves users valuable time but also increases the chances of conversions and customer satisfaction. Additionally, customization algorithms have revolutionized the way websites interact with users. By leveraging user preferences, browsing history, and demographic information, these algorithms can dynamically adjust the website's layout, content, and functionalities to suit individual user needs. This level of personalization enhances user engagement, boosts conversion rates, and ultimately leads to a more satisfying online experience. In summary, the integration of chatbots, clever recommendation systems, and customization algorithms into modern websites is transforming the way users interact with online platforms. These advanced technologies not only streamline user experiences but also contribute to increased customer satisfaction, improved conversions, and overall website success.
Keywords: Javascript, machine learning, artificial intelligence, web development
Procedia PDF Downloads 66
3639 Optimization Techniques of Doubly-Fed Induction Generator Controller Design for Reliability Enhancement of Wind Energy Conversion Systems
Authors: Om Prakash Bharti, Aanchal Verma, R. K. Saket
Abstract:
The Doubly-Fed Induction Generator (DFIG) is suggested for Wind Energy Conversion Systems (WECS) to extract wind power. DFIG is preferably employed due to its robustness towards variable wind and rotor speeds. DFIG is adaptable because the system parameters, including real power, reactive power, DC-link voltage, and the transient and dynamic responses, can be handled smoothly, although they need to be analyzed constantly. The analysis becomes more important during any unusual condition in the electrical power system. Hence, the study and improvement of the system parameters and transient response performance of DFIG need to be accomplished using suitable controlling techniques. To fulfill this task, the present work implements and compares optimization methods for the design of the DFIG controller for WECS. Bio-inspired optimization techniques are applied to obtain the optimal controller design parameters for DFIG-based WECS. The optimized DFIG controllers are then used to retrieve the transient response performance of the sixth-order DFIG model with a step input. The results obtained using MATLAB/Simulink show that the Firefly algorithm (FFA) outperforms the other controller design techniques.
Keywords: doubly-fed induction generator, wind turbine, wind energy conversion system, induction generator, transfer function, proportional, integral, derivatives
Procedia PDF Downloads 84
3638 Wind Turbines Optimization: Shield Structure for High Wind Speed Conditions
Authors: Daniyar Seitenov, Nazim Mir-Nasiri
Abstract:
Optimization of a horizontal axis semi-exposed wind turbine has been performed using a shield that automatically protects the generator shaft from overspeeding and mechanical damage at extreme wind speeds while the turbine continues generating electricity during high wind speed conditions. A semi-exposed wind generator has been designed, and its structure is described in this paper. A simplified point-force dynamic load model on the blades has been derived for normal and extreme wind conditions, with and without shield involvement. Numerical simulation has been conducted at different values of wind speed to study the efficiency of the shield application. The obtained results show that the maximum power generated by the wind turbine with the shield remains approximately at the rated value of the generator, where the shield serves as an automatic brake for extreme wind speeds of 15 m/sec and above. Meanwhile, the wind turbine without the shield produced power much larger than the rated value. The optimized horizontal axis semi-exposed wind turbine with shield protection is suitable for low and medium power generation when installed on the roofs of high-rise buildings for harvesting wind energy. The wind shield works automatically with no power consumption. The structure of the generator with the protection and the mathematical simulation of the kinematics and dynamics of power generation are described in detail in this paper.
Keywords: renewable energy, wind turbine, wind turbine optimization, high wind speed
Procedia PDF Downloads 166
3637 Flexible Design Solutions for Complex Free form Geometries Aimed to Optimize Performances and Resources Consumption
Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu
Abstract:
By using smart digital tools, such as generative design (GD) and digital fabrication (DF), highly topical problems concerning resource optimization (materials, energy, time) can be solved, and applications or products of the free-form type can be created. In the new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps of the design procedure for a free-form architectural object, a column-type object with connections forming an adaptive 3D surface, obtained by using the parametric design methodology and by exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying the parameter values, and the relationships between the forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems, cellular automata, genetic algorithms or swarm intelligence, each of these procedures having limitations which make them applicable only in certain cases. In the paper, the design process stages and the shape-grammar-type algorithm are presented. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process, creating many 3D spatial forms using an algorithm conceived to apply its generating logic onto different input geometry. Once the algorithm is realized, it can be applied repeatedly to generate the geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected. The endless generative capacity of the codes and algorithms used in digital design offers various conceptual possibilities and optimal solutions for the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned in order to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process) and unique geometric models of high performance.
Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture
Procedia PDF Downloads 361
3636 Adaption of the Design Thinking Method for Production Planning in the Meat Industry Using Machine Learning Algorithms
Authors: Alica Höpken, Hergen Pargmann
Abstract:
The resource-efficient planning of the complex production planning processes in the meat industry and the reduction of food waste is a permanent challenge. The complexity of the production planning process occurs in every part of the supply chain, from agriculture to the end consumer. It arises from long and uncertain planning phases. Uncertainties such as stochastic yields, fluctuations in demand, and resource variability are part of this process. In the meat industry, waste mainly relates to incorrect storage, technical causes in production, or overproduction. Until now, the high amount of food waste along the complex supply chain in the meat industry could not be reduced by simple solutions. Therefore, resource-efficient production planning by conventional methods is currently only partially feasible. The realization of intelligent, automated production planning is fundamentally possible through the application of machine learning algorithms, such as those of reinforcement learning. By applying the adapted design thinking method, machine learning methods (especially reinforcement learning algorithms) are used for the complex production planning process in the meat industry. This method represents a concretization of the approach for this application area. A resource-efficient production planning process is made available by adapting the design thinking method. In addition, the complex processes can be planned efficiently by using this method, since this standardized approach offers new possibilities for dealing with the complexity and the high time consumption. It represents a tool to support efficient production planning in the meat industry. This paper shows an elegant adaptation of the design thinking method to apply reinforcement learning for a resource-efficient production planning process in the meat industry. Subsequently, the steps that are necessary to introduce machine learning algorithms into the production planning of the food industry are determined. This is achieved based on a case study which is part of the research project "REIF - Resource Efficient, Economic and Intelligent Food Chain", supported by the German Federal Ministry for Economic Affairs and Climate Action and the German Aerospace Center. Through this structured approach, significantly better planning results are achieved, which would be too complex or very time-consuming using conventional methods.
Keywords: change management, design thinking method, machine learning, meat industry, reinforcement learning, resource-efficient production planning
Procedia PDF Downloads 115
3635 Image Compression on Region of Interest Based on SPIHT Algorithm
Authors: Sudeepti Dayal, Neelesh Gupta
Abstract:
Image compression is utilized for reducing the size of a file without degrading the quality of the image to an objectionable level. The reduction in file size permits more images to be stored in a given amount of space. It also reduces the time necessary for images to be transferred. Storage of medical images is a much-researched area in the current scenario. To store a medical image, the image is divided into two parts: regions of interest and non-regions of interest. The best way to store an image is to compress it in such a way that no important information is lost. Compression can be done in two ways, namely lossy and lossless compression, under which several compression algorithms are applied. In this paper, two algorithms are used: the discrete cosine transform, applied to the non-regions of interest (lossy), and the discrete wavelet transform, applied to the regions of interest (lossless). The paper introduces the set partitioning in hierarchical trees (SPIHT) algorithm, which is applied to the wavelet transform to obtain a good compression ratio, so that an image can be stored efficiently.
Keywords: compression ratio, DWT, SPIHT, DCT
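A very rough sketch of the ROI / non-ROI split is given below; the image and ROI mask are synthetic, the SPIHT coding stage of the wavelet coefficients is not shown, and the wavelet/quantization choices are illustrative assumptions rather than the paper's settings.

```python
# Rough illustration (not the paper's implementation): a DWT for the ROI block
# (whose coefficients would then be SPIHT-coded) and a coarsely quantized DCT
# for the background.
import numpy as np
import pywt
from scipy.fft import dctn, idctn

image = np.random.default_rng(0).integers(0, 256, (128, 128)).astype(float)
roi = image[32:96, 32:96]            # region of interest (e.g., around a lesion)
background = image.copy()
background[32:96, 32:96] = 0.0

# ROI: 2-level DWT; these coefficients would be passed to a SPIHT coder.
roi_coeffs = pywt.wavedec2(roi, "bior4.4", level=2)

# Background: DCT followed by coarse quantization (lossy).
q = 50.0
bg_dct = np.round(dctn(background, norm="ortho") / q) * q
background_rec = idctn(bg_dct, norm="ortho")

roi_rec = pywt.waverec2(roi_coeffs, "bior4.4")
print("ROI max reconstruction error:", np.abs(roi_rec - roi).max())
```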
Procedia PDF Downloads 339
3634 Optimization of End Milling Process Parameters for Minimization of Surface Roughness of AISI D2 Steel
Authors: Pankaj Chandna, Dinesh Kumar
Abstract:
The present work analyses different parameters of end milling to minimize the surface roughness of AISI D2 steel. D2 steel is generally used for stamping or forming dies, punches, forming rolls, knives, slitters, shear blades, tools, scrap choppers, tyre shredders, etc. Surface roughness is one of the main indices that determine the quality of machined products and is influenced by various cutting parameters. In machining operations, achieving the desired surface quality by optimization of machining parameters is a challenging job. In the case of mating components, surface roughness becomes even more essential, because these quality characteristics are highly correlated and are expected to be influenced directly or indirectly by the process parameters or their interaction effects. In this work, the effects of the selected process parameters on surface roughness, and the subsequent setting of the parameters at suitable levels, have been accomplished by Taguchi’s parameter design approach. The experiments have been performed as per the combinations of levels of the different process parameters suggested by the L9 orthogonal array. Experimental investigation of the end milling of AISI D2 steel with a carbide tool has been carried out by varying feed, speed and depth of cut, and the surface roughness has been measured using a surface roughness tester. Analyses of variance have been performed on the mean and the signal-to-noise ratio to estimate the contribution of the different process parameters to the process.
Keywords: D2 steel, orthogonal array, optimization, surface roughness, Taguchi methodology
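As a small worked example of the Taguchi analysis step, the snippet below computes the "smaller-the-better" signal-to-noise ratio used when minimizing roughness; the Ra values are fabricated for illustration and are not the study's measurements.

```python
# Smaller-the-better S/N ratio, S/N = -10 * log10( mean(y^2) ), for each of the
# 9 runs of an L9 orthogonal array (two hypothetical replicates per run).
import numpy as np

ra = np.array([          # Ra measurements in micrometres -- illustrative only
    [0.82, 0.86], [0.74, 0.70], [0.91, 0.95],
    [0.66, 0.63], [0.58, 0.61], [0.77, 0.80],
    [0.69, 0.72], [0.54, 0.57], [0.88, 0.84],
])

sn = -10.0 * np.log10(np.mean(ra ** 2, axis=1))
for run, value in enumerate(sn, start=1):
    print(f"run {run}: S/N = {value:.2f} dB")
print("best run (highest S/N):", int(np.argmax(sn)) + 1)
```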
Procedia PDF Downloads 532
3633 An Application to Predict the Best Study Path for Information Technology Students in Learning Institutes
Authors: L. S. Chathurika
Abstract:
Early prediction of student performance is an important factor in achieving academic excellence. Whatever the study stream in secondary education, students lay the foundation for higher studies during the first year of their degree or diploma program in Sri Lanka. The information technology (IT) field brings certain improvements to the education domain by letting students select specialization areas that showcase their talents and skills. These specializations can be software engineering, network administration, database administration, multimedia design, etc. After completing the first year, students attempt to select the best path by considering numerous factors. The purpose of this experiment is to predict the best study path using machine learning algorithms. Five classification algorithms are selected and tested: decision tree, support vector machine, artificial neural network, Naïve Bayes, and logistic regression. The support vector machine obtained the highest accuracy, 82.4%. The most influential features are then identified to select the best study path.
Keywords: algorithm, classification, evaluation, features, testing, training
Procedia PDF Downloads 111
3632 Comparative Performance of Artificial Bee Colony Based Algorithms for Wind-Thermal Unit Commitment
Authors: P. K. Singhal, R. Naresh, V. Sharma
Abstract:
This paper presents three optimization models, namely the New Binary Artificial Bee Colony (NBABC) algorithm, NBABC with Local Search (NBABC-LS), and NBABC with Genetic Crossover (NBABC-GC), for solving the Wind-Thermal Unit Commitment (WTUC) problem. The uncertain nature of the wind power is incorporated using the Weibull probability density function, which is used to calculate the overestimation and underestimation costs associated with the wind power fluctuation. The NBABC algorithm utilizes a mechanism based on the dissimilarity measure between binary strings for generating the binary solutions in the WTUC problem. In the NBABC algorithm, an intelligent scout bee phase is proposed that replaces the abandoned solution with the global best solution. The local search operator exploits the neighboring region of the current solutions, whereas the integration of genetic crossover with the NBABC algorithm increases the diversity in the search space and thus avoids the problem of local trapping encountered with the NBABC algorithm. These models are then used to decide the units' on/off status, whereas the lambda iteration method is used to dispatch the hourly load demand among the committed units. The effectiveness of the proposed models is validated on an IEEE 10-unit thermal system combined with a wind farm over a planning period of 24 hours.
Keywords: artificial bee colony algorithm, economic dispatch, unit commitment, wind power
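The lambda-iteration dispatch step mentioned in the abstract can be sketched as follows; the quadratic cost coefficients, limits, and demand below are illustrative placeholders, not the IEEE 10-unit data, and the bisection on lambda is one common way to implement the iteration.

```python
# Lambda-iteration economic dispatch sketch for committed thermal units. With a
# cost of a_i*P^2 + b_i*P + c_i, each unit's optimum at a given lambda is
# P = (lambda - b_i) / (2*a_i), clipped to its limits; lambda is bisected until
# the committed output meets the demand.
a = [0.008, 0.010, 0.012]     # $/MW^2 h
b = [7.0, 8.0, 9.0]           # $/MWh
p_min = [50.0, 40.0, 30.0]
p_max = [300.0, 250.0, 200.0]
demand = 500.0                # MW to be supplied by the committed units

def dispatch(lmbda):
    return [min(max((lmbda - b[i]) / (2.0 * a[i]), p_min[i]), p_max[i])
            for i in range(len(a))]

lo, hi = 0.0, 50.0
for _ in range(60):           # bisection on lambda
    mid = 0.5 * (lo + hi)
    if sum(dispatch(mid)) < demand:
        lo = mid
    else:
        hi = mid
p = dispatch(0.5 * (lo + hi))
print([round(x, 1) for x in p], "total =", round(sum(p), 1), "MW")
```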
Procedia PDF Downloads 365
3631 Design and Optimization of Soil Nailing Construction
Authors: Fereshteh Akbari, Farrokh Jalali Mosalam, Ali Hedayatifar, Amirreza Aminjavaheri
Abstract:
Soil nailing is an effective method to stabilize slopes and retaining structures. The lateral and vertical displacements of retaining walls are therefore important criteria for evaluating the safety risks to adjacent structures. This paper is devoted to the optimization of retaining walls based on ABAQUS software. The effects of various parameters, such as nail length, orientation, arrangement, horizontal spacing, and bond skin friction, on the lateral and vertical displacement of retaining walls are investigated. In order to ensure accuracy, the mobilized shear stress acting around the perimeter of the nail-soil interface is also modeled in ABAQUS. The observed trends in the results are compared to previous research.
Keywords: retaining walls, soil nailing, ABAQUS software, lateral displacement, vertical displacement
Procedia PDF Downloads 113
3630 Functional and Efficient Query Interpreters: Principle, Application and Performances’ Comparison
Authors: Laurent Thiry, Michel Hassenforder
Abstract:
This paper presents a general approach to implementing efficient query interpreters in a functional programming language. Indeed, most of the standard tools currently available use an imperative and/or object-oriented language for the implementation (e.g. Java for Jena-Fuseki), but other paradigms are possible, perhaps with better performance. To proceed, the paper first explains how to model data structures and queries from a functional point of view. Then, it proposes a general methodology for measuring performance (i.e. the number of computation steps needed to answer a query), and it explains how to integrate some optimization techniques (short-cut fusion and, more importantly, data transformations). It then compares the proposed functional server to a standard tool (Fuseki), demonstrating that the former can be two to ten times faster at answering queries.
Keywords: data transformation, functional programming, information server, optimization
Procedia PDF Downloads 148
3629 The Influence of Covariance Hankel Matrix Dimension on Algorithms for VARMA Models
Authors: Celina Pestano-Gabino, Concepcion Gonzalez-Concepcion, M. Candelaria Gil-Fariña
Abstract:
Some estimation methods for VARMA models, and Multivariate Time Series Models in general, rely on the use of a Hankel matrix. It is known that if the data sample is populous enough and the dimension of the Hankel matrix is unnecessarily large, this may result in an unnecessary number of computations as well as in numerical problems. In this sense, the aim of this paper is two-fold. First, we provide some theoretical results for these matrices which translate into a lower dimension for the matrices normally used in the algorithms. This contribution thus serves to improve those methods from a numerical and, presumably, statistical point of view. Second, we have chosen an estimation algorithm to illustrate in practice our improvements. The results we obtained in a simulation of VARMA models show that an increase in the size of the Hankel matrix beyond the theoretical bound proposed as valid does not necessarily lead to improved practical results. Therefore, for future research, we propose conducting similar studies using any of the linear system estimation methods that depend on Hankel matrices.
Keywords: covariances Hankel matrices, Kronecker indices, system identification, VARMA models
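For readers unfamiliar with the object under discussion, a small helper that builds the block Hankel matrix of sample autocovariances is sketched below; the series, block sizes, and rank tolerance are illustrative assumptions, not the paper's experimental setup.

```python
# Build the block Hankel matrix of sample autocovariance matrices Gamma_1, Gamma_2, ...
# of a k-variate series, as used in Hankel-based identification of VARMA models.
# Block counts p and q control the matrix dimension discussed in the abstract.
import numpy as np

def autocov(x, lag):
    """Sample autocovariance matrix Gamma_lag of a (T, k) series."""
    xc = x - x.mean(axis=0)
    T = xc.shape[0]
    return xc[lag:].T @ xc[:T - lag] / T

def covariance_hankel(x, p, q):
    """Block Hankel matrix whose (i, j) block is Gamma_{i+j+1}, 0 <= i < p, 0 <= j < q."""
    k = x.shape[1]
    H = np.zeros((p * k, q * k))
    for i in range(p):
        for j in range(q):
            H[i * k:(i + 1) * k, j * k:(j + 1) * k] = autocov(x, i + j + 1)
    return H

rng = np.random.default_rng(0)
e = rng.standard_normal((500, 2))
x = np.zeros_like(e)
for t in range(1, 500):
    x[t] = 0.6 * x[t - 1] + e[t]          # placeholder VAR(1) series
H = covariance_hankel(x, p=3, q=3)
print(H.shape, "numerical rank:", np.linalg.matrix_rank(H, tol=1e-2))
```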
Procedia PDF Downloads 232
3628 Mechanical Characterization of Porcine Skin with the Finite Element Method Based Inverse Optimization Approach
Authors: Djamel Remache, Serge Dos Santos, Michael Cliez, Michel Gratton, Patrick Chabrand, Jean-Marie Rossi, Jean-Louis Milan
Abstract:
Skin tissue is an inhomogeneous and anisotropic material. Uniaxial tensile testing is one of the primary testing techniques for the mechanical characterization of skin at large scales. In order to predict the mechanical behavior of materials, direct or inverse analytical approaches are often used. However, in the case of an inhomogeneous and anisotropic material such as skin tissue, analytical approaches are not able to provide solutions, and numerical simulation is thus necessary. In this work, the uniaxial tensile test and the FEM (finite element method) based inverse method were used to identify the anisotropic mechanical properties of porcine skin tissue. The uniaxial tensile experiments were performed using an Instron 8800 tensile machine®. The uniaxial tensile test was simulated with FEM, and then the inverse optimization approach (or inverse calibration) was used for the identification of the mechanical properties of the samples. Experimental results were compared to finite element solutions. The results showed that the finite element model predictions of the mechanical behavior of the tested skin samples correlated well with the experimental results.
Keywords: mechanical skin tissue behavior, uniaxial tensile test, finite element analysis, inverse optimization approach
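The inverse calibration loop can be sketched conceptually as below. In the real workflow the forward model is a finite element simulation of the tensile test; here it is replaced by a cheap analytical stand-in so the example runs, and the parameter names, constitutive form, and "measured" data are all assumptions for illustration.

```python
# Conceptual sketch of FEM-based inverse calibration: adjust model parameters so
# that the forward prediction matches the measured stress-strain curve.
import numpy as np
from scipy.optimize import least_squares

strain = np.linspace(0.0, 0.4, 30)

def forward_model(params, strain):
    """Stand-in for the FEM prediction of nominal stress vs. strain."""
    stiffness, nonlinearity = params
    return stiffness * strain * np.exp(nonlinearity * strain)

# "Experimental" curve generated from known parameters plus noise, for illustration.
true_params = np.array([2.5, 4.0])
rng = np.random.default_rng(0)
measured = forward_model(true_params, strain) + rng.normal(0.0, 0.05, strain.size)

def residuals(params):
    return forward_model(params, strain) - measured

fit = least_squares(residuals, x0=np.array([1.0, 1.0]), bounds=([0.0, 0.0], [10.0, 10.0]))
print("identified parameters:", fit.x)     # should approach [2.5, 4.0]
```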
Procedia PDF Downloads 396
3627 Hybrid Hierarchical Routing Protocol for WSN Lifetime Maximization
Authors: H. Aoudia, Y. Touati, E. H. Teguig, A. Ali Cherif
Abstract:
Conceiving and developing routing protocols for wireless sensor networks requires consideration of constraints such as network lifetime and energy consumption. In this paper, we propose a hybrid hierarchical routing protocol named HHRP, combining a clustering mechanism and multipath optimization and taking into account residual energy and RSSI measures. HHRP dynamically classifies nodes into clusters in which coordinator nodes with extra privileges are able to manipulate messages, aggregate data, and ensure transmission between nodes according to TDMA and CDMA schedules. The reconfiguration of the network is carried out dynamically based on a threshold value associated with the number of nodes belonging to the smallest cluster. To show the effectiveness of the proposed approach HHRP, a comparative study with the LEACH protocol is illustrated in simulations.
Keywords: routing protocol, optimization, clustering, WSN
Procedia PDF Downloads 454
3626 Identifying Autism Spectrum Disorder Using Optimization-Based Clustering
Authors: Sharifah Mousli, Sona Taheri, Jiayuan He
Abstract:
Autism spectrum disorder (ASD) is a complex developmental condition involving persistent difficulties with social communication, restricted interests, and repetitive behavior. The challenges associated with ASD can interfere with an affected individual’s ability to function in social, academic, and employment settings. Although, to the best of our knowledge, there is no effective medication known to treat ASD, early intervention can significantly improve an affected individual’s overall development. Hence, an accurate diagnosis of ASD at an early phase is essential. The use of machine learning approaches improves and speeds up the diagnosis of ASD. In this paper, we focus on the application of unsupervised clustering methods to ASD, as the large volume of ASD data generated through hospitals, therapy centers, and mobile applications has no pre-existing labels. We conduct a comparative analysis using seven clustering approaches – K-means, agglomerative hierarchical, model-based, fuzzy C-means, affinity propagation, self-organizing maps, and learning vector quantization – as well as the recently developed optimization-based clustering (COMSEP-Clust) approach. We evaluate the performance of the clustering methods extensively on real-world ASD datasets encompassing different age groups: toddlers, children, adolescents, and adults. Our experimental results suggest that the COMSEP-Clust approach outperforms the other seven methods in recognizing ASD with well-separated clusters.
Keywords: autism spectrum disorder, clustering, optimization, unsupervised machine learning
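A minimal sketch of the comparison idea on synthetic data is shown below; the real study uses ASD screening datasets and additional methods (e.g., COMSEP-Clust, fuzzy C-means, SOM) not included here, and the silhouette score is just one possible evaluation criterion.

```python
# Compare a few off-the-shelf clustering methods on a stand-in dataset and
# score each partition with the silhouette coefficient.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, AgglomerativeClustering, AffinityPropagation
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=2, random_state=0)   # stand-in for screening features

candidates = {
    "k-means": KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X),
    "agglomerative": AgglomerativeClustering(n_clusters=2).fit_predict(X),
    "model-based (GMM)": GaussianMixture(n_components=2, random_state=0).fit_predict(X),
    "affinity propagation": AffinityPropagation(random_state=0).fit_predict(X),
}
for name, labels in candidates.items():
    print(f"{name:22s} silhouette = {silhouette_score(X, labels):.3f}")
```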
Procedia PDF Downloads 101
3625 A Novel Gateway Location Algorithm for Wireless Mesh Networks
Authors: G. M. Komba
Abstract:
The Internet Gateway (IGW) has greater capability than a simple Mesh Router (MR) and is responsible for routing most of the traffic from Mesh Clients (MCs) to the Internet backbone; however, IGWs are more expensive. Choosing strategic locations for the IGWs in the Backbone Wireless Mesh (BWM) is critical to the Wireless Mesh Network (WMN), and a good IGW location can mitigate a number of performance-related problems. In this paper, we propose a novel algorithm, namely the New Gateway Location Algorithm (NGLA), which aims to achieve four objectives: decreasing the network cost, minimizing delay, optimizing the throughput capacity, and installing as few IGWs as possible. Different from existing algorithms, the NGLA incrementally identifies IGWs, allocates mesh routers (MRs) to the identified IGWs, and finds feasible IGW locations while consistently preserving all Quality of Service (QoS) requirements. Simulation results show that the NGLA outperforms other algorithms by a large margin in terms of the number of IGWs, placing 40% fewer IGWs and achieving an 80% gain in throughput. Furthermore, the NGLA is easy to implement and can be employed for BWM.
Keywords: Wireless Mesh Network, Gateway Location Algorithm, Quality of Service, BWM
Procedia PDF Downloads 358
3624 Pilot Scale Deproteinization Study on Fish Scale Using Response Surface Methodology
Authors: Fatima Bellali, Mariem Kharroubi
Abstract:
Fish scale wastes are one of the main sources for the production of value-added products such as collagen. The main aim of this study is to investigate the optimal conditions for sardine scale deproteinization using response surface methodology (RSM) on a pilot scale. In order to find the optimal conditions, a Box–Behnken-based design of experiments (DOE) was carried out. The model-predicted values of the product ash content were in good agreement with the experimental values (R² = 0.9813). Finally, model-based optimization was carried out to identify the operating parameters (reaction time = 4 h and solid-liquid ratio = 1/10) and to obtain the lowest collagen content.
Keywords: pilot scale, Plackett and Burman design, fish waste, deproteinization
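The RSM fitting step can be illustrated as below: a second-order response surface is fitted to the runs of a standard 3-factor Box-Behnken layout. The coded design matrix is the generic 3-factor Box-Behnken arrangement, and the response values are fabricated for the example; they are not the study's measurements.

```python
# Fit a quadratic response surface (main effects, interactions, squares) to a
# 3-factor Box-Behnken design by ordinary least squares and report R^2.
import numpy as np

# 3-factor Box-Behnken design (12 edge points + 3 center points), coded -1/0/+1.
bb = np.array([
    [-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
    [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
    [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
    [0, 0, 0], [0, 0, 0], [0, 0, 0],
], dtype=float)
y = np.array([8.1, 6.9, 7.4, 6.0, 8.5, 7.2, 7.0, 5.9,
              8.0, 7.1, 6.8, 5.7, 6.2, 6.3, 6.1])     # illustrative responses only

def quad_terms(x):
    x1, x2, x3 = x.T
    return np.column_stack([np.ones(len(x)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

X = quad_terms(bb)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ beta
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print("coefficients:", np.round(beta, 3), "R^2 =", round(r2, 4))
```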
Procedia PDF Downloads 147
3623 Optimal Tuning of Fuzzy Immune PID Parameters to Control a Delayed System
Authors: S. Gherbi, F. Bouchareb
Abstract:
This paper deals with novel intelligent bio-inspired control strategies. It presents a novel approach based on optimal tuning of fuzzy immune PID parameters, a combination of a PID controller inspired by the human immune mechanism with fuzzy logic. Such a controller offers more possibilities to deal with the difficulties of controlling delayed systems caused by the delay term. Indeed, we use an optimization approach to tune the four parameters of the controller in addition to the fuzzy function; the obtained controller is implemented in a modified Smith predictor structure, which is well known to be the most efficient structure for the control of delayed systems. The application of the presented approach to the control of a three-tank delayed system shows good performance and proves the efficiency of the method.
Keywords: delayed systems, fuzzy immune PID, optimization, Smith predictor
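A conceptual sketch of the Smith predictor structure referred to in the abstract is given below, with a plain discrete PID in place of the fuzzy immune PID; the first-order-plus-dead-time process, gains, and sample time are arbitrary placeholders, not the paper's three-tank system or tuned parameters.

```python
# Discrete PID inside a Smith predictor loop: the controller acts on
# e = r - [ y_model_undelayed + (y_measured - y_model_delayed) ], which removes
# the dead time from the feedback path when the model matches the plant.
from collections import deque

dt, K, tau, delay_steps = 0.1, 1.0, 5.0, 20          # process: K/(tau*s + 1) with dead time
kp, ki, kd = 2.0, 0.4, 0.1
setpoint = 1.0

y = ym = 0.0                                          # plant output, undelayed model output
plant_buf = deque([0.0] * delay_steps, maxlen=delay_steps)   # plant input dead time
model_buf = deque([0.0] * delay_steps, maxlen=delay_steps)   # model output dead time
integral = prev_err = 0.0

for step in range(600):
    ym_delayed = model_buf[0]
    feedback = ym + (y - ym_delayed)                  # Smith predictor feedback signal
    err = setpoint - feedback
    integral += err * dt
    u = kp * err + ki * integral + kd * (err - prev_err) / dt
    prev_err = err

    # first-order model (no delay) and plant (with input delay), explicit Euler
    ym += dt / tau * (K * u - ym)
    model_buf.append(ym)
    u_delayed = plant_buf[0]
    plant_buf.append(u)
    y += dt / tau * (K * u_delayed - y)

print("final output:", round(y, 3))   # should settle near the setpoint of 1.0
```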
Procedia PDF Downloads 420
3622 FEM for Stress Reduction by Optimal Auxiliary Holes in a Uniaxially Loaded Plate
Authors: Basavaraj R. Endigeri, Shriharsh Desphande
Abstract:
Optimization and reduction of the stress concentration around holes in a uniaxially loaded plate is one of the important design criteria in many engineering applications. These stress raisers can lead to failure of the component at the region of high stress concentration, which has to be avoided by providing auxiliary holes on either side of the parent hole. From the literature survey it is known that, to date, no analytical solution has been documented for reducing the stress concentration by providing auxiliary holes, except for a few geometries. In the present work, a plate with a hole subjected to uniaxial load is analyzed numerically to determine the optimum sizes and locations of the auxiliary holes for different center hole diameter to plate width ratios. The introduction of auxiliary holes at optimum locations and radii, and its effect on stress concentration, is also represented graphically. The finite element analysis package ANSYS 8.0 is used to carry out the analysis, and optimization is performed to determine the locations and radii of the auxiliary holes that minimize the stress concentration. All the results for different diameter to plate width ratios are presented graphically. It is found from the work that the introduction of auxiliary holes on either side of the central circular hole reduces the stress concentration factor by 19 to 21 percent.
Keywords: finite element method, optimization, stress concentration factor, auxiliary holes
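For context on the baseline being reduced, the snippet below evaluates the Heywood approximation for the stress concentration factor of a single central hole in a finite-width plate under uniaxial tension; this formula is an assumed reference check, not taken from the paper, and applies before any auxiliary holes are introduced.

```python
# Heywood approximation for the net-section stress concentration factor of a
# circular hole of diameter d in a plate of width w under uniaxial tension:
# Kt ~ 2 + (1 - d/w)^3, which tends to 3 as d/w -> 0 (infinite plate).
def kt_net_section(d, w):
    return 2.0 + (1.0 - d / w) ** 3

for ratio in (0.1, 0.3, 0.5):
    print(f"d/W = {ratio}: Kt ~ {kt_net_section(ratio, 1.0):.2f}")
```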
Procedia PDF Downloads 431
3621 The Design, Development, and Optimization of a Capacitive Pressure Sensor Utilizing an Existing 9DOF Platform
Authors: Andrew Randles, Ilker Ocak, Cheam Daw Don, Navab Singh, Alex Gu
Abstract:
Nine Degrees of Freedom (9 DOF) systems are already in development in many areas. In this paper, an integrated pressure sensor is proposed that will make use of an already existing monolithic 9 DOF inertial MEMS platform. Capacitive pressure sensors can suffer from limited sensitivity for a given size of membrane. This novel pressure sensor design increases the sensitivity by over 5 times compared to a traditional array of square diaphragms while still fitting within a 2 mm x 2 mm chip and maintaining a fixed static capacitance. The improved design uses one large diaphragm supported by pillars with fixed electrodes placed above the areas of maximum deflection. The design optimization increases the sensitivity from 0.22 fF/kPa to 1.16 fF/kPa. Temperature sensitivity was also examined through simulation.
Keywords: capacitive pressure sensor, 9 DOF, 10 DOF, sensor, capacitive, inertial measurement unit, IMU, inertial navigation system, INS
Procedia PDF Downloads 537