Search results for: bi-level optimization model
18011 Design and Optimization for a Compliant Gripper with Force Regulation Mechanism
Authors: Nhat Linh Ho, Thanh-Phong Dao, Shyh-Chour Huang, Hieu Giang Le
Abstract:
This paper presents the design and optimization of a compliant gripper. The gripper is constructed based on the concept of a compliant mechanism with flexure hinges. A passive force regulation mechanism is presented to control the grasping force on a micro-sized object instead of using a force sensor. The force regulation mechanism is designed using planar springs. The gripper is expected to achieve a large range of displacement in order to handle objects of various sizes. First, the statics and dynamics of the gripper are investigated using finite element analysis in ANSYS software. The design parameters of the gripper are then optimized via the Taguchi method. An L9 orthogonal array is used to establish the experimental matrix, after which the signal-to-noise ratio is analyzed to find the optimal solution. Finally, response surface methodology is employed to model the relationship between the design parameters and the output displacement of the gripper. The design of experiments method is then used in a sensitivity analysis to determine the effect of each parameter on the displacement. The results showed that the compliant gripper can move with a large displacement of 213.51 mm and that the force regulation mechanism is expected to be useful for high-precision positioning systems.
Keywords: flexure hinge, compliant mechanism, compliant gripper, force regulation mechanism, Taguchi method, response surface methodology, design of experiment
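For readers unfamiliar with the Taguchi step above: when a response such as output displacement is to be maximized, the L9 runs are ranked by the standard larger-the-better signal-to-noise ratio, S/N = -10·log10(mean(1/y²)). The sketch below computes it in Python; the displacement readings are hypothetical stand-ins for the paper's FEA results.

```python
import numpy as np

def sn_larger_the_better(y):
    """Taguchi S/N ratio for a 'larger-the-better' response
    (here, gripper output displacement): -10*log10(mean(1/y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical displacement readings (mm) for three of the nine L9 runs;
# the run with the highest S/N indicates the most robust parameter setting.
runs = {
    "run1": [180.2, 182.5, 179.8],
    "run2": [205.1, 207.9, 204.3],
    "run3": [212.7, 213.9, 213.0],
}
for name, y in runs.items():
    print(name, round(sn_larger_the_better(y), 2), "dB")
```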
Procedia PDF Downloads 331
18010 Spectrum Allocation in Cognitive Radio Using Monarch Butterfly Optimization
Authors: Avantika Vats, Kushal Thakur
Abstract:
This paper presents the problem statement, development, and application of Monarch Butterfly Optimization (MBO), rather than a Genetic Algorithm (GA), for channel allocation in cognitive radio. This approach offers a satisfactory way to allocate the accessible spectrum to both kinds of users, i.e., primary users (PUs) and secondary users (SUs). The proposed enhancement procedure depends on a nature-inspired metaheuristic algorithm. In MBO, all the monarch butterfly individuals are located in two distinct lands, viz. southern Canada and the northern USA (Land 1), and Mexico (Land 2). The positions of the monarch butterflies are updated in two ways. First, offspring are generated (position updating) by the migration operator, which can be adjusted by the migration ratio. This is followed by tuning the positions of the other butterflies by means of the butterfly adjusting operator. To keep the population unaltered and minimize fitness evaluations, the total number of butterflies newly produced in these two ways remains equal to the original population. The outcomes clearly display the capacity of the MBO technique to find improved objective values compared with the genetic algorithm.
Keywords: cognitive radio, channel allocation, monarch butterfly optimization, evolutionary computation
Procedia PDF Downloads 73
18009 Optimal Design of Multi-Machine Power System Stabilizers Using Interactive Honey Bee Mating Optimization
Authors: Hossein Ghadimi, Alireza Alizadeh, Oveis Abedinia, Noradin Ghadimi
Abstract:
This paper presents an enhanced Honey Bee Mating Optimization (HBMO), called Interactive Honey Bee Mating Optimization (IHBMO), to solve the optimal design of multi-machine power system stabilizer (PSS) parameters. Power system stabilizers are now routinely used in industry to damp out power system oscillations. The design problem of the proposed controller is formulated as an optimization problem, and the IHBMO algorithm is employed to search for optimal controller parameters. The proposed method is applied to a multi-machine power system (MPS). The method suggested in this paper can be used to design robust power system stabilizers that guarantee the required closed-loop performance over a prespecified range of operating and system conditions. The simplicity in design and implementation of the proposed stabilizers makes them better suited for practical applications in real plants. Non-linear simulation results are presented under a wide range of operating conditions in comparison with PSO-tuned and conventional PSS (CPSS) stabilizers through the FD and ITAE performance indices. The evaluation of the results shows that the proposed control strategy achieves good robust performance for a wide range of system parameters and load changes in the presence of system nonlinearities and is superior to the other controllers.
Keywords: power system stabilizer, IHBMO, multimachine, nonlinearities
Procedia PDF Downloads 507
18008 Artificial Neural Network Based Parameter Prediction of Miniaturized Solid Rocket Motor
Authors: Hao Yan, Xiaobing Zhang
Abstract:
The working mechanism of miniaturized solid rocket motors (SRMs) is not yet fully understood, and it is imperative to explore their unique features. However, there are many disadvantages to using common multi-objective evolutionary algorithms (MOEAs) to predict the parameters of a miniaturized SRM during its conceptual design phase. First, the design variables and objectives are constrained in a lumped parameter model (LPM) of this SRM, which leads to local optima in MOEAs. In addition, MOEAs require a large number of calculations due to their population strategy. Although the calculation time for simulating an LPM once is usually less than that of a CFD simulation, the number of function evaluations (NFEs) is usually large in MOEAs, which makes the total time cost unacceptably long. Moreover, the accuracy of the LPM is relatively low compared to that of a CFD model, due to its assumptions. CFD simulations or experiments are required for comparison and verification of the optimal results obtained by MOEAs with an LPM. The conceptual design phase based on MOEAs is therefore a lengthy process, and its results are not precise enough due to the above shortcomings. An artificial neural network (ANN) based parameter prediction is proposed as a way to reduce time costs and improve prediction accuracy. In this method, an ANN is used to build a surrogate model that is trained with 3D numerical simulations. In the design, the original LPM is replaced by the surrogate model. Each case uses the same MOEAs; the calculation times of the two models are compared, and their optimization results are compared with 3D simulation results. Using the surrogate model for the parameter prediction of the miniaturized SRMs results in a significant increase in computational efficiency and an improvement in prediction accuracy. Thus, the ANN-based surrogate model does provide faster and more accurate parameter prediction for an initial design scheme. Moreover, even when the MOEAs converge to local optima, the time cost of the ANN-based surrogate model is much lower than that of the simplified physical model (LPM). This means that designers can save a lot of time during code debugging and parameter tuning in a complex design process. Designers can reduce repeated calculation costs and obtain accurate optimal solutions by combining an ANN-based surrogate model with MOEAs.
Keywords: artificial neural network, solid rocket motor, multi-objective evolutionary algorithm, surrogate model
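As an illustration of the surrogate idea described above (not the authors' code), the following minimal sketch trains a small neural network on data standing in for 3D-simulation samples and wraps it as a cheap objective function that an MOEA could call in place of the expensive simulation. The design variables, their meaning, and all numeric values are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical training set: design variables x (e.g., grain geometry, nozzle
# throat size) paired with a performance metric y from 3D simulations.
rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(200, 4))
y_train = X_train @ np.array([1.2, -0.5, 0.8, 0.3]) + 0.05 * rng.normal(size=200)

surrogate = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 32),
                                       max_iter=2000, random_state=0))
surrogate.fit(X_train, y_train)

def objective(x):
    # The MOEA calls this instead of launching a CFD run; each evaluation
    # is now a cheap network forward pass.
    return surrogate.predict(np.atleast_2d(x))[0]

print(objective([0.5, 0.5, 0.5, 0.5]))
```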
Procedia PDF Downloads 90
18007 Hybrid CNN-SAR and Lee Filtering for Enhanced InSAR Phase Unwrapping and Coherence Optimization
Authors: Hadj Sahraoui Omar, Kebir Lahcen Wahib, Bennia Ahmed
Abstract:
Interferometric Synthetic Aperture Radar (InSAR) coherence is a crucial parameter for accurately monitoring ground deformation and environmental changes. However, coherence can be degraded by various factors such as temporal decorrelation, atmospheric disturbances, and geometric misalignments, limiting the reliability of InSAR measurements (Omar Hadj-Sahraoui et al. 2019). To address this challenge, we propose an innovative hybrid approach that combines artificial intelligence (AI) with advanced filtering techniques to optimize interferometric coherence in InSAR data. Specifically, we introduce a Convolutional Neural Network (CNN) integrated with the Lee filter to enhance the performance of radar interferometry. This hybrid method leverages the strength of CNNs to automatically identify and mitigate the primary sources of decorrelation, while the Lee filter effectively reduces speckle noise, improving the overall quality of interferograms. We develop a deep learning-based model trained on multi-temporal and multi-frequency SAR datasets, enabling it to predict coherence patterns and enhance low-coherence regions. This hybrid CNN-SAR approach with Lee filtering significantly reduces noise and phase unwrapping errors, leading to more precise deformation maps. Experimental results demonstrate that our approach improves coherence by up to 30% compared to traditional filtering techniques, making it a robust solution for challenging scenarios such as urban environments, vegetated areas, and rapidly changing landscapes. Our method has potential applications in geohazard monitoring, urban planning, and environmental studies, offering a new avenue for enhancing InSAR data reliability through AI-powered optimization combined with robust filtering techniques.
Keywords: CNN-SAR, Lee filter, hybrid optimization, coherence, InSAR phase unwrapping, speckle noise reduction
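The Lee filter mentioned above is a classical local-statistics despeckler: each pixel is blended with its local mean, weighted by how much the local variance exceeds the noise variance. A minimal sketch follows; the window size and the crude global noise-variance estimate are simplifications of what a production InSAR chain would use.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7, noise_var=None):
    """Minimal Lee speckle filter on a 2D array."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img**2, size)
    var = np.clip(sq_mean - mean**2, 0.0, None)       # total local variance
    if noise_var is None:
        noise_var = np.mean(var)                      # crude global noise estimate
    signal_var = np.clip(var - noise_var, 0.0, None)
    weights = signal_var / (signal_var + noise_var)   # ~1 on edges, ~0 in flat areas
    return mean + weights * (img - mean)

# Usage on a hypothetical speckled SAR amplitude patch:
patch = np.random.default_rng(0).gamma(shape=4.0, scale=0.25, size=(128, 128))
filtered = lee_filter(patch, size=7)
```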
Procedia PDF Downloads 11
18006 Scheduling in a Single-Stage, Multi-Item Compatible Process Using Multiple Arc Network Model
Authors: Bokkasam Sasidhar, Ibrahim Aljasser
Abstract:
The problem of finding optimal schedules for each piece of equipment in a production process is considered, for a process that consists of a single manufacturing stage and can handle different types of products, where changeover from handling one type of product to another incurs certain costs. The machine capacity is determined by the upper limit on the quantity that can be processed for each product in a set-up. The changeover costs increase with the number of set-ups; hence, to minimize the costs associated with product changeover, planning should ensure that similar types of products are processed successively, so that the total number of changeovers, and in turn the associated set-up costs, is minimized. The problem of cost minimization is equivalent to minimizing the number of set-ups or, equivalently, maximizing the capacity utilization between set-ups, i.e., maximizing the total capacity utilization. Further, production is usually planned against customers' orders, and different customers' orders are generally assigned one of two priorities, "normal" or "priority". The production planning problem in such a situation can be formulated as a Multiple Arc Network (MAN) model and solved sequentially using the algorithm for maximizing flow along a MAN and the algorithm for maximizing flow along a MAN with priority arcs. The model aims to provide an optimal production schedule with the objective of maximizing capacity utilization, so that the customer-wise delivery schedules are fulfilled while keeping customer priorities in view. Algorithms are presented for solving the MAN formulation of production planning with customer priorities. The application of the model is demonstrated through numerical examples.
Keywords: scheduling, maximal flow problem, multiple arc network model, optimization
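The MAN algorithms with priority arcs are the authors' own; as a point of reference, their core building block, maximizing flow from a source through set-up and product nodes to a sink, can be sketched with a standard max-flow solver. The toy network and all capacities below are hypothetical.

```python
import networkx as nx

# Toy network: source -> set-up arcs (capacity = machine capacity per set-up)
# -> product nodes -> sink arcs (capacity = ordered quantity).
G = nx.DiGraph()
G.add_edge("source", "setup_A", capacity=100)
G.add_edge("source", "setup_B", capacity=100)
G.add_edge("setup_A", "product_1", capacity=60)
G.add_edge("setup_A", "product_2", capacity=40)
G.add_edge("setup_B", "product_3", capacity=80)
G.add_edge("product_1", "sink", capacity=50)   # ordered quantity
G.add_edge("product_2", "sink", capacity=40)
G.add_edge("product_3", "sink", capacity=70)

flow_value, flow_dict = nx.maximum_flow(G, "source", "sink")
print("total capacity utilization:", flow_value)
```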
Procedia PDF Downloads 402
18005 Optimum Design of Helical Gear System on Basis of Maximum Power Transmission Capability
Authors: Yasaman Esfandiari
Abstract:
Mechanical engineering has always dealt with amplification of the input power in power trains. One of the ways to achieve this goal is to use gears to change the amplitude and direction of the torque and the speed. However, the gears should be optimally designed to best achieve these objectives. In this study, helical gear systems are optimized to achieve maximum power. Material selection, space restriction, available facilities for manufacturing, the probability of tooth breakage, and tooth wear are taken into account and governing equations are derived. Finally, a Matlab code was generated to solve the optimization problem and the results are verified.
Keywords: design, gears, Matlab, optimization
Procedia PDF Downloads 240
18004 Reliability Analysis of Variable Stiffness Composite Laminate Structures
Authors: A. Sohouli, A. Suleman
Abstract:
This study focuses on reliability analysis of variable stiffness composite laminate structures to investigate the potential structural improvement compared to conventional (straight-fiber) composite laminate structures. A computational framework was developed which consists of a deterministic design step and a reliability analysis. The optimization part is Discrete Material Optimization (DMO), and the reliability of the structure is computed by Monte Carlo Simulation (MCS) after using the Stochastic Response Surface Method (SRSM). The design driver in the deterministic optimization is maximum stiffness, while the optimization method incorporates certain manufacturing constraints to attain industrial relevance: the change of orientation between adjacent patches cannot be too large, and the maximum number of successive plies of a particular fiber orientation should not be too high. Variable stiffness composites may be manufactured by Automated Fiber Placement (AFP) machines, which provide consistent quality with good production rates. However, laps and gaps are the most important challenges in steering fibers and affect the performance of the structures. In this study, the optimal curved fiber paths at each layer of the composites are designed in the first step by DMO, and then the reliability analysis is applied to investigate the sensitivity of the structure at different standard deviations compared to the straight-fiber-angle composites. The random variables are the material properties and the loads on the structures. The results show that the variable stiffness composite laminate structures are much more reliable, even for high standard deviations of the material properties, than the conventional composite laminate structures. The reason is that the variable stiffness composite laminates allow tailoring of the stiffness and provide the possibility of adjusting the stress and strain distribution favorably in the structures.
Keywords: material optimization, Monte Carlo simulation, reliability analysis, response surface method, variable stiffness composite structures
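To make the MCS step concrete, a minimal sketch of a Monte Carlo reliability estimate follows, with a simple stress-versus-strength limit state standing in for the study's SRSM-based response surface; the distributions and all values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 100_000

# Hypothetical limit state: failure when stress exceeds strength. Material
# property (strength) and load are the random variables, as in the study.
strength = rng.normal(loc=450.0, scale=30.0, size=n_samples)      # MPa
load_stress = rng.normal(loc=320.0, scale=40.0, size=n_samples)   # MPa

failures = load_stress > strength
pf = failures.mean()                      # estimated probability of failure
print(f"P_f ~= {pf:.4e}, reliability ~= {1 - pf:.4f}")
```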
Procedia PDF Downloads 519
18003 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis
Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen
Abstract:
The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicality in medical science. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four Chinese EMR datasets. Additionally, CNN, LSV-CNN, and SDG-CNN are designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four Chinese EMR datasets. The best configuration of the model yielded an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and the LSV-SDG-CNN model improves the disease classification accuracy by a clear margin.
Keywords: convolutional neural network, electronic medical record, feature representation, lexical semantics, semantic decision
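The feature-enrichment step above amounts to concatenating each term's word2vec embedding with its Lexical Semantic Vector before the CNN sees it. A minimal sketch, with both vectors faked for illustration:

```python
import numpy as np

# Hypothetical vectors for one medical term: a word2vec embedding and a
# Lexical Semantic Vector (LSV) derived from a medical knowledge source.
w2v = np.random.default_rng(1).normal(size=100)    # stands in for word2vec
lsv = np.zeros(20)
lsv[[2, 7, 11]] = 1.0                              # stands in for LSV features

enriched = np.concatenate([w2v, lsv])              # feature fed to the CNN
print(enriched.shape)                              # (120,)
```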
Procedia PDF Downloads 126
18002 Influence of Radio Frequency Identification Technology at Cost of Supply Chain as a Driver for the Generation of Competitive Advantage
Authors: Mona Baniahmadi, Saied Haghanifar
Abstract:
Radio Frequency Identification (RFID) is regarded as a promising technology for the optimization of supply chain processes, since it improves manufacturing and retail operations, from forecasting demand to planning, managing inventory, and distribution. This study aims precisely at understanding RFID technology and explaining how it can concretely be used for supply chain management and how it can help improve it, in the case of Hejrat Company, which is located in Iran and distributes medical drugs and cosmetics. This study uses statistical analysis to calculate the expected benefits of an integrated RFID system on the supply chain, where competitive advantage increases as the cost factor decreases. The study investigates how the cost of the storage process, labor cost, the cost of missing goods, inventory management optimization, on-time delivery, order cost, lost sales, and supply process optimization affect the performance of the integrated RFID supply chain with regard to cost factors and provide a competitive advantage.
Keywords: cost, competitive advantage, radio frequency identification, supply chain
Procedia PDF Downloads 276
18001 A Linear Programming Approach to Assist Roster Construction Under a Salary Cap
Authors: Alex Contarino
Abstract:
Professional sports leagues often have a “free agency” period, during which teams may sign players with expiring contracts. To promote parity, many leagues operate under a salary cap that limits the amount teams can spend on players' salaries in a given year. Similarly, in fantasy sports leagues, salary cap drafts are a popular method for selecting players. In order to sign a free agent in either setting, teams must bid against one another to buy the player's services while ensuring the sum of their players' salaries is below the salary cap. This paper models the bidding process for a free agent as a constrained optimization problem that can be solved using linear programming. The objective is to determine the largest bid that a team should offer the player, subject to the constraint that the value of signing the player must exceed the value of using the salary cap elsewhere. Iteratively solving this optimization problem for each available free agent provides teams with an effective framework for maximizing the talent on their rosters. The utility of this approach is demonstrated for team-sport roster construction and fantasy-sport drafts, using recent data sets from both settings.
Keywords: linear programming, optimization, roster management, salary cap
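As a hedged illustration of the framework (not the paper's exact model), the relaxed roster-selection LP below maximizes projected value subject to a salary cap using scipy; the player values, salaries, and cap figure are all hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical free agents: projected value (points) and asking salary ($M).
values = np.array([9.0, 7.5, 6.0, 4.0, 3.5])
salaries = np.array([30.0, 22.0, 18.0, 10.0, 8.0])
cap_space = 45.0

# Relaxed 0/1 selection: maximize total value s.t. total salary <= cap.
# linprog minimizes, so the objective is negated.
res = linprog(c=-values,
              A_ub=salaries.reshape(1, -1), b_ub=[cap_space],
              bounds=[(0, 1)] * len(values), method="highs")
print("selection weights:", np.round(res.x, 2))
print("value obtained:", -res.fun)
```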
Procedia PDF Downloads 111
18000 Market Solvency Capital Requirement Minimization: How Non-linear Solvers Provide Portfolios Complying with Solvency II Regulation
Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim
Abstract:
In this article, a portfolio optimization problem is performed in a Solvency II context: it illustrates how advanced optimization techniques can help tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress tests on interest rate, equity, property, credit, and FX factors, as well as concentration on counterparties. The market SCR is non-convex and non-differentiable, which does not make it a natural candidate as an optimization criterion. In the SCR formulation, correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR to reduce non-invested capital but also to ensure stability of the SCR. Some optimizations have already been performed in the literature by simplifying the standard formula into a quadratic function, but to our knowledge, this is the first time that the standard formula of the market SCR is used in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (Sequential Quadratic Programming) algorithm to cope with non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement compared to a classical Markowitz approach based on historical volatility. A comparative analysis of different optimization models (equi-risk-contribution portfolio, minimum-volatility portfolio, and minimum-value-at-risk portfolio) is performed, and the impact of these strategies on risk measures including the market SCR and its sub-modules is evaluated. A lack of diversification of the market SCR is observed, especially for equities; this was expected, since the market SCR strongly penalizes this type of financial instrument. It was shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process or by minimizing the market SCR together with the historical volatility, proving the interest of having a portfolio construction approach that can incorporate such features. The present results are further explained by the market SCR modelling.
Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement
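For reference, the Solvency II standard formula aggregates the market SCR from its sub-modules through a fixed, prescribed correlation matrix:

```latex
\mathrm{SCR}_{\mathrm{mkt}} \;=\; \sqrt{\sum_{i}\sum_{j} \rho_{ij}\,\mathrm{SCR}_{i}\,\mathrm{SCR}_{j}}
```

where the indices run over the sub-modules listed above (interest rate, equity, property, credit/spread, FX, concentration), each SCR_i is the loss under the corresponding prescribed stress, and ρ_ij is the regulatory correlation matrix. Because each SCR_i is itself a scenario-based loss, the composite criterion inherits the non-convexity and non-differentiability discussed in the abstract.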
Procedia PDF Downloads 117
17999 Application of Hybrid Honey Bees Mating Optimization Algorithm in Multiuser Detection of Wireless Communication Systems
Abstract:
Wireless communication systems have changed dramatically and shown spectacular evolution over the past two decades. These radio technologies are engaged in an endless quest for high-speed transmission, coupled with a constant need to improve transmission quality. Various radio communication systems under development use the code division multiple access (CDMA) technique. This work analyses a hybrid honey bees mating optimization (HBMO) algorithm applied to multiuser detection (MuD) in CDMA communication systems. HBMO is a swarm-based optimization algorithm that simulates the mating process of real honey bees. We apply a hybridization of HBMO with simulated annealing (SA) in order to improve the solution generated by the HBMO. Simulation results show that detection based on the hybrid HBMO is, in terms of bit error rate (BER), a viable option when compared with the classic detectors from the literature under a Rayleigh flat fading channel.
Keywords: BER, DS-CDMA multiuser detection, genetic algorithm, hybrid HBMO, simulated annealing
Procedia PDF Downloads 435
17998 On the Application of Heuristics of the Traveling Salesman Problem for the Task of Restoring the DNA Matrix
Authors: Boris Melnikov, Dmitrii Chaikovskii, Elena Melnikova
Abstract:
The traveling salesman problem (TSP) is a well-known optimization problem that seeks to find the shortest possible route that visits a set of points and returns to the starting point. In this paper, we apply some heuristics of the TSP to the task of restoring the DNA matrix. This restoration problem is often considered in biocybernetics: one must recover the matrix of distances between DNA sequences when not all elements of the matrix under consideration are known at the input. We consider the possibility of using this method in testing algorithms that calculate the distance between a pair of DNAs, in order to restore the partially filled matrix.
Keywords: optimization problems, DNA matrix, partially filled matrix, traveling salesman problem, heuristic algorithms
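A minimal sketch of one such heuristic, the nearest-neighbour tour over a distance matrix, is given below; the 5×5 matrix of pairwise DNA distances is hypothetical, and the paper's actual restoration procedure is its own.

```python
import numpy as np

def nearest_neighbour_tour(dist, start=0):
    """Greedy TSP heuristic: from the current point, always visit the
    nearest unvisited point. `dist` is a full symmetric distance matrix."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[last][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Hypothetical pairwise distances between 5 DNA sequences.
D = np.array([[0, 3, 7, 4, 6],
              [3, 0, 5, 8, 2],
              [7, 5, 0, 6, 9],
              [4, 8, 6, 0, 1],
              [6, 2, 9, 1, 0]])
print(nearest_neighbour_tour(D))
```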
Procedia PDF Downloads 150
17997 Modeling and Optimization of Algae Oil Extraction Using Response Surface Methodology
Authors: I. F. Ejim, F. L. Kamen
Abstract:
Aims: In this experiment, algae oil extraction with a combination of n-hexane and ethanol was investigated. The effects of extraction solvent concentration, extraction time, and temperature on the yield and quality of the oil were studied using Response Surface Methodology (RSM). Experimental Design: For the optimization of algae oil extraction, a Box-Behnken design was used to generate 17 experimental runs in a three-factor, three-level design, where oil yield, specific gravity, acid value, and saponification value were evaluated as the responses. Result: A minimum oil yield of 17% and a maximum of 44% were realized. The optimum values for yield, specific gravity, acid value, and saponification value from the overlay plot were 40.79%, 0.8788, 0.5056 mg KOH/g, and 180.78 mg KOH/g, respectively, with a desirability of 0.801. The maximum point prediction was a yield of 40.79% at a solvent concentration of 66.68% n-hexane, a temperature of 40.0°C, and an extraction time of 4 hrs. Analysis of Variance (ANOVA) results showed that the linear and quadratic coefficients were all significant at p<0.05. The experiment was validated, and the results obtained agreed with the predicted values. Conclusion: Algae oil extraction was successfully optimized using RSM, and the oil quality indicates it is suitable for many industrial uses.
Keywords: algae oil, response surface methodology, optimization, Box-Behnken, extraction
Procedia PDF Downloads 338
17996 Seismic Response Mitigation of Structures Using Base Isolation System Considering Uncertain Parameters
Authors: Rama Debbarma
Abstract:
The present study deals with the performance of a linear base isolation system in mitigating the seismic response of structures characterized by random system parameters. This involves optimization of the tuning ratio and damping properties of the base isolation system considering uncertain system parameters. However, the efficiency of a base isolator may be reduced if it is not tuned to the vibrating mode it is designed to suppress, due to the unavoidable presence of system parameter uncertainty. With the aid of matrix perturbation theory and a first-order Taylor series expansion, the total probability concept is used to evaluate the unconditional response of the primary structures considering random system parameters. For this, the conditional second-order information of the response quantities is obtained in a random vibration framework using a state space formulation. Subsequently, the maximum unconditional root mean square displacement of the primary structures is used as the objective function to obtain the optimum damping parameters. A numerical study is performed to elucidate the effect of parameter uncertainties on the optimization of the linear base isolator parameters and on system performance.
Keywords: linear base isolator, earthquake, optimization, uncertain parameters
Procedia PDF Downloads 434
17995 Bounded Solution Method for Geometric Programming Problem with Varying Parameters
Authors: Abdullah Ali H. Ahmadini, Firoz Ahmad, Intekhab Alam
Abstract:
The geometric programming problem (GPP) is a well-known non-linear optimization problem with a wide range of applications in many engineering problems. The structure of GPP is quite dynamic and fits easily into various decision-making processes. The aim of this paper is to highlight the bounded solution method for GPP, with special reference to variation among the right-hand-side parameters. This paper takes advantage of two-level mathematical programming problems and determines the solution of the objective function in a specified interval given by lower and upper bounds. The beauty of the proposed bounded solution method is that it does not require sensitivity analyses of the obtained optimal solution. The value of the objective function is directly calculated under varying parameters. To show the validity and applicability of the proposed method, a numerical example is presented. The system reliability optimization problem is also illustrated, and it is found that the value of the objective function lies within the range of the lower and upper bounds, respectively. Finally, conclusions and future research directions are presented based on the discussed work.
Keywords: varying parameters, geometric programming problem, bounded solution method, system reliability optimization
Procedia PDF Downloads 133
17994 Hybrid Energy System for the German Mining Industry: An Optimized Model
Authors: Kateryna Zharan, Jan C. Bongaerts
Abstract:
In recent years, the economic attractiveness of renewable energy (RE) for the mining industry, especially for off-grid mines, and the negative environmental impact of fossil energy have stimulated the use of RE for mining needs. Since remote-area mines have higher energy expenses than mines connected to a grid, integration of RE may give a mine economic benefits. According to the literature review, there is a lack of business models for adopting RE at mines. The main aim of this paper is to develop an optimized model of RE integration into the German mining industry (GMI). With around 800 million tonnes of annually extracted resources, Germany is included in the list of the 15 major mining countries in the world. Accordingly, the mining potential of Germany is evaluated in this paper as a prospective market for RE implementation. The GMI has been classified in order to establish the location of resources, the quantity and types of mines, the amount of extracted resources, and the access of the mines to energy resources. Additionally, weather conditions have been analyzed in order to determine where wind and solar generation technologies can be integrated into a mine with the highest efficiency. Although the electricity demand of the GMI is almost completely covered by grid connection, a hybrid energy system (HES) based on a mix of RE and fossil energy is developed to show the environmental and economic benefits. The HES for the GMI combines wind turbines, solar PV, batteries, and diesel generation. The model has been calculated using the HOMER software. Furthermore, the demonstrated HES contains a forecasting model that predicts solar and wind generation in advance. The main result of the HES, the CO2 emission reduction, is estimated in order to make mining processing more environmentally friendly.
Keywords: diesel generation, German mining industry, hybrid energy system, hybrid optimization model for electric renewables, optimized model, renewable energy
Procedia PDF Downloads 343
17993 Methodology of Preliminary Design and Performance of an Axial-Flow Fan through CFD
Authors: Ramiro Gustavo Ramirez Camacho, Waldir De Oliveira, Eraldo Cruz Dos Santos, Edna Raimunda Da Silva, Tania Marie Arispe Angulo, Carlos Eduardo Alves Da Costa, Tânia Cristina Alves Dos Reis
Abstract:
This paper presents a preliminary design methodology for an axial fan based on lifting-wing theory and the potential vortex hypothesis. The literature combines acoustic studies and engineering expertise to model a fan with low noise. Axial fans with inadequate intake geometry often suffer from poor flow conditions at the entrance, ranging from spatially asymmetric velocity profiles to swirl that fluctuates in time; this produces random forces acting on the blades, which generate broadband gust noise that in most cases triggers tonal noise. The analysis of the axial-flow fan is conducted by solving the 3-D Navier-Stokes equations with turbulence models in steady and transient form (RANS - URANS), in order to find an efficient aerodynamic design with low noise that is suitable for industrial installation. The process therefore requires the use of computational optimization methods, aerodynamic design methodologies, and numerical methods such as CFD (Computational Fluid Dynamics). The objective is the development of a methodology for axial fan construction that provides the design of the blade geometry and evaluates aerodynamic performance.
Keywords: axial fan design, CFD, preliminary design, optimization
Procedia PDF Downloads 396
17992 Analysis of a CO₂ Two-Phase Ejector Performances with Taguchi and Anova Optimization
Authors: Karima Megdouli
Abstract:
The ejector, a central element within the CO₂ transcritical ejection refrigeration system, holds significant importance in enhancing refrigeration capacity and minimizing compressor power usage. This study's objective is to introduce a technique for enhancing the effectiveness of the CO₂ transcritical two-phase ejector, utilizing Taguchi and ANOVA analysis. The investigation delves into the impact of geometric parameters, secondary flow temperature, and primary flow pressure on the efficiency of the ejector. Results indicate that employing a combination of Taguchi and ANOVA offers increased reliability and superior performance when optimizing the design of the CO₂ two-phase ejector.
Keywords: ejector, supersonic, Taguchi, ANOVA, optimization
Procedia PDF Downloads 88
17991 Data Clustering Algorithm Based on Multi-Objective Periodic Bacterial Foraging Optimization with Two Learning Archives
Authors: Chen Guo, Heng Tang, Ben Niu
Abstract:
Clustering splits objects into different groups based on similarity, making the objects have higher similarity in the same group and lower similarity in different groups. Thus, clustering can be treated as an optimization problem to maximize the intra-cluster similarity or inter-cluster dissimilarity. In real-world applications, datasets often have complex characteristics: sparsity, overlap, high dimensionality, etc. When facing these datasets, simultaneously optimizing two or more objectives can obtain better clustering results than optimizing one objective. However, apart from objective-weighting methods, traditional clustering approaches have difficulty in solving multi-objective data clustering problems. For this reason, evolutionary multi-objective optimization algorithms have been investigated by researchers to optimize multiple clustering objectives. In this paper, the Data Clustering algorithm based on Multi-objective Periodic Bacterial Foraging Optimization with two Learning Archives (DC-MPBFOLA) is proposed. Specifically, first, to reduce the high computing complexity of the original BFO, periodic BFO is employed as the basic algorithmic framework and then transferred into a multi-objective type. Second, two learning strategies are proposed based on the two learning archives to guide the bacterial swarm to move in a better direction. On the one hand, the global best is selected from the global learning archive according to the convergence index and diversity index. On the other hand, the personal best is selected from the personal learning archive according to the sum of weighted objectives. According to the aforementioned learning strategies, a chemotaxis operation is designed. Third, an elite learning strategy is designed to provide fresh power to the objects in the two learning archives. When the objects in these two archives do not change for two consecutive iterations, randomly initializing one dimension of the objects can prevent the proposed algorithm from falling into local optima. Fourth, to validate the performance of the proposed algorithm, DC-MPBFOLA is compared with four state-of-the-art evolutionary multi-objective optimization algorithms and one classical clustering algorithm on the evaluation indexes of the datasets. To further verify the effectiveness and feasibility of the designed strategies in DC-MPBFOLA, variants of DC-MPBFOLA are also proposed. Experimental results demonstrate that DC-MPBFOLA outperforms its competitors regarding all evaluation indexes and clustering partitions. These results also indicate that the designed strategies positively influence the performance improvement of the original BFO.
Keywords: data clustering, multi-objective optimization, bacterial foraging optimization, learning archives
Procedia PDF Downloads 139
17990 Modified Weibull Approach for Bridge Deterioration Modelling
Authors: Niroshan K. Walgama Wellalage, Tieling Zhang, Richard Dwight
Abstract:
State-based Markov deterioration models (SMDM) sometimes fail to find accurate transition probability matrix (TPM) values and hence lead to invalid future condition predictions or incorrect average deterioration rates, mainly due to drawbacks of existing nonlinear optimization-based algorithms and/or the subjective function types used for regression analysis. Furthermore, a set of separate functions for each condition state with age cannot be directly derived by using a Markov model for a given bridge element group, which however is of interest to industrial partners. This paper presents a new approach for generating homogeneous SMDM model output, namely the modified Weibull approach, which consists of a set of appropriate functions to describe the percentage condition prediction of bridge elements in each state. These functions are combined with a Bayesian approach and a Metropolis-Hastings algorithm (MHA) based Markov Chain Monte Carlo (MCMC) simulation technique for quantifying the uncertainty in model parameter estimates. In this study, factors contributing to rail bridge deterioration were identified. The inspection data for 1,000 Australian railway bridges over 15 years were reviewed and filtered accordingly, based on real operational experience. A network-level deterioration model for a typical bridge element group was developed using the proposed modified Weibull approach. The condition state predictions obtained from this method were validated using statistical hypothesis tests with a test data set. Results show that the proposed model is able not only to predict conditions at the network level accurately but also to capture the model uncertainties within the given confidence interval.
Keywords: bridge deterioration modelling, modified Weibull approach, MCMC, Metropolis-Hastings algorithm, Bayesian approach, Markov deterioration models
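A minimal random-walk Metropolis-Hastings sampler, the core of the MCMC step described above, can be sketched as follows; the Gaussian toy log-posterior stands in for the study's actual deterioration-model posterior, and all values are hypothetical.

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_iter=10_000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, step^2) and
    accept with probability min(1, post(x')/post(x))."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = []
    for _ in range(n_iter):
        prop = x + step * rng.normal(size=x.shape)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # acceptance test
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)

# Hypothetical target: posterior over a Weibull shape/scale pair, standing
# in for the deterioration-model parameters (here a simple Gaussian).
def log_post(theta):
    return -0.5 * np.sum((theta - np.array([1.5, 20.0]))**2
                         / np.array([0.1, 4.0]))

chain = metropolis_hastings(log_post, x0=[1.0, 15.0])
print(chain.mean(axis=0))
```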
Procedia PDF Downloads 727
17989 Distributed System Computing Resource Scheduling Algorithm Based on Deep Reinforcement Learning
Authors: Yitao Lei, Xingxiang Zhai, Burra Venkata Durga Kumar
Abstract:
As the quantity and complexity of computing in large-scale software systems increase, distributed system computing becomes increasingly important. A distributed system realizes high-performance computing through collaboration between different computing resources. Without efficient resource scheduling, the misuse of distributed computing may cause resource waste and high costs. However, resource scheduling is usually an NP-hard problem, so no general solution can be found. Some optimization algorithms exist, such as the genetic algorithm, ant colony optimization, etc., but the large scale of distributed systems makes these traditional optimization algorithms challenging to work with. Heuristic and machine learning algorithms are usually applied in this situation to ease the computing load. As a result, we review traditional resource scheduling optimization algorithms and introduce a deep reinforcement learning method that utilizes the perceptual ability of neural networks and the decision-making ability of reinforcement learning. Using the machine learning method, we try to find important factors that influence the performance of distributed system computing and help the distributed system perform efficient computing resource scheduling. This paper surveys the application of deep reinforcement learning to distributed system computing resource scheduling, proposes a deep reinforcement learning method that uses a recurrent neural network to optimize resource scheduling, and presents the challenges and improvement directions for DRL-based resource scheduling algorithms.
Keywords: resource scheduling, deep reinforcement learning, distributed system, artificial intelligence
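As a sketch of the recurrent-network idea (assumptions: PyTorch, a made-up state dimension and node count, and no training loop), a minimal policy that maps a sequence of cluster-state vectors to a distribution over scheduling actions could look like this; the reward signal and policy-gradient update are omitted.

```python
import torch
import torch.nn as nn

class SchedulerPolicy(nn.Module):
    """Minimal recurrent policy: consumes a sequence of cluster-state
    vectors and emits a probability over which node receives the next task."""
    def __init__(self, state_dim=8, hidden=64, n_nodes=4):
        super().__init__()
        self.rnn = nn.GRU(state_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_nodes)

    def forward(self, states):              # states: (batch, time, state_dim)
        out, _ = self.rnn(states)
        return torch.softmax(self.head(out[:, -1]), dim=-1)

policy = SchedulerPolicy()
batch = torch.randn(2, 10, 8)               # 2 trajectories, 10 time steps
probs = policy(batch)                       # (2, 4) action distribution
action = torch.multinomial(probs, 1)        # sample a target node per trajectory
```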
Procedia PDF Downloads 111
17988 Design and Optimization of Flow Field for Cavitation Reduction of Valve Sleeves
Authors: Kamal Upadhyay, Zhou Hua, Yu Rui
Abstract:
This paper aims to improve the streamlines associated with the flow field and the cavitation on the valve sleeve. We observed that local pressure fluctuation produces a low-pressure zone; the resulting vapor volume fraction within the valve chamber leads to air bubbles (or cavities), which in turn have a severe negative impact on the inner surface and lifespan of the valve sleeves. Cavitation reduction is therefore a vitally important issue for pressure control valves. An optimization of the flow field is proposed in this paper to reduce the cavitation of valve sleeves. In this method, the inner wall of the valve sleeve is changed from a cylindrical surface to a conical surface, leading to a decline in the fluid flow velocity and a rise in the outlet pressure. In addition, the streamlines are distributed uniformly inside the sleeve, so bubble generation is lessened. The fluid models are built, and analysis of the flow field distribution, pressure, vapor volume, and velocity was carried out using computational fluid dynamics (CFD) and numerical techniques. The results indicate that this structure can suppress the cavitation of valve sleeves effectively.
Keywords: streamline, cavitation, optimization, computational fluid dynamics
Procedia PDF Downloads 145
17987 On the Implementation of The Pulse Coupled Neural Network (PCNN) in the Vision of Cognitive Systems
Authors: Hala Zaghloul, Taymoor Nazmy
Abstract:
One of the great challenges of the 21st century is to build a robot that can perceive and act within its environment and communicate with people, while also exhibiting the cognitive capabilities that lead to performance like that of people. The Pulse Coupled Neural Network (PCNN) is a relatively new ANN model derived from a mammalian neural model, with great potential in the area of image processing as well as target recognition, feature extraction, speech recognition, combinatorial optimization, and compressed encoding. PCNN has a unique feature among other types of neural networks, which makes it a candidate to be an important approach for perception in cognitive systems. This work shows and emphasizes the potential of PCNN to perform different tasks related to image processing. The main drawback, or the obstacle that prevents the direct implementation of such a technique, is the need to find a way to control the PCNN parameters so that a specific task can be performed. This paper evaluates the performance of the standard PCNN model in processing images with different properties, selects the important parameters that give significant results, and discusses approaches towards adapting the PCNN parameters to perform a specific task.
Keywords: cognitive system, image processing, segmentation, PCNN kernels
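A minimal, simplified PCNN iteration (feeding reduced to the static stimulus; the linking kernel and decay constants are hypothetical) illustrates the parameter-control problem the abstract raises: beta, the threshold decay, and the kernel jointly determine which pixels pulse together, and hence the segmentation.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_segment(img, n_iter=10, beta=0.2, alpha_theta=0.2, v_theta=20.0):
    """Simplified PCNN: internal activity U couples the pixel stimulus with
    linking input from neighbouring pulses; a neuron fires when U exceeds a
    decaying threshold that jumps after each firing."""
    S = img.astype(float) / img.max()           # normalized stimulus
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    Y = np.zeros_like(S)                        # pulse output
    theta = np.ones_like(S)                     # dynamic threshold
    fire_time = np.zeros_like(S)                # iteration of first firing
    for n in range(1, n_iter + 1):
        L = convolve(Y, kernel, mode="constant")   # linking from neighbours
        U = S * (1.0 + beta * L)                   # modulated internal activity
        Y = (U > theta).astype(float)              # pulse generation
        theta = np.exp(-alpha_theta) * theta + v_theta * Y
        fire_time[(fire_time == 0) & (Y == 1)] = n
    return fire_time                            # similar regions fire together

segments = pcnn_segment(np.random.rand(64, 64))
```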
Procedia PDF Downloads 280
17986 The Environmental Impact of Sustainability Dispersion of Chlorine Releases in Coastal Zone of Alexandra: Spatial-Ecological Modeling
Authors: Mohammed El Raey, Moustafa Osman Mohammed
Abstract:
Spatial-ecological modeling relates sustainable dispersion to social development. Sustainability within a spatial-ecological model gives attention to urban environments in design review management, to comply with the Earth's system. Naturally, the exchange patterns of ecosystems have consistent and periodic cycles that preserve energy flows and materials in the Earth's system. The probabilistic risk assessment (PRA) technique is utilized to assess the safety of the industrial complex. The other analytical approach is Failure Mode and Effects Analysis (FMEA) for critical components. The plant safety parameters are identified for the engineering topology employed in the safety assessment of industrial ecology. In particular, the most severe accidental release of hazardous gases is postulated, analyzed, and assessed in the industrial region. The IAEA safety assessment procedure is used to account for the duration and rate of discharge of liquid chlorine. The ecological model of the plume dispersion width and of the chlorine gas concentration in the downwind direction is determined using the Gaussian plume model in urban and rural areas and presented with SURFER®. The prediction of accident consequences is traced in risk contour concentration lines. The local greenhouse effect is predicted, with relevant conclusions. The spatial-ecological model also predicts the distribution schemes from the perspective of pollutants, considering multiple factors in a multi-criteria analysis. The data extend input-output analysis to evaluate the spillover effect, and Monte Carlo simulations and sensitivity analysis are conducted. These unique structures are balanced within "equilibrium patterns", such as the biosphere, and collectively form a composite index of many distributed feedback flows. These dynamic structures are related to their physical and chemical properties and enable a gradual and prolonged incremental pattern. While this spatial model structure argues from ecology, resource savings, static load design, and financial and other pragmatic reasons, the outcomes are not decisive from an artistic/architectural perspective. The hypothesis is an attempt to unify analytic and analogical spatial structure for developing urban environments using optimization software, applied as an example of an integrated industrial structure where the process is based on engineering topology as an optimization approach to systems ecology.
Keywords: spatial-ecological modeling, spatial structure orientation impact, composite structure, industrial ecology
Procedia PDF Downloads 80
17985 Removal of Chromium (VI) from Aqueous Solution by Teff (Eragrostis Teff) Husk Activated Carbon: Optimization, Kinetics, Isotherm, and Practical Adaptation Study Using Response Surface Methodology
Authors: Tsegaye Adane Birhan
Abstract:
Recently, rapid industrialization has led to the excessive release of heavy metals such as Cr (VI) into the environment. Exposure to chromium (VI) can cause kidney and liver damage, depressed immune systems, and a variety of cancers. Therefore, treatment of Cr (VI)-containing wastewater is mandatory. This study aims to optimize the removal of Cr (VI) from an aqueous solution using a locally available Teff husk-activated carbon adsorbent. The laboratory-based study was conducted on the optimization of the Cr (VI) removal efficiency of Teff husk-activated carbon from aqueous solution. A central composite design was used to examine the effect of the interaction of process parameters and to optimize the process, using Design Expert version 7.0 software. The optimized removal efficiency of Teff husk-activated carbon (95.597%) was achieved at pH 1.92, 87.83 mg/L initial concentration, 20.22 g/L adsorbent dose, and 2.07 h contact time. The adsorption of Cr (VI) on Teff husk-activated carbon was found to be best fitted by pseudo-second-order kinetics and the Langmuir isotherm model. Teff husk-activated carbon can thus be used as an efficient adsorbent for the removal of chromium (VI) from contaminated water. Column adsorption needs to be studied in the future.
Keywords: batch adsorption, chromium (VI), teff husk activated carbon, response surface methodology, tannery wastewater
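To illustrate the isotherm step, a minimal non-linear fit of the Langmuir model qe = qmax·KL·Ce/(1 + KL·Ce) is sketched below; the equilibrium data points are hypothetical, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Hypothetical equilibrium data: residual Cr(VI) concentration Ce (mg/L)
# versus amount adsorbed qe (mg/g).
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
qe = np.array([1.8, 3.4, 4.9, 6.1, 6.9])

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[8.0, 0.1])
print(f"qmax = {qmax:.2f} mg/g, KL = {KL:.3f} L/mg")
```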
Procedia PDF Downloads 8
17984 Soil Parameters Identification around PMT Test by Inverse Analysis
Authors: I. Toumi, Y. Abed, A. Bouafia
Abstract:
This paper presents a methodology for identifying cohesive soil parameters that takes different constitutive equations into account. The procedure, applied to identify the parameters of the generalized Prager model associated with the Drucker-Prager failure criterion from a pressuremeter expansion curve, is based on an inverse analysis approach, which consists of minimizing the function representing the difference between the experimental curve and the simulated curve using a simplex algorithm. The model response on the pressuremeter path and its identification from experimental data lead to the determination of the friction angle, the cohesion, and the Young's modulus. The effects of some parameters on the simulated curves and on the stress paths around the pressuremeter probe are presented. Comparisons between the parameters determined with the proposed method and those obtained by other means are also presented.
Keywords: cohesive soils, cavity expansion, pressuremeter test, finite element method, optimization procedure, simplex algorithm
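A minimal sketch of the inverse-analysis loop follows, using the Nelder-Mead simplex method; the analytic forward model and all parameter values are hypothetical stand-ins for the paper's finite element computation of the pressuremeter curve.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical forward model: simulated pressuremeter curve as a function of
# soil parameters theta = (E, c, phi); a stand-in analytic expression where
# the real study would call a finite element computation.
strain = np.linspace(0.01, 0.1, 10)
def simulate_curve(theta):
    E, c, phi = theta
    return c + E * strain / (1.0 + phi * strain)

theta_true = np.array([50.0, 30.0, 25.0])
p_measured = simulate_curve(theta_true)     # plays the experimental curve

def misfit(theta):
    # least-squares gap between experimental and simulated expansion curves
    return np.sum((p_measured - simulate_curve(theta))**2)

res = minimize(misfit, x0=[30.0, 10.0, 10.0], method="Nelder-Mead")
print(res.x)                                # recovered (E, c, phi)
```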
Procedia PDF Downloads 294
17983 Breast Cancer Prediction Using Score-Level Fusion of Machine Learning and Deep Learning Models
Authors: Sam Khozama, Ali M. Mayya
Abstract:
Breast cancer is one of the most common cancer types in women. Early prediction of breast cancer helps physicians detect cancer in its early stages. Big cancer data need a very powerful tool for analysis and for extracting predictions. Machine learning and deep learning are two of the most efficient tools for predicting cancer based on textual data. In this study, we developed a fusion model of a machine learning model and a deep learning model. To obtain the final prediction, Long Short-Term Memory (LSTM) and ensemble learning with hyperparameter optimization are used, and score-level fusion is applied. Experiments are done on the Breast Cancer Surveillance Consortium (BCSC) dataset after balancing and grouping the class categories. Five different training scenarios are used, and the tests show that the designed fusion model improved the performance by 3.3% compared to the individual models.
Keywords: machine learning, deep learning, cancer prediction, breast cancer, LSTM, fusion
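Score-level fusion as described above reduces to combining the two models' probability outputs; a minimal weighted-sum sketch, with hypothetical per-patient probabilities and a hypothetical fusion weight, follows.

```python
import numpy as np

def score_level_fusion(p_lstm, p_ensemble, w=0.5):
    """Weighted-sum score fusion of two probabilistic classifiers; w is a
    hypothetical fusion weight that would be tuned on validation data."""
    return w * np.asarray(p_lstm) + (1 - w) * np.asarray(p_ensemble)

# Hypothetical per-patient cancer probabilities from the two models.
p_lstm = [0.82, 0.10, 0.55]
p_ens = [0.74, 0.22, 0.61]
fused = score_level_fusion(p_lstm, p_ens, w=0.6)
predictions = (fused >= 0.5).astype(int)
print(fused, predictions)
```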
Procedia PDF Downloads 163
17982 Physical Parameters Influencing the Yield of Nigella Sativa Oil Extracted by Hydraulic Pressing
Authors: Hadjadj Naima, K. Mahdi, D. Belhachat, F. S. Ait Chaouche, A. Ferradji
Abstract:
The yield of Nigella sativa oil extracted by hydraulic pressing is influenced by pressure, temperature, and particle size. The optimization of the oil extraction is investigated. The extraction rate of whole seeds is very low; crushing the seeds is necessary to facilitate the extraction. This rate increases with rising temperature and pressure and with decreasing particle size. The best yield (66%) is obtained for a granulometry lower than 1 mm, a temperature of 50°C, and a pressure of 120 bars.
Keywords: oil, Nigella sativa, extraction, optimization, temperature, pressure
Procedia PDF Downloads 480