Search results for: Multi Objective Particle Swarm Optimization (MOPSO)

13448 On Multiobjective Optimization to Improve the Scalability of Fog Application Deployments Using FogTorch

Authors: Suleiman Aliyu

Abstract:

Integrating IoT applications with Fog systems presents challenges in optimization due to diverse environments and conflicting objectives. This study explores achieving Pareto-optimal deployments for Fog-based IoT systems to address growing QoS demands. We introduce Pareto optimality to balance competing performance metrics. Using the FogTorch optimization framework, we propose a hybrid approach (backtracking search with branch and bound) for scalable IoT deployments. Our research highlights the advantages of Pareto optimality over single-objective methods and emphasizes the role of FogTorch in this context. Initial results show improvements in IoT deployment cost in Fog systems, promoting resource-efficient strategies.
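
To make the Pareto-optimality criterion the abstract relies on concrete, here is a minimal non-dominated filter over candidate deployments. It is a sketch only: the deployment names and objective values are invented, and this is not the FogTorch implementation.

```python
# Minimal sketch: Pareto-optimal filtering of candidate deployments.
# Each candidate is scored on objectives to MINIMIZE (e.g., cost, latency).
# Hypothetical data; not the FogTorch implementation.

def dominates(a, b):
    """True if a is no worse than b on every objective and better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o["obj"], c["obj"]) for o in candidates if o is not c)]

deployments = [
    {"name": "D1", "obj": (12.0, 40.0)},   # (cost, latency in ms)
    {"name": "D2", "obj": (15.0, 25.0)},
    {"name": "D3", "obj": (16.0, 45.0)},   # dominated by D1 on both objectives
]
print([d["name"] for d in pareto_front(deployments)])  # -> ['D1', 'D2']
```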

Keywords: pareto optimality, fog application deployment, resource allocation, internet of things

Procedia PDF Downloads 50
13447 Parameter Selection for Computationally Efficient Use of the BFVrns Fully Homomorphic Encryption Scheme

Authors: Cavidan Yakupoglu, Kurt Rohloff

Abstract:

In this study, we aim to provide a novel parameter selection model for the BFVrns scheme, which is one of the prominent FHE schemes. Parameter selection in lattice-based FHE schemes is a practical challenge for experts and non-experts alike. Towards a solution to this problem, we introduce a hybrid principles-based approach that combines theoretical and experimental analyses. To begin, we use regression analysis to examine the effect of the parameters on performance and security. The fact that the FHE parameters induce different behaviors in performance, security, and Ciphertext Expansion Factor (CEF) makes the process of parameter selection more challenging. To address this issue, we use a multi-objective optimization algorithm to select the optimum parameter set for performance, CEF, and security at the same time. As a result of this optimization, we obtain an improved parameter set with better performance at a given security level, ensuring correctness and security against lattice attacks by providing at least 128-bit security. Our result enables an average of ~5x smaller CEF and mostly better performance in comparison to the parameter sets given in [1]. This approach can be considered a semi-automated parameter selection. These studies are conducted using the PALISADE homomorphic encryption library, which is a well-known HE library.

Keywords: lattice cryptography, fully homomorphic encryption, parameter selection, LWE, RLWE

Procedia PDF Downloads 123
13446 Cylindrical Spacer Shape Optimization for Enhanced Inhalation Therapy

Authors: Shahab Azimi, Siamak Arzanpour, Anahita Sayyar

Abstract:

Asthma and chronic obstructive pulmonary disease (COPD) are common lung diseases with a significant global impact. Pressurized metered dose inhalers (pMDIs) are widely used for treatment, but they have limitations such as a high medication release speed, resulting in drug deposition in the oral cavity, and difficulty achieving proper synchronization between actuation and inhalation by users. Spacers are add-on devices that improve the efficiency of pMDIs by reducing the release speed and providing space for aerosol particle breakup, yielding finer and medically effective medication particles. The aim of this study is to optimize the size and cylindrical shape of spacers to enhance their drug delivery performance. The study was based on fluid dynamics theory and employed Ansys software for simulation and optimization. Results showed that optimization of the spacer's geometry greatly influenced its performance and improved drug delivery. This study provides a foundation for future research on enhancing the efficiency of inhalation therapy for lung diseases.

Keywords: asthma, COPD, pressurized metered dose inhalers, spacers, CFD, shape optimization

Procedia PDF Downloads 61
13445 Influence of Processing Parameters on the Reliability of Sieving as a Particle Size Distribution Measurement

Authors: Eseldin Keleb

Abstract:

In the pharmaceutical industry, particle size distribution is an important parameter for the characterization of pharmaceutical powders. The powder flowability, reactivity, and compatibility, which have a decisive impact on the final product, are determined by particle size and size distribution. Therefore, the aim of this study was to evaluate the influence of processing parameters on particle size distribution measurements. Different size fractions of α-lactose monohydrate and 5% polyvinylpyrrolidone were prepared by wet granulation and used for the preparation of samples. The influence of sieve load (50, 100, 150, 200, 250, 300, and 350 g), processing time (5, 10, and 15 min), sample size ratios (high percentages of small and large particles), type of disturbance (vibration and shaking), and process reproducibility was investigated. Results showed that a sieve load of 50 g produced the best separation; a further increase in sample weight resulted in incomplete separation even after extending the processing time to 15 min. Sieving using vibration was faster and more efficient than shaking. Between-day reproducibility showed that particle size distribution measurements are reproducible. However, for samples containing 70% fines or 70% large particles processed at the optimized parameters, incomplete separation was always observed. These results indicate that sieving reliability is highly influenced by the particle size distribution of the sample, and care must be taken with samples whose particle size distribution is skewed.

Keywords: sieving, reliability, particle size distribution, processing parameters

Procedia PDF Downloads 583
13444 A Numerical and Experimental Study on Fast Pyrolysis of Single Wood Particle

Authors: Hamid Rezaei, Xiaotao Bi, C. Jim Lim, Anthony Lau, Shahab Sokhansanj

Abstract:

A one-dimensional heat transfer model coupled with the kinetic information has been used to predict the overall pyrolysis mass loss of a single wood particle. The kinetic parameters were determined experimentally, and the regime and characteristics of the conversion were evaluated in terms of the particle size and reactor temperature. The order of the overall mass loss changed from n = 1 at temperatures lower than 350 °C to n = 0.5 at temperatures higher than 350 °C. Conversion time analysis showed that particles larger than 0.5 mm were controlled by internal thermal resistances. The valid range of particle size for using the simplified lumped model depends on the fluid temperature around the particles. The critical particle size was 0.6-0.7 mm for a fluid temperature of 500 °C and 0.9-1.0 mm for a fluid temperature of 100 °C. Experimental pyrolysis of moist particles did not show distinct drying and pyrolysis stages. The process was divided into two hypothetical drying- and pyrolysis-dominated zones, and empirical correlations were developed to predict the rate of mass loss in each zone.
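
The overall mass-loss kinetics described above can be written as dm/dt = -k·m^n, with n switching from 1 to 0.5 near 350 °C. A minimal sketch follows; the rate constant k is a placeholder, not the authors' fitted value.

```python
# Minimal sketch: overall pyrolysis mass loss dm/dt = -k * m**n, with the
# reaction order n = 1 below 350 C and n = 0.5 above, per the abstract.
# The rate constant k is a placeholder, not the authors' fitted value.
import numpy as np
from scipy.integrate import solve_ivp

def simulate(T_celsius, m0=1.0, k=0.05, t_end=60.0):
    """Integrate dm/dt = -k m^n; the order n depends on temperature."""
    n = 0.5 if T_celsius > 350 else 1.0
    return solve_ivp(lambda t, m: -k * np.maximum(m, 0.0) ** n,
                     (0.0, t_end), [m0], dense_output=True)

sol = simulate(500)                 # fast pyrolysis regime, n = 0.5
print(float(sol.sol(30.0)[0]))      # remaining mass fraction at t = 30 s
```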

Keywords: pyrolysis, kinetics, model, single particle

Procedia PDF Downloads 292
13443 Investigating Kinetics and Mathematical Modeling of Batch Clarification Process for Non-Centrifugal Sugar Production

Authors: Divya Vats, Sanjay Mahajani

Abstract:

The clarification of sugarcane juice plays a pivotal role in the production of non-centrifugal sugar (NCS), profoundly influencing the quality of the final NCS product. In this study, we have investigated the kinetics and mathematical modeling of the batch clarification process. The turbidity of the clarified cane juice (NTU) emerges as the determinant of the end product's color. Moreover, this parameter underscores the significance of considering other variables as performance indicators for assessing the efficacy of the clarification process. Temperature-controlled experiments were meticulously conducted in a laboratory-scale batch mode. The primary objective was to discern the essential and optimized parameters crucial for augmenting the clarity of cane juice. Additionally, we explored the impact of pH and flocculant loading on the kinetics. Particle Image Velocimetry (PIV) was employed to comprehend the particle-particle and fluid-particle interactions. This technique facilitated a comprehensive understanding, paving the way for the subsequent multiphase computational fluid dynamics (CFD) simulations using the Eulerian-Lagrangian approach in ANSYS Fluent. Impressively, these simulations accurately replicated comparable velocity profiles. The mechanism identified in this study supports a mathematical model and presents a valuable framework for transitioning from the traditional batch process to a continuous process. The ultimate aim is to attain heightened productivity and unwavering consistency in product quality.

Keywords: non-centrifugal sugar, particle image velocimetry, computational fluid dynamics, mathematical modeling, turbidity

Procedia PDF Downloads 44
13442 A Simplified, Fabrication-Friendly Acoustophoretic Model for Size Sensitive Particle Sorting

Authors: V. Karamzadeh, J. Adhvaryu, A. Chandrasekaran, M. Packirisamy

Abstract:

In Bulk Acoustic Wave (BAW) microfluidics, the throughput of particle sorting is dependent on the complex interplay between the geometric configuration of the channel, the size of the particles, and the properties of the fluid medium, which therefore calls for detailed modeling and understanding of the fluid-particle interaction dynamics under an acoustic field prior to designing the system. In this work, we propose a simplified bulk acoustophoretic system that can be used for size-dependent particle sorting. A Finite Element Method (FEM) based model has been developed to study how particle size and channel parameters affect sorting efficiency in a given fluid medium. Based on the results, the microfluidic system has been designed to take into account all the variables involved in the underlying physics, and has been fabricated using an additive manufacturing technique employing a commercial 3D printer, to generate a simple, cost-effective system that can be used for size-sensitive particle sorting.

Keywords: 3D printing, 3D microfluidic chip, acoustophoresis, cell separation, MEMS (Microelectromechanical Systems), microfluidics

Procedia PDF Downloads 146
13441 Effect of Alloying Elements on Particle Incorporation of Boron Carbide Reinforced Aluminum Matrix Composites

Authors: Steven Ploetz, Andreas Lohmueller, Robert F. Singer

Abstract:

The outstanding performance of aluminum matrix composites (AMCs) regarding the stiffness/weight ratio makes AMCs an attractive material for lightweight construction. Low-density boride compounds promise a simultaneous increase in stiffness and decrease in composite density, which is why boron carbide was chosen for composite manufacturing. The composites are fabricated with the stir casting process. To avoid gas entrapment during mixing and ensure nonporous composites, a partial vacuum is applied during particle feeding and stirring. Poor wettability of boron carbide with liquid aluminum hinders particle incorporation, but alloying elements such as magnesium and titanium can improve wettability and thus particle incorporation. Next to alloying elements, adapted stirring parameters and impeller geometries improve particle incorporation and enable homogeneous particle distribution and high particle volume fractions of boron carbide. AMCs with up to 15 vol.% of boron carbide particles are produced via melt stirring, resulting in an increase in stiffness and strength.

Keywords: aluminum matrix composites, boron carbide, stiffness, stir casting

Procedia PDF Downloads 290
13440 Geometric Optimization of Catalytic Converter

Authors: P. Makendran, M. Pragadeesh, N. Narash, N. Manikandan, A. Rajasri, V. Sanal Kumar

Abstract:

The growing severity of government-mandated emissions legislation has required continuous improvement in catalyst performance and the associated reactor systems. IC engines emit harmful gases into the atmosphere. These gases are toxic in nature, and a catalytic converter is used to convert them into less harmful gases through oxidation and reduction reactions. Stoichiometric engines usually use the three-way catalyst (TWC) for simultaneously destroying all of the emissions: CO and NO react to form CO2 and N2 over one catalyst, and the remaining CO and HC are oxidized over a subsequent one. The literature review reveals that precious metals are typically used as the catalyst. The actual reactor is composed of a washcoated honeycomb-style substrate, with the catalyst contained in the washcoat. The main disadvantage of a catalytic converter is that it imposes a back pressure on the exhaust gases passing through it. The objective of this paper is to minimize the back pressure developed by the catalytic converter through geometric optimization. This can be achieved by designing a catalyst with an optimum cone angle and a larger surface area of the catalyst substrate. Additionally, the arrangement of the pores in the catalyst substrate can be changed. The numerical studies have been carried out using the k-omega turbulence model with varying inlet angle of the catalytic converter and length of the catalyst substrate. We observed that geometry optimization is a meaningful objective for the cost-effective design of a catalytic converter for industrial applications.

Keywords: catalytic converter, emission control, reactor systems, substrate for emission control

Procedia PDF Downloads 878
13439 Feature Selection for Production Schedule Optimization in Transition Mines

Authors: Angelina Anani, Ignacio Ortiz Flores, Haitao Li

Abstract:

The use of underground mining methods has increased significantly over the past decades. This increase has also been spurred on by several mines transitioning from surface to underground mining. However, determining the transition depth can be a challenging task, especially when coupled with production schedule optimization. Several researchers have simplified the problem by excluding operational features relevant to production schedule optimization. Our research objective is to investigate the extent to which the operational features of transition mines affect the optimal production schedule. We also provide a framework for the factors to consider in production schedule optimization for transition mines. An integrated mixed-integer linear programming (MILP) model is developed that maximizes the NPV as a function of the production schedule and transition depth. A case study is performed to validate the model, with a comparative sensitivity analysis to obtain operational insights.
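
As a hedged, toy-scale illustration of the kind of NPV-maximizing MILP the abstract describes (block names, values, and capacities are invented; the authors' formulation with transition depth is far richer):

```python
# Toy MILP sketch: schedule two mining blocks over two periods to maximize
# discounted value, at most one block per period. All numbers are invented.
import pulp

blocks, periods = ["B1", "B2"], [1, 2]
value = {"B1": 100.0, "B2": 80.0}           # undiscounted block value
rate = 0.10                                  # discount rate per period

x = pulp.LpVariable.dicts("mine", (blocks, periods), cat="Binary")
prob = pulp.LpProblem("transition_npv", pulp.LpMaximize)
prob += pulp.lpSum(value[b] / (1 + rate) ** t * x[b][t]
                   for b in blocks for t in periods)
for b in blocks:                             # each block mined at most once
    prob += pulp.lpSum(x[b][t] for t in periods) <= 1
for t in periods:                            # capacity: one block per period
    prob += pulp.lpSum(x[b][t] for b in blocks) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(b, t) for b in blocks for t in periods if x[b][t].value() == 1])
```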

Keywords: underground mining, transition mines, mixed-integer linear programming, production schedule

Procedia PDF Downloads 139
13438 Green Closed-Loop Supply Chain Network Design Considering Different Production Technologies Levels and Transportation Modes

Authors: Mahsa Oroojeni Mohammad Javad

Abstract:

The globalization of economic activity and the rapid growth of information technology have resulted in shorter product lifecycles, reduced transport capacity, dynamic and changing customer behaviors, and an increased focus on supply chain design in recent years. The design of the supply chain network is one of the most important supply chain management decisions. These decisions will have a long-term impact on the efficacy and efficiency of the supply chain. In this paper, a two-objective mixed-integer linear programming (MILP) model is developed for designing and optimizing a closed-loop green supply chain network that, to the greatest extent possible, includes real-world assumptions such as a multi-level supply chain, a multiplicity of production technologies, and multiple modes of transportation, with the goals of minimizing the total cost of the chain (first objective) and minimizing total emissions (second objective). The ε-constraint method and the CPLEX solver have been used to solve the problem as a single-objective problem and to validate the model. Finally, a sensitivity analysis is applied to study the effect of changes in the real-world parameters on the objective functions. Optimal management suggestions and policies are presented.
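
A hedged sketch of the ε-constraint method mentioned above, on a generic two-objective toy LP rather than the paper's network model: the second objective (emissions) is capped at successively tighter ε values while cost is minimized, tracing the Pareto front.

```python
# Sketch of the epsilon-constraint method: minimize cost while capping
# emissions at successively tighter epsilon values, tracing a Pareto front.
# Generic toy LP, not the paper's closed-loop network model.
import pulp

def solve_for_eps(eps):
    x = pulp.LpVariable("x", lowBound=0)    # e.g., units shipped by road
    y = pulp.LpVariable("y", lowBound=0)    # e.g., units shipped by rail
    prob = pulp.LpProblem("eps_constraint", pulp.LpMinimize)
    prob += 2 * x + 3 * y                   # objective 1: total cost
    prob += x + y >= 10                     # demand must be met
    prob += 5 * x + 2 * y <= eps            # objective 2 held as a constraint
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(prob.objective), x.value(), y.value()

for eps in (50, 35, 20):                    # tighter and tighter emission caps
    print(eps, solve_for_eps(eps))          # cost rises as the cap tightens
```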

Keywords: closed-loop supply chain, multi-level green supply chain, mixed-integer programming, transportation modes

Procedia PDF Downloads 50
13437 Economic Optimization of Shell and Tube Heat Exchanger Using Nanofluid

Authors: Hassan Hajabdollahi

Abstract:

Economic optimization of a shell and tube heat exchanger (STHE) is presented in this paper. To increase the rate of heat transfer, copper oxide (CuO) nanoparticles are added to the tube-side fluid, and the optimum results are compared with the case without additive nanoparticles. Total annual cost (TAC) is selected as the fitness function, and nine decision variables related to the heat exchanger parameters as well as the nanoparticle concentration are considered. Optimization results reveal a noticeable improvement (8.9%) in the TAC for the heat exchanger working with nanofluid compared with the case of the base fluid. Comparison of the results between the two studied cases also reveals that a lower tube diameter, tube number, and baffle spacing are needed in the case of the heat exchanger working with nanofluid compared with the base fluid.

Keywords: shell and tube heat exchanger, nanoparticles additive, total annual cost, particle volumetric concentration

Procedia PDF Downloads 392
13436 Thermal Energy Storage Based on Molten Salts Containing Nano-Particles: Dispersion Stability and Thermal Conductivity Using Multi-Scale Computational Modelling

Authors: Bashar Mahmoud, Lee Mortimer, Michael Fairweather

Abstract:

New methods have recently been introduced to improve the thermal property values of molten nitrate salts (a binary mixture of NaNO3:KNO3 in 60:40 wt.%) by doping them with minute concentrations of nanoparticles in the range of 0.5 to 1.5 wt.% to form the so-called nano-heat-transfer fluid, suited to thermal energy transfer and storage applications. The present study aims to assess the stability of these nanofluids using an advanced computational modelling technique, Lagrangian particle tracking. A multi-phase solid-liquid model is used, where the motion of the embedded nanoparticles in the suspending fluid is treated by an Euler-Lagrange hybrid scheme with fixed time stepping. This technique enables measurement of various multi-scale forces whose characteristic length and time scales are quite different. Two systems are considered, both consisting of 50 nm Al2O3 ceramic nanoparticles suspended in fluids of different density ratios. This includes both water (5 to 95 °C) and molten nitrate salt (220 to 500 °C) at volume fractions ranging between 1% and 5%. Dynamic properties of both phases are coupled to the ambient temperature of the fluid suspension. The three-dimensional computational region consists of a 1 μm cube, and particles are homogeneously distributed across the domain. Periodic boundary conditions are enforced. The particle equations of motion are integrated using the fourth-order Runge-Kutta algorithm with a very small time step, Δts, set at 10⁻¹¹ s. The implemented technique captures the key dynamics of aggregated nanoparticles, involving Brownian motion, soft-sphere particle-particle collisions, and Derjaguin, Landau, Verwey, and Overbeek (DLVO) forces. These mechanisms underpin the predictive model of aggregation of nano-suspensions. An energy transport-based method of predicting the thermal conductivity of the nanofluids is also used to determine the thermal properties of the suspension. The simulation results confirm the effectiveness of the technique; the values are in excellent agreement with theoretical and experimental data obtained from similar studies. The predictions indicate the role of Brownian motion and the DLVO force (represented by both a repulsive electric double layer and an attractive van der Waals component) and their influence on the level of nanoparticle agglomeration, as well as the key role the resulting nano-aggregates play in governing the thermal behavior of nanofluids at various particle concentrations. The presentation will include a quantitative assessment of these forces and mechanisms, leading to conclusions about nanofluid heat transfer performance and thermal characteristics and their potential application in solar thermal energy plants.
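
A stripped-down sketch of the Lagrangian tracking step described above, with fourth-order Runge-Kutta integration of Stokes drag plus a Brownian kick; the DLVO and collision terms are omitted, and the parameter values are illustrative rather than the study's.

```python
# Minimal Lagrangian particle-tracking sketch: RK4 integration of a single
# nanoparticle under Stokes drag, plus a Brownian kick per step.
# Parameter values are illustrative, not those of the study.
import numpy as np

kB, T = 1.380649e-23, 300.0       # Boltzmann constant, temperature (K)
d = 50e-9                          # particle diameter (m)
rho_p = 3970.0                     # Al2O3 density (kg/m^3)
mu = 1e-3                          # fluid dynamic viscosity (Pa s), water-like
m = rho_p * np.pi * d**3 / 6.0
tau = m / (3.0 * np.pi * mu * d)   # particle relaxation time

def accel(v, u_fluid):
    return (u_fluid - v) / tau     # Stokes drag only

rng = np.random.default_rng(0)
dt, v, x = 1e-11, np.zeros(3), np.zeros(3)
u = np.zeros(3)                    # quiescent fluid
for _ in range(1000):
    k1 = accel(v, u)
    k2 = accel(v + 0.5 * dt * k1, u)
    k3 = accel(v + 0.5 * dt * k2, u)
    k4 = accel(v + dt * k3, u)
    v = v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    # Brownian velocity kick (fluctuation-dissipation, Euler-style)
    sigma = np.sqrt(2.0 * kB * T / (m * tau) * dt)
    v = v + sigma * rng.standard_normal(3)
    x = x + v * dt
print(x)                           # net Brownian displacement after 10 ns
```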

Keywords: thermal energy storage, molten salt, nano-fluids, multi-scale computational modelling

Procedia PDF Downloads 169
13435 A Robust Optimization for Multi-Period Lost-Sales Inventory Control Problem

Authors: Shunichi Ohmori, Sirawadee Arunyanart, Kazuho Yoshimoto

Abstract:

We consider a periodic-review inventory control problem of minimizing production cost, inventory cost, and lost sales under demand uncertainty, in which product demands are not specified exactly and are only known to belong to a given uncertainty set, yet the constraints must hold for all possible values of the data from the uncertainty set. We propose a robust optimization formulation for obtaining the lowest possible cost while guaranteeing feasibility with respect to the range of order quantities and inventory levels under demand uncertainty. Our formulation is based on the adaptive robust counterpart, which supposes that the order quantity is an affine function of past demands. We derive a certainty-equivalent problem via second-order cone programming, which gives a 'not too pessimistic' worst case.
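
A hedged sketch of the adaptive robust counterpart with an affine decision rule, on a toy two-period instance with box (interval) demand uncertainty; the costs and bounds are invented, and box uncertainty is used so that each robust constraint reduces to absolute-value terms.

```python
# Adaptive robust counterpart sketch with an affine decision rule:
# the period-2 order is q2 = q2_0 + Q * (d1 - d_bar), and each demand d_t
# lies in [d_bar - rho, d_bar + rho]. With box uncertainty, every robust
# constraint reduces to absolute-value terms. Toy numbers, invented.
import cvxpy as cp

d_bar, rho, I0, c = 10.0, 2.0, 5.0, 1.0
q1 = cp.Variable(nonneg=True)      # here-and-now order
q2_0 = cp.Variable(nonneg=True)    # affine-rule intercept
Q = cp.Variable()                  # affine-rule slope on observed demand d1

cons = [
    # I1 = I0 + q1 - d1 >= 0 for all d1
    I0 + q1 - d_bar - rho >= 0,
    # I2 = I0 + q1 + q2 - d1 - d2 >= 0 for all (d1, d2), q2 affine in d1
    I0 + q1 + q2_0 - 2 * d_bar - rho * cp.abs(Q - 1) - rho >= 0,
    # q2 >= 0 for all d1
    q2_0 - rho * cp.abs(Q) >= 0,
]
# Worst-case production cost c * (q1 + q2), maximized over d1
prob = cp.Problem(cp.Minimize(c * (q1 + q2_0) + c * rho * cp.abs(Q)), cons)
prob.solve()
print(round(q1.value, 2), round(q2_0.value, 2), round(Q.value, 2))
```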

Keywords: robust optimization, inventory control, supply chain management, second-order cone programming

Procedia PDF Downloads 381
13434 Parameter Optimization and Thermal Simulation in Laser Joining of Coach Peel Panels of Dissimilar Materials

Authors: Masoud Mohammadpour, Blair Carlson, Radovan Kovacevic

Abstract:

The quality of laser welded-brazed (LWB) joints is strongly dependent on the main process parameters; therefore, the effects of laser power (3.2–4 kW), welding speed (60–80 mm/s), and wire feed rate (70–90 mm/s) on mechanical strength and surface roughness were investigated in this study. A comprehensive optimization process by means of response surface methodology (RSM) and a desirability function was used for multi-criteria optimization. The experiments were planned based on a Box–Behnken design, implementing linear and quadratic polynomial equations for predicting the desired output properties. Finally, validation experiments were conducted at an optimized process condition, which exhibited good agreement between the predicted and experimental results. AlSi3Mn1 was selected as the filler material for joining aluminum alloy 6022 and hot-dip galvanized steel in the coach peel configuration. The high scanning speed could keep the intermetallic compound (IMC) layer as thin as 5 µm. Thermal simulations of the joining process were conducted by the Finite Element Method (FEM), and the results were validated through experimental data. The Fe/Al interfacial thermal history evidenced that the duration of the critical temperature range (700–900 °C) in this high-scanning-speed process was less than 1 s. This short interaction time leads to the formation of a reaction-controlled IMC layer instead of a diffusion-controlled mechanism.
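
A hedged sketch of the RSM step described above: fit a quadratic response surface to designed experiments, then locate the optimum on the fitted model. The design points and responses below are invented, not the study's measurements.

```python
# Sketch of the RSM idea: fit a quadratic response surface to designed
# experiments, then locate the optimum on the fitted model.
# The coded design points and responses are invented.
import numpy as np

# Coded factor levels (-1, 0, +1) for two factors, plus a measured response
X_raw = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                  [0, 0], [0, 0], [-1, 0], [1, 0], [0, -1], [0, 1]])
y = np.array([210, 240, 205, 260, 250, 252, 230, 255, 235, 238.0])  # strength

def quad_features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(quad_features(X_raw), y, rcond=None)

# Evaluate the fitted surface on a grid and report the best coded point
g = np.linspace(-1, 1, 41)
grid = np.array([[a, b] for a in g for b in g])
pred = quad_features(grid) @ beta
print(grid[np.argmax(pred)], pred.max())
```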

Keywords: laser welding-brazing, finite element, response surface methodology (RSM), multi-response optimization, cross-beam laser

Procedia PDF Downloads 333
13433 An Information-Based Approach for Preference Method in Multi-Attribute Decision Making

Authors: Serhat Tuzun, Tufan Demirel

Abstract:

Multi-Criteria Decision Making (MCDM) models real-life situations to solve the problems we encounter. It is a discipline that aids decision makers who are faced with conflicting alternatives in making an optimal decision. MCDM problems can be classified into two main categories, Multi-Attribute Decision Making (MADM) and Multi-Objective Decision Making (MODM), based on their different purposes and data types. Although various MADM techniques have been developed for the problems encountered, their methodology is limited in modelling real life. Moreover, objective results are hard to obtain, and the findings are generally derived from subjective data. Although new and modified techniques have been developed, presenting new approaches such as fuzzy logic, comprehensive techniques, even though they are better at modelling real life, have not found a place in real-world applications because they are hard to apply due to their complex structure. These constraints restrict the development of MADM. This study aims to conduct a comprehensive analysis of preference methods in MADM and to propose an information-based approach. For this purpose, a detailed literature review has been conducted, and current approaches with their advantages and disadvantages have been analyzed. Then, the approach is introduced. In this approach, the performance values of the criteria are calculated in two steps: first by determining the distribution of each attribute and standardizing them, then by calculating the information of each attribute as informational energy.
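
A hedged reading of the two-step computation just described, taking "informational energy" in Onicescu's sense (the sum of squared probabilities of an attribute's distribution) — an assumption, since the abstract does not spell the formula out:

```python
# Sketch: standardize an attribute's values into an empirical distribution,
# then compute its informational energy E = sum(p_i**2) (Onicescu's
# definition, assumed here; the abstract does not give the formula).
import numpy as np

def informational_energy(values, bins=5):
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std()    # standardize
    counts, _ = np.histogram(z, bins=bins)         # empirical distribution
    p = counts / counts.sum()
    return float(np.sum(p ** 2))

# One column per attribute; rows are alternatives (invented numbers)
scores = np.array([[3.1, 200], [2.9, 180], [3.5, 260], [3.0, 210], [2.7, 150]])
print([informational_energy(scores[:, j]) for j in range(scores.shape[1])])
```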

Keywords: literature review, multi-attribute decision making, operations research, preference method, informational energy

Procedia PDF Downloads 196
13432 Computationally Efficient Electrochemical-Thermal Li-Ion Cell Model for Battery Management System

Authors: Sangwoo Han, Saeed Khaleghi Rahimian, Ying Liu

Abstract:

Vehicle electrification is gaining momentum, and many car manufacturers promise to deliver more electric vehicle (EV) models to consumers in the coming years. In controlling the battery pack, the battery management system (BMS) must maintain optimal battery performance while ensuring the safety of the battery pack. Tasks related to battery performance include determining state-of-charge (SOC), state-of-power (SOP), state-of-health (SOH), cell balancing, and battery charging. Safety-related functions include making sure cells operate within specified static and dynamic voltage windows and temperature ranges, derating power, detecting faulty cells, and warning the user if necessary. The BMS often utilizes an RC circuit model to model a Li-ion cell because of its robustness and low computation cost, among other benefits. Because an equivalent circuit model such as the RC model is not a physics-based model, it can never be a prognostic model that predicts battery state-of-health and avoids a safety risk before it occurs. A physics-based Li-ion cell model, on the other hand, is more capable, at the expense of computation cost. To avoid the high computation cost associated with a full-order model, many researchers have demonstrated the use of a single particle model (SPM) for BMS applications. One drawback associated with the single particle modeling approach is that it forces the use of the average current density in the calculation. The SPM would be appropriate for simulating drive cycles where there is insufficient time to develop a significant current distribution within an electrode. However, under a continuous or high-pulse electrical load, the model may fail to predict cell voltage or Li⁺ plating potential. To overcome this issue, a multi-particle reduced-order model is proposed here. The use of multiple particles combined with either linear or nonlinear charge-transfer reaction kinetics makes it possible to capture the current density distribution within an electrode under any type of electrical load. To keep the computational complexity comparable to that of an SPM, the governing equations are solved sequentially to minimize iterative solving processes. Furthermore, the model is validated against a full-order model implemented in COMSOL Multiphysics.
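
For contrast with the physics-based models discussed above, a minimal sketch of the 1-RC equivalent circuit model that BMSs commonly use; all parameter values are invented, not fitted to a real cell.

```python
# Minimal 1-RC equivalent-circuit cell model: V = OCV(SOC) - I*R0 - V_rc,
# with the RC-pair voltage relaxing as dV_rc/dt = -V_rc/(R1*C1) + I/C1.
# Parameters are illustrative, not fitted to any real cell.
import numpy as np

R0, R1, C1, Q_Ah = 0.01, 0.015, 2400.0, 5.0   # ohm, ohm, farad, amp-hour

def ocv(soc):                                  # crude linear OCV curve
    return 3.0 + 1.2 * soc

def simulate(I, dt=1.0, soc0=0.9, steps=600):
    soc, v_rc, out = soc0, 0.0, []
    for _ in range(steps):
        soc -= I * dt / (Q_Ah * 3600.0)        # coulomb counting
        v_rc += dt * (-v_rc / (R1 * C1) + I / C1)
        out.append(ocv(soc) - I * R0 - v_rc)   # terminal voltage
    return np.array(out), soc

v, soc = simulate(I=5.0)                       # 1C discharge for 10 minutes
print(v[0], v[-1], soc)
```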

Keywords: battery management system, physics-based li-ion cell model, reduced-order model, single-particle and multi-particle model

Procedia PDF Downloads 86
13431 Advanced Technologies for Detector Readout in Particle Physics

Authors: Y. Venturini, C. Tintori

Abstract:

Given the continuous demand for improved readout performance in particle and dark matter physics, CAEN SpA is pushing the development of advanced technologies for detector readout. We present Digitizers 2.0, the result of the success of the previous Digitizer generation, combined with expanded capabilities and a renovated user experience introducing the open FPGA. The first product of the family is the VX2740 (64 ch, 125 MS/s, 16 bit) for advanced waveform recording and Digital Pulse Processing, fitting the special requirements of dark matter and neutrino experiments. In parallel, CAEN is developing the FERS-5200 platform, a Front-End Readout System designed to read out large multi-detector arrays, such as SiPMs, multi-anode PMTs, silicon strip detectors, wire chambers, GEMs, gas tubes, and others. This is a highly scalable distributed platform, based on small front-end cards synchronized and read out by a concentrator board, allowing extremely large experimental setups to be built. We plan to develop a complete family of cost-effective front-end cards tailored to specific detectors and applications. The first one available is the A5202, a 64-channel unit for SiPM readout based on the CITIROC ASIC by Weeroc.

Keywords: dark matter, digitizers, front-end electronics, open FPGA, SiPM

Procedia PDF Downloads 101
13430 Mathematical Modeling Pressure Losses of Trapezoidal Labyrinth Channel and Bi-Objective Optimization of the Design Parameters

Authors: Nina Philipova

Abstract:

The influence of the geometric parameters of a trapezoidal labyrinth channel on the pressure losses along the labyrinth length is investigated in this work. The impact of the dentate height is studied at fixed values of the dentate angle and the dentate spacing. The objective of the work presented in this paper is to derive a mathematical model of the pressure losses along the labyrinth length as a function of the dentate height. The numerical simulations of the water flow are performed using the commercial codes ANSYS GAMBIT and FLUENT. The dripper inlet pressure is set to 1 bar. As a result, the mathematical model of the pressure losses is determined as a second-order polynomial by means of the commercial code STATISTICA. Bi-objective optimization is performed using the mean algebraic utility function. The optimum value of the dentate height is defined at fixed values of the dentate angle and the dentate spacing. The derived model of the pressure losses and the optimum value of the dentate height serve as a basis for a more successful emitter design.
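
A hedged sketch of the two modeling steps just described: fitting a second-order polynomial for pressure loss versus dentate height, then aggregating two normalized objectives with a mean utility function. The data points and the second objective are invented stand-ins for the paper's.

```python
# Sketch: (1) fit pressure loss vs. dentate height with a 2nd-order
# polynomial, (2) combine two normalized objectives by their mean utility
# and pick the best height. All numbers are invented, not the paper's data.
import numpy as np

h = np.array([0.6, 0.8, 1.0, 1.2, 1.4])        # dentate height, mm
dp = np.array([0.42, 0.36, 0.33, 0.34, 0.39])  # pressure loss, bar
coeffs = np.polyfit(h, dp, 2)                  # second-order polynomial fit

hh = np.linspace(h.min(), h.max(), 200)
loss = np.polyval(coeffs, hh)                  # objective 1 (minimize)
clog = 1.0 / hh                                # objective 2 (invented proxy)

def utility(v):                                # normalize: 1 = best, 0 = worst
    return (v.max() - v) / (v.max() - v.min())

mean_u = 0.5 * (utility(loss) + utility(clog))
print("optimum dentate height:", hh[np.argmax(mean_u)])
```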

Keywords: drip irrigation, labyrinth channel hydrodynamics, numerical simulations, Reynolds stress model

Procedia PDF Downloads 130
13429 Diagnosis of the Heart Rhythm Disorders by Using Hybrid Classifiers

Authors: Sule Yucelbas, Gulay Tezel, Cuneyt Yucelbas, Seral Ozsen

Abstract:

In this study, we tried to identify some heart rhythm disorders from electrocardiography (ECG) data taken from the MIT-BIH arrhythmia database by extracting the required features and presenting them to artificial neural network (ANN), artificial immune system (AIS), artificial immune system-based artificial neural network (AIS-ANN), and particle swarm optimization-based artificial neural network (PSO-ANN) classifier systems. The main purpose of this study is to evaluate the performance of the hybrid AIS-ANN and PSO-ANN classifiers with regard to ANN and AIS. For this purpose, the normal sinus rhythm (NSR), atrial premature contraction (APC), sinus arrhythmia (SA), ventricular trigeminy (VTI), ventricular tachycardia (VTK), and atrial fibrillation (AF) data for each of the RR intervals were found. These data were then formed into pairs (NSR-APC, NSR-SA, NSR-VTI, NSR-VTK, and NSR-AF); the discrete wavelet transform was applied to each of the two groups in a pair, and two different data sets with 9 and 27 features were obtained from each of them after data reduction. Afterwards, the data were first randomly mixed within themselves, and then the 4-fold cross-validation method was applied to create the training and testing data. The training and testing accuracy rates and training times were compared with each other. As a result, the performances of the hybrid classification systems, AIS-ANN and PSO-ANN, were seen to be close to the performance of the ANN system, and the results of the hybrid systems were much better than those of AIS. However, ANN had a much shorter training time than the other systems; in terms of training time, ANN was followed by PSO-ANN, AIS-ANN, and AIS, respectively. The features extracted from the data also affected the classification results significantly.
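
Since particle swarm optimization drives one of the hybrids above (and is the topic of this search), here is a hedged sketch of the canonical PSO velocity and position update on a toy objective — the generic algorithm, not the authors' PSO-ANN training code.

```python
# Canonical PSO update loop minimizing a toy objective (sphere function).
# Generic algorithm only; not the authors' PSO-ANN training code.
import numpy as np

def pso(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))            # positions
    v = np.zeros((n, dim))                      # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)]               # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f                   # update personal bests
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)]           # update global best
    return g, f(g)

print(pso(lambda z: float(np.sum(z ** 2))))     # should approach the origin
```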

Keywords: AIS, ANN, ECG, hybrid classifiers, PSO

Procedia PDF Downloads 413
13428 Passive Vibration Isolation Analysis and Optimization for Mechanical Systems

Authors: Ozan Yavuz Baytemir, Ender Cigeroglu, Gokhan Osman Ozgen

Abstract:

Vibration is an important issue in the design of various components in aerospace, marine, and vehicular applications. In order not to lose a component's function and operational performance, vibration isolation design, involving the selection of optimum isolator properties and the isolator positioning process, is a critical study. Given the growing need for vibration isolation system design, this paper presents two software tools capable of implementing modal analysis, response analysis for both random and harmonic excitations, static deflection analysis, and Monte Carlo simulations, in addition to parameter and location optimization for different types of isolation problem scenarios. A survey of the literature shows no study developing a software-based tool capable of implementing all of those analysis, simulation, and optimization studies in one platform simultaneously. In this paper, the theoretical system model is generated for a 6-DOF rigid body. The vibration isolation system of any mechanical structure can be optimized using a hybrid method involving both global search and gradient-based methods. Defining the optimization design variables, different types of optimization scenarios are listed in detail. Aware of the need for a user-friendly vibration isolation problem solver, two graphical user interfaces (GUIs) were prepared and verified using a commercial finite element analysis program, ANSYS Workbench 14.0. Using the analysis and optimization capabilities of those GUIs, a real application used in an air platform is also presented as a case study at the end of the paper.

Keywords: hybrid optimization, Monte Carlo simulation, multi-degree-of-freedom system, parameter optimization, location optimization, passive vibration isolation analysis

Procedia PDF Downloads 542
13427 Multi-Robotic Partial Disassembly Line Balancing with Robotic Efficiency Difference via HNSGA-II

Authors: Tao Yin, Zeqiang Zhang, Wei Liang, Yanqing Zeng, Yu Zhang

Abstract:

To accelerate the remanufacturing of electronic waste products, this study designs a partial disassembly line with multi-robotic stations to effectively dispose of excessive waste. The multi-robotic partial disassembly line is a technical upgrade of the existing manual disassembly line. Balancing optimization can make the disassembly line smoother and more efficient. For partial disassembly line balancing with multi-robotic stations (PDLBMRS), a mixed-integer programming model (MIPM) considering the robotic efficiency differences is established to minimize cycle time, energy consumption, and hazard index and to calculate their optimal global values. In addition, an enhanced NSGA-II algorithm (HNSGA-II) is proposed to optimize PDLBMRS efficiently. Finally, MIPM and HNSGA-II are applied to an actual mixed disassembly case of two types of computers; the comparison of the results obtained by GUROBI and HNSGA-II verifies the correctness of the model and the excellent performance of the algorithm, and the obtained Pareto solution set provides multiple options for decision-makers.

Keywords: waste disposal, disassembly line balancing, multi-robot station, robotic efficiency difference, HNSGA-II

Procedia PDF Downloads 200
13426 Experimental Design for Formulation Optimization of Nanoparticle of Cilnidipine

Authors: Arti Bagada, Kantilal Vadalia, Mihir Raval

Abstract:

Cilnidipine is practically insoluble in water, which results in insufficient oral bioavailability. The purpose of the present investigation was to formulate cilnidipine nanoparticles by the nanoprecipitation method to increase the aqueous solubility and dissolution rate, and hence bioavailability, utilizing various statistical experimental design modules. Experimental designs were used to investigate the specific effects of the independent variables during the preparation of cilnidipine nanoparticles and the corresponding responses in optimizing the formulation. A Plackett-Burman design for the independent variables was successfully employed for the optimization of cilnidipine nanoparticles. The independent variables studied were drug concentration, solvent-to-antisolvent ratio, polymer concentration, stabilizer concentration, and stirring speed. The dependent variables were the average particle size, polydispersity index, zeta potential value, and saturation solubility of the formulated cilnidipine nanoparticles. The experiments were carried out according to 13 runs involving 5 independent variables (at higher and lower levels) employing the Plackett-Burman design. The cilnidipine nanoparticles were characterized by average particle size, polydispersity index, zeta potential value, and saturation solubility, and the results were 149 nm, 0.314, 43.24 mV, and 0.0379 mg/ml, respectively. The experimental results correlated well with the predicted data analysed by the Plackett-Burman statistical method.
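
A hedged sketch of the Plackett-Burman analysis: the standard 12-run design is built by cyclic shifts of the known generator row, and main effects are estimated by contrasts. The response values below are invented, and the abstract's 13th run (presumably a center point) is omitted.

```python
# Sketch: build the standard 12-run Plackett-Burman design by cyclic shifts
# of the known generator row, then estimate factor main effects by
# contrasts. Responses are invented, not the study's measurements.
import numpy as np

gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])   # PB-12 generator
rows = [np.roll(gen, i) for i in range(11)]
design = np.vstack(rows + [-np.ones(11, dtype=int)])      # 12 x 11 in +/-1

k = 5                                    # first 5 columns = real factors
X = design[:, :k]
y = np.random.default_rng(1).normal(150, 10, 12)          # fake particle sizes

effects = X.T @ y / (len(y) / 2)         # main effect = mean(+) - mean(-)
for name, e in zip(["drug", "S/AS ratio", "polymer", "stabilizer", "rpm"],
                   effects):
    print(f"{name:11s} effect = {e:+.2f}")
```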

Keywords: dissolution enhancement, nanoparticles, Plackett-Burman design, nanoprecipitation

Procedia PDF Downloads 139
13425 Application of the Global Optimization Techniques to the Optical Thin Film Design

Authors: D. Li

Abstract:

Optical thin films are used in a wide variety of optical components, and many software tools have been programmed for advancing multilayer thin film design. The available software packages for designing thin film structures may not provide optimum designs. Almost all current software programs obtain their final designs either by optimizing a starting guess or by a technique, which may or may not involve a pseudorandom process, that gives different answers every time depending upon the initial conditions. With the increasing power of personal computers, functional methods for the optimization and synthesis of optical multilayer systems have been developed, such as DGL optimization, simulated annealing, genetic algorithms, needle optimization, inductive optimization, and flip-flop optimization. Among these, DGL optimization has proved its efficiency in optical thin film design. The application of the DGL optimization technique to the design of optical coatings is presented. A DGL optimization technique is provided, and its main features are discussed. Guidelines on the application of the DGL optimization technique to various types of design problems are given. The innovative global optimization strategies used in a software tool, OnlyFilm, to optimize multilayer thin film designs through different filter designs are outlined. OnlyFilm is a powerful, versatile, and user-friendly thin film software package on the market, which combines optimization and synthesis design capabilities with powerful analytical tools for optical thin film designers. It is also the only thin film design software that offers a true global optimization function.

Keywords: optical coatings, optimization, design software, thin film design

Procedia PDF Downloads 289
13424 Study of the Effect of Inclusion of TiO2 in Active Flux on Submerged Arc Welding of Low Carbon Mild Steel Plate and Parametric Optimization of the Process by Using DEA Based Bat Algorithm

Authors: Sheetal Kumar Parwar, J. Deb Barma, A. Majumder

Abstract:

Submerged arc welding is a very complex process. It is a very efficient and high-performance welding process. In the present study, an attempt has been made to reduce welding distortion by an increased amount of oxide flux through TiO2 in the submerged arc welding process. Care has been taken to avoid an excessive amount of the adding agent for the attainment of significant results. A Data Envelopment Analysis (DEA) based BAT algorithm is used for the parametric optimization, in which DEA is used to convert multiple response parameters into a single response parameter. The present study also helps to assess the effectiveness of the addition of TiO2 in active flux during the submerged arc welding process.
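
A hedged sketch of the core bat-algorithm loop (Yang's standard frequency, velocity, loudness, and pulse-rate updates, slightly simplified) on a toy objective — not the authors' DEA-coupled variant.

```python
# Core bat algorithm loop (Yang's standard update rules, simplified to
# global loudness/pulse rate) on a toy 2-D objective.
# Not the authors' DEA-coupled variant.
import numpy as np

def bat(f, dim=2, n=15, iters=200, fmin=0.0, fmax=2.0,
        alpha=0.9, gamma=0.9, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    A, r0 = 1.0, 0.5                       # loudness, initial pulse rate
    fit = np.apply_along_axis(f, 1, x)
    best = x[np.argmin(fit)].copy()
    for t in range(iters):
        r = r0 * (1 - np.exp(-gamma * t))  # pulse rate rises over time
        A = A * alpha                      # loudness decays
        freq = fmin + (fmax - fmin) * rng.random((n, 1))
        v = v + (x - best) * freq
        cand = x + v
        local = rng.random(n) > r          # local random walk around best
        cand[local] = best + 0.01 * A * rng.standard_normal((local.sum(), dim))
        cf = np.apply_along_axis(f, 1, cand)
        accept = (cf < fit) & (rng.random(n) < A)
        x[accept], fit[accept] = cand[accept], cf[accept]
        best = x[np.argmin(fit)].copy()
    return best, f(best)

print(bat(lambda z: float(np.sum(z ** 2))))   # should approach the origin
```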

Keywords: BAT algorithm, design of experiment, optimization, submerged arc welding

Procedia PDF Downloads 610
13423 Multi-Universe Existence Based on Quantum Relativity Using DJV Circuit Experiment Interpretation

Authors: Muhammad Arif Jalil, Somchat Sonasang, Preecha Yupapin

Abstract:

This study hypothesizes that the universe is at the center between a white hole and a black hole, which form an entangled pair. The coupling between them, in terms of spacetime, forms the universe and things. The birth of things is based on energy exchange between the white and black sides. The transition from the white side to the black side is called a wave-matter, which has a speed faster than light with positive gravity. The transition from the black side to the white side, which has a speed faster than light with negative gravity, is called a wave-particle. Where the speed is equal to that of light, the particle rest mass is formed, and things can appear to take shape there; the gravity is zero because it is the center. The gravitational force belongs to the Earth itself because it is in a position twisted towards the white hole; therefore, it is negative. The coupling of black and white holes occurs directly on both sides. Mass is formed at saturation and will create universes and other things. Therefore, there can be hundreds of thousands of universes on both sides of the black and white holes before reaching the saturation point of the multi-universe. This work uses the DJV circuit, made by the research team as an entangled, two-level system circuit, which has been experimentally demonstrated; this principle therefore offers a possibility for interpretation. This work explains the emergence of multiple universes and can be applied as a practical guideline for searching for universes in the future. Moreover, the results indicate that the DJV circuit can create the elementary particles according to Feynman's diagram under rest-mass conditions, which will be discussed for fission and fusion applications.

Keywords: multi-universes, feynman diagram, fission, fusion

Procedia PDF Downloads 41
13422 Optimization of Interface Radio of Universal Mobile Telecommunication System Network

Authors: O. Mohamed Amine, A. Khireddine

Abstract:

Telecom operators are always looking to win their share of customers; they try to achieve optimum utilization of the deployed equipment, so network optimization has become essential. This project consists of optimizing a UMTS network; the study area is an urban area situated in the center of Algiers. Initially, the different communication systems (3G) and the optimization technique, with its main components and its fundamental radio characteristics, were introduced.

Keywords: UMTS, UTRAN, WCDMA, optimization

Procedia PDF Downloads 351
13421 Numerical Modal Analysis of a Multi-Material 3D-Printed Composite Bushing and Its Application

Authors: Paweł Żur, Alicja Żur, Andrzej Baier

Abstract:

Modal analysis is a crucial tool in the field of engineering for understanding the dynamic behavior of structures. In this study, numerical modal analysis was conducted on a multi-material 3D-printed composite bushing, which comprised a polylactic acid (PLA) outer shell and a thermoplastic polyurethane (TPU) flexible filling. The objective was to investigate the modal characteristics of the bushing and assess its potential for practical applications. The analysis involved the development of a finite element model of the bushing, which was subsequently subjected to modal analysis techniques. Natural frequencies, mode shapes, and damping ratios were determined to identify the dominant vibration modes and their corresponding responses. The numerical modal analysis provided valuable insights into the dynamic behavior of the bushing, enabling a comprehensive understanding of its structural integrity and performance. Furthermore, the study expanded its scope by investigating the entire shaft mounting of a small electric car, incorporating the 3D-printed composite bushing. The shaft mounting system was subjected to numerical modal analysis to evaluate its dynamic characteristics and potential vibrational issues. The results of the modal analysis highlighted the effectiveness of the 3D-printed composite bushing in minimizing vibrations and optimizing the performance of the shaft mounting system. The findings contribute to the broader field of composite material applications in automotive engineering and provide valuable insights for the design and optimization of similar components.
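
A hedged sketch of the numerical core of such a modal analysis: solving the generalized eigenproblem K·φ = ω²·M·φ for natural frequencies and mode shapes, on a toy 2-DOF spring-mass system rather than the bushing's finite element model.

```python
# Numerical core of modal analysis: solve K phi = w^2 M phi for natural
# frequencies and mode shapes. Toy 2-DOF spring-mass system, not the
# bushing's finite element model.
import numpy as np
from scipy.linalg import eigh

M = np.diag([2.0, 1.0])                    # mass matrix (kg)
K = np.array([[3000.0, -1000.0],
              [-1000.0, 1000.0]])          # stiffness matrix (N/m)

w2, phi = eigh(K, M)                       # generalized symmetric eigenproblem
freqs_hz = np.sqrt(w2) / (2 * np.pi)
print("natural frequencies (Hz):", freqs_hz)
print("mode shapes (columns):\n", phi)
```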

Keywords: 3D printing, composite bushing, modal analysis, multi-material

Procedia PDF Downloads 65
13420 Estimation of Scour Using a Coupled Computational Fluid Dynamics and Discrete Element Model

Authors: Zeinab Yazdanfar, Dilan Robert, Daniel Lester, S. Setunge

Abstract:

Scour has been identified as the most common threat to bridge stability worldwide. Traditionally, scour around bridge piers is calculated using empirical approaches that have considerable limitations and are difficult to generalize. The multi-physics nature of scouring, which involves turbulent flow, soil mechanics, and solid-fluid interactions, cannot be captured by simple empirical equations developed from limited laboratory data. These limitations can be overcome by direct numerical modeling of the coupled hydro-mechanical scour process, which provides a robust prediction of bridge scour and valuable insights into the scour process. Several numerical models have been proposed in the literature for bridge scour estimation, including Eulerian flow models and coupled Euler-Lagrange models incorporating an empirical sediment transport description. However, the contact forces between particles and the flow-particle interaction have not been taken into consideration. Incorporating collisional and frictional forces between soil particles as well as the effect of flow-driven forces on particles will facilitate accurate modeling of the complex nature of scour. In this study, a coupled Computational Fluid Dynamics and Discrete Element Model (CFD-DEM) has been developed to simulate the scour process, directly modeling the hydro-mechanical interactions between the sediment particles and the flowing water. This approach obviates the need for an empirical description, as the fundamental fluid-particle and particle-particle interactions are fully resolved. The sediment bed is simulated as a dense pack of particles, and the frictional and collisional forces between particles are calculated, whilst the turbulent fluid flow is modeled using a Reynolds-Averaged Navier-Stokes (RANS) approach. The CFD-DEM model is validated against experimental data in order to assess its reliability. The modeling results reveal the criticality of particle impact in the assessment of scour depth, which, to the authors' best knowledge, has not been considered in previous studies. The results of this study open new perspectives on scour depth and time assessment, which is the key to managing the failure risk of bridge infrastructure.
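
A hedged sketch of the DEM building block in such a coupling: a linear spring-dashpot normal contact force between two spherical particles. The stiffness and damping values are illustrative; a full CFD-DEM code adds tangential friction and the fluid drag coupling.

```python
# DEM building block: linear spring-dashpot normal contact force between
# two spherical particles. Stiffness/damping values are illustrative;
# a full CFD-DEM code adds tangential friction and fluid drag coupling.
import numpy as np

def normal_contact_force(x1, x2, v1, v2, r1, r2, kn=1e4, cn=0.5):
    """Force on particle 1 from contact with particle 2 (zero if apart)."""
    d = x1 - x2
    dist = np.linalg.norm(d)
    overlap = (r1 + r2) - dist
    if overlap <= 0.0:
        return np.zeros(3)                 # not in contact
    n = d / dist                           # unit normal, from 2 towards 1
    vn = np.dot(v1 - v2, n)                # normal relative velocity
    return (kn * overlap - cn * vn) * n    # spring push minus dashpot damping

f = normal_contact_force(np.array([0.0, 0, 0]), np.array([0.0018, 0, 0]),
                         np.zeros(3), np.zeros(3), r1=0.001, r2=0.001)
print(f)   # 0.2 mm overlap -> particle 1 pushed in -x, away from particle 2
```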

Keywords: bridge scour, discrete element method, CFD-DEM model, multi-phase model

Procedia PDF Downloads 107
13419 Algorithm Development of Individual Lumped Parameter Modelling for Blood Circulatory System: An Optimization Study

Authors: Bao Li, Aike Qiao, Gaoyang Li, Youjun Liu

Abstract:

Background: The lumped parameter model (LPM) is a common numerical model for hemodynamic calculation. An LPM uses circuit elements to simulate the human blood circulatory system, and physiological indicators and characteristics can be acquired through the model. However, because physiological indicators differ between individuals, the parameters in an LPM should be personalized in order to obtain convincing calculated results that reflect the individual's physiological information. This study aimed to develop an automatic and effective optimization method to personalize the parameters in an LPM of the blood circulatory system, which is of great significance for the numerical simulation of individual hemodynamics. Methods: A closed-loop LPM of the human blood circulatory system, applicable to most persons, was established based on anatomical structures and physiological parameters. The patient-specific physiological data of 5 volunteers were non-invasively collected as the personalization objectives of the individual LPMs. In this study, the blood pressure and flow rate of the heart, brain, and limbs were the main concerns. The collected systolic blood pressure, diastolic blood pressure, cardiac output, and heart rate were set as objective data, and the waveforms of carotid artery flow and ankle pressure were set as objective waveforms. Aiming at the collected data and waveforms, a sensitivity analysis of each parameter in the LPM was conducted to determine the sensitive parameters that have an obvious influence on the objectives. Simulated annealing was adopted to iteratively optimize the sensitive parameters, and the objective function during optimization was the root mean square error between the collected and simulated waveforms and data. Each parameter in the LPM was optimized over 500 iterations. Results: In this study, the sensitive parameters in the LPM were optimized according to the collected data of the 5 individuals. Results show a slight error between the collected and simulated data: the average relative root mean square errors of all optimization objectives of the 5 samples were 2.21%, 3.59%, 4.75%, 4.24%, and 3.56%, respectively. Conclusions: The slight error demonstrates the good effect of the optimization. The individual modeling algorithm developed in this study can effectively achieve the individualization of an LPM for the blood circulatory system. An LPM with individual parameters can output the individual physiological indicators after optimization, which are applicable to the numerical simulation of patient-specific hemodynamics.
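
A hedged sketch of the simulated-annealing step described above: perturb the sensitive parameters, score candidates by the RMSE against measured data, and accept worse moves with a temperature-dependent probability. The forward model and data are stand-ins, not the authors' closed-loop LPM.

```python
# Simulated annealing over model parameters, minimizing RMSE between
# simulated and measured values. The 'simulate' function and target data
# are stand-ins, not the authors' closed-loop circulatory LPM.
import numpy as np

rng = np.random.default_rng(0)
measured = np.array([120.0, 80.0, 5.0, 70.0])   # e.g., SBP, DBP, CO, HR

def simulate(params):                            # placeholder forward model
    R, C = params
    return np.array([100 * R, 60 * R + 10 * C, 4 * C, 65 + 5 * R])

def rmse(params):
    return float(np.sqrt(np.mean((simulate(params) - measured) ** 2)))

p = np.array([1.0, 1.0])                         # initial sensitive parameters
best_p, best_e = p.copy(), rmse(p)
T = 1.0
for _ in range(500):                             # 500 iterations, per abstract
    cand = p + rng.normal(0, 0.05, size=p.shape)
    dE = rmse(cand) - rmse(p)
    if dE < 0 or rng.random() < np.exp(-dE / T): # Metropolis acceptance
        p = cand
        if rmse(p) < best_e:
            best_p, best_e = p.copy(), rmse(p)
    T *= 0.99                                    # geometric cooling schedule
print(best_p, best_e)
```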

Keywords: blood circulatory system, individual physiological indicators, lumped parameter model, optimization algorithm

Procedia PDF Downloads 116