Search results for: Cost optimization modelling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4134

3264 Application of “Multiple Risk Communicator” to the Personal Information Leakage Problem

Authors: Mitsuhiro Taniyama, Yuu Hidaka, Masato Arai, Satoshi Kai, Hiromi Igawa, Hiroshi Yajima, Ryoichi Sasaki

Abstract:

Along with the progress of our information society, various risks are becoming increasingly common, causing multiple social problems. For this reason, risk communications for establishing consensus among stakeholders who have different priorities have become important. However, it is not always easy for the decision makers to agree on measures to reduce risks based on opposing concepts, such as security, privacy and cost. Therefore, we previously developed and proposed the “Multiple Risk Communicator” (MRC) with the following functions: (1) modeling the support role of the risk specialist, (2) an optimization engine, and (3) displaying the computed results. In this paper, MRC program version 1.0 is applied to the personal information leakage problem. The application process and validation of the results are discussed.

Keywords: Decision Making, Personal Information Leakage Problem, Risk Communication, Risk Management

3263 A Cost Function for Joint Blind Equalization and Phase Recovery

Authors: Reza Berangi, Morteza Babaee, Majid Soleimanipour

Abstract:

In this paper, a new cost function for blind equalization is proposed. The proposed cost function, referred to as the modified maximum normalized cumulant criterion (MMNC), is an extension of the previously proposed maximum normalized cumulant criterion (MNC). While the MNC requires a separate phase recovery system after blind equalization, the MMNC performs joint blind equalization and phase recovery. To achieve this, the proposed algorithm maximizes a cost function that considers both the amplitude and phase of the equalizer output. The simulation results show that the proposed algorithm achieves better channel equalization than the MNC algorithm and can simultaneously correct the phase error, which the MNC algorithm is unable to do. The simulation results also show that the MMNC algorithm has lower complexity than the MNC algorithm. Moreover, the MMNC algorithm outperforms the MNC algorithm particularly when the symbol block size is small.

Keywords: Blind equalization, maximum normalized cumulant criterion (MNC), intersymbol interference (ISI), modified MNC criterion (MMNC), phase recovery.

3262 Selection of an Optimum Configuration of Solar PV Array under Partial Shaded Condition Using Particle Swarm Optimization

Authors: R. Ramaprabha

Abstract:

This paper presents the extraction of maximum energy from a Solar Photovoltaic Array (SPVA) under partially shaded conditions by optimum selection of the array size using the Particle Swarm Optimization (PSO) technique. A detailed study of the output reduction of different SPVA configurations under partially shaded conditions has been carried out. A generalized MATLAB M-code based software model has been used for any required array size, configuration, shading pattern and number of bypass diodes. A comparative study has been carried out on different configurations by testing several shading scenarios. While the number of shading patterns and their rate of change are very low for stationary SPVAs, they may be quite large for SPVAs mounted on mobile platforms. This paper demonstrates the suitability of the PSO technique for selecting the optimum configuration for mobile arrays by calculating the global peak (GP) of different configurations and transferring maximum power to the load.
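
As a rough illustration of how PSO locates the global peak of a multi-peaked P-V characteristic, the sketch below runs a standard PSO over a synthetic P-V curve; the curve, voltage bounds and swarm settings are illustrative assumptions, not the paper's MATLAB SPVA model.

```python
# Minimal PSO sketch (assumption: a synthetic multi-peak P-V curve stands in
# for the paper's MATLAB SPVA model under partial shading).
import random
import math

def pv_power(v):
    """Synthetic P-V curve with several local peaks (illustrative only)."""
    return (40 * math.exp(-((v - 12) ** 2) / 8)
            + 65 * math.exp(-((v - 26) ** 2) / 6)
            + 50 * math.exp(-((v - 38) ** 2) / 10))

def pso_global_peak(n_particles=20, iters=60, v_min=0.0, v_max=48.0,
                    w=0.7, c1=1.5, c2=1.5):
    pos = [random.uniform(v_min, v_max) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                                   # personal best positions
    pbest_val = [pv_power(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]        # global best so far

    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], v_min), v_max)
            val = pv_power(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i], val
    return gbest, gbest_val

if __name__ == "__main__":
    v, p = pso_global_peak()
    print(f"Global peak found at V = {v:.2f} V, P = {p:.2f} W")
```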

Keywords: Global peak, Mobile PV arrays, Partial shading, optimization, PSO.

3261 Optimization of Unweighted Minimum Vertex Cover

Authors: S. Balaji, V. Swaminathan, K. Kannan

Abstract:

The Minimum Vertex Cover (MVC) problem is a classic NP-complete graph optimization problem. In this paper, an effective algorithm, called the Vertex Support Algorithm (VSA), is designed to find the smallest vertex cover of a graph. The VSA is tested on a large number of random graphs and DIMACS benchmark graphs, and a comparative study with other existing methods has been carried out. Extensive simulation results show that the VSA can yield better solutions than other existing algorithms reported in the literature for solving the minimum vertex cover problem.
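
The following sketch illustrates one common reading of a support-based greedy cover heuristic in the spirit of the VSA: the support of a vertex is taken as the sum of its neighbours' degrees, and the vertex of maximum support is repeatedly added to the cover. The paper's exact selection rule may differ.

```python
# Hedged sketch of a support-based greedy vertex cover heuristic.
def greedy_vertex_cover(edges):
    cover = set()
    edges = set(frozenset(e) for e in edges)
    while edges:
        # rebuild adjacency over the still-uncovered edges
        adj = {}
        for e in edges:
            u, v = tuple(e)
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        deg = {v: len(nbrs) for v, nbrs in adj.items()}
        support = {v: sum(deg[n] for n in nbrs) for v, nbrs in adj.items()}
        best = max(support, key=support.get)     # vertex with maximum support
        cover.add(best)
        edges = {e for e in edges if best not in e}
    return cover

if __name__ == "__main__":
    g = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (5, 1)]
    print("Cover:", greedy_vertex_cover(g))
```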

Keywords: Vertex cover, vertex support, approximation algorithms, NP-complete problem.

3260 Design of a Low Cost Motion Data Acquisition Setup for Mechatronic Systems

Authors: Barış Can Yalçın

Abstract:

Motion sensors are commonly used as valuable components in mechatronic systems; however, many mechatronic designs and applications that need motion sensors cost an enormous amount of money, especially high-tech systems. Designing software for the communication protocol between the data acquisition card and the motion sensor is another issue that has to be solved. This study presents how to design a low-cost motion data acquisition setup consisting of an MPU-6050 motion sensor (3-axis gyroscope and accelerometer) and an Arduino Mega2560 microcontroller. The design parameters are calibration of the sensor, identification of and communication between the sensor and the data acquisition card, and interpretation of the data collected by the sensor.
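
A minimal host-side sketch of the data-interpretation step is shown below. It assumes the Arduino streams raw readings as comma-separated "ax,ay,az,gx,gy,gz" lines over USB serial and that the MPU-6050 is left at its default ±2 g and ±250 °/s ranges (16384 LSB/g and 131 LSB/(°/s)); the port name and line framing are assumptions, not the author's protocol.

```python
# Host-side sketch (assumptions: Arduino prints raw readings as
# "ax,ay,az,gx,gy,gz" lines; MPU-6050 kept at default +/-2 g and +/-250 deg/s).
import serial  # pyserial

ACC_LSB_PER_G = 16384.0      # default +/-2 g range
GYRO_LSB_PER_DPS = 131.0     # default +/-250 deg/s range

def read_samples(port="/dev/ttyACM0", baud=115200, n=100):
    """Read n samples and convert raw counts to g and deg/s."""
    samples = []
    with serial.Serial(port, baud, timeout=1) as ser:
        while len(samples) < n:
            line = ser.readline().decode(errors="ignore").strip()
            parts = line.split(",")
            if len(parts) != 6:
                continue                     # skip incomplete lines
            try:
                ax, ay, az, gx, gy, gz = (float(p) for p in parts)
            except ValueError:
                continue                     # skip garbled lines
            samples.append({
                "accel_g": (ax / ACC_LSB_PER_G, ay / ACC_LSB_PER_G, az / ACC_LSB_PER_G),
                "gyro_dps": (gx / GYRO_LSB_PER_DPS, gy / GYRO_LSB_PER_DPS, gz / GYRO_LSB_PER_DPS),
            })
    return samples

if __name__ == "__main__":
    data = read_samples(n=10)
    print(data[0])
```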

Keywords: Calibration of sensors, data acquisition.

3259 Obtaining Constants of Johnson-Cook Material Model Using a Combined Experimental, Numerical Simulation and Optimization Method

Authors: F. Rahimi Dehgolan, M. Behzadi, J. Fathi Sola

Abstract:

In this article, the Johnson-Cook material model’s constants for structural steel ST.37 have been determined by a method which integrates experimental tests, numerical simulation, and optimization. In the first step, a quasi-static test was carried out on a plain specimen. Next, the constants were calculated by minimizing the difference between the results acquired from the experiment and the numerical simulation. Then, a quasi-static tension test was performed on three notched specimens with different notch radii. Finally, in order to verify the results, the constants were used in numerical simulations of the notched specimens, and the experimental and simulation results were observed to be in good agreement. The change in the diameter of the plain specimen in the necking area was set as the objective function in the optimization step. For final validation of the proposed method, the diameter variation was considered as a parameter, its sensitivity to a change in any of the model constants was examined, and the results fully corroborated the method.
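
For orientation, the Johnson-Cook flow stress is σ = (A + B·εp^n)(1 + C·ln ε̇*)(1 − T*^m); at the reference strain rate and temperature the rate and thermal terms drop out, so the quasi-static constants A, B and n can be fitted directly. The sketch below does this with a least-squares fit on synthetic placeholder data, not the paper's ST.37 measurements.

```python
# Hedged illustration: fitting quasi-static Johnson-Cook constants A, B, n,
# where flow stress reduces to sigma = A + B*eps_p**n at the reference
# strain rate and temperature. The data points below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def jc_quasi_static(eps_p, A, B, n):
    return A + B * np.power(eps_p, n)

# synthetic "experimental" true-stress / plastic-strain data (MPa)
eps = np.array([0.002, 0.01, 0.02, 0.05, 0.10, 0.15, 0.20])
sig = np.array([250.0, 290.0, 310.0, 345.0, 380.0, 400.0, 415.0])

popt, _ = curve_fit(jc_quasi_static, eps, sig, p0=[200.0, 400.0, 0.5])
A, B, n = popt
print(f"A = {A:.1f} MPa, B = {B:.1f} MPa, n = {n:.3f}")
```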

Keywords: Constants, Johnson-Cook material model, notched specimens, quasi-static test, sensitivity.

3258 Scaling Strategy of a New Experimental Rig for Wheel-Rail Contact

Authors: Meysam Naeimi, Zili Li, Rolf Dollevoet

Abstract:

A new small-scale test rig has been developed for rolling contact fatigue (RCF) investigations in wheel-rail material. This paper presents the scaling strategy of the rig based on dimensional analysis and mechanical modelling. The new experimental rig is a spinning frame structure with multiple wheel components running over a fixed rail-track ring, capable of simulating continuous wheel-rail contact at laboratory scale. This paper describes the dimensional design of the rig in order to derive its overall scaling strategy and to determine the key elements’ specifications. Finite element (FE) modelling is used to simulate the mechanical behavior of the rig with two sample scale factors of 1/5 and 1/7. The results of the FE models are compared with the actual railway system to assess the effectiveness of the chosen scales. The mechanical properties of the components and variables of the system are finally determined through the design process.

Keywords: New test rig, rolling contact fatigue, rail, small scale.

3257 Parameter Determination of a Vehicle 5-DOF Model to Simulate Occupant Deceleration in a Frontal Crash

Authors: Javad Marzbanrad, Mostafa Pahlavani

Abstract:

This study investigates a vehicle Lumped Parameter Model (LPM) in a frontal crash. There are several ways to determine the spring and damper characteristics, and this type of problem is treated as system identification. This study uses a Genetic Algorithm (GA), an effective procedure for optimization problems, to minimize the error between the target data (experimental data) and the calculated results (obtained by analytical solution). The model is analyzed with 5 DOF and the results are compared with those of a 5-DOF serial model. Finally, the response of the model to external excitation is investigated.
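
A hedged sketch of the GA-based identification idea follows: a single-DOF mass-spring-damper stands in for the 5-DOF LPM, and a synthetic deceleration trace stands in for the experimental data; the GA evolves stiffness and damping values to minimize the squared error against the target.

```python
# Hedged GA identification sketch; the 1-DOF model, bounds and GA settings
# are illustrative assumptions, not the paper's 5-DOF LPM or data.
import random
import numpy as np

def response(k, c, m=1000.0, v0=15.0, dt=1e-3, steps=200):
    """Free response of m*x'' + c*x' + k*x = 0 with initial velocity v0 (explicit Euler)."""
    x, v = 0.0, v0
    acc = []
    for _ in range(steps):
        a = -(c * v + k * x) / m
        v += a * dt
        x += v * dt
        acc.append(a)
    return np.array(acc)

target = response(k=180e3, c=9e3)                 # synthetic "experimental" trace

def fitness(ind):
    k, c = ind
    return -np.sum((response(k, c) - target) ** 2)   # negative SSE (maximize)

def ga(pop_size=40, gens=60, bounds=((50e3, 400e3), (1e3, 30e3))):
    pop = [[random.uniform(*b) for b in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]              # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            child = [(a + b) / 2 for a, b in zip(p1, p2)]          # crossover
            child = [min(max(g * random.gauss(1, 0.05), b[0]), b[1])
                     for g, b in zip(child, bounds)]               # mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

print("Identified [k, c]:", ga())
```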

Keywords: Vehicle, Lumped-Parameter Model, Genetic Algorithm, Optimization.

3256 Decolourization of Melanoidin Containing Wastewater Using South African Coal Fly Ash

Authors: V.O. Ojijo, M.S. Onyango, Aoyi Ochieng, F.A.O. Otieno

Abstract:

Batch adsorption of recalcitrant melanoidin using abundantly available coal fly ash was carried out. The fly ash had a low specific surface area (SBET) of 1.7287 m²/g and a pore volume of 0.002245 cm³/g, while qualitative evaluation of its predominant phases was done by XRD analysis. Colour removal efficiency was found to be dependent on the various factors studied. Maximum colour removal was achieved around pH 6, whereas increasing the sorbent mass from 10 g/L to 200 g/L enhanced colour reduction from 25% to 86% at 298 K. Spontaneity of the process was suggested by the negative Gibbs free energy, while positive values of the enthalpy change showed the endothermic nature of the process. Non-linear optimization of error functions resulted in the Freundlich and Redlich-Peterson isotherms describing the sorption equilibrium data best. The coal fly ash had a maximum sorption capacity of 53 mg/g and could thus be used as a low-cost adsorbent for melanoidin removal.
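
The non-linear isotherm fitting step can be sketched as below for the Freundlich form qe = KF·Ce^(1/n) (the Redlich-Peterson form can be fitted the same way); the equilibrium data points are synthetic placeholders, not the melanoidin measurements.

```python
# Hedged sketch of non-linear isotherm fitting on synthetic equilibrium data.
import numpy as np
from scipy.optimize import curve_fit

def freundlich(Ce, KF, n):
    """Freundlich isotherm: qe = KF * Ce**(1/n)."""
    return KF * np.power(Ce, 1.0 / n)

Ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])   # equilibrium conc. (mg/L)
qe = np.array([8.0, 13.0, 22.0, 31.0, 41.0, 50.0])     # adsorbed amount (mg/g)

(kf, n), _ = curve_fit(freundlich, Ce, qe, p0=[2.0, 2.0])
sse = np.sum((freundlich(Ce, kf, n) - qe) ** 2)        # error function to minimize
print(f"Freundlich fit: KF = {kf:.2f}, n = {n:.2f}, SSE = {sse:.2f}")
```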

Keywords: Adsorption, Isotherms, Melanoidin, South African coal fly ash.

3255 Feedrate Optimization for Ball-End Milling of Sculptured Surfaces Using Fuzzy Logic Controller

Authors: Njiri J. G., Ikua B. W., Nyakoe G. N.

Abstract:

Optimization of cutting parameters is important in precision machining with regard to efficiency and surface integrity of the machined part. Usually, productivity and precision in machining are limited by the forces emanating from the cutting process. Due to the inherently varying nature of the workpiece in terms of geometry and material composition, the peak cutting forces vary from point to point during the machining process. In order to increase productivity without compromising machining accuracy, it is important to control these cutting forces. In this paper, a fuzzy logic control algorithm is developed that can be applied to the control of peak cutting forces in milling of spherical surfaces using ball-end mills. The controller can adaptively vary the feedrate to maintain an allowable cutting force on the tool. The control algorithm is implemented on a computer numerical control (CNC) machine. It has been demonstrated that the controller can provide stable machining and improve the performance of the CNC milling process by varying the feedrate.
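
The feedrate-adjustment idea can be sketched with a minimal zero-order Sugeno-style fuzzy controller: the normalized cutting-force error is fuzzified with triangular membership functions, three rules map it to a feedrate change, and a weighted average defuzzifies the output. The membership functions and rule base here are illustrative assumptions, not the paper's.

```python
# Hedged single-input fuzzy controller sketch for feedrate adjustment.
def trimf(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def feedrate_change(force_error):
    """force_error = (measured_force - reference_force) / reference_force."""
    # fuzzify the input
    mu_neg = trimf(force_error, -1.0, -0.5, 0.0)     # force well below reference
    mu_zero = trimf(force_error, -0.5, 0.0, 0.5)     # force near reference
    mu_pos = trimf(force_error, 0.0, 0.5, 1.0)       # force above reference
    # rule consequents (crisp feedrate change, in percent)
    increase, hold, decrease = 20.0, 0.0, -20.0
    # weighted-average defuzzification
    num = mu_neg * increase + mu_zero * hold + mu_pos * decrease
    den = mu_neg + mu_zero + mu_pos
    return num / den if den > 0 else 0.0

if __name__ == "__main__":
    for e in (-0.6, -0.2, 0.0, 0.3, 0.7):
        print(f"error = {e:+.1f} -> feedrate change {feedrate_change(e):+.1f}%")
```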

Keywords: Ball-end mill, feedrate, fuzzy logic controller, machining optimization, spherical surface.

3254 An EOQ Model for Non-Instantaneous Deteriorating Items with Power Demand, Time Dependent Holding Cost, Partial Backlogging and Permissible Delay in Payments

Authors: M. Palanivel, R. Uthayakumar

Abstract:

In this paper, an Economic Order Quantity (EOQ) based model is presented for non-instantaneously deteriorating items with Weibull-distributed deterioration and a power demand pattern. In this model, the holding cost per unit of the item per unit time is assumed to be an increasing linear function of the time spent in storage. The retailer is allowed a trade-credit offer by the supplier to buy more items. Shortages are allowed and partially backlogged, with the backlogging rate dependent on the waiting time for the next replenishment. The model minimizes the total inventory cost by finding the optimal time interval and the optimal order quantity. The optimal solution of the model is illustrated with the help of numerical examples. Finally, sensitivity analysis and graphical representations are given to demonstrate the model.

Keywords: Power demand pattern, Partial backlogging, Time dependent holding cost, Trade credit, Weibull deterioration.

3253 Dynamic Modelling and Virtual Simulation of Digital Duty-Cycle Modulation Control Drivers

Authors: J. Mbihi

Abstract:

This paper presents a dynamic architecture for digital duty-cycle modulation control drivers. Compared to most oversampling digital modulation schemes encountered in industrial electronics, its novelty rests on a number of relevant merits, including embedded positive and negative feedback loops, an internal modulation clock, structural simplicity, elementary building operators, no explicit need for samples of the nonlinear duty-cycle function when computing the switching modulated signal, and a minimum number of design parameters. A prototype digital control driver is synthesized and thoroughly tested within the MATLAB/Simulink workspace. Then, the virtual simulation results and performance obtained on a sample of relevant instrumentation and control systems are presented in order to show the feasibility, reliability, and versatility of the proposed class of low-cost, high-quality digital control drivers for target applications in industrial electronics.

Keywords: Dynamic architecture, virtual simulation, duty-cycle modulation, digital control drivers, industrial electronics.

3252 Identification of a Mechanism System by Using the Modified PSO Method

Authors: Chih-Cheng Kao, Hsin-Hua Chu

Abstract:

This paper proposes an efficient modified particle swarm optimization (MPSO) method to identify a slider-crank mechanism driven by a field-oriented PM synchronous motor. In the system identification, we adopt the MPSO method to find the parameters of the slider-crank mechanism. The new algorithm adds a “distance” term to the traditional PSO fitness function to avoid convergence to a local optimum. Comparisons of numerical simulations and experimental results show that the MPSO identification method for the slider-crank mechanism is feasible.

Keywords: Slider-crank mechanism, distance, system identification, modified particle swarm optimization.

3251 Optimal Placement and Sizing of Distributed Generation in Microgrid for Power Loss Reduction and Voltage Profile Improvement

Authors: Ferinar Moaidi, Mahdi Moaidi

Abstract:

Environmental issues and the ever-increasing demand for electrical energy make it necessary to have distributed generation (DG) resources in the power system. In this research, the allocation and sizing of DGs are used in order to reduce losses and improve the voltage profile in a microgrid. A Genetic Algorithm (GA), one of the artificial intelligence methods, is applied to solve the problem. The algorithm is implemented on the IEEE 33-bus network. The study is presented in two scenarios: first, the placement and sizing of DGs are determined to reduce losses and improve the voltage profile. However, decisions made under a single-level load assumption are not valid for all load levels; therefore, load modelling is performed and the results are presented for a multi-level load state.

Keywords: Distributed generation, genetic algorithm, microgrid, load modelling, loss reduction, voltage improvement.

3250 Optimization of Structure of Section-Based Automated Lines

Authors: R. Usubamatov, M. Z. Abdulmuin

Abstract:

Automated production lines with so-called 'hard structures' are widely used in manufacturing. Designers segment these lines into sections by placing buffers between series of machine tools to increase productivity. In real production conditions, the capacity of a buffer system is limited, and a real production line can compensate for only part of the productivity losses of an automated line. The productivity of such production lines cannot be readily determined. This paper presents a mathematical approach to determining the structure of section-based automated production lines by the criterion of maximum productivity.

Keywords: Optimization, production line, productivity, sections.

3249 Evolutionary Query Optimization for Heterogeneous Distributed Database Systems

Authors: Reza Ghaemi, Amin Milani Fard, Hamid Tabatabaee, Mahdi Sadeghizadeh

Abstract:

Due to new distributed database applications such as huge deductive database systems, search complexity is constantly increasing and better algorithms are needed to speed up traditional relational database queries. An optimal dynamic programming method for such high-dimensional queries has the major disadvantage of exponential order, and thus semi-optimal but faster approaches are of interest. In this work we present a multi-agent based mechanism to meet this demand and compare the result with some commonly used query optimization algorithms.

Keywords: Information retrieval systems, list fusion methods, document score, multi-agent systems.

3248 Simulated Annealing and Genetic Algorithm in Telecommunications Network Planning

Authors: Aleksandar Tsenov

Abstract:

The main goal of this work is to propose a way to use two non-traditional algorithms in combination for solving topological problems on telecommunications concentrator networks. The algorithms suggested are the Simulated Annealing algorithm and the Genetic Algorithm. The Simulated Annealing algorithm generalizes the well-known local search algorithms. In addition, Simulated Annealing allows the acceptance of moves in the search space which lead to higher-cost solutions, in an attempt to escape any local minima encountered. The Genetic Algorithm is a heuristic approach which is used in wide areas of optimization work. In recent years this approach has also been widely applied in telecommunications network planning. In order to solve planning problems of varying complexity, it is important to find the most appropriate parameters for initializing the algorithm.
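
As an illustration of the Simulated Annealing part, the sketch below anneals a toy terminal-to-concentrator assignment: worsening moves are accepted with probability exp(−Δcost/T) under a geometric cooling schedule. The cost model, capacity limit and penalty are assumptions; the Genetic Algorithm stage is omitted.

```python
# Hedged Simulated Annealing sketch on a toy concentrator-assignment problem.
import math
import random

random.seed(1)
terminals = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]
concentrators = [(20, 20), (80, 20), (50, 80)]
CAPACITY = 12          # max terminals per concentrator (assumed)
PENALTY = 500.0        # cost penalty per unit of capacity violation (assumed)

def cost(assign):
    """Total link length plus a penalty for overloaded concentrators."""
    total = sum(math.dist(terminals[t], concentrators[c]) for t, c in enumerate(assign))
    for c in range(len(concentrators)):
        total += PENALTY * max(0, assign.count(c) - CAPACITY)
    return total

def anneal(T0=200.0, T_min=0.1, alpha=0.95, moves_per_T=100):
    assign = [random.randrange(len(concentrators)) for _ in terminals]
    best, best_cost = assign[:], cost(assign)
    T = T0
    while T > T_min:
        for _ in range(moves_per_T):
            cand = assign[:]
            cand[random.randrange(len(terminals))] = random.randrange(len(concentrators))
            delta = cost(cand) - cost(assign)
            # accept improving moves; accept worse moves with prob exp(-delta/T)
            if delta < 0 or random.random() < math.exp(-delta / T):
                assign = cand
                if cost(assign) < best_cost:
                    best, best_cost = assign[:], cost(assign)
        T *= alpha     # geometric cooling schedule
    return best, best_cost

best_assign, best_cost = anneal()
print(f"Best cost found: {best_cost:.1f}")
```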

Keywords: Concentrator network, genetic algorithm, simulated annealing, UCPL.

3247 Distributed System Computing Resource Scheduling Algorithm Based on Deep Reinforcement Learning

Authors: Yitao Lei, Xingxiang Zhai, Burra Venkata Durga Kumar

Abstract:

As the quantity and complexity of computing in large-scale software systems increase, distributed system computing becomes increasingly important. A distributed system realizes high-performance computing through collaboration between different computing resources. Without efficient resource scheduling, the misuse of distributed computing may cause resource waste and high costs. Resource scheduling is usually an NP-hard problem, so no general solution can be found, although optimization algorithms such as genetic algorithms and ant colony optimization exist. The large scale of distributed systems makes these traditional optimization algorithms challenging to apply, so heuristic and machine learning algorithms are usually employed to ease the computing load. We therefore review traditional resource scheduling optimization algorithms and introduce a deep reinforcement learning method that utilizes the perceptual ability of neural networks and the decision-making ability of reinforcement learning. Using this machine learning method, we try to find the important factors that influence the performance of distributed system computing and help the distributed system schedule computing resources efficiently. This paper surveys the application of deep reinforcement learning to distributed system computing resource scheduling. The research proposes a deep reinforcement learning method that uses a recurrent neural network to optimize resource scheduling. The paper concludes with the challenges and improvement directions for deep reinforcement learning-based resource scheduling algorithms.

Keywords: Resource scheduling, deep reinforcement learning, distributed system, artificial intelligence.

3246 Assessment of Pollution of the Rustavi City’s Atmosphere with Microaerosols

Authors: N. Gigauri, A. Surmava

Abstract:

According to observational data, experimental measurements and numerical modelling, the pollution of the atmosphere of Rustavi City, one of the industrial centers of Georgia, with microaerosols is assessed. Monthly, daily and hourly changes of the concentrations of PM2.5 and PM10 in the city atmosphere are analyzed. PM2.5 concentrations are found to be consistently lower than PM10 concentrations, although they follow the same variation curve. In addition, it is noted that the maximum concentrations of particles in the atmosphere of Rustavi City may be reached at any time of the day, determined by the combined impact of traffic flow and industrial facilities. Through numerical modelling, the influence of background western light air, gentle breeze and fresh breeze on the distribution of particulate matter in the atmosphere was calculated. The calculations showed that background light air and a gentle breeze lead to an increase in the concentrations of microaerosols in the city's atmosphere, while a fresh breeze contributes to the dispersion of dust clouds. As a result, the dust level in the city decreases, but the distribution area expands.

Keywords: Air pollution, numerical modeling, experimental measurement, PM2.5, PM10.

3245 Heat Stress Monitor by Using Low-Cost Temperature and Humidity Sensors

Authors: Kiattisak Batsungnoen, Thanatchai Kulworawanichpong

Abstract:

The aim of this study is to develop a cost-effective WBGT heat stress monitor which provides precise heat stress measurement. The proposed device employs an SHT15 humidity and temperature sensor and a DS18B20 temperature sensor, together with an ATmega328 microcontroller. The developed heat stress monitor was calibrated and adjusted against standard temperature and humidity sensors in the laboratory. The results of this study illustrated that the mean percentage error and the standard deviation were 2.33 and 2.71 for the globe temperature, 0.94 and 1.02 for the dry-bulb temperature, 0.79 and 0.48 for the wet-bulb temperature, and 4.46 and 1.60 for the relative humidity sensor, respectively. The device is relatively low-cost and its measurement error is acceptable.
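
For reference, the standard WBGT index combines the three temperatures such a monitor measures; the sketch below applies the usual ISO 7243 weightings (how the device derives the natural wet-bulb temperature from the SHT15 readings is not reproduced here).

```python
# Standard WBGT combination of natural wet-bulb (t_nwb), globe (t_g)
# and dry-bulb (t_db) temperatures, all in degrees Celsius.
def wbgt_outdoor(t_nwb, t_g, t_db):
    """WBGT with solar load: 0.7*Tnwb + 0.2*Tg + 0.1*Tdb."""
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_db

def wbgt_indoor(t_nwb, t_g):
    """WBGT without solar load: 0.7*Tnwb + 0.3*Tg."""
    return 0.7 * t_nwb + 0.3 * t_g

if __name__ == "__main__":
    print(f"Outdoor WBGT: {wbgt_outdoor(25.0, 40.0, 32.0):.1f} C")
    print(f"Indoor  WBGT: {wbgt_indoor(25.0, 40.0):.1f} C")
```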

Keywords: Heat stress monitor, WBGT, Temperature and Humidity Sensors.

3244 Process Optimization and Automation of Information Technology Services in a Heterogenic Digital Environment

Authors: Tasneem Halawani, Yamen Khateeb

Abstract:

With customers’ ever-increasing expectations for fast service provisioning for all their business needs, information technology (IT) organizations, as business partners, have to cope with this demanding environment and deliver their services in the most effective and efficient way. The purpose of this paper is to identify optimization and automation opportunities for the top requested IT services in a heterogenic digital environment with a widely spread customer base. In collaboration with systems, processes, and subject matter experts (SMEs), the processes in scope were approached by analyzing four years of related historical data, identifying and surveying stakeholders, modeling the as-is processes, and studying systems integration/automation capabilities. This effort resulted in identifying several pain areas, including standardization, unnecessary customer and IT involvement, manual steps, systems integration, and performance measurement. These pain areas were addressed by standardizing the top five requested IT services, eliminating or automating 43 steps, and utilizing a single platform for end-to-end process execution. In conclusion, the optimization of IT service request processes in a heterogenic digital environment with a widely spread customer base is challenging, yet achievable without compromising service quality and customers’ added value. Further studies can focus on measuring the value of the eliminated/automated process steps to quantify the enhancement impact. Moreover, a similar approach can be utilized to optimize other IT service requests, with a focus on business criticality.

Keywords: Automation, customer value, heterogenic, integration, IT services, optimization, processes.

3243 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation

Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke

Abstract:

Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems. Difficulty in finding a robust approach to model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g. MIKE URBAN. This is partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, namely initial loss, reduction factor, time of concentration and time-lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments in the Gold Coast. For comparison, simulation outcomes for the same three catchments obtained with the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement of the MIKE URBAN results within the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE) and maximum error (ME) was found to be reasonable for the three study catchments. The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, so the associated uncertainty in predictions can be obtained, whereas MIKE URBAN provides only a point estimate. Based on the results of the analysis, it appears that the developed ABC framework performs well for automatic calibration.
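
A hedged sketch of rejection ABC applied to two of the named parameters (initial loss and reduction factor) is given below; a toy rainfall-runoff simulator and synthetic observations stand in for the paper's time-area model, and the priors and tolerance are assumptions. The paper's framework is implemented in R; Python is used here purely for illustration.

```python
# Hedged rejection-ABC sketch: draw parameters from priors, simulate, and keep
# draws whose simulated runoff is close enough to the observed series.
import numpy as np

rng = np.random.default_rng(0)
rain = np.array([0, 2, 8, 15, 10, 4, 1, 0, 0, 0], dtype=float)   # rainfall per step (mm)

def simulate(initial_loss, reduction_factor):
    """Toy runoff model: subtract an initial loss, then scale by a reduction factor."""
    effective = np.clip(rain - initial_loss, 0, None)
    return reduction_factor * effective

# synthetic "observed" runoff from assumed true parameters plus measurement noise
observed = simulate(1.5, 0.6) + rng.normal(0, 0.3, size=rain.size)

def abc_rejection(n_draws=20000, tol=1.5):
    accepted = []
    for _ in range(n_draws):
        il = rng.uniform(0.0, 5.0)       # prior on initial loss (mm)
        rf = rng.uniform(0.1, 1.0)       # prior on reduction factor (-)
        dist = np.sqrt(np.mean((simulate(il, rf) - observed) ** 2))
        if dist < tol:                   # accept draws whose simulation is close enough
            accepted.append((il, rf))
    return np.array(accepted)

post = abc_rejection()
print(f"{len(post)} accepted draws")
print("Posterior mean (initial loss, reduction factor):", post.mean(axis=0))
```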

Keywords: Automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform.

3242 Robust Integrated Navigation of a Low Cost System

Authors: Saman M. Siddiqui, Fang Jiancheng

Abstract:

Robust nonlinear integrated navigation of GPS and low-cost MEMS sensors is currently a hot research topic. A robust filter is required to cope with the unpredictable discontinuities and colored noise associated with low-cost sensors. The H∞ filter has previously been used within Extended Kalman filter and Unscented Kalman filter frameworks. The Unscented Kalman filter requires a Cholesky matrix factorization at each step, which is a numerically unstable operation. To avoid this problem, in this research the H∞ filter is designed within the square-root unscented filter framework and is found to be 50% more robust to increased levels of colored noise.

Keywords: H∞ filter, MEMS, GPS, Nonlinear system, robust system, Square root unscented filter.

3241 Shape Optimization of Impeller Blades for a Bidirectional Axial Flow Pump using Polynomial Surrogate Model

Authors: I. S. Jung, W. H. Jung, S. H. Baek, S. Kang

Abstract:

This paper describes the shape optimization of impeller blades for an anti-heeling bidirectional axial flow pump used in ships. In general, a bidirectional axial pump has an efficiency much lower than the classical unidirectional pump because of the symmetric blade type. In this paper, focusing on the pump impeller, the shape of the blades is redesigned to reach a higher efficiency in a bidirectional axial pump. The commercial code employed in this simulation is CFX v.13, and the CFD results for pump torque, head, and hydraulic efficiency are compared. The orthogonal array (OA) and analysis of variance (ANOVA) techniques, together with surrogate-model-based optimization using orthogonal polynomials, are employed to determine the main effects and the optimal design variables. Based on the optimal design, we identify the effective design variables of the impeller blades and discuss the optimal solution and its usefulness in satisfying the constraints on pump torque and head.
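
The surrogate step can be illustrated as follows: a quadratic polynomial is fitted to a handful of sampled (design variable, efficiency) pairs and then optimized in place of further CFD runs. The blade-angle/efficiency numbers are invented for illustration, and the paper's orthogonal-polynomial formulation is simplified here to an ordinary quadratic fit.

```python
# Hedged polynomial-surrogate sketch: fit a quadratic to sampled results,
# then optimize the cheap surrogate instead of the expensive simulation.
import numpy as np

# sampled "CFD" results: blade stagger angle (deg) vs hydraulic efficiency (%)
angle = np.array([10.0, 14.0, 18.0, 22.0, 26.0])
eff = np.array([71.0, 76.5, 79.0, 78.0, 73.5])

coeffs = np.polyfit(angle, eff, deg=2)      # quadratic surrogate a*x^2 + b*x + c
surrogate = np.poly1d(coeffs)

# the maximum of a concave quadratic lies at x = -b / (2a)
a, b, _ = coeffs
x_opt = -b / (2 * a)
print(f"Surrogate optimum: angle = {x_opt:.1f} deg, "
      f"predicted efficiency = {surrogate(x_opt):.1f}%")
```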

Keywords: Bidirectional axial flow pump, Impeller blade, CFD, Analysis of variance, Polynomial surrogate model

3240 Using FEM for Prediction of Thermal Post-Buckling Behavior of Thin Plates During Welding Process

Authors: Amin Esmaeilzadeh, Mohammad Sadeghi, Farhad Kolahan

Abstract:

Arc welding is an important joining process widely used in many industrial applications, including the production of automobiles, ship structures and metal tanks. In the welding process, the moving electrode causes a highly non-uniform temperature distribution that leads to residual stresses and various deviations, especially buckling distortions in thin plates. In order to control these deviations and increase the quality of welded plates, a fixture can be used as a practical, low-cost method with high efficiency. In this study, a coupled thermo-mechanical finite element model is coded in the software ANSYS to simulate the behavior of thin plates located by a 3-2-1 positioning system during the welding process. Computational results are compared with recent similar works to validate the finite element models. The agreement between the results of the proposed model and other reported data shows that finite element modeling can accurately predict the behavior of welded thin plates.

Keywords: Welding, thin plate, buckling distortion, fixture locators, finite element modelling.

3239 Design Parameters Selection and Optimization of Weld Zone Development in Resistance Spot Welding

Authors: Norasiah Muhammad, Yupiter HP Manurung

Abstract:

This paper investigates the development of the weld zone in Resistance Spot Welding (RSW), focusing on the weld nugget and the Heat Affected Zone (HAZ). The effects of four factors, namely weld current, weld time, electrode force and hold time, were studied using a general 2⁴ factorial design augmented by five centre points. The results of the analysis showed that all selected factors except hold time have a significant effect on weld nugget radius and HAZ size. Optimization of the welding parameters (weld current, weld time and electrode force) to normalize the weld nugget and minimize the HAZ size was then conducted using a Central Composite Design (CCD) in Response Surface Methodology (RSM), and the optimum parameters were determined. A regression model for the weld nugget radius and HAZ size was developed and its adequacy was evaluated. The experimental results obtained under the optimum operating conditions were then compared with the predicted values and were found to agree satisfactorily with each other.

Keywords: Factorial design, Optimization, Resistance Spot Welding (RSW), Response Surface Methodology (RSM).

3238 High Efficiency Class-F Power Amplifier Design

Authors: Abdalla Mohamed Eblabla

Abstract:

Due to the large increase in demand for a wide assortment of applications that require low-cost, high-efficiency, and compact systems, RF power amplifiers are considered the most critical design blocks and the most power-consuming components in wireless communication, TV transmission, radar, and RF heating. Therefore, much research has been carried out to improve the performance of power amplifiers. Classes A, B, C, D, E and F are the main techniques for realizing power amplifiers.

An implementation of a high-efficiency class-F power amplifier with a Gallium Nitride (GaN) High Electron Mobility Transistor (HEMT) is presented in this paper. The simulation and optimization of the class-F power amplifier circuit model were undertaken using Agilent’s Advanced Design System (ADS). The circuit was designed using lumped elements.

Keywords: Power Amplifier (PA), Gallium Nitride (GaN), Agilent’s Advanced Design system (ADS) and lumped elements.

3237 A Rapid and Cost-Effective Approach to Manufacturing Modeling Platform for Fused Deposition Modeling

Authors: Chil-Chyuan Kuo, Chen-Hsuan Tsai

Abstract:

This study presents a cost-effective approach for rapidly fabricating the modeling platforms used in fused deposition modeling systems. A small batch of about 20 modeling platforms can be produced economically through a silicone rubber mold using vacuum casting, without resorting to plastic injection molding. The air venting system is crucial when fabricating modeling platforms by vacuum casting. The fabricated modeling platforms can be used for building rapid prototyping models after sandblasting. This study offers industrial value because it is both time-effective and cost-effective.

Keywords: Vacuum casting, fused deposition modeling, modeling platform, sandblasting, surface roughness.

3236 Optimization of Soybean Oil by Modified Supercritical Carbon Dioxide

Authors: N. R. Putra, A. H. Abdul Aziz, A. S. Zaini, Z. Idham, F. Idrus, M. Z. Bin Zullyadini, M. A. Che Yunus

Abstract:

The content of omega-3 in soybean oil is important in the development of infants and is an alternative to the omega-3 in fish oils. Investigation of the extraction of soybean oil is needed to obtain the bioactive compounds in the extract. Supercritical carbon dioxide extraction is a modern and green technology for extracting herbs and plants to obtain high-quality extracts, owing to the high diffusivity and solubility of the solvent. The aim of this study was to obtain the optimum conditions for soybean oil extraction by modified supercritical carbon dioxide. The soybean oil was extracted using modified supercritical carbon dioxide (SC-CO2) at temperatures of 40, 60 and 80 °C, pressures of 150, 250 and 350 bar, and a constant flow rate of 10 g/min as the extraction parameters. An experimental design was performed in order to optimize the important SC-CO2 extraction parameters, pressure (X1) and temperature (X2), to achieve optimum yields of soybean oil. A Box-Behnken Design was applied for the experimental design. From the optimization process, the optimum extraction conditions were obtained at a pressure of 338 bar and a temperature of 80 °C, with an oil yield of 2.713 g. The effect of pressure on the extraction of soybean oil by modified supercritical carbon dioxide is significant: increasing the pressure increases the oil yield.

Keywords: Soybean oil, SC-CO2 extraction, yield, optimization.

3235 A Study on the Assessment of Prosthetic Infection after Total Knee Replacement Surgery

Authors: Chang, Chun-Lang, Liu, Chun-Kai

Abstract:

This study uses, as its research subjects, patients who had undergone total knee replacement surgery, drawn from the database of the National Health Insurance Administration. Through a review of the literature and interviews with physicians, important factors were selected after careful screening. Then, using the Cross Entropy Method, Genetic Algorithm Logistic Regression, and Particle Swarm Optimization, the weight of each factor was calculated. Meanwhile, Excel VBA and Case-Based Reasoning were combined to evaluate the system. Results show no significant difference between Genetic Algorithm Logistic Regression and Particle Swarm Optimization, with over 97% accuracy in both methods; both ROC areas are above 0.87. This study can provide a critical reference for medical personnel in clinical assessment to effectively enhance medical care quality and efficiency, prevent unnecessary waste, and support resource allocation in medical institutions.

Keywords: Total knee replacement, Case Based Reasoning, Cross Entropy Method, Genetic Algorithm Logistic Regression, Particle Swarm Optimization.
