Search results for: optimal smoothing parameter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5002


4162 A Stochastic Vehicle Routing Problem with Ordered Customers and Collection of Two Similar Products

Authors: Epaminondas G. Kyriakidis, Theodosis D. Dimitrakos, Constantinos C. Karamatsoukis

Abstract:

The vehicle routing problem (VRP) is a well-known problem in Operations Research and has been widely studied during the last fifty-five years. The context of the VRP is that of delivering or collecting products to or from customers who are scattered in a geographical area and have placed orders for these products. A vehicle or a fleet of vehicles start their routes from a depot and visit the customers in order to satisfy their demands. Special attention has been given to the capacitated VRP, in which the vehicles have limited carrying capacity for the goods that are delivered or collected. In the present work, we present a specific capacitated stochastic vehicle routing problem which has many realistic applications. We develop and analyze a mathematical model for a specific vehicle routing problem in which a vehicle starts its route from a depot and visits N customers according to a particular sequence in order to collect from them two similar but not identical products. We refer to these products as product 1 and product 2. Each customer possesses items either of product 1 or of product 2 with known probabilities. The number of items of product 1 or product 2 that each customer possesses is a discrete random variable with known distribution. The actual quantity and the actual type of product that each customer possesses are revealed only when the vehicle arrives at the customer’s site. It is assumed that the vehicle has two compartments, which we refer to as compartment 1 and compartment 2. It is assumed that compartment 1 is suitable for loading product 1 and compartment 2 is suitable for loading product 2. However, it is permitted to load items of product 1 into compartment 2 and items of product 2 into compartment 1; these actions incur costs that are due to extra labor. The vehicle is allowed during its route to return to the depot to unload the items of both products. The travel costs between consecutive customers and the travel costs between the customers and the depot are known. The objective is to find the optimal routing strategy, i.e., the routing strategy that minimizes the total expected cost among all possible strategies for servicing all customers. It is possible to develop a suitable dynamic programming algorithm for the determination of the optimal routing strategy. It is also possible to prove that the optimal routing strategy has a specific threshold-type structure: for each customer, the optimal actions are characterized by some critical integers. This structural result enables us to design a special-purpose dynamic programming algorithm that operates only over the strategies having this structural property. Extensive numerical results provide strong evidence that the special-purpose dynamic programming algorithm is considerably more efficient than the initial dynamic programming algorithm. Furthermore, if we consider the same problem without the assumption that the customers are ordered, numerical experiments indicate that the optimal routing strategy can be computed if N is smaller than or equal to eight.

Keywords: dynamic programming, similar products, stochastic demands, stochastic preferences, vehicle routing problem

Procedia PDF Downloads 255
4161 Vibration Behavior of Nanoparticle Delivery in a Single-Walled Carbon Nanotube Using Nonlocal Timoshenko Beam Theory

Authors: Haw-Long Lee, Win-Jin Chang, Yu-Ching Yang

Abstract:

In the paper, the coupled equation of motion for the dynamic displacement of a fullerene moving in a (10,10) single-walled carbon nanotube (SWCNT) is derived using nonlocal Timoshenko beam theory, including the effects of rotary inertia and shear deformation. The effects of the confined stiffness between the fullerene and the nanotube, the foundation stiffness, and the nonlocal parameter on the dynamic behavior are analyzed using the Runge-Kutta method. The numerical solution is in agreement with the analytical result for the special case. The numerical results show that increasing the confined stiffness and the foundation stiffness decreases the dynamic displacement of the SWCNT. However, the dynamic displacement increases with an increasing nonlocal parameter. In addition, results obtained using the Euler beam theory and the Timoshenko beam theory are compared. It is found that ignoring the effects of rotary inertia and shear deformation leads to an underestimation of the displacement.
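
A minimal sketch (not the authors' model) of how such a coupled system can be marched in time with the classical fourth-order Runge-Kutta method; the two-degree-of-freedom equations, stiffness values, and variable names below are illustrative assumptions rather than the paper's nonlocal Timoshenko formulation.

```python
import numpy as np

# Illustrative two-DOF model: one beam modal coordinate w and one particle coordinate u,
# coupled through a "confined" stiffness k_c, with a foundation stiffness k_f on the beam.
# All parameter values are placeholders, not taken from the paper.
m_beam, m_particle = 1.0, 0.1
k_beam, k_f, k_c = 50.0, 10.0, 5.0

def rhs(t, y):
    """State y = [w, u, w_dot, u_dot]; returns dy/dt."""
    w, u, wd, ud = y
    w_dd = (-(k_beam + k_f) * w - k_c * (w - u)) / m_beam
    u_dd = (-k_c * (u - w)) / m_particle
    return np.array([wd, ud, w_dd, u_dd])

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

t, h, y = 0.0, 1e-3, np.array([1e-3, 0.0, 0.0, 0.0])  # small initial beam displacement
history = []
for _ in range(5000):
    y = rk4_step(rhs, t, y, h)
    t += h
    history.append((t, y[0]))  # track the beam (SWCNT) dynamic displacement
```

In the full model, the right-hand side would come from the discretized nonlocal Timoshenko beam equations rather than this toy spring coupling.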

Keywords: single-walled carbon nanotube, nanoparticle delivery, nonlocal Timoshenko beam theory, Runge-Kutta method, van der Waals force

Procedia PDF Downloads 376
4160 Optimization of Passive Vibration Damping of Space Structures

Authors: Emad Askar, Eldesoky Elsoaly, Mohamed Kamel, Hisham Kamel

Abstract:

The objective of this article is to improve the passive vibration damping of a solar array (SA) used in space structures by the effective application of numerical optimization. A case study of an SA is used for demonstration. A finite element (FE) model was created and verified by experimental testing. Optimization was then conducted by coupling the FE model with a genetic algorithm to find the optimal placement of aluminum circular patches that suppress the first two bending mode shapes. The results were verified using experimental testing. Finally, a parametric study was conducted using the FE model, in which patch locations, material type, and shape were varied one at a time, and the results were compared with the optimal ones. The results clearly show that, through the proper application of FE modeling and numerical optimization, passive vibration damping of space structures has been successfully achieved.

Keywords: damping optimization, genetic algorithm optimization, passive vibration damping, solar array vibration damping

Procedia PDF Downloads 447
4159 Empirical Green’s Function Technique for Accelerogram Synthesis: The Problem of the Use for Marine Seismic Hazard Assessment

Authors: Artem A. Krylov

Abstract:

Instrumental seismological research in water areas is complicated and expensive, which leads to a lack of strong-motion records in most offshore regions. At the same time, the number of offshore industrial infrastructure objects, such as oil rigs and subsea pipelines, is constantly increasing. The empirical Green’s function technique has proved to be very effective for accelerogram synthesis under the conditions of a poorly described seismic wave propagation medium. However, the selection of a suitable small-earthquake record to serve as an empirical Green’s function is a problem in offshore regions, because seafloor instrumental seismological investigations are usually short and yield only weak micro-earthquake recordings. An approach based on moving-average smoothing in the frequency domain is presented for the preliminary processing of weak micro-earthquake records before they are used as empirical Green’s functions. The method results in a significant waveform correction for the modeled event. A case study of the 2009 L’Aquila earthquake is used to demonstrate the suitability of the method. This work was supported by the Russian Foundation for Basic Research (project № 18-35-00474 mol_a).
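
A minimal sketch of the moving-average spectral smoothing step described above, assuming the record is a plain NumPy array sampled at a fixed interval; the window length, the choice to smooth only the amplitude spectrum, and the synthetic record are illustrative assumptions, not the authors' processing chain.

```python
import numpy as np

def smooth_spectrum(record, window_pts=11):
    """Moving-average smoothing of the amplitude spectrum of a weak record.

    The phase spectrum is kept unchanged; only the amplitude is smoothed,
    which is one simple way to suppress spectral noise in a micro-earthquake
    record before it is used as an empirical Green's function.
    """
    n = len(record)
    spec = np.fft.rfft(record)
    amp, phase = np.abs(spec), np.angle(spec)
    kernel = np.ones(window_pts) / window_pts
    amp_smooth = np.convolve(amp, kernel, mode="same")   # running mean in frequency
    spec_smooth = amp_smooth * np.exp(1j * phase)
    return np.fft.irfft(spec_smooth, n=n)

# Usage with a synthetic noisy record (placeholder for a real seafloor recording)
dt = 0.01
t = np.arange(0, 20, dt)
record = np.exp(-t) * np.sin(2 * np.pi * 3 * t) + 0.05 * np.random.randn(len(t))
egf = smooth_spectrum(record)
```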

Keywords: accelerogram synthesis, empirical Green's function, marine seismology, microearthquakes

Procedia PDF Downloads 321
4158 Credit Card Fraud Detection with Ensemble Model: A Meta-Heuristic Approach

Authors: Gong Zhilin, Jing Yang, Jian Yin

Abstract:

The purpose of this paper is to develop a novel system for credit card fraud detection based on sequential modeling of data using hybrid deep learning models. The proposed model encapsulates five major phases: pre-processing, imbalanced-data handling, feature extraction, optimal feature selection, and fraud detection with an ensemble classifier. The collected raw data (input) are pre-processed to enhance the quality of the data by alleviating missing data, noisy data, and null values. The pre-processed data are class-imbalanced in nature, and therefore they are handled effectively with the K-means clustering-based SMOTE model. From the class-balanced data, the most relevant features are extracted, including improved Principal Component Analysis (PCA) features, statistical features (mean, median, standard deviation), and higher-order statistical features (skewness and kurtosis). Among the extracted features, the most optimal features are selected with the Self-improved Arithmetic Optimization Algorithm (SI-AOA). This SI-AOA model is a conceptual improvement of the standard Arithmetic Optimization Algorithm. The detection stage employs deep learning models, namely Long Short-Term Memory (LSTM), a Convolutional Neural Network (CNN), and an optimized Quantum Deep Neural Network (QDNN). The LSTM and CNN are trained with the extracted optimal features. The outcomes from the LSTM and CNN enter as input to the optimized QDNN, which provides the final detection outcome. Since the QDNN is the ultimate detector, its weight function is fine-tuned with the Self-improved Arithmetic Optimization Algorithm (SI-AOA).
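
A minimal sketch (not the authors' pipeline) of the imbalance-handling and feature-extraction stages described above, using imbalanced-learn's KMeansSMOTE as a stand-in for the K-means clustering-based SMOTE model and scikit-learn/SciPy for the PCA and statistical features; the data, the number of PCA components, and the clustering threshold are placeholders.

```python
import numpy as np
from imblearn.over_sampling import KMeansSMOTE      # K-means clustering-based SMOTE
from sklearn.decomposition import PCA
from scipy.stats import skew, kurtosis

def balance_and_extract(X, y, n_components=5):
    """Balance classes with K-means SMOTE, then build a simple feature matrix:
    PCA components plus per-sample statistical and higher-order features."""
    # threshold lowered so the toy data always yields usable clusters
    X_bal, y_bal = KMeansSMOTE(random_state=0,
                               cluster_balance_threshold=0.05).fit_resample(X, y)

    pca_feats = PCA(n_components=n_components).fit_transform(X_bal)
    stats = np.column_stack([
        X_bal.mean(axis=1), np.median(X_bal, axis=1), X_bal.std(axis=1),
        skew(X_bal, axis=1), kurtosis(X_bal, axis=1),
    ])
    return np.hstack([pca_feats, stats]), y_bal

# Usage with random placeholder data (~90% legitimate / ~10% fraud labels)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (rng.random(1000) < 0.1).astype(int)
features, labels = balance_and_extract(X, y)
```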

Keywords: credit card, data mining, fraud detection, money transactions

Procedia PDF Downloads 127
4157 Optimization of High Flux Density Design for Permanent Magnet Motor

Authors: Dong-Woo Kang

Abstract:

This paper presents an optimal magnet shape for a spoke-shaped interior permanent magnet synchronous motor using ferrite magnets. Generally, a permanent magnet motor that uses ferrite magnets has lower output power and efficiency than a rare-earth magnet motor, because the ferrite magnet has lower magnetic energy than the rare-earth magnet. Nevertheless, the ferrite magnet motor is used in many industrial products owing to its cost effectiveness. In this paper, the authors propose a high power density design of the ferrite permanent magnet synchronous motor. Furthermore, because the motor design has to take the manufacturing process into account, the design is simulated using the finite element method to analyze the demagnetization, the magnetizing, and the structural stiffness. In particular, the magnet shape and dimensions are decided so as to satisfy these properties. Finally, the authors design an optimal motor for application in our system. The final design is manufactured and evaluated through experiments.

Keywords: demagnetization, design optimization, magnetic analysis, permanent magnet motors

Procedia PDF Downloads 373
4156 A Method of Effective Planning and Control of Industrial Facility Energy Consumption

Authors: Aleksandra Aleksandrovna Filimonova, Lev Sergeevich Kazarinov, Tatyana Aleksandrovna Barbasova

Abstract:

A method for the effective planning and control of industrial facility energy consumption is proposed. The method allows the management and full control of complex production facilities to be arranged optimally, in accordance with the criteria of minimal technical and economic losses under forecasting control. The method is based on the optimal construction of the power efficiency characteristics with the prescribed accuracy. The problem of optimal design of the forecasting model is solved on the basis of three criteria: maximizing the weighted sum of the points of forecasting with the prescribed accuracy; solving the problem by the standard principles under incomplete statistical data on the basis of minimization of a regularized function; and minimizing the technical and economic losses due to forecasting errors.

Keywords: energy consumption, energy efficiency, energy management system, forecasting model, power efficiency characteristics

Procedia PDF Downloads 388
4155 On the Role of Cutting Conditions on Surface Roughness in High-Speed Thread Milling of Brass C3600

Authors: Amir Mahyar Khorasani, Ian Gibson, Moshe Goldberg, Mohammad Masoud Movahedi, Guy Littlefair

Abstract:

One of the important factors in manufacturing processes, especially machining operations, is surface quality. Improving this parameter results in improved fatigue strength, corrosion resistance, creep life, and surface friction. The reliability and clearance of removable joints such as threads and nuts are highly related to the surface roughness. In this work, the effect of different cutting parameters, such as cutting fluid pressure, feed rate, and cutting speed, on the surface quality of the thread crest in the high-speed milling of brass C3600 has been determined. Two popular neural networks, MLP and RBF, coupled with a Taguchi L32 design, have been used to model surface roughness and were shown to be highly adept for such tasks. The contribution of this work is modelling the surface roughness on the crest of the thread by using a precise profilometer with nanoscale resolution. Experimental tests have been carried out for validation and confirmed the suitable accuracy of the proposed model. Analysing the pairwise interactions of the parameters also showed that the cutting parameter with the strongest effect on the surface roughness is the feed rate, followed by the cutting speed and the cutting fluid pressure.
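
A minimal sketch of modelling crest roughness from the three cutting parameters with a small neural network, here scikit-learn's MLPRegressor; the design points, roughness values, and network size are invented placeholders, not the paper's Taguchi L32 data or its MLP/RBF architectures.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Columns: cutting fluid pressure, feed rate, cutting speed (placeholder values,
# loosely mimicking a fractional design); target is a hypothetical crest roughness Ra.
X = np.array([[6.0, 0.05, 3000], [6.0, 0.10, 4500], [12.0, 0.05, 4500],
              [12.0, 0.10, 3000], [18.0, 0.05, 3000], [18.0, 0.10, 4500]])
Ra = np.array([0.42, 0.61, 0.48, 0.66, 0.40, 0.57])   # hypothetical measurements (um)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, Ra)
print(model.predict([[10.0, 0.08, 4000]]))   # roughness predicted for a new condition
```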

Keywords: artificial neural networks, cutting conditions, high-speed machining, surface roughness, thread milling

Procedia PDF Downloads 374
4154 Failure Analysis of Electrode, Nozzle Plate, and Powder Injector during Air Plasma Spray Coating

Authors: Nemes Alexandra

Abstract:

The aim of the research is to develop an optimum microstructure of steel coatings on aluminum surfaces for application on crankcase cylinder bores. For the proper design of the coating microstructure, it is important to control the plasma gun unit properly. The maximum operating time over which the plasma gun can work optimally before its destruction was determined. Objectives: The aim of the research is to determine the optimal operating time of the plasma gun between renovations (a renovation involves the replacement of the examined components of the plasma gun: electrode, nozzle plate, and powder injector). Methodology: plasma jet and particle flux analysis with PFI (PFI is a diagnostic tool for all kinds of thermal spraying processes), CT reconstruction and analysis of the new and the used plasma guns, failure analysis of electrodes, nozzle plates, and powder injectors, and microscopic examination of the microstructure of the coating. Contributions: As a result of the failure analysis detailed above, the use of the plasma gun was limited to a maximum of 100 operating hours in order to obtain an optimal microstructure for the coating.

Keywords: APS, air plasma spray, failure analysis, electrode, nozzle plate, powder injector

Procedia PDF Downloads 117
4153 Optimal Design of the Power Generation Network in California: Moving towards 100% Renewable Electricity by 2045

Authors: Wennan Long, Yuhao Nie, Yunan Li, Adam Brandt

Abstract:

To fight against climate change, the California government passed Senate Bill No. 100 (SB-100) in September 2018, which aims at achieving a target of 100% renewable electricity by the end of 2045. A capacity expansion problem is solved in this case study using a binary quadratic programming model. The optimal locations and capacities of the potential renewable power plants (i.e., solar, wind, biomass, geothermal, and hydropower), the phase-out schedule of existing fossil-based (natural gas) power plants, and the transmission of electricity across the entire network are determined with the minimal total annualized cost measured by net present value (NPV). The results show that the renewable electricity contribution could increase to 85.9% by 2030 and reach 100% by 2035. Fossil-based power plants will be totally phased out around 2035, and solar and wind will finally become the most dominant renewable energy resources in the California electricity mix.

Keywords: 100% renewable electricity, California, capacity expansion, mixed integer non-linear programming

Procedia PDF Downloads 168
4152 Artificial Neural Network-Based Prediction of Effluent Quality of Wastewater Treatment Plant Employing Data Preprocessing Approaches

Authors: Vahid Nourani, Atefeh Ashrafi

Abstract:

Prediction of treated wastewater quality is a matter of growing importance in the water treatment process. In this regard, the artificial neural network (ANN), as a robust data-driven approach, has been widely used for forecasting the effluent quality of wastewater treatment. However, developing an ANN model based on appropriate input variables is a major concern due to the numerous parameters collected from the treatment process, whose number is increasing with the development of electronic sensors. Various studies have been conducted, using different clustering methods, in order to classify the most related and effective input variables. This issue has often been overlooked when selecting dominant input variables among wastewater treatment parameters, although it could effectively lead to more accurate prediction of water quality. In the presented study, two ANN models were developed with the aim of forecasting the effluent quality of Tabriz city’s wastewater treatment plant. Biochemical oxygen demand (BOD) was used as the target water quality parameter. Model A used Principal Component Analysis (PCA), a linear variance-based clustering method, for input selection. Model B used those variables identified by the mutual information (MI) measure. The comparison of the two models showed up to a 15% increment in the Determination Coefficient (DC) for the optimal ANN structure. Thus, this study highlights the advantage of the PCA method in selecting dominant input variables for ANN modeling of wastewater treatment plant performance.
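
A minimal sketch of the two input-selection routes described above, PCA versus mutual information, feeding the same small ANN; the synthetic data, the number of retained inputs, and the cross-validation setup are assumptions, not the Tabriz plant dataset or the authors' network structure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_regression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))                               # candidate process variables
bod = X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.normal(size=300)   # synthetic BOD target

# Route A: linear variance-based reduction of the inputs with PCA
X_a = PCA(n_components=4).fit_transform(X)

# Route B: keep the four variables with the highest mutual information with BOD
mi = mutual_info_regression(X, bod, random_state=0)
X_b = X[:, np.argsort(mi)[-4:]]

for name, Xi in [("PCA inputs", X_a), ("MI-selected inputs", X_b)]:
    ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
    score = cross_val_score(ann, Xi, bod, cv=5, scoring="r2").mean()
    print(name, "mean R2:", round(score, 3))
```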

Keywords: Artificial Neural Networks, biochemical oxygen demand, principal component analysis, mutual information, Tabriz wastewater treatment plant, wastewater treatment plant

Procedia PDF Downloads 125
4151 Optimal Load Control Strategy in the Presence of Stochastically Dependent Renewable Energy Sources

Authors: Mahmoud M. Othman, Almoataz Y. Abdelaziz, Yasser G. Hegazy

Abstract:

This paper presents a load control strategy based on a modification of the Big Bang–Big Crunch optimization method. The proposed strategy aims to determine the optimal load to be controlled and the corresponding time of control in order to minimize the energy purchased from the substation. The presented strategy helps the distribution network operator to rely on the renewable energy sources in supplying the system demand. The renewable energy sources used in the presented study are modeled using the diagonal band copula method and the sequential Monte Carlo method in order to accurately consider the multivariate stochastic dependence between wind power, photovoltaic power, and the system demand. The proposed algorithms are implemented in the MATLAB environment and tested on the IEEE 37-node feeder. Several case studies are carried out, and the subsequent discussions show the effectiveness of the proposed algorithm.
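
A minimal sketch of a plain Big Bang–Big Crunch minimiser of the kind the strategy modifies; the population size, the shrinking-spread update, and the toy cost function standing in for the energy purchased from the substation are illustrative assumptions.

```python
import numpy as np

def big_bang_big_crunch(objective, bounds, pop_size=40, iterations=100, seed=0):
    """Minimal Big Bang-Big Crunch minimiser (illustrative, not the modified
    variant of the paper). bounds is a list of (low, high) per dimension."""
    rng = np.random.default_rng(seed)
    low, high = np.array(bounds).T
    dim = len(bounds)
    pop = rng.uniform(low, high, size=(pop_size, dim))          # initial big bang
    for k in range(1, iterations + 1):
        fitness = np.array([objective(x) for x in pop])
        weights = 1.0 / (fitness - fitness.min() + 1e-12)       # better points weigh more
        center = weights @ pop / weights.sum()                  # big crunch: center of mass
        spread = (high - low) * rng.standard_normal((pop_size, dim)) / k
        pop = np.clip(center + spread, low, high)               # new, shrinking big bang
    fitness = np.array([objective(x) for x in pop])
    return pop[fitness.argmin()], fitness.min()

# Toy usage: choose two controllable load levels that minimise a quadratic "purchase cost"
cost = lambda x: (x[0] - 30) ** 2 + (x[1] - 70) ** 2
best, best_cost = big_bang_big_crunch(cost, bounds=[(0, 100), (0, 100)])
```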

Keywords: big bang big crunch, distributed generation, load control, optimization, planning

Procedia PDF Downloads 339
4150 Identifying Dominant Anaerobic Microorganisms for Degradation of Benzene

Authors: Jian Peng, Wenhui Xiong, Zheng Lu

Abstract:

An optimal amendment recipe (nutrients and electron acceptors) was developed, and the dominant indigenous benzene-degrading microorganisms were characterized in this study. Lessons were learnt from the development of the optimal amendment recipe: (1) salinity and a substantial initial concentration of benzene were detrimental to benzene biodegradation; (2) a large dose of amendments can shorten the lag time before benzene biodegradation occurs; (3) toluene was an essential co-substrate for promoting benzene degradation activity. The stable isotope probing study identified the incorporation of 13C from 13C-benzene into microorganisms, which can be considered direct evidence of the occurrence of benzene biodegradation. The dominant mechanism for benzene removal was identified by quantitative polymerase chain reaction analysis to be nitrate reduction. Microbial analyses (denaturing gradient gel electrophoresis and 16S ribosomal RNA) demonstrated that members of the genera Dokdonella spp., Pusillimonas spp., and Advenella spp. were predominant within the microbial community and involved in the anaerobic benzene bioremediation.

Keywords: benzene, enhanced anaerobic bioremediation, stable isotope probing, biosep biotrap

Procedia PDF Downloads 338
4149 Elephant Herding Optimization for Service Selection in QoS-Aware Web Service Composition

Authors: Samia Sadouki Chibani, Abdelkamel Tari

Abstract:

Web service composition combines available services to provide new functionality. Given the number of available services with similar functionalities and different non-functional aspects (QoS), the problem of finding a QoS-optimal web service composition is considered an optimization problem belonging to the NP-hard class. Thus, an optimal solution cannot be found by exact algorithms within a reasonable time. In this paper, a bio-inspired meta-heuristic is presented to address QoS-aware web service composition; it is based on the Elephant Herding Optimization (EHO) algorithm, which is inspired by the herding behavior of elephant groups. EHO is characterized by a process of dividing the population into sub-populations (clans) and recombining them; this process allows the exchange of information between local searches to move toward a global optimum. However, when applying other evolutionary algorithms, the problem of early stagnation in a local optimum cannot be avoided. Compared with PSO, the results of the experimental evaluation show that our proposition significantly outperforms the existing algorithm, with better fitness values and faster convergence.

Keywords: bio-inspired algorithms, elephant herding optimization, QoS optimization, web service composition

Procedia PDF Downloads 324
4148 Estimating the Properties of Polymer Concrete Using the Response Surface Method

Authors: Oguz Ugurkan Akkaya, Alpaslan Sipahi, Ozgur Firat Pamukcu, Murat Yasar, Tolga Guler, Arif Ulu, Ferit Cakir

Abstract:

With the increase in human population and the expansion and renovation of cities, infrastructure systems today need to be manufactured to be more durable and long-lasting. The most cost-effective and durable manufacturing of components is a general problem in all engineering disciplines; therefore, it is important to determine the optimal components. This study mainly focuses on the optimal component design of polymer concrete. For this purpose, the lower and upper limits of the three main components of the polymer concrete are determined. The effects of these three principal components on the compressive strength, tensile strength, and unit price of the polymer concrete are estimated using the response surface method. A Box-Behnken design is used in designing the experiments. Compressive strength, tensile strength, and unit price are successfully estimated with coefficients of determination (R²) of 0.82, 0.92, and 0.90, respectively, and the optimum mixture quantity is determined.
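
A minimal sketch of the experimental-design-plus-response-surface idea described above: a coded three-factor Box-Behnken design and a quadratic surface fitted with scikit-learn; the response values are synthetic placeholders, not the measured polymer concrete strengths.

```python
import numpy as np
from itertools import combinations
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

def box_behnken_3():
    """Coded Box-Behnken design for three factors: +/-1 on each factor pair,
    the remaining factor at 0, plus three center runs."""
    runs = []
    for i, j in combinations(range(3), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0.0, 0.0, 0.0]
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0.0, 0.0, 0.0]] * 3
    return np.array(runs)

X = box_behnken_3()                       # coded levels of the three mix components
rng = np.random.default_rng(0)
# Hypothetical compressive-strength responses for each run (MPa); placeholders only.
y = 80 + 5 * X[:, 0] - 3 * X[:, 1] + 2 * X[:, 2] - 4 * X[:, 0] ** 2 + rng.normal(0, 1, len(X))

quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), y)
print("R^2 of the quadratic response surface:", round(model.score(quad.transform(X), y), 2))
```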

Keywords: Box-Behnken Design, compressive strength, mechanical tests, polymer concrete, tensile strength

Procedia PDF Downloads 168
4147 Ta-DAH: Task Driven Automated Hardware Design of Free-Flying Space Robots

Authors: Lucy Jackson, Celyn Walters, Steve Eckersley, Mini Rai, Simon Hadfield

Abstract:

Space robots will play an integral part in exploring the universe and beyond. A correctly designed space robot will facilitate OOA, satellite servicing, and ADR. However, problems arise when trying to design such a system, as it is a highly complex multidimensional problem on which there is little research. Current design techniques are slow and specific to terrestrial manipulators. This paper presents a solution to the slow speed of robotic hardware design and generalizes the technique to free-flying space robots. It presents Ta-DAH Design, an automated design approach that utilises a multi-objective cost function in an iterative and automated pipeline. The design approach leverages prior knowledge and facilitates the faster output of optimal designs. The result is a system that can optimise the size of the base spacecraft, the manipulator, and some key subsystems for any given task. Presented in this work are the methodology behind Ta-DAH Design and a number of optimal space robot designs.

Keywords: space robots, automated design, on-orbit operations, hardware design

Procedia PDF Downloads 68
4146 Fault-Tolerant Predictive Control for Polytopic LPV Systems Subject to Sensor Faults

Authors: Sofiane Bououden, Ilyes Boulkaibet

Abstract:

In this paper, a robust fault-tolerant predictive control (FTPC) strategy is proposed for systems with linear parameter varying (LPV) models and input constraints subject to sensor faults. Generally, virtual observers are used to improve the observation precision and to reduce the impact of sensor faults and uncertainties in the system. However, this type of observer lacks certain system measurements, which substantially reduces its accuracy. To deal with this issue, a real observer is then designed based on the virtual observer, and consequently a real observer-based robust predictive control is designed for polytopic LPV systems. Moreover, the proposed observer can fully ensure that all system states and sensor faults are estimated. As a result, and based on both observers, a robust fault-tolerant predictive control is then established via the Lyapunov method, where sufficient conditions for stability analysis and control purposes are proposed in linear matrix inequality (LMI) form. Finally, simulation results are given to show the effectiveness of the proposed approach.
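
The sufficient conditions above are posed as LMIs solved with SDP tools; as a small, dependency-light stand-in, the sketch below checks the underlying Lyapunov feasibility question for a placeholder system matrix by solving the Lyapunov equation with SciPy and verifying positive definiteness, rather than reproducing the paper's observer/controller LMIs.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Placeholder closed-loop system matrix (2x2); the paper's conditions are LMIs over
# polytopic vertices with observer and controller variables, not reproduced here.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Solve A'P + PA = -Q for a chosen Q > 0, then verify P > 0: a Lyapunov certificate
# of stability, i.e. the feasibility question the simplest stability LMI encodes.
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)
eigs = np.linalg.eigvalsh((P + P.T) / 2)
print("P =", P)
print("P is positive definite:", bool(np.all(eigs > 0)), "-> closed loop is stable")
```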

Keywords: linear parameter varying systems, fault-tolerant predictive control, observer-based control, sensor faults, input constraints, linear matrix inequalities

Procedia PDF Downloads 195
4145 Parameters Identification and Sensitivity Study for Abrasive WaterJet Milling Model

Authors: Didier Auroux, Vladimir Groza

Abstract:

This work is part of the STEEP Marie Curie ITN project, and it focuses on the identification of the unknown parameters of the proposed generic Abrasive WaterJet Milling (AWJM) PDE model, which appears as an ill-posed inverse problem. The necessity of studying this problem comes from industrial milling applications, where the ability to predict and model the final surface with high accuracy is one of the primary tasks in the absence of any knowledge of the model parameters that should be used. In this framework, we propose the identification of the model parameters by minimizing a cost function measuring the difference between the experimental and numerical solutions. The adjoint approach, based on the corresponding Lagrangian, gives the opportunity to find the unknowns of the AWJM model and their optimal values, which could be used to reproduce the required trench profile. Due to the complexity of the nonlinear problem and the large number of model parameters, we use an automatic differentiation software tool (TAPENADE) for the adjoint computations. By adding noise to the artificial data, we show that the parameter identification problem is in fact highly unstable and strongly depends on the input measurements. Regularization terms can be effectively used to deal with the presence of data noise and to improve the identification correctness. Based on this approach, we present 2D and 3D results for the identification of the model parameters and for the surface prediction, both with self-generated data and with measurements obtained from real production. Considering different types of model and measurement errors allows us to obtain acceptable results for manufacturing and to expect the proper identification of the unknowns. This approach also gives us the ability to extend the research to more complex cases and to consider different types of model and measurement errors as well as a 3D time-dependent model with variations of the jet feed speed.
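
A minimal sketch of cost-function minimisation with a Tikhonov regularization term, using a toy analytic forward model and SciPy's L-BFGS-B in place of the adjoint/TAPENADE gradients; the forward model, parameter names, and noise level are illustrative assumptions, not the AWJM PDE.

```python
import numpy as np
from scipy.optimize import minimize

# Toy forward model standing in for the AWJM PDE: trench depth profile produced by
# two unknown parameters (etching rate a, beam width w). Purely illustrative.
x = np.linspace(-2, 2, 81)
def forward(params):
    a, w = params
    return a * np.exp(-(x / w) ** 2)

true_params = np.array([1.5, 0.6])
data = forward(true_params) + 0.02 * np.random.default_rng(0).normal(size=x.size)

def cost(params, alpha=1e-3, prior=np.array([1.0, 1.0])):
    """Misfit between measured and simulated profiles plus a Tikhonov penalty.
    The penalty stabilises the ill-posed identification when the data are noisy."""
    misfit = forward(params) - data
    return 0.5 * misfit @ misfit + 0.5 * alpha * np.sum((params - prior) ** 2)

result = minimize(cost, x0=[1.0, 1.0], method="L-BFGS-B", bounds=[(0.1, 5), (0.1, 5)])
print("identified parameters:", result.x)
```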

Keywords: Abrasive Waterjet Milling, inverse problem, model parameters identification, regularization

Procedia PDF Downloads 314
4144 Optimal Power Distribution and Power Trading Control among Loads in a Smart Grid Operated Industry

Authors: Vivek Upadhayay, Siddharth Deshmukh

Abstract:

In recent years, the utilization of renewable energy sources has increased greatly because of growing global warming concerns. Organizations these days are generally operated by a microgrid or smart grid on a small scale. Power optimization and optimal load tripping are possible in a smart grid based industry. In any plant or industry, loads can be divided into different categories based on their importance to the plant and their power requirement pattern on working days. Dividing loads into such categories and providing a different power management algorithm for each category of load can reduce the power cost and come in handy in balancing the stability and reliability of power. An objective function is defined over the variable that we are supposed to minimize. Constraint equations are formed by taking the difference between the power usage pattern of the present day and that of the same day of the previous week. By considering the objectives of minimal load tripping and optimal power distribution, the proposed problem formulation is a multi-objective optimization problem. Through normalization of each objective function, the multi-objective optimization is transformed into a single-objective optimization. As a result, we obtain the optimized values of the power required by each load for the present day by using the past values of the required power for the same day of the previous week; this is essentially demand-response scheduling of power. These minimized values will then be distributed to each load through an algorithm used to optimize the power distribution at a greater depth. If the stored power exceeds the power requirement, profit can be made by selling the excess power to the main grid.

Keywords: power flow optimization, power trading enhancement, smart grid, multi-objective optimization

Procedia PDF Downloads 520
4143 Development of an Efficient Algorithm for Cessna Citation X Speed Optimization in Cruise

Authors: Georges Ghazi, Marc-Henry Devillers, Ruxandra M. Botez

Abstract:

Aircraft flight trajectory optimization has been identified as a promising solution for reducing both airline costs and the aviation net carbon footprint. Nowadays, this role has been mainly attributed to the flight management system. This system is an onboard multi-purpose computer responsible for providing the crew members with the optimized flight plan from one destination to the next. To accomplish this function, the flight management system uses a variety of look-up tables to compute the optimal speed and altitude for each flight regime instantly. Because the cruise is the longest segment of a typical flight, the proposed algorithm is focused on minimizing fuel consumption for this flight phase. In this paper, a complete methodology to estimate the aircraft performance and subsequently compute the optimal speed in cruise is presented. Results showed that the obtained performance database was accurate enough to predict the flight costs associated with the cruise phase.
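
A minimal sketch of the golden section search named in the keywords, applied to a toy direct-operating-cost model of the cruise; the cost coefficients, speed bounds, and time-cost weighting are placeholders, not Cessna Citation X performance data.

```python
import math

def golden_section_min(f, a, b, tol=1e-4):
    """Golden section search for the minimiser of a unimodal function on [a, b]."""
    inv_phi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                       # keep [a, d]; old c becomes new d
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                       # keep [c, b]; old d becomes new c
            d = a + inv_phi * (b - a)
    return (a + b) / 2

def cruise_cost(tas_kt, distance_nm=300.0, time_cost_per_h=500.0):
    """Illustrative direct-operating-cost model: fuel burn rises with speed,
    time-related cost falls with speed, so total cost has an interior minimum."""
    fuel_flow = 0.005 * tas_kt ** 2 + 100.0        # kg/h, placeholder coefficients
    time_h = distance_nm / tas_kt
    return fuel_flow * time_h + time_cost_per_h * time_h

best_speed = golden_section_min(cruise_cost, 250.0, 470.0)
print(f"optimal cruise speed ~ {best_speed:.1f} kt")
```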

Keywords: Cessna Citation X, cruise speed optimization, flight cost, cost index, golden section search

Procedia PDF Downloads 288
4142 Effect of Varying Zener-Hollomon Parameter (Temperature and Flow Stress) and Stress Relaxation on Creep Response of Hot Deformed AA3104 Can Body Stock

Authors: Oyindamola Kayode, Sarah George, Roberto Borrageiro, Mike Shirran

Abstract:

A phenomenon identified by our industrial partner is sag of the AA3104 can body stock (CBS) transfer bar during transportation of the slab from the breakdown mill to the finishing mill. Excessive sag results in bottom scuffing of the slab on the roller table, resulting in surface defects on the final product. It has been found that increasing the strain rate in the breakdown mill final pass results in a slab that is resistant to sag. The creep response of materials hot deformed at different Zener–Hollomon parameter values needs to be evaluated experimentally to gain a better understanding of the operating mechanism. This study investigates the identified phenomenon through laboratory simulation of the breakdown mill conditions for various strain rates, utilizing the Gleeble at the UCT Centre for Materials Engineering. The experiments determine the creep response for a range of conditions as well as quantifying the associated material microstructure (sub-grain size, grain structure, etc.). The experimental matrices were determined based on experimental conditions approximating industrial hot breakdown rolling, and the tests were carried out on the Gleeble 3800 at the Centre for Materials Engineering, University of Cape Town. Plane strain compression samples were used for this series of tests at an applied load that allows for better contact and exaggerated creep displacement. A tantalum barrier layer was used for increased conductivity and a decreased risk of anvil welding. One set of tests with no in-situ hold time was performed, in which the samples were quenched after deformation. The samples were retained for microstructural analysis: micrographs from light microscopy (LM), quantitative data and images from scanning electron microscopy (SEM) and energy dispersive X-ray spectroscopy (EDX), and sub-grain size and grain structure from electron backscatter diffraction (EBSD).

Keywords: aluminium alloy, can-body stock, hot rolling, creep response, Zener-Hollomon parameter

Procedia PDF Downloads 84
4141 Non-Pharmacological Approach to the Improvement and Maintenance of the Convergence Parameter

Authors: Andreas Aceranti, Guido Bighiani, Francesca Crotto, Marco Colorato, Stefania Zaghi, Marino Zanetti, Simonetta Vernocchi

Abstract:

The management of eye parameters such as convergence, accommodation, and miosis is very complex; in fact, both the neurovegetative system and the complex oculocephalogyric system come into play. We have found the "high-velocity low-amplitude" technique directed at C7-T1 (where the ciliospinal center of Budge is located) to be effective in improving the convergence parameter, as assessed through measurement of the point of maximum convergence. With this research, we set out to investigate whether the improvement obtained through the high-velocity low-amplitude maneuver lasts over time, carrying out a pre-manipulation measurement, one immediately after manipulation, and one a month after manipulation. We took a population of 30 subjects with both refractive and non-refractive problems. Of the 30 patients tested, 27 gave a positive result after the high-velocity low-amplitude maneuver, showing an improvement in the point of maximum convergence. After a month, we retested all 27 subjects: some further improved the result, others maintained it, and three subjects slightly lost the gain obtained. None of the re-tested patients returned to their pre-manipulation point of maximum convergence. This result opens the door to a multidisciplinary approach between ophthalmologists and osteopaths with the aim of addressing the oculomotricity and convergence deficits that increasingly afflict our society due to the massive use of devices and to lifestyles spent in closed and restricted environments.

Keywords: point of maximum convergence, HVLA, improvement in PPC, convergence

Procedia PDF Downloads 73
4140 Automatic Adult Age Estimation Using Deep Learning of the ResNeXt Model Based on CT Reconstruction Images of the Costal Cartilage

Authors: Ting Lu, Ya-Ru Diao, Fei Fan, Ye Xue, Lei Shi, Xian-e Tang, Meng-jun Zhan, Zhen-hua Deng

Abstract:

Accurate adult age estimation (AAE) is a significant and challenging task in the forensic and archeology fields. Attempts have been made to explore optimal adult age metrics, and the rib is considered a potential age marker. The traditional way is to extract age-related features designed by experts from macroscopic or radiological images, followed by classification or regression analysis. Those results still have not met the high-level requirements for practice, and the limitation of using feature design and manual extraction methods is the loss of information, since the features are likely not designed explicitly for extracting information relevant to age. Deep learning (DL) has recently garnered much interest in image learning and computer vision. It enables learning features that are important without a prior bias or hypothesis and could be supportive of AAE. This study aimed to develop DL models for AAE based on CT images and compare their performance to the manual visual scoring method. Chest CT data were reconstructed using volume rendering (VR). Retrospective data of 2500 patients aged 20.00-69.99 years were obtained between December 2019 and September 2021. Five-fold cross-validation was performed, and datasets were randomly split into training and validation sets in a 4:1 ratio for each fold. Before feeding the inputs into the networks, all images were augmented with random rotation and vertical flip, normalized, and resized to 224×224 pixels. ResNeXt was chosen as the DL baseline due to its advantages of higher efficiency and accuracy in image classification. Mean absolute error (MAE) was the primary evaluation parameter. Independent data from 100 patients acquired between March and April 2022 were used as a test set. The manual method completely followed the prior study, which reported the lowest MAEs (5.31 in males and 6.72 in females) among similar studies. CT data and VR images were used. The radiation density of the first costal cartilage was recorded using CT data on the workstation. The osseous and calcified projections of the first to seventh costal cartilages were scored based on VR images using an eight-stage staging technique. According to the results of the prior study, the optimal models were the decision tree regression model in males and the stepwise multiple linear regression equation in females. Predicted ages of the test set were calculated separately using the different models by sex. A total of 2600 patients (training and validation sets, mean age = 45.19 years ± 14.20 [SD]; test set, mean age = 46.57 ± 9.66) were evaluated in this study. In ResNeXt model training, MAEs of 3.95 in males and 3.65 in females were obtained. On the test set, DL achieved MAEs of 4.05 in males and 4.54 in females, which were far better than the MAEs of 8.90 and 6.42, respectively, for the manual method. These results show that DL with the ResNeXt model outperformed the manual method in AAE based on CT reconstruction of the costal cartilage, and the developed system may be a supportive tool for AAE.
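
A minimal sketch of setting up a ResNeXt backbone for age regression with an MAE (L1) objective in PyTorch/torchvision; the single-output head, optimizer settings, and random tensors are illustrative assumptions, and the augmentation, cross-validation, and any pretrained weights used by the authors are omitted.

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNeXt-50 backbone with its classifier replaced by a single regression output
# (predicted age in years). weights=None keeps the sketch offline; in practice
# pretrained weights and the paper's augmentation pipeline would be used.
model = models.resnext50_32x4d(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.L1Loss()                      # optimises mean absolute error directly
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, ages):
    """images: (B, 3, 224, 224) tensors standing in for CT volume renderings;
    ages: (B,) float tensor of chronological ages in years."""
    model.train()
    optimizer.zero_grad()
    pred = model(images).squeeze(1)
    loss = criterion(pred, ages)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors in place of preprocessed CT-VR images
loss = train_step(torch.randn(4, 3, 224, 224), torch.tensor([25.0, 41.0, 58.0, 33.0]))
print("training-step MAE loss:", loss)
```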

Keywords: forensic anthropology, age determination by the skeleton, costal cartilage, CT, deep learning

Procedia PDF Downloads 68
4139 Thermodynamics of Random Copolymers in Solution

Authors: Maria Bercea, Bernhard A. Wolf

Abstract:

The thermodynamic behavior of solutions of poly(methyl methacrylate-ran-t-butyl methacrylate) of variable composition, as compared with the corresponding homopolymers, was investigated by light scattering measurements carried out on dilute solutions and vapor pressure measurements on concentrated solutions. The complex dependencies of the Flory–Huggins interaction parameter on concentration and copolymer composition in solvents of different polarity (toluene and chloroform) can be understood by taking into account the ability of the polymers to rearrange in response to changes in their molecular surroundings. A recent unified thermodynamic approach was used for modeling the experimental data; it is able to describe the behavior of the different solutions by means of two adjustable parameters, one representing the effective number of solvent segments and the other accounting for the interactions between the components. Thus, it was investigated how the solvent quality changes with the composition of the copolymers through the Gibbs energy of mixing as a function of polymer concentration. The largest reduction of the Gibbs energy at a given composition of the system was observed for the best solvent. The present investigation proves that the new unified thermodynamic approach is a general concept applicable to homo- and copolymers, independent of the chain conformation or shape, the molecular and chemical architecture of the components, and other dissimilarities, such as electrical charges.

Keywords: random copolymers, Flory Huggins interaction parameter, Gibbs energy of mixing, chemical architecture

Procedia PDF Downloads 278
4138 Loss Minimization by Distributed Generation Allocation in Radial Distribution System Using Crow Search Algorithm

Authors: M. Nageswara Rao, V. S. N. K. Chaitanya, K. Amarendranath

Abstract:

This paper presents optimal allocation and sizing of Distributed Generation (DG) in a Radial Distribution Network (RDN) for total power loss minimization and enhancement of the voltage profile of the system. The two main parts of this study are, first, to find the optimal allocation and, second, the optimum size of the DG. The locations of the DGs are identified by analytical expressions, and the crow search algorithm (CSA) has been employed to determine the optimum size of the DG. In this study, the DG has been placed in single and multiple allocations. CSA is a meta-heuristic algorithm inspired by the intelligent behavior of crows. Crows store their excess food in different locations and memorize those locations to retrieve it when it is needed. They follow each other to commit thievery and obtain better food sources. This analysis is tested on the IEEE 33-bus and IEEE 69-bus systems in the MATLAB environment, and the results are compared with existing methods.
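
A minimal sketch of a Crow Search Algorithm minimiser following the usual awareness-probability/flight-length update rules, applied to a made-up loss surrogate for sizing two DG units; the parameter values and the toy objective are placeholders, not the IEEE 33/69-bus loss calculation.

```python
import numpy as np

def crow_search(objective, bounds, n_crows=20, iterations=200, fl=2.0, ap=0.1, seed=0):
    """Minimal Crow Search Algorithm minimiser (Askarzadeh-style update rules).
    bounds: list of (low, high) per decision variable, e.g. candidate DG sizes."""
    rng = np.random.default_rng(seed)
    low, high = np.array(bounds).T
    dim = len(bounds)
    pos = rng.uniform(low, high, size=(n_crows, dim))
    mem = pos.copy()                                     # each crow's best hiding place
    mem_fit = np.array([objective(x) for x in mem])
    for _ in range(iterations):
        for i in range(n_crows):
            j = rng.integers(n_crows)                    # crow i follows a random crow j
            if rng.random() >= ap:                       # j unaware: move toward j's memory
                new = pos[i] + rng.random() * fl * (mem[j] - pos[i])
            else:                                        # j aware: fly to a random position
                new = rng.uniform(low, high, size=dim)
            pos[i] = np.clip(new, low, high)
            f = objective(pos[i])
            if f < mem_fit[i]:
                mem[i], mem_fit[i] = pos[i].copy(), f
    best = mem_fit.argmin()
    return mem[best], mem_fit[best]

# Toy usage: size two DG units (MW) to minimise a made-up loss surrogate
loss = lambda s: (s[0] - 1.2) ** 2 + (s[1] - 0.8) ** 2 + 0.05 * (s[0] + s[1])
best_size, best_loss = crow_search(loss, bounds=[(0.0, 3.0), (0.0, 3.0)])
```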

Keywords: analytical expression, distributed generation, crow search algorithm, power loss, voltage profile

Procedia PDF Downloads 232
4137 Effect of Variation of Injection Timing on Performance and Emission Characteristics of Compression Ignition Engine: A CFD Approach

Authors: N. Balamurugan, N. V. Mahalakshmi

Abstract:

Compression ignition (CI) engines are known for their high thermal efficiency in comparison with spark-ignited (SI) engines. This makes CI engines a potential candidate for the future prime source of power for the transportation sector, to reduce greenhouse gas emissions and to shrink the carbon footprint. However, CI engines produce high levels of NOx and soot emissions. Conventional methods to reduce NOx and soot emissions often result in the infamous NOx-soot trade-off. The injection parameters are among the most important factors in the working of CI engines. The engine performance, power output, economy, etc., are greatly dependent on the effectiveness of the injection parameters. The injection parameters have a direct impact on the combustion process and pollutant formation. The injection parameter values are required to be optimised according to the application of the engine. Control of the fuel injection mode is one achievable method for the reduction of NOx and soot emissions. This study aims to assess, compare, and analyse the influence of injection characteristics, namely SOI timing, on combustion and emissions in in-cylinder combustion processes of a conventional DI diesel engine system, using the commercial Computational Fluid Dynamics (CFD) package STAR-CD ES-ICE.

Keywords: variation of injection timing, compression ignition engine, spark-ignited, Computational Fluid Dynamics

Procedia PDF Downloads 290
4136 Compression Index Estimation by Water Content and Liquid Limit and Void Ratio Using Statistics Method

Authors: Lizhou Chen, Abdelhamid Belgaid, Assem Elsayed, Xiaoming Yang

Abstract:

Compression index is essential in foundation settlement calculations. The traditional method for determining the compression index is the consolidation test, which is expensive and time-consuming. Many researchers have used regression methods to develop empirical equations for predicting the compression index from soil properties. Based on a large number of compression index data collected from consolidation tests, the accuracy of some popular empirical equations was assessed. It was found that the primary compression index is significantly overestimated by some equations while it is underestimated by others. Sensitivity analyses of soil parameters, including water content, liquid limit, and void ratio, were performed. The results indicate that the compression index obtained from the void ratio is the most accurate. The ANOVA (analysis of variance) demonstrates that the equations with multiple soil parameters cannot provide better predictions than the equations with a single soil parameter. In other words, it is not necessary to develop relationships between the compression index and multiple soil parameters. Meanwhile, it was noted that the secondary compression index is approximately 0.7-5.0% of the primary compression index, with an average of 2.0%. In the end, prediction equations developed using the power regression technique were provided, which give more accurate predictions than the existing equations.
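
A minimal sketch of the power regression idea, fitting Cc = a·e0^b by ordinary least squares on log-transformed data; the void ratio/compression index pairs and the resulting coefficients are invented placeholders, not the paper's consolidation-test database.

```python
import numpy as np

# Hypothetical (void ratio e0, compression index Cc) pairs standing in for the
# consolidation-test database; the fitted coefficients below are not the paper's.
e0 = np.array([0.65, 0.80, 0.95, 1.10, 1.30, 1.55, 1.80, 2.10])
cc = np.array([0.14, 0.19, 0.24, 0.30, 0.37, 0.47, 0.56, 0.69])

# Power model Cc = a * e0**b  ->  ln Cc = ln a + b * ln e0 (ordinary least squares)
b, ln_a = np.polyfit(np.log(e0), np.log(cc), 1)
a = np.exp(ln_a)
pred = a * e0 ** b
r2 = 1 - np.sum((cc - pred) ** 2) / np.sum((cc - cc.mean()) ** 2)
print(f"Cc = {a:.3f} * e0^{b:.3f},  R^2 = {r2:.3f}")
```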

Keywords: compression index, clay, settlement, consolidation, secondary compression index, soil parameter

Procedia PDF Downloads 158
4135 Parameter Measurement Systems to Evaluate Performance of Archers

Authors: Muhammad Zikril Hakim Md. Azizi, Norhafizan Ahmad, Raja Ariffin Raja Ghazilla

Abstract:

Postural stability, the attention level of the archer, and particularly the vibrations of the bow itself play a prominent role in determining the athlete's performance. Many techniques and systems have been developed to monitor the parameters of archers during training. In Malaysia, archery coaches tend to use non-scientific ways that they are familiar with to evaluate archer performance. An approach that provides more affordable yet accurate systems to the masses and a relatively easy system deployment procedure needs to be proposed. Hence, this project addresses these needs. Sensors were included to monitor three areas of archer parameters. The attention level can be measured using an EEG sensor, the centre of mass linked to postural stability can be measured by a foot pressure sensor, and the bow vibrations in three axes are relayed by wireless vibration sensors placed directly on the bow. An Arduino-based microcontroller is used to relay all the data back to the interfacing systems. The interface systems use the Python language and a C++ framework for the user interface and hardware interfacing. All sensor data can be observed in real time using the in-house applications, and each session can be saved to common files so that the coach and the team can have further discussions and comparisons.

Keywords: archery, graphical user interface, microcontroller, wireless sensor, monitoring system

Procedia PDF Downloads 298
4134 Comparison of Two Theories for the Critical Laser Radius in Thermal Quantum Plasma

Authors: Somaye Zare

Abstract:

The critical beam radius is a significant factor that predicts the behavior of a laser beam in plasma: if the laser beam radius is sufficiently larger than it, the beam will experience stable focusing in the plasma; otherwise, the beam will diverge after entering the plasma. In this work, considering the paraxial approximation and moment theories, the localization of a relativistic laser beam in a thermal quantum plasma is investigated. Using the dielectric function obtained from the quantum hydrodynamic model, the mathematical equation for the laser beam width parameter is obtained and solved numerically by the fourth-order Runge-Kutta method. The results demonstrate that a stronger focusing effect occurs in the moment theory compared to the paraxial approximation. Moreover, in both theories, with increasing Fermi temperature, plasma density, and laser intensity, the oscillation rate of the beam width parameter grows and the focusing length decreases, which means an improved focusing effect. Furthermore, it is found that the behavior of the critical laser radius differs between the two theories: in the paraxial approximation, the critical radius, after reaching a minimum value, increases with increasing laser intensity, whereas in the moment theory, the critical radius decreases with increasing laser intensity until it becomes independent of the laser intensity.

Keywords: laser localization, quantum plasma, paraxial approximation, moment theory, quantum hydrodynamic model

Procedia PDF Downloads 68
4133 On a Transient Magnetohydrodynamics Heat Transfer Within Radiative Porous Channel Due to Convective Boundary Condition

Authors: Bashiru Abdullahi, Isah Bala Yabo, Ibrahim Yakubu Seini

Abstract:

In this paper, the steady/transient MHD heat transfer within a radiative porous channel due to convective boundary conditions is considered. The steady-state solution and the transient solution were obtained by the perturbation and finite difference methods, respectively. The heat transfer analysis of the present work ascertains the influence of the Biot number (Bi1), magnetizing parameter (M), radiation parameter (R), temperature difference, suction/injection (S), Grashof number (Gr), and time (t) on the velocity (u), temperature (θ), skin friction (τ), and Nusselt number (Nu). The results established were discussed with the help of line graphs. It was found that the velocity, temperature, and skin friction decay with increasing suction/injection and magnetizing parameters, while the Nusselt number increases with suction/injection at y = 0 and falls at y = 1. The steady-state solution was in perfect agreement with the transient version for large values of time t. It is interesting to report that the Biot number has a strong influence: as its values increase, the results of the present work tend toward those reported in the extended literature.
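
A minimal sketch of an explicit finite-difference march for a transient temperature field with a radiation-type sink and a convective (Biot-number) wall condition, in the spirit of the transient solution described above; the governing equation, parameter values, and boundary treatment are simplified assumptions, not the paper's coupled MHD formulation.

```python
import numpy as np

# Explicit finite-difference sketch of a transient temperature field theta(y, t) in a
# channel 0 <= y <= 1 with a radiation-type sink and a convective (Biot-number) wall
# condition at y = 0. Parameter values are placeholders, not the paper's cases.
ny, dy, dt = 41, 1.0 / 40, 1e-4
R, Bi1 = 0.5, 2.0                      # radiation parameter and Biot number at y = 0
theta = np.zeros(ny)                   # initial temperature field

def step(theta):
    new = theta.copy()
    # interior nodes: d(theta)/dt = d2(theta)/dy2 - R*theta  (diffusion + radiative sink)
    new[1:-1] = theta[1:-1] + dt * (
        (theta[2:] - 2 * theta[1:-1] + theta[:-2]) / dy**2 - R * theta[1:-1]
    )
    # convective boundary at y = 0:  -d(theta)/dy = Bi1 * (1 - theta)  (one-sided difference)
    new[0] = (new[1] + Bi1 * dy) / (1 + Bi1 * dy)
    new[-1] = 0.0                       # ambient wall at y = 1
    return new

for _ in range(20000):                  # march toward a near steady state
    theta = step(theta)
nusselt_y0 = -(theta[1] - theta[0]) / dy
print("wall temperature at y=0:", round(theta[0], 4), " Nu(0):", round(nusselt_y0, 4))
```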

Keywords: heat transfer, thermal radiation, porous channel, MHD, transient, convective boundary condition

Procedia PDF Downloads 118