Search results for: metaheuristic optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3284

2174 Flood Planning Based on Risk Optimization: A Case Study in Phan-Calo River Basin in Vinh Phuc Province, Vietnam

Authors: Nguyen Quang Kim, Nguyen Thu Hien, Nguyen Thien Dung

Abstract:

Flood disasters are increasing worldwide in both frequency and magnitude. Every year in Vietnam, floods cause great damage to people and property, as well as environmental degradation. The flood risk management policy in Vietnam is currently being updated, and the planning of flood mitigation strategies is being reviewed to decide how to reach sustainable flood risk reduction. This paper discusses a basic approach in which flood protection measures are chosen by minimizing the present value of the total expected monetary expenses, i.e., the residual risk plus the costs of the flood control measures. The approach is proposed and demonstrated in a case study of flood risk management in Vinh Phuc province of Vietnam. The research also proposes a framework for finding the optimal protection level and the optimal set of flood control measures. It provides an explicit economic basis for flood risk management plans and for the interactive effects of flood damage reduction options. The results of the case study are presented and discussed; they provide a sequence of actions that helps decision makers choose among flood risk reduction investment options.
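
A minimal sketch of the underlying economic trade-off described above, with entirely hypothetical protection levels, costs, and expected annual damages: each candidate option is scored by its investment cost plus the discounted present value of its residual expected annual damage, and the cheapest total is selected.

```python
# Illustrative only: protection levels, costs, and expected annual damages are hypothetical.
levels = {
    "no upgrade":      {"cost": 0.0,  "expected_annual_damage": 12.0},   # million USD
    "10-year design":  {"cost": 15.0, "expected_annual_damage": 6.0},
    "50-year design":  {"cost": 40.0, "expected_annual_damage": 1.5},
    "100-year design": {"cost": 70.0, "expected_annual_damage": 0.6},
}
rate, horizon = 0.06, 30  # assumed discount rate and planning horizon (years)

# Present-value factor for a constant annual damage stream.
pv_factor = sum(1.0 / (1.0 + rate) ** t for t in range(1, horizon + 1))

def total_pv(option):
    return option["cost"] + pv_factor * option["expected_annual_damage"]

best = min(levels, key=lambda name: total_pv(levels[name]))
for name, opt in levels.items():
    print(f"{name:15s} total PV cost = {total_pv(opt):7.1f} M USD")
print("optimal protection level:", best)
```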

Keywords: drainage plan, flood planning, flood risk, residual risk, risk optimization

Procedia PDF Downloads 246
2173 Optimization of Surface Roughness in Additive Manufacturing Processes via Taguchi Methodology

Authors: Anjian Chen, Joseph C. Chen

Abstract:

This paper studies a case in which the targeted surface roughness of a fused deposition modeling (FDM) additive manufacturing process is improved. The process is designed to reduce or eliminate defects and to improve the process capability indices Cp and Cpk of the FDM process. The baseline Cp is 0.274 and the baseline Cpk is 0.654. This research utilizes the Taguchi methodology to eliminate defects and improve the process. The Taguchi method is used to optimize the printing parameters that affect the targeted surface roughness of FDM additive manufacturing. A Taguchi L9 orthogonal array is used to organize the effects of the parameters (four controllable parameters and one non-controllable parameter) on the FDM process. The four controllable parameters are nozzle temperature [°C], layer thickness [mm], nozzle speed [mm/s], and extruder speed [%]. The non-controllable parameter is the environmental temperature [°C]. After optimization of the parameters, a confirmation print was produced to verify that the results reduce the number of defects and improve the process capability index Cp from 0.274 to 1.605 and Cpk from 0.654 to 1.233 for the FDM additive manufacturing process. The final results confirm that the Taguchi methodology is effective in improving the surface roughness of the FDM additive manufacturing process.
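
To make the capability indices concrete, the sketch below computes Cp and Cpk for a set of surface-roughness readings against specification limits, plus a Taguchi smaller-the-better signal-to-noise ratio; the readings and limits are invented for illustration and are not the paper's data.

```python
import numpy as np

# Hypothetical surface-roughness readings (micrometres) and assumed spec limits.
readings = np.array([9.8, 10.4, 10.1, 9.6, 10.7, 10.2, 9.9, 10.5, 10.0, 10.3])
lsl, usl = 8.0, 12.0   # assumed lower/upper specification limits

mu = readings.mean()
sigma = readings.std(ddof=1)        # sample standard deviation

cp = (usl - lsl) / (6.0 * sigma)                       # potential capability
cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)          # capability accounting for centring

# Taguchi smaller-the-better signal-to-noise ratio for the same readings.
sn = -10.0 * np.log10(np.mean(readings ** 2))

print(f"Cp  = {cp:.3f}")
print(f"Cpk = {cpk:.3f}")
print(f"S/N (smaller-the-better) = {sn:.2f} dB")
```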

Keywords: additive manufacturing, fused deposition modeling, surface roughness, six-sigma, Taguchi method, 3D printing

Procedia PDF Downloads 395
2172 Removal of Chromium (VI) from Aqueous Solution by Teff (Eragrostis Teff) Husk Activated Carbon: Optimization, Kinetics, Isotherm, and Practical Adaptation Study Using Response Surface Methodology

Authors: Tsegaye Adane Birhan

Abstract:

Recently, rapid industrialization has led to the excessive release of heavy metals such as Cr (VI) into the environment. Exposure to chromium (VI) can cause kidney and liver damage, depressed immune systems, and a variety of cancers. Therefore, treatment of Cr (VI)-containing wastewater is mandatory. This study aims to optimize the removal of Cr (VI) from aqueous solution using a locally available Teff husk-activated carbon adsorbent. A laboratory-based study was conducted on the optimization of the Cr (VI) removal efficiency of Teff husk-activated carbon from aqueous solution. A central composite design was used to examine the effect of the interaction of the process parameters and to optimize the process using Design Expert version 7.0 software. The optimal removal efficiency of Teff husk-activated carbon (95.597%) was achieved at pH 1.92, an initial concentration of 87.83 mg/L, an adsorbent dose of 20.22 g/L, and a contact time of 2.07 h. The adsorption of Cr (VI) on Teff husk-activated carbon was best described by pseudo-second-order kinetics and the Langmuir isotherm model. Teff husk-activated carbon can therefore be used as an efficient adsorbent for the removal of chromium (VI) from contaminated water. Column adsorption needs to be studied in the future.
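
As an illustration of the isotherm-fitting step, the sketch below fits the Langmuir model qe = qmax·KL·Ce/(1 + KL·Ce) to hypothetical equilibrium data with SciPy; the concentrations and uptakes are invented, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium data: residual Cr(VI) concentration Ce (mg/L) and uptake qe (mg/g).
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([1.9, 3.6, 5.1, 6.4, 7.2, 7.7])

def langmuir(C, qmax, KL):
    return qmax * KL * C / (1.0 + KL * C)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[8.0, 0.1])

# Goodness of fit (coefficient of determination).
ss_res = np.sum((qe - langmuir(Ce, qmax, KL)) ** 2)
ss_tot = np.sum((qe - qe.mean()) ** 2)
print(f"qmax = {qmax:.2f} mg/g, KL = {KL:.3f} L/mg, R^2 = {1 - ss_res / ss_tot:.3f}")
```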

Keywords: batch adsorption, chromium (VI), teff husk activated carbon, response surface methodology, tannery wastewater

Procedia PDF Downloads 19
2171 Optrix: Energy Aware Cross Layer Routing Using Convex Optimization in Wireless Sensor Networks

Authors: Ali Shareef, Aliha Shareef, Yifeng Zhu

Abstract:

Energy minimization is of great importance in wireless sensor networks (WSNs) for extending battery lifetime. One of the key activities of nodes in a WSN is communication and the routing of their data to a centralized base station or sink. Routing along the shortest path to the sink is not the best solution, since it causes nodes along this path to fail prematurely. We propose Optrix, a cross-layer energy-efficient routing protocol that utilizes a convex formulation to maximize the lifetime of the network as a whole. We further propose Optrix-BW, a novel convex formulation with a bandwidth constraint that allows channel conditions to be accounted for in routing. By considering this key channel parameter, we demonstrate that Optrix-BW is capable of congestion control. Optrix is implemented in TinyOS, and we demonstrate that a relatively large topology of 40 nodes can converge to within 91% of the optimal routing solution. We describe the pitfalls and issues of applying a continuous optimization technique such as convex optimization to discrete, packet-based communication systems such as WSNs, and we propose a routing controller mechanism that allows for this transformation. We compare Optrix against the Collection Tree Protocol (CTP) and find that Optrix performs better than CTP in terms of convergence to an optimal routing solution, load balancing, and network lifetime maximization.
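
A minimal sketch of the kind of convex lifetime-maximization formulation described here, written with CVXPY on a toy four-node topology: routing flows must satisfy flow conservation, each node's radio energy use is bounded by its battery divided by the lifetime, and the inverse lifetime is minimized. The topology, radio costs, and data rates are assumptions; the actual Optrix and Optrix-BW formulations are more elaborate.

```python
import cvxpy as cp
import numpy as np

# Hypothetical 4-node topology (nodes 0..3) plus sink "s"; costs and rates are illustrative.
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 3), (0, 2), (2, 3), (3, 's'), (2, 's')]
e_tx, e_rx = 50e-9, 30e-9                       # assumed J/bit for transmit/receive
battery = np.array([1.0, 1.0, 1.0, 1.0])        # J, initial energy per node
gen = np.array([100.0, 100.0, 100.0, 100.0])    # bit/s generated at each node

f = {e: cp.Variable(nonneg=True) for e in edges}   # flow on each edge (bit/s)
q = cp.Variable(nonneg=True)                       # inverse network lifetime (1/s)

cons = []
for i in nodes:
    out_f = sum(f[e] for e in edges if e[0] == i)
    in_f = sum(f[e] for e in edges if e[1] == i)
    cons.append(out_f - in_f == gen[i])                         # flow conservation
    cons.append(e_tx * out_f + e_rx * in_f <= q * battery[i])   # power <= energy / lifetime

prob = cp.Problem(cp.Minimize(q), cons)
prob.solve()
print("max network lifetime (s):", 1.0 / q.value)
```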

Keywords: wireless sensor network, energy efficient routing

Procedia PDF Downloads 393
2170 Autonomic Sonar Sensor Fault Manager for Mobile Robots

Authors: Martin Doran, Roy Sterritt, George Wilkie

Abstract:

NASA, ESA, and NSSC space agencies have plans to put planetary rovers on Mars in 2020. For these future planetary rovers to succeed, they will depend heavily on sensors to detect obstacles. This will become even more important if rovers become less dependent on commands received from Earth-based control and more dependent on self-configuration and autonomous decision making. These planetary rovers will face harsh environments, and the possibility of hardware failure is high, as seen in past missions. In this paper, we focus on using autonomic principles in which self-healing, self-optimization, and self-adaptation are explored using the MAPE-K model, expanding this model to encapsulate the attributes of Awareness, Analysis, and Adjustment (AAA-3). In the experimentation, a Pioneer P3-DX research robot is used to simulate a planetary rover. The sonar sensors on the P3-DX robot are used to simulate the sensors on a planetary rover (even though, in reality, sonar sensors cannot operate in a vacuum). The experiments focus on how our software system can adapt to the loss of sonar sensor functionality. The autonomic manager system is responsible for deciding how to make use of the remaining 'enabled' sonar sensors to compensate for those that are 'disabled'. The key outcome of this research is that the robot can still detect objects even with reduced sonar sensor capability.

Keywords: autonomic, self-adaption, self-healing, self-optimization

Procedia PDF Downloads 351
2169 An Evaluation of Solubility of Wax and Asphaltene in Crude Oil for Improved Flow Properties Using a Copolymer Solubilized in Organic Solvent with an Aromatic Hydrocarbon

Authors: S. M. Anisuzzaman, Sariah Abang, Awang Bono, D. Krishnaiah, N. M. Ismail, G. B. Sandrison

Abstract:

Wax and asphaltene are high-molecular-weight compounds that contribute to the stability of crude oil in a dispersed state. Transportation of crude oil along pipelines from the oil rig to the refineries causes temperature fluctuations, which lead to the coagulation of wax and the flocculation of asphaltenes. This paper focuses on preventing the deposition of wax and asphaltene precipitates on the inner surface of pipelines by using a wax inhibitor and an asphaltene dispersant. The novelty of this prevention method is the combination of three substances: a wax inhibitor dissolved in a wax inhibitor solvent and an asphaltene solvent, namely ethylene-vinyl acetate (EVA) copolymer dissolved in methylcyclohexane (MCH) and toluene (TOL), to inhibit the precipitation and deposition of wax and asphaltene. The objective of this paper was to optimize the percentage composition of each component of this inhibitor so as to maximize the viscosity reduction of the crude oil. The optimization was divided into two stages: a laboratory experimental stage, in which the viscosity of crude oil samples containing inhibitors of different component compositions was tested at decreasing temperatures, and a data optimization stage using response surface methodology (RSM) to build an optimization model. The experimental results showed that the combination of 50% EVA + 25% MCH + 25% TOL gave a maximum viscosity reduction of 67%, while the RSM model indicated that the combination of 57% EVA + 20.5% MCH + 22.5% TOL gave a maximum viscosity reduction of up to 61%.

Keywords: asphaltene, ethylene-vinyl acetate, methylcyclohexane, toluene, wax

Procedia PDF Downloads 417
2168 Wind Power Forecasting Using Echo State Networks Optimized by Big Bang-Big Crunch Algorithm

Authors: Amir Hossein Hejazi, Nima Amjady

Abstract:

In recent years, due to environmental concerns, traditional energy sources have increasingly been replaced by renewable ones. Wind energy, as the fastest-growing renewable energy, accounts for a considerable share of electricity markets. With this fast worldwide growth of wind energy, owners and operators of wind farms, transmission system operators, and energy traders need reliable and secure forecasts of wind energy production. In this paper, a new forecasting strategy is proposed for short-term wind power prediction based on Echo State Networks (ESN). The forecast engine utilizes a state-of-the-art training process, including a dynamical reservoir with a high capability to learn the complex dynamics of wind power and wind vector signals. The study is further extended by incorporating the prediction of wind direction into the forecast strategy. The Big Bang-Big Crunch (BB-BC) evolutionary optimization algorithm is adopted to adjust the free parameters of the ESN-based forecaster. The proposed method is tested on real-world hourly data to show the efficiency of the forecasting engine for the prediction of both the wind vector and the wind power output of aggregated wind power production.
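
A bare-bones echo state network in the spirit described here: a fixed random reservoir is driven by the input signal and only the linear readout is trained, by ridge regression. The signal below is a synthetic stand-in for an hourly wind-power series, and the reservoir size, spectral radius, leak rate, and regularization are illustrative choices rather than the paper's BB-BC-tuned values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a normalized hourly wind-power signal.
t = np.arange(2000)
u = 0.5 + 0.3 * np.sin(2 * np.pi * t / 24) + 0.05 * rng.standard_normal(t.size)
target = np.roll(u, -1)                  # one-step-ahead prediction target

n_res, rho, leak, ridge = 200, 0.9, 0.3, 1e-6
W_in = rng.uniform(-0.5, 0.5, size=n_res)
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= rho / max(abs(np.linalg.eigvals(W)))        # rescale to the desired spectral radius

# Collect reservoir states with a leaky-integrator update.
states = np.zeros((t.size, n_res))
x = np.zeros(n_res)
for k in range(t.size):
    x = (1 - leak) * x + leak * np.tanh(W_in * u[k] + W @ x)
    states[k] = x

# Ridge-regression readout, trained on the first 1500 samples after a warm-up.
warm, split = 100, 1500
S, y = states[warm:split], target[warm:split]
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)

pred = states[split:-1] @ W_out
rmse = np.sqrt(np.mean((pred - target[split:-1]) ** 2))
print(f"one-step-ahead RMSE on held-out data: {rmse:.4f}")
```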

Keywords: wind power forecasting, echo state network, big bang-big crunch, evolutionary optimization algorithm

Procedia PDF Downloads 573
2167 Acoustic Echo Cancellation Using Different Adaptive Algorithms

Authors: Hamid Sharif, Nazish Saleem Abbas, Muhammad Haris Jamil

Abstract:

An adaptive filter is a filter that self-adjusts its transfer function according to an optimization algorithm driven by an error signal. Because of the complexity of the optimization algorithms, most adaptive filters are digital filters. Adaptive filtering constitutes one of the core technologies in digital signal processing and finds numerous application areas in science as well as in industry, including adaptive noise cancellation and echo cancellation. Acoustic echo is a common occurrence in today's telecommunication systems; the signal interference it causes is distracting to both users and reduces the quality of the communication. In this paper, we review different adaptive filtering techniques for reducing this unwanted echo and examine the behavior of the Least Mean Square (LMS), Normalized Least Mean Square (NLMS), Variable Step-Size Least Mean Square (VSLMS), Variable Step-Size Normalized Least Mean Square (VSNLMS), New Varying Step Size LMS (NVSSLMS), and Recursive Least Square (RLS) algorithms for increasing communication quality.
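
To make the adaptive-filtering idea concrete, here is a minimal normalized LMS (NLMS) echo canceller on synthetic signals: the far-end signal passes through an assumed room impulse response to form the echo, and the adaptive filter learns that path so the echo can be subtracted from the microphone signal. The impulse response, step size, and filter length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N, L = 20000, 64
far_end = rng.standard_normal(N)                                 # far-end (loudspeaker) signal
room = rng.standard_normal(L) * np.exp(-np.arange(L) / 10.0)     # assumed echo path
echo = np.convolve(far_end, room)[:N]
mic = echo + 0.01 * rng.standard_normal(N)                       # microphone = echo + noise

w = np.zeros(L)                        # adaptive filter weights
mu, eps = 0.5, 1e-8                    # NLMS step size and regularizer
err = np.zeros(N)

for n in range(L, N):
    x = far_end[n - L + 1:n + 1][::-1]       # most recent L far-end samples
    y = w @ x                                # echo estimate
    e = mic[n] - y                           # error = echo-cancelled output
    w += mu * e * x / (x @ x + eps)          # NLMS weight update
    err[n] = e

# Echo return loss enhancement (dB) over the last quarter of the signal.
tail = slice(3 * N // 4, N)
erle = 10 * np.log10(np.mean(mic[tail] ** 2) / np.mean(err[tail] ** 2))
print(f"ERLE ~ {erle:.1f} dB")
```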

Keywords: adaptive acoustic, echo cancellation, LMS algorithm, adaptive filter, normalized least mean square (NLMS), variable step-size least mean square (VSLMS)

Procedia PDF Downloads 80
2166 Parametric Influence and Optimization of Wire-EDM on Oil Hardened Non-Shrinking Steel

Authors: Nixon Kuruvila, H. V. Ravindra

Abstract:

Wire-cut Electro Discharge Machining (WEDM) is a special form of the conventional EDM process in which the electrode is a continuously moving conductive wire. The present study aims at determining the parametric influence and the optimum process parameters of wire EDM using Taguchi's technique and a genetic algorithm. The variation of the performance parameters with the machining parameters was mathematically modeled by regression analysis. The objective functions are dimensional accuracy (DA) and material removal rate (MRR). Experiments were designed as per Taguchi's L16 orthogonal array (OA), wherein pulse-on duration, pulse-off duration, current, bed speed, and flushing rate were considered as the important input parameters. The matrix experiments were conducted on Oil Hardened Non-Shrinking Steel (OHNS) with a thickness of 40 mm. The results of the study reveal that, among the machining parameters, a lower pulse-off duration is preferable for achieving good overall performance. Regarding MRR, OHNS should be eroded with a medium pulse-off duration and a higher flushing rate. Finally, a validation exercise was performed with the optimum levels of the process parameters. The results confirm the efficiency of the approach employed for the optimization of process parameters in this study.

Keywords: dimensional accuracy (DA), regression analysis (RA), Taguchi method (TM), volumetric material removal rate (VMRR)

Procedia PDF Downloads 411
2165 Resource Leveling Optimization in Construction Projects of High Voltage Substations Using Nature-Inspired Intelligent Evolutionary Algorithms

Authors: Dimitrios Ntardas, Alexandros Tzanetos, Georgios Dounias

Abstract:

High Voltage Substations (HVS) are the intermediate step between the production of power and successfully transmitting it to clients, making them one of the most important checkpoints in power grids. Nowadays, as renewable resources and consequently distributed generation are growing fast, the construction of HVS is of high importance both in terms of quality and completion time, so that new energy producers can quickly and safely integrate into power grids. The resources needed, such as machines and workers, should be carefully allocated so that the construction of an HVS is completed on time, at the lowest possible cost (e.g., without incurring additional costs, not taken into consideration initially, because of project delays), and at the highest quality. In addition, there are milestones and several checkpoints to be precisely achieved during construction to ensure cost and timeline control and to ensure that the corresponding percentage of governmental funding will be granted. The management of such a demanding project is an NP-hard problem that consists of prerequisite constraints and resource limits for each task of the project. In this work, a hybrid meta-heuristic method is implemented to solve this problem. Meta-heuristics have proven to be quite useful when dealing with high-dimensional constrained optimization problems, and their hybridization boosts their performance.

Keywords: hybrid meta-heuristic methods, substation construction, resource allocation, time-cost efficiency

Procedia PDF Downloads 153
2164 Modeling and Minimizing the Effects of Ferroresonance for Medium Voltage Transformers

Authors: Mohammad Hossein Mohammadi Sanjani, Ashknaz Oraee, Arian Amirnia, Atena Taheri, Mohammadreza Arabi, Mahmud Fotuhi-Firuzabad

Abstract:

Ferroresonance causes overvoltages in medium voltage transformers and isolators used in electrical networks. Ferroresonance is a nonlinear effect that occurs between the network capacitance and the nonlinear inductance of the voltage transformer during saturation. This phenomenon is undesirable for transformers since it causes overheating, introduces high dynamic forces in the primary coils, and raises the voltage in the primary coils of the voltage transformer; furthermore, it can result in electrical and thermal failure of the transformer. The expansion of distribution lines, the design of transformers in smaller sizes, and the increase of harmonics in distribution networks all lead to an increase in ferroresonance. There is limited literature available on mitigating the effects of ferroresonance; therefore, reducing its effects on voltage transformers is of great importance. In this study, comprehensive modeling of a medium voltage block-type voltage transformer is performed. In addition, a new model is proposed to improve the performance of voltage transformers during the occurrence of ferroresonance by damping the oscillations. Transformer design optimization is also presented to show further improvements in the performance of the voltage transformer. The proposed model is experimentally tested and verified on a medium voltage transformer in the laboratory, and the simulation results show a large reduction in the effects of ferroresonance.

Keywords: optimization, voltage transformer, ferroresonance, modeling, damper

Procedia PDF Downloads 102
2163 Rotorcraft Performance and Environmental Impact Evaluation by Multidisciplinary Modelling

Authors: Pierre-Marie Basset, Gabriel Reboul, Binh DangVu, Sébastien Mercier

Abstract:

Rotorcraft provide invaluable services thanks to their Vertical Take-Off and Landing (VTOL), hover, and low-speed capabilities. Yet their use is still often limited by their cost and environmental impact, especially noise and energy consumption. One of the main obstacles to the expansion of the use of rotorcraft for urban missions is the environmental impact, and the first concern for the population is noise. In order to develop the transversal competency to assess the rotorcraft environmental footprint, a collaboration has been launched between six research departments within ONERA. The progress in terms of models and methods is capitalized in the numerical workshop C.R.E.A.T.I.O.N. ("Concepts of Rotorcraft Enhanced Assessment Through Integrated Optimization Network"). A typical mission for which the environmental impact issue is of great relevance has been defined. The first milestone is to perform the pre-sizing of a reference helicopter for this mission. In a second milestone, an alternative rotorcraft concept has been defined: a tandem rotorcraft with optional propulsion. The key design trends are given for the pre-sizing of this rotorcraft, aiming at a significant reduction of the global environmental impact while still giving flight performance and safety equivalent to the reference helicopter. The models and methods have been improved to capture, earlier and more globally, the relative variations in environmental impact when changing the rotorcraft architecture, the pre-design variables, and the operating parameters.

Keywords: environmental impact, flight performance, helicopter, multi objectives multidisciplinary optimization, rotorcraft

Procedia PDF Downloads 271
2162 Using Real Truck Tours Feedback for Address Geocoding Correction

Authors: Dalicia Bouallouche, Jean-Baptiste Vioix, Stéphane Millot, Eric Busvelle

Abstract:

When researchers or logistics software developers deal with vehicle routing optimization, they mainly focus on minimizing the total distance travelled or the total time spent on the tours by the trucks, and on maximizing the number of visited customers. They assume that the upstream data used to optimize a transporter's tours, such as the customers' real constraints, their addresses, and their GPS coordinates, is free from errors. However, in real transporter situations, upstream data is often of poor quality because of address geocoding errors and the irrelevance of the addresses received via EDI (Electronic Data Interchange). In fact, geocoders are not exempt from errors and can return incorrect GPS coordinates. Also, even with a good geocoder, an inaccurate address can lead to a bad geocoding; for instance, when the geocoder has trouble geocoding an address, it returns the coordinates of the city center. Another obvious geocoding issue is that the maps used by geocoders are not regularly updated, so new buildings may not exist on the maps until the next update. Trying to optimize tours with incorrect customer GPS coordinates, which are the most important and basic input data for solving a vehicle routing problem, is therefore not really useful and leads to bad, incoherent tour solutions, because the customer locations used for the optimization are very different from their real positions. Our work is supported by a logistics software editor, Tedies, and a transport company, Upsilon, and we work with Upsilon's truck route data to carry out our experiments. These trucks are equipped with TomTom GPS units that continuously record their tour data (positions, speeds, tachograph information, etc.), and we retrieve these data to extract the real truck routes to work with. The aim of this work is to use the driver's experience and the feedback from the real truck tours to validate the GPS coordinates of well-geocoded addresses and to correct the badly geocoded ones. Thereby, when a vehicle makes its tour, it might have trouble finding a given customer's address at most once; in other words, the vehicle would be wrong at most once for each customer's address. Our method significantly improves the quality of the geocoding: we are able to automatically correct an average of 70% of the GPS coordinates of a tour's addresses. The remaining GPS coordinates are corrected manually, with the system giving the user indications to help correct them. This study shows the importance of taking the trucks' feedback into account to gradually correct address geocoding errors. Indeed, the accuracy of a customer's address and its GPS coordinates plays a major role in tour optimization, and address writing errors are very frequent. This feedback is naturally and usually exploited by transporters (by asking drivers, calling customers, etc.) to learn about their tours and bring corrections to upcoming tours; here we develop a method to do a large part of that automatically.
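
A simplified sketch of the correction idea: for each customer, the geocoded coordinates are compared with the truck's recorded stop position for that visit, and when the two disagree by more than a threshold, the observed stop position replaces the geocode. The haversine distance, the threshold, and the sample records are illustrative assumptions, not the authors' actual system.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

THRESHOLD_M = 150  # assumed tolerance between geocode and observed delivery stop

# Hypothetical records: geocoded position vs. GPS position where the truck actually stopped.
customers = [
    {"id": "C1", "geocoded": (48.858, 2.294), "truck_stop": (48.8582, 2.2941)},
    {"id": "C2", "geocoded": (48.850, 2.300), "truck_stop": (48.8710, 2.3420)},
]

for c in customers:
    d = haversine_m(*c["geocoded"], *c["truck_stop"])
    if d <= THRESHOLD_M:
        print(f"{c['id']}: geocode confirmed ({d:.0f} m from observed stop)")
    else:
        c["geocoded"] = c["truck_stop"]   # trust the tour feedback over the geocoder
        print(f"{c['id']}: geocode replaced by truck stop position ({d:.0f} m apart)")
```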

Keywords: driver experience feedback, geocoding correction, real truck tours

Procedia PDF Downloads 675
2161 Finite Element Analysis of Connecting Rod

Authors: Mohammed Mohsin Ali H., Mohamed Haneef

Abstract:

The connecting rod transmits the piston load to the crank, causing the latter to turn and thus converting the reciprocating motion of the piston into the rotary motion of the crankshaft. Connecting rods are subjected to forces generated by mass and fuel combustion. This study investigates and compares the fatigue behavior of forged steel, powder-forged, and ASTM A514 cold-quenched steel connecting rods. The objective is to suggest a new material with reduced weight and cost and increased fatigue life. This has entailed performing a detailed load analysis. Therefore, this study has dealt with two subjects: first, dynamic load and stress analysis of the connecting rod, and second, optimization for material, weight, and cost. In the first part of the study, the loads acting on the connecting rod as a function of time were obtained. Based on the observations of the dynamic FEA, the static FEA, and the load analysis results, the load for the optimization study was selected. It is the conclusion of this study that the connecting rod can be designed and optimized under a load range comprising a tensile load and a compressive load. The tensile load corresponds to a 360° crank angle at the maximum engine speed, and the compressive load corresponds to the peak gas pressure. Furthermore, the existing connecting rod can be replaced with a new connecting rod made of ASTM A514 cold-quenched steel that is 12% lighter and 28% cheaper.

Keywords: connecting rod, ASTM A514 cold quenched material, static analysis, fatigue analysis, stress life approach

Procedia PDF Downloads 300
2160 Optimization of Culture Conditions of Paecilomyces tenuipes, Entomopathogenic Fungi Inoculated into the Silkworm Larva, Bombyx mori

Authors: Sunghee Nam

Abstract:

The entomopathogenic fungus P. tenuipes is a Cordyceps-type species isolated from dead silkworms and cicadas. Fungi on cicadas were described in old Chinese medicinal books, and since ancient times vegetable wasps and plant worms have been widely known to contain active substances and have been studied for pharmacological use. Among the many fungi belonging to the genus Cordyceps, Cordyceps sinensis has been demonstrated to yield natural products possessing various biological activities and many bioactive components. It is commonly used to replenish the kidney and soothe the lung, and for the treatment of fatigue. Due to its commercial and economic importance, the demand for Cordyceps has increased rapidly. However, the supply of Cordyceps specimens could not meet the increasing demand because of the sole dependence on field collection and because of habitat destruction; moreover, it is difficult to obtain many insect hosts in nature, and the edibility of the host insect needs to be verified from a pharmacological standpoint. Recently, this setback was overcome when P. tenuipes was cultivated on a large scale using the silkworm as host. Pharmacological effects of P. tenuipes cultured on silkworm, such as strengthening immune function, anti-fatigue and anti-tumor activity, and liver protection, have been demonstrated, and the product is widely commercialized. In this study, we attempted to establish a method for the stable growth of P. tenuipes on silkworm hosts and the optimal conditions for synnemata formation. To determine the optimum culturing conditions, the temperature and light conditions were varied. The length and number of synnemata were highest at 25 °C and 100-300 lux illumination. On average, the synnemata of wild P. tenuipes measure 70 mm in length and number about 20; those of the cultured strain were relatively shorter and more numerous. The number of synnemata may have increased as a result of inoculating the host with highly concentrated conidia, while the length may have decreased due to the limited nutrition per individual. It is notable that changes in light illumination cause morphological variations in the synnemata. However, regulating only light and temperature could not produce stromatal structures such as perithecia, asci, and ascospores.

Keywords: optimization of culture conditions, Paecilomyces tenuipes, entomopathogenic fungi, silkworm larva, Bombyx mori

Procedia PDF Downloads 253
2159 Detecting Geographically Dispersed Overlay Communities Using Community Networks

Authors: Madhushi Bandara, Dharshana Kasthurirathna, Danaja Maldeniya, Mahendra Piraveenan

Abstract:

Community detection is an extremely useful technique for understanding the structure and function of a social network. The Louvain algorithm, which is based on the Newman-Girvan modularity optimization technique, is extensively used as a computationally efficient method to extract the communities in social networks. It has been suggested that nodes in close geographical proximity have a higher tendency to form communities. Variants of the Newman-Girvan modularity measure, such as dist-modularity, try to normalize out the effect of geographical proximity in order to extract geographically dispersed communities, at the expense of losing information about the geographically proximate communities. In this work, we propose a method to extract geographically dispersed communities while preserving the information about the geographically proximate communities, by analyzing the 'community network', in which the centroids of the communities are considered as network nodes. We suggest that the inter-community link strengths, normalized over the community sizes, may be used to identify and extract the 'overlay communities'. The overlay communities have relatively higher link strengths despite being relatively far apart in their spatial distribution. We apply this method to the Gowalla online social network, which contains the geographical signatures of its users, and identify the overlay communities within it.
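
A compact sketch of this pipeline on a synthetic geometric graph: Louvain communities are extracted with NetworkX, each community is collapsed to its geographic centroid, inter-community link counts are normalized by the product of the community sizes, and community pairs combining high normalized strength with large centroid separation are the overlay candidates. The graph, normalization, and ranking heuristic are illustrative assumptions, not the Gowalla analysis itself.

```python
import itertools
import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Synthetic stand-in for a geo-tagged social network (nodes carry a 'pos' attribute).
G = nx.random_geometric_graph(300, 0.12, seed=42)
communities = louvain_communities(G, seed=42)

pos = nx.get_node_attributes(G, "pos")
centroids = [np.mean([pos[n] for n in com], axis=0) for com in communities]
sizes = [len(com) for com in communities]

# Community network: inter-community link strength normalized by community sizes.
pairs = []
for i, j in itertools.combinations(range(len(communities)), 2):
    links = sum(1 for u, v in G.edges()
                if (u in communities[i] and v in communities[j])
                or (u in communities[j] and v in communities[i]))
    if links == 0:
        continue
    strength = links / (sizes[i] * sizes[j])
    dist = np.linalg.norm(centroids[i] - centroids[j])
    pairs.append((strength * dist, i, j, strength, dist))

# Pairs that are strongly linked yet geographically far apart are overlay candidates.
for score, i, j, strength, dist in sorted(pairs, reverse=True)[:5]:
    print(f"communities {i}-{j}: normalized strength={strength:.4f}, centroid distance={dist:.2f}")
```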

Keywords: social networks, community detection, modularity optimization, geographically dispersed communities

Procedia PDF Downloads 236
2158 Bayesian Analysis of Topp-Leone Generalized Exponential Distribution

Authors: Najrullah Khan, Athar Ali Khan

Abstract:

The Topp-Leone distribution was introduced by Topp and Leone in 1955. In this paper, an attempt has been made to fit the Topp-Leone generalized exponential (TPGE) distribution. A real survival data set is used for illustration. Implementation is done using R and JAGS, and appropriate illustrations are made. R and JAGS code has been provided to implement the censoring mechanism using both optimization and simulation tools. The main aim of this paper is to describe and illustrate the Bayesian modelling approach to the analysis of survival data, with emphasis on the modelling of the data and the interpretation of the results. Crucial to this is an understanding of the nature of the incomplete or 'censored' data encountered. Analytic approximation and simulation tools are covered here, but most of the emphasis is on Markov chain Monte Carlo methods, including the independent Metropolis algorithm, which is currently the most popular technique. For analytic approximation, among the various optimization algorithms, the trust region method is found to be the best. In this paper, the TPGE model is also used to analyze lifetime data in the Bayesian paradigm, and the results are evaluated on the above-mentioned real survival data set. The analytic approximation and simulation methods are implemented using several software packages. It is clear from our findings that the simulation tools provide better results than those obtained by asymptotic approximation.
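
The paper's implementation is in R and JAGS; as a language-neutral illustration of the simulation side, the sketch below runs a random-walk Metropolis sampler (a close relative of the independent Metropolis algorithm emphasized here) for a deliberately simpler model: exponential survival times with right-censoring and a gamma prior on the rate. The data, prior, and proposal scale are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical right-censored survival data: times and event indicators (1 = event observed).
times = np.array([2.1, 3.5, 0.8, 6.2, 4.4, 1.7, 5.0, 2.9])
event = np.array([1,   1,   1,   0,   1,   0,   1,   1])
a0, b0 = 1.0, 1.0                       # gamma(a0, b0) prior on the rate

def log_post(log_lam):
    lam = np.exp(log_lam)
    loglik = np.sum(event * np.log(lam) - lam * times)     # censored exponential likelihood
    logprior = (a0 - 1) * log_lam - b0 * lam               # gamma prior, up to a constant
    return loglik + logprior + log_lam                     # + log|Jacobian| of the log transform

cur = np.log(0.5)
cur_lp = log_post(cur)
draws = []
for _ in range(20000):
    prop = cur + 0.4 * rng.standard_normal()               # random-walk proposal on log(rate)
    prop_lp = log_post(prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:
        cur, cur_lp = prop, prop_lp
    draws.append(np.exp(cur))

post = np.array(draws[5000:])                              # discard burn-in
print(f"posterior mean rate = {post.mean():.3f}, 95% interval = "
      f"({np.quantile(post, 0.025):.3f}, {np.quantile(post, 0.975):.3f})")
```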

Keywords: Bayesian Inference, JAGS, Laplace Approximation, LaplacesDemon, posterior, R Software, simulation

Procedia PDF Downloads 535
2157 Optimization of Two Quality Characteristics in Injection Molding Processes via Taguchi Methodology

Authors: Joseph C. Chen, Venkata Karthik Jakka

Abstract:

The main objective of this research is to optimize tensile strength and dimensional accuracy in injection molding processes using Taguchi parameter design. An L16 orthogonal array (OA) is used in the Taguchi experimental design, with five control factors at four levels each and with vibration as a non-controllable factor. A total of 32 experiments were designed to obtain the optimal parameter settings for the process. The optimal parameters identified for shrinkage are a shot volume of 1.7 cubic inches (A4), a mold temperature of 130 °F (B1), a hold pressure of 3200 psi (C4), an injection speed of 0.61 cubic inches/sec (D2), and a hold time of 14 seconds (E2). The optimal parameters identified for tensile strength are a shot volume of 1.7 cubic inches (A4), a mold temperature of 160 °F (B4), a hold pressure of 3100 psi (C3), an injection speed of 0.69 cubic inches/sec (D4), and a hold time of 14 seconds (E2). The Taguchi-based optimization framework was systematically and successfully implemented to obtain an adjusted optimal setting in this research. The mean shrinkage of the confirmation runs is 0.0031%, and the tensile strength was found to be 3148.1 psi. Both outcomes are far better than the baseline, and defects have been further reduced in the injection molding process.

Keywords: injection molding processes, Taguchi parameter design, tensile strength, high-density polyethylene (HDPE)

Procedia PDF Downloads 197
2156 Condition Optimization for Trypsin and Chymotrypsin Activities in Economic Animals

Authors: Mallika Supa-Aksorn, Buaream Maneewan, Jiraporn Rojtinnakorn

Abstract:

In animals, trypsin and chymotrypsin are the two proteases that play an important role in protein digestion and are involved in growth. In many animals, these two enzymes are used as feed-related growth indicators. Assaying an enzyme at its optimal conditions is essential for accurate determination of its activity, yet there are few reports of such conditions for trypsin and chymotrypsin. Therefore, in this study, the optimal pH and temperature for trypsin (T) and chymotrypsin (C) were investigated in economically important species, i.e., Nile tilapia (Oreochromis niloticus), sand goby (Oxyeleotoris marmoratus), giant freshwater prawn (Macrobachium rosenberchii), and native chicken (Gallus gallus). Each enzyme of each species was assayed for its specific activity over a pH range of 2-12 and a temperature range of 30-80 °C. The results revealed that, for Nile tilapia, T had its optimum at pH 9 and 50-80 °C, whereas C had its optimum at pH 8 and 60 °C. For sand goby, T had its optimum at pH 7 and 50 °C, while C had its optimum at pH 11 and 70-75 °C. For the juvenile freshwater prawn, T had its optimum at pH 10-11 and 60-65 °C, and C at pH 8 and 70 °C. For starter native chicken, T had its optimum at pH 7 and 70 °C, whereas C had its optimum at pH 8 and 60 °C. This information on optimal conditions will be highly valuable for the accurate measurement of T and C activities, which benefits growth and feed analysis.

Keywords: trypsin, chymotrypsin, Oreochromis niloticus, Oxyeleotoris marmoratus, Macrobachium rosenberchii, Gallus gallus

Procedia PDF Downloads 259
2155 Optimization of Hate Speech and Abusive Language Detection on Indonesian-language Twitter using Genetic Algorithms

Authors: Rikson Gultom

Abstract:

Hate speech and abusive language on social media are difficult to detect; usually they are detected only after becoming viral in cyberspace, which is of course too late for prevention. An early detection system with fairly good accuracy is needed so that it can reduce the conflicts in society caused by social media posts that attack individuals, groups, and governments in Indonesia. The purpose of this study is to find, among the machine learning methods studied, an early detection model for Twitter social media with high accuracy. In this study, the support vector machine (SVM), Naïve Bayes (NB), and Random Forest Decision Tree (RFDT) methods were compared with the support vector machine with genetic algorithm (SVM-GA), Naïve Bayes with genetic algorithm (NB-GA), and Random Forest Decision Tree with genetic algorithm (RFDT-GA). The study produced a comparison table for the accuracy of the hate speech and abusive language detection models, presented the accuracy of the six algorithms as a graph based on the Indonesian-language Twitter dataset, and identified the best model with the highest accuracy.
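
A toy illustration of genetic-algorithm hyperparameter tuning for an SVM, using scikit-learn with synthetic features as a stand-in for the TF-IDF features of Indonesian tweets; the genome encoding (log-scaled C and gamma), population size, and operators are small illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=50, n_informative=10, random_state=0)

def fitness(genome):
    """Genome = (log10 C, log10 gamma); fitness = 3-fold cross-validated accuracy."""
    clf = SVC(C=10.0 ** genome[0], gamma=10.0 ** genome[1], kernel="rbf")
    return cross_val_score(clf, X, y, cv=3).mean()

pop = rng.uniform([-2, -4], [3, 1], size=(12, 2))       # initial population in log space
for gen in range(15):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[::-1][:6]]          # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(6)], parents[rng.integers(6)]
        child = np.where(rng.uniform(size=2) < 0.5, a, b)                        # uniform crossover
        child = child + rng.normal(0, 0.3, size=2) * (rng.uniform(size=2) < 0.3)  # mutation
        children.append(np.clip(child, [-2, -4], [3, 1]))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(g) for g in pop])]
print(f"best C = {10**best[0]:.3g}, gamma = {10**best[1]:.3g}, CV accuracy = {fitness(best):.3f}")
```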

Keywords: abusive language, hate speech, machine learning, optimization, social media

Procedia PDF Downloads 129
2154 Solar Building Design Using GaAs PV Cells for Optimum Energy Consumption

Authors: Hadis Pouyafar, D. Matin Alaghmandan

Abstract:

Gallium arsenide (GaAs) solar cells are widely used in applications like spacecraft and satellites because they have a high absorption coefficient and efficiency and can withstand high-energy particles such as electrons and protons. With the energy crisis, there is a growing need for efficient and cost-effective solar cells. GaAs cells, with their 46% efficiency compared to silicon cells' 23%, can be utilized in buildings to achieve nearly zero emissions; in this way, more of the incident solar irradiation can be converted into electricity. The III-V semiconductors used in these cells offer superior performance compared to the other technologies available. However, despite these advantages, Si cells dominate the market due to their lower prices. In our study, we took a software-based approach from the start to gather all the required information, aiming to design an optimal building that harnesses the full potential of solar energy. Our modeling results reveal a promising future for GaAs cells. We utilized the Grasshopper plugin for modeling and optimization purposes, and to assess radiation, weather data, solar energy levels, and other factors, we relied on the Ladybug and Honeybee plugins. We have shown that silicon solar cells may not always be the best choice for meeting electricity demands, particularly when higher power output is required. Therefore, given the power consumption and the available surface area for photovoltaic (PV) installation, it may be necessary to consider more efficient solar cell options, such as GaAs solar cells. By considering the building requirements and utilizing GaAs technology, we were able to optimize the PV surface area.

Keywords: gallium arsenide (GaAs), optimization, sustainable building, GaAs solar cells

Procedia PDF Downloads 96
2153 Methodology: A Review in Modelling and Predictability of Embankment in Soft Ground

Authors: Bhim Kumar Dahal

Abstract:

Transportation network development in developing countries is proceeding at a rapid pace. The majority of these networks consist of railways and expressways, which pass through diverse topography, landforms, and geological conditions despite the avoidance principle applied during route selection. The construction of such networks demands many low to high embankments, which require improvement of the foundation soil. This paper focuses on the various advanced ground improvement techniques used to improve soft soil, on the modelling approaches, and on their predictability for embankment construction. The ground improvement techniques can be broadly classified into three groups, i.e., the densification group, the drainage and consolidation group, and the reinforcement group, which are discussed with some case studies. Various methods have been used in modelling the embankments, from simple one-dimensional to complex three-dimensional models, using a variety of constitutive models. However, the reliability of the predictions is not found to improve systematically with the level of sophistication, and the predictions sometimes deviate by more than 60% from the monitored values even at the same level of sophistication. This deviation is mainly due to the selection of the constitutive model, the assumptions made during the different stages, deviations in the selection of the model parameters, and simplifications in the physical modelling of the ground conditions. The deviation can be reduced by using an optimization process, optimization tools, and sensitivity analysis of the model parameters, which will guide the selection of appropriate model parameters.

Keywords: cement, improvement, physical properties, strength

Procedia PDF Downloads 176
2152 Predictive Maintenance of Industrial Shredders: Efficient Operation through Real-Time Monitoring Using Statistical Machine Learning

Authors: Federico Pittino, Thomas Arnold

Abstract:

The shredding of waste materials is a key step in the recycling process towards the circular economy. Industrial shredders for waste processing operate in very harsh conditions, leading to the need for frequent maintenance of critical components. Maintenance optimization is also particularly important for increasing the machine's efficiency, thereby reducing the operational costs. In this work, a monitoring system has been developed and deployed on an industrial shredder located at a waste recycling plant in Austria. The machine has been monitored for one year, and methods for predictive maintenance have been developed for two key components: the cutting knives and the drive belt. The large amount of collected data is leveraged by statistical machine learning techniques, which do not require very detailed knowledge of the machine or its live operating conditions. The results show that, despite the wide range of operating conditions, a reliable estimate of the optimal time for maintenance can be derived. Moreover, the trade-off between the cost of maintenance and the increase in power consumption due to the wear state of the monitored components is investigated. This work proves the benefits of a real-time monitoring system for the efficient operation of industrial shredders.

Keywords: predictive maintenance, circular economy, industrial shredder, cost optimization, statistical machine learning

Procedia PDF Downloads 125
2151 Procedure Model for Data-Driven Decision Support Regarding the Integration of Renewable Energies into Industrial Energy Management

Authors: M. Graus, K. Westhoff, X. Xu

Abstract:

Climate change is driving change in all aspects of society. While the expansion of renewable energies proceeds, industry has not been convinced by general studies of the potential of demand-side management to reinforce smart grid considerations in its operational business. In this article, a procedure model for case-specific, data-driven decision support in industrial energy management, based on a holistic data analytics approach, is presented. The model is demonstrated on the strategic decision problem of integrating renewable energies into industrial energy management. This question arises from considerations of changing the electricity contract model from a standard rate to volatile energy prices corresponding to the energy spot market, which is increasingly affected by renewable energies. The procedure model corresponds to a data analytics process consisting of data modelling, analysis, simulation, and optimization steps. This procedure helps to quantify the potential of sustainable production concepts based on data from a factory. The model is validated with data from a printer, in analogy to a simple production machine. The overall goal is to establish smart grid principles for industry via the transformation from knowledge-driven to data-driven decisions within manufacturing companies.

Keywords: data analytics, green production, industrial energy management, optimization, renewable energies, simulation

Procedia PDF Downloads 436
2150 Interval Bilevel Linear Fractional Programming

Authors: F. Hamidi, N. Amiri, H. Mishmast Nehi

Abstract:

The Bilevel Programming (BP) model has been presented for a decision-making process that involves two decision makers in a hierarchical structure. In fact, BP is a model for a static two-person game (the leader in the upper level and the follower in the lower level) in which each player tries to optimize his or her own objective function under interdependent constraints; the game is sequential and non-cooperative. The decision variables are divided between the two players, and one player's choice affects the other's benefit and choices. In other words, BP consists of two nested optimization problems with two objective functions (upper and lower), where the constraint region of the upper-level problem is implicitly determined by the lower-level problem. In real cases, the coefficients of an optimization problem may not be precise, i.e., they may be intervals. In this paper, we develop an algorithm for solving interval bilevel linear fractional programming problems, that is, bilevel problems in which both objective functions are linear fractional, the coefficients are intervals, and the common constraint region is a polyhedron. From the original problem, the best and the worst bilevel linear fractional problems are derived, and then, using the extended Charnes and Cooper transformation, each fractional problem can be reduced to a linear problem. We can then find the best and the worst optimal values of the leader's objective function by two algorithms.
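
To illustrate the Charnes-Cooper step used on each derived problem, the sketch below transforms a small single-level linear fractional program, maximize (c·x + alpha)/(d·x + beta) subject to Ax <= b, x >= 0, into an equivalent LP in the variables y = t·x and t = 1/(d·x + beta), and solves it with SciPy. The coefficients are made up, and the bilevel and interval aspects of the paper are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical fractional objective: maximize (c·x + alpha) / (d·x + beta), with d·x + beta > 0.
c, alpha = np.array([2.0, 1.0]), 1.0
d, beta = np.array([1.0, 3.0]), 2.0
A = np.array([[1.0, 1.0], [2.0, 1.0]])     # Ax <= b, x >= 0
b = np.array([4.0, 6.0])

# Charnes-Cooper LP in z = [y, t]: max c·y + alpha*t  s.t.  A y - b t <= 0,  d·y + beta*t = 1.
obj = -np.concatenate([c, [alpha]])                 # linprog minimizes, so negate
A_ub = np.hstack([A, -b.reshape(-1, 1)])
b_ub = np.zeros(A.shape[0])
A_eq = np.concatenate([d, [beta]]).reshape(1, -1)
b_eq = np.array([1.0])

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
y, t = res.x[:2], res.x[2]
x = y / t                                           # recover the fractional-program solution
print("optimal x:", x, " objective:", (c @ x + alpha) / (d @ x + beta))
```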

Keywords: best and worst optimal solutions, bilevel programming, fractional, interval coefficients

Procedia PDF Downloads 447
2149 Significant Reduction in Specific CO₂ Emission through Process Optimization at G Blast Furnace, Tata Steel Jamshedpur

Authors: Shoumodip Roy, Ankit Singhania, M. K. G. Choudhury, Santanu Mallick, M. K. Agarwal, R. V. Ramna, Uttam Singh

Abstract:

One of the key corporate goals of Tata Steel is to demonstrate environmental leadership, and decreasing specific CO₂ emission is one of the key steps towards achieving it. At any blast furnace, specific CO₂ emission is directly proportional to the fuel intake. To reduce the fuel intake at G Blast Furnace, an initial benchmarking exercise was carried out against international and domestic blast furnaces to determine the potential for improvement. The gap identified during the exercise revealed that the benchmark blast furnaces operated with superior raw material quality compared to G Blast Furnace. However, since the raw materials for G Blast Furnace are sourced from captive mines, improvement in raw material quality was out of scope. Therefore, trials were conducted with different operating regimes to identify the key process parameters which, on optimization, could significantly reduce the fuel intake in G Blast Furnace. The key process parameters identified from the trials were the stoichiometric oxygen ratio, the melting capacity ratio, and the burden distribution inside the furnace. These parameters were optimized to bridge the gap in fuel intake at G Blast Furnace, thereby reducing the specific CO₂ emission to benchmark levels. This paradigm shift enabled the fuel intake to be lowered by 70 kg per ton of liquid iron produced, thereby reducing the specific CO₂ emission by 15 percent.

Keywords: benchmark, blast furnace, CO₂ emission, fuel rate

Procedia PDF Downloads 281
2148 Electricity Sector's Status in Lebanon and Portfolio Optimization for the Future Electricity Generation Scenarios

Authors: Nour Wehbe

Abstract:

The Lebanese electricity sector is at the heart of a deep crisis. Electricity in Lebanon is supplied by Électricité du Liban (EdL), which has suffered from technical and financial deficiencies for decades and has proved unable to meet demand, which still exceeds supply. As a result, backup generation is widespread throughout Lebanon. The sector consumes massive government resources and, on top of that, consumers pay large additional amounts to satisfy their electricity needs. While developed countries have been investing in renewable energy for the past two decades, the Lebanese government realizes the importance of adopting such energy sourcing strategies for upgrading the country's electricity sector. The diversification of the national electricity generation mix has risen considerably on Lebanon's energy planning agenda, especially since a detailed review of the country's energy potential has revealed great solar and wind energy resources, a considerable biomass resource, and an important hydraulic potential. This paper presents a review of the energy status of Lebanon, together with a detailed review of the EdL structure, its existing problems, and the recommended solutions. In addition, scenarios reflecting the implementation of policy projects are presented, and conclusions are drawn on the usefulness of the proposed evaluation methodology and the effectiveness of the adopted new energy policy for the electricity sector in Lebanon.
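
A stylized mean-variance sketch for a generation mix, in the spirit of the portfolio approach mentioned in the keywords: technology shares are chosen to minimize the variance of the portfolio generating cost subject to a target expected cost, with entirely hypothetical cost means and covariances that are not taken from the paper.

```python
import numpy as np
import cvxpy as cp

techs = ["thermal (EdL)", "solar PV", "wind", "hydro"]
mean_cost = np.array([180.0, 70.0, 85.0, 60.0])      # assumed expected costs (USD/MWh)
# Assumed covariance of cost fluctuations (fuel-driven generation is volatile).
cov = np.array([[900.0, 10.0, 10.0,  5.0],
                [ 10.0, 60.0, 20.0,  5.0],
                [ 10.0, 20.0, 80.0,  5.0],
                [  5.0,  5.0,  5.0, 40.0]])

w = cp.Variable(4)                                    # generation shares
target = 110.0                                        # target expected portfolio cost (USD/MWh)
risk = cp.quad_form(w, cov)
prob = cp.Problem(cp.Minimize(risk),
                  [cp.sum(w) == 1, w >= 0, mean_cost @ w <= target])
prob.solve()

for name, share in zip(techs, w.value):
    print(f"{name:15s} {share:6.2%}")
print(f"portfolio cost std. dev.: {np.sqrt(risk.value):.1f} USD/MWh")
```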

Keywords: EdL Electricite du Liban, portfolio optimization, electricity generation mix, mean-variance approach

Procedia PDF Downloads 248
2147 Model Updating Based on Modal Parameters Using Hybrid Pattern Search Technique

Authors: N. Guo, C. Xu, Z. C. Yang

Abstract:

In order to ensure the high reliability of an aircraft, accurate structural dynamics analysis has become an indispensable part of aircraft structural design. Therefore, a structural finite element model that can accurately calculate the structural dynamics and their transfer relations is a prerequisite in structural dynamic design. A dynamic finite element model updating method is presented to correct the uncertain parameters of the finite element model of a structure using measured modal parameters. The coordinate modal assurance criterion is used to evaluate the correlation level at each coordinate between the experimental and the analytical mode shapes. The weighted sum of the natural frequency residual and the coordinate modal assurance criterion residual is then used as the objective function. Moreover, the hybrid pattern search (HPS) optimization technique, which synthesizes the advantages of the pattern search (PS) technique and the genetic algorithm (GA), is introduced to solve the dynamic FE model updating problem. A numerical simulation and a model updating experiment for the GARTEUR aircraft model are performed to validate the feasibility and effectiveness of the presented dynamic model updating method. The updated results show that the proposed method can successfully correct the erroneous parameters with good robustness.
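
As a small illustration of the correlation measure and objective described here, the sketch below computes the standard per-mode modal assurance criterion (MAC) between paired analytical and experimental mode shapes (the paper's coordinate MAC is a per-coordinate variant of the same idea) and forms a weighted objective from frequency residuals and (1 - MAC) terms; the mode shapes, frequencies, and weights are invented.

```python
import numpy as np

def mac(phi_a, phi_e):
    """Modal assurance criterion between two mode-shape vectors (1 = perfect correlation)."""
    return (phi_a @ phi_e) ** 2 / ((phi_a @ phi_a) * (phi_e @ phi_e))

# Hypothetical paired data: analytical vs. experimental frequencies (Hz) and mode shapes.
f_a = np.array([6.1, 16.8, 33.5])
f_e = np.array([6.4, 16.1, 35.0])
phi_a = np.array([[1.0, 0.8, 0.3], [1.0, -0.2, -0.9], [0.6, -1.0, 0.7]])
phi_e = phi_a + 0.05 * np.random.default_rng(3).standard_normal(phi_a.shape)

w_freq, w_shape = 1.0, 0.5                       # assumed weights
freq_res = ((f_a - f_e) / f_e) ** 2
shape_res = np.array([1.0 - mac(phi_a[i], phi_e[i]) for i in range(len(f_a))])

objective = w_freq * freq_res.sum() + w_shape * shape_res.sum()
print("MAC per mode:", [round(mac(phi_a[i], phi_e[i]), 4) for i in range(3)])
print(f"weighted updating objective = {objective:.5f}")
```

In a full updating loop, this objective would be minimized over the uncertain FE parameters by the optimizer (here, the hybrid pattern search / genetic algorithm), re-solving the eigenproblem at each candidate parameter set.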

Keywords: model updating, modal parameter, coordinate modal assurance criterion, hybrid genetic/pattern search

Procedia PDF Downloads 161
2146 Developing a Machine Learning-based Cost Prediction Model for Construction Projects using Particle Swarm Optimization

Authors: Soheila Sadeghi

Abstract:

Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.
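
A self-contained sketch of the core idea, PSO searching the weights of a small neural network, on synthetic stand-in data for project records; the network size, swarm settings, and data are illustrative assumptions, not the study's dataset or tuned configuration (which optimizes an ANN with PSO on real road-reconstruction records).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for project records: 3 features (cost estimate, resource index, progress).
X = rng.uniform(0, 1, size=(200, 3))
y = 1.5 * X[:, 0] + 0.8 * X[:, 1] - 0.3 * X[:, 2] + 0.05 * rng.standard_normal(200)

n_hidden = 6
n_w = 3 * n_hidden + n_hidden + n_hidden + 1      # W1, b1, W2, b2 flattened

def mse(v):
    """Mean squared error of a one-hidden-layer network whose weights are the vector v."""
    i = 0
    W1 = v[i:i + 3 * n_hidden].reshape(3, n_hidden); i += 3 * n_hidden
    b1 = v[i:i + n_hidden]; i += n_hidden
    W2 = v[i:i + n_hidden]; i += n_hidden
    b2 = v[i]
    pred = np.tanh(X @ W1 + b1) @ W2 + b2
    return np.mean((pred - y) ** 2)

# Plain global-best PSO over the flattened weight vector.
n_particles, iters = 30, 300
pos = rng.uniform(-1, 1, size=(n_particles, n_w))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

w, c1, c2 = 0.7, 1.5, 1.5                         # inertia and acceleration coefficients
for _ in range(iters):
    r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("final training MSE:", mse(gbest))
```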

Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction

Procedia PDF Downloads 61
2145 Implementation of a Multimodal Biometrics Recognition System with Combined Palm Print and Iris Features

Authors: Rabab M. Ramadan, Elaraby A. Elgallad

Abstract:

With their increasingly wide application, unimodal biometric systems have to face a diversity of problems such as signal and background noise, distortion, and environmental differences. Therefore, multimodal biometric systems have been proposed to solve these problems. This paper introduces a bimodal biometric recognition system based on features extracted from the human palm print and iris. Palm print biometrics is a fairly new, evolving technology that identifies people by their palm features. The iris is a strong competitor, together with the face and fingerprints, for a place in multimodal recognition systems. In this research, we introduce an algorithm for combining the extracted palm and iris features using a texture-based descriptor, the Scale Invariant Feature Transform (SIFT). Since the feature sets are non-homogeneous, as features of different biometric modalities are used, these features are concatenated to form a single feature vector. Particle swarm optimization (PSO) is used as a feature selection technique to reduce the dimensionality of the feature vector. The proposed algorithm is applied to the Indian Institute of Technology Delhi (IITD) database, and its performance is compared with various iris recognition algorithms found in the literature.

Keywords: iris recognition, particle swarm optimization, feature extraction, feature selection, palm print, scale invariant feature transform (SIFT)

Procedia PDF Downloads 235