Search results for: Monte Carlo algorithms
2017 Improve Closed Loop Performance and Control Signal Using Evolutionary Algorithms Based PID Controller
Authors: Mehdi Shahbazian, Alireza Aarabi, Mohsen Hadiyan
Abstract:
Proportional-Integral-Derivative (PID) controllers are the most widely used controllers in industry because of their simplicity and robustness. Different values of the PID parameters produce different step responses, so an increasing amount of literature is devoted to the proper tuning of PID controllers. The problem merits further investigation because traditional tuning methods produce large control signals that can damage the system, whereas tuning methods based on evolutionary algorithms improve both the control signal and the closed-loop performance. In this paper, three tuning methods for PID controllers are studied: Ziegler-Nichols, which is a traditional tuning method, and two tuning methods based on evolutionary algorithms, namely the genetic algorithm (GA) and particle swarm optimization (PSO). To examine the validity of the PSO and GA tuning methods, a comparative analysis on a DC motor plant is carried out. Simulation results reveal that the evolutionary-algorithm-based tuning methods improve the control signal amplitude and the quality factors of the closed-loop system, such as rise time, integral absolute error (IAE), and maximum overshoot.
Keywords: evolutionary algorithm, genetic algorithm, particle swarm optimization, PID controller
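As a concrete illustration of evolutionary tuning, the sketch below (not the authors' implementation) uses particle swarm optimization to minimize the IAE of a PID loop around a simulated DC-motor-like position plant; the plant constants, gain bounds, and swarm hyperparameters are illustrative assumptions.

```python
import numpy as np

def simulate_iae(gains, dt=0.001, t_end=2.0):
    """IAE of the unit-step response of a PID loop around a DC-motor-like
    position plant G(s) = K/(s*(tau*s + 1)), integrated with Euler steps."""
    Kp, Ki, Kd = gains
    K, tau = 1.0, 0.1              # illustrative motor constants
    theta, omega = 0.0, 0.0        # plant states: position and speed
    integ, prev_err, iae = 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - theta
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = Kp * err + Ki * integ + Kd * deriv
        omega += dt * (-omega + K * u) / tau   # tau*domega/dt = -omega + K*u
        theta += dt * omega
        iae += abs(err) * dt
    return iae

def pso_tune(n_particles=20, iters=50, lo=0.0, hi=50.0, seed=0):
    """Standard global-best PSO over the (Kp, Ki, Kd) space."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, 3))
    v = np.zeros_like(x)
    pbest = x.copy()
    pcost = np.array([simulate_iae(p) for p in x])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([simulate_iae(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, pcost.min()

gains, iae = pso_tune()
print("PSO-tuned (Kp, Ki, Kd):", gains.round(2), " IAE:", round(float(iae), 4))
```

A GA tuner would differ only in the search loop (selection, crossover, mutation) while reusing the same IAE fitness function.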
Procedia PDF Downloads 484
2016 An Assessment of Different Blade Tip Timing (BTT) Algorithms Using an Experimentally Validated Finite Element Model Simulator
Authors: Mohamed Mohamed, Philip Bonello, Peter Russhard
Abstract:
Blade Tip Timing (BTT) is a technology concerned with the estimation of both the frequency and amplitude of rotating blades. A BTT system comprises two main parts: (a) the arrival time measurement system, and (b) the analysis algorithms. Simulators play an important role in the development of the analysis algorithms, since they generate blade tip displacement data from simulated blade vibration under controlled conditions. This enables an assessment of the performance of the different algorithms with respect to their ability to accurately reproduce the original simulated vibration. Such an assessment is usually not possible with real engine data, since there is no practical alternative to BTT for blade vibration measurement. Most simulators used in the literature are based on a simple spring-mass-damper model to determine the vibration. In this work, a more realistic, experimentally validated simulator based on a Finite Element (FE) model of a bladed disc (blisk) is first presented. It is then used to generate the necessary data for the assessment of different BTT algorithms. The FE modelling is validated using both a hammer test and two FireWire cameras for the mode shapes. A number of autoregressive methods, fitting methods, and state-of-the-art inverse methods (i.e. the Russhard method) are compared. All methods are compared with respect to both synchronous and asynchronous excitations, with both single and simultaneous frequencies. The study assesses the applicability of each method for different conditions of vibration, amounts of sampling data, and testing facilities, according to its performance and efficiency under these conditions.
Keywords: blade tip timing, blisk, finite element, vibration measurement
Procedia PDF Downloads 312
2015 Investigation of Different Machine Learning Algorithms in Large-Scale Land Cover Mapping within the Google Earth Engine
Authors: Amin Naboureh, Ainong Li, Jinhu Bian, Guangbin Lei, Hamid Ebrahimy
Abstract:
Large-scale land cover mapping has become a new challenge in the land change and remote sensing fields because it involves a large volume of data. Moreover, selecting the right classification method is quite difficult, especially when there are different types of landscapes in the study area. This paper compares the performance of different machine learning (ML) algorithms for generating a land cover map of the China-Central Asia–West Asia Corridor, considered one of the main parts of the Belt and Road Initiative (BRI) project. The cloud-based Google Earth Engine (GEE) platform was used to generate a land cover map of the study area from Landsat-8 images (2017) by applying three frequently used ML algorithms: random forest (RF), support vector machine (SVM), and artificial neural network (ANN). The selected ML algorithms were trained and tested using reference data obtained from the MODIS yearly land cover product and very high-resolution satellite images. The findings illustrate that, among the three algorithms, RF, with 91% overall accuracy, produced the best land cover map for the China-Central Asia–West Asia Corridor, whereas ANN showed the worst result, with 85% overall accuracy. The strong performance of GEE in applying different ML algorithms and handling the huge volume of remotely sensed data in the present study shows that it could also help researchers generate reliable long-term land cover change maps. These findings are of great importance for decision-makers and the BRI's authorities in strategic land use planning.
Keywords: land cover, google earth engine, machine learning, remote sensing
Procedia PDF Downloads 113
2014 Automatic Queuing Model Applications
Authors: Fahad Suleiman
Abstract:
Queuing, in a medical system, is the process of moving patients in a specific sequence to a specific service according to the nature of their illness. The term scheduling stands for the process of computing a schedule, which may be done by a queuing-based scheduler. This paper focuses on the medical consultancy system, the different queuing algorithms used in healthcare systems to serve patients, and the average waiting time. The aim of this paper is to build an automatic queuing system for organizing the medical queue, one that can analyse the queue status and decide which patient to serve. The new queuing architecture model can switch between different scheduling algorithms according to the testing results and the average waiting time. The main innovation of this work is that the average waiting time is modeled and taken into account in processing, together with the process of switching to the scheduling algorithm that gives the best average waiting time.
Keywords: queuing systems, queuing system models, scheduling algorithms, patients
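A minimal sketch of the switching idea, under the simplifying assumption of non-preemptive service in the exact order each algorithm produces: two candidate schedulers are scored by average waiting time and the better one is selected. The patient fields and numbers are hypothetical.

```python
from statistics import mean

def fcfs(patients):
    """First-come-first-served: serve in arrival order."""
    return sorted(patients, key=lambda p: p["arrival"])

def priority(patients):
    """Serve more urgent cases first (lower triage number = more urgent)."""
    return sorted(patients, key=lambda p: (p["triage"], p["arrival"]))

def average_wait(order):
    """Average waiting time if patients are served strictly in this order."""
    clock, waits = 0.0, []
    for p in order:
        start = max(clock, p["arrival"])
        waits.append(start - p["arrival"])
        clock = start + p["service"]
    return mean(waits)

def best_scheduler(patients, algorithms):
    """Pick the algorithm with the smallest average waiting time."""
    scored = {name: average_wait(alg(patients)) for name, alg in algorithms.items()}
    return min(scored, key=scored.get), scored

patients = [
    {"arrival": 0, "service": 10, "triage": 3},
    {"arrival": 2, "service": 4, "triage": 1},
    {"arrival": 5, "service": 6, "triage": 2},
]
name, scored = best_scheduler(patients, {"FCFS": fcfs, "priority": priority})
print("switch to:", name, scored)
```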
Procedia PDF Downloads 354
2013 Improved Multi–Objective Firefly Algorithms to Find Optimal Golomb Ruler Sequences for Optimal Golomb Ruler Channel Allocation
Authors: Shonak Bansal, Prince Jain, Arun Kumar Singh, Neena Gupta
Abstract:
Recently, nature-inspired algorithms have found widespread use in tough and time-consuming multi-objective scientific and engineering design optimization problems. In this paper, we present extended forms of the firefly algorithm to find optimal Golomb ruler (OGR) sequences. One of the major applications of OGRs is as an unequally spaced channel-allocation algorithm in optical wavelength division multiplexing (WDM) systems, in order to minimize the adverse four-wave mixing (FWM) crosstalk effect. The simulation results conclude that the proposed optimization algorithm has superior performance compared to existing conventional computing and nature-inspired optimization algorithms for finding OGRs, in terms of ruler length, total optical channel bandwidth, and computation time.
Keywords: channel allocation, conventional computing, four-wave mixing, nature-inspired algorithm, optimal Golomb ruler, Lévy flight distribution, optimization, improved multi-objective firefly algorithms, Pareto optimal
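The property being optimized is easy to state in code. Below is a minimal sketch, independent of the firefly search itself: a feasibility predicate (all pairwise mark differences distinct) and the ruler-length objective such an optimizer would minimize.

```python
from itertools import combinations

def is_golomb(marks):
    """A ruler is Golomb if all pairwise mark differences are distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

def ruler_length(marks):
    """Objective minimized when searching for an optimal Golomb ruler."""
    return max(marks) - min(marks)

# A known optimal 4-mark Golomb ruler and a non-Golomb counterexample.
print(is_golomb([0, 1, 4, 6]), ruler_length([0, 1, 4, 6]))   # True 6
print(is_golomb([0, 1, 2, 4]))                               # False (diff 1 and 2 repeat)
```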
Procedia PDF Downloads 322
2012 A Metaheuristic Approach for Optimizing Perishable Goods Distribution
Authors: Bahare Askarian, Suchithra Rajendran
Abstract:
Maintaining the freshness and quality of perishable goods during distribution is a critical challenge for logistics companies. This study presents a comprehensive framework for optimizing the distribution of perishable goods through a mathematical model of the Transportation Inventory Location Routing Problem (TILRP). The model incorporates the impact of product age on customer demand, addressing the complexities associated with inventory management and routing. To tackle this problem, we develop both simple and hybrid metaheuristic algorithms for small- and medium-scale scenarios. The hybrid algorithm combines Biogeography-Based Optimization (BBO) with local search techniques to enhance performance and extend the approach to larger-scale challenges. Through extensive numerical simulations and sensitivity analyses across various scenarios, the performance of the proposed algorithms is evaluated, assessing their effectiveness in achieving optimal solutions. The results demonstrate that our algorithms significantly enhance distribution efficiency, offering valuable insights for logistics companies striving to improve their perishable goods supply chains.
Keywords: perishable goods, meta-heuristic algorithm, vehicle routing problem, inventory models
Procedia PDF Downloads 23
2011 Analytical Comparison of Conventional Algorithms with Vedic Algorithm for Digital Multiplier
Authors: Akhilesh G. Naik, Dipankar Pal
Abstract:
In today's scenario, the complexity of digital signal processing (DSP) applications and various microcontroller architectures has been increasing to such an extent that the traditional approaches to multiplier design in most processors are becoming outdated for being comparatively slow. Modern processing applications require suitable pipelined approaches, and therefore algorithms that are friendlier to pipelined architectures. Traditional algorithms and architectures such as the Wallace Tree, Radix-4 Booth, Radix-8 Booth, and Dadda have proven to be comparatively slow for pipelined architectures. These architectures therefore need to be optimized, or combined with one another, to enhance their performance and make them suitable for pipelined hardware/architectures. Recently, the Vedic algorithm has been shown mathematically to be efficient, being less complex and requiring fewer steps to establish its output, and has assumed renewed importance. This paper describes and shows how the Vedic algorithm can be better suited to pipelined architectures, and how it can be combined with traditional architectures and algorithms to enhance its ability even further. In this paper, we also establish that for complex applications on DSP and other microcontroller architectures, the Vedic approach to multiplication proves to be the best available and most efficient option.
Keywords: Wallace Tree, Radix-4 Booth, Radix-8 Booth, Dadda, Vedic, Single-Stage Karatsuba (SSK), Looped Karatsuba (LK)
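For readers unfamiliar with the scheme, here is a base-10 software sketch of the Vedic "vertically and crosswise" (Urdhva Tiryagbhyam) idea: every column of partial products is summed independently, with a single carry-propagation pass at the end, which is what makes the columns attractive for parallel, pipelined hardware. Hardware implementations work on binary digits; base 10 is used here only for readability.

```python
def urdhva_multiply(x, y):
    """Urdhva Tiryagbhyam multiplication: column k of the product collects all
    digit products a[i]*b[j] with i + j == k; carries are deferred to one
    final ripple pass (the column sums can be formed in parallel in hardware)."""
    a = [int(d) for d in str(x)][::-1]   # least significant digit first
    b = [int(d) for d in str(y)][::-1]
    cols = [0] * (len(a) + len(b))
    for i, da in enumerate(a):           # cross products, no intermediate carries
        for j, db in enumerate(b):
            cols[i + j] += da * db
    carry, digits = 0, []
    for c in cols:                       # single carry-propagation pass
        carry, d = divmod(c + carry, 10)
        digits.append(d)
    while carry:
        carry, d = divmod(carry, 10)
        digits.append(d)
    while len(digits) > 1 and digits[-1] == 0:
        digits.pop()                     # strip leading zeros
    return int("".join(map(str, digits[::-1])))

assert urdhva_multiply(1234, 5678) == 1234 * 5678
print(urdhva_multiply(1234, 5678))
```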
Procedia PDF Downloads 169
2010 Heuristic Algorithms for Time Based Weapon-Target Assignment Problem
Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee
Abstract:
Weapon-target assignment (WTA) is a problem that assigns available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over the past years for both static and dynamic environments (denoted SWTA and DWTA, respectively). Because the problem must be solved within an operationally relevant computation time, WTA has suffered from limited solution efficiency. As a result, the SWTA and DWTA problems have been solved only for limited battlefield situations. In this paper, the general situation under continuous time is considered as the Time-based Weapon Target Assignment (TWTA) problem. TWTA is studied using a mixed integer programming model, and three heuristic algorithms are suggested: decomposed opt-opt, decomposed opt-greedy, and greedy algorithms. Although the TWTA optimization model works inefficiently for large problem sizes, the decomposed opt-opt algorithm, based on the linearization and decomposition method, extracted efficient solutions in a reasonable computation time. Because the computation time of the scheduling part is too long for the optimization model, several greedy-based algorithms are proposed; these show lower performance values than the decomposed opt-opt algorithm, but need very little computation time. Hence, this paper proposes an improved method that applies decomposition to TWTA, from which more practical and effectual methods can be developed for using TWTA on the battlefield.
Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research
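To make the greedy baseline concrete, here is a sketch of the classic static, single-shot greedy WTA rule (a simplification of the paper's time-based setting): each weapon in turn engages the target whose expected surviving value it reduces the most. The target values and kill probabilities are invented for illustration.

```python
import numpy as np

def greedy_wta(values, pkill):
    """Greedy weapon-target assignment.
    values: (T,) target values; pkill: (W, T) single-shot kill probabilities.
    Returns (weapon, target) pairs and the total expected surviving value."""
    surv = np.asarray(values, dtype=float)   # expected surviving value per target
    assignment = []
    for w, p in enumerate(pkill):
        gain = surv * p                      # marginal expected damage per target
        t = int(gain.argmax())
        assignment.append((w, t))
        surv[t] *= (1.0 - p[t])              # update expected survival of target t
    return assignment, surv.sum()

values = [10.0, 6.0, 4.0]
pkill = np.array([[0.7, 0.3, 0.2],
                  [0.5, 0.6, 0.1],
                  [0.4, 0.2, 0.8]])
assignment, remaining = greedy_wta(values, pkill)
print(assignment, round(remaining, 3))       # [(0, 0), (1, 1), (2, 2)] 6.2
```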
Procedia PDF Downloads 336
2009 Calculation of Secondary Neutron Dose Equivalent in Proton Therapy of Thyroid Gland Using FLUKA Code
Authors: M. R. Akbari, M. Sadeghi, R. Faghihi, M. A. Mosleh-Shirazi, A. R. Khorrami-Moghadam
Abstract:
Proton radiotherapy (PRT) is becoming an established treatment modality for cancer. Localized tumors, such as undifferentiated thyroid tumors, are insufficiently handled by conventional radiotherapy, whereas protons offer the prospect of increasing the tumor dose without exceeding the tolerance of the surrounding healthy tissues. Despite the relatively high advantage of delivering a localized radiation dose to the tumor region, in proton therapy secondary neutron production can contribute significantly to the integral dose and lessen the advantages of this modality compared with conventional radiotherapy techniques. Furthermore, neutrons have a high quality factor; therefore, even a small physical dose can cause considerable biological effects. Measuring this neutron dose is a very critical step in predicting secondary cancer incidence. FLUKA Monte Carlo code simulations have been used to evaluate the dose due to secondaries in proton therapy. In this study, after first validating the simulated proton beam range in a water phantom against the CSDA range from NIST for the studied proton energy range (34-54 MeV), proton therapy of thyroid gland cancer was simulated using the FLUKA code. The secondary neutron dose equivalent of some organs and tissues beyond the target volume, caused by 34 and 54 MeV proton interactions, was calculated in order to evaluate secondary cancer incidence. A multilayer cylindrical neck phantom comprising all the layers of neck tissue, with a proton beam impinging normally on the phantom, was also simulated. The trachea (together with the larynx) had the greatest dose equivalent (1.24×10⁻¹ and 1.45 pSv per primary 34 and 54 MeV proton, respectively) among the simulated tissues beyond the target volume in the neck region.
Keywords: FLUKA code, neutron dose equivalent, proton therapy, thyroid gland
Procedia PDF Downloads 425
2008 A Data-Driven Agent Based Model for the Italian Economy
Authors: Michele Catalano, Jacopo Di Domenico, Luca Riccetti, Andrea Teglio
Abstract:
We develop a data-driven agent-based model (ABM) for the Italian economy and calibrate the model's initial conditions and parameters. As a preliminary step, we replicate the Monte Carlo simulation for the Austrian economy. Then, we evaluate the dynamic properties of the model: the long-run equilibrium and the allocative efficiency in terms of disequilibrium patterns arising in the search-and-matching process in the final goods, capital, intermediate goods, and credit markets. In this perspective, we use a randomized initial condition approach. We perform a robustness analysis, perturbing the system under different parameter setups. We explore the empirical properties of the model using a rolling window forecast exercise from 2010 to 2022 to observe the model's forecasting ability in the wake of the COVID-19 pandemic. We analyse the properties of the model with different numbers of agents, that is, with different scales of the model compared to the real economy. The model generally displays transient dynamics that properly fit macroeconomic data in terms of forecasting ability. We stress the model with a large set of shocks, namely interest rate policy, fiscal policy, and exogenous factors such as external foreign demand for exports. In this way, we can identify the most exposed sectors of the economy. Finally, we modify the technology mix of the various sectors and, consequently, the underlying input-output sectoral interdependence, to stress the economy and observe the long-run projections. This allows the model to generate endogenous crises due to the implied structural change, technological unemployment, and a potential lack of aggregate demand, creating the conditions for cyclical endogenous crises to be reproduced in this artificial economy.
Keywords: agent-based models, behavioral macro, macroeconomic forecasting, micro data
Procedia PDF Downloads 70
2007 Image Segmentation Techniques: Review
Authors: Lindani Mbatha, Suvendi Rimer, Mpho Gololo
Abstract:
Image segmentation is the process of dividing an image into several sections, such as the object's background and the foreground. It is a critical technique in both image-processing tasks and computer vision. Most image segmentation algorithms have been developed for gray-scale images, and comparatively little research has been devoted to color images. Most image segmentation algorithms or techniques vary based on the input data and the application. Nearly all of the techniques are unsuitable for noisy environments. Most of the work that has been done uses the Markov Random Field (MRF), which involves heavy computation and is said to be robust to noise. In recent years, image segmentation has been applied to problems such as easy processing of an image, interpretation of the contents of an image, and easy analysis of an image. This article reviews and summarizes some of the image segmentation techniques and algorithms that have been developed in past years. The techniques include convolutional neural networks (CNNs), edge-based techniques, region growing, clustering, thresholding techniques, and so on. The advantages and disadvantages of medical ultrasound image segmentation techniques are also discussed. The article also addresses the applications and potential future developments around image segmentation. This review concludes that no technique is perfectly suitable for the segmentation of all the different types of images, but the use of hybrid techniques yields more accurate and efficient results.
Keywords: clustering-based, convolution-network, edge-based, region-growing
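As a small worked example of the thresholding family discussed above, the following implements Otsu's method directly from its histogram definition; the synthetic bimodal image is only for demonstration.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the gray level that maximizes the between-class
    variance of the background/foreground split of the histogram."""
    prob = np.bincount(gray.ravel(), minlength=256) / gray.size
    mu_total = np.dot(np.arange(256), prob)
    best_t, best_var, cum_w, cum_mu = 0, 0.0, 0.0, 0.0
    for t in range(256):
        cum_w += prob[t]                 # background class weight
        cum_mu += t * prob[t]            # background first moment
        if cum_w < 1e-12 or 1.0 - cum_w < 1e-12:
            continue                     # degenerate split, skip
        mu_b = cum_mu / cum_w
        mu_f = (mu_total - cum_mu) / (1.0 - cum_w)
        var_between = cum_w * (1.0 - cum_w) * (mu_b - mu_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

rng = np.random.default_rng(1)
img = np.clip(np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 12, 5000)]),
              0, 255).astype(np.uint8).reshape(100, 100)   # synthetic bimodal image
t = otsu_threshold(img)
mask = img > t                                             # boolean foreground mask
print("Otsu threshold:", t)
```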
Procedia PDF Downloads 98
2006 THz Phase Extraction Algorithms for a THz Modulating Interferometric Doppler Radar
Authors: Shaolin Allen Liao, Hual-Te Chien
Abstract:
Various THz phase extraction algorithms have been developed for a novel THz Modulating Interferometric Doppler Radar (THz-MIDR) recently developed by the author. The THz-MIDR differs from the well-known FTIR technique in that it introduces a continuously modulating reference branch, in contrast to the time-consuming discrete stepping reference branch of FTIR. This change allows real-time tracking of a moving object and capture of its Doppler signature. The working principle of the THz-MIDR is similar to the FTIR technique: the incoming THz emission from the scene is split by a beam splitter/combiner; one of the beams is continuously modulated by a vibrating mirror or phase modulator, while the other split beam is reflected by a reflection mirror; finally, both the modulated reference beam and the reflected beam are combined by the same beam splitter/combiner and detected by a THz intensity detector (for example, a pyroelectric detector). In order to extract the THz phase from the single intensity measurement signal, we have derived rigorous mathematical formulas for three frequency-banded (FB) signals: 1) the DC Low-Frequency Banded (LFB) signal; 2) the Fundamental Frequency Banded (FFB) signal; and 3) the Harmonic Frequency Banded (HFB) signal. The THz phase extraction algorithms are then developed based on combinations of two or all three of these FB signals, together with efficient algorithms such as the Levenberg-Marquardt nonlinear fitting algorithm. Numerical simulations have also been performed in Matlab with simulated THz-MIDR interferometric signals of various signal-to-noise ratios (SNR) to verify the algorithms.
Keywords: algorithm, modulation, THz phase, THz interferometry doppler radar
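A toy version of the fitting step can be sketched as follows: a phase-modulated two-beam intensity model is fitted to noisy samples with the Levenberg-Marquardt solver in SciPy. The model form and parameter values are illustrative assumptions, not the author's derived FB-signal formulas.

```python
import numpy as np
from scipy.optimize import least_squares

def intensity(params, t, f_mod):
    """Two-beam interference with a sinusoidal reference-phase modulation:
    I(t) = A + B*cos(phi + m*sin(2*pi*f_mod*t))."""
    A, B, m, phi = params
    return A + B * np.cos(phi + m * np.sin(2.0 * np.pi * f_mod * t))

rng = np.random.default_rng(0)
f_mod = 1.0e3                                # reference modulation frequency, Hz
t = np.linspace(0.0, 5.0e-3, 2000)
true = (1.0, 0.5, 2.0, 0.8)                  # A, B, modulation depth, target phase
data = intensity(true, t, f_mod) + rng.normal(0.0, 0.02, t.size)  # noise sets SNR

fit = least_squares(lambda p: intensity(p, t, f_mod) - data,
                    x0=(0.9, 0.4, 1.8, 0.5), method="lm")   # Levenberg-Marquardt
print("recovered phase: %.3f rad (true 0.800)" % fit.x[3])
```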
Procedia PDF Downloads 346
2005 Ground Surface Temperature History Prediction Using Long-Short Term Memory Neural Network Architecture
Authors: Venkat S. Somayajula
Abstract:
The ground surface temperature history prediction model plays a vital role in determining standards for international nuclear waste management. International standards for borehole-based nuclear waste disposal require paleoclimate cycle predictions on the scale of a million years forward for the place of waste disposal. This research focuses on developing a paleoclimate cycle prediction model using a Bayesian long short-term memory (LSTM) neural architecture operating on accumulated borehole temperature history data. Bayesian models have previously been used for paleoclimate cycle prediction based on the Monte Carlo weight method, but due to limitations in coupling them with certain other prediction networks, Bayesian models could not previously accommodate prediction cycles of over 1000 years. LSTM provides a frontier for coupling the developed models with other prediction networks with ease. The paleoclimate cycle model developed in this process will be trained on existing borehole data and then coupled to surface temperature history prediction networks, which give endpoints for backpropagation of the LSTM network and optimize the prediction cycle for larger prediction time scales. The trained LSTM will be tested on past data for validation and then propagated forward to predict temperatures at borehole locations. This research will benefit studies on nuclear waste management, anthropological cycle predictions, and geophysical features.
Keywords: Bayesian long-short term memory neural network, borehole temperature, ground surface temperature history, paleoclimate cycle
Procedia PDF Downloads 130
2004 Modeling of Bipolar Charge Transport through Nanocomposite Films for Energy Storage
Authors: Meng H. Lean, Wei-Ping L. Chu
Abstract:
The effects of ferroelectric nanofiller size, shape, loading, and polarization on bipolar charge injection, transport, and recombination through amorphous and semicrystalline polymers are studied. A 3D particle-in-cell model extends the classical electrical double layer representation to treat ferroelectric nanoparticles. Metal-polymer charge injection assumes Schottky emission and Fowler-Nordheim tunneling, migration through field-dependent Poole-Frenkel mobility, and recombination with Monte Carlo selection based on collision probability. A boundary integral equation method is used for the solution of the Poisson equation, coupled with a second-order predictor-corrector scheme for robust time integration of the equations of motion. The stability criterion of the explicit algorithm conforms to the Courant-Friedrichs-Lewy limit. Trajectories of charges that make it through the film are curvilinear paths that meander through the interspaces. Results indicate that charge transport behavior depends on nanoparticle polarization, with anti-parallel orientation showing the highest leakage conduction and the lowest level of charge trapping in the interaction zone. The simulation prediction of a size range of 80 to 100 nm to minimize attachment and maximize conduction is validated by theory. Attached charge fractions go from 2.2% to 97% as nanofiller size is decreased from 150 nm to 60 nm. The computed conductivity of 0.4×10⁻¹⁴ S/cm is in agreement with published data for plastics. Charge attachment increases with spheroids due to the increase in surface area, especially so for oblate spheroids, showing the influence of larger cross-sections. Charge attachment to nanofillers and nanocrystallites increases with vol.% loading or degree of crystallinity, and saturates at about 40 vol.%.
Keywords: nanocomposites, nanofillers, electrical double layer, bipolar charge transport
Procedia PDF Downloads 355
2003 Adaptive Swarm Balancing Algorithms for Rare-Event Prediction in Imbalanced Healthcare Data
Authors: Jinyan Li, Simon Fong, Raymond Wong, Mohammed Sabah, Fiaidhi Jinan
Abstract:
Clinical data analysis and forecasting have made great contributions to disease control, prevention, and detection. However, such data usually suffer from highly imbalanced class distributions. In this paper, we target binary imbalanced datasets, where the positive samples make up only the minority. We investigate two different meta-heuristic algorithms, particle swarm optimization and the bat-inspired algorithm, and combine both of them with the synthetic minority over-sampling technique (SMOTE) for processing the datasets. One approach is to process the full dataset as a whole; the other is to split up the dataset and adaptively process it one segment at a time. The experimental results reveal that while the performance improvements obtained by the former method are not scalable to larger data scales, the latter, which we call Adaptive Swarm Balancing Algorithms, leads to significant efficiency and effectiveness improvements on large datasets. We also find it more consistent with the practice of typical large imbalanced medical datasets. We further use the meta-heuristic algorithms to optimize two key parameters of SMOTE, leading to more credible classifier performance and shortening the running time compared with the brute-force method.
Keywords: imbalanced dataset, meta-heuristic algorithm, SMOTE, big data
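For reference, the core SMOTE interpolation whose parameters the swarm algorithms tune (the neighbour count k and the amount of oversampling) can be written in a few lines; this minimal sketch fixes both parameters rather than optimizing them.

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=0):
    """Minimal SMOTE: each synthetic sample interpolates between a random
    minority sample and one of its k nearest minority-class neighbours."""
    rng = np.random.default_rng(seed)
    n = len(X_min)
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # a sample is not its own neighbour
    neighbours = np.argsort(d, axis=1)[:, :k]
    synthetic = np.empty((n_new, X_min.shape[1]))
    for s in range(n_new):
        i = rng.integers(n)
        j = neighbours[i, rng.integers(k)]
        lam = rng.random()                      # interpolation factor in [0, 1)
        synthetic[s] = X_min[i] + lam * (X_min[j] - X_min[i])
    return synthetic

X_min = np.random.default_rng(1).normal(size=(20, 3))   # 20 minority samples
print(smote(X_min, n_new=80).shape)                     # (80, 3) synthetic samples
```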
Procedia PDF Downloads 443
2002 Spatio-Temporal Analysis of Rabies Incidence in Herbivores of Economic Interest in Brazil
Authors: Francisco Miroslav Ulloa-Stanojlovic, Gina Polo, Ricardo Augusto Dias
Abstract:
Given the high incidence in Brazil of rabies in herbivores of economic interest (HEI), transmitted by the common vampire bat Desmodus rotundus, the occurrence of human rabies cases, and the huge economic losses in the world's largest cattle industry, it is important to assist the National Program for the Control of Rabies in Herbivores in Brazil, which aims to reduce the incidence of rabies in HEI populations, mainly through epidemiological surveillance, vaccination of herbivores, and control of vampire-bat roosts. Material and Methods: A retrospective spatio-temporal Kulldorff's spatial scan statistic, based on a Poisson model with Monte Carlo simulation, and Anselin's Local Moran's I statistic were used to uncover spatial clustering of HEI rabies from 2000 to 2014. Results: Three important clusters with significant year-to-year variation were identified (Figure 1). In 2000, one clustering area was identified in the North region, specifically in the State of Tocantins. Between 2000 and 2004, a cluster centered on the Midwest and Southeast regions, including the States of Goiás, Minas Gerais, Rio de Janeiro, Espírito Santo, and São Paulo, was prominent. Finally, between 2000 and 2005, an important cluster was found spanning the North, Midwest, and South regions. Conclusions: HEI rabies is endemic in the country. In addition, there appear to be significant differences among the States according to their surveillance services, which may be hindering control of the disease; other factors, such as the lack of information on vampire-bat roost identification and limited human resources for field monitoring, could also be contributing to the persistence of this problem. A review of the control program by the authorities is necessary.
Keywords: Brazil, Desmodus rotundus, herbivores, rabies
Procedia PDF Downloads 419
2001 Control of a Stewart Platform for Minimizing Impact Energy in Simulating Spacecraft Docking Operations
Authors: Leonardo Herrera, Shield B. Lin, Stephen J. Montgomery-Smith, Ziraguen O. Williams
Abstract:
Three control algorithms, Proportional-Integral-Derivative (PID), Linear-Quadratic-Gaussian (LQG), and Linear-Quadratic-Gaussian with shift, were applied in a computer simulation of a one-directional dynamic model of a Stewart Platform. The goal was to compare the dynamic system responses under the three control algorithms and to minimize the impact energy when simulating spacecraft docking operations. Equations were derived for the control algorithms and for the input and output of the feedback control system. Using MATLAB, Simulink diagrams were created to represent the three control schemes, with a switch selector for conveniently changing among the different controllers. The simulation demonstrated that the controller using the Linear-Quadratic-Gaussian-with-shift algorithm resulted in the lowest impact energy.
Keywords: controller, Stewart platform, docking operation, spacecraft
Procedia PDF Downloads 53
2000 Approach Based on Fuzzy C-Means for Band Selection in Hyperspectral Images
Authors: Diego Saqui, José H. Saito, José R. Campos, Lúcio A. de C. Jorge
Abstract:
Hyperspectral images and remote sensing are important for many applications. A problem in the use of these images is the high volume of data to be processed, stored, and transferred. Dimensionality reduction techniques can be used to reduce this volume of data. In this paper, an approach to band selection based on clustering algorithms is presented. The proposed structure is based on the Fuzzy C-Means (or K-Means) and NWHFC algorithms. New attributes relative to other studies in the literature, such as kurtosis and low correlation, are also considered. A comparison of the results of the approach using Fuzzy C-Means and K-Means with different attributes is performed. Both algorithms show similarly good results, particularly when the attributes variance and kurtosis are used in the clustering process, and the approach is applicable to hyperspectral images.
Keywords: band selection, fuzzy c-means, k-means, hyperspectral image
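A minimal sketch of the band-selection idea follows, with a plain NumPy Fuzzy C-Means and the variance/kurtosis attributes mentioned above; the random cube is a toy stand-in for a real hyperspectral image, and one representative band is kept per cluster.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Plain Fuzzy C-Means. Returns memberships U (n, c) and centres (c, f)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=-1) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))          # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centres

# Each band is one sample, described by simple per-band statistics.
rng = np.random.default_rng(2)
cube = rng.normal(0.0, 1.0, (64, 64, 100))     # toy hyperspectral cube
bands = cube.reshape(-1, 100).T                # (n_bands, n_pixels)
mu, sd = bands.mean(axis=1), bands.std(axis=1)
kurt = (((bands - mu[:, None]) / sd[:, None]) ** 4).mean(axis=1)
feats = np.column_stack([bands.var(axis=1), kurt])

U, _ = fuzzy_c_means(feats, c=10)
selected = sorted({int(np.argmax(U[:, k])) for k in range(10)})  # one band per cluster
print("selected bands:", selected)
```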
Procedia PDF Downloads 409
1999 A Hybrid Distributed Algorithm for Multi-Objective Dynamic Flexible Job Shop Scheduling Problem
Authors: Aydin Teymourifar, Gurkan Ozturk
Abstract:
In this paper, a hybrid distributed algorithm is suggested for the multi-objective dynamic flexible job shop scheduling problem. The proposed algorithm is high-level, in that several algorithms search the space on different machines simultaneously; it is also a hybrid algorithm that takes advantage of artificial intelligence, evolutionary, and optimization methods. Distribution is done at different levels, and new approaches are used in the design of the algorithm. The Apache Spark and Hadoop frameworks are used for the distribution of the algorithm. The Pareto optimality approach is used for solving the multi-objective benchmarks. The suggested algorithm, which is able to solve large-size problems in short times, is compared with successful algorithms from the literature. The results prove the high speed and efficiency of the algorithm.
Keywords: distributed algorithms, apache-spark, Hadoop, flexible dynamic job shop scheduling, multi-objective optimization
Procedia PDF Downloads 354
1998 Economic Assessment of the Fish Solar Tent Dryers
Authors: Collen Kawiya
Abstract:
In an effort to reduce post-harvest losses and improve the supply of quality fish products in Malawi, fish solar tent dryers have been designed in the southern part of Lake Malawi for processing small fish species under the Cultivate Africa's Future (CultiAF) project. This study was done to promote the adoption of fish solar tent dryers by the many small-scale fish processors in Malawi through an assessment of the economic viability of these dryers. Using the project's baseline survey data, a business model for a constructed, ready-for-use solar tent dryer was developed, in which investment appraisal techniques were calculated alongside a sensitivity analysis. The study also conducted a risk analysis using the Monte Carlo simulation technique, yielding a probabilistic net present value. The investment appraisal results showed a net present value of US$8,756.85, an internal rate of return of 62%, higher than the 16.32% cost of capital, and a payback period of 1.64 years. The sensitivity analysis showed that only two input variables influenced the net present value of the fish solar dryer investment: the dried-fish selling prices, which correlated positively with the net present value, and the fresh-fish buying prices, which correlated negatively with it. The risk analysis showed that the chance of fish processors making a loss from this type of investment is 17.56%, and that there is only a 0.20 probability of experiencing a negative net present value. Lastly, the study found that the net present value of the fish solar tent dryer investment remains robust to changes in investors' levels of risk preference. With these results, it is concluded that fish solar tent dryers in Malawi are an economically viable investment because they improve the returns of the fish processing activity. As such, fish processors should adopt them by investing their money to construct and use them.
Keywords: investment appraisal, risk analysis, sensitivity analysis, solar tent drying
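The risk-analysis step can be reproduced in miniature: sample uncertain selling and buying prices, compute an NPV per draw, and report the probability of a negative outcome. All monetary figures below are invented placeholders rather than the study's data; only the 16.32% cost of capital is taken from the abstract.

```python
import numpy as np

def npv(cash_flows, rate):
    """Net present value of yearly cash flows, year 0 first."""
    years = np.arange(len(cash_flows))
    return float(np.sum(cash_flows / (1.0 + rate) ** years))

rng = np.random.default_rng(0)
n_sims, horizon, rate = 10_000, 5, 0.1632       # 16.32% cost of capital
investment = 1_500.0                             # hypothetical construction cost, US$
kg_per_year = 800.0                              # hypothetical throughput

npvs = np.empty(n_sims)
for s in range(n_sims):
    sell = rng.normal(3.0, 0.5, horizon)         # dried-fish selling price, $/kg
    buy = rng.normal(1.2, 0.3, horizon)          # fresh-fish buying price, $/kg
    margin = (sell - buy) * kg_per_year          # yearly operating cash flow
    npvs[s] = npv(np.concatenate(([-investment], margin)), rate)

print("mean NPV:", round(npvs.mean(), 1),
      " P(NPV < 0):", round((npvs < 0).mean(), 3))
```

The simulation makes the two sensitivities visible directly: raising the selling-price mean pushes the NPV distribution up, while raising the buying-price mean pushes it down.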
Procedia PDF Downloads 280
1997 Optimization of Platinum Utilization by Using Stochastic Modeling of Carbon-Supported Platinum Catalyst Layer of Proton Exchange Membrane Fuel Cells
Authors: Ali Akbar, Seungho Shin, Sukkee Um
Abstract:
The composition of catalyst layers (CLs) plays an important role in the overall performance and cost of proton exchange membrane fuel cells (PEMFCs). Low platinum loading, high utilization, and more durable catalysts remain critical challenges for PEMFCs. In this study, a three-dimensional material network model is developed to visualize the nanostructure of carbon-supported platinum Pt/C and Pt/VACNT catalysts, with the aim of maximizing catalyst utilization. The quadruple-phase, randomly generated CL domain is formulated using a quasi-random, stochastic Monte Carlo-based method. This unique statistical approach to the four-phase (i.e., pore, ionomer, carbon, and platinum) model closely mimics the manufacturing process of CLs. Various CL compositions are simulated to elucidate the effect of electron, ion, and mass transport paths on the catalyst utilization factor. Based on the simulation results, the effects of key factors such as porosity, ionomer content, and Pt weight percentage in the Pt/C catalyst have been investigated at the representative elementary volume (REV) scale. The results show that the relationship between ionomer content and Pt utilization is in good agreement with existing experimental calculations. Furthermore, the model is applied to state-of-the-art Pt/VACNT CLs. The simulation results for Pt/VACNT-based CLs show exceptionally high catalyst utilization compared to Pt/C with different composition ratios. More importantly, this study reveals that the maximum catalyst utilization depends on the spacing between the carbon nanotubes for Pt/VACNT. The current simulation results are expected to be utilized in the optimization of the nano-structural construction and composition of Pt/C and Pt/VACNT CLs.
Keywords: catalyst layer, platinum utilization, proton exchange membrane fuel cell, stochastic modeling
Procedia PDF Downloads 121
1996 Reliability Assessment and Failure Detection in a Complex Human-Machine System Using Agent-Based and Human Decision-Making Modeling
Authors: Sanjal Gavande, Thomas Mazzuchi, Shahram Sarkani
Abstract:
In a complex aerospace operational environment, identifying failures in a procedure involving multiple human-machine interactions is difficult. These failures could lead to accidents causing loss of hardware or human life. The likelihood of failure further increases if operational procedures are tested on a novel system with multiple human-machine interfaces and no prior performance data. The existing approach in the literature of reviewing complex operational tasks in flowchart or tabular form does not provide any insight into potential system failures due to human decision-making ability. To address these challenges, this research explores an agent-based simulation approach for reliability assessment and fault detection in complex human-machine systems, utilizing a human decision-making model. The simulation predicts the emergent behavior of the system due to the interaction between humans, with their decision-making capability, and the varying states of the machine, and vice versa. Overall system reliability is evaluated based on a defined set of success-criteria conditions and the number of recorded failures over an assigned limit of Monte Carlo runs. The study also aims at identifying high-likelihood failure locations in the system. The research concludes that system reliability and failures can be effectively calculated when individual human and machine agent states are clearly defined. This research is limited to the operations phase of a system lifecycle process in an aerospace environment only. Further exploration of the proposed agent-based and human decision-making model will be required to allow for a greater understanding of this topic for application outside of the operations domain.
Keywords: agent-based model, complex human-machine system, human decision-making model, system reliability assessment
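A stripped-down sketch of the approach: each Monte Carlo run steps through a procedure in which either the machine or the human agent can fail (a workload-dependent human error rate stands in for a fuller decision-making model), and reliability plus the most likely failure locations are estimated from the run counts. All probabilities are hypothetical.

```python
import numpy as np

def run_procedure(rng, p_machine_fault=0.02, p_human_error=0.05, steps=10):
    """One Monte Carlo run of a human-machine procedure. Each step can fail
    through a machine fault or a wrong human decision; returns where."""
    for step in range(steps):
        fatigue = 1.0 + 0.1 * step                # human error grows with workload
        if rng.random() < p_machine_fault:
            return False, ("machine", step)
        if rng.random() < p_human_error * fatigue:
            return False, ("human", step)
    return True, None                             # success criteria met

rng = np.random.default_rng(0)
n_runs, successes, failures = 100_000, 0, {}
for _ in range(n_runs):
    ok, where = run_procedure(rng)
    if ok:
        successes += 1
    else:
        failures[where] = failures.get(where, 0) + 1

print("estimated reliability:", successes / n_runs)
print("top failure locations:", sorted(failures.items(), key=lambda kv: -kv[1])[:3])
```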
Procedia PDF Downloads 169
1995 Enhancing Precision Agriculture through Object Detection Algorithms: A Study of YOLOv5 and YOLOv8 in Detecting Armillaria spp.
Authors: Christos Chaschatzis, Chrysoula Karaiskou, Pantelis Angelidis, Sotirios K. Goudos, Igor Kotsiuba, Panagiotis Sarigiannidis
Abstract:
Over the past few decades, the rapid growth of the global population has led to the need to increase agricultural production and improve the quality of agricultural goods. There is a growing focus in contemporary society on environmentally friendly solutions, sustainable production, and minimally fertilized biological products. Precision agriculture has the potential to incorporate a wide range of innovative solutions through the development of machine learning algorithms. YOLOv5 and YOLOv8 are two of the most advanced object detection algorithms, capable of accurately recognizing objects in real time. Detecting tree diseases is crucial for improving the food production rate and ensuring sustainability. This research aims to evaluate the efficacy of YOLOv5 and YOLOv8 in detecting the symptoms of Armillaria spp. in sweet cherry trees and determining their health status, with the goal of enhancing the robustness of precision agriculture. Additionally, this study explores Computer Vision (CV) techniques with machine learning algorithms to improve the efficiency of the detection process.
Keywords: Armillaria spp., machine learning, precision agriculture, smart farming, sweet cherry trees, YOLOv5, YOLOv8
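For orientation, fine-tuning and inference with YOLOv8 follow the pattern below using the Ultralytics package (YOLOv5 is trained analogously from its own repository); the dataset YAML, image name, and hyperparameters are placeholders rather than the study's configuration.

```python
# Minimal fine-tuning sketch with the Ultralytics API (pip install ultralytics).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                # pretrained nano checkpoint
model.train(data="armillaria.yaml",       # hypothetical dataset config
            epochs=100, imgsz=640)
metrics = model.val()                     # mAP etc. on the validation split

results = model.predict("cherry_tree.jpg", conf=0.25)
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)    # class, confidence, bounding box
```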
Procedia PDF Downloads 115
1994 Discriminant Analysis as a Function of Predictive Learning to Select Evolutionary Algorithms in Intelligent Transportation System
Authors: Jorge A. Ruiz-Vanoye, Ocotlán Díaz-Parra, Alejandro Fuentes-Penna, Daniel Vélez-Díaz, Edith Olaco García
Abstract:
In this paper, we present the use of discriminant analysis to select the evolutionary algorithms that best solve instances of the vehicle routing problem with time windows (VRPTW). We use indicators as independent variables to obtain the classification criteria, and the best algorithm among the generic genetic algorithm (GA), random search (RS), steady-state genetic algorithm (SSGA), and sexual genetic algorithm (SXGA) as the dependent variable for the classification. The discriminant classifier was trained with classic VRPTW instances obtained from the Solomon benchmark. The discriminant analysis achieved a classification accuracy of 66.7%.
Keywords: Intelligent Transportation Systems, data-mining techniques, evolutionary algorithms, discriminant analysis, machine learning
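The selection pipeline can be sketched with scikit-learn's LDA: instance indicators in, best-performing algorithm out. The feature matrix and labels below are synthetic, so the printed accuracy demonstrates only the procedure, not the paper's 66.7% result.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Rows: VRPTW instances described by indicator features (e.g. number of
# customers, time-window tightness). Labels: which solver performed best.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))                    # synthetic indicators
y = rng.integers(0, 4, size=120)                 # 0=GA, 1=RS, 2=SSGA, 3=SXGA

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print("classification accuracy: %.1f%%" % (100 * scores.mean()))

lda.fit(X, y)
new_instance = rng.normal(size=(1, 4))           # indicators of an unseen instance
best = ["GA", "RS", "SSGA", "SXGA"][int(lda.predict(new_instance)[0])]
print("recommended algorithm:", best)
```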
Procedia PDF Downloads 472
1993 Diffusion Adaptation Strategies for Distributed Estimation Based on the Family of Affine Projection Algorithms
Authors: Mohammad Shams Esfand Abadi, Mohammad Ranjbar, Reza Ebrahimpour
Abstract:
This work addresses the distributed estimation problem in a diffusion network based on the adapt-then-combine (ATC) and combine-then-adapt (CTA) selective partial update normalized least mean squares (SPU-NLMS) algorithms. We also extend this approach to the dynamic selection affine projection algorithm (DS-APA), establishing ATC-DS-APA and CTA-DS-APA. The purpose of the ATC-SPU-NLMS and CTA-SPU-NLMS algorithms is to reduce the computational complexity by updating only selected blocks of the weight coefficients at every iteration. In CTA-DS-APA and ATC-DS-APA, the number of input vectors is selected dynamically. Diffusion cooperation strategies have been shown to provide good performance based on these algorithms. The good performance of the introduced algorithms is illustrated with various experimental results.
Keywords: selective partial update, affine projection, dynamic selection, diffusion, adaptive distributed networks
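As background, the adapt-then-combine structure common to this family can be sketched with a plain diffusion NLMS (i.e., without the selective-partial-update or dynamic-selection refinements of the paper): each node adapts on its own data, then combines its neighbours' intermediate estimates.

```python
import numpy as np

def atc_diffusion_nlms(X, d, A, mu=0.5, eps=1e-6):
    """Adapt-then-combine diffusion NLMS over a network.
    X[k]: (T, M) regressors of node k; d[k]: (T,) its measurements;
    A: (N, N) left-stochastic combination matrix (columns sum to 1)."""
    N, (T, M) = len(X), X[0].shape
    w = np.zeros((N, M))
    for t in range(T):
        psi = np.empty_like(w)
        for k in range(N):                       # 1) local adaptation step
            x = X[k][t]
            e = d[k][t] - x @ w[k]
            psi[k] = w[k] + mu * e * x / (eps + x @ x)
        for k in range(N):                       # 2) combine neighbours' estimates
            w[k] = A[:, k] @ psi
    return w

rng = np.random.default_rng(0)
N, T, M = 4, 2000, 5
w_true = rng.normal(size=M)
X = [rng.normal(size=(T, M)) for _ in range(N)]
d = [x @ w_true + 0.05 * rng.normal(size=T) for x in X]
A = np.full((N, N), 1.0 / N)                     # fully connected, uniform weights
w = atc_diffusion_nlms(X, d, A)
print("max estimation error:", float(np.abs(w - w_true).max()))
```

The CTA variant simply swaps the two loops: estimates are combined first and then adapted with the local data.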
Procedia PDF Downloads 708
1992 Algorithms used in Spatial Data Mining GIS
Authors: Vahid Bairami Rad
Abstract:
Extracting knowledge from spatial data such as GIS data is important for reducing the data and extracting information. Therefore, the development of new techniques and tools that support humans in transforming data into useful knowledge has been the focus of the relatively new and interdisciplinary research area of 'knowledge discovery in databases'. Thus, we introduce a set of database primitives, or basic operations, for spatial data mining which are sufficient to express most of the spatial data mining algorithms from the literature. This approach has several advantages. Similar to the relational standard language SQL, the use of standard primitives will speed up the development of new data mining algorithms and will also make them more portable. We introduce a database-oriented framework for spatial data mining based on the concepts of neighborhood graphs and paths. A small set of basic operations on these graphs and paths is defined as database primitives for spatial data mining. Furthermore, techniques to efficiently support the database primitives in a commercial DBMS are presented.
Keywords: spatial data base, knowledge discovery database, data mining, spatial relationship, predictive data mining
Procedia PDF Downloads 462
1991 Prophylactic Replacement of Voice Prosthesis: A Study to Predict Prosthesis Lifetime
Authors: Anne Heirman, Vincent van der Noort, Rob van Son, Marije Petersen, Lisette van der Molen, Gyorgy Halmos, Richard Dirven, Michiel van den Brekel
Abstract:
Objective: Voice prosthesis leakage significantly impacts the quality of life of laryngectomized patients, causing insecurity and frequent unplanned hospital visits and costs. In this study, the concept of prophylactic voice prosthesis replacement was explored to prevent leakages. Study Design: A retrospective cohort study. Setting: Tertiary hospital. Methods: Device lifetimes and voice prosthesis replacements from a retrospective cohort, including all patients who underwent laryngectomy between 2000 and 2012 in the Netherlands Cancer Institute, were used to calculate the number of voice prostheses needed per patient per year if 70% of the leakages were prevented by prophylactic replacement. Various strategies for the timing of prophylactic replacement were considered: adaptive strategies based on the individual patient's replacement history, and fixed strategies based on the results of patients with similar voice prosthesis or treatment characteristics. Results: Patients used a median of 3.4 voice prostheses per year (range 0.1-48.1). We found high inter- and intra-patient variability in device lifetime. Under prophylactic replacement, this would become a median of 9.4 voice prostheses per year, meaning replacement every 38 days and implying more than six additional voice prostheses per patient per year. The individual adaptive model showed that preventing 70% of the leakages was impossible for most patients; only a median of 25% could be prevented. Monte Carlo simulations showed that prophylactic replacement is not feasible due to the high coefficient of variation (standard deviation/mean) in device lifetime. Conclusion: Based on our simulations, prophylactic replacement of voice prostheses is not feasible due to high inter- and intra-patient variation in device lifetime.
Keywords: voice prosthesis, voice rehabilitation, total laryngectomy, prosthetic leakage, device lifetime
Procedia PDF Downloads 131
1990 A Hierarchical Bayesian Calibration of Data-Driven Models for Composite Laminate Consolidation
Authors: Nikolaos Papadimas, Joanna Bennett, Amir Sakhaei, Timothy Dodwell
Abstract:
Composite modeling of consolidation processes plays an important role in process and part design by indicating the formation of possible unwanted defects prior to expensive experimental iterative trial-and-development programs. Composite materials in their uncured state display complex constitutive behavior, which has received much academic interest, and several different models have been proposed. Modeling and statistical errors which arise from this fitting will propagate through any simulation in which the material model is used. A general hyperelastic polynomial representation is proposed, which can be readily implemented in various nonlinear finite element packages; in our case, FEniCS was chosen. The coefficients are assumed uncertain, and the distribution of parameters is therefore learned using Markov Chain Monte Carlo (MCMC) methods. In engineering, the approach often followed is to select a single set of model parameters which, on average, best fits a set of experiments; there are good statistical reasons why this is not a rigorous approach to take. To overcome these challenges, a hierarchical Bayesian framework is proposed in which the population distribution of model parameters is inferred from an ensemble of experimental tests. The resulting sampled distribution of hyperparameters is approximated using maximum entropy methods, so that the distribution of samples can be readily sampled when embedded within a stochastic finite element simulation. The methodology is validated and demonstrated on a set of consolidation experiments of AS4/8852 with various stacking sequences. The resulting distributions are then applied to stochastic finite element simulations of the consolidation of curved parts, leading to a distribution of possible model outputs. With this, the paper, as far as the authors are aware, presents the first stochastic finite element implementation in composite process modelling.
Keywords: data-driven, material consolidation, stochastic finite elements, surrogate models
Procedia PDF Downloads 146
1989 Application of Mathematical Models for Conducting Long-Term Metal Fume Exposure Assessments for Workers in a Shipbuilding Factory
Authors: Shu-Yu Chung, Ying-Fang Wang, Shih-Min Wang
Abstract:
Conducting long-term exposure assessments is important for workers exposed to chemicals with chronic effects. However, such assessments usually encounter several constraints, including cost, workers' willingness, and interference with work practice, leading to inadequate long-term exposure data in the real world. In this study, an integrated approach was developed for conducting long-term exposure assessments for welding workers in a shipbuilding factory. A laboratory study was conducted to yield the fume generation rates under various operating conditions. The results and the measured environmental conditions were applied to the near-field/far-field (NF/FF) model to predict long-term fume exposures via Monte Carlo simulation. The predicted long-term concentrations were then used to determine the prior distribution in a Bayesian decision analysis (BDA). Finally, the resultant posterior distributions were used to assess long-term exposure and served as the basis for initiating control strategies for shipbuilding workers. Results show that the NF/FF model was suitable for predicting exposures to the metal contents of welding fume, and the resultant posterior distributions could effectively assess the long-term exposures of shipbuilding welders. Welders' long-term Fe, Mn, and Pb exposures were found to be highly likely to exceed the action level, indicating that preventive measures should be taken immediately to reduce welders' exposures. Though the resultant posterior distribution can only be regarded as the best solution based on the currently available prediction and monitoring data, the proposed integrated approach can be regarded as a possible solution for conducting long-term exposure assessment in the field.
Keywords: Bayesian decision analysis, exposure assessment, near field and far field model, shipbuilding industry, welding fume
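The forward (prior-generating) step can be sketched as a steady-state two-box NF/FF model driven by Monte Carlo draws of the inputs; the distributions and the action level below are invented for illustration, not the study's measured values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative input distributions (placeholders, not measured data).
G = rng.lognormal(mean=np.log(5.0), sigma=0.5, size=n)   # fume generation, mg/min
Q = rng.uniform(20.0, 60.0, size=n)                       # room ventilation, m3/min
beta = rng.uniform(2.0, 8.0, size=n)                      # NF/FF air exchange, m3/min

# Steady-state two-box model: the far field sees G/Q, the near field
# (the welder's breathing zone) adds the G/beta term on top of it.
C_ff = G / Q
C_nf = G / Q + G / beta

action_level = 0.5        # hypothetical action level for a fume metal, mg/m3
print("median near-field exposure: %.2f mg/m3" % np.median(C_nf))
print("P(exposure > action level): %.1f%%" % (100 * (C_nf > action_level).mean()))
```

In the full approach, the simulated exposure distribution serves as the BDA prior, which field measurements then update into the posterior used for decisions.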
Procedia PDF Downloads 142
1988 Computational Modeling of Load Limits of Carbon Fibre Composite Laminates Subjected to Low-Velocity Impact Utilizing Convolution-Based Fast Fourier Data Filtering Algorithms
Authors: Farhat Imtiaz, Umar Farooq
Abstract:
In this work, we developed a computational model to predict ply-level failure in impacted composite laminates. Data obtained from physical testing of flat- and round-nose impacts on 8-, 16-, and 24-ply laminates were considered. Routine inspections of the tested laminates were carried out to approximate the ply-by-ply damage incurred. Plots consisting of load-time, load-deflection, and energy-time histories were drawn to approximate the inflicted damage. Unwanted data logged during the impact tests, due to restrictions of the testing and logging systems, were also filtered. Conventional filters (built-in, statistical, and numerical) reliably predicted load thresholds for relatively thin laminates such as eight- and sixteen-ply panels. However, relatively thick laminates such as twenty-four-ply laminates impacted by a flat nose generated clipped data which can only be de-noised using oscillatory algorithms. The literature search reveals that modern oscillatory data filtering and extrapolation algorithms have scarcely been utilized. This investigation reports applications of filtering and extrapolation of the clipped data utilising a fast Fourier convolution algorithm to predict load thresholds. Some of the results were related to the impact-induced damage areas identified with ultrasonic C-scans and found to be in acceptable agreement. Based on these consistent findings, applying modern data filtering and extrapolation algorithms to data logged by the existing machines has efficiently enhanced data interpretation without resorting to extra resources. The algorithms could be useful for approximating impact-induced damage in similar cases.
Keywords: fibre reinforced laminates, fast Fourier algorithms, mechanical testing, data filtering and extrapolation
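A toy reconstruction of the filtering-plus-extrapolation idea: a clipped impact signal is smoothed by FFT-based convolution with a window, and the saturated peak is bridged by a parabola fitted to the shoulders of the clipped plateau. The signal, clip level, and window length are arbitrary choices for demonstration only, not the paper's data or exact algorithm.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
dt = 1e-4                                          # 10 kHz logging rate
t = np.arange(0.0, 0.02, dt)
load = 4.0 * np.exp(-((t - 0.01) / 0.002) ** 2)    # idealised impact event, kN
noisy = load + 0.3 * rng.normal(size=t.size)
clipped = np.minimum(noisy, 3.5)                   # logger saturates at 3.5 kN

win = np.hanning(21)                               # low-pass smoothing kernel
win /= win.sum()
smooth = fftconvolve(clipped, win, mode="same")    # FFT-based convolution filter

plateau = np.flatnonzero(clipped >= 3.45)          # saturated samples
shoulder = np.r_[plateau[0] - np.arange(1, 20), plateau[-1] + np.arange(1, 20)]
coeff = np.polyfit(t[shoulder], smooth[shoulder], 2)   # parabola over the gap
peak_estimate = np.polyval(coeff, t[plateau]).max()
print("estimated peak load: %.2f kN (true peak %.2f kN)" % (peak_estimate, load.max()))
```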
Procedia PDF Downloads 135