Search results for: grasshopper optimization algorithm
1745 Optical Imaging Based Detection of Solder Paste in Printed Circuit Board Jet-Printing Inspection
Authors: D. Heinemann, S. Schramm, S. Knabner, D. Baumgarten
Abstract:
Purpose: Applying solder paste to printed circuit boards (PCB) with stencils has been the method of choice over the past years. A new method uses a jet printer to deposit tiny droplets of solder paste through an ejector mechanism onto the board. This allows for more flexible PCB layouts with smaller components. Due to the viscosity of the solder paste, air blisters can be trapped in the cartridge. This can lead to missing solder joints or deviations in the applied solder volume. Therefore, a built-in and real-time inspection of the printing process is needed to minimize uncertainties and increase the efficiency of the process by immediate correction. The objective of the current study is the design of an optimal imaging system and the development of an automatic algorithm for the detection of applied solder joints from the captured optical images. Methods: In a first approach, a camera module connected to a microcomputer and LED strips are employed to capture images of the printed circuit board under four different illuminations (white, red, green and blue). Subsequently, an improved system including a ring light, an objective lens, and a monochromatic camera was set up to acquire higher quality images. The obtained images can be divided into three main components: the PCB itself (i.e., the background), the reflections induced by unsoldered positions or screw holes and the solder joints. Non-uniform illumination is corrected by estimating the background using a morphological opening and subtraction from the input image. Image sharpening is applied in order to prevent error pixels in the subsequent segmentation. The intensity thresholds which divide the main components are obtained from the multimodal histogram using three probability density functions. Determining the intersections delivers proper thresholds for the segmentation. Remaining edge gradients produce small error areas which are removed by another morphological opening. For quantitative analysis of the segmentation results, the Dice coefficient is used. Results: The obtained PCB images show a significant gradient in all RGB channels, resulting from ambient light. Using different lightings and color channels, 12 images of a single PCB are available. A visual inspection and the investigation of 27 specific points show the best differentiation between those points using red lighting and a green color channel. Estimating two thresholds from analyzing the multimodal histogram of the corrected images and using them for segmentation precisely extracts the solder joints. The comparison of the results to manually segmented images yields high sensitivity and specificity values. Analyzing the overall result delivers a Dice coefficient of 0.89, which varies for single-object segmentations between 0.96 for well-segmented solder joints and 0.25 for single negative outliers. Conclusion: Our results demonstrate that the presented optical imaging system and the developed algorithm can robustly detect solder joints on printed circuit boards. Future work will comprise a modified lighting system which allows for more precise segmentation results using structure analysis.Keywords: printed circuit board jet-printing, inspection, segmentation, solder paste detection
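A minimal sketch of the segmentation steps described above (background estimation by morphological opening, two-threshold segmentation, small-area cleanup, and the Dice coefficient), assuming a 2D grayscale image; the structuring-element sizes and the assignment of the brightest class to the solder joints are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy import ndimage

def segment_solder_joints(img, t_low, t_high, opening_size=25, cleanup_size=3):
    """img: 2D grayscale PCB image; t_low/t_high: thresholds from the histogram analysis."""
    # Estimate the non-uniform background with a grey-level opening and subtract it.
    background = ndimage.grey_opening(img, size=(opening_size, opening_size))
    corrected = img.astype(float) - background

    # Two thresholds split the image into background, reflections and solder joints;
    # here the brightest class is assumed to be the solder joints.
    reflections = (corrected > t_low) & (corrected <= t_high)
    solder = corrected > t_high

    # Remove small error areas left by residual edge gradients.
    solder = ndimage.binary_opening(solder, structure=np.ones((cleanup_size, cleanup_size)))
    return solder, reflections

def dice_coefficient(pred, truth):
    """Overlap between a predicted and a manually segmented binary mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())
```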
Procedia PDF Downloads 336
1744 Preventing the Drought of Lakes by Using Deep Reinforcement Learning in France
Authors: Farzaneh Sarbandi Farahani
Abstract:
Drought and the decrease in the level of lakes in recent years, due to global warming and the excessive use of the water resources feeding lakes, are of great importance, and this research provides a structure to investigate this issue. First, the information required for simulating lake drought is provided with strong references and the necessary assumptions. An Entity-Component-System (ECS) structure has been used for the simulation, which allows assumptions to be considered flexibly. Three major users (i.e., industry, agriculture, and domestic users) consume water from groundwater and surface water (i.e., streams, rivers, and lakes). Lake Mead has been considered for the simulation, and the information necessary to investigate its drought has also been provided. The results are presented in the form of a scenario-based design and optimal strategy selection. For optimal strategy selection, a deep reinforcement learning algorithm is developed to select the best set of strategies among all possible projects. These results can provide a better view of how to plan to prevent lake drought.Keywords: drought simulation, Mead lake, entity component system programming, deep reinforcement learning
Procedia PDF Downloads 91
1743 Classification of Echo Signals Based on Deep Learning
Authors: Aisulu Tileukulova, Zhexebay Dauren
Abstract:
Radar plays an important role because it is widely used in civil and military fields. Target detection is one of the most important radar applications. In radar facilities, the accuracy of detecting inconspicuous aerial objects is reduced against the background of noise. Convolutional neural networks can be used to improve the recognition of this type of aerial object. The purpose of this work is to develop an algorithm for recognizing aerial objects using convolutional neural networks, as well as to train the neural network. In this paper, the structure of the convolutional neural network (CNN) consists of different types of layers: 8 convolutional layers and 3 fully connected (perceptron) layers. ReLU is used as the activation function in the convolutional layers, while the last layer uses softmax. A data set has to be formed to train the neural network to detect a target. We built a confusion matrix of the CNN model to measure its effectiveness. The results showed that the accuracy when testing the model was 95.7%. Classification of echo signals using the CNN shows high accuracy and significantly speeds up the process of predicting the target.Keywords: radar, neural network, convolutional neural network, echo signals
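As an illustration of the described architecture (8 convolutional layers with ReLU, 3 fully connected layers, softmax output), a minimal PyTorch sketch follows; the channel counts, input size, and number of target classes are assumptions, since the abstract does not specify them:

```python
import torch
import torch.nn as nn

class EchoCNN(nn.Module):
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        layers, channels = [], in_channels
        for out_channels in (16, 16, 32, 32, 64, 64, 128, 128):   # 8 convolutional layers
            layers += [nn.Conv2d(channels, out_channels, kernel_size=3, padding=1),
                       nn.ReLU()]
            channels = out_channels
        self.features = nn.Sequential(*layers, nn.AdaptiveAvgPool2d((4, 4)))
        self.classifier = nn.Sequential(                           # 3 fully connected layers
            nn.Linear(128 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        x = self.features(x).flatten(1)
        # Softmax on the output layer; for training, CrossEntropyLoss on raw logits is typical.
        return torch.softmax(self.classifier(x), dim=1)

model = EchoCNN()
dummy = torch.randn(8, 1, 64, 64)      # batch of 8 single-channel echo "images" (assumed size)
print(model(dummy).shape)              # torch.Size([8, 2])
```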
Procedia PDF Downloads 353
1742 Flywheel Energy Storage Control Using SVPWM for Small Satellites Application
Authors: Noha El-Gohary, Thanaa El-Shater, A. A. Mahfouz, M. M. Sakr
Abstract:
High power conversion efficiency and long lifetime are important goals when designing a power supply subsystem for satellite applications. To fulfill these goals, this paper presents a power supply subsystem for small satellites in which a flywheel energy storage system is used as a secondary power source instead of a chemical battery. The model of the flywheel energy storage system is introduced; a DC bus regulation control algorithm for charging and discharging of the flywheel, based on the space vector pulse width modulation technique and motor current control, is also introduced. Simulation results show the operation of the flywheel in charging and discharging modes during the illumination and shadow periods. The advantages of the proposed system are confirmed by the simulation results of the power supply system.Keywords: small-satellites, flywheel energy storage system, space vector pulse width modulation, power conversion
Procedia PDF Downloads 400
1741 Optimal Maintenance Policy for a Partially Observable Two-Unit System
Authors: Leila Jafari, Viliam Makis, G. B. Akram Khaleghei
Abstract:
In this paper, we present a maintenance model of a two-unit series system with economic dependence. Unit#1, which is considered to be more expensive and more important, is subject to condition monitoring (CM) at equidistant, discrete time epochs and unit#2, which is not subject to CM, has a general lifetime distribution. The multivariate observation vectors obtained through condition monitoring carry partial information about the hidden state of unit#1, which can be in a healthy or a warning state while operating. Only the failure state is assumed to be observable for both units. The objective is to find an optimal opportunistic maintenance policy minimizing the long-run expected average cost per unit time. The problem is formulated and solved in the partially observable semi-Markov decision process framework. An effective computational algorithm for finding the optimal policy and the minimum average cost is developed and illustrated by a numerical example.Keywords: condition-based maintenance, semi-Markov decision process, multivariate Bayesian control chart, partially observable system, two-unit system
Procedia PDF Downloads 460
1740 Phosphorus Recovery Optimization in Microbial Fuel Cell
Authors: Abdullah Almatouq
Abstract:
Understanding the impact of key operational variables on concurrent energy generation and phosphorus recovery in a microbial fuel cell is required to improve the process and reduce the operational cost. In this study, full factorial design (FFD) and central composite design (CCD) were employed to identify the effect of influent chemical oxygen demand (COD) concentration and cathode aeration flow rate on energy generation and phosphorus (P) recovery and to optimise MFC power density and P recovery. Results showed that influent COD concentration and cathode aeration flow rate had a significant effect on power density, coulombic efficiency, phosphorus precipitation efficiency and phosphorus precipitation rate at the cathode. P precipitation was negatively affected by the generated current during the batch duration. The generated energy was reduced due to struvite being precipitated on the cathode surface, which might obstruct the mass transfer of ions and oxygen. A response surface mathematical model was used to predict the optimum operating conditions, which resulted in a maximum power density and phosphorus precipitation efficiency of 184 mW/m² and 84%, corresponding to COD = 1700 mg/L and aeration flow rate = 210 mL/min. The findings highlight the importance of the operational conditions for energy generation and phosphorus recovery.Keywords: energy, microbial fuel cell, phosphorus, struvite
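A small sketch of the response-surface step: fitting a second-order model of power density in COD and aeration flow rate by least squares and locating its maximum on a grid. The data points below are placeholders, not measurements from the study:

```python
# y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2, fitted by least squares.
import numpy as np

cod   = np.array([800.0, 800.0, 1700.0, 1700.0, 2600.0, 2600.0])   # mg/L (illustrative)
air   = np.array([100.0, 300.0, 100.0, 300.0, 100.0, 300.0])       # mL/min (illustrative)
power = np.array([90.0, 110.0, 160.0, 175.0, 120.0, 130.0])        # mW/m^2 (illustrative)

X = np.column_stack([np.ones_like(cod), cod, air, cod * air, cod**2, air**2])
coef, *_ = np.linalg.lstsq(X, power, rcond=None)

# Evaluate the fitted surface on a grid and report the predicted optimum.
c_grid, a_grid = np.meshgrid(np.linspace(800, 2600, 181), np.linspace(100, 300, 101))
surface = (coef[0] + coef[1] * c_grid + coef[2] * a_grid + coef[3] * c_grid * a_grid
           + coef[4] * c_grid**2 + coef[5] * a_grid**2)
i, j = np.unravel_index(np.argmax(surface), surface.shape)
print(f"predicted optimum: COD={c_grid[i, j]:.0f} mg/L, aeration={a_grid[i, j]:.0f} mL/min")
```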
Procedia PDF Downloads 157
1739 Analytical Solutions for Tunnel Collapse Mechanisms in Circular Cross-Section Tunnels under Seepage and Seismic Forces
Authors: Zhenyu Yang, Qiunan Chen, Xiaocheng Huang
Abstract:
Reliable prediction of tunnel collapse remains a prominent challenge in the field of civil engineering. In this study, leveraging the nonlinear Hoek-Brown failure criterion and the upper-bound theorem, an analytical solution for the collapse surface of shallowly buried circular tunnels was derived, taking into account the coupled effects of surface loads and pore water pressures. Initially, surface loads and pore water pressures were introduced as external force factors; equating the internal energy dissipation rate to the work rate of the external forces yields the objective function. Subsequently, the variational method was employed for optimization, and the outcomes were juxtaposed with previous research findings. Furthermore, we utilized the deduced equation set to systematically analyze the influence of various rock mass parameters on collapse shape and extent. To validate our analytical solutions, a comparison with prior studies was executed. The corroboration underscored the efficacy of our proposed methodology, offering invaluable insights for collapse risk assessment in practical engineering applications.Keywords: tunnel roof stability, analytical solution, hoek–brown failure criterion, limit analysis
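For reference, the generalized Hoek-Brown failure criterion invoked above is commonly written as follows (a standard textbook form, not an equation reproduced from the paper):

```latex
\sigma_1 = \sigma_3 + \sigma_{ci}\left(m_b\,\frac{\sigma_3}{\sigma_{ci}} + s\right)^{a}
```

where sigma_1 and sigma_3 are the major and minor principal stresses, sigma_ci is the uniaxial compressive strength of the intact rock, and m_b, s, a are the rock-mass constants; in the upper-bound framework, the internal dissipation computed from this criterion is balanced against the work rate of the surface loads and pore water pressures.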
Procedia PDF Downloads 84
1738 Numerical Analysis of Crack's Effects in a Dissimilar Welded Joint
Authors: Daniel N. L. Alves, Marcelo C. Rodrigues, Jose G. de Almeida
Abstract:
The search for structural efficiency in mechanical systems has been strongly pursued with the aim of economic optimization and structural safety. Thus, understanding the response of materials when submitted to adverse conditions is essential for designing a safe project. This work investigates the presence of cracks in dissimilar welded joints (DWJ). Their fracture toughness responses depend upon the heterogeneity present in these joints. Thus, this work aims to analyze the behavior of the crack tip zone located in a buttery dissimilar welded joint (ASTM A-36, Inconel, and AISI 8630 M) used in the union of pipes present in offshore oil production lines. The crack was placed 1 mm from the Inconel-AISI 8630 M fusion line (FL), toward the AISI 8630 M. The Finite Element Method (FEM) was used to analyze the stress and strain fields generated during the loading imposed on the specimen. The numerical tool made it possible to observe the critical stress area, and a preferential plastic flow was also observed in the sample of the dissimilar welded joint, which can be considered a harbinger of the crack growth path. The results obtained through the numerical analysis showed a convergent behavior in relation to the plastic flow, qualitatively and quantitatively, in agreement with previously performed studies.Keywords: crack, dissimilar welded joint, numerical analysis, strain field, stress field
Procedia PDF Downloads 171
1737 Detect Cable Force of Cable Stayed Bridge from Accelerometer Data of SHM as Real Time
Authors: Nguyen Lan, Le Tan Kien, Nguyen Pham Gia Bao
Abstract:
The cable-stayed bridge belongs to the combined system, in which the cables are a major structural element. Cable-stayed bridges with large spans are often equipped with structural health monitoring systems to collect data for bridge health diagnosis. Cable tension monitoring is one part of such structural monitoring. It is common to measure cable tension either with a direct force sensor or with a cable vibration accelerometer, inferring the cable tension indirectly from the cable vibration frequency. Translating cable vibration acceleration data into real-time tension requires some necessary calculations and programming. This paper introduces the algorithm and the LabVIEW program that convert cable vibration acceleration data into real-time tension. The research results are applied to the monitoring systems of the Tran Thi Ly cable-stayed bridge and the Song Hieu cable-stayed bridge in Vietnam.Keywords: cable-stayed bridge, cable force, structural health monitoring (SHM), fast Fourier transform (FFT), real time, vibrations
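As an illustration of the frequency-to-tension conversion, a small sketch follows: it estimates the dominant vibration frequency with an FFT and applies the taut-string relation T = 4*mu*L^2*(f_n/n)^2, which neglects bending stiffness and sag. The cable length, mass per unit length, and sampling rate are illustrative assumptions, not data from the two bridges:

```python
import numpy as np

def cable_tension(accel, fs, length, mu, mode=1):
    """accel: acceleration samples; fs: sampling rate [Hz];
    length: cable length [m]; mu: mass per unit length [kg/m]."""
    accel = accel - np.mean(accel)                 # remove DC offset
    spectrum = np.abs(np.fft.rfft(accel * np.hanning(len(accel))))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    f_peak = freqs[np.argmax(spectrum[1:]) + 1]    # skip the zero-frequency bin
    tension = 4.0 * mu * length**2 * (f_peak / mode) ** 2
    return f_peak, tension

# Synthetic check: a 1.2 Hz fundamental on a 100 m cable with mu = 60 kg/m.
t = np.arange(0, 60, 1 / 100.0)
accel = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
print(cable_tension(accel, fs=100.0, length=100.0, mu=60.0))
```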
Procedia PDF Downloads 71
1736 Optimization of Sequential Thermophilic Bio-Hydrogen/Methane Production from Mono-Ethylene Glycol via Anaerobic Digestion: Impact of Inoculum to Substrate Ratio and N/P Ratio
Authors: Ahmed Elreedy, Ahmed Tawfik
Abstract:
This investigation aims to assess the effect of the inoculum to substrate ratio (ISR) and the nitrogen to phosphorus balance on simultaneous biohydrogen and methane production from the anaerobic decomposition of mono-ethylene glycol (MEG). Different ISRs were applied in the range between 2.65 and 13.23 gVSS/gCOD, whereas the tested N/P ratios were changed from 4.6 to 8.5; both under thermophilic conditions (55°C). The maximum obtained methane and hydrogen yields (MY and HY) of 151.86±10.8 and 22.27±1.1 mL/gCODinitial were recorded at ISRs of 5.29 and 3.78 gVSS/gCOD, respectively. In contrast, the ammonification process, in terms of net ammonia produced, was found to be ISR and COD/N ratio dependent, reaching its peak value of 515.5±31.05 mgNH4-N/L at an ISR and COD/N ratio of 13.23 gVSS/gCOD and 11.56, respectively. The optimum HY was enhanced by more than 1.45-fold when the N/P ratio declined from 8.5 to 4.6, whereas the MY was improved (1.6-fold) when the N/P ratio increased from 4.6 to 5.5, with no significant impact at an N/P ratio of 8.5. The results obtained revealed that the methane production was strongly influenced by initial ammonia, compared to initial phosphate. Likewise, the generation of ammonia markedly deteriorated from 535.25±41.5 to 238.33±17.6 mgNH4-N/L with increasing N/P ratio from 4.6 to 8.5. In the kinetic study, the Modified Gompertz equation was successfully fitted to the experimental outputs (R² > 0.9761).Keywords: mono-ethylene glycol, biohydrogen and methane, inoculum to substrate ratio, nitrogen to phosphorus balance, ammonification
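A brief sketch of fitting the Modified Gompertz model H(t) = P*exp(-exp(Rm*e/P*(lambda - t) + 1)) to cumulative gas-production data, as in the kinetic study above; the data points and initial guesses below are placeholders, not the measured values:

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_gompertz(t, P, Rm, lam):
    # P: ultimate production, Rm: maximum production rate, lam: lag phase.
    return P * np.exp(-np.exp(Rm * np.e / P * (lam - t) + 1.0))

t_data = np.array([0, 6, 12, 24, 36, 48, 72, 96], dtype=float)        # h
h_data = np.array([0, 2, 8, 60, 110, 135, 148, 151], dtype=float)     # mL/gCOD

popt, _ = curve_fit(modified_gompertz, t_data, h_data, p0=[150.0, 5.0, 10.0])
P, Rm, lam = popt
pred = modified_gompertz(t_data, *popt)
r2 = 1 - np.sum((h_data - pred) ** 2) / np.sum((h_data - h_data.mean()) ** 2)
print(f"P={P:.1f} mL/gCOD, Rm={Rm:.2f} mL/gCOD/h, lambda={lam:.1f} h, R^2={r2:.4f}")
```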
Procedia PDF Downloads 382
1735 Metrics and Methods for Improving Resilience in Agribusiness Supply Chains
Authors: Golnar Behzadi, Michael O'Sullivan, Tava Olsen, Abraham Zhang
Abstract:
By definition, increasing supply chain resilience improves the supply chain’s ability to return to normal, or to an even more desirable situation, quickly and efficiently after being hit by a disruption. This is especially critical in agribusiness supply chains where the products are perishable and have a short life-cycle. In this paper, we propose a resilience metric to capture and improve the recovery process in terms of both performance and time, of an agribusiness supply chain following either supply or demand-side disruption. We build a model that determines optimal supply chain recovery planning decisions and selects the best resilient strategies that minimize the loss of profit during the recovery time window. The model is formulated as a two-stage stochastic mixed-integer linear programming problem and solved with a branch-and-cut algorithm. The results show that the optimal recovery schedule is highly dependent on the duration of the time-window allowed for recovery. In addition, the profit loss during recovery is reduced by utilizing the proposed resilient actions.Keywords: agribusiness supply chain, recovery, resilience metric, risk management
Procedia PDF Downloads 397
1734 An Intelligent Thermal-Aware Task Scheduler in Multiprocessor System on a Chip
Authors: Sina Saadati
Abstract:
Multiprocessor Systems-on-Chip (MPSoCs) are widely used in modern computers to execute sophisticated software and applications. These systems include different processors for distinct aims. Most of the proposed task schedulers attempt to reduce energy consumption. In some schedulers, the processor's temperature is considered in order to increase the system's reliability and performance. In this research, we propose a new method for thermal-aware task scheduling which is based on an artificial neural network (ANN). This method enables us to consider a variety of factors in the scheduling process. Factors such as ambient temperature, season (which is important for some embedded systems), processor speed, and the computational type of the tasks have a complex relationship with the final temperature of the system. This issue can be solved using a machine learning algorithm. Another point is that our solution makes the system intelligent so that it can be adaptive. We have also shown that the computational complexity of the proposed method is low. As a consequence, it is also suitable for battery-powered systems.Keywords: task scheduling, MPSoC, artificial neural network, machine learning, computer architecture, artificial intelligence
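A hedged sketch of the idea: a small neural network maps (ambient temperature, season, core speed, task type) to a predicted temperature, and each task is assigned to the core with the lowest prediction. The feature set and training data below are synthetic assumptions, not the model or data from the paper:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# features: [ambient_temp_C, season(0-3), core_speed_GHz, task_type(0/1)]
X = rng.uniform([10, 0, 0.5, 0], [40, 3, 3.0, 1], size=(500, 4))
# Synthetic "measured" temperature, used only to make the example runnable.
y = 25 + 0.8 * X[:, 0] + 6 * X[:, 2] + 5 * X[:, 3] + rng.normal(0, 1, 500)

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0).fit(X, y)

def pick_core(ambient, season, task_type, core_speeds):
    """Return the index of the core with the lowest predicted temperature for this task."""
    candidates = np.array([[ambient, season, s, task_type] for s in core_speeds])
    predicted = model.predict(candidates)
    return int(np.argmin(predicted)), predicted

core, temps = pick_core(ambient=30, season=2, task_type=1, core_speeds=[1.0, 1.8, 2.6])
print(f"assign task to core {core}; predicted temperatures: {np.round(temps, 1)}")
```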
Procedia PDF Downloads 103
1733 Lowering Error Floors by Concatenation of Low-Density Parity-Check and Array Code
Authors: Cinna Soltanpur, Mohammad Ghamari, Behzad Momahed Heravi, Fatemeh Zare
Abstract:
Low-density parity-check (LDPC) codes have been shown to deliver capacity approaching performance; however, problematic graphical structures (e.g. trapping sets) in the Tanner graph of some LDPC codes can cause high error floors in bit-error-ratio (BER) performance under conventional sum-product algorithm (SPA). This paper presents a serial concatenation scheme to avoid the trapping sets and to lower the error floors of LDPC code. The outer code in the proposed concatenation is the LDPC, and the inner code is a high rate array code. This approach applies an interactive hybrid process between the BCJR decoding for the array code and the SPA for the LDPC code together with bit-pinning and bit-flipping techniques. Margulis code of size (2640, 1320) has been used for the simulation and it has been shown that the proposed concatenation and decoding scheme can considerably improve the error floor performance with minimal rate loss.Keywords: concatenated coding, low–density parity–check codes, array code, error floors
Procedia PDF Downloads 356
1732 Relation between Physical and Mechanical Properties of Concrete Paving Stones Using Neuro-Fuzzy Approach
Authors: Erion Luga, Aksel Seitllari, Kemal Pervanqe
Abstract:
This study investigates the relation between the physical and mechanical properties of concrete paving stones using a neuro-fuzzy approach. For this purpose, 200 samples of concrete paving stones were selected randomly from different sources. The first phase included the determination of the physical properties of the samples, such as water absorption capacity, porosity, and unit weight. After that, the indirect tensile strength test and compressive strength test of the samples were performed. In the second phase, an adaptive neuro-fuzzy approach was employed to simulate the nonlinear mapping between the above-mentioned physical properties and the mechanical properties of the paving stones. The neuro-fuzzy models use a Sugeno-type fuzzy inference system. The models' parameters were adapted using a hybrid learning algorithm, and the input space was fuzzified by considering grid partitioning. Based on the observed data and the data estimated through the ANFIS models, it is concluded that the neuro-fuzzy system exhibits a satisfactory performance.Keywords: paving stones, physical properties, mechanical properties, ANFIS
Procedia PDF Downloads 342
1731 Cost Effective Real-Time Image Processing Based Optical Mark Reader
Authors: Amit Kumar, Himanshu Singal, Arnav Bhavsar
Abstract:
In this modern era of automation, most academic and competitive exams use Multiple Choice Questions (MCQ). The responses to these MCQ-based exams are recorded on the Optical Mark Reader (OMR) sheet. Evaluation of the OMR sheet requires separate specialized machines for scanning and marking. The sheets used by these machines are special and cost more than a normal sheet. The available process is uneconomical and dependent on paper thickness, scanning quality, paper orientation, special hardware, and customized software. This study tries to tackle the problem of evaluating the OMR sheet without any special hardware, making the whole process economical. We propose an image processing based algorithm which can be used to read and evaluate the scanned OMR sheets with no special hardware required. It eliminates the use of special OMR sheets; responses recorded on a normal sheet are enough for evaluation. The proposed system takes care of color, brightness, rotation, and small imperfections in the OMR sheet images.Keywords: OMR, image processing, Hough circle transform, interpolation, detection, binary thresholding
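A minimal sketch of one possible realisation of such a pipeline (binary thresholding, Hough circle detection of answer bubbles, and a fill-ratio test); the parameter values are assumptions to be tuned per sheet layout and are not taken from the paper:

```python
import cv2
import numpy as np

def read_omr(path, fill_threshold=0.45):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.medianBlur(gray, 5)
    # Inverted binary image: marked (dark) pixels become white.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Locate candidate answer bubbles as circles.
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=80, param2=30, minRadius=8, maxRadius=18)
    marked = []
    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            mask = np.zeros_like(binary)
            cv2.circle(mask, (x, y), r, 255, thickness=-1)
            fill = cv2.countNonZero(cv2.bitwise_and(binary, mask)) / cv2.countNonZero(mask)
            if fill > fill_threshold:
                marked.append((x, y))
    return marked   # centers of bubbles judged as filled
```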
Procedia PDF Downloads 173
1730 Effect of Progressive Type-I Right Censoring on Bayesian Statistical Inference of Simple Step–Stress Acceleration Life Testing Plan under Weibull Life Distribution
Authors: Saleem Z. Ramadan
Abstract:
This paper discusses the effects of using progressive Type-I right censoring on the design of simple step-stress accelerated life testing using a Bayesian approach for Weibull life products under the assumption of the cumulative exposure model. The optimization criterion used in this paper is to minimize the expected pre-posterior variance of the Pth percentile time to failure. The model variables are the stress changing time and the stress value for the first step. A comparison between the conventional and the progressive Type-I right censoring is provided. The results have shown that progressive Type-I right censoring reduces the cost of testing at the expense of the test precision when the sample size is small. Moreover, the results have shown that using strong priors or a large sample size reduces the sensitivity of the test precision to the censoring proportion. Hence, progressive Type-I right censoring is recommended in these cases, as it reduces the cost of the test without greatly affecting its precision. Moreover, the results have shown that using direct or indirect priors affects the precision of the test.Keywords: reliability, accelerated life testing, cumulative exposure model, Bayesian estimation, progressive type-I censoring, Weibull distribution
Procedia PDF Downloads 505
1729 Vortices Structure in Internal Laminar and Turbulent Flows
Authors: Farid Gaci, Zoubir Nemouchi
Abstract:
A numerical study of laminar and turbulent fluid flows in a 90° bend of square section was carried out. Three-dimensional meshes, based on hexahedral cells, were generated. The QUICK scheme was employed to discretize the convective term in the transport equations. The SIMPLE algorithm was adopted to treat the velocity-pressure coupling. The flow structure obtained showed interesting features such as recirculation zones and counter-rotating pairs of vortices. The performance of three different turbulence models was evaluated: the standard k-ω model, the SST k-ω model and the Reynolds Stress Model (RSM). Overall, it was found that the multi-equation model performed better than the two-equation models. In fact, the existence of four pairs of counter-rotating cells, in the straight duct upstream of the bend, was predicted by the RSM closure but not by the standard eddy viscosity model or the SST k-ω model. The analysis of the results led to a better understanding of the induced three-dimensional secondary flows and the behavior of the local pressure coefficient and the friction coefficient.Keywords: curved duct, counter-rotating cells, secondary flow, laminar, turbulent
Procedia PDF Downloads 336
1728 SAR and B₁ Considerations for Multi-Nuclear RF Body Coils
Authors: Ria Forner
Abstract:
Introduction: Due to increases in the SNR at 7T and above, it becomes more favourable to make use of X-nuclear imaging. Integrated body coils tuned to 120MHz for 31P, 79MHz for 23Na, and 75 MHz for 13C at 7T were simulated with a human male, female, or child body model to assess strategies of use for metabolic MR imaging in the body. Methods: B1 and SAR efficiencies in the heart, liver, spleen, and kidneys were assessed using numerical simulations over the three frequencies with phase shimming. Results: B1+ efficiency is highly variable over the different organs, particularly for the highest frequency; however, local SAR efficiency remains relatively constant over the frequencies in all subjects. Although the optimal phase settings vary, one generic phase setting can be identified for each frequency at which the penalty in B1+ is at a max of 10%. Discussion: The simulations provide practical strategies for power optimization, B1 management, and maintaining safety. As expected, the B1 field is similar at 75MHz and 79MHz, but reduced at 120MHz. However, the B1 remains relatively constant when normalised by the square root of the peak local SAR. This is in contradiction to generalized SAR considerations of 1H MRI at different field strengths, which is defined by global SAR instead. Conclusion: Although the B1 decreases with frequency, SAR efficiency remains constant throughout the investigated frequency range. It is possible to shim the body coil to obtain a maximum of 10% extra B1+ in a specific organ in a body when compared to a generic setting.Keywords: birdcage, multi-nuclear, B1 shimming, 7 Tesla MRI, liver, kidneys, heart, spleen
Procedia PDF Downloads 67
1727 Hierarchical Cluster Analysis of Raw Milk Samples Obtained from Organic and Conventional Dairy Farming in Autonomous Province of Vojvodina, Serbia
Authors: Lidija Jevrić, Denis Kučević, Sanja Podunavac-Kuzmanović, Strahinja Kovačević, Milica Karadžić
Abstract:
In the present study, Hierarchical Cluster Analysis (HCA) was applied in order to determine the differences between the milk samples originating from a conventional dairy farm (CF) and an organic dairy farm (OF) in AP Vojvodina, Republic of Serbia. The clustering was performed on the basis of the average values of saturated fatty acid (SFA) content and unsaturated fatty acid (UFA) content obtained for every season. Therefore, the HCA included the annual SFA and UFA content values. The clustering procedure was carried out on the basis of Euclidean distances and the single linkage algorithm. The obtained dendrograms indicated that the clustering of UFA in OF was much more uniform compared to the clustering of UFA in CF. In OF, spring stands out from the other months of the year. The same can be noticed for CF, where winter is separated from the other months. The results could be expected because the composition of the fatty acid content is greatly influenced by the season and the nutrition of dairy cows during the year.Keywords: chemometrics, clustering, food engineering, milk quality
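A short sketch of the clustering step: single-linkage hierarchical clustering on Euclidean distances between seasonal fatty-acid profiles; the SFA/UFA percentages are placeholders, not the measured Vojvodina data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

seasons = ["winter", "spring", "summer", "autumn"]
# columns: [SFA %, UFA %] for, e.g., the organic farm (illustrative values)
profiles = np.array([[71.0, 29.0],
                     [66.5, 33.5],
                     [68.0, 32.0],
                     [69.0, 31.0]])

Z = linkage(profiles, method="single", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(seasons, labels)))        # which seasons group together

# With matplotlib available, scipy.cluster.hierarchy.dendrogram(Z, labels=seasons)
# draws the dendrogram used to inspect how uniform the clustering is.
```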
Procedia PDF Downloads 281
1726 Indexing and Incremental Approach Using Map Reduce Bipartite Graph (MRBG) for Mining Evolving Big Data
Authors: Adarsh Shroff
Abstract:
Big data is a collection of datasets so large and complex that it becomes difficult to process using database management tools. Operations like search, analysis, and visualization on big data are performed using data mining, which is the process of extracting patterns or knowledge from large data sets. In recent years, it has been observed that the results of data mining applications become stale and obsolete over time. Incremental processing is a promising approach to refreshing mining results. It utilizes previously saved states to avoid the expense of re-computation from scratch. This project uses i2MapReduce, an incremental processing extension to MapReduce, the most widely used framework for mining big data. i2MapReduce performs key-value pair level incremental processing rather than task level re-computation, supports not only one-step computation but also more sophisticated iterative computation, which is widely used in data mining applications, and incorporates a set of novel techniques to reduce I/O overhead for accessing preserved fine-grain computation states. To optimize the mining results, i2MapReduce is evaluated using a one-step algorithm and three iterative algorithms with diverse computation characteristics for efficient mining.Keywords: big data, map reduce, incremental processing, iterative computation
Procedia PDF Downloads 351
1725 An Alternative Method for Computing Clothoids
Authors: Gerardo Casal, Miguel E. Vázquez-Méndez
Abstract:
The clothoid (also known as the Cornu spiral or Euler spiral) is a curve characterized by the fact that its curvature is proportional to its length. This property has made it widely used as a transition curve for designing the layout of roads and railway tracks. In this work, the parametric equations of the clothoid are obtained from this characterizing geometrical property, and two algorithms to compute it are compared. The first (classical) is widely used in surveying schools and is based on the use of explicit formulas obtained from Taylor expansions of the sine and cosine functions. The second one (alternative) is a very simple algorithm based on the numerical solution of the initial value problem giving the clothoid parameterization. Both methods are compared in some typical surveying problems. The alternative method does not use complex formulas, so it is conceptually very simple and easy to apply. It gives good results even when the classical method goes wrong (when the quotient between length and radius of curvature is high), needs no subsequent translations or rotations and, consequently, seems an efficient tool for designing the layout of roads and railway tracks.Keywords: transition curves, railroad and highway engineering, Runge-Kutta methods
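A compact sketch contrasting the two approaches on the standard parameterization x(s) = integral of cos(u^2/(2A^2)) du, y(s) = integral of sin(u^2/(2A^2)) du with A^2 = R*L: truncated Taylor-series formulas (the classical surveying approach) versus direct numerical integration of the defining initial value problem. The radius and length values are illustrative:

```python
import numpy as np
from math import factorial
from scipy.integrate import solve_ivp

def clothoid_series(s, A, terms=4):
    """Classical approach: truncated Taylor series of the Fresnel-type integrals."""
    x = sum((-1) ** k * s ** (4 * k + 1) /
            ((4 * k + 1) * (2 * A ** 2) ** (2 * k) * factorial(2 * k))
            for k in range(terms))
    y = sum((-1) ** k * s ** (4 * k + 3) /
            ((4 * k + 3) * (2 * A ** 2) ** (2 * k + 1) * factorial(2 * k + 1))
            for k in range(terms))
    return x, y

def clothoid_ivp(s_end, A):
    """Alternative approach: integrate x' = cos(s^2/(2A^2)), y' = sin(s^2/(2A^2)) numerically."""
    rhs = lambda s, p: [np.cos(s ** 2 / (2 * A ** 2)), np.sin(s ** 2 / (2 * A ** 2))]
    sol = solve_ivp(rhs, (0.0, s_end), [0.0, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1], sol.y[1, -1]

R, L = 300.0, 120.0           # end radius and clothoid length [m], illustrative
A = np.sqrt(R * L)            # clothoid parameter, A^2 = R * L
print("series:", clothoid_series(L, A))
print("ivp:   ", clothoid_ivp(L, A))
```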
Procedia PDF Downloads 283
1724 Hierarchical Queue-Based Task Scheduling with CloudSim
Authors: Wanqing You, Kai Qian, Ying Qian
Abstract:
The concepts of Cloud Computing provide users with infrastructure, platform, and software as a service, which makes those services more accessible to people via the Internet. To better analyze the performance of Cloud Computing provisioning policies as well as resource allocation strategies, a toolkit named CloudSim was proposed. With CloudSim, the Cloud Computing environment can be easily constructed by modelling and simulating cloud computing components, such as datacenters, hosts, and virtual machines. A good scheduling strategy is the key to achieving load balancing among different machines as well as to improving the utilization of basic resources. The existing scheduling algorithms may work well in some presumptive cases on a single machine; however, they are unable to make the best decision for the unforeseen future. In a real-world scenario, there would be numerous tasks as well as several virtual machines working in parallel. Based on the concept of multiple queues, this paper presents a new scheduling algorithm to schedule tasks with CloudSim by taking into account several parameters: the machines' capacity, the priority of tasks, and the history log.Keywords: hierarchical queue, load balancing, CloudSim, information technology
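The following is an illustrative Python sketch of the multi-queue idea rather than the CloudSim Java API: tasks are binned into priority queues and dispatched, highest priority first, to the least-loaded virtual machine. The task and VM attributes are assumptions made for the example:

```python
import heapq
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    length: int        # e.g. million instructions
    priority: int      # 0 = highest

@dataclass(order=True)
class VM:
    load: int                              # ordering key for the min-heap
    name: str = field(compare=False)
    mips: int = field(compare=False)

def schedule(tasks, vms, levels=3):
    # Bin tasks into hierarchical priority queues.
    queues = [deque() for _ in range(levels)]
    for t in tasks:
        queues[min(t.priority, levels - 1)].append(t)

    heap = list(vms)                       # min-heap keyed on current load
    heapq.heapify(heap)
    assignment = []
    for queue in queues:                   # highest-priority queue first
        while queue:
            task = queue.popleft()
            vm = heapq.heappop(heap)       # least-loaded VM
            vm.load += task.length // vm.mips   # estimated added runtime
            assignment.append((task.name, vm.name))
            heapq.heappush(heap, vm)
    return assignment

vms = [VM(0, "vm0", 500), VM(0, "vm1", 1000)]
tasks = [Task("t1", 4000, 0), Task("t2", 8000, 1), Task("t3", 2000, 0), Task("t4", 6000, 2)]
print(schedule(tasks, vms))
```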
Procedia PDF Downloads 422
1723 Data Collection with Bounded-Sized Messages in Wireless Sensor Networks
Authors: Min Kyung An
Abstract:
In this paper, we study the data collection problem in Wireless Sensor Networks (WSNs) adopting two interference models: the graph model and the more realistic physical interference model known as Signal-to-Interference-Noise-Ratio (SINR). The main issue of the problem is to compute schedules with the minimum number of timeslots, that is, to compute minimum-latency schedules such that data from every node can be collected at a sink node without any collision or interference. While existing works studied the problem with unit-sized and unbounded-sized message models, we investigate the problem with the bounded-sized message model and introduce a constant factor approximation algorithm. To the best of our knowledge, this is the first result for the data collection problem with the bounded-sized message model under both interference models.Keywords: data collection, collision-free, interference-free, physical interference model, SINR, approximation, bounded-sized message model, wireless sensor networks
Procedia PDF Downloads 222
1722 Robust Numerical Scheme for Pricing American Options under Jump Diffusion Models
Authors: Salah Alrabeei, Mohammad Yousuf
Abstract:
The goal of option pricing theory is to help investors manage their money, enhance returns, and control their financial future by theoretically valuing their options. However, most option pricing models have no analytical solution. Furthermore, not all numerical methods are efficient for solving these models, because the models have nonsmooth payoffs or discontinuous derivatives at the exercise price. In this paper, we solve the American option under jump diffusion models by using efficient time-dependent numerical methods. Several techniques are integrated to overcome the computational complexity. The Fast Fourier Transform (FFT) algorithm is used as a matrix-vector multiplication solver, which reduces the complexity from O(M²) to O(M log M). The partial fraction decomposition technique is applied to rational approximation schemes to overcome the complexity of inverting polynomials of matrices. The proposed method is easy to implement in serial or parallel versions. Numerical results are presented to prove the accuracy and efficiency of the proposed method.Keywords: integral differential equations, jump–diffusion model, American options, rational approximation
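To make the FFT-as-matrix-vector-multiplication idea concrete, here is a generic sketch (not the authors' code) of multiplying a dense Toeplitz matrix, the structure that typically arises from discretizing the jump integral, by a vector in O(M log M) via circulant embedding; the matrix here is random, purely for verification:

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec_fft(col, row, x):
    """Product T @ x for T = toeplitz(col, row), computed in O(M log M)."""
    m = len(col)
    # First column of a 2m x 2m circulant matrix that embeds T in its top-left block.
    c = np.concatenate([col, [0.0], row[1:][::-1]])
    x_pad = np.concatenate([x, np.zeros(m)])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x_pad))[:m]
    return y.real

rng = np.random.default_rng(1)
M = 512
col = rng.standard_normal(M)
row = rng.standard_normal(M)
row[0] = col[0]                        # first row and first column share the corner entry
x = rng.standard_normal(M)

direct = toeplitz(col, row) @ x        # O(M^2) reference
fast = toeplitz_matvec_fft(col, row, x)
print(np.max(np.abs(direct - fast)))   # agreement to round-off, at O(M log M) cost
```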
Procedia PDF Downloads 120
1721 A Cloud-Based Spectrum Database Approach for Licensed Shared Spectrum Access
Authors: Hazem Abd El Megeed, Mohamed El-Refaay, Norhan Magdi Osman
Abstract:
Spectrum scarcity is a challenging obstacle in wireless communications systems. It hinders the introduction of innovative wireless services and technologies that require larger bandwidth compared to legacy technologies. In addition, the current worldwide allocation of radio spectrum bands is already congested and cannot afford additional squeezing or optimization to accommodate new wireless technologies. This challenge is a result of accumulative contributions from different factors that will be discussed later in this paper. One of these factors is the radio spectrum allocation policy currently governed by national regulatory authorities. The framework for this policy allocates a specified portion of radio spectrum to a particular wireless service provider on an exclusive utilization basis. This allocation is executed according to technical specifications determined by the standard bodies of each Radio Access Technology (RAT). Dynamic spectrum access is a framework for flexible utilization of radio spectrum resources. In this framework, there is no exclusive allocation of radio spectrum, and even public safety agencies can share their spectrum bands according to a governing policy and service level agreements. In this paper, we explore different methods for accessing the spectrum dynamically and their associated implementation challenges.Keywords: licensed shared access, cognitive radio, spectrum sharing, spectrum congestion, dynamic spectrum access, spectrum database, spectrum trading, reconfigurable radio systems, opportunistic spectrum allocation (OSA)
Procedia PDF Downloads 432
1720 A Condition-Based Maintenance Policy for Multi-Unit Systems Subject to Deterioration
Authors: Nooshin Salari, Viliam Makis
Abstract:
In this paper, we propose a condition-based maintenance policy for multi-unit systems considering the existence of economic dependency among units. We consider a system composed of N identical units, where each unit deteriorates independently. Deterioration process of each unit is modeled as a three-state continuous time homogeneous Markov chain with two working states and a failure state. The average production rate of units varies in different working states and demand rate of the system is constant. Units are inspected at equidistant time epochs, and decision regarding performing maintenance is determined by the number of units in the failure state. If the total number of units in the failure state exceeds a critical level, maintenance is initiated, where units in failed state are replaced correctively and deteriorated state units are maintained preventively. Our objective is to determine the optimal number of failed units to initiate maintenance minimizing the long run expected average cost per unit time. The problem is formulated and solved in the semi-Markov decision process (SMDP) framework. A numerical example is developed to demonstrate the proposed policy and the comparison with the corrective maintenance policy is presented.Keywords: reliability, maintenance optimization, semi-Markov decision process, production
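A Monte-Carlo sketch of the kind of threshold policy described above: N identical three-state units are inspected at equidistant epochs, and maintenance is triggered when the number of failed units reaches a critical level. The transition probabilities and cost figures are illustrative assumptions, and the sketch estimates the long-run average cost by simulation rather than by solving the SMDP exactly:

```python
import numpy as np

# One-inspection-interval transition matrix over states 0=healthy, 1=warning, 2=failed.
P = np.array([[0.85, 0.12, 0.03],
              [0.00, 0.80, 0.20],
              [0.00, 0.00, 1.00]])

def average_cost(threshold, n_units=10, horizon=20_000, seed=0,
                 c_setup=500.0, c_corrective=400.0, c_preventive=120.0, c_downtime=50.0):
    rng = np.random.default_rng(seed)
    states = np.zeros(n_units, dtype=int)
    total = 0.0
    for _ in range(horizon):                      # one step = one inspection interval
        states = np.array([rng.choice(3, p=P[s]) for s in states])
        failed = int((states == 2).sum())
        total += c_downtime * failed              # lost production of failed units
        if failed >= threshold:
            total += c_setup + c_corrective * failed + c_preventive * int((states == 1).sum())
            states[:] = 0                         # corrective + preventive maintenance
    return total / horizon

for k in range(1, 6):
    print(f"threshold {k}: estimated long-run average cost {average_cost(k):.1f} per interval")
```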
Procedia PDF Downloads 165
1719 Analysis of Influence of Geometrical Set of Nozzles on Aerodynamic Drag Level of a Hero's Based Steam Turbine
Authors: Mateusz Paszko, Miroslaw Wendeker, Adam Majczak
Abstract:
High temperature waste energy offers a number of management options. The most common energy recuperation systems that are currently used to utilize energy from high temperature sources are steam turbines working in high-pressure and high-temperature closed cycles. Due to the high production costs of energy recuperation systems, especially rotary turbine discs equipped with blades, currently used solutions have limited applicability to waste energy sources with temperatures below 100 °C. This study presents the results of simulating the flow of water vapor in various configurations of flow ducts in a reaction steam turbine based on Hero's steam turbine. The simulation was performed using a numerical model and the ANSYS Fluent software. Simulation computations were conducted with the use of water vapor as the internal agent powering the turbine, which is fully safe for the environment in case of a device failure. The conclusions resulting from the conducted numerical computations should allow for optimization of the flow duct geometries in order to achieve the greatest possible efficiency of the turbine. It is expected that the obtained results should be useful for further work related to the development of the final version of a low-drag steam turbine dedicated to low-cost energy recuperation systems.Keywords: energy recuperation, CFD analysis, waste energy, steam turbine
Procedia PDF Downloads 210
1718 Leveraging Digital Transformation Initiatives and Artificial Intelligence to Optimize Readiness and Simulate Mission Performance across the Fleet
Authors: Justin Woulfe
Abstract:
Siloed logistics and supply chain management systems throughout the Department of Defense (DOD) has led to disparate approaches to modeling and simulation (M&S), a lack of understanding of how one system impacts the whole, and issues with “optimal” solutions that are good for one organization but have dramatic negative impacts on another. Many different systems have evolved to try to understand and account for uncertainty and try to reduce the consequences of the unknown. As the DoD undertakes expansive digital transformation initiatives, there is an opportunity to fuse and leverage traditionally disparate data into a centrally hosted source of truth. With a streamlined process incorporating machine learning (ML) and artificial intelligence (AI), advanced M&S will enable informed decisions guiding program success via optimized operational readiness and improved mission success. One of the current challenges is to leverage the terabytes of data generated by monitored systems to provide actionable information for all levels of users. The implementation of a cloud-based application analyzing data transactions, learning and predicting future states from current and past states in real-time, and communicating those anticipated states is an appropriate solution for the purposes of reduced latency and improved confidence in decisions. Decisions made from an ML and AI application combined with advanced optimization algorithms will improve the mission success and performance of systems, which will improve the overall cost and effectiveness of any program. The Systecon team constructs and employs model-based simulations, cutting across traditional silos of data, aggregating maintenance, and supply data, incorporating sensor information, and applying optimization and simulation methods to an as-maintained digital twin with the ability to aggregate results across a system’s lifecycle and across logical and operational groupings of systems. This coupling of data throughout the enterprise enables tactical, operational, and strategic decision support, detachable and deployable logistics services, and configuration-based automated distribution of digital technical and product data to enhance supply and logistics operations. As a complete solution, this approach significantly reduces program risk by allowing flexible configuration of data, data relationships, business process workflows, and early test and evaluation, especially budget trade-off analyses. A true capability to tie resources (dollars) to weapon system readiness in alignment with the real-world scenarios a warfighter may experience has been an objective yet to be realized to date. By developing and solidifying an organic capability to directly relate dollars to readiness and to inform the digital twin, the decision-maker is now empowered through valuable insight and traceability. This type of educated decision-making provides an advantage over the adversaries who struggle with maintaining system readiness at an affordable cost. The M&S capability developed allows program managers to independently evaluate system design and support decisions by quantifying their impact on operational availability and operations and support cost resulting in the ability to simultaneously optimize readiness and cost. This will allow the stakeholders to make data-driven decisions when trading cost and readiness throughout the life of the program. 
Finally, sponsors are available to validate product deliverables with efficiency and much higher accuracy than in previous years.Keywords: artificial intelligence, digital transformation, machine learning, predictive analytics
Procedia PDF Downloads 160
1717 Technical and Economic Evaluation of Harmonic Mitigation from Offshore Wind Power Plants by Transmission Owners
Authors: A. Prajapati, K. L. Koo, F. Ghassemi, M. Mulimakwenda
Abstract:
In the UK, as the volume of non-linear loads connected to the transmission grid continues to rise steeply, the harmonic distortion levels on the transmission network are becoming a serious concern for the network owners and system operators. This paper outlines the findings of a study conducted to verify the proposal that harmonic mitigation could be optimized and managed economically and effectively at the transmission network level by the Transmission Owner (TO) instead of by the individual polluter connected to the grid. Harmonic mitigation studies were conducted on selected regions of the transmission network in England for recently connected offshore wind power plants to strategize and optimize selected harmonic filter options. The results – filter volume and capacity – were then compared against the mitigation measures adopted by the individual connections. Estimation ratios were developed based on the actually installed and the optimal proposed filters. These estimation ratios were then used to derive harmonic filter requirements for future contracted connections. The study has concluded that a saving of 37% in the filter volume/capacity could be achieved if the TO centrally manages the harmonic mitigation instead of individual polluters installing their own mitigation solutions.Keywords: C-type filter, harmonics, optimization, offshore wind farms, interconnectors, HVDC, renewable energy, transmission owner
Procedia PDF Downloads 157
1716 Autonomous Strategic Aircraft Deconfliction in a Multi-Vehicle Low Altitude Urban Environment
Authors: Loyd R. Hook, Maryam Moharek
Abstract:
With the envisioned future growth of low altitude urban aircraft operations for airborne delivery service and advanced air mobility, strategies to coordinate and deconflict aircraft flight paths must be prioritized. Autonomous coordination and planning of flight trajectories is the preferred approach to the future vision in order to increase safety, density, and efficiency over manual methods employed today. Difficulties arise because any conflict resolution must be constrained by all other aircraft, all airspace restrictions, and all ground-based obstacles in the vicinity. These considerations make pair-wise tactical deconfliction difficult at best and unlikely to find a suitable solution for the entire system of vehicles. In addition, more traditional methods which rely on long time scales and large protected zones will artificially limit vehicle density and drastically decrease efficiency. Instead, strategic planning, which is able to respond to highly dynamic conditions and still account for high density operations, will be required to coordinate multiple vehicles in the highly constrained low altitude urban environment. This paper develops and evaluates such a planning algorithm which can be implemented autonomously across multiple aircraft and situations. Data from this evaluation provide promising results, with simulations showing up to 10 aircraft deconflicted through a relatively narrow low-altitude urban canyon without any vehicle to vehicle or obstacle conflict. The algorithm achieves this level of coordination beginning with the assumption that each vehicle is controlled to follow an independently constructed flight path, which is itself free of obstacle conflicts and restricted airspace. Then, by preferring speed-change deconfliction maneuvers constrained by each vehicle's flight envelope, vehicles can remain as close as possible to the originally planned path and prevent cascading vehicle-to-vehicle conflicts. Performing the search for a set of commands which can simultaneously ensure separation for each pair-wise aircraft interaction and optimize the total velocities of all the aircraft is further complicated by the fact that each aircraft's flight plan could contain multiple segments. This means that relative velocities will change when any aircraft achieves a waypoint and changes course. Additionally, the timing of when that aircraft will achieve a waypoint (or, more directly, the order in which all of the aircraft will achieve their respective waypoints) will change with the commanded speed. Taken together, the continuous relative velocity of each vehicle pair and the discretized change in relative velocity at waypoints resemble a hybrid reachability problem, a form of control reachability. This paper proposes two methods for finding solutions to these multi-body problems. First, an analytical formulation of the continuous problem is developed with an exhaustive search of the combined state space. However, because of computational complexity, this technique is only computable for pairwise interactions. For more complicated scenarios, including the proposed 10 vehicle example, a discretized search space is used, and a depth-first search with early stopping is employed to find the first solution that solves the constraints.Keywords: strategic planning, autonomous, aircraft, deconfliction
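To illustrate the discretized search, a simplified sketch follows: each aircraft keeps its planned straight-line path, the search assigns one speed multiplier per aircraft from a small discrete set in depth-first order, prunes as soon as a pairwise separation check fails, and stops at the first feasible assignment. The geometry, speeds, and separation distance are illustrative assumptions, and the paths are reduced to single segments for brevity:

```python
import numpy as np

SEP = 50.0                                   # required horizontal separation [m]
SPEED_OPTIONS = [1.0, 0.9, 0.8, 1.1]         # preferred (nominal) multiplier first

def position(path, speed, t):
    """Point reached at time t along a 2-point path flown at constant speed."""
    start, end = np.asarray(path[0], float), np.asarray(path[1], float)
    length = np.linalg.norm(end - start)
    frac = min(speed * t / length, 1.0)      # hold at the destination once reached
    return start + frac * (end - start)

def pair_ok(path_a, v_a, path_b, v_b, t_max=600.0, dt=1.0):
    """Sampled separation check between two aircraft over the planning horizon."""
    times = np.arange(0.0, t_max, dt)
    return all(np.linalg.norm(position(path_a, v_a, t) - position(path_b, v_b, t)) >= SEP
               for t in times)

def deconflict(paths, speeds_nominal):
    """Depth-first assignment of speed multipliers with early stopping."""
    def dfs(assigned):
        i = len(assigned)
        if i == len(paths):
            return assigned                  # first complete feasible solution
        for mult in SPEED_OPTIONS:
            v_i = mult * speeds_nominal[i]
            # Prune immediately if the new aircraft conflicts with any assigned one.
            if all(pair_ok(paths[i], v_i, paths[j], assigned[j]) for j in range(i)):
                result = dfs(assigned + [v_i])
                if result is not None:
                    return result
        return None                          # backtrack
    return dfs([])

paths = [((0, 0), (4000, 0)), ((4000, 100), (0, 100)), ((0, 2000), (4000, -1800))]
print(deconflict(paths, speeds_nominal=[20.0, 20.0, 22.0]))   # commanded speeds in m/s
```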
Procedia PDF Downloads 95