Search results for: whale optimization algorithm
4887 Study of the Energy Levels in the Structure of the Laser Diode GaInP
Authors: Abdelali Laid, Abid Hamza, Zeroukhi Houari, Sayah Naimi
Abstract:
This work studies the energy levels and the optimization of the intrinsic parameters (number of wells and their widths, width of the potential barrier, refractive index, etc.) and extrinsic parameters (temperature, pressure) in a laser diode based on the GaInP structure. The calculation methods used are the empirical pseudopotential method, to determine the electronic band structures, and a graphical method for the optimization. The results found are in agreement with experiment and theory.
Keywords: semiconductor, GaInP/AlGaInP, pseudopotential, energy, alloys
Procedia PDF Downloads 492
4886 Optimal Emergency Shipment Policy for a Single-Echelon Periodic Review Inventory System
Authors: Saeed Poormoaied, Zumbul Atan
Abstract:
Emergency shipments provide a powerful mechanism to alleviate the risk of imminent stock-outs and can result in substantial benefits in an inventory system. Customer satisfaction and a high service level are immediate consequences of utilizing emergency shipments. In this paper, we consider a single-echelon periodic review inventory system consisting of a single local warehouse, replenished from a central warehouse with ample capacity in an infinite horizon setting. Since the structure of the optimal policy appears to be complicated, we analyze this problem under an order-up-to-S inventory control policy framework, the (S, T) policy, with the emergency shipment consideration. In each period of the periodic review policy, there is a single opportunity, at any point in time, to request an emergency shipment in case of a stock-out. The goal is to determine the timing and amount of the emergency shipment during a period (emergency shipment policy) as well as the base stock periodic review policy parameters (replenishment policy). We show how taking advantage of an emergency shipment during periods improves the performance of the classical (S, T) policy, especially when the fixed and unit emergency shipment costs are small. Investigating the structure of the objective function, we develop an exact algorithm for finding the optimal solution. We also provide a heuristic and an approximation algorithm for the periodic review inventory system problem. The experimental analyses indicate that the heuristic algorithm is computationally more efficient than the approximation algorithm, but in terms of solution efficiency, the approximation algorithm performs very well. We achieve up to 13% cost savings in the (S, T) policy if we apply the proposed emergency shipment policy. Moreover, our computational results reveal that the approximated solution is often within 0.21% of the globally optimal solution.
Keywords: emergency shipment, inventory, periodic review policy, approximation algorithm
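A minimal Monte Carlo sketch of the idea described above: a periodic-review (S, T) policy simulated with and without a single mid-period emergency shipment. The demand distribution, cost figures, and the trigger and quantity rules below are illustrative assumptions, not the paper's calibrated model or exact algorithm.

```python
import random

# Sketch: compare a classical (S, T) policy with and without one emergency
# shipment per period. All numbers below are assumed for illustration.
S, T = 60, 10                 # order-up-to level and review period (days)
HOLD, BACKORDER = 0.5, 8.0    # holding / backorder cost per unit per day
K_EMERG, C_EMERG = 25.0, 2.0  # fixed and unit emergency shipment costs
PERIODS = 20000

def simulate(use_emergency, seed=1):
    rng = random.Random(seed)
    total_cost = 0.0
    for _ in range(PERIODS):
        inv = S                            # replenished up to S at each review
        emergency_used = False
        for _ in range(T):
            inv -= rng.randint(2, 9)       # assumed daily demand
            if use_emergency and inv < 0 and not emergency_used:
                qty = S - inv              # assumed rule: restore inventory to S
                total_cost += K_EMERG + C_EMERG * qty
                inv = S
                emergency_used = True
            total_cost += HOLD * max(inv, 0) + BACKORDER * max(-inv, 0)
    return total_cost / PERIODS

base = simulate(False)
emerg = simulate(True)
print(f"avg cost per period without emergency shipment: {base:.1f}")
print(f"avg cost per period with emergency shipment:    {emerg:.1f}")
```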
Procedia PDF Downloads 141
4885 Optimization of Processing Parameters of Acrylonitrile–Butadiene–Styrene Sheets Integrated by Taguchi Method
Authors: Fatemeh Sadat Miri, Morteza Ehsani, Seyed Farshid Hosseini
Abstract:
The present research is concerned with the optimization of extrusion parameters of ABS sheets by the Taguchi experimental design method. In this design, the effects of three parameters, recycled ABS content (%), processing temperature, and degassing time, on the mechanical properties, hardness, HDT, and color matching of ABS sheets were investigated. The factors varied in this research are the dosage of recycled ABS, the processing temperature, and the degassing time. According to the experimental test data, the highest levels of tensile strength and HDT belong to the sample with 5% recycled ABS, a processing temperature of 230°C, and a degassing time of 3 hours. Additionally, the minimum levels of MFI and color matching belong to this sample, too. The present results are in good agreement with the Taguchi method. Based on the outcomes of the Taguchi design method, degassing time has the greatest effect on the mechanical properties of ABS sheets.
Keywords: ABS, process optimization, Taguchi, mechanical properties
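A minimal sketch of a Taguchi-style main-effects analysis with a larger-is-better signal-to-noise (S/N) ratio, as used in this kind of study. The L9 layout with three factors at three levels and the response values are hypothetical, not the paper's measurements.

```python
import math

# Sketch: compute mean S/N per factor level and rank factors by the range of
# their level means. Responses below are made up for illustration.
# columns: (recycled ABS level, temperature level, degassing level), response
L9 = [
    ((0, 0, 0), 41.2), ((0, 1, 1), 43.0), ((0, 2, 2), 44.1),
    ((1, 0, 1), 40.5), ((1, 1, 2), 42.2), ((1, 2, 0), 39.8),
    ((2, 0, 2), 38.9), ((2, 1, 0), 37.5), ((2, 2, 1), 38.2),
]

def sn_larger_is_better(y):
    return -10.0 * math.log10(1.0 / (y * y))   # single replicate per run

factors = ["recycled ABS %", "temperature", "degassing time"]
for f, name in enumerate(factors):
    level_sn = {0: [], 1: [], 2: []}
    for levels, y in L9:
        level_sn[levels[f]].append(sn_larger_is_better(y))
    means = [sum(v) / len(v) for v in level_sn.values()]
    effect = max(means) - min(means)           # larger range = stronger factor
    print(f"{name:16s} mean S/N by level: "
          + ", ".join(f"{m:.2f}" for m in means)
          + f"   range: {effect:.2f}")
```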
Procedia PDF Downloads 73
4884 Energy Benefits of Urban Platooning with Self-Driving Vehicles
Authors: Eduardo F. Mello, Peter H. Bauer
Abstract:
The primary focus of this paper is the generation of energy-optimal speed trajectories for heterogeneous electric vehicle platoons in urban driving conditions. Optimal speed trajectories are generated for individual vehicles and for an entire platoon under the assumption that they can be executed without errors, as would be the case for self-driving vehicles. It is then shown that optimizing for the “average vehicle in the platoon” generates transportation energy savings similar to optimizing the speed trajectories of each vehicle individually. The introduced approach only requires the lead vehicle to run the optimization software, while the remaining vehicles are only required to have adaptive cruise control capability. The achieved energy savings are typically between 30% and 50% for stop-to-stop segments in cities. The prime motivation for urban platooning comes from the fact that urban platoons efficiently utilize the available space, and the minimization of transportation energy in cities is important for many reasons, e.g., environmental, power, and range considerations.
Keywords: electric vehicles, energy efficiency, optimization, platooning, self-driving vehicles, urban traffic
Procedia PDF Downloads 182
4883 Design-Analysis and Optimization of 10 MW Permanent Magnet Surface Mounted Off-Shore Wind Generator
Authors: Mamidi Ramakrishna Rao, Jagdish Mamidi
Abstract:
With advancing technology, the market environment for wind power generation systems has become highly competitive. The industry has been moving towards higher wind generator power ratings, in particular, off-shore generator ratings. Current off-shore wind turbine generators are in the power range of 10 to 12 MW. Unlike traditional induction motors, slow-speed permanent magnet surface mounted (PMSM) high-power generators are relatively challenging and designed differently. In this paper, PMSM generator design features have been discussed and analysed. The focus is on the armature windings, harmonics, and permanent magnets. For the power ratings under consideration, the generator air-gap diameters are in the range of 8 to 10 meters, and the active material weighs about 60 tons and above. Therefore, material weight becomes one of the critical parameters. The Particle Swarm Optimization (PSO) technique is used for weight reduction and performance improvement. Four independent variables have been considered: air gap diameter, stack length, magnet thickness, and winding current density. To account for core and tooth saturation, prevent demagnetization due to short-circuit armature currents, and maintain a minimum efficiency, suitable penalty functions have been applied. To check for performance satisfaction, a detailed analysis and 2D flux plotting are done for the optimized design.
Keywords: offshore wind generator, PMSM, PSO optimization, design optimization
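A minimal particle swarm optimization sketch with a penalty function over the four design variables named above. The toy "weight" objective, bounds, and efficiency constraint are placeholders standing in for the generator's electromagnetic model; they are not the paper's design equations.

```python
import random

# Sketch: PSO minimizing a toy penalized active-material weight.
random.seed(0)
BOUNDS = [(8.0, 10.0),   # air-gap diameter [m]
          (1.0, 2.0),    # stack length [m]
          (0.02, 0.06),  # magnet thickness [m]
          (2.0, 6.0)]    # winding current density [A/mm^2]

def objective(x):
    d, l, tm, j = x
    weight = 40.0 * d * l + 900.0 * d * tm          # toy weight model
    efficiency = 0.90 + 0.02 * j - 0.005 * d        # toy efficiency model
    penalty = 1e4 * max(0.0, 0.96 - efficiency)**2  # enforce minimum efficiency
    return weight + penalty

N, ITER, W, C1, C2 = 30, 200, 0.7, 1.5, 1.5
pos = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(N)]
vel = [[0.0] * len(BOUNDS) for _ in range(N)]
pbest = [p[:] for p in pos]
pbest_val = [objective(p) for p in pos]
gbest = min(zip(pbest_val, pbest))[1][:]

for _ in range(ITER):
    for i in range(N):
        for d in range(len(BOUNDS)):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            lo, hi = BOUNDS[d]
            pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
        val = objective(pos[i])
        if val < pbest_val[i]:
            pbest_val[i], pbest[i] = val, pos[i][:]
            if val < objective(gbest):
                gbest = pos[i][:]

print("best design variables:", [round(v, 4) for v in gbest])
print("penalized weight:", round(objective(gbest), 2))
```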
Procedia PDF Downloads 155
4882 A Digital Filter for Symmetrical Components Identification
Authors: Khaled M. El-Naggar
Abstract:
This paper presents a fast and efficient technique for monitoring and supervising power system disturbances generated due to the dynamic performance of power systems or faults. Monitoring power system quantities involves monitoring the fundamental voltage and current magnitudes and their frequencies, as well as their negative and zero sequence components, under different operating conditions. The proposed technique is based on the simulated annealing optimization technique (SA). The method uses a digital set of measurements of the voltage or current waveforms at a power system bus to perform the estimation process digitally. The algorithm is tested using different simulated data to monitor the symmetrical components of power system waveforms. Different study cases are considered in this work. The effects of the number of samples, the sampling frequency, and the sample window size are studied. Results are reported and discussed.
Keywords: estimation, faults, measurement, symmetrical components
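A minimal sketch of simulated annealing fitting the amplitude and phase of a fundamental-frequency phasor to a window of digital samples, in the spirit of the estimation problem above. The synthetic waveform, window length, and cooling schedule are assumptions; the final lines only show how symmetrical components would follow from estimated phase phasors via the standard Fortescue transform.

```python
import cmath, math, random

random.seed(3)
F, FS, N = 50.0, 1600.0, 64                 # assumed frequency, sampling rate, window
true_A, true_phi = 1.05, 0.4
samples = [true_A * math.cos(2 * math.pi * F * n / FS + true_phi)
           + random.gauss(0, 0.02) for n in range(N)]

def cost(A, phi):
    return sum((samples[n] - A * math.cos(2 * math.pi * F * n / FS + phi)) ** 2
               for n in range(N))

A, phi = 1.0, 0.0
current = cost(A, phi)
T = 1.0
while T > 1e-4:
    cand_A = A + random.gauss(0, 0.05)       # random neighbor move
    cand_phi = phi + random.gauss(0, 0.05)
    c = cost(cand_A, cand_phi)
    if c < current or random.random() < math.exp((current - c) / T):
        A, phi, current = cand_A, cand_phi, c
    T *= 0.995                               # geometric cooling schedule

print(f"estimated amplitude {A:.3f}, phase {phi:.3f} rad")
# With the three phase phasors estimated this way, symmetrical components follow
# from the Fortescue transform; a balanced demo set is used here as a placeholder.
a = cmath.exp(2j * math.pi / 3)
Va = cmath.rect(A, phi)
Vb = cmath.rect(A, phi - 2 * math.pi / 3)
Vc = cmath.rect(A, phi + 2 * math.pi / 3)
V1 = (Va + a * Vb + a * a * Vc) / 3          # positive-sequence component
print("positive-sequence magnitude (demo):", round(abs(V1), 3))
```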
Procedia PDF Downloads 465
4881 Consideration of Uncertainty in Engineering
Authors: A. Mohammadi, M. Moghimi, S. Mohammadi
Abstract:
Engineers need computational methods that provide solutions less sensitive to environmental effects, so techniques that take uncertainty into account should be used to control and minimize the risk associated with design and operation. In order to consider uncertainty in an engineering problem, the optimization problem should be solved for a suitable range of each uncertain input variable instead of just one estimated point. With a deterministic optimization problem, a large computational burden is required to consider every possible and probable combination of uncertain input variables. Several methods have been reported in the literature to deal with problems under uncertainty. In this paper, different methods are presented and analyzed.
Keywords: uncertainty, Monte Carlo simulation, stochastic programming, scenario method
Procedia PDF Downloads 414
4880 New Two-Way Map-Reduce Join Algorithm: Hash Semi Join
Authors: Marwa Hussein Mohamed, Mohamed Helmy Khafagy, Samah Ahmed Senbel
Abstract:
MapReduce is a programming model used to handle and support massive data sets. The rapid increase in data size makes the analysis of such big data one of the most important issues today. MapReduce is used to analyze data and extract helpful information by using two simple functions, map and reduce, which are the only parts written by the programmer, and it includes load balancing, fault tolerance, and high scalability. Join is one of the most important operations in data analysis, but MapReduce does not support it directly. This paper explains two two-way map-reduce join algorithms, semi-join and per-split semi-join, and proposes a new algorithm, hash semi-join, which uses a hash table to increase performance by eliminating unused records as early as possible and performing the join with a hash table rather than using a map function to match the join key with the other data table in the second phase. Using hash tables does not noticeably affect memory size because only the matched records from the second table are saved. Our experimental results show that the hash semi-join algorithm has higher performance than the two other algorithms as the data size increases from 10 million records to 500 million, with running time increasing according to the size of the joined records between the two tables.
Keywords: map reduce, Hadoop, semi join, two way join
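A minimal single-process sketch of the hash semi-join idea: build a hash table of join keys from one table, stream the other table and keep only matching records, then perform the final join by hash lookups. The toy records are made up; in a real Hadoop job these phases would run as distributed map and reduce tasks.

```python
orders = [  # "larger" table: (order_id, customer_id, amount)
    (1, "c1", 120), (2, "c7", 45), (3, "c2", 300), (4, "c9", 15), (5, "c1", 60),
]
customers = [  # "smaller" table: (customer_id, name)
    ("c1", "Alice"), ("c2", "Bob"), ("c3", "Carol"),
]

# Phase 1: build a hash table of join keys (and payloads) from the smaller table.
cust_by_key = {cid: name for cid, name in customers}

# Phase 2 (semi-join): eliminate unused records from the larger table as early
# as possible; only orders whose key appears in the hash table survive.
matched_orders = [row for row in orders if row[1] in cust_by_key]

# Phase 3: final join performed via hash lookups instead of key matching in a mapper.
joined = [(oid, cid, amount, cust_by_key[cid])
          for oid, cid, amount in matched_orders]

print("records eliminated early:", len(orders) - len(matched_orders))
for row in joined:
    print(row)
```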
Procedia PDF Downloads 513
4879 Fault Diagnosis of Manufacturing Systems Using AntTreeStoch with Parameter Optimization by ACO
Authors: Ouahab Kadri, Leila Hayet Mouss
Abstract:
In this paper, we present three diagnostic modules for complex and dynamic systems. These modules are based on three ant colony algorithms: AntTreeStoch, Lumer & Faieta, and Binary ant colony. We chose these algorithms for their simplicity and their wide application range. However, we cannot use these algorithms in their basic forms, as they have several limitations. To use these algorithms in a diagnostic system, we have proposed three variants. We have tested these algorithms on datasets issued from two industrial systems, a clinkering system and a pasteurization system.
Keywords: ant colony algorithms, complex and dynamic systems, diagnosis, classification, optimization
Procedia PDF Downloads 298
4878 The Optimization of Decision Rules in Multimodal Decision-Level Fusion Scheme
Authors: Andrey V. Timofeev, Dmitry V. Egorov
Abstract:
This paper introduces an original method for the parametric optimization of the structure of a multimodal decision-level fusion scheme, which combines the partial solutions of the classification task obtained from an assembly of mono-modal classifiers. As a result, a multimodal fusion classifier which has the minimum value of the total error rate has been obtained.
Keywords: classification accuracy, fusion solution, total error rate, multimodal fusion classifier
Procedia PDF Downloads 466
4877 Taguchi Method for Analyzing a Flexible Integrated Logistics Network
Authors: E. Behmanesh, J. Pannek
Abstract:
Logistics network design is known as one of the strategic decision problems. As these kinds of problems belong to the category of NP-hard problems, traditional methods fail to find an optimal solution in a short time. In this study, we attempt to involve the reverse flow through an integrated design of a forward/reverse supply chain network formulated as a mixed integer linear program. This integrated, multi-stage model is enriched by three different delivery paths, which makes the problem more complex. To tackle such an NP-hard problem, a memetic algorithm based on a revised random-path direct encoding method is considered as the solution methodology. Each algorithm has some parameters that need to be investigated to reveal the best performance. In this regard, the Taguchi method is adapted to identify the optimum operating condition of the proposed memetic algorithm and improve the results. In this study, four factors, namely population size, crossover rate, local search iterations, and number of iterations, are considered. Analyzing the parameters and the improvement in results are the outlook of this research.
Keywords: integrated logistics network, flexible path, memetic algorithm, Taguchi method
Procedia PDF Downloads 187
4876 Failure Inference and Optimization for Step Stress Model Based on Bivariate Wiener Model
Authors: Soudabeh Shemehsavar
Abstract:
In this paper, we consider the situation under a life test in which the failure times of the test units are not related deterministically to an observable, stochastic, time-varying covariate. In such a case, the joint distribution of the failure time and a marker value is useful for modeling the step stress life test. The problem of accelerating such an experiment is the main aim of this paper. We present a step stress accelerated model based on a bivariate Wiener process with one component as the latent (unobservable) degradation process, which determines the failure times, and the other as a marker process, the degradation values of which are recorded at the times of failure. Parametric inference based on the proposed model is discussed, and the optimization procedure for obtaining the optimal time for changing the stress level is presented. The optimization criterion is to minimize the approximate variance of the maximum likelihood estimator of a percentile of the products’ lifetime distribution.
Keywords: bivariate normal, Fisher information matrix, inverse Gaussian distribution, Wiener process
Procedia PDF Downloads 317
4875 Accounting for Downtime Effects in Resilience-Based Highway Network Restoration Scheduling
Authors: Zhenyu Zhang, Hsi-Hsien Wei
Abstract:
Highway networks play a vital role in post-disaster recovery for disaster-damaged areas. Damaged bridges in such networks can disrupt recovery activities by impeding the transportation of people, cargo, and reconstruction resources. Therefore, rapid restoration of damaged bridges is of paramount importance to long-term disaster recovery. In the post-disaster recovery phase, the key to restoration scheduling for a highway network is the prioritization of bridge-repair tasks. Resilience is widely used as a measure of the ability of a network to recover and return to its pre-disaster level of functionality. In practice, highways are temporarily blocked during the downtime of bridge restoration, leading to a decrease in highway-network functionality. The failure to take downtime effects into account can lead to overestimation of network resilience. Additionally, post-disaster recovery of highway networks is generally divided into emergency bridge repair (EBR) in the response phase and long-term bridge repair (LBR) in the recovery phase, and EBR and LBR differ in terms of restoration objectives, restoration duration, budget, etc. Distinguishing these two phases is important to precisely quantify highway network resilience and generate suitable restoration schedules for highway networks in the recovery phase. To address the above issues, this study proposes a novel resilience quantification method for the optimization of long-term bridge repair schedules (LBRS), taking into account the impact of EBR activities and restoration downtime on a highway network’s functionality. A time-dependent integer program with recursive functions is formulated for optimally scheduling LBR activities. Moreover, since uncertainty always exists in the LBRS problem, this paper extends the optimization model from the deterministic case to the stochastic case. A hybrid genetic algorithm that integrates a heuristic approach into a traditional genetic algorithm to accelerate the evolution process is developed. The proposed methods are tested using data from the 2008 Wenchuan earthquake, based on a regional highway network in Sichuan, China, consisting of 168 highway bridges on 36 highways connecting 25 cities/towns. The results show that, in this case, neglecting the bridge restoration downtime can lead to approximately 15% overestimation of highway network resilience. Moreover, accounting for the impact of EBR on network functionality can help to generate a more specific and reasonable LBRS. The theoretical and practical contributions are as follows. First, the proposed network recovery curve contributes to a comprehensive quantification of highway network resilience by accounting for the impact of both restoration downtime and EBR activities on the recovery curves. Moreover, this study can improve highway network resilience from the organizational dimension by providing bridge managers with optimal LBR strategies.
Keywords: disaster management, highway network, long-term bridge repair schedule, resilience, restoration downtime
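A minimal sketch of the downtime effect described above: resilience computed as the normalized area under a network functionality curve, with and without temporary functionality losses during repairs. The functionality levels, repair durations, and downtime penalties are hypothetical numbers, not the Wenchuan case-study data.

```python
HORIZON = 100  # days

# (start_day, duration, functionality_gain, downtime_loss) per bridge repair
repairs = [(0, 20, 0.30, 0.05), (20, 30, 0.25, 0.08), (50, 25, 0.20, 0.04)]
BASE = 0.25    # post-disaster functionality before any long-term repair

def resilience(include_downtime):
    area = 0.0
    for day in range(HORIZON):
        q = BASE
        for start, dur, gain, loss in repairs:
            if day >= start + dur:
                q += gain                       # repair completed
            elif include_downtime and start <= day < start + dur:
                q -= loss                       # highway blocked during repair
        area += min(max(q, 0.0), 1.0)
    return area / HORIZON                       # normalized area under the curve

r_no_dt = resilience(False)
r_dt = resilience(True)
print(f"resilience ignoring downtime:   {r_no_dt:.3f}")
print(f"resilience including downtime:  {r_dt:.3f}")
print(f"overestimation: {100 * (r_no_dt - r_dt) / r_dt:.1f}%")
```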
Procedia PDF Downloads 150
4874 A First Step towards Automatic Evolutionary for Gas Lifts Allocation Optimization
Authors: Younis Elhaddad, Alfonso Ortega
Abstract:
Oil production by means of gas lift is a standard technique in the oil production industry. Optimizing the total amount of oil produced in terms of the amount of gas injected is a key question in this domain. Different methods have been tested to propose a general methodology. Many of them apply well-known numerical methods, and some of them have taken into account the power of evolutionary approaches. Our goal is to provide the experts of the domain with a powerful automatic searching engine into which they can introduce their knowledge in a format close to the one used in their domain, and get solutions comprehensible in the same terms as well. These proposals introduce into the genetic engine the most expressive formal models to represent the solutions to the problem. These algorithms have proven to be as effective as other genetic systems but more flexible and comfortable for the researcher, although they usually require huge search spaces to justify their use, due to the computational resources involved in the formal models. The first step in evaluating the viability of applying our approaches to this realm is to fully understand the domain and to select an instance of the problem (gas lift optimization) in which applying genetic approaches could seem promising. After analyzing the state of the art of this topic, we have decided to choose a previous work from the literature that faces the problem by means of numerical methods. This contribution includes enough detail to be reproduced and complete data to be carefully analyzed. We have designed a classical, simple genetic algorithm just to try to get the same results and to understand the problem in depth. We could easily incorporate the well mathematical model and the well data used by the authors, and easily translate their mathematical model, to be numerically optimized, into a proper fitness function. We have analyzed the 100 curves they use in their experiment, and similar results were observed. In addition, our system has automatically inferred an optimum total amount of injected gas for the field compatible with the sum of the optimum gas injected in each well found by them. We have identified several constraints that could be interesting to incorporate into the optimization process but that could be difficult to express numerically. It could be interesting to automatically propose other mathematical models to fit both the individual well curves and the behaviour of the complete field. All these facts and conclusions justify continuing to explore the viability of applying the more sophisticated approaches previously proposed by our research group.
Keywords: evolutionary automatic programming, gas lift, genetic algorithms, oil production
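A minimal genetic-algorithm sketch of the gas-lift allocation problem: distribute a fixed total amount of injection gas among wells to maximize total oil production. The concave well-response curves and GA settings are placeholders, not the field curves or the numerical model of the referenced work.

```python
import random

random.seed(7)
N_WELLS, TOTAL_GAS = 5, 10.0
COEF = [(4.0, 0.8), (3.0, 0.5), (5.0, 1.1), (2.5, 0.4), (3.5, 0.7)]

def production(alloc):
    # toy diminishing-returns well response: a * q / (1 + b * q) per well
    return sum(a * q / (1.0 + b * q) for (a, b), q in zip(COEF, alloc))

def normalize(alloc):
    s = sum(alloc)
    return [TOTAL_GAS * q / s for q in alloc] if s > 0 else alloc

def random_individual():
    return normalize([random.random() for _ in range(N_WELLS)])

pop = [random_individual() for _ in range(40)]
for _ in range(300):
    pop.sort(key=production, reverse=True)
    survivors = pop[:20]
    children = []
    while len(children) < 20:
        p1, p2 = random.sample(survivors, 2)
        child = [(a + b) / 2 for a, b in zip(p1, p2)]          # crossover
        i = random.randrange(N_WELLS)
        child[i] = max(0.0, child[i] + random.gauss(0, 0.3))   # mutation
        children.append(normalize(child))                      # keep gas budget
    pop = survivors + children

best = max(pop, key=production)
print("gas allocation per well:", [round(q, 2) for q in best])
print("total oil production (toy units):", round(production(best), 3))
```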
Procedia PDF Downloads 162
4873 A Computational Study on Flow Separation Control of Humpback Whale Inspired Sinusoidal Hydrofoils
Authors: J. Joy, T. H. New, I. H. Ibrahim
Abstract:
A computational study on bio-inspired NACA634-021 hydrofoils with leading-edge protuberances has been carried out to investigate their hydrodynamic flow control characteristics at a Reynolds number of 14,000 and different angles of attack. The numerical simulations were performed using ANSYS FLUENT, based on the Reynolds-Averaged Navier-Stokes (RANS) solver mode incorporated with the k-ω Shear Stress Transport (SST) turbulence model. The results obtained indicate varying flow phenomena along the peaks and troughs over the span of the hydrofoils. Compared to the baseline hydrofoil with no leading-edge protuberances, the leading-edge modified hydrofoils tend to reduce the extent of flow separation along the peak regions. In contrast, there are increased flow separations in the trough regions of the hydrofoil with leading-edge protuberances. Interestingly, it was observed that dissimilar flow separation behaviour is produced along different peak- or trough-planes along the hydrofoil span, even though the troughs or peaks are physically similar at each interval for a particular hydrofoil. Significant interactions between adjacent flow structures produced by the leading-edge protuberances have also been observed. These flow interactions are believed to be responsible for the dissimilar flow separation behaviour along physically similar peak- or trough-planes.
Keywords: computational fluid dynamics, flow separation control, hydrofoils, leading-edge protuberances
Procedia PDF Downloads 328
4872 Design an Intelligent Fire Detection System Based on Neural Network and Particle Swarm Optimization
Authors: Majid Arvan, Peyman Beygi, Sina Rokhsati
Abstract:
Timely detection of fire in buildings is of great importance. Employing intelligent methods for data processing in fire detection systems leads to a significant reduction of fire damage at the lowest cost. In this paper, the raw data obtained from the fire detection sensor networks in buildings are processed using intelligent methods based on neural networks, and the likelihood of a fire occurring is predicted. In order to enhance the quality of the system, the noise in the sensor data is reduced by wavelet analysis and the SVD technique. Meanwhile, the proposed neural network is trained using particle swarm optimization (PSO). In the simulation work, the data are collected from the sensor network inside a room and applied to the proposed network. Then the outputs are compared with a conventional MLP network. The simulation results show the superiority of the proposed method over the conventional one.
Keywords: intelligent fire detection, neural network, particle swarm optimization, fire sensor network
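A minimal sketch of one of the preprocessing steps mentioned above: SVD-based denoising of a multi-channel sensor-data matrix, keeping only the dominant singular components. The synthetic sensor signals and the choice of rank are assumptions for illustration; the paper combines this kind of step with wavelet analysis before the PSO-trained neural network.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
clean = np.vstack([np.sin(0.8 * t),                 # 4 correlated sensor channels
                   0.8 * np.sin(0.8 * t + 0.3),
                   1.2 * np.sin(0.8 * t - 0.2),
                   0.5 * np.sin(0.8 * t + 0.1)])
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
rank = 1                                            # assumed dominant rank
denoised = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]  # truncated reconstruction

mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((denoised - clean) ** 2)
print(f"MSE before SVD denoising: {mse_before:.4f}")
print(f"MSE after  SVD denoising: {mse_after:.4f}")
```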
Procedia PDF Downloads 380
4871 An Android Application for ECG Monitoring and Evaluation Using Pan-Tompkins Algorithm
Authors: Cebrail Çiflikli, Emre Öner Tartan
Abstract:
In parallel with the rapid worldwide increase in the elderly population and the spread of unhealthy life habits, there is a significant rise in the number of patients and health problems. The supervision of people with health problems and the detection of people at potential risk bring a considerable cost to the health system and increase the workload of physicians. To provide an efficient solution to this problem, mobile applications have in recent years shown their potential for wide usage in health monitoring. In this paper, we present an Android mobile application that records and evaluates the ECG signal using the Pan-Tompkins algorithm for QRS detection. The application model includes an alarm mechanism proposed for sending a message, including abnormality and location information, to a health supervisor.
Keywords: Android mobile application, ECG monitoring, QRS detection, Pan-Tompkins algorithm
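A minimal sketch of the Pan-Tompkins processing chain on a synthetic ECG: band-pass filtering, differentiation, squaring, moving-window integration, and thresholding. The crude band-pass (difference of moving averages), the fixed threshold, and the synthetic signal are simplifying assumptions; the original algorithm uses specific recursive filters and adaptive thresholds, and the paper's Android implementation is not shown here.

```python
import numpy as np

FS = 200                                   # sampling rate [Hz]
t = np.arange(0, 10, 1.0 / FS)
ecg = 0.1 * np.sin(2 * np.pi * 1.0 * t)    # baseline wander
for beat in np.arange(0.5, 10, 0.8):       # synthetic R peaks every 0.8 s
    ecg += 1.0 * np.exp(-((t - beat) ** 2) / (2 * 0.01 ** 2))

def moving_average(x, n):
    return np.convolve(x, np.ones(n) / n, mode="same")

bandpassed = moving_average(ecg, 5) - moving_average(ecg, 30)   # crude band-pass
derivative = np.gradient(bandpassed)                            # slope information
squared = derivative ** 2                                       # emphasize QRS
integrated = moving_average(squared, int(0.15 * FS))            # 150 ms window

threshold = 0.5 * integrated.max()                              # assumed fixed threshold
above = integrated > threshold
qrs_idx = np.flatnonzero(above[1:] & ~above[:-1])               # rising edges = beats
print("detected beats:", len(qrs_idx))
print("estimated heart rate: %.1f bpm" % (60.0 * len(qrs_idx) / t[-1]))
```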
Procedia PDF Downloads 233
4870 A Novel Approach of NPSO on Flexible Logistic (S-Shaped) Model for Software Reliability Prediction
Authors: Pooja Rani, G. S. Mahapatra, S. K. Pandey
Abstract:
In this paper, we propose a novel approach combining Neural Network and Particle Swarm Optimization methods for software reliability prediction. We first explain how to apply a compound function in a neural network so that a Flexible Logistic (S-shaped) Growth Curve (FLGC) model can be derived. This model mathematically represents software failure as a random process and can be used to evaluate software development status during testing. To avoid becoming trapped in local minima, we apply the Particle Swarm Optimization method to train the proposed model using failure test data sets. We derive our proposed model using computational-intelligence-based modeling; thus, the proposed model becomes the Neuro-Particle Swarm Optimization (NPSO) model. We test the results with different inertia weights for updating particle positions and velocities. The results obtained with the best inertia weight are compared with personal-best-oriented PSO (pPSO), which helps to choose the local best in the network neighborhood. The applicability of the proposed model is demonstrated on a real failure test data set. The results obtained from the experiments show that the proposed model has fairly accurate prediction capability for software reliability.
Keywords: software reliability, flexible logistic growth curve model, software cumulative failure prediction, neural network, particle swarm optimization
Procedia PDF Downloads 344
4869 Imaging of Underground Targets with an Improved Back-Projection Algorithm
Authors: Alireza Akbari, Gelareh Babaee Khou
Abstract:
Ground Penetrating Radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted much attention for the detection of shallow, small subsurface targets such as landmines and unexploded ordnance, and also for imaging behind walls for security applications. For the monostatic arrangement, a single point target appears in the space-time GPR image as a hyperbolic curve because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects the reflectivity of the subsurface targets. With this hyperbolic curve, the resolution along the synthetic aperture direction shows undesired low-resolution features owing to the tails of the hyperbola. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of the buried objects is essential in most GPR applications. Therefore, the hyperbolic behavior in the space-time GPR image often needs to be transformed into a focused pattern showing the object's true location and size together with its EM scattering. The common goal in a typical GPR image is to display the information of the spatial location and the reflectivity of an underground object. Therefore, the main challenge of GPR imaging is to devise an image reconstruction algorithm that provides high resolution and good suppression of strong artifacts and noise. In this paper, first, the standard back-projection (BP) algorithm, adapted to GPR imaging applications, is used for image reconstruction. The standard BP algorithm is limited against strong noise and produces many artifacts, which have adverse effects on subsequent tasks such as target detection. Thus, an improved BP based on cross-correlation between the received signals is proposed for decreasing noise and suppressing artifacts. To improve the quality of the results of the proposed BP imaging algorithm, a weight factor is designed for each point in the imaging region. Compared to the standard BP algorithm, the improved algorithm produces images of higher quality and resolution. The proposed improved BP algorithm was applied to simulated and real GPR data, and the results showed superior artifact suppression and images of high quality and resolution. In order to quantitatively describe the effect of artifact suppression on the imaging results, a focusing parameter was evaluated.
Keywords: algorithm, back-projection, GPR, remote sensing
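A minimal delay-and-sum back-projection sketch for a monostatic B-scan: every image pixel sums the trace samples at the corresponding two-way travel-time delay, which focuses the hyperbolic response of a point target. The synthetic data, propagation velocity, and grid are assumptions; the paper's improved BP additionally weights each contribution using the cross-correlation between received signals, which is not reproduced here.

```python
import numpy as np

C = 1.0e8                      # assumed propagation velocity in the ground [m/s]
FS = 2.0e9                     # time sampling rate [Hz]
antenna_x = np.linspace(0.0, 1.0, 41)        # scan positions along the surface
target = (0.5, 0.4)                          # buried point target (x, depth) [m]

n_t = 512
traces = np.zeros((antenna_x.size, n_t))
for i, xa in enumerate(antenna_x):           # simulate the hyperbolic response
    dist = np.hypot(xa - target[0], target[1])
    idx = int(round(2.0 * dist / C * FS))    # two-way travel time -> sample index
    if idx < n_t:
        traces[i, idx] = 1.0

xs = np.linspace(0.0, 1.0, 81)               # image grid
zs = np.linspace(0.05, 0.8, 76)
image = np.zeros((zs.size, xs.size))
for i, xa in enumerate(antenna_x):           # back-project every trace
    for kx, x in enumerate(xs):
        d = np.hypot(xa - x, zs)             # distances to all depths at once
        idx = np.round(2.0 * d / C * FS).astype(int)
        valid = idx < n_t
        image[valid, kx] += traces[i, idx[valid]]

kz, kx = np.unravel_index(np.argmax(image), image.shape)
print(f"focused target estimate: x = {xs[kx]:.3f} m, depth = {zs[kz]:.3f} m")
```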
Procedia PDF Downloads 452
4868 An Experimental Investigation of the Effect of Control Algorithm on the Energy Consumption and Temperature Distribution of a Household Refrigerator
Authors: G. Peker, Tolga N. Aynur, E. Tinar
Abstract:
In order to determine the energy consumption level and cooling characteristics of a domestic refrigerator controlled with various cooling system algorithms, a side-by-side type (SBS) refrigerator was tested in temperature- and humidity-controlled chamber conditions. Two different control algorithms, the so-called drop-in and frequency-controlled variable-capacity compressor algorithms, were tested on the same refrigerator. The refrigerator cooling characteristics were investigated for both cases, and the results were compared with each other. The most important comparison parameters between the two algorithms were taken as temperature distribution, energy consumption, evaporation and condensation temperatures, and refrigerator run times. Standard energy consumption tests were carried out on the same appliance and resulted in almost the same energy consumption levels, with a difference of 1.5%. With these two different control algorithms, the power consumption profile of the refrigerator was found to be similar. By following the associated energy measurement standard, the temperature values of the test packages were measured to be slightly higher for the frequency-controlled algorithm compared to the drop-in algorithm. This paper contains the details of this experimental study conducted with different cooling control algorithms and compares the findings under the same standard conditions.
Keywords: control algorithm, cooling, energy consumption, refrigerator
Procedia PDF Downloads 373
4867 Sparsity-Based Unsupervised Unmixing of Hyperspectral Imaging Data Using Basis Pursuit
Authors: Ahmed Elrewainy
Abstract:
Mixing in hyperspectral imaging occurs due to the low spatial resolution of the cameras used. The pure materials present in the scene, called “endmembers,” share the spectral pixels in different amounts, called “abundances.” Unmixing the data cube to identify the endmembers present is an important task in the analysis of these images. Unsupervised unmixing is done with no information about the given data cube. Sparsity is one of the recent approaches used in source recovery and unmixing techniques. The l1-norm optimization problem “basis pursuit” can be used as a sparsity-based approach to solve this unmixing problem, where the endmembers are assumed to be sparse in an appropriate domain known as a dictionary. This optimization problem is solved using the proximal method of iterative thresholding. The l1-norm basis pursuit optimization problem, as a sparsity-based unmixing technique, was used to unmix real and synthetic hyperspectral data cubes.
Keywords: basis pursuit, blind source separation, hyperspectral imaging, spectral unmixing, wavelets
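A minimal iterative soft-thresholding (ISTA) sketch for an l1-regularized least-squares problem of the kind described above: recover sparse abundances x from a mixed-pixel spectrum y = D x + noise, with D a dictionary of endmember spectra. The random dictionary, sparsity level, and parameters are synthetic assumptions, not the paper's data or exact formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_endmembers = 100, 30
D = rng.standard_normal((n_bands, n_endmembers))
D /= np.linalg.norm(D, axis=0)                   # unit-norm dictionary columns

x_true = np.zeros(n_endmembers)
x_true[[3, 11, 22]] = [0.5, 0.3, 0.2]            # three active endmembers
y = D @ x_true + 0.01 * rng.standard_normal(n_bands)

lam = 0.02                                       # sparsity weight
step = 1.0 / np.linalg.norm(D, 2) ** 2           # 1 / Lipschitz constant

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n_endmembers)
for _ in range(500):                             # proximal gradient iterations
    grad = D.T @ (D @ x - y)
    x = soft_threshold(x - step * grad, step * lam)

print("recovered support:", np.flatnonzero(np.abs(x) > 1e-3))
print("recovered abundances:", np.round(x[np.abs(x) > 1e-3], 3))
```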
Procedia PDF Downloads 195
4866 The Effect of Initial Sample Size and Increment in Simulation Samples on a Sequential Selection Approach
Authors: Mohammad H. Almomani
Abstract:
In this paper, we investigate the effect of the initial sample size and the increment in simulation samples on the performance of a sequential approach used to select the top m designs when the number of alternative designs is very large. The sequential approach consists of two stages. In the first stage, ordinal optimization is used to select a subset that overlaps with the set of the actual best k% designs with high probability. Then, in the second stage, optimal computing budget allocation is used to select the top m designs from the selected subset. We apply the selection approach to a generic example under various parameter settings, with different choices of the initial sample size and the increment in simulation samples, to explore the impacts on the performance of this approach. The results show that the choice of initial sample size and increment in simulation samples does affect the performance of the selection approach.
Keywords: large scale problems, optimal computing budget allocation, ordinal optimization, simulation optimization
Procedia PDF Downloads 355
4865 Performance Comparison of Joint Diagonalization Structure (JDS) Method and Wideband MUSIC Method
Authors: Sandeep Santosh, O. P. Sahu
Abstract:
We simulate an efficient multiple wideband and nonstationary source localization algorithm that exploits both the non-stationarity of the signals and the array geometric information. This algorithm is based on the joint diagonalization structure (JDS) of a set of short-time power spectrum matrices at different time instants of each frequency bin. JDS can be used for quick and accurate localization of multiple non-stationary sources. The JDS algorithm is a one-stage process, i.e., it directly searches for the directions of arrival (DOAs) over the continuous location parameter space. The JDS method requires that the number of sensors is not less than the number of sources. From the simulation results, one can conclude that the JDS method can localize two sources when their angular separation is not less than 7 degrees, whereas wideband MUSIC is able to localize two sources only for a separation of 18 degrees.
Keywords: joint diagonalization structure (JDS), wideband direction of arrival (DOA), wideband MUSIC
Procedia PDF Downloads 468
4864 Density-based Denoising of Point Cloud
Authors: Faisal Zaman, Ya Ping Wong, Boon Yian Ng
Abstract:
Point cloud source data for surface reconstruction are usually contaminated with noise and outliers. To overcome this, we present a novel approach using a modified kernel density estimation (KDE) technique with bilateral filtering to remove noisy points and outliers. First, we present a method for estimating the optimal bandwidth of multivariate KDE using the particle swarm optimization technique, which ensures the robust performance of the density estimation. Then we use the mean-shift algorithm to find the local maxima of the density estimate, which give the centroids of the clusters. We then compute the distance of each point from its centroid; points belonging to outliers are removed by an automatic thresholding scheme, which yields an accurate and economical point surface. The experimental results show that our approach is comparably robust and efficient.
Keywords: point preprocessing, outlier removal, surface reconstruction, kernel density estimation
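A minimal 2-D sketch of the KDE/mean-shift step described above: each point is shifted toward a local maximum of a Gaussian kernel density estimate, and points whose travelled distance exceeds an automatic threshold are discarded as outliers. The bandwidth is a fixed assumption rather than the paper's PSO-optimized value, and the synthetic point set stands in for a real 3-D scan.

```python
import numpy as np

rng = np.random.default_rng(2)
surface = np.column_stack([np.linspace(0, 1, 200),
                           np.sin(np.linspace(0, np.pi, 200))])
noisy = surface + 0.01 * rng.standard_normal(surface.shape)
outliers = rng.uniform([-0.2, -0.5], [1.2, 1.5], size=(10, 2))
points = np.vstack([noisy, outliers])

H = 0.05                                   # assumed KDE bandwidth

def mean_shift(p, data, h, iters=20):
    # move point p toward the local maximum of the Gaussian KDE
    for _ in range(iters):
        w = np.exp(-np.sum((data - p) ** 2, axis=1) / (2 * h * h))
        p = (w[:, None] * data).sum(axis=0) / w.sum()
    return p

modes = np.array([mean_shift(p, points, H) for p in points])
shift = np.linalg.norm(modes - points, axis=1)
threshold = shift.mean() + 2 * shift.std()  # automatic thresholding
kept = points[shift <= threshold]
print(f"kept {kept.shape[0]} of {points.shape[0]} points "
      f"({points.shape[0] - kept.shape[0]} removed as outliers)")
```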
Procedia PDF Downloads 344
4863 Adaptive Envelope Protection Control for the below and above Rated Regions of Wind Turbines
Authors: Mustafa Sahin, İlkay Yavrucuk
Abstract:
This paper presents a wind turbine envelope protection control algorithm that protects Variable Speed Variable Pitch (VSVP) wind turbines from damage during operation throughout their below and above rated regions, i.e., from cut-in to cut-out wind speed. The proposed approach uses a neural network that can adapt to turbines and their operating points. An algorithm monitors instantaneous wind and turbine states, predicts a wind speed that would push the turbine to a pre-defined envelope limit and, when necessary, realizes an avoidance action. Simulations are realized using the MS Bladed Wind Turbine Simulation Model for the NREL 5 MW wind turbine equipped with baseline controllers. In all simulations, with the proposed algorithm, the turbine is observed to operate safely within the allowable limit throughout the below and above rated regions. Two example cases, adaptation to turbine operating points in the below and above rated regions and the corresponding protections, are investigated in simulations to show the capability of the proposed envelope protection system (EPS) algorithm, which reduces excessive wind turbine loads and is expected to increase the turbine service life.
Keywords: adaptive envelope protection control, limit detection and avoidance, neural networks, ultimate load reduction, wind turbine power control
Procedia PDF Downloads 136
4862 Vector Quantization Based on Vector Difference Scheme for Image Enhancement
Authors: Biji Jacob
Abstract:
The vector quantization algorithm uses minimum-distance calculations for codebook generation, a time-consuming computation performed on each pixel value that leads to high computational complexity. The codebook is updated by comparing the distance of each vector to its centroid vector as a measure of closeness. In this paper, vector quantization is modified based on a vector difference algorithm for image enhancement purposes. In the proposed scheme, the vector differences between the vectors are considered as the new-generation vectors, or new codebook vectors. The codebook is updated by comparing each new-generation vector against a threshold value, keeping those with minimum error with respect to the parent vector. The minimum error decides the fitness of each newly generated vector. Thus, the codebook is generated in an adaptive manner, and the fitness value is determined for the suppression of the degraded portion of the image, thereby leading to image enhancement through the adaptive searching capability of vector quantization with the vector difference algorithm. Experimental results show that the vector difference scheme efficiently modifies the vector quantization algorithm for enhancing the image, with peak signal-to-noise ratio (PSNR), mean square error (MSE), and Euclidean distance (E_dist) as the performance parameters.
Keywords: codebook, image enhancement, vector difference, vector quantization
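A minimal sketch of the conventional minimum-distance codebook generation (LBG / k-means style) that the proposed vector-difference scheme modifies. The training vectors (4-pixel blocks) and codebook size are synthetic assumptions; the vector-difference update itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
# pretend image blocks: 1000 training vectors of 4 pixel values each
training = np.vstack([rng.normal(50, 5, (500, 4)),
                      rng.normal(180, 10, (500, 4))])

K = 8
codebook = training[rng.choice(len(training), K, replace=False)].copy()

for _ in range(20):                               # iterative codebook update
    d = np.linalg.norm(training[:, None, :] - codebook[None, :, :], axis=2)
    nearest = d.argmin(axis=1)                    # minimum-distance assignment
    for k in range(K):
        members = training[nearest == k]
        if len(members):
            codebook[k] = members.mean(axis=0)    # move codevector to centroid

mse = np.mean((training - codebook[nearest]) ** 2)
print("codebook vectors:\n", np.round(codebook, 1))
print("quantization MSE:", round(float(mse), 2))
```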
Procedia PDF Downloads 267
4861 A Resource Optimization Strategy for CPU (Central Processing Unit) Intensive Applications
Authors: Junjie Peng, Jinbao Chen, Shuai Kong, Danxu Liu
Abstract:
Under traditional resource allocation strategies, the usage of resources on physical servers in a cloud data center is highly uncertain. Resources are wasted if too few tasks are assigned; on the contrary, servers become overloaded if too many tasks are assigned. This is especially obvious when the applications are of the same type, because of their shared resource preferences. Considering that CPU-intensive applications are among the most common types of applications in the cloud, we studied an optimization strategy for CPU-intensive applications on the same server. We used resource preferences to analyze the case in which multiple CPU-intensive applications run simultaneously and put forward a model that can predict the execution time of CPU-intensive applications running simultaneously. Based on the prediction model, we propose a method to select the appropriate number of applications for a machine. Experiments show that the model can accurately predict the execution time of CPU-intensive applications. To improve the execution efficiency of applications, we propose a priority-based scheduling model for CPU-intensive applications. Extensive experiments verify the validity of the scheduling model.
Keywords: cloud computing, CPU intensive applications, resource optimization, strategy
Procedia PDF Downloads 278
4860 Sensitivity Analysis of Prestressed Post-Tensioned I-Girder and Deck System
Authors: Tahsin A. H. Nishat, Raquib Ahsan
Abstract:
Sensitivity analysis of the design parameters of an optimization procedure can become a significant factor while designing any structural system. The objectives of the study are to analyze the sensitivity of the deck slab thickness parameter obtained from both the conventional and the optimum design methodologies of a pre-stressed post-tensioned I-girder and deck system, and to compare the relative significance of slab thickness. For the analysis of the conventional method, the values of 14 design parameters obtained by the conventional iterative design method for a real-life I-girder bridge project have been considered. For the analysis of the optimization method, cost optimization of this system has been carried out using the global optimization methodology 'Evolutionary Operation (EVOP)'. The problem, from which the optimum values of the 14 design parameters have been obtained, contains 14 explicit constraints and 46 implicit constraints. For both types of design parameters, sensitivity analysis has been conducted on the deck slab thickness parameter, which can become highly sensitive at the obtained optimum solution. Deviations of the slab thickness on both the upper and lower side of its optimum value have been considered, reflecting its realistic possible range of variation during construction. In this procedure, the remaining parameters have been kept unchanged. For small deviations from the optimum value, compliance with the explicit and implicit constraints has been examined. Variations in the cost have also been estimated. It is found that, without violating any constraint, the deck slab thickness obtained by the conventional method can be increased by up to 25 mm, whereas the slab thickness obtained by cost optimization can be increased only by up to 0.3 mm. The obtained result suggests that slab thickness is less sensitive in the conventional method of design. Therefore, for realistic design purposes, sensitivity analysis should be conducted for either design procedure of the girder and deck system.
Keywords: sensitivity analysis, optimum design, evolutionary operations, PC I-girder, deck system
Procedia PDF Downloads 137
4859 The Bernstein Expansion for Exponentials in Taylor Functions: Approximation of Fixed Points
Authors: Tareq Hamadneh, Jochen Merker, Hassan Al-Zoubi
Abstract:
Bernstein's expansion for exponentials in Taylor functions provides lower and upper optimization values for the range of the original function. These values converge to the original function if the degree is elevated or the domain is subdivided. A Taylor polynomial can be applied so that the exponential becomes a polynomial of finite degree over a given domain. The Bernstein basis has two main properties: its sum equals 1, and it is positive for all x ∈ (0, 1). In this work, we prove the existence of fixed points for exponential functions in a given domain using the optimization values of Bernstein. The Bernstein basis of finite degree T over a domain D is defined non-negatively. Any polynomial p of degree t can be expanded into the Bernstein form of maximum degree t ≤ T, where we only need to compute the Bernstein coefficients in order to optimize the original polynomial. The main property is that p(x) is bounded by the minimum and maximum Bernstein coefficients (the Bernstein bound). If the bound is contained in the given domain, then we say that p(x) has fixed points in the same domain.
Keywords: Bernstein polynomials, stability of control functions, numerical optimization, Taylor function
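A minimal sketch of the enclosure idea described above: compute the Bernstein coefficients of the degree-T Taylor polynomial of exp(x) on [0, 1] and use their minimum and maximum as range bounds (the Bernstein bound), then check whether the bound lies inside the domain as the fixed-point criterion. The degree and domain are assumptions for illustration.

```python
from math import comb, exp, factorial

T = 6
taylor = [1.0 / factorial(i) for i in range(T + 1)]   # exp(x) ~ sum x^i / i!

# Bernstein coefficients on [0, 1]: b_j = sum_i C(j, i)/C(T, i) * a_i
bern = [sum(comb(j, i) / comb(T, i) * taylor[i] for i in range(j + 1))
        for j in range(T + 1)]

lo, hi = min(bern), max(bern)
print("Bernstein coefficients:", [round(b, 4) for b in bern])
print(f"range enclosure of the Taylor polynomial on [0,1]: [{lo:.4f}, {hi:.4f}]")
print("true range of exp on [0,1]: [1.0000, %.4f]" % exp(1.0))
print("enclosure inside [0,1] (fixed-point test)?", 0.0 <= lo and hi <= 1.0)
```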
Procedia PDF Downloads 135
4858 Mathematical Model and Algorithm for the Berth and Yard Resource Allocation at Seaports
Authors: Ming Liu, Zhihui Sun, Xiaoning Zhang
Abstract:
This paper studies a deterministic container transportation problem, jointly optimizing the berth allocation, quay crane assignment, and yard storage allocation at container ports. The problem is formulated as an integer program to coordinate the decisions. Because of its large scale, it is then transformed into a set partitioning formulation, and a branch-and-price algorithmic framework is provided to solve it.
Keywords: branch-and-price, container terminal, joint scheduling, maritime logistics
Procedia PDF Downloads 293