Search results for: battery grading algorithm
3925 Comparison of Parallel CUDA and OpenMP Implementations of Memetic Algorithms for Solving Optimization Problems
Authors: Jason Digalakis, John Cotronis
Abstract:
Memetic algorithms (MAs) are useful for solving optimization problems. Searching the solution space of optimization problems with large dimensions is quite difficult, and using all the cores of a system is a challenge. In this study, a sequential implementation of the memetic algorithm is converted into a concurrent version, which is executed on the cores of both the CPU and the GPU. For this reason, the OpenMP and CUDA libraries are applied to the parallel algorithm to obtain concurrent execution on the CPU and GPU, respectively. The aim of this study is to compare the CPU and GPU implementations of the memetic algorithm. For this purpose, fourteen benchmark functions are selected as test problems. The obtained results indicate that our approach leads to speedups of up to five thousand times compared to one CPU thread while maintaining reasonable result quality. This clearly shows that GPUs have the potential to accelerate MAs and allow them to solve much more complex tasks.
Keywords: memetic algorithm, CUDA, GPU-based memetic algorithm, open multi processing, multimodal functions, unimodal functions, non-linear optimization problems
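For illustration, a minimal sequential memetic-algorithm loop of the kind being parallelized is sketched below in Python; the benchmark function, population size and operators are assumptions for the example, not the authors' CUDA/OpenMP implementation.

```python
# Minimal sequential memetic-algorithm sketch (illustrative only): a genetic
# algorithm whose offspring are refined by a simple hill-climbing local
# search on the sphere benchmark function.
import random

DIM, POP, GENS = 10, 40, 200

def sphere(x):                       # unimodal benchmark function
    return sum(v * v for v in x)

def local_search(x, step=0.1, iters=20):
    best, best_f = list(x), sphere(x)
    for _ in range(iters):
        cand = [v + random.uniform(-step, step) for v in best]
        f = sphere(cand)
        if f < best_f:
            best, best_f = cand, f
    return best

pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=sphere)
    parents = pop[:POP // 2]
    children = []
    for _ in range(POP - len(parents)):
        a, b = random.sample(parents, 2)
        child = [(ai + bi) / 2 + random.gauss(0, 0.05) for ai, bi in zip(a, b)]
        children.append(local_search(child))     # memetic (local search) step
    pop = parents + children

print("best fitness:", sphere(min(pop, key=sphere)))
```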
Procedia PDF Downloads 101
3924 The Potential of 48V HEV in Real Driving Operation
Authors: Mark Schudeleit, Christian Sieg, Ferit Küçükay
Abstract:
This publication focuses on the limits and potentials of 48V hybrid systems, which, especially due to their cost advantages, are an attractive alternative to established high-voltage HEVs and thus will gain relevant market share in the future. Firstly, a market overview is given which covers the currently known 48V hybrid concepts and demonstrators. These topologies are analyzed and evaluated regarding system power and battery capacity as well as their implemented hybrid functions. The potential in fuel savings and CO2 reduction is calculated, followed by the customer-relevant dimensioning of the electric motor and the battery. For both, measured data from real customer operation are used. Subsequently, the CO2 saving potentials of the customer-oriented dimensioned powertrain are presented for the NEDC and for customer operation. By comparing the newly defined drivetrain with existing 48V systems, the question can be answered whether current systems are dimensioned optimally for customer operation or just for legislated driving cycles.
Keywords: 48V hybrid systems, market comparison, requirements and potentials in customer operation, customer-oriented dimensioning, CO2 savings
Procedia PDF Downloads 549
3923 Creation of S-Box in Blowfish Using AES
Authors: C. Rekha, G. N. Krishnamurthy
Abstract:
This paper attempts to develop a different approach for the key scheduling algorithm that uses both the Blowfish and AES algorithms. The main drawback of the Blowfish algorithm is that it takes more time to create the S-box entries. To overcome this, we replace the S-box creation process in Blowfish with the key-dependent S-box creation from AES, without affecting the basic operation of Blowfish. The method proposed in this paper uses the good features of Blowfish as well as AES, and the paper demonstrates the performance of Blowfish and the new algorithm by considering different aspects of security, namely encryption quality, key sensitivity, and the correlation of horizontally adjacent pixels in an encrypted image.
Keywords: AES, blowfish, correlation coefficient, encryption quality, key sensitivity, s-box
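The adjacent-pixel correlation metric mentioned above can be computed as in the following sketch, assuming a 2-D grayscale image as a NumPy array; a strong cipher should drive the value for the encrypted image towards zero. The images here are stand-ins, not the paper's test data.

```python
# Correlation of horizontally adjacent pixels, one of the security metrics
# named in the abstract. A strong cipher should push this value close to
# zero for the ciphertext image.
import numpy as np

def horizontal_adjacent_correlation(img):
    x = img[:, :-1].astype(float).ravel()   # each pixel
    y = img[:, 1:].astype(float).ravel()    # its right-hand neighbour
    return np.corrcoef(x, y)[0, 1]

plain = np.tile(np.arange(256, dtype=np.uint8), (256, 1))        # smooth image
cipher = np.random.randint(0, 256, (256, 256), dtype=np.uint8)   # noise-like
print(horizontal_adjacent_correlation(plain))    # close to 1
print(horizontal_adjacent_correlation(cipher))   # close to 0
```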
Procedia PDF Downloads 226
3922 Nonuniformity Correction Technique in Infrared Video Using Feedback Recursive Least Square Algorithm
Authors: Flavio O. Torres, Maria J. Castilla, Rodrigo A. Augsburger, Pedro I. Cachana, Katherine S. Reyes
Abstract:
In this paper, we present a scene-based nonuniformity correction method using a modified recursive least square algorithm with a feedback system on the updates. The feedback is designed to remove the impulsive noise contamination produced by the recursive least square algorithm by monitoring the output of the proposed algorithm. The key advantage of the method is its capacity to estimate detector parameters and then compensate for impulsive noise contamination of the image on a frame-by-frame basis. We define the algorithm and present several experimental results to demonstrate the efficacy of the proposed method in comparison to several previously published recursive least square-based methods. We show that the proposed method removes impulsive noise contamination from the images.
Keywords: infrared focal plane arrays, infrared imaging, least mean square, nonuniformity correction
Procedia PDF Downloads 143
3921 3D Mesh Coarsening via Uniform Clustering
Authors: Shuhua Lai, Kairui Chen
Abstract:
In this paper, we present a fast and efficient mesh coarsening algorithm for 3D triangular meshes. This approach can be applied to very complex 3D meshes of arbitrary topology and with millions of vertices. The algorithm is based on the clustering of the input mesh elements: the faces of the input mesh are divided into a given number of clusters by approximating the Centroidal Voronoi Tessellation of the input mesh. Once a clustering is achieved, it provides an efficient way to construct uniform tessellations and therefore leads to good coarsening of polygonal meshes. With the proliferation of 3D scanners, this coarsening algorithm is particularly useful for reverse engineering applications of 3D models, which in many cases are dense, non-uniform, irregular and of arbitrary topology. Examples demonstrating the effectiveness of the new algorithm are also included in the paper.
Keywords: coarsening, mesh clustering, shape approximation, mesh simplification
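The clustering step can be illustrated with a simple k-means (Lloyd) iteration on triangle centroids, used here as a crude stand-in for approximating the Centroidal Voronoi Tessellation; the sketch ignores mesh connectivity and uses random geometry, so it only conveys the flavor of the approach, not the authors' algorithm.

```python
# Sketch of the clustering step: k-means (Lloyd iterations) on triangle
# centroids as a stand-in for approximating a Centroidal Voronoi
# Tessellation of the faces. The real algorithm also respects mesh
# connectivity; this only illustrates the idea on raw geometry.
import numpy as np

def cluster_faces(vertices, faces, k, iters=30):
    centroids = vertices[faces].mean(axis=1)          # one point per triangle
    rng = np.random.default_rng(0)
    seeds = centroids[rng.choice(len(centroids), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(centroids[:, None, :] - seeds[None, :, :], axis=2)
        labels = d.argmin(axis=1)                      # assign face -> cluster
        for c in range(k):
            if np.any(labels == c):
                seeds[c] = centroids[labels == c].mean(axis=0)
    return labels                                      # cluster id per face

verts = np.random.rand(100, 3)
tris = np.random.randint(0, 100, (300, 3))
print(np.bincount(cluster_faces(verts, tris, k=8)))
```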
Procedia PDF Downloads 380
3920 Clean Sky 2 Project LiBAT: Light Battery Pack for High Power Applications in Aviation – Simulation Methods in Early Stage Design
Authors: Jan Dahlhaus, Alejandro Cardenas Miranda, Frederik Scholer, Maximilian Leonhardt, Matthias Moullion, Frank Beutenmuller, Julia Eckhardt, Josef Wasner, Frank Nittel, Sebastian Stoll, Devin Atukalp, Daniel Folgmann, Tobias Mayer, Obrad Dordevic, Paul Riley, Jean-Marc Le Peuvedic
Abstract:
Electrical and hybrid aerospace technologies pose very challenging demands on the battery pack, especially with respect to weight and power. In the Clean Sky 2 research project LiBAT (funded by the EU), the consortium is currently building an ambitious prototype with state-of-the-art cells that shows the potential of an intelligent pack design with a high level of integration, especially with respect to thermal management and power electronics. For the latter, innovative multi-level-inverter technology is used to realize the required power converting functions with reduced equipment. In this talk the key approaches and methods of the LiBAT project will be presented and central results shown. Special focus will be set on the simulative methods used to support the early design and development stages from an overall system perspective. The applied methods can efficiently handle multiple domains and deal with different time and length scales, thus allowing the analysis and optimization of overall- or sub-system behavior. It will be shown how these simulations provide valuable information and insights for the efficient evaluation of concepts. As a result, the construction and iteration of hardware prototypes have been reduced and development cycles shortened.
Keywords: electric aircraft, battery, Li-ion, multi-level-inverter, Novec
Procedia PDF Downloads 165
3919 Preparation and Performance of Polyphenylene Oxide-Based Anion Exchange Membrane for Vanadium Redox Flow Battery
Authors: Mi-Jung Park, Min-Hwa Lim, Ho-Young Jung
Abstract:
A polyphenylene oxide (PPO)-based anion exchange membrane based on the functionalization of bromomethylated PPO using 1-methylimidazole was fabricated for vanadium redox flow application. The imidazolium-bromomethylated PPO (Im-bPPO) showed lower permeability of VO²⁺ ions (2.9×10⁻¹⁴ m²/sec) compared to Nafion 212 (2.3×10⁻¹² m²/sec) and FAP-450 (7.9×10⁻¹⁴ m²/sec). Even though the Im-bPPO membrane has lower permeability, the energy efficiency of the VRFB with the Im-bPPO membrane was slightly lower than that of Nafion and FAP-450. The Im-bPPO membrane exhibits good voltage efficiency compared to FAP-450 and Nafion 212 because of its better ion conductivity. The Im-bPPO membrane showed good performance overall, but a decline in performance at later cycles was observed.
Keywords: anion exchange membranes, vanadium redox flow battery, polyphenylene oxide, energy efficiency (EE)
Procedia PDF Downloads 317
3918 Electrocatalysts for Lithium-Sulfur Energy Storage Systems
Authors: Mirko Ante, Şeniz Sörgel, Andreas Bund
Abstract:
Li-S (lithium-sulfur) battery systems provide very high specific gravimetric energy (2600 Wh/kg) and volumetric energy density (2800 Wh/l). Hence, Li-S batteries are one of the key technologies for both the upcoming electromobility and stationary applications. Furthermore, the Li-S battery system is potentially cheap and environmentally benign. However, the technical implementation suffers from poor cycling stability, low charge and discharge rates, and an incomplete understanding of the complex polysulfide reaction mechanism. The aim of this work is to develop an effective electrocatalyst for the polysulfide reactions so that the electrode kinetics of the sulfur half-cell will be improved. Accordingly, the overvoltage will be decreased, and the efficiency of the cell will be increased. An enhanced electroactive surface additionally improves the charge and discharge rates. To reach this goal, functionalized electrocatalytic coatings are investigated to accelerate the kinetics of the polysulfide reactions. In order to determine a suitable electrocatalyst, apparent exchange current densities of a variety of materials (Ni, Co, Pt, Cr, Al, Cu, ITO, stainless steel) have been evaluated in a polysulfide-containing electrolyte by potentiodynamic measurements and a Butler-Volmer fit including diffusion limitation. The samples have been examined by Scanning Electron Microscopy (SEM) after the potentiodynamic measurements. Up to now, our work shows that cobalt is a promising material with good electrocatalytic properties for the polysulfide reactions and good chemical stability in the system. Furthermore, electrodeposition from a modified Watt's nickel electrolyte with a sulfur source seems to provide an autocatalytic effect, but the electrocatalytic behavior decreases after several cycles of the current-potential curve.
Keywords: electrocatalyst, energy storage, lithium sulfur battery, sulfur electrode materials
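Extracting an apparent exchange current density from a polarization curve by a Butler-Volmer fit can be sketched as follows; the data are synthetic, and the diffusion-limitation term used in the actual fit is omitted for brevity.

```python
# Sketch of estimating an apparent exchange current density i0 by fitting
# the Butler-Volmer equation to a polarization curve. Synthetic data only;
# the paper's fit additionally includes a diffusion-limitation term.
import numpy as np
from scipy.optimize import curve_fit

F, R, T = 96485.0, 8.314, 298.15

def butler_volmer(eta, i0, alpha):
    return i0 * (np.exp(alpha * F * eta / (R * T))
                 - np.exp(-(1 - alpha) * F * eta / (R * T)))

eta = np.linspace(-0.05, 0.05, 101)                    # overpotential in V
i_meas = butler_volmer(eta, 2e-4, 0.5) + np.random.normal(0, 1e-6, eta.size)

(i0_fit, alpha_fit), _ = curve_fit(butler_volmer, eta, i_meas, p0=(1e-4, 0.5))
print(f"i0 = {i0_fit:.2e} A/cm^2, alpha = {alpha_fit:.2f}")
```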
Procedia PDF Downloads 368
3917 Synthesis of LiMₓMn₂₋ₓO₄ Doped Co, Ni, Cr and Its Characterization as Lithium Battery Cathode
Authors: Dyah Purwaningsih, Roto Roto, Hari Sutrisno
Abstract:
Manganese dioxide (MnO₂) and its derivatives are among the most widely used materials for the positive electrode in both primary and rechargeable lithium batteries. The MnO₂-derived compound LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) is one of the leading candidates for positive electrode materials in lithium batteries as it is abundant, low cost and environmentally friendly. Over the years, synthesis of LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) has been carried out using various methods including sol-gel, gas condensation, spray pyrolysis, and ceramics. Problems with these methods persist, including high cost (making them commercially inapplicable) and the need for high temperatures (environmentally unfriendly). This research aims to: (1) synthesize LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) by the reflux technique; (2) develop a microstructure analysis method from LiMₓMn₂₋ₓO₄ powder XRD data with the two-stage method; (3) study the electrical conductivity of LiMₓMn₂₋ₓO₄. This research developed the synthesis of LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) by reflux. The starting materials, consisting of Mn(CH₃COO)₂·4H₂O and Na₂S₂O₈, were refluxed for 10 hours at 120°C to form β-MnO₂. The doping of Co, Ni and Cr was carried out using the solid-state method with LiOH to form LiMₓMn₂₋ₓO₄. The instruments used included XRD, SEM-EDX, XPS, TEM, SAA, TG/DTA, FTIR, an LCR meter and an eight-channel battery analyzer. Microstructure analysis of LiMₓMn₂₋ₓO₄ was carried out on powder XRD data by the two-stage method using the FullProf program integrated into WinPlotR and the Oscail program, as well as on binding energy data from XPS. The morphology of LiMₓMn₂₋ₓO₄ was studied with SEM-EDX, TEM, and SAA. The thermal stability test was performed with TG/DTA, and the electrical conductivity was studied from the LCR meter data. The specific capacity of LiMₓMn₂₋ₓO₄ as a lithium battery cathode was tested using an eight-channel battery analyzer. The results showed that the synthesis of LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) was successfully carried out by reflux. The optimal calcination temperature is 750°C. XRD characterization shows that LiMn₂O₄ has a cubic crystal structure with the Fd3m space group. Using CheckCell in WinPlotR, it was found that increasing the Li/Mn mole ratio does not result in changes to the LiMn₂O₄ crystal structure. The doping of Co, Ni and Cr in LiMₓMn₂₋ₓO₄ (x = 0.02; 0.04; 0.06; 0.08; 0.10) does not change the cubic Fd3m crystal structure. All the formed crystals are polycrystals with sizes of 100-450 nm. Characterization of the LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) microstructure by the two-stage method shows shrinkage of the lattice parameter and cell volume. Based on its range of capacitance, the conductivity obtained for LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) is an ionic conductivity with varying capacitance. The specific battery capacities at a voltage of 4799.7 mV for LiMn₂O₄; Li₁.₀₈Mn₁.₉₂O₄; LiCo₀.₁Mn₁.₉O₄; LiNi₀.₁Mn₁.₉O₄ and LiCr₀.₁Mn₁.₉O₄ are 88.62 mAh/g; 2.73 mAh/g; 89.39 mAh/g; 85.15 mAh/g; and 1.48 mAh/g, respectively.
Keywords: LiMₓMn₂₋ₓO₄, solid-state, reflux, two-stage method, ionic conductivity, specific capacity
Procedia PDF Downloads 193
3916 Proposing a Boundary Coverage Algorithm for Underwater Sensor Network
Authors: Seyed Mohsen Jameii
Abstract:
Wireless underwater sensor networks are a type of sensor network located in underwater environments and linked together by acoustic waves. Applications of these kinds of networks include monitoring of pollutants (chemical, biological, and nuclear), oil field detection, prediction of the likelihood of a tsunami in coastal areas, the use of wireless sensor nodes to monitor passing submarines, and determination of appropriate locations for anchoring ships. This paper proposes a boundary coverage algorithm for intrusion detection in underwater sensor networks. In the first phase of the proposed algorithm, optimal deployment of nodes in the water is carried out. In the second phase, after the nodes have been deployed at the proper depth, clustering is executed to reduce the exchange of messages between the sensors. In the third phase, a divide-and-conquer algorithm is used to save energy and increase network efficiency. The simulation results demonstrate the efficiency of the proposed algorithm.
Keywords: boundary coverage, clustering, divide and conquer, underwater sensor nodes
Procedia PDF Downloads 341
3915 Coupling of Two Discretization Schemes for the Lattice Boltzmann Equation
Authors: Tobias Horstmann, Thomas Le Garrec, Daniel-Ciprian Mincu, Emmanuel Lévêque
Abstract:
Despite the efficiency and low dissipation of the stream-collide formulation of the Lattice Boltzmann (LB) algorithm, which is nowadays implemented in many commercial LBM solvers, there are certain situations, e.g. mesh transition, in which a classical finite-volume or finite-difference formulation of the LB algorithm still bears advantages. In this paper, we present an algorithm that combines the node-based streaming of the distribution functions with a second-order finite-volume discretization of the advection term of the BGK-LB equation on a uniform D2Q9 lattice. It is shown that such a coupling is possible for a multi-domain approach as long as the overlap, or buffer zone, between two domains spans at least 2Δx. This also implies that a direct coupling (without a buffer zone) of a stream-collide and finite-volume LB algorithm on a single grid is not stable. The critical parameter in the coupling is the CFL number of 1 that is imposed by the stream-collide algorithm. Nevertheless, an explicit filtering step on the finite-volume domain can stabilize the solution. In a further investigation, we demonstrate how such a coupling can be used for mesh transition, resulting in an intrinsic conservation of mass over the interface.
Keywords: algorithm coupling, finite volume formulation, grid refinement, Lattice Boltzmann method
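For reference, a minimal stream-collide (BGK) step on a periodic D2Q9 lattice is sketched below; it shows only the node-based streaming side of the coupling, not the finite-volume discretization or the buffer-zone exchange, and the initial flow field is an arbitrary assumption.

```python
# Minimal stream-collide (BGK) step on a D2Q9 lattice with periodic
# boundaries, written only to illustrate the node-based streaming that the
# paper couples with a finite-volume update; it is not the coupled scheme.
import numpy as np

nx, ny, tau = 64, 64, 0.8
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4 / 9] + [1 / 9] * 4 + [1 / 36] * 4)

def equilibrium(rho, ux, uy):
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux ** 2 + uy ** 2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu ** 2 - usq)

rho = np.ones((nx, ny))
ux = 0.05 * np.sin(2 * np.pi * np.arange(ny) / ny)[None, :] * np.ones((nx, ny))
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)

for _ in range(100):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau            # BGK collision
    for i in range(9):                                    # node-based streaming
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))

print("mass conserved:", np.isclose(rho.sum(), nx * ny))
```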
Procedia PDF Downloads 378
3914 Model and Algorithm for Dynamic Wireless Electric Vehicle Charging Network Design
Authors: Trung Hieu Tran, Jesse O'Hanley, Russell Fowler
Abstract:
When in-wheel wireless charging technology for electric vehicles becomes mature, the development of an integrated charging station network will be essential. In this paper, we therefore investigate the optimisation problem of in-wheel wireless electric vehicle charging network design. A mixed-integer linear programming model is formulated to solve the problem to optimality. In addition, a meta-heuristic algorithm is proposed for efficiently solving large-sized instances within a reasonable computation time. A parallel computing strategy is integrated into the algorithm to speed up its computation time. Experimental results carried out on the benchmark instances show that our model and algorithm can find the optimal solutions and demonstrate their potential for practical applications.
Keywords: electric vehicle, wireless charging station, mathematical programming, meta-heuristic algorithm, parallel computing
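A toy facility-location style model in the same spirit can be sketched with an off-the-shelf MILP library; the PuLP solver, the data and the coverage constraint below are assumptions for illustration, not the authors' formulation or benchmark instances.

```python
# Toy mixed-integer model in the same spirit as the paper's formulation:
# open charging stations at minimum construction cost so that every route
# is covered by at least one opened candidate site.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

sites = {"s1": 120, "s2": 90, "s3": 150}                 # construction cost
covers = {"r1": ["s1", "s2"],                            # candidate sites able
          "r2": ["s2", "s3"],                            # to serve each route
          "r3": ["s1", "s3"]}

prob = LpProblem("charging_station_location", LpMinimize)
open_site = {s: LpVariable(f"open_{s}", cat=LpBinary) for s in sites}

prob += lpSum(sites[s] * open_site[s] for s in sites)    # minimise total cost
for r, candidates in covers.items():                     # every route covered
    prob += lpSum(open_site[s] for s in candidates) >= 1

prob.solve()
print({s: int(open_site[s].value()) for s in sites})
```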
Procedia PDF Downloads 79
3913 A Modified NSGA-II Algorithm for Solving Multi-Objective Flexible Job Shop Scheduling Problem
Authors: Aydin Teymourifar, Gurkan Ozturk, Ozan Bahadir
Abstract:
NSGA-II is one of the most well-known and most widely used evolutionary algorithms. In addition to its newer versions, such as NSGA-III, there are several modified variants of this algorithm in the literature. In this paper, a hybrid NSGA-II algorithm is suggested for solving the multi-objective flexible job shop scheduling problem. For a better search, new neighborhood-based crossover and mutation operators are defined. To create new generations, neighbors of the individuals selected by tournament selection are constructed. Also, at the end of each iteration, before sorting, neighbors of a certain number of good solutions are derived, except for solutions protected by elitism. The neighbors are generated using a constraint-based neural network that uses various constructs. The non-dominated sorting and crowding distance operators are the same as in the classic NSGA-II. A comparison based on several multi-objective benchmarks from the literature shows the efficiency of the algorithm.
Keywords: flexible job shop scheduling problem, multi-objective optimization, NSGA-II algorithm, neighborhood structures
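The two operators kept from classic NSGA-II, fast non-dominated sorting and crowding distance, can be sketched compactly as follows; the sketch assumes minimization of all objectives and the objective vectors are invented for illustration.

```python
# Compact sketch of the two classic NSGA-II operators the abstract keeps
# unchanged: non-dominated sorting and crowding distance (all objectives
# are assumed to be minimised).
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    fronts, remaining = [], list(range(len(objs)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

def crowding_distance(objs, front):
    dist = {i: 0.0 for i in front}
    for m in range(len(objs[0])):
        ordered = sorted(front, key=lambda i: objs[i][m])
        lo, hi = objs[ordered[0]][m], objs[ordered[-1]][m]
        dist[ordered[0]] = dist[ordered[-1]] = float("inf")   # boundary points
        for k in range(1, len(ordered) - 1):
            if hi > lo:
                dist[ordered[k]] += (objs[ordered[k + 1]][m]
                                     - objs[ordered[k - 1]][m]) / (hi - lo)
    return dist

objs = [(3, 9), (1, 7), (2, 4), (5, 2), (4, 8)]   # e.g. (makespan, workload)
fronts = non_dominated_sort(objs)
print(fronts, crowding_distance(objs, fronts[0]))
```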
Procedia PDF Downloads 229
3912 Development of Algorithms for Solving and Analyzing Special Problems Transports Type
Authors: Dmitri Terzi
Abstract:
The article presents the results of an algorithmic study of a special optimization problem of the transport type (the traveling salesman problem): 1) To solve the problem, a new natural algorithm has been developed based on the decomposition of the initial data into convex hulls, which has a number of advantages; it is applicable for fairly large dimensions, does not require a large amount of memory, and has fairly good performance. The relevance of the algorithm lies in the fact that, in practice, programs for problems with no more than twenty traversal points are widely used. For large-scale problems, the availability of algorithms and programs of this kind is limited. The proposed algorithm is natural because the optimal solution found by an exact algorithm is not always feasible due to the presence of many other factors that may require some additional restrictions. 2) Another, inverse, problem solved here is to describe a class of traveling salesman problems that have a predetermined optimal solution. The constructed Algorithm 2 allows us to characterize the structure of traveling salesman problems, as well as to construct test problems for evaluating the effectiveness of algorithms and for other purposes. 3) The appendix presents a software implementation of Algorithm 1 (in MATLAB), which can be used to solve practical problems, as well as in the educational process on operations research and optimization methods.
Keywords: traveling salesman problem, solution construction algorithm, convex hulls, optimality verification
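A rough Python sketch of the convex-hull decomposition idea is given below (the paper's own implementation is in MATLAB); the cheapest-insertion step and the random instance are assumptions for illustration, conveying only the flavor of building a tour outward from the hull.

```python
# Illustrative sketch: start the tour on the outer convex hull, then insert
# the remaining points at the position of cheapest insertion. This is a
# heuristic flavour of the hull-decomposition idea, not the paper's algorithm.
import numpy as np
from scipy.spatial import ConvexHull

def dist(a, b):
    return float(np.linalg.norm(a - b))

def hull_insertion_tour(points):
    hull = ConvexHull(points)
    tour = list(hull.vertices)                      # outer hull as initial tour
    remaining = [i for i in range(len(points)) if i not in tour]
    while remaining:
        best = None
        for p in remaining:                          # cheapest insertion
            for k in range(len(tour)):
                a, b = tour[k], tour[(k + 1) % len(tour)]
                cost = (dist(points[a], points[p]) + dist(points[p], points[b])
                        - dist(points[a], points[b]))
                if best is None or cost < best[0]:
                    best = (cost, p, k + 1)
        _, p, pos = best
        tour.insert(pos, p)
        remaining.remove(p)
    return tour

pts = np.random.rand(30, 2)
tour = hull_insertion_tour(pts)
length = sum(dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
             for i in range(len(tour)))
print(len(tour), round(length, 3))
```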
Procedia PDF Downloads 73
3911 Energy Efficient Routing Protocol with Ad Hoc On-Demand Distance Vector for MANET
Authors: K. Thamizhmaran, Akshaya Devi Arivazhagan, M. Anitha
Abstract:
One of the most important systematic issues that must be solved when implementing a data transmission algorithm in mobile ad hoc networks (MANETs) is how to save the energy of mobile nodes while meeting the requirements of applications or users, since the mobile nodes are battery limited. While satisfying the energy-saving requirement, it is also necessary to achieve quality of service; in emergency work, for example, data must be delivered on time. Achieving quality of service in MANETs is therefore also important. In order to meet these requirements, we implement an energy-aware routing protocol for mobile ad hoc networks that saves energy at every node by efficiently selecting the most energy-efficient path in the routing process, by means of an enhanced AODV routing protocol.
Keywords: Ad-Hoc networks, MANET, routing, AODV, EAODV
Procedia PDF Downloads 369
3910 Sentiment Classification of Documents
Authors: Swarnadip Ghosh
Abstract:
Sentiment analysis is the process of detecting the contextual polarity of text. In other words, it determines whether a piece of writing is positive, negative or neutral. Sentiment analysis of documents holds great importance in today's world, when vast amounts of information are stored in databases and on the world wide web. An efficient algorithm to elicit such information would be beneficial for social, economic as well as medical purposes. In this project, we have developed an algorithm to classify a document as positive or negative. Using our algorithm, we obtained a feature set from the data and classified the documents based on this feature set. It is important to note that, in the classification, we have not used the independence assumption, which is made by many procedures such as Naive Bayes. This makes the algorithm more general in scope. Moreover, because of the sparsity and high dimensionality of such data, we did not use the empirical distribution for estimation but developed a method based on the degree of close clustering of the data points. We have applied our algorithm to a movie review data set obtained from IMDb and obtained satisfactory results.
Keywords: sentiment, Run's Test, cross validation, higher dimensional pmf estimation
Procedia PDF Downloads 402
3909 Satellite Image Classification Using Firefly Algorithm
Authors: Paramjit Kaur, Harish Kundra
Abstract:
In recent years, the swarm intelligence-based firefly algorithm has become a major focus for researchers solving real-time optimization problems. Here, the firefly algorithm is used for the application of satellite image classification. For experimentation, the Alwar area is considered, which contains multiple land features such as vegetation, barren, hilly, residential and water surfaces. The Alwar dataset consists of seven-band satellite images. The firefly algorithm is based on the attraction of less bright fireflies towards brighter ones. For the evaluation of the proposed concept, accuracy assessment parameters are calculated using the error matrix. From the error matrix, the kappa coefficient, overall accuracy, and feature-wise accuracy parameters (user's accuracy and producer's accuracy) can be calculated. Overall results are compared with BBO, PSO, Hybrid FPAB/BBO, Hybrid ACO/SOFM and Hybrid ACO/BBO based on the kappa coefficient and overall accuracy parameters.
Keywords: image classification, firefly algorithm, satellite image classification, terrain classification
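The accuracy-assessment parameters named above can be computed from an error (confusion) matrix as in the following sketch; the matrix values are invented for illustration, not results from the Alwar dataset.

```python
# Accuracy-assessment sketch: overall accuracy, kappa coefficient and
# user's/producer's accuracy computed from an error (confusion) matrix.
import numpy as np

error_matrix = np.array([[50,  3,  2],     # rows: classified as class i
                         [ 5, 60,  4],     # cols: reference class j
                         [ 2,  6, 68]], dtype=float)

n = error_matrix.sum()
overall_accuracy = np.trace(error_matrix) / n
expected = (error_matrix.sum(axis=1) * error_matrix.sum(axis=0)).sum() / n ** 2
kappa = (overall_accuracy - expected) / (1.0 - expected)
users_accuracy = np.diag(error_matrix) / error_matrix.sum(axis=1)
producers_accuracy = np.diag(error_matrix) / error_matrix.sum(axis=0)

print(f"overall accuracy = {overall_accuracy:.3f}, kappa = {kappa:.3f}")
print("user's accuracy:", np.round(users_accuracy, 3))
print("producer's accuracy:", np.round(producers_accuracy, 3))
```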
Procedia PDF Downloads 400
3908 Safety Testing of Commercial Lithium-Ion Batteries and Failure Modes Analysis
Authors: Romeo Malik, Yashraj Tripathy, Anup Barai
Abstract:
Transportation safety is a major concern for vehicle electrification on a large scale. The failure cost of lithium-ion batteries is substantial and is significantly impacted by higher liability and replacement costs. With continuous advancement on the material front in terms of higher energy density, upgrading safety characteristics is becoming more crucial for broader integration of lithium-ion batteries. Understanding and impeding thermal runaway is the prime issue for battery safety researchers. In this study, a comprehensive comparison of thermal runaway mechanisms for two different cathode types, Li(Ni₀.₃Co₀.₃Mn₀.₃)O₂ and Li(Ni₀.₈Co₀.₁₅Al₀.₀₅)O₂, is explored. Both chemistries were studied at different states of charge, and the various abuse scenarios that lead to thermal runaway were investigated. Abuse tests include mechanical abuse, electrical abuse, and thermal abuse. Batteries undergo thermal runaway due to a series of combustible reactions taking place internally; this is observed as multiple jets of flame reaching temperatures of the order of 1000ºC. Physicochemical characterisation was performed on cells prior to and after abuse. The battery's state of charge and chemistry have a significant effect on the flame temperature profiles, which are otherwise quantified as heat released. The majority of failures during transportation are due to external short circuits. Finally, a mitigation approach is proposed to impede the thermal runaway hazard: transporting lithium-ion batteries at low states of charge is proposed as a way forward. Batteries at low states of charge have demonstrated minimal heat release under thermal runaway, reducing the risk of secondary hazards such as thermal runaway propagation.
Keywords: battery reliability, lithium-ion batteries, thermal runaway characterisation, tomography
Procedia PDF Downloads 122
3907 Adaptive Multipath Mitigation Acquisition Approach for Global Positioning System Software Receivers
Authors: Animut Meseret Simachew
Abstract:
The Parallel Code Phase Search Acquisition (PCSA) algorithm has been considered a promising method in GPS software receivers for detection and estimation of the accurate correlation peak between the received Global Positioning System (GPS) signal and locally generated replicas. GPS signal acquisition in highly dense multipath environments is the main research challenge. In this work, we propose a robust variable step-size (RVSS) PCSA algorithm based on a fast Fourier transform (FFT) filtering technique to mitigate short-time-delay multipath signals. Simulation results reveal the effectiveness of the proposed algorithm over the conventional PCSA algorithm. The proposed RVSS-PCSA algorithm equalizes the received carrier wiped-off signal with the locally generated C/A code.
Keywords: adaptive PCSA, detection and estimation, GPS signal acquisition, GPS software receiver
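The core of a parallel code phase search, evaluating the circular correlation for all code phases at once via the FFT, can be sketched as follows; the signals are random stand-ins rather than real GPS data, and the Doppler search and RVSS feedback stage are not shown.

```python
# Sketch of the parallel code phase search idea: circular correlation of the
# carrier-wiped-off signal with the local C/A code, evaluated for all code
# phases at once via the FFT.
import numpy as np

N = 1023                                    # one C/A code period (chips)
ca_code = np.random.choice([-1.0, 1.0], N)  # stand-in for a PRN sequence
true_shift = 317
received = np.roll(ca_code, true_shift) + 0.5 * np.random.randn(N)

corr = np.abs(np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(ca_code))))
estimated_shift = int(np.argmax(corr))
print("estimated code phase:", estimated_shift)   # should recover 317
```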
Procedia PDF Downloads 117
3906 Test Suite Optimization Using an Effective Meta-Heuristic BAT Algorithm
Authors: Anuradha Chug, Sunali Gandhi
Abstract:
Regression testing is a very expensive and time-consuming process carried out to ensure the validity of modified software. Due to insufficient resources to re-execute all the test cases in a time-constrained environment, efforts are ongoing to generate test data automatically without human effort. Many search-based techniques have been proposed to generate efficient, effective as well as optimized test data so that the overall cost of software testing can be minimized. The generated test data should be able to uncover all potential lapses that exist in the software or product. Inspired by the natural food-searching behavior of bats, the current study employed a meta-heuristic, search-based bat algorithm for optimizing the test data on the basis of certain parameters without compromising their effectiveness. Mathematical functions are also applied that can effectively filter out redundant test data. As many as 50 Java programs are used to check the effectiveness of the proposed test data generation, and it has been found that an 86% saving in testing effort can be achieved using the bat algorithm while covering 100% of the software code for testing. The bat algorithm was found to be more efficient in terms of simplicity and flexibility when the results were compared with other nature-inspired algorithms such as the Firefly Algorithm (FA), Hill Climbing (HC) and Ant Colony Optimization (ACO). The output of this study would be useful to testers, as they can achieve 100% path coverage for testing with a minimum number of test cases.
Keywords: regression testing, test case selection, test case prioritization, genetic algorithm, bat algorithm
Procedia PDF Downloads 380
3905 A Novel Approach of Secret Communication Using Douglas-Peucker Algorithm
Authors: R. Kiruthika, A. Kannan
Abstract:
Steganography is the problem of hiding secret messages in 'innocent-looking' public communication so that the presence of the secret message cannot be detected. This paper introduces steganographic security in terms of computational indistinguishability from a channel of probability distributions on cover messages. The method first splits the cover image into two separate blocks using the Douglas-Peucker algorithm. The text message and the image are then hidden in the least significant bits (LSB) of the cover image.
Keywords: steganography, lsb, embedding, Douglas-Peucker algorithm
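A minimal sketch of the LSB embedding step is shown below; it omits the Douglas-Peucker block-splitting described above and uses an assumed random grayscale cover image, so it illustrates only the bit-level hiding.

```python
# Minimal LSB embedding sketch: write the bits of a text message into the
# least significant bits of a grayscale cover image, then read them back.
import numpy as np

def embed_lsb(cover, message):
    bits = np.array([int(b) for byte in message.encode()
                     for b in format(byte, "08b")], dtype=np.uint8)
    stego = cover.flatten().copy()
    if bits.size > stego.size:
        raise ValueError("message too long for this cover image")
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits   # overwrite LSBs
    return stego.reshape(cover.shape)

def extract_lsb(stego, n_chars):
    bits = stego.flatten()[:n_chars * 8] & 1
    data = bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8))
    return data.decode()

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
stego = embed_lsb(cover, "secret")
print(extract_lsb(stego, 6))   # -> "secret"
```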
Procedia PDF Downloads 363
3904 A Novel Probablistic Strategy for Modeling Photovoltaic Based Distributed Generators
Authors: Engy A. Mohamed, Y. G. Hegazy
Abstract:
This paper presents a novel algorithm for modeling photovoltaic-based distributed generators for the purpose of optimal planning of distribution networks. The proposed algorithm utilizes the sequential Monte Carlo method in order to accurately consider the stochastic nature of photovoltaic-based distributed generators. The proposed algorithm is implemented in the MATLAB environment, and the results obtained are presented and discussed.
Keywords: cumulative distribution function, distributed generation, Monte Carlo
Procedia PDF Downloads 583
3903 Orthogonal Regression for Nonparametric Estimation of Errors-In-Variables Models
Authors: Anastasiia Yu. Timofeeva
Abstract:
Two new algorithms for nonparametric estimation of errors-in-variables models are proposed. The first algorithm is based on a penalized regression spline. The spline is represented as a piecewise-linear function, and for each linear portion an orthogonal regression is estimated. This algorithm is iterative. The second algorithm involves locally weighted regression estimation. When the independent variable is measured with error, such estimation is a complex nonlinear optimization problem. The simulation results have shown the advantage of the second algorithm under the assumption that the true smoothing parameter values are known. Nevertheless, the use of some indexes of fit for smoothing parameter selection gives similar results and has an oversmoothing effect.
Keywords: grade point average, orthogonal regression, penalized regression spline, locally weighted regression
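Orthogonal regression for a single straight line, the building block fitted on each linear portion of the spline, can be sketched via the SVD as follows; the data are synthetic and the spline machinery itself is not reproduced.

```python
# Orthogonal (total least squares) regression for a straight line: the fit
# minimises perpendicular distances rather than vertical residuals, which is
# appropriate when the regressor is itself measured with error.
import numpy as np

rng = np.random.default_rng(1)
x_true = np.linspace(0, 10, 200)
x = x_true + rng.normal(0, 0.3, x_true.size)      # error in the regressor
y = 2.0 * x_true + 1.0 + rng.normal(0, 0.3, x_true.size)

X = np.column_stack([x - x.mean(), y - y.mean()])
_, _, vt = np.linalg.svd(X, full_matrices=False)
direction = vt[0]                                  # principal direction
slope = direction[1] / direction[0]
intercept = y.mean() - slope * x.mean()
print(f"orthogonal fit: y = {slope:.2f} x + {intercept:.2f}")   # about 2x + 1
```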
Procedia PDF Downloads 416
3902 LiDAR Based Real Time Multiple Vehicle Detection and Tracking
Authors: Zhongzhen Luo, Saeid Habibi, Martin v. Mohrenschildt
Abstract:
Self-driving vehicles require a high level of situational awareness in order to maneuver safely when driving in real-world conditions. This paper presents a LiDAR-based real-time perception system that is able to process raw sensor data for multiple-target detection and tracking in a dynamic environment. The proposed algorithm is nonparametric and deterministic, that is, no assumptions or a priori knowledge are needed for the input data and no initialization is required. Additionally, the proposed method works directly on the three-dimensional data generated by the LiDAR without sacrificing the rich information contained in the 3D domain. Moreover, a fast and efficient real-time clustering algorithm is applied, based on a radially bounded nearest neighbor (RBNN) search. The Hungarian algorithm and adaptive Kalman filtering are used for data association and tracking. The proposed algorithm is able to run in real time with an average run time of 70 ms per frame.
Keywords: lidar, segmentation, clustering, tracking
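A simple sketch of radially bounded nearest neighbor clustering is given below; the KD-tree neighbor search and the synthetic point cloud are assumptions for illustration, not the authors' implementation.

```python
# Sketch of radially bounded nearest neighbour (RBNN) clustering: points are
# grouped by flood-filling over neighbours that lie within a fixed radius.
import numpy as np
from scipy.spatial import cKDTree

def rbnn_cluster(points, radius):
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        stack = [i]
        labels[i] = current
        while stack:                               # flood fill within radius
            j = stack.pop()
            for k in tree.query_ball_point(points[j], radius):
                if labels[k] == -1:
                    labels[k] = current
                    stack.append(k)
        current += 1
    return labels

cloud = np.vstack([np.random.randn(100, 3), np.random.randn(80, 3) + 10.0])
labels = rbnn_cluster(cloud, radius=1.5)
print("clusters found:", labels.max() + 1)
```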
Procedia PDF Downloads 423
3901 Vision Based People Tracking System
Authors: Boukerch Haroun, Luo Qing Sheng, Li Hua Shi, Boukraa Sebti
Abstract:
In this paper we present the design and implementation of a target tracking system where the target is a moving person in a video sequence. The system can easily be applied as a vision system for a mobile robot. The system is composed of two major parts. The first is the detection of the person in the video frame using an SVM learning machine based on HOG descriptors. The second part is the tracking of the moving person, which is done by using a combination of the Kalman filter and a modified version of the Camshift tracking algorithm obtained by adding the target motion feature to the color feature. The experimental results show that the new algorithm outperforms the traditional Camshift algorithm in robustness and in cases of occlusion.
Keywords: camshift algorithm, computer vision, Kalman filter, object tracking
Procedia PDF Downloads 446
3900 Pathway to Sustainable Shipping: Electric Ships
Authors: Wei Wang, Yannick Liu, Lu Zhen, H. Wang
Abstract:
Maritime transport plays an important role in global economic development but also inevitably faces increasing pressures from all sides, such as ship operating cost reduction and environmental protection. An ideal innovation to address these pressures is electric ships. Electric ships are still at an early stage. Considering the special characteristics of electric ships, i.e., the limited travel range, the service network needs to be re-designed carefully to guarantee their efficient operation. This research designs a cost-efficient and environmentally friendly service network for electric ships, including the location of charging stations, the charging plan, route planning, ship scheduling, and ship deployment. The problem is formulated as a mixed-integer linear programming model with the objective of minimizing total cost, comprised of charging cost, the construction cost of charging stations, and the fixed cost of ships. A case study using data from the shipping network along the Yangtze River is conducted to evaluate the performance of the model. Two operating scenarios are used: an electric ship scenario in which all the transportation tasks are fulfilled by electric ships, and a conventional ship scenario in which all the transportation tasks are fulfilled by fuel oil ships. Results unveil that the total cost of using electric ships is only 42.8% of that of using conventional ships. Using electric ships can reduce SOx by 80%, NOx by 93.47%, PM by 89.47%, and CO2 by 42.62%, but will consume 2.78% more time to fulfill all the transportation tasks. Extensive sensitivity analyses are also conducted for key operating factors, including battery capacity, charging speed, volume capacity, and the service time limit of transportation tasks. Implications from the results are as follows: 1) it is necessary to equip the ship with a large-capacity battery when the number of charging stations is low; 2) battery capacity will influence the number of ships deployed on each route; 3) increasing battery capacity will make the electric ship more cost-effective; 4) charging speed does not affect the charging amount and the location of charging stations, but will influence the schedule of ships on each route; 5) there exists an optimal volume capacity, at which all costs and total delivery time are lowest; 6) the service time limit will influence the ship schedule and ship cost.
Keywords: cost reduction, electric ship, environmental protection, sustainable shipping
Procedia PDF Downloads 77
3899 Sub-Pixel Mapping Based on New Mixed Interpolation
Authors: Zeyu Zhou, Xiaojun Bi
Abstract:
Due to limited environmental parameters and the limited resolution of the sensor, mixed pixels are ubiquitous in remote sensing images and restrict their spatial resolution. Sub-pixel mapping technology can effectively improve the spatial resolution. However, the bilinear interpolation algorithm inevitably produces an edge blur effect, which leads to inaccurate sub-pixel mapping results. In order to avoid the edge blur effect that affects the sub-pixel mapping results in the interpolation process, this paper presents a new edge-directed interpolation algorithm, which uses a covariance-adaptive interpolation algorithm on the edges of the low-resolution image and the bilinear interpolation algorithm in the smooth areas of the low-resolution image. By using the edge-directed interpolation algorithm, a super-resolution version of the low-resolution image is obtained, and we get the percentage of each sub-pixel belonging to a certain class of the high-resolution image. We then use this probability value as a soft attribute estimate and carry out 'hard classification' at the sub-pixel scale. Finally, we obtain the result of the sub-pixel mapping. Through experiments, we compare the algorithm given in this paper and the bilinear algorithm with respect to the results of the sub-pixel mapping method. It is found that the sub-pixel mapping method based on the edge-directed interpolation algorithm has a better edge effect and higher mapping accuracy. The results meet our original intention. At the same time, the method does not require iterative computation or training of samples, making it easier to implement.
Keywords: remote sensing images, sub-pixel mapping, bilinear interpolation, edge-directed interpolation
Procedia PDF Downloads 229
3898 Efficacy of Learning: Digital Sources versus Print
Authors: Rahimah Akbar, Abdullah Al-Hashemi, Hanan Taqi, Taiba Sadeq
Abstract:
As technology continues to develop, teaching curriculums in both schools and universities have begun adopting a more computer/digital-based approach to the transmission of knowledge and information, as opposed to the more old-fashioned use of textbooks. This gives rise to the question: are there any differences between learning from a digital source and learning from a printed source, such as a textbook? More specifically, which medium of information results in better long-term retention? A review of the confounding factors implicated in understanding the relationship between learning from the two different mediums was done. Alongside this, a 4-week cohort study involving 76 first-year English Language female students was performed, whereby the participants were divided into two groups. Group A studied material from a paper source (referred to as the Print Medium), and Group B studied material from a digital source (Digital Medium). The dependent variables were grading of memory recall, indexed by a 4-point grading system, and the total frequency of item repetition. The study was facilitated by advanced computer software called Super Memo. Results showed that, contrary to prevailing evidence, the Digital Medium group showed no statistically significant differences in terms of the shift from Remember (Episodic) to Know (Semantic) when all confounding factors were accounted for. The shift from Random Guess and Familiar to Remember occurred faster in the Digital Medium than it did in the Print Medium.
Keywords: digital medium, print medium, long-term memory recall, episodic memory, semantic memory, super memo, forgetting index, frequency of repetitions, total time spent
Procedia PDF Downloads 289
3897 Risk Assessment of Flood Defences by Utilising Condition Grade Based Probabilistic Approach
Authors: M. Bahari Mehrabani, Hua-Peng Chen
Abstract:
Management and maintenance of coastal defence structures over their expected life cycle have become a real challenge for decision makers and engineers. Accurate evaluation of the current condition and future performance of flood defence structures is essential for effective practical maintenance strategies on the basis of available field inspection data. Moreover, as coastal defence structures age, it becomes more challenging to implement maintenance and management plans that avoid structural failure. Therefore, condition inspection data are essential for assessing damage and forecasting the deterioration of ageing flood defence structures in order to keep them in an acceptable condition. The inspection data for flood defence structures are often collected using discrete visual condition rating schemes. In order to evaluate the future condition of the structure, a probabilistic deterioration model needs to be utilised. However, existing deterioration models may not provide a reliable prediction of performance deterioration over a long period due to uncertainties. To tackle this limitation, a time-dependent, condition-based model associated with transition probabilities needs to be developed on the basis of the condition grade scheme for flood defences. This paper presents a probabilistic method for predicting future performance deterioration of coastal flood defence structures based on condition grading inspection data and deterioration curves estimated by expert judgement. In condition-based deterioration modelling, the main task is to estimate the transition probability matrices. The deterioration process of the structure over the transition states is modelled as a Markov chain, and a reliability-based approach is used to estimate the probability of structural failure. Visual inspection data according to the United Kingdom Condition Assessment Manual are used to obtain the initial condition grade curve of the coastal flood defences. The initial curves are then modified in order to develop transition probabilities through non-linear regression-based optimisation algorithms. Monte Carlo simulations are then used to evaluate the future performance of the structure on the basis of the estimated transition probabilities. Finally, a case study is given to demonstrate the applicability of the proposed method under no-maintenance and medium-maintenance scenarios. Results show that the proposed method can provide an effective predictive model for various situations in terms of available condition grading data. The proposed model also provides useful information on the time-dependent probability of failure of coastal flood defences.
Keywords: condition grading, flood defense, performance assessment, stochastic deterioration modelling
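A toy sketch of the condition-grade Markov model is given below; the five-grade transition matrix is an illustrative assumption rather than a calibrated one, and the paper's Monte Carlo and reliability analysis are replaced here by direct year-by-year propagation of the grade distribution.

```python
# Sketch of the condition-based Markov model: a yearly transition probability
# matrix over five condition grades is propagated forward, and the
# probability of reaching the worst grade (taken here as "failure") is read
# off directly from the state distribution.
import numpy as np

# rows/cols: condition grades 1 (very good) ... 5 (failed)
P = np.array([[0.90, 0.08, 0.02, 0.00, 0.00],
              [0.00, 0.88, 0.09, 0.03, 0.00],
              [0.00, 0.00, 0.85, 0.11, 0.04],
              [0.00, 0.00, 0.00, 0.80, 0.20],
              [0.00, 0.00, 0.00, 0.00, 1.00]])

state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])      # new structure, grade 1
for year in range(1, 51):
    state = state @ P
    if year % 10 == 0:
        print(f"year {year}: grade probabilities {np.round(state, 3)}, "
              f"P(failure) = {state[-1]:.3f}")
```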
Procedia PDF Downloads 233
3896 Design an Algorithm for Software Development in CBSE Envrionment Using Feed Forward Neural Network
Authors: Amit Verma, Pardeep Kaur
Abstract:
In software development organizations, component-based software engineering (CBSE) is an emerging paradigm for software development and has gained wide acceptance, as it often results in increased quality of the software product within development time and budget. In component reusability, the main challenge is identifying the right component from large repositories at the right time. The major objective of this work is to provide an efficient algorithm for the storage and effective retrieval of components using a neural network and parameters based on user choice through clustering. This research paper aims to propose an algorithm that provides an error-free and automatic process for the retrieval of components during reuse. In this algorithm, keywords (or components) are extracted from the software document, after which the k-means clustering algorithm is applied. Weights are then assigned to those keywords based on their frequency, and after the weights are assigned, an ANN predicts whether the correct weight has been assigned to each keyword (or component); otherwise, it back-propagates to the initial step (re-assigning the weights). Finally, all keywords are stored in repositories for effective retrieval. The proposed algorithm is very effective in error detection and correction with user-based choice, while supporting the choice of components for reusability and efficient retrieval.
Keywords: component based development, clustering, back propagation algorithm, keyword based retrieval
Procedia PDF Downloads 378