Search results for: Hash algorithm.
475 Maximizer of the Posterior Marginal Estimate for Noise Reduction of JPEG-compressed Image
Authors: Yohei Saika, Yuji Haraguchi
Abstract:
We constructed a method of noise reduction for JPEG-compressed images based on Bayesian inference using the maximizer of the posterior marginal (MPM) estimate. In this method, we applied the MPM estimate using two kinds of likelihood, both of which enhance grayscale images that have been converted into JPEG-compressed images through lossy JPEG compression. One is a deterministic model of the likelihood and the other is a probabilistic one expressed by a Gaussian distribution. Then, using Monte Carlo simulation on grayscale images, such as the 256-grayscale standard image “Lena” with 256 × 256 pixels, we examined the performance of the MPM estimate using the mean square error as the performance measure. We clarified that the MPM estimate via the Gaussian probabilistic model of the likelihood is effective for reducing noise, such as blocking artifacts and mosquito noise, if the parameters are set appropriately. On the other hand, we found that the MPM estimate via the deterministic model of the likelihood is not effective for noise reduction, due to the low acceptance ratio of the Metropolis algorithm.
Keywords: Noise reduction, JPEG-compressed image, Bayesian inference, the maximizer of the posterior marginal estimate
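A toy sketch, not the authors' implementation: a single-site Metropolis sampler with a Gaussian likelihood and a quadratic smoothness prior, where the per-pixel average over post-burn-in samples stands in for the marginal posterior estimate. The prior form, the proposal step, and all parameter values are illustrative assumptions.

```python
import numpy as np

def mpm_denoise(y, beta=0.02, sigma=10.0, n_sweeps=30, burn_in=10, step=8.0, seed=0):
    """Approximate MPM-style denoising of a grayscale image y (0-255).

    Energy per pixel = Gaussian data term (x - y)^2 / (2 sigma^2)
    plus a quadratic smoothness prior beta * sum over 4-neighbours.
    Single-site Metropolis updates; the running average of post-burn-in
    samples approximates the per-pixel marginal posterior estimate."""
    rng = np.random.default_rng(seed)
    x = y.astype(float).copy()
    acc = np.zeros_like(x)
    n_samples = 0
    H, W = x.shape
    for sweep in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                old = x[i, j]
                new = min(255.0, max(0.0, old + rng.uniform(-step, step)))
                nb = []
                if i > 0: nb.append(x[i - 1, j])
                if i < H - 1: nb.append(x[i + 1, j])
                if j > 0: nb.append(x[i, j - 1])
                if j < W - 1: nb.append(x[i, j + 1])
                def energy(v):
                    data = (v - y[i, j]) ** 2 / (2.0 * sigma ** 2)
                    prior = beta * sum((v - n) ** 2 for n in nb)
                    return data + prior
                dE = energy(new) - energy(old)
                if dE < 0 or rng.random() < np.exp(-dE):   # Metropolis accept
                    x[i, j] = new
        if sweep >= burn_in:
            acc += x
            n_samples += 1
    return acc / n_samples
```

On a real JPEG-decoded image the weights beta and sigma would have to be tuned, mirroring the paper's observation that the result depends on setting the parameters appropriately; the pure-Python loop is only practical for toy-sized images.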
474 Anomaly Detection using Neuro Fuzzy system
Authors: Fatemeh Amiri, Caro Lucas, Nasser Yazdani
Abstract:
As network-based technologies become omnipresent, the demand to secure networks and systems against threats increases. One of the effective ways to achieve higher security is the use of intrusion detection systems (IDS), which are software tools that detect anomalous behaviour in a computer or network. In this paper, an IDS has been developed using an improved machine-learning-based algorithm, the Locally Linear Neuro-Fuzzy (LLNF) model, for classification, although this model was originally used for system identification. A key technical challenge in IDS and LLNF learning is the curse of high dimensionality. Therefore, a feature selection phase is proposed which is applicable to any IDS. By investigating the use of three feature selection algorithms in this model, it is shown that adding a feature selection phase reduces the computational complexity of our model. Feature selection algorithms require a feature goodness measure. The use of both a linear measure (the linear correlation coefficient) and a non-linear measure (mutual information) is investigated.
Keywords: Anomaly detection, feature selection, Locally Linear Neuro Fuzzy (LLNF), Mutual Information (MI), linear correlation coefficient.
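As a hedged illustration of the two feature goodness measures named above (not the paper's LLNF implementation), the sketch below ranks features by the absolute Pearson correlation with the class label and by mutual information; the helper name and the top-k cutoff are assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def rank_features(X, y, k=10):
    """Rank features of X (n_samples, n_features) against labels y by
    |Pearson correlation| (linear measure) and by mutual information
    (non-linear measure); return the top-k indices of each ranking."""
    corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    mi = mutual_info_classif(X, y, random_state=0)
    return np.argsort(corr)[::-1][:k], np.argsort(mi)[::-1][:k]
```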
473 Ontology-based Concept Weighting for Text Documents
Authors: Hmway Hmway Tar, Thi Thi Soe Nyaunt
Abstract:
Document clustering has become an essential technology with the popularity of the Internet, which means that fast, high-quality document clustering techniques play a central role. Text clustering, or simply clustering, is about discovering semantically related groups in an unstructured collection of documents. Clustering has been popular for a long time because it provides unique ways of digesting and generalizing large amounts of information. One of the issues in clustering is extracting the proper features (concepts) of a problem domain. Existing clustering technology mainly focuses on term weight calculation. To achieve more accurate document clustering, more informative features, including concept weights, are important. Feature selection is important for the clustering process because irrelevant or redundant features may misguide the clustering results. To counteract this issue, the proposed system introduces concept weights for a text clustering system developed on the basis of a k-means algorithm in accordance with the principles of an ontology, so that the important words of a cluster can be identified by their weight values. To a certain extent, it resolves the semantic problem in specific areas.
Keywords: Clustering, Concept Weight, Document clustering, Feature Selection, Ontology
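A minimal sketch of the general idea (ontology-derived per-term concept weights applied on top of TF-IDF before k-means); the dictionary of concept weights, the multiplicative weighting rule and the function names are assumptions, not the paper's system.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_with_concept_weights(docs, concept_weights, n_clusters=3):
    """TF-IDF vectors re-weighted by an ontology-derived concept weight
    per term, then clustered with k-means."""
    vec = TfidfVectorizer()
    X = vec.fit_transform(docs).toarray()
    terms = vec.get_feature_names_out()
    # terms that map to ontology concepts get a boost; others default to 1.0
    w = np.array([concept_weights.get(t, 1.0) for t in terms])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X * w)
    return labels, terms
```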
472 State Estimation Solution with Optimal Allocation of Phasor Measurement Units Considering Zero Injection Bus Modeling
Authors: M. Ravindra, R. Srinivasa Rao, V. Shanmukha Naga Raju
Abstract:
This paper presents state estimation with Phasor Measurement Unit (PMU) allocation to obtain complete observability of the network. A matrix is designed with modeling of zero injection constraints to minimize PMU allocations. A state estimation algorithm is developed with optimal allocation of PMUs to find accurate states of the network. The incorporation of PMUs into the traditional state estimation process improves accuracy and computational performance for large power systems. The nonlinearity introduced by zero injection (ZI) constraints is remodeled into a linear frame to optimize the number of PMUs. The problem of optimal PMU allocation is treated with modeling of ZI constraints, PMU loss or line outage, cost factors, and redundant measurements. The proposed state estimation with optimal PMU allocation is compared with the traditional state estimation process to show its importance. MATLAB implementations on the IEEE 14-, 30-, 57-, and 118-bus networks are carried out using the Binary Integer Programming (BIP) method and compared with other methods to show its effectiveness.
Keywords: Observability, phasor measurement units, synchrophasors, SCADA measurements, zero injection bus.
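A hedged sketch of the core BIP observability formulation (minimize the number of PMUs subject to every bus being observed by a PMU at itself or at a neighbouring bus); the zero-injection remodeling, PMU-loss, cost-factor and redundancy terms from the paper are omitted, and the 5-bus ring is purely illustrative.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def place_pmus(adjacency):
    """Minimal PMU placement: A x >= 1 for every bus, x binary."""
    n = adjacency.shape[0]
    A = adjacency + np.eye(n)          # a PMU also observes its own bus
    cons = LinearConstraint(A, lb=np.ones(n), ub=np.full(n, np.inf))
    res = milp(c=np.ones(n), constraints=cons,
               integrality=np.ones(n), bounds=Bounds(0, 1))
    assert res.success
    return np.flatnonzero(res.x > 0.5)

# Toy 5-bus ring network (illustrative only)
adj = np.zeros((5, 5))
for i in range(5):
    adj[i, (i + 1) % 5] = adj[(i + 1) % 5, i] = 1
print(place_pmus(adj))   # two PMUs suffice for this ring
```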
471 Detecting and Locating Wormhole Attacks in Wireless Sensor Networks Using Beacon Nodes
Authors: He Ronghui, Ma Guoqing, Wang Chunlei, Fang Lan
Abstract:
This paper focuses on wormhole attack detection in wireless sensor networks. The wormhole attack is particularly challenging to deal with, since the adversary does not need to compromise any nodes and can use laptops or other wireless devices to send the packets over a low-latency channel. This paper introduces an easy and effective method to detect and locate wormholes: since beacon nodes are assumed to know their coordinates, the straight-line distance between each pair of them can be calculated and then compared with the corresponding hop distance, which in this paper equals the hop count × the node's transmission range R. A dramatic difference may emerge because of an existing wormhole, and our detection mechanism is based on this observation. The approximate location of the wormhole can also be derived in further steps based on this information. To the best of our knowledge, our method is much easier than other wormhole detection schemes that also use beacon nodes, and compared with those that place special requirements on each node (e.g., GPS receivers, tightly synchronized clocks, or directional antennas), ours is more economical. Simulation results show that the algorithm is successful in detecting and locating wormholes when the density of beacon nodes reaches 0.008 per m².
Keywords: Beacon node, wireless sensor network, wormhole attack.
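The distance check described above is simple enough to sketch directly: a beacon pair is flagged when the measured hop count times the transmission range R cannot account for the straight-line distance between the beacons. The slack factor and the data layout are assumptions; the wormhole localization step of the paper is not shown.

```python
import numpy as np

def flag_wormhole_pairs(coords, hop_counts, R, slack=1.0):
    """Flag beacon pairs whose straight-line distance exceeds the
    distance implied by the hop count (hops * R).

    coords: (n, 2) beacon positions, hop_counts: (n, n) measured hops,
    R: node transmission range. Without a wormhole, hops * R should be
    an upper bound on the geometric distance."""
    n = len(coords)
    suspicious = []
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(coords[i] - coords[j])
            if d > slack * hop_counts[i, j] * R:
                suspicious.append((i, j))
    return suspicious
```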
470 A Sparse Representation Speech Denoising Method Based on Adapted Stopping Residue Error
Authors: Qianhua He, Weili Zhou, Aiwu Chen
Abstract:
A sparse representation speech denoising method based on an adapted stopping residue error was presented in this paper. Firstly, the cross-correlation between the clean speech spectrum and the noise spectrum was analyzed, and an estimation method was proposed. In the denoising method, an over-complete dictionary of the clean speech power spectrum was learned with the K-singular value decomposition (K-SVD) algorithm. In the sparse representation stage, the stopping residue error was adaptively determined according to the estimated cross-correlation and the adjusted noise spectrum, and the orthogonal matching pursuit (OMP) approach was applied to reconstruct the clean speech spectrum from the noisy speech. Finally, the clean speech was re-synthesised via the inverse Fourier transform using the reconstructed speech spectrum and the noisy speech phase. The experimental results show that the proposed method outperforms the conventional methods in terms of subjective and objective measures.
Keywords: Speech denoising, sparse representation, K-singular value decomposition, orthogonal matching pursuit.
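A compact OMP sketch with an explicit stopping-residue threshold, as a hedged illustration of the reconstruction stage; in the paper the threshold is derived adaptively from the estimated cross-correlation, whereas here it is simply passed in, and the dictionary D is assumed to come from K-SVD training.

```python
import numpy as np

def omp(D, y, max_atoms, stop_residue):
    """Orthogonal matching pursuit: greedily pick dictionary atoms
    (columns of D) until the residual norm drops below stop_residue."""
    residual = y.astype(float).copy()
    support, x = [], np.zeros(D.shape[1])
    while len(support) < max_atoms and np.linalg.norm(residual) > stop_residue:
        k = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        if k in support:                             # no further progress possible
            break
        support.append(k)
        # least-squares fit on the selected support, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
        x[support] = coef
    return x
```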
469 Segmentation of Lungs from CT Scan Images for Early Diagnosis of Lung Cancer
Authors: Nisar Ahmed Memon, Anwar Majid Mirza, S.A.M. Gilani
Abstract:
Segmentation is an important step in medical image analysis and classification for radiological evaluation or computer-aided diagnosis. Computer-aided diagnosis (CAD) of lung CT generally first segments the area of interest (the lung) and then analyzes the segmented area for nodule detection in order to diagnose the disease. For a normal lung, segmentation can be performed by making use of the excellent contrast between air and the surrounding tissues. However, this approach fails when the lung is affected by high-density pathology. Dense pathologies are present in approximately a fifth of clinical scans, and for computer analysis such as detection and quantification of abnormal areas it is vital that the entire lung part of the image is provided and that no part present in the original image is eradicated. In this paper we propose a lung segmentation technique which accurately segments the lung parenchyma from lung CT scan images. The algorithm was tested against 25 datasets of different patients received from Akron University, USA, and Aga Khan Medical University, Karachi, Pakistan.
Keywords: Computer Aided Diagnosis, Medical Image Processing, Region Growing, Segmentation, Thresholding.
468 Specialized Reduced Models of Dynamic Flows in 2-Stroke Engines
Authors: S. Cagin, X. Fischer, E. Delacourt, N. Bourabaa, C. Morin, D. Coutellier, B. Carré, S. Loumé
Abstract:
The complexity of scavenging by ports and its impact on engine efficiency create the need to understand and to model it as realistically as possible. However, there are few empirical scavenging models and these are highly specialized. In a design optimization process, they appear very restricted and their field of use is limited. This paper presents a comparison of two methods to establish and reduce a model of the scavenging process in 2-stroke diesel engines. To address the lack of scavenging models, a CFD model has been developed and is used as the reference case. However, its large size requires a reduction. Two techniques have been tested depending on their fields of application: the NTF method and neural networks. Both appear highly appropriate, drastically reducing the model’s size (over 90% reduction) with a low relative error rate (under 10%). Furthermore, each method produces a reduced model which can be used in a distinct specialized field of application: the distribution of a quantity (mass fraction, for example) in the cylinder at each time step (pseudo-dynamic model) or the qualification of scavenging at the end of the process (pseudo-static model).
Keywords: Diesel engine, Design optimization, Model reduction, Neural network, NTF algorithm, Scavenging.
467 Optimal Path Planning under Priori Information in Stochastic, Time-varying Networks
Authors: Siliang Wang, Minghui Wang, Jun Hu
Abstract:
A novel path planning approach is presented to find optimal paths in stochastic, time-varying networks under a priori traffic information. Most existing studies make use of dynamic programming to find the optimal path. However, those methods have proved unable to obtain the global optimal value; moreover, how to design efficient algorithms is another challenge. This paper employs a decision-theoretic framework for defining the optimal path: for a given source S and destination D in an urban transit network, we seek an S-D path of lowest expected travel time, where link travel times are discrete random variables. To overcome the deficiencies of dynamic programming, such as the curse of dimensionality and violation of the principle of optimality, an integer programming model is built to realize the assignment of discrete travel time variables to arcs. Simultaneously, pruning techniques are applied to reduce the computational complexity of the algorithm. The final experiments show the feasibility of the novel approach.
Keywords: Pruning method, stochastic, time-varying networks, optimal path planning.
466 Improvement of Central Composite Design in Modeling and Optimization of Simulation Experiments
Authors: A. Nuchitprasittichai, N. Lerdritsirikoon, T. Khamsing
Abstract:
Simulation modeling can be used to solve real-world problems. It provides an understanding of a complex system. To develop a simplified model of a process simulation, a suitable experimental design is required to be able to capture surface characteristics. This paper presents the experimental design and algorithm used to model the process simulation for an optimization problem. CO2 liquefaction based on external refrigeration with two refrigeration circuits was used as the simulation case study. Latin Hypercube Sampling (LHS) was proposed to be combined with existing Central Composite Design (CCD) samples to improve the performance of CCD in generating the second-order model of the system. The second-order model was then used as the objective function of the optimization problem. The results showed that adding LHS samples to CCD samples can help capture surface curvature characteristics. A suitable number of LHS sample points should be considered in order to get an accurate nonlinear model with a minimum number of simulation experiments.
Keywords: Central composite design, CO2 liquefaction, Latin Hypercube Sampling, simulation-based optimization.
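A hedged sketch of the augmentation idea: LHS points are generated, evaluated with a hypothetical simulator callable, stacked with existing CCD samples, and a full second-order response surface is fitted by least squares. The [0, 1] variable scaling, the number of LHS points, and the function names are assumptions.

```python
import numpy as np
from scipy.stats import qmc
from itertools import combinations_with_replacement

def quadratic_features(X):
    """Feature matrix [1, x_i, x_i*x_j] for a full second-order model."""
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(d), 2)]
    return np.column_stack(cols)

def fit_surrogate(X_ccd, y_ccd, simulator, d=2, n_lhs=20, seed=1):
    """Augment CCD samples with LHS samples in [0, 1]^d and fit a
    second-order response surface by least squares."""
    X_lhs = qmc.LatinHypercube(d=d, seed=seed).random(n_lhs)
    y_lhs = np.array([simulator(x) for x in X_lhs])   # extra simulation runs
    X = np.vstack([X_ccd, X_lhs])
    y = np.concatenate([y_ccd, y_lhs])
    beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
    return beta   # coefficients of the fitted quadratic objective
```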
465 Numerical Study of Oxygen Enrichment on NO Pollution Spread in a Combustion Chamber
Authors: Zohreh Orshesh
Abstract:
In this study, a 3D combustion chamber was simulated using FLUENT 6.32. The aim is to obtain detailed information on the combustion characteristics and nitrogen oxides in the furnace and on the effect of oxygen enrichment on the combustion process. Oxygen-enriched combustion is an effective way to reduce emissions. This paper analyzes NO emission, including thermal NO and prompt NO. The flow rate ratio of air to fuel is varied as 1.3, 3.2 and 5.1, and the oxygen-enriched flow rates are 28, 54 and 68 lit/min. The 3D Reynolds-Averaged Navier-Stokes (RANS) equations with the standard k-ε turbulence model are solved by the FLUENT 6.32 software. A first-order upwind scheme is used to discretize the governing equations, and the SIMPLE algorithm is used for pressure-velocity coupling. Results show that for AF = 1.3, increasing the oxygen flow rate leads to a reduction in NO emissions. Moreover, in a fixed oxygen enrichment condition, increasing the air to fuel ratio will increase the temperature peak, but not the NO emission rate. As a result, oxygen enrichment can reduce the NO emission in this kind of furnace at low air to fuel rates.
Keywords: Combustion chamber, Oxygen enrichment, Reynolds-Averaged Navier-Stokes, NO emission.
464 Using the Simple Fixed Rate Approach to Solve Economic Lot Scheduling Problem under the Basic Period Approach
Authors: Yu-Jen Chang, Yun Chen, Hei-Lam Wong
Abstract:
The Economic Lot Scheduling Problem (ELSP) is a valuable mathematical model that can support decision-makers in making scheduling decisions. The basic period approach is effective for solving the ELSP. The assumption behind the basic period approach is that a product must be produced at its maximum production rate. However, a product can lower its production rate to reduce the average total cost when a facility has extra idle time. Past research has discussed how a product adjusts its production rate under the common cycle approach. To the best of our knowledge, no studies have addressed how a product lowers its production rate under the basic period approach. This research is the first paper to discuss this topic. The research develops a simple fixed rate approach that adjusts the production rate of a product under the basic period approach to solve the ELSP. Our numerical example shows that our approach can find a better solution than the traditional basic period approach. Our mathematical model that applies the fixed rate approach under the basic period approach can serve as a reference for other related research.
Keywords: Economic Lot, Basic Period, Genetic Algorithm, Fixed Rate.
463 Dynamic Anonymity
Authors: Emin Islam Tatlı, Dirk Stegemann, Stefan Lucks
Abstract:
Encryption protects communication partners from disclosure of their secret messages but cannot prevent traffic analysis and the leakage of information about “who communicates with whom”. In the presence of collaborating adversaries, this linkability of actions can endanger anonymity. However, reliably providing anonymity is crucial in many applications. Especially in context-aware mobile business, where mobile users equipped with PDAs request and receive services from service providers, providing anonymous communication is mission-critical and challenging at the same time. Firstly, the limited performance of mobile devices does not allow for heavy use of the expensive public-key operations which are commonly used in anonymity protocols. Moreover, the demands for security depend on the application (e.g., mobile dating vs. a pizza delivery service), but different users (e.g., a celebrity vs. an ordinary person) may even require different security levels for the same application. Considering both the hardware limitations of mobile devices and the different sensitivities of users, we propose an anonymity framework that is dynamically configurable according to user and application preferences. Our framework is based on Chaum's mix-net. We explain the proposed framework, its configuration parameters for the dynamic behavior and the algorithm to enforce dynamic anonymity.
Keywords: Anonymity, context-awareness, mix-net, mobile business, policy management
462 Faster FPGA Routing Solution using DNA Computing
Authors: Manpreet Singh, Parvinder Singh Sandhu, Manjinder Singh Kahlon
Abstract:
There are many classical algorithms for finding routes in FPGAs, but using DNA computing we can solve the routing problem efficiently and quickly. The run-time complexity of DNA algorithms is much lower than that of other classical algorithms used for solving FPGA routing. Research in DNA computing is still at an early stage. The high information density of DNA molecules and the massive parallelism involved in DNA reactions make DNA computing a powerful tool. It has been proved by many research accomplishments that any procedure that can be programmed on a silicon computer can be realized as a DNA computing procedure. In this paper we propose a two-tier approach to the FPGA routing solution. First, the geometric FPGA detailed routing task is solved by transforming it into a Boolean satisfiability equation with the property that any assignment of input variables that satisfies the equation specifies a valid routing. A satisfying assignment for a particular route will result in a valid routing, and the absence of a satisfying assignment implies that the layout is unroutable. In the second step, a DNA search algorithm is applied to this Boolean equation to solve routing alternatives, utilizing the properties of DNA computation. The simulated results are satisfactory and give an indication of the applicability of DNA computing for solving the FPGA routing problem.
Keywords: FPGA, Routing, DNA Computing.
461 Splitting Modified Donor-Cell Schemes for Spectral Action Balance Equation
Authors: Tanapat Brikshavana, Anirut Luadsong
Abstract:
The spectral action balance equation is used to simulate short-crested wind-generated waves in shallow water areas such as coastal regions and inland waters. This equation involves two spatial dimensions, wave direction, and wave frequency, and can be solved by the finite difference method. When this equation with dominating propagation velocity terms is discretized using central differences, stability problems occur if the grid spacing is chosen too coarse. In this paper, we introduce the splitting modified donor-cell scheme to avoid stability problems and prove that it is consistent with the modified donor-cell scheme and has the same accuracy. The splitting modified donor-cell scheme is adopted to split the wave spectral action balance equation into four one-dimensional problems, each of which yields an independent tridiagonal linear system. Each smaller system can be solved by direct or iterative methods at the same time, which is very fast when performed on a multi-core computer.
Keywords: Donor-cell scheme, parallel algorithm, spectral action balance equation, splitting method.
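The independent tridiagonal systems mentioned above are typically solved with the Thomas algorithm; the sketch below is a generic single-system solver (not taken from the paper), and the splitting of the balance equation into the four one-dimensional problems is not shown.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (numpy arrays of length n,
    with a[0] and c[-1] unused)."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Because each of the four one-dimensional sweeps produces many such independent systems, they can be solved concurrently on a multi-core machine, which is the parallelism the abstract refers to.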
460 A Numerical Description of a Fibre Reinforced Concrete Using a Genetic Algorithm
Authors: Henrik L. Funke, Lars Ulke-Winter, Sandra Gelbrich, Lothar Kroll
Abstract:
This work reports on an approach for the automatic adaptation of concrete formulations based on genetic algorithms (GA) to optimize a wide range of different fit functions. To achieve this goal, a method was developed which provides a numerical description of a fibre reinforced concrete (FRC) mixture with respect to the production technology and the property spectrum of the concrete. In a first step, the FRC mixture with seven fixed components was characterized by varying the amounts of the components. For that purpose, ten concrete mixtures were prepared and tested. The testing procedure comprised flow spread, compressive and bending tensile strength. The analysis and approximation of the determined data was carried out by GAs. The aim was to obtain a closed mathematical expression which best describes the given seven-point cloud of FRC by applying a Gene Expression Programming with Free Coefficients (GEP-FC) strategy. The seven-parameter FRC mixture model generated according to this method correlated well with the measured data. The developed procedure can be used to find closed mathematical expressions for concrete mixtures based on measured data.
Keywords: Concrete design, fibre reinforced concrete, genetic algorithms, GEP-FC.
459 Fuzzy Logic Based Maximum Power Point Tracking Designed for 10kW Solar Photovoltaic System with Different Membership Functions
Authors: S. Karthika, K. Velayutham, P. Rathika, D. Devaraj
Abstract:
The electric power supplied by a photovoltaic power generation system depends on the solar irradiation and temperature. The PV system can supply maximum power to the load at a particular operating point, generally called the maximum power point (MPP), at which the entire PV system operates with maximum efficiency and produces its maximum power. Hence, maximum power point tracking (MPPT) methods are used to maximize the PV array output power by continuously tracking the maximum power point. The proposed MPPT controller is designed for a 10 kW solar PV system installed at Cape Institute of Technology. This paper presents a fuzzy logic based MPPT algorithm. However, instead of one type of membership function, different structures of fuzzy membership functions are used in the FLC design. The proposed controller is combined with the system, and results are obtained for each membership function in the Matlab/Simulink environment. The simulation results indicate which membership function is more suitable for this system.
Keywords: MPPT, DC-DC Converter, Fuzzy logic controller, Photovoltaic (PV) system.
458 An Approach for Coagulant Dosage Optimization Using Soft Jar Test: A Case Study of Bangkhen Water Treatment Plant
Authors: Ninlawat Phuangchoke, Waraporn Viyanon, Setta Sasananan
Abstract:
The most important process in a water treatment plant is coagulation, which uses alum and poly-aluminum chloride (PACL). Therefore, determining the dosages of alum and PACL is the most important factor to be prescribed. This research applies an artificial neural network (ANN), trained with the Levenberg–Marquardt algorithm, to create a mathematical model (Soft Jar Test) for predicting the chemical doses used for coagulation, such as alum and PACL, with input data consisting of turbidity, pH, alkalinity, conductivity, and oxygen consumption (OC) of the Bangkhen Water Treatment Plant (BKWTP), under the authority of the Metropolitan Waterworks Authority of Thailand. The data were collected from 1 January 2019 to 31 December 2019 in order to cover the changing seasons of Thailand. The input data of the ANN are divided into three groups: training set, test set, and validation set. The coefficient of determination and the mean absolute error are 0.73 and 3.18 for the alum model, and 0.59 and 3.21 for the PACL model, respectively.
Keywords: Soft jar test, jar test, water treatment plant process, artificial neural network.
457 An Improved Particle Swarm Optimization Technique for Combined Economic and Environmental Power Dispatch Including Valve Point Loading Effects
Authors: Badr M. Alshammari, T. Guesmi
Abstract:
In recent years, combined economic and emission power dispatch has been one of the main problems in electrical power systems. It aims to schedule the power output of generators in order to minimize production cost and the emission of harmful gases, such as CO, CO2, NOx, and SO2, caused by fossil-fueled thermal units. To solve this complicated multi-objective problem, an improved version of the particle swarm optimization technique that includes the non-dominated sorting concept has been proposed. Valve point loading effects and system losses have been considered. The three-unit and ten-unit benchmark systems have been used to show the effectiveness of the suggested optimization technique for solving this kind of nonconvex problem. The simulation results have been compared with those obtained using a genetic algorithm based method. The comparison results show that the proposed approach can provide a higher quality solution with better performance.
Keywords: Power dispatch, valve point loading effects, multiobjective optimization, Pareto solutions.
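A minimal sketch of the non-dominated sorting idea referenced above: extracting the first Pareto front of candidate dispatches scored on cost and emission. The numbers are illustrative only; crowding distance and the PSO velocity/position update themselves are not shown.

```python
import numpy as np

def pareto_front(F):
    """Return indices of non-dominated rows of F (both objectives are
    minimized), i.e. the first front of a non-dominated sort."""
    n = len(F)
    front = []
    for i in range(n):
        dominated = any(
            np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
            for j in range(n) if j != i)
        if not dominated:
            front.append(i)
    return front

# Illustrative candidate dispatches, columns: [fuel cost, emission]
F = np.array([[100.0, 8.0], [95.0, 9.5], [120.0, 6.0], [130.0, 9.0]])
print(pareto_front(F))   # [0, 1, 2]; the last candidate is dominated
```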
456 Lattice Boltzmann Simulation of Binary Mixture Diffusion Using Modern Graphics Processors
Authors: Mohammad Amin Safi, Mahmud Ashrafizaadeh, Amir Ali Ashrafizaadeh
Abstract:
A highly optimized implementation of binary mixture diffusion with no initial bulk velocity on graphics processors is presented. The lattice Boltzmann model is employed for simulating the binary diffusion of oxygen and nitrogen into each other with different initial concentration distributions. Simulations have been performed using the latest proposed lattice Boltzmann model that satisfies both the indifferentiability principle and the H-theorem for multi-component gas mixtures. Contemporary numerical optimization techniques such as memory alignment and increasing the multiprocessor occupancy are exploited along with some novel optimization strategies to enhance the computational performance on graphics processors using the C for CUDA programming language. Speedup of more than two orders of magnitude over single-core processors is achieved on a variety of Graphical Processing Unit (GPU) devices ranging from conventional graphics cards to advanced, high-end GPUs, while the numerical results are in excellent agreement with the available analytical and numerical data in the literature.
Keywords: Lattice Boltzmann model, Graphical processing unit, Binary mixture diffusion, 2D flow simulations, Optimized algorithm.
455 A Hybrid Metaheuristic Framework for Evolving the PROAFTN Classifier
Authors: Feras Al-Obeidat, Nabil Belacel, Juan A. Carretero, Prabhat Mahanti
Abstract:
In this paper, a new learning algorithm based on a hybrid metaheuristic integrating Differential Evolution (DE) and Reduced Variable Neighborhood Search (RVNS) is introduced to train the classification method PROAFTN. To apply PROAFTN, the values of several parameters need to be determined prior to classification. These parameters include the boundaries of intervals and the relative weights for each attribute. Based on these requirements, the hybrid approach, named DEPRO-RVNS, is presented in this study. The major problem when applying DE to some classification problems was the premature convergence of some individuals to local optima. To eliminate this shortcoming and to improve the exploration and exploitation capabilities of DE, such individuals are iteratively re-explored using RVNS. Based on the generated results on both training and testing data, it is shown that the performance of PROAFTN is significantly improved. Furthermore, the experimental study shows that DEPRO-RVNS outperforms well-known machine learning classifiers in a variety of problems.
Keywords: Knowledge Discovery, Differential Evolution, Reduced Variable Neighborhood Search, Multiple criteria classification, PROAFTN, Supervised Learning.
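A hedged sketch of one DE/rand/1/bin generation, the evolutionary core that DEPRO-RVNS builds on; the RVNS re-exploration of stagnating individuals and the PROAFTN-specific parameter encoding are not shown, and F, CR and the objective callable are assumptions.

```python
import numpy as np

def de_step(pop, fitness, objective, F=0.8, CR=0.9, rng=None):
    """One DE/rand/1/bin generation (population size must be >= 4):
    mutate, binomial crossover, greedy selection (minimization)."""
    if rng is None:
        rng = np.random.default_rng()
    n, d = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([k for k in range(n) if k != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True          # keep at least one mutant gene
        trial = np.where(cross, mutant, pop[i])
        f_trial = objective(trial)
        if f_trial <= fitness[i]:              # greedy replacement
            new_pop[i], new_fit[i] = trial, f_trial
    return new_pop, new_fit
```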
454 Level Shifted Carrier Signal Based Scalar Random Pulse Width Modulation Algorithms for Cascaded Multilevel Inverter Fed Induction Motor Drive
Authors: M. Nayeemuddin, T. Bramhananda Reddy, M. Vijaya Kumar
Abstract:
Acoustic noise radiated by voltage source inverter fed induction motor drives is becoming ever more obnoxious in modern and industrial applications. Drives used in such applications should therefore employ the “spread spectrum” technique known as random pulse width modulation (PWM) wherever the acoustic noise emanating from the machine is a critical concern. This paper illustrates three types of random PWM control algorithms with fixed switching frequency, namely 1) random modulating PWM, 2) random carrier PWM, and 3) random modulating-carrier PWM. The spectrum plots of the motor stator current demonstrate the strength and robustness of the proposed PWM algorithms. To validate the proposed algorithms, experimental tests have been conducted using a dSPACE rt1104 control board on a V/f controlled three-phase induction motor drive fed by a DC-link cascaded multilevel inverter.
Keywords: Multilevel inverter, acoustic noise, CSVPWM, total harmonic distortion, random PWM algorithm.
453 Multimedia Data Fusion for Event Detection in Twitter by Using Dempster-Shafer Evidence Theory
Authors: Samar M. Alqhtani, Suhuai Luo, Brian Regan
Abstract:
Data fusion technology can be the best way to extract useful information from multiple sources of data. It has been widely applied in various applications. This paper presents a data fusion approach for multimedia data for event detection in Twitter using Dempster-Shafer evidence theory. The methodology applies a mining algorithm to detect the event. There are two types of data in the fusion. The first is features extracted from text by using the bag-of-words method, which is calculated using the term frequency-inverse document frequency (TF-IDF). The second is the visual features extracted by applying the scale-invariant feature transform (SIFT). The Dempster-Shafer theory of evidence is applied in order to fuse the information from these two sources. Our experiments have indicated that, compared with approaches using an individual data source, the proposed data fusion approach can increase the prediction accuracy for event detection. The experimental results showed that the proposed method achieved a high accuracy of 0.97, compared with 0.93 using text only and 0.86 using images only.
Keywords: Data fusion, Dempster-Shafer theory, data mining, event detection.
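A small sketch of Dempster's rule of combination, the fusion step named above, applied to two hypothetical mass functions over an {event, no_event} frame; the mass values, the frame, and how the text and image classifiers would produce these masses are assumptions.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts keyed by frozenset focal elements)
    with Dempster's rule, normalising out the conflict mass."""
    combined, conflict = {}, 0.0
    for A, w1 in m1.items():
        for B, w2 in m2.items():
            C = A & B
            if C:
                combined[C] = combined.get(C, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {A: w / (1.0 - conflict) for A, w in combined.items()}

# Hypothetical evidence from a text classifier and an image classifier
E, N = frozenset({"event"}), frozenset({"no_event"})
m_text = {E: 0.7, N: 0.2, E | N: 0.1}
m_image = {E: 0.6, N: 0.3, E | N: 0.1}
print(dempster_combine(m_text, m_image))
```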
452 Multi-Criteria Optimization of High-Temperature Reversed Starter-Generator
Authors: Flur R. Ismagilov, Irek Kh. Khayrullin, Vyacheslav E. Vavilov, Ruslan D. Karimov, Anton S. Gorbunov, Danis R. Farrakhov
Abstract:
The paper presents an alternative structural scheme of a high-temperature starter-generator with an external rotor to be installed on the High Pressure Shaft (HPS) of aircraft engines (AE) to implement the More Electrical Engine concept. The basic materials used to make this starter-generator (SG) were selected and justified. Multi-criteria optimization of the developed structural scheme was performed using a genetic algorithm and the Pareto method. The optimum (in Pareto terms) active length and thickness of the permanent magnets of the SG were selected as a result of the optimization. Using the dimensions obtained allowed the weight of the designed SG to be reduced by 10 kg relative to a base option at constant thermal loads. Multidisciplinary computer simulation was performed on the basis of the optimum geometric dimensions, which proved the performance efficiency of the design. We further plan to make a full-scale sample of the SG of the HPS and publish the results of its experimental research.
Keywords: High-temperature starter-generator, More electrical engine, multi-criteria optimization, permanent magnet.
451 Low Power and Less Area Architecture for Integer Motion Estimation
Authors: C Hisham, K Komal, Amit K Mishra
Abstract:
The full search block matching algorithm is widely used for the hardware implementation of motion estimators in video compression algorithms. In this paper we propose a new architecture, which consists of a 2D parallel processing unit and a 1D unit, both working in parallel. The proposed architecture reduces both the data access power and the computational power, which are the main causes of power consumption in integer motion estimation. It also completes the operations with nearly the same number of clock cycles as a 2D systolic array architecture. In this work the sum of absolute differences (SAD), the most repeated operation in block matching, is calculated in two steps. The first step is to calculate the SAD for alternate rows using the 2D parallel unit. If the SAD calculated by the parallel unit is less than the stored minimum SAD, the SAD of the remaining rows is calculated by the 1D unit. Early termination, which stops avoidable computations, is achieved with the help of the alternate rows method proposed in this paper and by finding a low initial SAD value based on motion vector prediction. Data reuse is applied to the reference blocks in the same search area, which significantly reduces memory accesses.
Keywords: Sum of absolute difference, high speed DSP.
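A software analogue (not the proposed hardware) of full-search matching with early termination: the row-by-row SAD accumulation for a candidate is abandoned as soon as it exceeds the best SAD seen so far. The alternate-row split between the 2D and 1D units is not modelled, and ref is assumed to be the padded search window around the block position.

```python
import numpy as np

def best_match(block, ref, search_range=8):
    """Full-search block matching over a (H+2R, W+2R) padded search
    window ref, with early termination of the per-row SAD loop."""
    H, W = block.shape
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            cand = ref[search_range + dy: search_range + dy + H,
                       search_range + dx: search_range + dx + W]
            sad = 0
            for row in range(H):              # accumulate row by row
                sad += np.abs(block[row].astype(int) - cand[row].astype(int)).sum()
                if sad >= best_sad:           # early termination
                    break
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```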
450 A Differential Calculus Based Image Steganography with Crossover
Authors: Srilekha Mukherjee, Subha Ash, Goutam Sanyal
Abstract:
Information security plays a major role in uplifting the standard of secured communications via global media. In this paper, we have suggested a technique of encryption followed by insertion before transmission. We have implemented two different concepts to carry out these tasks. We use a two-point crossover technique from the genetic algorithm to facilitate the encryption process. For each of the uniquely identified rows of pixels, different mathematical methodologies are applied to check several conditions, in order to figure out all the parent pixels on which we perform the crossover operation. This is done by selecting two crossover points within the pixels, thereby producing the newly encrypted child pixels, and hence the encrypted cover image. In the next stage, the first- and second-order derivative operators are evaluated to increase the security and robustness. The last stage further ensures reapplication of the crossover procedure to form the final stego-image. The complexity of this system as a whole is huge, thereby dissuading third-party interference. Also, the embedding capacity is very high, so a larger amount of secret image information can be hidden. The imperceptible appearance of the obtained stego-image clearly proves the proficiency of this approach.
Keywords: Steganography, Crossover, Differential Calculus, Peak Signal to Noise Ratio, Cross-correlation Coefficient.
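A minimal sketch of the two-point crossover on a pair of pixel rows, the encryption primitive described above; the condition checks that select the parent pixels and the derivative-based stages are not shown, and the row values below are illustrative.

```python
import numpy as np

def two_point_crossover(parent1, parent2, rng=None):
    """Two-point crossover of two pixel rows: swap the segment between
    two randomly chosen crossover points to produce the child rows."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(parent1)
    p, q = sorted(rng.choice(np.arange(1, n), size=2, replace=False))
    child1 = np.concatenate([parent1[:p], parent2[p:q], parent1[q:]])
    child2 = np.concatenate([parent2[:p], parent1[p:q], parent2[q:]])
    return child1, child2

# Illustrative pixel rows
row_a = np.arange(8, dtype=np.uint8)
row_b = np.arange(8, 16, dtype=np.uint8)
print(two_point_crossover(row_a, row_b, np.random.default_rng(0)))
```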
449 Improving Cryptographically Generated Address Algorithm in IPv6 Secure Neighbor Discovery Protocol through Trust Management
Authors: M. Moslehpour, S. Khorsandi
Abstract:
As the transition to widespread use of IPv6 addresses has gained momentum, IPv6 has been shown to be vulnerable to certain security attacks, such as those targeting the Neighbor Discovery Protocol (NDP), which provides the address resolution functionality in IPv6. To protect this protocol, Secure Neighbor Discovery (SEND) was introduced. This protocol uses Cryptographically Generated Addresses (CGA) and asymmetric cryptography as a defense against threats to the integrity and identity of NDP. Although SEND protects NDP against attacks, it is computationally intensive due to the Hash2 condition in CGA. To improve the CGA computation speed, we parallelized the CGA generation process and used the available resources in a trusted network. Furthermore, we focused on the influence of the existence of malicious nodes on the overall load of non-malicious ones in the network. According to the evaluation results, malicious nodes have an adverse impact on the average CGA generation time and on the average number of tries. We utilized a trust management system that is capable of detecting and isolating malicious nodes, removing possible incentives for malicious behavior. We have demonstrated the effectiveness of the trust management system in detecting malicious nodes and hence improving the overall system performance.
Keywords: NDP, SEND, CGA, modifier, malicious node.
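For orientation only, a simplified single-node sketch of the Hash2 search that makes CGA generation expensive: modifiers are tried until the SHA-1 digest over modifier || 9 zero octets || encoded public key has 16×Sec leading zero bits (structure recalled from RFC 3972 and simplified here; random modifiers are used instead of incrementing). The paper's parallelization across trusted nodes and the trust management layer are not shown.

```python
import hashlib
import os

def find_modifier(public_key: bytes, sec: int):
    """Brute-force a CGA modifier so that the leftmost 16*sec bits of the
    (simplified) Hash2 value are zero; returns (modifier, tries)."""
    zero_bits = 16 * sec
    tries = 0
    while True:
        modifier = os.urandom(16)
        # 9 zero octets stand in for the zeroed subnet prefix and collision count
        digest = hashlib.sha1(modifier + b"\x00" * 9 + public_key).digest()
        hash2 = int.from_bytes(digest[:14], "big")   # leftmost 112 bits
        tries += 1
        if hash2 >> (112 - zero_bits) == 0:
            return modifier, tries

# sec=1 already needs about 2**16 attempts on average, which is the cost
# the paper distributes across trusted nodes
modifier, tries = find_modifier(b"example-public-key", sec=1)
print(tries)
```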
448 A Nano-Scaled SRAM Guard Band Design with Gaussian Mixtures Model of Complex Long Tail RTN Distributions
Authors: Worawit Somha, Hiroyuki Yamauchi
Abstract:
This paper proposes, for the first time, how the challenges facing guard-band designs, including the margin assist-circuit scheme for the screening test in coming process generations, should be addressed. The increased screening-error impacts are discussed based on the proposed statistical analysis models. It is shown that the yield loss caused by misjudgment in the screening test would become five orders of magnitude larger than that of the conventional case when the amplitude of variations caused by random telegraph noise (RTN) approaches that of random dopant fluctuation. Three fitting methods to approximate the complex RTN-induced Gamma mixture distributions by a simple Gaussian mixture model (GMM) are proposed and compared. It has been verified that the proposed methods can reduce the error of the fail-bit predictions by four orders of magnitude.
Keywords: Mixtures of Gaussian, Random telegraph noise, EM algorithm, Long-tail distribution, Fail-bit analysis, Static random access memory, Guard band design.
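A minimal illustration of the underlying fitting step, EM-based GMM approximation of a long-tailed Gamma-mixture sample, using scikit-learn; the component counts, Gamma parameters and sample sizes are arbitrary stand-ins, and the paper's three specific fitting methods and fail-bit analysis are not reproduced.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Long-tailed "RTN-like" sample built from a mixture of Gamma components
rng = np.random.default_rng(0)
sample = np.concatenate([rng.gamma(2.0, 5.0, 8000),
                         rng.gamma(6.0, 9.0, 2000)]).reshape(-1, 1)

# Approximate the Gamma mixture with a Gaussian mixture fitted by EM
gmm = GaussianMixture(n_components=3, random_state=0).fit(sample)
print(gmm.weights_, gmm.means_.ravel(), np.sqrt(gmm.covariances_.ravel()))
```

The quality of the tail approximation, which drives the fail-bit prediction error discussed in the abstract, can then be judged by comparing the fitted mixture density against the empirical tail of the sample.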
447 Modeling and Dynamics Analysis for Intelligent Skid-Steering Vehicle Based on Trucksim-Simulink
Authors: Yansong Zhang, Xueyuan Li, Junjie Zhou, Xufeng Yin, Shihua Yuan, Shuxian Liu
Abstract:
Aiming at the verification of control algorithms for skid-steering vehicles, a vehicle simulation model of a 6×6 electric skid-steering unmanned vehicle was established based on Trucksim and Simulink. The original transmission and steering mechanisms of Trucksim are removed, and the electric skid-steering model and a closed-loop controller for the vehicle speed and yaw rate are built in Simulink. The simulation results are compared with those obtained from theoretical formulas. The results show that the predicted tire mechanics and vehicle kinematics of the Trucksim-Simulink simulation model are close to the theoretical results. Therefore, it can be used as an effective approach to study the dynamic performance and control algorithms of skid-steering vehicles. In this paper, a motion control method based on feedforward control is also designed. The simulation results show that the feedforward control strategy can make the vehicle follow the target yaw rate more quickly and accurately, which gives the vehicle more maneuverability.
Keywords: Skid-steering, Trucksim-Simulink, feedforward control, dynamics.
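A hedged illustration of the kinematic feed-forward term for skid steering: the reference speed and yaw rate are mapped to left and right side wheel speeds, ignoring track slip. The closed-loop speed and yaw-rate controller built in Simulink in the paper is not reproduced, and the example values are arbitrary.

```python
def skid_steer_feedforward(v_ref, yaw_rate_ref, track_width):
    """Kinematic feed-forward for a skid-steering vehicle: convert the
    reference forward speed (m/s) and yaw rate (rad/s) into left/right
    side wheel speeds, assuming no track slip."""
    v_left = v_ref - yaw_rate_ref * track_width / 2.0
    v_right = v_ref + yaw_rate_ref * track_width / 2.0
    return v_left, v_right

# Illustrative values only
print(skid_steer_feedforward(v_ref=5.0, yaw_rate_ref=0.4, track_width=2.0))
```

In a full controller this term would be added to the feedback correction from the measured yaw-rate error, which is what lets the vehicle reach the target yaw rate more quickly than feedback alone.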
446 Advanced Geolocation of IP Addresses
Authors: Robert Koch, Mario Golling, Gabi Dreo Rodosek
Abstract:
Tracing and locating the geographical location of users (geolocation) is used extensively in today's Internet. Whenever we request a page from Google, for example, we are, unless a specific configuration has been made, automatically forwarded to the page in the relevant language and, among other things, presented with advertisements that depend on the location identified. Within the area of network security in particular, geolocation has a significant impact. Because of the way the Internet works, attacks can be executed from almost everywhere. Therefore, for attribution, knowledge of the origin of an attack, and thus geolocation, is mandatory in order to be able to trace back an attacker. In addition, geolocation can also be used very successfully to increase the security of a network during operation (i.e. before an intrusion has actually taken place). Similar to greylisting in e-mail, geolocation allows us to (i) correlate detected attacks with new connections and (ii) consequently classify traffic a priori as more suspicious (in particular, allowing this traffic to be inspected in more detail). Although numerous geolocation techniques exist, each strategy is subject to certain restrictions. Following the ideas of Endo et al., this publication tries to overcome these shortcomings with a combined solution of different methods to allow improved and optimized geolocation. Thus, we present our architecture for improved geolocation, designing a new algorithm which combines several geolocation techniques to increase accuracy.
Keywords: IP geolocation, prosecution of computer fraud, attack attribution, target-analysis