Search results for: Noise Analysis
8983 Analysis of Lightweight Register Hardware Threat
Authors: Yang Luo, Beibei Wang
Abstract:
In this paper, we present a design methodology for a lightweight register transfer level (RTL) hardware threat implemented on a MAX II FPGA platform. The dynamic power consumed by the toggling of the various bits of the registers, as well as the dynamic power consumed per unit of logic circuits, was analyzed. The hardware threat was designed by taking advantage of the differences in dynamic power consumed per unit of logic circuits to hide the transferred information. The experimental results show that the register hardware threat was successfully implemented by using the different dynamic power consumed per unit of logic circuits to hide the key information of a DES encryption module. More than 100,000 sample curves are needed to reduce the background noise by comparing the sample space when the time alignment requirement is fully met. In addition, an external trigger signal plays a very important role in detecting the hardware threat in this experiment.
Keywords: Side-channel analysis, hardware threat, register transfer level, dynamic power.
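As a rough illustration of the trace-averaging step mentioned in the abstract above, the following Python sketch (not the authors' code; trace counts, sample lengths and the leakage waveform are hypothetical) shows why a large, time-aligned sample space is needed: averaging suppresses uncorrelated background noise roughly as the square root of the number of curves.

```python
import numpy as np

# Illustrative sketch: averaging time-aligned power traces to suppress noise.
# The paper reports needing >100,000 curves; smaller sizes are used here only
# to keep the example light in memory.
rng = np.random.default_rng(0)
n_traces, n_samples = 20_000, 500
leak = 0.01 * np.sin(np.linspace(0, 8 * np.pi, n_samples))   # tiny data-dependent leak
traces = leak + rng.normal(0.0, 1.0, size=(n_traces, n_samples))  # noisy measurements

mean_trace = traces.mean(axis=0)              # perfect time alignment assumed
snr_single = leak.std() / 1.0                 # SNR of one raw trace
snr_avg = leak.std() / (1.0 / np.sqrt(n_traces))  # SNR after averaging n_traces curves
print(f"single-trace SNR ~ {snr_single:.4f}, averaged SNR ~ {snr_avg:.2f}")
```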
8982 Multi-Objective Optimization Contingent on Subcarrier-Wise Beamforming for Multiuser MIMO-OFDM Interference Channels
Authors: R. Vedhapriya Vadhana, Ruba Soundar, K. G. Jothi Shalini
Abstract:
We address the problem of interference over all the channels in multiuser MIMO-OFDM systems. This paper contributes three beamforming strategies designed for multiuser multiple-input multiple-output systems with orthogonal frequency division multiplexing, in which the transmit and receive beamformers are obtained iteratively in closed-form stages. In the first case, the transmit (TX) beamformers remain fixed and the receive (RX) beamformers are computed. This eliminates one interference span for every user by projecting the transmit beamformers onto the null space of the relevant channels. Then, the residual interference in the RX beamformer of every user is removed by satisfying the orthogonality condition while maximizing the signal-to-noise ratio (SNR). The second case comprises jointly optimizing the TX and RX beamformers from constrained SNR maximization; the outcomes of the first case are used here. The third case also involves joint optimization of the TX-RX beamformers, but uses both constrained SNR and signal-to-interference-plus-noise ratio (SINR) maximization. Using the standardized channel model for IEEE 802.11n, the proposed simulation experiments offer rapid beamforming and enhanced error performance.
Keywords: Beamforming, interference channels, MIMO-OFDM, multi-objective optimization.
8981 Investigating the Demand for Short-shelf Life Food Products for SME Wholesalers
Authors: Yamini Raju, Parminder S. Kang, Adam Moroz, Ross Clement, Ashley Hopwell, Alistair Duffy
Abstract:
Accurate forecasting of fresh produce demand is one of the challenges faced by Small and Medium Enterprise (SME) wholesalers. This paper is an attempt to understand the causes, such as weather and holidays, of the high level of variability in the demand faced by SME wholesalers; understanding the significance of such unidentified factors may improve forecasting accuracy. The paper presents the current literature on the factors used to predict demand and the existing forecasting techniques for short shelf life products. It then investigates a variety of possible internal and external factors, some of which have not been used by other researchers in the demand prediction process. The results presented in this paper are further analysed using a number of techniques to minimize noise in the data. For the analysis, past sales data (January 2009 to May 2014) from a UK-based SME wholesaler are used, and the results presented are limited to the product 'Milk', focused on cafés in Derby. Correlation analysis is carried out to check the dependency of the actual demand on each variability factor. PCA is then performed to understand the significance of the factors identified by the correlation analysis. The PCA results suggest that cloud cover, weather summary and temperature are the most significant factors that can be used in forecasting the demand. The correlation of these three factors increases for monthly demand and becomes more stable compared with the weekly and daily demand.
Keywords: Demand Forecasting, Deteriorating Products, Food Wholesalers, Principal Component Analysis and Variability Factors.
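The correlation-then-PCA screening described above can be sketched as follows. This is a hypothetical illustration, not the authors' pipeline: the column names, the simulated data and the linear demand model are invented for the example.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical sketch of screening demand-variability factors.
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "cloud_cover": rng.uniform(0, 8, n),          # illustrative factor columns
    "temperature": rng.normal(15, 6, n),
    "is_holiday": rng.integers(0, 2, n),
})
df["milk_demand"] = 50 - 2.0 * df["cloud_cover"] + 1.5 * df["temperature"] + rng.normal(0, 5, n)

# Step 1: correlation of each candidate factor with the actual demand.
print(df.corr()["milk_demand"].drop("milk_demand"))

# Step 2: PCA on the standardised factors to gauge their overall significance.
X = StandardScaler().fit_transform(df.drop(columns="milk_demand"))
pca = PCA().fit(X)
print("explained variance ratio:", pca.explained_variance_ratio_)
```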
8980 Performance of Block Codes Using the Eigenstructure of the Code Correlation Matrix and Soft-Decision Decoding of BPSK
Authors: Vitalice K. Oduol, C. Ardil
Abstract:
A method is presented for obtaining the error probability for block codes. The method is based on the eigenvalue-eigenvector properties of the code correlation matrix. It is found that, under a unitary transformation and for an additive white Gaussian noise environment, the performance evaluation of a block code becomes a one-dimensional problem in which only one eigenvalue and its corresponding eigenvector are needed in the computation. The obtained error rate results show remarkable agreement between simulations and analysis.
Keywords: bit error rate, block codes, code correlation matrix, eigenstructure, soft-decision decoding, weight vector.
8979 Prediction of the Thermal Parameters of a High-Temperature Metallurgical Reactor Using Inverse Heat Transfer
Authors: Mohamed Hafid, Marcel Lacroix
Abstract:
This study presents an inverse analysis for predicting the thermal conductivities and the heat flux of a high-temperature metallurgical reactor simultaneously. Once these thermal parameters are predicted, the time-varying thickness of the protective phase-change bank that covers the inside surface of the brick walls of a metallurgical reactor can be calculated. The enthalpy method is used to solve the melting/solidification process of the protective bank. The inverse model rests on the Levenberg-Marquardt Method (LMM) combined with the Broyden method (BM). A statistical analysis for the thermal parameter estimation is carried out. The effect of the position of the temperature sensors, total number of measurements and measurement noise on the accuracy of inverse predictions is investigated. Recommendations are made concerning the location of temperature sensors.
Keywords: Inverse heat transfer, phase change, metallurgical reactor, Levenberg–Marquardt method, Broyden method, bank thickness.
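A minimal sketch of the Levenberg-Marquardt estimation step referenced above, using SciPy's LM implementation. The one-dimensional steady conduction model, sensor positions, wall temperature and parameter values are stand-in assumptions; the paper's enthalpy-based phase-change model is not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, x):
    """Toy 1-D conduction model T(x) = T_wall + q*x/k as a stand-in forward solver."""
    k, q = params                        # thermal conductivity [W/mK], heat flux [W/m2]
    return 300.0 + q * x / k             # hypothetical inner-wall temperature of 300 K

x_sensors = np.array([0.02, 0.05, 0.10, 0.15])   # illustrative sensor positions [m]
true_k, true_q = 2.5, 4000.0
rng = np.random.default_rng(2)
measured = forward_model((true_k, true_q), x_sensors) + rng.normal(0, 0.5, x_sensors.size)

# Levenberg-Marquardt fit of (k, q) to the noisy temperature readings.
res = least_squares(lambda p: forward_model(p, x_sensors) - measured,
                    x0=[1.0, 1000.0], method="lm")
print("estimated k, q:", res.x)
```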
8978 A Soft Switching PWM DC-DC Boost Converter with Increased Efficiency by Using ZVT-ZCT Techniques
Authors: Yakup Sahin, Naim Suleyman Ting, Ismail Aksoy
Abstract:
In this paper, an improved active snubber cell is proposed for the soft switching (SS) family of pulse width modulation (PWM) DC-DC converters. The improved snubber cell provides zero-voltage transition (ZVT) turn-on and zero-current transition (ZCT) turn-off for the main switch. The snubber cell decreases EMI noise and operates with SS over a wide range of line and load voltages. Besides, all of the semiconductor devices in the converter operate with SS. There is no additional voltage or current stress on the main devices. Additionally, no extra voltage stress occurs on the auxiliary switch, and its current stress is at an acceptable value. The improved converter has a low cost and a simple structure. The theoretical analysis of the converter is presented and the operating states are given in detail. The experimental results were obtained with a 500 W, 100 kHz prototype. It is observed that the experimental results agree very well with the theoretical analysis of the converter.
Keywords: Active snubber cells, DC-DC converters, zero-voltage transition, zero-current transition.
8977 Toward Indoor and Outdoor Surveillance Using an Improved Fast Background Subtraction Algorithm
Authors: A. El Harraj, N. Raissouni
Abstract:
The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach for moving object detection/tracking is background subtraction. Many approaches have been suggested for background subtraction, but these are sensitive to illumination changes, and the solutions proposed to bypass this problem are time consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and mainly focus on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted-access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: establishing illumination-change invariance, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K = 5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For experimental testing, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
Keywords: Video surveillance, background subtraction, Contrast Limited Adaptive Histogram Equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes.
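A sketch of the CLAHE / Gaussian-mixture / morphology chain described above, built from standard OpenCV components. It differs slightly from the paper (CLAHE is applied to the luminance channel of each frame rather than to separate background/foreground images), and the video path, OpenCV 4 API and parameter values are illustrative assumptions.

```python
import cv2

# Sketch: illumination-robust background subtraction pipeline.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
bg_model = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)  # GMM per pixel
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

cap = cv2.VideoCapture("surveillance.avi")        # hypothetical input sequence
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Illumination-invariance step: equalise the luminance channel only.
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = clahe.apply(ycrcb[:, :, 0])
    enhanced = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

    mask = bg_model.apply(enhanced)               # foreground mask from the mixture model
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # erosion then dilation
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
cap.release()
```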
8976 Adaptive Equalization Using Controlled Equal Gain Combining for Uplink/Downlink MC-CDMA Systems
Authors: Miloud Frikel, Boubekeur Targui, Francois Hamon, Mohammed M'SAAD
Abstract:
In this paper, we propose an enhanced equalization technique for multi-carrier code division multiple access (MC-CDMA). The method is based on controlling the Equal Gain Combining (EGC) technique. Indeed, we introduce a new level changer in the EGC equalizer in order to adapt the equalization parameters to the channel coefficients. The optimal equalization level is first determined by channel training. The new approach drastically reduces the multiuser interference caused by interferers, without increasing the noise power. To assess the performance of the proposed equalizer, a theoretical analysis and numerical results are given.
Keywords: MC-CDMA, Equalization, EGC, Single User Detection.
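For orientation, a minimal sketch of plain Equal Gain Combining for one MC-CDMA user is given below: each subcarrier is corrected in phase only (unit gain) and then despread with the user's code. The controlled, level-adapted EGC proposed in the paper is not reproduced; the spreading length, code and channel realisation are hypothetical.

```python
import numpy as np

# Sketch: baseline EGC detection of one symbol of a single MC-CDMA user.
rng = np.random.default_rng(3)
L = 8                                    # spreading length = number of subcarriers
code = rng.choice([-1.0, 1.0], size=L)   # hypothetical chip sequence
symbol = 1.0
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2)   # Rayleigh subcarrier gains
noise = 0.05 * (rng.normal(size=L) + 1j * rng.normal(size=L))
received = h * code * symbol + noise     # per-subcarrier received samples

egc_weights = np.exp(-1j * np.angle(h))  # phase correction only, |g_k| = 1
decision_var = np.real(np.sum(egc_weights * received * code))   # despread and combine
print("detected symbol sign:", np.sign(decision_var))
```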
8975 Vibration, Lubrication and Machinery Consideration for a Mixer Gearbox Related to Iran Oil Industries
Authors: Omid A. Zargar
Abstract:
In this paper, some common gearbox vibration analysis methods and condition monitoring systems are explained. In addition, an experimental gearbox vibration analysis is discussed through a critical case history for a mixer gearbox related to the Iran oil industry. The case history also covers the gear manufacturing (machining) recommendations, the lubrication condition of the gearbox, and the machinery maintenance activities that led to a reduction in the noise and vibration of the gearbox. Some of the recent patents and innovations in gearboxes, lubrication and vibration monitoring systems are also explained. Finally, micro-pitting and surface fatigue in the pinion and bevel gear of the mentioned horizontal-to-vertical gearbox are discussed in detail.
Keywords: Gear box, condition monitoring, time wave form (TWF), fast Fourier transform (FFT), gear mesh frequency (GMF), Shock Pulse measurement (SPM), bearing condition unit (BCU), pinion, bevel gear, micro pitting, surface fatigue.
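As a quick illustration of the gear mesh frequency (GMF) marker listed in the keywords, the short sketch below computes GMF and its shaft-speed sidebands, the usual spectral lines inspected in an FFT of the gearbox vibration. Tooth count and shaft speed are illustrative values, not the mixer's actual data.

```python
# Sketch: frequency markers used when reading a gearbox vibration spectrum.
pinion_teeth = 23                 # hypothetical pinion tooth count
input_speed_rpm = 1480.0          # hypothetical input shaft speed

shaft_freq_hz = input_speed_rpm / 60.0
gmf_hz = pinion_teeth * shaft_freq_hz               # gear mesh frequency (GMF)
sidebands = (gmf_hz - shaft_freq_hz, gmf_hz + shaft_freq_hz)  # 1x shaft-speed sidebands
print(f"shaft: {shaft_freq_hz:.1f} Hz, GMF: {gmf_hz:.1f} Hz, sidebands: {sidebands}")
```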
8974 A High-Frequency Low-Power Low-Pass-Filter-Based All-Current-Mirror Sinusoidal Quadrature Oscillator
Authors: A. Leelasantitham, B. Srisuchinwong
Abstract:
A high-frequency low-power sinusoidal quadrature oscillator is presented through the use of two 2nd-order low-pass current-mirror (CM)-based filters, a 1st-order CM low-pass filter and a CM bilinear transfer function. The technique is relatively simple and based on (i) the inherent time constants of current mirrors, i.e. the internal capacitances and the transconductance of a diode-connected NMOS, and (ii) a simple negative resistance RN formed by a resistor load RL of a current mirror. Neither external capacitances nor inductances are required. As a particular example, a 1.9-GHz, 0.45-mW, 2-V CMOS low-pass-filter-based all-current-mirror sinusoidal quadrature oscillator is demonstrated. The oscillation frequency (f0) is 1.9 GHz and is current-tunable over a range of 370 MHz, or 21.6%. The power consumption is approximately 0.45 mW. The amplitude matching and the quadrature phase matching are better than 0.05 dB and 0.15°, respectively. Total harmonic distortion (THD) is less than 0.3%. At 2 MHz offset from the 1.9 GHz carrier, the carrier-to-noise ratio (CNR) is 90.01 dBc/Hz, whilst the figure of merit called the normalized carrier-to-noise ratio (CNRnorm) is 153.03 dBc/Hz. The ratio of the oscillation frequency (f0) to the unity-gain frequency (fT) of a transistor is 0.25. Comparisons to other approaches are also included.
Keywords: Sinusoidal quadrature oscillator, low-pass-filter-based, current-mirror bilinear transfer function, all-current-mirror, negative resistance, low power, high frequency, low distortion.
8973 An Optimization of Machine Parameters for Modified Horizontal Boring Tool Using Taguchi Method
Authors: Thirasak Panyaphirawat, Pairoj Sapsmarnwong, Teeratas Pornyungyuen
Abstract:
This paper presents the findings of an experimental investigation of the important machining parameters for a horizontal boring tool modified to mount on a horizontal lathe machine in order to bore an over-length workpiece. To verify the usability of the modified tool, a design of experiment based on the Taguchi method is performed. The parameters investigated are spindle speed, feed rate, depth of cut and length of workpiece. A Taguchi L9 orthogonal array is selected for the four factors at three levels in order to minimize the surface roughness (Ra and Rz) of S45C steel tubes. Signal-to-noise ratio analysis and analysis of variance (ANOVA) are performed to study the effect of these parameters and to optimize the machine settings for the best surface finish. The controlled factors with the largest effect are, in order, depth of cut, spindle speed, length of workpiece, and feed rate. A confirmation test is performed to verify the optimal setting obtained from the Taguchi method, and the result is satisfactory.
Keywords: Design of Experiment, Taguchi Design, Optimization, Analysis of Variance, Machining Parameters, Horizontal Boring Tool.
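A short sketch of the smaller-the-better signal-to-noise ratio used to score each run of an L9 array when minimising roughness: S/N = -10·log10(mean(y²)). The replicate roughness readings below are made-up numbers, not the study's data.

```python
import numpy as np

# Sketch: Taguchi smaller-the-better S/N ratio per experimental run.
ra_replicates_per_run = {
    "run1": [1.82, 1.90, 1.77],   # hypothetical Ra readings [um]
    "run2": [1.35, 1.41, 1.38],
    "run3": [2.10, 2.25, 2.05],
}
for run, ys in ra_replicates_per_run.items():
    y = np.asarray(ys)
    sn = -10.0 * np.log10(np.mean(y ** 2))
    print(f"{run}: S/N = {sn:.2f} dB")   # the run with the highest S/N is preferred
```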
8972 Model Predictive Control with Unscented Kalman Filter for Nonlinear Implicit Systems
Authors: Takashi Shimizu, Tomoaki Hashimoto
Abstract:
The class of implicit systems is known to be a more general class than that of explicit systems. To establish a control method for such a generalized class of systems, we adopt the model predictive control method, which is a kind of optimal feedback control with a performance index that has a moving initial time and terminal time. However, the model predictive control method is inapplicable to systems whose state variables are not all exactly known, in other words, to systems with limited measurable states. In fact, the state variables of a system are usually measured through its outputs; hence, only limited parts of them can be used directly. It is also usual that the output signals are disturbed by process and sensor noise. Hence, it is important to establish a state estimation method for nonlinear implicit systems that takes the process noise and sensor noise into consideration. To this end, we apply the model predictive control method and the unscented Kalman filter to solve the optimization and estimation problems of nonlinear implicit systems, respectively. The objective of this study is to establish a model predictive control with unscented Kalman filter for nonlinear implicit systems.
Keywords: Model predictive control, unscented Kalman filter, nonlinear systems, implicit systems.
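A sketch of the estimation half of the scheme is shown below: an unscented Kalman filter recovering an unmeasured state from noisy outputs. It assumes the third-party filterpy package, and the toy two-state explicit model, noise covariances and measurements are invented placeholders for the implicit dynamics treated in the paper.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.1

def fx(x, dt):                       # hypothetical nonlinear state transition
    return np.array([x[0] + dt * x[1], x[1] - dt * np.sin(x[0])])

def hx(x):                           # only the first state is measured
    return np.array([x[0]])

points = MerweScaledSigmaPoints(n=2, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=2, dim_z=1, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([0.1, 0.0])         # initial state guess
ukf.P *= 0.5                         # initial covariance
ukf.R *= 0.05                        # sensor noise covariance
ukf.Q *= 0.01                        # process noise covariance

for z in [0.12, 0.15, 0.21, 0.24]:   # made-up noisy output measurements
    ukf.predict()
    ukf.update(np.array([z]))
print("estimated state:", ukf.x)     # this estimate would feed the MPC optimizer
```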
8971 ANN Based Currency Recognition System using Compressed Gray Scale and Application for Sri Lankan Currency Notes - SLCRec
Authors: D. A. K. S. Gunaratna, N. D. Kodikara, H. L. Premaratne
Abstract:
Automatic currency note recognition invariably depends on the currency note characteristics of a particular country, and the extraction of features directly affects the recognition ability. Sri Lanka has not previously been involved in any research or implementation of this kind. The proposed system, "SLCRec", offers a solution focused on minimizing the false rejection of notes. Sri Lankan currency notes undergo severe changes in image quality during usage. Hence, a special linear transformation function is adopted to wipe out noise patterns from the backgrounds without affecting the notes' characteristic images, and to restore the images of interest. The transformation maps the original gray scale range into a smaller range of 0 to 125. Applying edge detection after the transformation provides better robustness to noise and a fair representation of edges for both new and old damaged notes. A three-layer back-propagation neural network is presented, fed with the number of edges detected in row order of the notes, and classification is accepted into four classes of interest, namely the 100, 500, 1000 and 2000 rupee notes. The experiments showed good classification results and proved that the proposed methodology has the capability of separating the classes properly under varying image conditions.
Keywords: Artificial intelligence, linear transformation and pattern recognition.
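A minimal sketch of the pre-processing chain described above: compress the gray scale into the 0-125 range, detect edges, then count edges row by row as the feature vector for the neural network. The image file name and the Canny thresholds are illustrative assumptions, not the authors' settings.

```python
import cv2
import numpy as np

# Sketch: gray-scale compression, edge detection and row-order edge counts.
gray = cv2.imread("note.png", cv2.IMREAD_GRAYSCALE)     # hypothetical note image

# Linear transformation mapping the original 0-255 range into 0-125.
compressed = (gray.astype(np.float32) * (125.0 / 255.0)).astype(np.uint8)

edges = cv2.Canny(compressed, 30, 90)                   # edge detection after compression
edges_per_row = (edges > 0).sum(axis=1)                 # row-order edge counts (ANN input)
print("feature vector length:", edges_per_row.shape[0])
```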
8970 Developing a New Vibration Analysis Calculative Method for Esfahan Subway Train and Railways Design, Manufacturing, and Construction
Authors: Omid A. Zargar
Abstract:
The simulated mass and spring evaluation method for subway or railway construction and installation systems has wide application in the rail industry. This kind of design should optimize all related parameters to reduce the amount of vibration in cities, residential areas, historical zones and other critical locations. The finite element method can help analyze such applications with excellent accuracy, but a simple, fast and user-friendly evaluation method is always required in subway industrial applications. In addition, process parameter optimization is essential in the railway industry to achieve an optimal design of railways with maximum safety, reliability and performance. Furthermore, it is important to reduce vibrations and the related maintenance costs as far as possible. In this paper, a simple but useful simulated mass and spring evaluation system is developed for the Esfahan subway construction. Besides, some related recent patents and innovations in the world rail industry, such as the suspension mass tuned vibration reducer, the short sleeper vibration attenuation fastener and the airtight track vibration-noise reducing fastener, are discussed in detail.
Keywords: Subway construction engineering, natural frequency, operation frequency, vibration analysis, polyurethane layer.
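The back-of-the-envelope check behind a mass-and-spring track evaluation is sketched below: the natural frequency f_n = (1/2π)·√(k/m) of the supported mass on the resilient (e.g. polyurethane) layer should sit well away from the dominant operating excitation frequency. All numbers are illustrative, not Esfahan subway data.

```python
import numpy as np

# Sketch: comparing track natural frequency with the operating excitation frequency.
m = 1800.0          # supported mass per slab segment/fastener [kg] (hypothetical)
k = 2.0e7           # resilient layer stiffness [N/m] (hypothetical)

f_n = np.sqrt(k / m) / (2.0 * np.pi)        # natural frequency of the mass-spring model
train_speed = 20.0                          # [m/s]
sleeper_spacing = 0.6                       # [m]
f_excitation = train_speed / sleeper_spacing  # sleeper-passing excitation frequency
print(f"natural frequency ~ {f_n:.1f} Hz, excitation ~ {f_excitation:.1f} Hz")
```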
8969 Rotorcraft Performance and Environmental Impact Evaluation by Multidisciplinary Modelling
Authors: Pierre-Marie Basset, Gabriel Reboul, Binh DangVu, Sébastien Mercier
Abstract:
Rotorcraft provide invaluable services thanks to their Vertical Take-Off and Landing (VTOL), hover and low-speed capabilities. Yet their use is still often limited by their cost and environmental impact, especially noise and energy consumption. One of the main obstacles to the wider use of rotorcraft for urban missions is this environmental impact, and the first concern for the population is noise. In order to develop the transversal competency needed to assess the rotorcraft environmental footprint, a collaboration has been launched between six research departments within ONERA. The progress in terms of models and methods is capitalized in the numerical workshop C.R.E.A.T.I.O.N. ("Concepts of Rotorcraft Enhanced Assessment Through Integrated Optimization Network"). A typical mission for which the environmental impact issue is of great relevance has been defined. The first milestone was to perform the pre-sizing of a reference helicopter for this mission. In a second milestone, an alternative rotorcraft concept was defined: a tandem rotorcraft with optional propulsion. The key design trends are given for the pre-sizing of this rotorcraft, which aims at a significant reduction of the global environmental impact while still providing flight performance and safety equivalent to the reference helicopter. The models and methods have been improved to capture, earlier and more globally, the relative variations in environmental impact when changing the rotorcraft architecture, the pre-design variables and the operating parameters.
Keywords: Environmental impact, flight performance, helicopter, rotorcraft pre-sizing.
8968 Statistical Measures and Optimization Algorithms for Gene Selection in Lung and Ovarian Tumor
Authors: C. Gunavathi, K. Premalatha
Abstract:
Microarray technology is widely used in the study of disease diagnosis using gene expression levels. The main shortcoming of gene expression data is that it includes thousands of genes and only a small number of samples. Abundant methods and techniques have been proposed for tumor classification using microarray gene expression data. Feature or gene selection methods can be used to mine the genes that are directly involved in the classification and to eliminate irrelevant genes. In this paper, statistical measures such as T-statistics, Signal-to-Noise Ratio (SNR) and F-statistics are used to rank the genes, and the ranked genes are used for further classification. The Particle Swarm Optimization (PSO) algorithm and the Shuffled Frog Leaping (SFL) algorithm are used to find the significant genes among the top-m ranked genes. The Naïve Bayes Classifier (NBC) is used to classify the samples based on the significant genes. The proposed work is applied to Lung and Ovarian datasets. The experimental results show that the proposed method achieves 100% accuracy on all three datasets, and the results are compared with previous works.
Keywords: Microarray, T-Statistics, Signal-to-Noise Ratio, F-Statistics, Particle Swarm Optimization, Shuffled Frog Leaping, Naïve Bayes Classifier.
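The signal-to-noise-ratio ranking used above can be written as SNR_g = |μ1 − μ2| / (σ1 + σ2) per gene, with the top-m genes passed on to PSO/SFL. The sketch below illustrates this on a simulated expression matrix; the data, class labels and cut-off are invented for the example.

```python
import numpy as np

# Sketch: SNR-based gene ranking on a simulated genes-by-samples matrix.
rng = np.random.default_rng(4)
n_genes, n_samples = 1000, 40
X = rng.normal(size=(n_genes, n_samples))            # expression: genes x samples
labels = np.array([0] * 20 + [1] * 20)               # two tumor classes
X[:5, labels == 1] += 2.0                            # make the first 5 genes informative

mu0, mu1 = X[:, labels == 0].mean(axis=1), X[:, labels == 1].mean(axis=1)
s0, s1 = X[:, labels == 0].std(axis=1), X[:, labels == 1].std(axis=1)
snr = np.abs(mu0 - mu1) / (s0 + s1)                  # per-gene signal-to-noise ratio

top_m = np.argsort(snr)[::-1][:20]                   # indices of the top-m ranked genes
print("top genes:", top_m[:5])
```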
8967 Dimension Reduction of Microarray Data Based on Local Principal Component
Authors: Ali Anaissi, Paul J. Kennedy, Madhu Goyal
Abstract:
Analysis and visualization of microarray data is very helpful for biologists and clinicians in the field of diagnosis and treatment of patients. It allows clinicians to better understand the structure of microarray data and facilitates understanding gene expression in cells. However, a microarray dataset is a complex data set with thousands of features and a very small number of observations. Such very high dimensional data often contain noise, non-useful information and only a small number of features relevant to the disease or genotype. This paper proposes a non-linear dimensionality reduction algorithm, Local Principal Component (LPC), which aims to map high dimensional data to a lower dimensional space. The reduced data represent the most important variables underlying the original data. Experimental results and comparisons are presented to show the quality of the proposed algorithm. Moreover, the experiments also show how the algorithm reduces high dimensional data whilst preserving the neighbourhoods of the points in the low dimensional space as in the high dimensional space.
Keywords: Linear Dimension Reduction, Non-Linear Dimension Reduction, Principal Component Analysis, Biologists.
8966 Digital filters for Hot-Mix Asphalt Complex Modulus Test Data Using Genetic Algorithm Strategies
Authors: Madhav V. Chitturi, Anshu Manik, Kasthurirangan Gopalakrishnan
Abstract:
The dynamic or complex modulus test is considered to be a mechanistically based laboratory test to reliably characterize the strength and load resistance of Hot-Mix Asphalt (HMA) mixes used in the construction of roads. The most common observation is that the data collected from these tests are often noisy and somewhat non-sinusoidal, which hampers accurate analysis of the data to obtain engineering insight. The goal of the work presented in this paper is to develop and compare automated evolutionary computational techniques to filter test noise in the data collected for the HMA complex modulus test. The results showed that the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) approach is computationally efficient for filtering data obtained from the HMA complex modulus test.
Keywords: HMA, dynamic modulus, GA, evolutionary computation.
8965 An Enhanced SAR-Based Tsunami Detection System
Authors: Jean-Pierre Dubois, Jihad S. Daba, H. Karam, J. Abdallah
Abstract:
Tsunami early detection and warning systems have proved to be of utmost importance, especially after the destructive tsunami that hit Japan in March 2011. Such systems are crucial to inform the authorities of any risk of a tsunami and of the degree of its danger, in order to make the right decisions and notify the public of the actions they need to take to save their lives. The purpose of this research is to enhance existing tsunami detection and warning systems. We first propose an automated and miniaturized model of an early tsunami detection and warning system. The operation of the tsunami warning system is simulated using the data acquisition toolbox of Matlab, with measurements acquired from specified internet pages due to the lack of the required real-life seismic and hydrologic sensors, and a graphical user interface is built for the system. In the second phase of this work, we implement various satellite image filtering schemes to enhance the acquired synthetic aperture radar images of the tsunami-affected region, which are masked by speckle noise. This enables us to conduct a post-tsunami damage extent study and calculate the percentage of damage. We conclude by proposing improvements to the telecommunication infrastructure of existing tsunami warning systems through a migration to IP-based networks and fiber optic links.
Keywords: Detection, GIS, GSN, GTS, GPS, speckle noise, synthetic aperture radar, tsunami, wiener filter.
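One of the despeckling steps listed above, Wiener filtering, can be sketched with SciPy's adaptive Wiener filter applied to a simulated SAR amplitude image corrupted by multiplicative speckle. The image, the gamma speckle model and the window size are illustrative choices, not the study's data or parameters.

```python
import numpy as np
from scipy.signal import wiener

# Sketch: adaptive Wiener filtering of a speckled (simulated) SAR image.
rng = np.random.default_rng(5)
clean = np.full((256, 256), 0.6)
clean[96:160, 96:160] = 1.0                          # a bright patch standing in for damage
speckle = rng.gamma(shape=4.0, scale=0.25, size=clean.shape)   # 4-look multiplicative speckle
noisy = clean * speckle

filtered = wiener(noisy, mysize=(5, 5))              # local adaptive Wiener filter
print("std before/after:", noisy.std().round(3), filtered.std().round(3))
```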
8964 The Principle Probabilities of Space-Distance Resolution for a Monostatic Radar and Realization in Cylindrical Array
Authors: Anatoly D. Pluzhnikov, Elena N. Pribludova, Alexander G. Ryndyk
Abstract:
In conjunction with the problem of target selection against a clutter background, an analysis is made of the influence of the scanning rate on the spatial-temporal signal structure, the generalized multivariate correlation function and the quality of the resolution as the pulse repetition frequency increases. The possibility of object space-distance resolution, which is conditioned by the range-to-angle conversion at an increased scanning rate, is substantiated. Calculations for a real cylindrical array at a high scanning rate are presented. The high scanning rate yields a signal-to-noise improvement of the order of 10 dB for the space-time signal processing.
Keywords: Antenna pattern, array, signal processing, spatial resolution.
8963 Effective Traffic Lights Recognition Method for Real Time Driving Assistance System in the Daytime
Authors: Hyun-Koo Kim, Ju H. Park, Ho-Youl Jung
Abstract:
This paper presents an effective traffic lights recognition method for the daytime. First, the Potential Traffic Lights Detector (PTLD) uses the whole color source of the YCbCr channel image and produces binary images of the green and red traffic lights. After the PTLD step, a Shape Filter (SF) is used to remove noise sources such as traffic signs, street trees, vehicles, and buildings. The noise removal criteria consist of properties of the blobs in the binary image: length, area, bounding-box area, etc. Finally, after an intermediate association step whose goal is to define the relevant candidate regions from the previously detected traffic lights, an Adaptive Multi-class Classifier (AMC) is executed. The classification method uses Haar-like features and the Adaboost algorithm. For the evaluation, the system was implemented on an Intel Core CPU at 2.80 GHz with 4 GB RAM and tested on urban and rural roads. Through these tests, we compared our method with standard object-recognition learning processes and showed that it reaches a detection rate of up to 94%, which is better than the results achieved with cascade classifiers. The computation time of the proposed method is 15 ms.
Keywords: Traffic Light Detection, Multi-class Classification, Driving Assistance System, Haar-like Feature, Color Segmentation Method, Shape Filter.
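A minimal sketch of the colour-segmentation and crude shape-filtering front end described above, using OpenCV. The image file name, the YCbCr threshold values and the area limits are illustrative assumptions only, not the authors' settings.

```python
import cv2

# Sketch: YCbCr colour segmentation of red/green lamps plus a simple blob filter.
frame = cv2.imread("scene.png")                          # hypothetical daytime frame
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)         # OpenCV channel order: Y, Cr, Cb

red_mask = cv2.inRange(ycrcb, (60, 170, 0), (255, 255, 130))    # high-Cr (reddish) pixels
green_mask = cv2.inRange(ycrcb, (60, 0, 0), (255, 110, 120))    # low-Cr (greenish) pixels

# Crude shape filter: discard blobs whose area is implausible for a lamp.
contours, _ = cv2.findContours(red_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = [cv2.boundingRect(c) for c in contours if 20 < cv2.contourArea(c) < 2000]
print("red-lamp candidate regions:", len(candidates))
# The surviving regions would then go to the Haar-like/Adaboost classifier stage.
```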
8962 Optimization of Laser-Induced Breakdown Spectroscopy (LIBS) for Determination of Quantum Dots (QDs) in Liquid Solutions
Authors: David Prochazka, Ľudmila Ballová, Karel Novotný, Jan Novotný, Radomír Malina, Petr Babula, Vojtěch Adam, René Kizek, Klára Procházková, Jozef Kaiser
Abstract:
Here we report on the utilization of Laser-Induced Breakdown Spectroscopy (LIBS) for the determination of Quantum Dots (QDs) in liquid solution. The process of optimizing the experimental conditions, from choosing the carrier medium to the application of colloidal QDs, is described. The main goal was to obtain the best possible signal-to-noise ratio. The results obtained from the measurements confirm the capability of the LIBS technique for qualitative and, subsequently, quantitative determination of QDs in liquid solution.
Keywords: Laser-Induced Breakdown Spectroscopy, liquid analysis, nanocrystals, nanotechnology, Quantum dots.
8961 Markov Game Controller Design Algorithms
Authors: Rajneesh Sharma, M. Gopal
Abstract:
Markov games are a generalization of the Markov decision process to a multi-agent setting. The two-player zero-sum Markov game framework offers an effective platform for designing robust controllers. This paper presents two novel controller design algorithms that use ideas from the game-theory literature to produce reliable controllers that are able to maintain performance in the presence of noise and parameter variations. A more widely used approach for controller design is H∞ optimal control, which suffers from high computational demand and, at times, may be infeasible. Our approach generates an optimal control policy for the agent (controller) via a simple linear program, enabling the controller to learn about the unknown environment. The controller faces an unknown environment, which in our formulation corresponds to the behavior rules of the noise, modeled as the opponent. The proposed controller architectures attempt to improve controller reliability by a gradual mixing of algorithmic approaches drawn from the game-theory literature and the Minimax-Q Markov game solution approach, in a reinforcement-learning framework. We test the proposed algorithms on a simulated Inverted Pendulum Swing-up task and compare their performance against standard Q-learning.
Keywords: Reinforcement learning, Markov Decision Process, Matrix Games, Markov Games, Smooth Fictitious play, Controller, Inverted Pendulum.
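The "simple linear program" at the heart of the Minimax-Q step can be sketched as follows: at each state, the controller's mixed policy maximises the guaranteed game value of the Q-value matrix against the worst-case opponent (the noise). The 2x2 payoff matrix below is a made-up example, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch: mixed minimax policy of a zero-sum matrix game via linear programming.
Q = np.array([[1.0, -1.0],
              [-0.5, 0.5]])           # rows: controller actions, cols: opponent (noise) actions
n_actions = Q.shape[0]

# Variables: [pi_1, ..., pi_n, v]; maximize v  <=>  minimize -v,
# subject to  Q^T pi >= v  (written as -Q^T pi + v <= 0)  and  sum(pi) = 1, pi >= 0.
c = np.concatenate([np.zeros(n_actions), [-1.0]])
A_ub = np.hstack([-Q.T, np.ones((Q.shape[1], 1))])
b_ub = np.zeros(Q.shape[1])
A_eq = np.array([[1.0] * n_actions + [0.0]])
b_eq = np.array([1.0])
bounds = [(0, None)] * n_actions + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
policy, game_value = res.x[:n_actions], res.x[-1]
print("mixed policy:", policy.round(3), "value:", round(game_value, 3))
```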
8960 Assessment of the Occupancy’s Effect on Speech Intelligibility in Al-Madinah Holy Mosque
Authors: Wasim Orfali, Hesham Tolba
Abstract:
This research investigates the acoustical characteristics of Al-Madinah Holy Mosque. Extensive field measurements were conducted at different locations in Al-Madinah Holy Mosque to characterize its acoustic properties. The acoustical characteristics are usually evaluated by means of objective parameters in unoccupied rooms due to practical considerations. However, under normal conditions, room occupancy can alter such characteristics, through the additional sound absorption present in the room or through a change in the signal-to-noise ratio. Based on the acoustic measurements carried out in Al-Madinah Holy Mosque with and without occupancy, and the analysis of these measurements, the existence of acoustical deficiencies has been confirmed.
Keywords: Worship sound, Al-Madinah Holy Mosque, mosque acoustics, speech intelligibility.
8959 Chaotic Properties of Hemodynamic Response in Functional Near Infrared Spectroscopic Measurement of Brain Activity
Authors: Ni Ni Soe, Masahiro Nakagawa
Abstract:
Functional near infrared spectroscopy (fNIRS) is a practical non-invasive optical technique for detecting the characteristic hemoglobin density dynamics during functional activation of the cerebral cortex. In this paper, fNIRS measurements were made in the area of the motor cortex, at the C4 position according to the international 10-20 system. Three subjects, aged 23-30 years, participated in the experiment. The aim of this paper was to evaluate the effects of different motor activation tasks on the hemoglobin density dynamics of the fNIRS signal. The chaotic concept, based on deterministic dynamics, is an important feature in biological signal analysis. This paper employs chaotic properties, a novel method of nonlinear analysis, to analyze and quantify the chaotic behavior in the time series of the hemoglobin dynamics for the various motor imagery tasks of the fNIRS signal. Usually, hemoglobin density in the human brain cortex changes slowly in time, and an inevitable noise caused by various factors is included in the signal. Therefore, principal component analysis (PCA) is utilized to remove the high frequency components. The phase space is reconstructed, and the Lyapunov spectrum and Lyapunov dimensions are evaluated. From the experimental results, it can be concluded that the signals measured by fNIRS are chaotic.
Keywords: Chaos, hemoglobin, Lyapunov spectrum, motor imagery, near infrared spectroscopy (NIRS), principal component analysis (PCA).
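The phase-space reconstruction step mentioned above (time-delay embedding of one channel of the hemoglobin time series, prior to computing the Lyapunov spectrum) is sketched below. The signal is simulated, and the delay and embedding dimension are illustrative choices rather than the authors' values.

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Return the matrix of delay vectors [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

# Sketch: reconstructing a phase-space trajectory from a 1-D hemoglobin-like signal.
rng = np.random.default_rng(6)
t = np.arange(0, 60, 0.1)
hb_signal = np.sin(0.8 * t) + 0.3 * np.sin(2.1 * t) + 0.05 * rng.normal(size=t.size)

attractor = delay_embed(hb_signal, dim=3, tau=5)
print("reconstructed phase-space points:", attractor.shape)   # (n_points, 3)
# Lyapunov exponents would then be estimated from nearest-neighbour divergence on 'attractor'.
```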
8958 Locating Center Points for Radial Basis Function Networks Using Instance Reduction Techniques
Authors: Rana Yousef, Khalil el Hindi
Abstract:
The behavior of Radial Basis Function (RBF) networks greatly depends on how the center points of the basis functions are selected. In this work, we investigate the use of instance reduction techniques, originally developed to reduce the storage requirements of instance-based learners, for this purpose. Five instance-based reduction techniques were used to determine the set of center points, and RBF networks were trained using these sets of centers. The performance of the RBF networks is studied in terms of classification accuracy and training time. The results obtained were compared with two reference networks: RBF networks that use all instances of the training set as center points (RBF-ALL), and Probabilistic Neural Networks (PNN). The former achieves high classification accuracies and the latter requires less training time. Results showed that RBF networks trained using sets of centers located by noise-filtering techniques (ALLKNN and ENN), rather than pure reduction techniques, produce the best results in terms of classification accuracy. These networks require less training time than RBF-ALL and achieve higher classification accuracy than PNN. Thus, using ALLKNN and ENN to select center points gives a better combination of classification accuracy and training time. Our experiments also show that using the reduced sets to train the networks is beneficial, especially in the presence of noise in the original training sets.
Keywords: Radial basis function networks, Instance-based reduction, PNN.
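Once the center points have been chosen, training the RBF network reduces to building the Gaussian design matrix and solving for the output weights, as sketched below. A random subset stands in for the ENN/ALLKNN-reduced set, and the toy data, kernel width and number of centers are invented for the example.

```python
import numpy as np

# Sketch: RBF network training given a pre-selected set of center points.
rng = np.random.default_rng(7)
X = rng.normal(size=(200, 4))                     # training instances
y = (X[:, 0] + X[:, 1] > 0).astype(float)         # toy binary target

centers = X[rng.choice(len(X), size=20, replace=False)]   # stand-in reduced center set
sigma = 1.0                                                # Gaussian kernel width

def design_matrix(X, centers, sigma):
    """One Gaussian basis function per center, evaluated at every instance."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

Phi = design_matrix(X, centers, sigma)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)       # output-layer weights by least squares
pred = (design_matrix(X, centers, sigma) @ w > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```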
8957 Taguchi-Based Six Sigma Approach to Optimize Surface Roughness for Milling Processes
Authors: Sky Chou, Joseph C. Chen
Abstract:
This paper focuses on using Six Sigma methodologies to improve the surface roughness of a manufactured part produced on a CNC milling machine. It presents a case study in which the surface roughness of milled aluminum must be reduced in order to eliminate defects and to improve the process capability indices Cp and Cpk of a CNC milling process. The Six Sigma methodology, following the DMAIC (define, measure, analyze, improve, and control) approach, was applied in this study to improve the process, reduce defects, and ultimately reduce costs. A Taguchi-based Six Sigma approach was applied to identify the optimized processing parameters that led to the surface roughness targeted by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified consist of feed rate, depth of cut, spindle speed, and surface roughness; the noise factor is the difference between the old cutting tool and the new cutting tool. The confirmation run with the optimal parameters confirmed that the new parameter settings are correct, and the new settings also improved the process capability index. This study shows that the Taguchi-based Six Sigma approach can be efficiently used to phase out defects and improve the process capability index of a CNC milling process.
Keywords: CNC machining, Six Sigma, Surface roughness, Taguchi methodology.
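The capability indices tracked in the measure and control phases follow the standard definitions Cp = (USL − LSL)/(6σ) and Cpk = min(USL − μ, μ − LSL)/(3σ), as sketched below. The specification limits and roughness readings are illustrative values, not the case study's data.

```python
import numpy as np

# Sketch: process capability indices from a sample of surface roughness readings.
usl, lsl = 1.6, 0.4                      # hypothetical Ra specification limits [um]
ra = np.array([0.92, 1.05, 0.88, 0.97, 1.10, 0.95, 1.02, 0.90])   # sample readings

mu, sigma = ra.mean(), ra.std(ddof=1)    # sample mean and standard deviation
cp = (usl - lsl) / (6.0 * sigma)         # potential capability (spread only)
cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)   # actual capability (accounts for centering)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```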
8956 Device for 3D Analysis of Basic Movements of the Lower Extremity
Authors: Jiménez Villanueva Mayra Alejandra, Ortíz Casallas Diana Carolina, Luengas Contreras Lely Adriana
Abstract:
This document details the process of developing a wireless device that captures the basic movements of the foot (plantar flexion, dorsal flexion, abduction, adduction) and of the knee (flexion). It implements a motion capture system using hardware based on optical fiber sensors, chosen for their advantages in terms of range, noise immunity and speed of data transmission and reception. The operating principle used by this system is the detection and transmission of joint movement by mechanical elements and their measurement by optical ones (in this case infrared). Likewise, Visual Basic software is used for the reception, analysis and processing of the data acquired by the device, generating a real-time 3D graphical representation of each movement. The result is a boot in charge of capturing the movement, a transmission module (implementing Xbee technology) and a receiver module that receives the information and sends it to the PC for processing. The main idea behind this device is to support fields such as bioengineering and medicine by helping to improve quality of life and movement analysis.
Keywords: abduction, adduction, A/D converter, Autodesk 3DMax, Infrared Diode, Driver, extension, flexion, Infrared LEDs, Interface, Modeling OPENGL, Optical Fiber, USB CDC (Communications Device Class), Virtual Reality.
8955 Using Teager Energy Cepstrum and HMM Distances in Automatic Speech Recognition and Analysis of Unvoiced Speech
Authors: Panikos Heracleous
Abstract:
In this study, the use of a silicon NAM (Non-Audible Murmur) microphone in automatic speech recognition is presented. NAM microphones are special acoustic sensors which are attached behind the talker's ear and can capture not only normal (audible) speech, but also very quietly uttered speech (non-audible murmur). As a result, NAM microphones can be applied in automatic speech recognition systems when privacy is desired in human-machine communication. Moreover, NAM microphones show robustness against noise, and they might be used in special systems (speech recognition, speech conversion, etc.) for sound-impaired people. Using a small amount of training data and adaptation approaches, 93.9% word accuracy was achieved for a 20k Japanese vocabulary dictation task. Non-audible murmur recognition in noisy environments is also investigated. In this study, further analysis of the NAM speech has been made using distance measures between hidden Markov model (HMM) pairs. It has been shown that the spectral space of NAM speech is reduced when measured by a metric distance; however, the locations of the different NAM phonemes are similar to the locations of the phonemes of normal speech, and the NAM sounds are well discriminated. Promising results in using nonlinear features are also introduced, especially under noisy conditions.
Keywords: Speech recognition, unvoiced speech, nonlinear features, HMM distance measures.
8954 Reducing Weight and Fuel Consumption of Civil Aircraft by EML
Authors: L. Bertola, T. Cox, P. Wheeler, S. Garvey, H. Morvan
Abstract:
Electromagnetic Launch (EML) systems have been proposed for military applications to accelerate jet planes on aircraft carriers. This paper proposes the implementation of similar technology to assist civil aircraft take-off, which can provide significant economic, environmental and technical benefits. Assisted launch has the potential of reducing on-ground noise and emissions near airports and improving overall aircraft efficiency through reducing engine thrust requirements. This paper presents a take-off performance analysis for an Airbus A320-200 taking off with and without the assistance of an electromagnetic catapult. Assisted take-off allows for a significant reduction in take-off field length, giving more capacity with existing airport footprints and reducing the necessary footprint of new airports, which will both reduce costs and increase the number of suitable sites. The electromagnetic catapult may allow the installation of smaller engines with lower rated thrust; the consequent fuel consumption and operational cost reductions are estimated. The potential for reducing the aircraft operational costs and the required runway length makes the EML system an attractive solution to air traffic growth at busy airports.
Keywords: EML system, fuel consumption, take-off analysis, weight reduction.