Search results for: high resolution array processing techniques
8339 Process Optimisation for Internal Cylindrical Rough Turning of Nickel Alloy 625 Weld Overlay
Authors: Lydia Chan, Islam Shyha, Dale Dreyer, John Hamilton, Phil Hackney
Abstract:
Nickel-based superalloys are generally known to be difficult to cut due to their strength, low thermal conductivity, and high work-hardening tendency. Superalloys such as alloy 625 are often used in the oil and gas industry as surfacing materials to provide wear and corrosion resistance to components. The material is typically applied onto a metallic substrate through weld overlay cladding, an arc welding technique. Cladded surfaces are always rugged and carry a tough skin, which creates further difficulties for the machining process. The present work utilised design of experiments to optimise internal cylindrical rough turning for weld overlay surfaces. An L27 orthogonal array was used to assess the effects of four selected key process variables: cutting insert, depth of cut, feed rate, and cutting speed. The optimal cutting conditions were determined based on productivity and the level of tool wear.
Keywords: Cylindrical turning, nickel superalloy, turning of overlay, weld overlay.
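The abstract does not give the analysis details, but a minimal sketch of how a Taguchi-style "smaller-the-better" signal-to-noise analysis over an orthogonal array might be tabulated is shown below; the factor levels and wear values are illustrative assumptions, not data from the study.

# Sketch: "smaller-the-better" S/N analysis for a Taguchi-style experiment.
# Factor levels and wear values below are illustrative placeholders only.
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi S/N ratio for responses where lower is better (e.g. tool wear)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Each run: (insert, depth_of_cut_mm, feed_mm_rev, speed_m_min, wear_mm) -- hypothetical
runs = [
    ("A", 0.5, 0.15, 30, 0.21),
    ("A", 1.0, 0.20, 40, 0.35),
    ("B", 0.5, 0.20, 30, 0.18),
    ("B", 1.0, 0.15, 40, 0.27),
]

# Mean S/N per level of one factor (here: cutting insert)
levels = {}
for insert, *_rest, wear in runs:
    levels.setdefault(insert, []).append(wear)
for level, wears in levels.items():
    print(level, round(sn_smaller_is_better(wears), 2), "dB")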
8338 A Study on Early Prediction of Fault Proneness in Software Modules using Genetic Algorithm
Authors: Parvinder S. Sandhu, Sunil Khullar, Satpreet Singh, Simranjit K. Bains, Manpreet Kaur, Gurvinder Singh
Abstract:
Fault-proneness of a software module is the probability that the module contains faults. To predict the fault-proneness of modules, different techniques have been proposed, including statistical methods, machine learning techniques, neural network techniques and clustering techniques. The aim of the proposed study is to explore whether metrics available in the early lifecycle (i.e. requirement metrics), metrics available in the late lifecycle (i.e. code metrics), and the combination of the two can be used to identify fault-prone modules using a Genetic Algorithm technique. This approach has been tested with real-time defect datasets of NASA software projects written in the C programming language. The results show that the fusion of requirement and code metrics is the best prediction model for detecting faults compared with the commonly used code-based model.
Keywords: Genetic Algorithm, Fault Proneness, Software Fault, Software Quality.
8337 A Comparative Study of Malware Detection Techniques Using Machine Learning Methods
Authors: Cristina Vatamanu, Doina Cosovan, Dragoş Gavriluţ, Henri Luchian
Abstract:
In the past few years, the amount of malicious software has increased exponentially and, therefore, machine learning algorithms have become instrumental in identifying clean and malware files through (semi-)automated classification. When working with very large datasets, the major challenge is to reach both a very high malware detection rate and a very low false positive rate. Another challenge is to minimize the time needed for the machine learning algorithm to do so. This paper presents a comparative study of different machine learning techniques such as linear classifiers, ensembles, decision trees and various hybrids thereof. The training dataset consists of approximately 2 million clean files and 200,000 infected files, which is a realistic quantitative mixture. The paper investigates the above-mentioned methods with respect to both their performance (detection rate and false positive rate) and their practicability.
Keywords: Detection Rate, False Positives, Perceptron, One Side Class, Ensembles, Decision Tree, Hybrid Methods, Feature Selection.
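For reference, the two figures of merit discussed above can be computed directly from confusion counts; a minimal sketch follows, with made-up counts sized to mirror the dataset proportions mentioned in the abstract.

# Sketch: detection rate and false-positive rate from confusion counts.
# The counts are hypothetical, not results from the paper.
def detection_rate(tp, fn):
    return tp / (tp + fn)          # fraction of malware correctly flagged

def false_positive_rate(fp, tn):
    return fp / (fp + tn)          # fraction of clean files wrongly flagged

tp, fn = 195_000, 5_000            # hypothetical outcome on 200,000 malware files
fp, tn = 1_000, 1_999_000          # hypothetical outcome on ~2 million clean files
print(f"DR  = {detection_rate(tp, fn):.2%}")
print(f"FPR = {false_positive_rate(fp, tn):.4%}")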
8336 Conservation and Repair Works for Traditional Timber Mosque in Malaysia: A Review on Techniques
Authors: N.K.F. Mustafa, S. Johar, A.G. Ahmad, S.H. Zulkarnain, M.Y. A. Rahman, A.I. Che Ani
Abstract:
A building's life cycle is never free of defects and deterioration. These are common problems in buildings, whether newly built or aged. Buildings constructed from wood are particularly affected by deterioration agents, and serious defects and damage can reduce a building's value. In repair works, it is important to identify the causes and the repair techniques that best suit the condition. This paper reviews the conservation of traditional timber mosques in Malaysia, covering the concept, principles and approaches of mosque conservation in general. In conservation practice, wood in historic buildings can be conserved using various restoration and conservation techniques, which can be grouped as full and partial replacement, mechanical reinforcement, consolidation by impregnation and reinforcement, paint removal, and preservation of wood and control of insect invasion, so as to prolong the function of timber in a building. The review found that the techniques commonly adopted in timber mosque conservation are conventional ones, and that sound repair requires the use of preserved wood to prevent premature defects in the future.
Keywords: Building conservation, conservation principles, repair works, traditional timber mosque.
8335 A Comparison of Experimental Data with Monte Carlo Calculations for Optimisation of the Source-to-Detector Distance in Determining the Efficiency of a LaBr3:Ce (5%) Detector
Authors: H. Aldousari, T. Buchacher, N. M. Spyrou
Abstract:
Cerium-doped lanthanum bromide LaBr3:Ce (5%) crystals are considered to be among the most advanced scintillator materials used in PET scanning, combining a high light yield, fast decay time and excellent energy resolution. Apart from the correct choice of scintillator, it is also important to optimise the detector geometry, not least the source-to-detector distance, in order to obtain reliable measurements and efficiency. In this study a commercially available 25 mm x 25 mm BrilLanCe™ 380 LaBr3:Ce (5%) detector was characterised in terms of its efficiency at varying source-to-detector distances. Gamma-ray spectra of 22Na, 60Co, and 137Cs were separately acquired at distances of 5, 10, 15, and 20 cm. As a result of the change in solid angle subtended by the detector, the geometric efficiency decreased with increasing distance. High efficiencies at short distances can cause pulse pile-up when subsequent photons are detected before previously detected events have decayed. To reduce this systematic error, the source-to-detector distance should balance efficiency against pulse pile-up suppression, as pile-up corrections would otherwise be necessary at short distances. In addition to the experimental measurements, Monte Carlo simulations have been carried out for the same setup, allowing a comparison of results. The advantages and disadvantages of each approach have been highlighted.
Keywords: BrilLanCe™ 380 LaBr3:Ce (5%), Coincidence summing, GATE simulation, Geometric efficiency.
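The distance dependence described above can be illustrated with the standard solid-angle relation for an on-axis point source facing a circular detector of radius r at distance d, Ω = 2π(1 − d/√(d² + r²)), with geometric efficiency Ω/4π. This is a textbook relation, not the paper's GATE model; only the 25 mm crystal diameter and the four distances are taken from the abstract.

# Sketch: geometric efficiency of a 25 mm diameter detector face for an
# on-axis point source, using the standard solid-angle formula.
import math

def geometric_efficiency(distance_cm, radius_cm=1.25):
    d, r = distance_cm, radius_cm
    omega = 2.0 * math.pi * (1.0 - d / math.sqrt(d * d + r * r))
    return omega / (4.0 * math.pi)

for d in (5, 10, 15, 20):  # the source-to-detector distances used in the study
    print(f"{d:2d} cm : {geometric_efficiency(d):.4f}")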
8334 3D Point Cloud Model Color Adjustment by Combining Terrestrial Laser Scanner and Close Range Photogrammetry Datasets
Authors: M. Pepe, S. Ackermann, L. Fregonese, C. Achille
Abstract:
3D models obtained with advanced survey techniques such as close-range photogrammetry and laser scanning are nowadays particularly appreciated in the Cultural Heritage and Archaeology fields. In order to produce high-quality models representing archaeological evidence and anthropological artifacts, the appearance of the model (i.e. its color), beyond the geometric accuracy, is not a negligible aspect. The integration of close-range photogrammetry survey techniques with the laser scanner is still a topic of study and research. Combining point cloud datasets of the same object generated with both technologies, or with the same technology but registered at different moments and/or under different natural light conditions, can produce a final point cloud with accentuated color dissimilarities. In this paper, a methodology to harmonize the different datasets, improve the chromatic quality and highlight further details by balancing the point colors will be presented.
Keywords: Color models, cultural heritage, laser scanner, photogrammetry, point cloud color.
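The paper's balancing method is not detailed in the abstract; a common baseline for reducing such color dissimilarities is channel-wise mean/standard-deviation matching of one dataset's colors to a reference cloud, sketched below under that assumption with stand-in data.

# Sketch: channel-wise colour statistics matching of a source point cloud's RGB
# values to a reference cloud -- a simple baseline, not the authors' method.
import numpy as np

def match_color_statistics(src_rgb, ref_rgb):
    """src_rgb, ref_rgb: (N, 3) float arrays in [0, 1]. Returns adjusted src colours."""
    src_mu, src_sd = src_rgb.mean(axis=0), src_rgb.std(axis=0) + 1e-8
    ref_mu, ref_sd = ref_rgb.mean(axis=0), ref_rgb.std(axis=0)
    adjusted = (src_rgb - src_mu) / src_sd * ref_sd + ref_mu
    return np.clip(adjusted, 0.0, 1.0)

# Hypothetical usage with random stand-in colours
rng = np.random.default_rng(0)
laser_rgb = rng.uniform(0.2, 0.6, (1000, 3))     # e.g. laser-scanner colours
photo_rgb = rng.uniform(0.3, 0.9, (1000, 3))     # e.g. photogrammetric colours
balanced = match_color_statistics(laser_rgb, photo_rgb)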
8333 Numerical Investigation for External Strengthening of Dapped-End Beams
Authors: A. Abdel-Moniem, H. Madkour, K. Farah, A. Abdullah
Abstract:
The reduction in dapped-end beam depth near the supports tends to produce stress concentrations and hence results in shear cracks if the region does not have adequate reinforcement detailing. This study numerically investigates the efficiency of applying different external strengthening techniques to the dapped end of such beams. A two-dimensional finite element model was built to predict the structural behavior of dapped ends strengthened with different techniques. The techniques included external bonding of a steel angle at the re-entrant corner, un-bonded bolt anchoring, external steel plate jacketing, exterior carbon fiber wrapping and/or strips, and external inclined steel plates. The FE analysis results are then presented in terms of the ultimate load capacities, load-deflection curves and crack patterns at failure. The results showed that the FE model, at various stages, was comparable to the available test data. Moreover, it enabled the capture of the failure progress, with acceptable accuracy, which is very difficult in a laboratory test.
Keywords: Dapped-end beams, finite element, shear failure, strengthening techniques, reinforced concrete, numerical investigation.
8332 Envelope-Wavelet Packet Transform for Machine Condition Monitoring
Authors: M. F. Yaqub, I. Gondal, J. Kamruzzaman
Abstract:
The wavelet transform has been extensively used in machine fault diagnosis and prognosis owing to its strength in dealing with non-stationary signals. The existing wavelet transform based schemes for fault diagnosis employ wavelet decomposition of the entire vibration frequency band, which not only involves huge computational overhead in extracting the features but also increases the dimensionality of the feature vector. This increase in dimensionality has the tendency to 'over-fit' the training data and can mislead the fault diagnostic model. In this paper a novel technique, the envelope wavelet packet transform (EWPT), is proposed in which features are extracted based on the wavelet packet transform of the filtered envelope signal rather than the overall vibration signal. It not only reduces the computational overhead, in terms of a reduced number of wavelet decomposition levels and features, but also improves the fault detection accuracy. Analytical expressions are provided for the optimal frequency resolution and decomposition level selection in EWPT. Experimental results with both actual and simulated machine fault data demonstrate a significant gain in fault detection ability by EWPT at reduced complexity compared to existing techniques.
Keywords: Envelope Detection, Wavelet Transform, Bearing Faults, Machine Health Monitoring.
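Based on the pipeline described in the abstract (band-pass filtering, envelope extraction, then wavelet packet decomposition of the envelope), a minimal sketch using SciPy's Hilbert transform and PyWavelets is given below; the wavelet, decomposition level and filter band are assumptions, not the paper's settings.

# Sketch of the EWPT idea: band-pass filter, envelope via the Hilbert transform,
# then wavelet packet sub-band energies of the envelope as features.
import numpy as np
import pywt
from scipy.signal import butter, filtfilt, hilbert

def ewpt_features(x, fs, band=(2000.0, 6000.0), wavelet="db4", level=3):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    envelope = np.abs(hilbert(filtfilt(b, a, x)))          # demodulated envelope
    wp = pywt.WaveletPacket(envelope, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    return np.array([np.sum(n.data ** 2) for n in nodes])  # sub-band energies

fs = 20_000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 3000 * t) * (1 + 0.5 * np.sin(2 * np.pi * 120 * t))  # toy signal
print(ewpt_features(x, fs).round(2))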
8331 Osmotic Dehydration of Beetroot in Salt Solution: Optimization of Parameters through Statistical Experimental Design
Authors: P. Manivannan, M. Rajasimman
Abstract:
Response surface methodology was used for quantitative investigation of water and solids transfer during osmotic dehydration of beetroot in an aqueous salt solution. The effects of temperature (25–45 °C), processing time (30–150 min), salt concentration (5–25%, w/w) and solution-to-sample ratio (5:1–25:1) on osmotic dehydration of beetroot were estimated. Quadratic regression equations describing the effects of these factors on the water loss and solids gain were developed. It was found that the effects of temperature and salt concentration on the water loss were more significant than the effects of processing time and solution-to-sample ratio. For solids gain, processing time and salt concentration were the most significant factors. The osmotic dehydration process was optimized for water loss, solute gain, and weight reduction. The optimum conditions were found to be: temperature 35 °C, processing time 90 min, salt concentration 14.31% and solution-to-sample ratio 8.5:1. At these optimum values, water loss, solids gain and weight reduction were found to be 30.86, 9.43 and 21.43 g/100 g initial sample, respectively.
Keywords: Optimization, osmotic dehydration, beetroot, salt solution, response surface methodology.
8330 Active Contours with Prior Corner Detection
Authors: U.A.A. Niroshika, Ravinda G.N. Meegama
Abstract:
Deformable active contours are widely used in computer vision and image processing applications for image segmentation, especially in biomedical image analysis. The active contour or "snake" deforms towards a target object under the control of internal, image and constraint forces. However, if the contour is initialized with a small number of control points, there is a high probability of bypassing the sharp corners of the object during deformation of the contour. In this paper, a new technique is proposed to construct the initial contour by incorporating prior knowledge of significant corners of the object detected using the Harris operator. This reconstructed contour then begins to deform, attracting the snake towards the targeted object without missing the corners. Experimental results with several synthetic images show the ability of the new technique to deal with sharp corners with higher accuracy than traditional methods.
Keywords: Active Contours, Image Segmentation, Harris Operator, Snakes.
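A rough sketch of the idea of seeding the initial snake with Harris corners is shown below using scikit-image; ordering the corners by angle around their centroid, the test image and the snake parameters are illustrative assumptions rather than the authors' implementation.

# Sketch: build an initial snake that passes through Harris corners, then
# deform it with an active contour (scikit-image). Parameters are illustrative.
import numpy as np
from skimage import data, color
from skimage.feature import corner_harris, corner_peaks
from skimage.segmentation import active_contour

image = color.rgb2gray(data.astronaut())                        # stand-in test image
corners = corner_peaks(corner_harris(image), min_distance=20)   # (row, col) corners

# Order the corners by angle around their centroid and close the polygon.
centroid = corners.mean(axis=0)
angles = np.arctan2(corners[:, 0] - centroid[0], corners[:, 1] - centroid[1])
init = corners[np.argsort(angles)].astype(float)
init = np.vstack([init, init[:1]])

snake = active_contour(image, init, alpha=0.01, beta=1.0, gamma=0.01)
print(snake.shape)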
8329 A Comparative Study of Various Tone Mapping Methods
Authors: Yasir Salih, Aamir Saeed Malik, Wazirah bt. Md-Esa
Abstract:
In recent years, high dynamic range imaging has gained popularity with the advancement of digital photography. In this contribution we present a subjective evaluation of various tone production and tone mapping techniques by a number of participants. Firstly, standard HDR images were used and the participants were asked to rate them based on a given rating scheme. After that, the participants were asked to rate HDR images generated using linear and nonlinear combination approaches from multiple exposure images. The experimental results showed that linearly generated HDR images have better visualization than the nonlinearly combined ones. In addition, the Reinhard et al. and exponential tone mapping operators showed better results than the logarithmic and Garrett et al. tone mapping operators.
Keywords: Tone mapping, high dynamic range, low dynamic range, bits per pixel.
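For context, the Reinhard-style global operator mentioned above compresses luminance as L_d = L_s / (1 + L_s), with L_s the luminance scaled by a key value; a minimal sketch (not the study's evaluation code) follows.

# Sketch: Reinhard-style global tone mapping operator on a luminance map.
import numpy as np

def reinhard_global(hdr_luminance, key=0.18, eps=1e-6):
    L = np.asarray(hdr_luminance, dtype=float)
    L_avg = np.exp(np.mean(np.log(L + eps)))   # log-average (geometric mean) luminance
    L_s = key * L / L_avg                      # scale to the chosen key value
    return L_s / (1.0 + L_s)                   # compress to [0, 1)

hdr = np.random.default_rng(1).uniform(0.01, 5000.0, size=(4, 4))  # toy HDR values
print(reinhard_global(hdr).round(3))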
8328 Seawater Desalination for Production of Highly Pure Water Using a Hydrophobic PTFE Membrane and Direct Contact Membrane Distillation (DCMD)
Authors: Ahmad Kayvani Fard, Yehia Manawi
Abstract:
Qatar’s primary source of fresh water is seawater desalination. Amongst the major processes that are commercially available, the most common large-scale techniques are Multi-Stage Flash distillation (MSF), Multi-Effect Distillation (MED), and Reverse Osmosis (RO). Although commonly used, these three processes are highly expensive due to high energy input requirements and high operating costs, allied with maintenance and the stress induced on the systems in harsh alkaline media. Besides cost, the environmental footprint of these desalination techniques is significant, ranging from damage to the marine ecosystem and large land use to the discharge of tons of greenhouse gases and a large carbon footprint. Among the less energy-consuming, membrane-based techniques being sought to reduce both the carbon footprint and operating costs is membrane distillation (MD). Having emerged in the 1960s, MD is an alternative technology for water desalination that has attracted increasing attention since the 1980s. The MD process involves the evaporation of a hot feed, typically below the boiling point of brine at standard conditions, by creating a water vapor pressure difference across a porous, hydrophobic membrane. The main advantages of MD compared to other commercially available technologies (MSF and MED) and especially RO are the reduction of membrane and module stress due to the absence of trans-membrane pressure, less impact of contaminant fouling on the distillate since only water vapor is transferred, the ability to utilize low-grade or waste heat from the oil and gas industries to heat the feed to the required temperature difference across the membrane, superior water quality, and relatively lower capital and operating cost. To achieve the objective of this study, a state-of-the-art flat-sheet cross-flow DCMD bench-scale unit was designed, commissioned, and tested. The objective of this study is to analyze the characteristics and morphology of the membrane suitable for DCMD through SEM imaging and contact angle measurement, and to study the water quality of the distillate produced by the DCMD bench-scale unit. Comparison with available literature data is undertaken where appropriate, and the laboratory data are used to compare the DCMD distillate quality with that of other desalination techniques and standards. SEM analysis showed that the PTFE membrane used for the study has a contact angle of 127° with a highly porous surface, supported by a less porous, larger-pore-size PP membrane. ICP and IC analysis of the distillate showed that, for any feed salinity and for feed temperatures up to 70 °C, the electrical conductivity of the distillate is less than 5 μS/cm with 99.99% salt rejection. DCMD thus proved to be a feasible and effective process capable of consistently producing high-quality distillate from very high salinity feed (i.e. 100,000 mg/L TDS), with a substantial quality advantage over other desalination methods such as RO and MSF.
Keywords: Membrane Distillation, Waste Heat, Seawater Desalination, Membrane, Freshwater, Direct Contact Membrane Distillation
8327 Visual Text Analytics Technologies for Real-Time Big Data: Chronological Evolution and Issues
Authors: Siti Azrina B. A. Aziz, Siti Hafizah A. Hamid
Abstract:
New approaches to analyzing and visualizing data streams in real time are important for enabling decision makers to make prompt decisions. Financial market trading and surveillance, large-scale emergency response and crowd control are some example scenarios that require real-time analytics and data visualization. This situation has led to the development of techniques and tools that support humans in analyzing the source data. With the emergence of Big Data and social media, new techniques and tools are required in order to process the streaming data. Today, a range of tools which implement some of these functionalities is available. In this paper, we present a chronological evaluation of the evolution of technologies supporting real-time analytics and visualization of data streams. Based on research papers published from 2002 to 2014, we gathered the general information, main techniques, challenges and open issues. The techniques for streaming text visualization are identified based on the Text Visualization Browser in chronological order. This paper aims to review the evolution of streaming text visualization techniques and tools, as well as to discuss the problems and challenges for each of the identified tools.
Keywords: Information visualization, visual analytics, text mining, visual text analytics tools, big data visualization.
8326 A New Approach to Signal Processing for DC-Electromagnetic Flowmeters
Authors: Michael Schukat
Abstract:
Electromagnetic flowmeters with DC excitation are used for a wide range of fluid measurement tasks, but are rarely found in dosing applications with short measurement cycles due to the achievable accuracy. This paper identifies a number of factors that influence the accuracy of this sensor type when used for short-term measurements. Based on these results, a new signal-processing algorithm is described that overcomes the identified problems to some extent. In principle, this new method allows a higher accuracy for electromagnetic flowmeters with DC excitation than traditional methods.
Keywords: Electromagnetic Flowmeter, Kalman Filter, Short Measurement Cycles, Signal Estimation.
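The paper's algorithm is not given in the abstract; as the keywords suggest a Kalman-filter-based estimator, a generic scalar Kalman filter for estimating a slowly varying flow value from noisy samples is sketched below, with assumed noise parameters.

# Sketch: scalar Kalman filter for a slowly varying signal in noise -- a generic
# textbook filter, not the paper's algorithm; q, r and the test data are assumed.
def kalman_1d(measurements, q=1e-4, r=0.05, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                      # predict (random-walk process model)
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with measurement z
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

import random
random.seed(0)
noisy = [1.0 + random.gauss(0, 0.2) for _ in range(50)]   # noisy "flow" samples
print(round(kalman_1d(noisy)[-1], 3))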
8325 Langmuir–Blodgett Films of Polyaniline for Efficient Detection of Uric Acid
Authors: Kashima Arora, Monika Tomar, Vinay Gupta
Abstract:
Langmuir–Blodgett (LB) films of polyaniline (PANI) grown onto ITO-coated glass substrates were utilized for the fabrication of a uric acid biosensor for efficient detection of uric acid, with uricase immobilized via EDC–NHS coupling. The modified electrodes were characterized by atomic force microscopy (AFM). The response characteristics after immobilization of uricase were studied using cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS) techniques. The uricase/PANI/ITO/glass bioelectrode studied by CV and EIS revealed detection of uric acid over a wide range of 0.05 mM to 1.0 mM, covering the physiological range in blood. A low Michaelis–Menten constant (Km) of 0.21 mM indicates the high affinity of the immobilized uricase towards its analyte (uric acid). The fabricated uric acid biosensor based on PANI LB films exhibits an excellent sensitivity of 0.21 mA/mM with a response time of 4 s, good reproducibility, long shelf life (8 weeks) and high selectivity.
Keywords: Uric acid, biosensor, PANI, Langmuir–Blodgett film deposition.
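As a worked illustration of the Km value quoted above, the Michaelis–Menten response I = I_max·C/(Km + C) can be fitted to a calibration curve; the sketch below uses synthetic data generated around Km = 0.21 mM, not the authors' measurements.

# Sketch: estimating Km from a biosensor calibration curve with a
# Michaelis-Menten fit. The data points are synthetic, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(c, i_max, km):
    return i_max * c / (km + c)

conc = np.array([0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])          # mM
current = michaelis_menten(conc, 0.25, 0.21) \
    + 0.002 * np.random.default_rng(2).standard_normal(conc.size)

(i_max_fit, km_fit), _ = curve_fit(michaelis_menten, conc, current, p0=[0.2, 0.1])
print(f"I_max = {i_max_fit:.3f} mA, Km = {km_fit:.3f} mM")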
8324 Quantitative Precipitation Forecast using MM5 and WRF models for Kelantan River Basin
Authors: Wardah, T., Kamil, A.A., Sahol Hamid, A.B., Maisarah, W.W.I
Abstract:
Quantitative precipitation forecast (QPF) from an atmospheric model, used as input to a hydrological model in an integrated hydro-meteorological flood forecasting system, has been operational in many countries worldwide. High-resolution numerical weather prediction (NWP) models with grid cell sizes between 2 and 14 km have great potential to contribute towards reasonably accurate QPF. In this study the potential of two NWP models to forecast precipitation for a flood-prone area in a tropical region is examined. The precipitation forecasts produced by the Fifth-Generation Penn State/NCAR Mesoscale (MM5) and Weather Research and Forecasting (WRF) models are statistically verified against the observed rainfall in the Kelantan River Basin, Malaysia. The statistical verification indicates that the models performed quite satisfactorily for low and moderate rainfall but not very satisfactorily for heavy rainfall.
Keywords: MM5, numerical weather prediction (NWP), quantitative precipitation forecast (QPF), WRF.
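The abstract does not list the verification statistics used; a common set of categorical scores derived from a hit/miss/false-alarm contingency table is sketched below as an illustration of such verification, with made-up counts.

# Sketch: categorical rainfall-forecast verification scores from a contingency
# table -- standard measures, not necessarily the exact statistics of the paper.
def verification_scores(hits, misses, false_alarms):
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    csi = hits / (hits + misses + false_alarms)     # critical success index
    return pod, far, csi

pod, far, csi = verification_scores(hits=42, misses=18, false_alarms=10)
print(f"POD={pod:.2f}  FAR={far:.2f}  CSI={csi:.2f}")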
8323 Integration of Educational Data Mining Models to a Web-Based Support System for Predicting High School Student Performance
Authors: Sokkhey Phauk, Takeo Okazaki
Abstract:
The challenging task in educational institutions is to maximize the number of high-performing students and minimize the failure rate of poor-performing students. An effective way to approach this task is to understand student learning patterns and their highly influential factors, and to obtain an early prediction of student learning outcomes at a timely stage so that policies for improvement can be set up. Educational data mining (EDM) is an emerging disciplinary field of data mining, statistics, and machine learning concerned with extracting useful knowledge and information for the sake of improvement and development in the education environment. The aim of this work is to propose techniques in EDM and integrate them into a web-based system for predicting poor-performing students. A comparative study of prediction models is conducted, and high-performing models are subsequently developed to obtain better performance. The hybrid random forest (Hybrid RF) produces the most successful classification. In the context of intervention and improving learning outcomes, a feature selection method, MICHI, which combines the mutual information (MI) and chi-square (CHI) algorithms based on ranked feature scores, is introduced to select a dominant feature set that improves the prediction performance and provides information for intervention. Using the proposed EDM techniques, an academic performance prediction system (APPS) is subsequently developed for educational stakeholders to obtain an early prediction of student learning outcomes for timely intervention. Experimental outcomes and evaluation surveys report the effectiveness and usefulness of the developed system. The system is used to help educational stakeholders and related individuals in intervening and improving student performance.
Keywords: Academic performance prediction system, prediction model, educational data mining, dominant factors, feature selection methods, student performance.
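The abstract describes MICHI as combining mutual information and chi-square ranked feature scores; one plausible realisation using scikit-learn scorers is sketched below, with the rank-merging rule (a simple average of ranks) and the set size k assumed rather than taken from the paper.

# Sketch of a MICHI-style selector: rank features by mutual information and by
# chi-square, then combine the two rankings to pick a dominant feature set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2, mutual_info_classif
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_pos = MinMaxScaler().fit_transform(X)            # chi2 needs non-negative inputs

mi_scores = mutual_info_classif(X_pos, y, random_state=0)
chi_scores, _ = chi2(X_pos, y)

mi_rank = np.argsort(np.argsort(-mi_scores))       # 0 = best
chi_rank = np.argsort(np.argsort(-chi_scores))
combined = (mi_rank + chi_rank) / 2.0              # assumed merging rule

k = 5                                              # size of the dominant set (assumed)
dominant = np.argsort(combined)[:k]
print("Selected feature indices:", dominant)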
8322 Business-Intelligence Mining of Large Decentralized Multimedia Datasets with a Distributed Multi-Agent System
Authors: Karima Qayumi, Alex Norta
Abstract:
The rapid generation of a high volume and broad variety of data from the application of new technologies poses challenges for the generation of business intelligence. Most organizations and business owners need to extract data from multiple sources and apply analytical methods for the purposes of developing their business. Therefore, today's decentralized data management environments rely on a distributed computing paradigm. While data are stored in highly distributed systems, the implementation of distributed data-mining techniques is a challenge. The aim of these techniques is to gather knowledge from every domain and all the datasets stemming from distributed resources. As agent technologies offer significant contributions to managing the complexity of distributed systems, we consider them for next-generation data-mining processes. To demonstrate agent-based business intelligence operations, we use agent-oriented modeling techniques to develop a new artifact for mining massive datasets.
Keywords: Agent-oriented modeling, business Intelligence management, distributed data mining, multi-agent system.
8321 The Incorporation of In in GaAsN as a Means of N Fraction Calibration
Authors: H. Hashim, B. F. Usher
Abstract:
InGaAsN and GaAsN epitaxial layers with similar nitrogen compositions in a sample were successfully grown on a GaAs (001) substrate by solid-source molecular beam epitaxy. An electron cyclotron resonance nitrogen plasma source was used to generate atomic nitrogen during the growth of the nitride layers. The indium composition was changed from sample to sample to give compressively and tensilely strained InGaAsN layers. Layer characteristics have been assessed by high-resolution x-ray diffraction to determine the relationship between the lattice constant of the GaAs1-yNy layer and the fraction x of In. The objective was to determine the In fraction x in an InxGa1-xAs1-yNy epitaxial layer which exactly cancels the strain present in a GaAs1-yNy epitaxial layer with the same nitrogen content when grown on a GaAs substrate.
Keywords: Indium, molecular beam epitaxy, nitrogen, strain cancellation.
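The strain-cancellation condition can be written with a linear Vegard interpolation of the quaternary lattice constant set equal to that of GaAs; the sketch below solves that condition for x, using commonly quoted approximate zincblende lattice constants for the binaries as assumptions (it is not the calibration derived in the paper).

# Sketch: In fraction x that cancels the strain of a GaAs(1-y)N(y) layer on GaAs,
# from Vegard's law. Binary lattice constants (zincblende, angstroms) are
# commonly quoted approximate values, assumed here rather than taken from the paper.
A_GAAS, A_INAS, A_GAN, A_INN = 5.6533, 6.0583, 4.50, 4.98

def lattice_constant(x, y):
    """Vegard's law for In(x)Ga(1-x)As(1-y)N(y)."""
    return (x * y * A_INN + x * (1 - y) * A_INAS
            + (1 - x) * y * A_GAN + (1 - x) * (1 - y) * A_GAAS)

def strain_cancelling_x(y):
    """Solve lattice_constant(x, y) == a_GaAs for x (lattice matching)."""
    num = y * (A_GAAS - A_GAN)
    den = (A_INAS - A_GAAS) + y * (A_INN - A_INAS - A_GAN + A_GAAS)
    return num / den

for y in (0.01, 0.02, 0.03):
    x = strain_cancelling_x(y)
    print(f"y = {y:.2f}  ->  x = {x:.3f}  (a = {lattice_constant(x, y):.4f} A)")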
8320 XML Data Management in Compressed Relational Database
Authors: Hongzhi Wang, Jianzhong Li, Hong Gao
Abstract:
XML is an important standard for data exchange and representation. Since relational databases are mature systems, using a relational database to support XML data may bring some advantages. However, storing XML in a relational database introduces obvious redundancy that wastes disk space, bandwidth and disk I/O when querying XML data. For efficient storage and querying of XML, it is necessary to use compressed XML data in the relational database. In this paper, a compressed relational database technology supporting XML data is presented. The original relational storage structure is adapted to XPath query processing, and the compression method preserves this feature. Besides traditional relational database techniques, additional query processing technologies on compressed relations and for XML-specific structures are presented. Technologies for XQuery processing in a compressed relational database are also presented.
Keywords: XML, compression, query processing.
8319 A Performance Comparison of Golay and Reed-Muller Coded OFDM Signal for Peak-to-Average Power Ratio Reduction
Authors: Sanjay Singh, M Sathish Kumar, H. S Mruthyunjaya
Abstract:
Multicarrier transmission systems such as Orthogonal Frequency Division Multiplexing (OFDM) are a promising technique for high bit rate transmission in wireless communication systems. OFDM is a spectrally efficient modulation technique that can achieve high-speed data transmission over multipath fading channels without the need for powerful equalization techniques. A major drawback of OFDM is the high Peak-to-Average Power Ratio (PAPR) of the transmit signal, which can significantly impact the performance of the power amplifier. In this paper we compare the PAPR reduction performance of Golay and Reed-Muller coded OFDM signals. From our simulations it has been found that the PAPR reduction performance of Golay coded OFDM is better than that of Reed-Muller coded OFDM. Moreover, the code configurations for Golay and Reed-Muller codes that give optimum PAPR reduction performance have been identified.
Keywords: OFDM, PAPR, Perfect Codes, Golay Codes, Reed-Muller Codes.
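For reference, PAPR is defined as max|x|²/mean|x|² over one OFDM symbol; the sketch below computes it for a plain uncoded QPSK/IFFT symbol to illustrate the metric only, without implementing the Golay or Reed-Muller codings compared in the paper.

# Sketch: PAPR of an OFDM symbol, PAPR_dB = 10*log10(max|x|^2 / mean|x|^2),
# for an uncoded random QPSK symbol (illustration of the metric only).
import numpy as np

def papr_db(x):
    power = np.abs(x) ** 2
    return 10.0 * np.log10(power.max() / power.mean())

rng = np.random.default_rng(3)
n_subcarriers = 64
qpsk = (2 * rng.integers(0, 2, n_subcarriers) - 1) \
    + 1j * (2 * rng.integers(0, 2, n_subcarriers) - 1)
ofdm_symbol = np.fft.ifft(qpsk)                    # time-domain OFDM symbol
print(f"PAPR = {papr_db(ofdm_symbol):.2f} dB")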
8318 Design of Low Power and High Speed Digital IIR Filter in 45nm with Optimized CSA for Digital Signal Processing Applications
Authors: G. Ramana Murthy, C. Senthilpari, P. Velrajkumar, Lim Tien Sze
Abstract:
In this paper, a design methodology to implement a low-power and high-speed 2nd-order recursive digital Infinite Impulse Response (IIR) filter is proposed. Since IIR filters suffer from a large number of constant multiplications, the proposed method replaces the constant multiplications with addition/subtraction and shift operations. The proposed new 6T adder cell is used as the Carry-Save Adder (CSA) to implement the addition/subtraction operations in the design of the recursive section of the IIR filter to reduce the propagation delay. Furthermore, high-level algorithms designed for the optimization of the number of CSA blocks are used to reduce the complexity of the IIR filter. The DSCH3 tool is used to generate the schematic of the proposed 6T CSA based shift-adds architecture, and it is analyzed using the Microwind CAD tool to synthesize low-complexity and high-speed IIR filters. The proposed design outperforms MUX-12T and MCIT-7T based CSA adder filter designs in terms of power, propagation delay, area and throughput. It is observed from the experimental results that the proposed 6T based design method can find better IIR filter designs in terms of power and delay than those obtained by using efficient general multipliers.
Keywords: CSA Full Adder, Delay unit, IIR filter, Low-Power, PDP, Parametric Analysis, Propagation Delay, Throughput, VLSI.
8317 Face Tracking using a Polling Strategy
Authors: Rodrigo Montufar-Chaveznava
Abstract:
The colors of human skin represent a special category of colors because they are distinct from the colors of other natural objects. This category forms a cluster in color spaces, and the skin color variations between people are mostly due to differences in intensity. In addition, face detection based on skin color detection is faster than other techniques. In this work, we present a system to track faces by carrying out skin color detection in four different color spaces: HSI, YCbCr, YES and RGB. Once skin color regions have been detected in each color space, we label each one and extract characteristics such as size and position, under the assumption that a face is located in one of the detected regions. Next, we compare the labeled regions and employ a polling strategy between them to determine the final region where the face has effectively been detected and located.
Keywords: Tracking, face detection, image processing, color spaces.
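A rough sketch of the polling idea is given below: skin pixels are detected independently in several color spaces and only pixels flagged by a majority of detectors are kept. The threshold ranges are common heuristics (assumptions, not the paper's values), the YES space is omitted, and "face.jpg" is a hypothetical input file.

# Sketch: per-colour-space skin detection with a majority poll (OpenCV).
import cv2
import numpy as np

def skin_votes(bgr):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    masks = [
        cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135)),   # Cr/Cb skin band
        cv2.inRange(hsv, (0, 40, 60), (25, 255, 255)),       # low hue, moderate saturation
        cv2.inRange(bgr, (40, 40, 95), (255, 230, 255)),     # loose RGB rule
    ]
    votes = sum((m > 0).astype(np.uint8) for m in masks)
    return (votes >= 2).astype(np.uint8) * 255                # majority poll

frame = cv2.imread("face.jpg")                                # hypothetical input image
if frame is not None:
    skin_mask = skin_votes(frame)
    cv2.imwrite("skin_mask.png", skin_mask)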
8316 Space Vector Pulse Width Modulation Technique Based Design and Simulation of a Three-Phase Voltage Source Converter Systems
Authors: Farhan Beg
Abstract:
A Space Vector based Pulse Width Modulation control technique for the three-phase PWM converter is proposed in this paper. The proposed control scheme is based on a synchronous reference frame model. High performance and efficiency are obtained with regard to the DC bus voltage and the power factor of the PWM rectifier, leading to low losses. MATLAB/Simulink is used as the platform for the simulations and a Simulink model is presented in the paper. The results show that the proposed model demonstrates better performance and properties compared to the traditional SPWM method and improves the dynamic performance of the closed loop drastically. For the Space Vector based Pulse Width Modulation, a sine signal is the reference waveform and a triangle waveform is the carrier. When the sine signal is larger than the triangle signal, the output pulse goes high; when the triangle signal is higher than the sine signal, the pulse goes low. The SPWM output is changed by varying the modulation index and the frequency used in the system to produce more pulse widths. The more pulse widths produced, the lower the harmonic content of the output voltage and the higher the resolution.
Keywords: Power Factor, SVPWM, PWM rectifier, SPWM.
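As background to the modulation principle, the core SVPWM calculation identifies the sector of the reference vector and the dwell times of the two adjacent active vectors; a textbook formulation is sketched below (not the Simulink model described in the paper), with example values chosen arbitrarily.

# Sketch: SVPWM sector identification and active-vector dwell times for a
# reference voltage vector -- a standard textbook formulation.
import math

def svpwm_dwell_times(v_ref, theta, v_dc, t_s):
    """Return (sector, T1, T2, T0) for reference magnitude v_ref at angle theta [rad]."""
    theta = theta % (2 * math.pi)
    sector = int(theta // (math.pi / 3)) + 1           # sectors 1..6
    theta_local = theta - (sector - 1) * math.pi / 3   # angle inside the sector
    m = math.sqrt(3) * v_ref / v_dc                    # modulation index
    t1 = m * t_s * math.sin(math.pi / 3 - theta_local)
    t2 = m * t_s * math.sin(theta_local)
    t0 = t_s - t1 - t2                                 # zero-vector time
    return sector, t1, t2, t0

# Example: reference vector on a 700 V DC bus with a 100 us switching period
print(svpwm_dwell_times(v_ref=400 / math.sqrt(3), theta=math.radians(50), v_dc=700, t_s=100e-6))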
8315 Modern Pedagogy Techniques for DC Motor Speed Control
Authors: Rajesh Kumar, Roopali Dogra, Puneet Aggarwal
Abstract:
Based on a survey of second- and third-year students of the electrical engineering department at Maharishi Markandeshwar University, India, it was found that around 92% of students felt that it would be better to introduce a virtual environment for laboratory experiments. Hence, a need was felt to apply modern pedagogy techniques consisting of a virtual environment built in MATLAB/Simulink. In this paper, a virtual environment for the speed control of a DC motor is developed using MATLAB/Simulink. The speed control methods for the DC motor include the field resistance control method and the armature voltage control method. The performance of the DC motor is then analyzed.
Keywords: Pedagogy techniques, speed control, virtual environment, DC motor, field control, voltage control.
8314 Issues in Spectral Source Separation Techniques for Plant-wide Oscillation Detection and Diagnosis
Authors: A.K. Tangirala, S. Babji
Abstract:
In the last few years, three multivariate spectral analysis techniques, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-negative Matrix Factorization (NMF), have emerged as effective tools for oscillation detection and isolation. While the first method is used in determining the number of oscillatory sources, the latter two methods are used to identify source signatures by formulating the detection problem as a source identification problem in the spectral domain. In this paper, we present a critical drawback of the underlying linear (mixing) model which strongly limits the ability of the associated source separation methods to determine the number of sources and/or identify the physical source signatures. It is shown that the assumed mixing model is only valid if each unit of the process gives equal weighting (all-pass filter) to all oscillatory components in its inputs. This is in contrast to the fact that each unit, in general, acts as a filter with a non-uniform frequency response. Thus, the model can only facilitate correct identification of a source with a single frequency component, which is again unrealistic. To overcome this deficiency, an iterative post-processing algorithm that correctly identifies the physical source(s) is developed. An additional issue with the existing methods is that they lack a procedure to pre-screen non-oscillatory/noisy measurements which obscure the identification of oscillatory sources. In this regard, a pre-screening procedure based on the notion of a sparseness index is prescribed to eliminate the noisy and non-oscillatory measurements from the data set used for analysis.
Keywords: Non-negative matrix factorization, PCA, source separation, plant-wide diagnosis.
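The abstract does not define its sparseness index; one common choice for quantifying how peaked a spectrum is, Hoyer's measure (√n − ||s||₁/||s||₂)/(√n − 1), is sketched below as a plausible pre-screening criterion, not necessarily the paper's exact index.

# Sketch: pre-screening measurements by a spectral sparseness index (Hoyer's
# measure of the magnitude spectrum); oscillatory signals score near 1,
# broadband noise scores near 0.
import numpy as np

def spectral_sparseness(x):
    s = np.abs(np.fft.rfft(x - np.mean(x)))
    n = s.size
    return (np.sqrt(n) - np.sum(s) / np.linalg.norm(s)) / (np.sqrt(n) - 1)

t = np.arange(0, 10, 0.01)
oscillatory = np.sin(2 * np.pi * 0.5 * t)
noisy = np.random.default_rng(4).standard_normal(t.size)
print(round(spectral_sparseness(oscillatory), 2), round(spectral_sparseness(noisy), 2))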
8313 Low Cost Surface Electromyographic Signal Amplifier Based On Arduino Microcontroller
Authors: Igor Luiz Bernardes de Moura, Luan Carlos de Sena Monteiro Ozelim, Fabiano Araujo Soares
Abstract:
The development of a low-cost acquisition system for S-EMG signals that is reliable, comfortable for the user and highly mobile is a relevant proposition in the modern biomedical engineering scenario. In this study, the sampling capacity of the Arduino board's Atmel ATmega328 microcontroller, with its 10-bit A/D converter, and its capability to reconstruct a surface electromyography signal are analyzed. An electronic circuit to capture the signal through two differential channels was designed, and signals from the biceps brachii of a healthy 21-year-old man were acquired to test the system prototype. ARV, MDF, MNF and RMS estimators were used to compare the acquired signals with physiological values. The Arduino was configured with a sampling frequency of 1.5 kHz for each channel, and the tests with the designed circuit yielded an SNR of 20.57 dB.
Keywords: Electromyography, Arduino, low-cost, Atmel ATmega328 microcontroller.
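The four estimators named in the abstract (ARV, RMS, MNF, MDF) follow standard definitions; a minimal sketch over a synthetic test signal is given below (not the authors' processing code), using the 1.5 kHz per-channel rate mentioned in the abstract.

# Sketch: standard S-EMG amplitude and frequency estimators.
import numpy as np

def emg_estimators(x, fs):
    x = np.asarray(x, dtype=float)
    arv = np.mean(np.abs(x))                         # average rectified value
    rms = np.sqrt(np.mean(x ** 2))                   # root mean square
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2                # power spectrum
    mnf = np.sum(freqs * psd) / np.sum(psd)          # mean frequency
    cum = np.cumsum(psd)
    mdf = freqs[np.searchsorted(cum, cum[-1] / 2)]   # median frequency
    return arv, rms, mnf, mdf

fs = 1500                                            # Hz, per-channel rate from the abstract
t = np.arange(0, 1.0, 1 / fs)
signal = np.random.default_rng(5).standard_normal(t.size) * np.sin(2 * np.pi * 2 * t)
print([round(v, 3) for v in emg_estimators(signal, fs)])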
8312 Energy Management Techniques in Mobile Robots
Authors: G. Gurguze, I. Turkoglu
Abstract:
Today, the developing features of technological tools with limited energy resources have made it necessary to use energy efficiently, and energy management techniques have emerged for this purpose. As in every field, energy management is vital for robots, which are being used in many areas from industry to daily life and are expected to occupy even more areas in the future. In particular, effective power management in autonomous and multi-robot systems, which are becoming more complex and more numerous day by day, will improve their performance and success. In this study, robot management algorithms, the use of renewable and hybrid energy sources, robot motion patterns, robot designs, workload-sharing strategies among multiple robots, and path and mission planning algorithms are discussed with a view to the efficient use of energy resources by mobile robots. These techniques are evaluated in terms of efficient use of existing energy resources and energy management in robots.
Keywords: Energy management, mobile robot, robot administration, robot management, robot planning.
8311 The Effect of Glass Thickness on Stress in Vacuum Glazing
Authors: Farid Arya, Trevor Hyde, Andrea Trevisi, Paolo Basso, Danilo Bardaro
Abstract:
Heat transfer through multiple-pane windows can be reduced by creating a vacuum pressure of less than 0.1 Pa between the glass panes, with low-emittance coatings on one or more of the internal surfaces. Fabrication of vacuum glazing (VG) requires the formation of a hermetic seal around the periphery of the glass panes together with an array of support pillars between the panes to prevent them from touching under atmospheric pressure. Atmospheric pressure and temperature differentials induce stress which can affect the integrity of the glazing. Several parameters define the stresses in VG, including the glass thickness, pillar specifications, glazing dimensions and edge seal configuration. Inherent stresses in VG can result in fractures in the glass panes and failure of the edge seal. In this study, stress in VG with different glass thicknesses is theoretically studied using Finite Element Modelling (FEM). Based on the findings of this study, suggestions are made to address problems resulting from the use of thinner glass panes in the fabrication of VG. This can lead to the development of high-performance, light and thin VG.
Keywords: ABAQUS, glazing, stress, vacuum glazing, vacuum insulation.
8310 An Overview of Handoff Techniques in Cellular Networks
Authors: Nasıf Ekiz, Tara Salih, Sibel Küçüköner, Kemal Fidanboylu
Abstract:
Continuation of an active call is one of the most important quality measures in cellular systems. The handoff process enables a cellular system to provide this facility by transferring an active call from one cell to another. Different approaches have been proposed and applied in order to achieve better handoff service. The principal parameters used to evaluate handoff techniques are the forced termination probability and the call blocking probability. Mechanisms such as guard channels and the queuing of handoff calls decrease the forced termination probability while increasing the call blocking probability. In this paper we present an overview of the issues related to handoff initiation and decision, and discuss the different types of handoff techniques available in the literature.
Keywords: Handoff, Forced Termination Probability, Blocking probability, Handoff Initiation, Handoff Decision, Handoff Prioritization Schemes.
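The trade-off described above can be illustrated with the classic guard-channel model, an M/M/C/C birth-death chain in which new calls are admitted only while fewer than C − g channels are busy; the sketch below is this standard textbook model with arbitrary example rates, not the paper's analysis.

# Sketch: new-call blocking and handoff-dropping probabilities for the
# guard-channel admission scheme (birth-death chain with C channels, g guards).
def guard_channel_probs(lam_new, lam_handoff, mu, channels, guards):
    threshold = channels - guards
    rho_total = (lam_new + lam_handoff) / mu
    rho_handoff = lam_handoff / mu
    # Unnormalised state probabilities p_k for k busy channels
    p = [0.0] * (channels + 1)
    p[0] = 1.0
    for k in range(1, channels + 1):
        arrival = rho_total if k <= threshold else rho_handoff
        p[k] = p[k - 1] * arrival / k
    z = sum(p)
    blocking = sum(p[threshold:]) / z          # new calls rejected at/above threshold
    dropping = p[channels] / z                 # handoffs dropped only when all channels busy
    return blocking, dropping

# Example rates (assumed): mean call holding time 120 s, 30 channels, 3 guards
b, d = guard_channel_probs(lam_new=0.15, lam_handoff=0.05, mu=1 / 120, channels=30, guards=3)
print(f"New-call blocking: {b:.4f}, handoff dropping: {d:.4f}")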