Search results for: noise sensing circuit
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1707

177 Discrete Polyphase Matched Filtering-based Soft Timing Estimation for Mobile Wireless Systems

Authors: Thomas O. Olwal, Michael A. van Wyk, Barend J. van Wyk

Abstract:

In this paper, we present a soft timing phase estimation (STPE) method for wireless mobile receivers operating at low signal-to-noise ratios (SNRs). Discrete Polyphase Matched (DPM) filters, a log-maximum a posteriori probability (Log-MAP) algorithm and/or a Soft-Output Viterbi Algorithm (SOVA) are combined to derive a new timing recovery (TR) scheme. We apply this scheme to a wireless cellular communication system model that comprises a raised cosine filter (RCF) and a bit-interleaved turbo-coded multi-level modulation (BITMM) scheme; the channel is assumed to be memoryless. Furthermore, no clock signals are transmitted to the receiver, contrary to classical data-aided (DA) models. This new model ensures that both the bandwidth and the power of the communication system are conserved. However, the computational complexity of ideal turbo synchronization is increased by 50%. Several simulation tests on bit error rate (BER) and block error rate (BLER) versus low SNR reveal that the proposed iterative soft timing recovery (ISTR) scheme outperforms conventional schemes.
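
As a rough illustration of the polyphase timing idea only (a minimal numpy sketch under simplified assumptions, not the authors' DPM/Log-MAP/SOVA receiver): oversample the received signal, matched-filter it, split the output into polyphase branches, and form a soft timing-phase estimate from the branch energies.

import numpy as np

def soft_timing_phase(rx, pulse, sps):
    """Soft timing-phase estimate from polyphase matched-filter branch energies.
    rx    : received samples, oversampled by 'sps' (real-valued pulse assumed)
    pulse : matched-filter taps at the same oversampling rate
    sps   : samples per symbol (number of polyphase branches)
    """
    mf = np.convolve(rx, pulse[::-1], mode="same")     # matched filtering
    n = (len(mf) // sps) * sps
    branches = mf[:n].reshape(-1, sps)                 # one column per candidate timing phase
    energy = np.mean(np.abs(branches) ** 2, axis=0)    # average energy per phase
    weights = energy / energy.sum()                    # soft (probability-like) weights
    # circular mean of the phase index, expressed in samples
    angles = 2 * np.pi * np.arange(sps) / sps
    soft_phase = np.angle(np.sum(weights * np.exp(1j * angles))) / (2 * np.pi) * sps
    return soft_phase % sps, weights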

Keywords: discrete polyphase matched filters, maximum likelihood estimators, soft timing phase estimation, wireless mobile systems.

PDF Downloads: 1692
176 Robotics and Embedded Systems Applied to Buried Pipeline Inspection

Authors: Robson C. Santos, Julio C. P. Ribeiro, Iorran M. de Castro, Luan C. F. Rodrigues, Sandro R. L. Silva, Diego M. Quesada

Abstract:

This work aims to develop a robot, in the form of an autonomous vehicle, for the detection, inspection and mapping of underground pipelines using the ATmega328 Arduino platform. The Arduino prototyping environment uses a language very similar to C/C++, which facilitates its use in open-source robotics, and it resembles the PLCs used in large industrial processes. The robot traverses the surface independently of direct human action in order to automate the process of detecting buried pipes, guided by electromagnetic induction. The induction signal comes from coils that feed the Arduino microcontroller, which evaluates differences in signal intensity, processes the information, and then commands electrical components such as relays and motors, allowing the prototype to move over the surface while acquiring the necessary data. Changes of direction are performed by a stepper motor together with a servo motor. The robot was built from electrical and electronic assemblies that allowed its application to be tested. The assembly is composed of metal-detector coils, circuit boards and a microprocessor; these previously developed, interconnected circuits determine the process control and mechanical actions of the robot (autonomous vehicle) that detects and maps buried pipelines. This type of prototype can identify possible landslides and help prevent buried pipelines from suffering external pressure on their walls, with the associated risk of oil leakage and environmental pollution.

Keywords: Robotics, metal detector, embedded system, pipeline.

PDF Downloads: 2160
175 Highly Accurate Target Motion Compensation Using Entropy Function Minimization

Authors: Amin Aghatabar Roodbary, Mohammad Hassan Bastani

Abstract:

One of the drawbacks of stepped-frequency radar systems is their sensitivity to target motion. In such systems, target motion causes range cell shift, false peaks, signal-to-noise ratio (SNR) reduction and range profile spreading, because the power spectrum of each range cell interferes with adjacent range cells; this distorts the High Resolution Range Profile (HRRP) and disrupts the target recognition process. Thus, compensation for the effects of the Target Motion Parameters (TMPs) should be employed. In this paper, a method for estimating the TMPs (velocity and acceleration), and consequently eliminating or suppressing their unwanted effects on the HRRP, based on entropy minimization is proposed. The method is carried out in two major steps: in the first step, a discrete search is performed over the whole acceleration-velocity lattice, in a specific interval, seeking a coarse minimum point of the entropy function. In the second step, a 1-D search over velocity is done in the vicinity of that minimum, for several constant-acceleration lines, in order to refine the accuracy of the minimum found in the first step. The provided simulation results demonstrate the effectiveness of the proposed method.
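
A schematic version of the two-step search (illustrative only; the phase-compensation term below is a generic stepped-frequency range-migration model with placeholder parameters, not necessarily the authors' exact formulation):

import numpy as np

def hrrp_entropy(sig_freq, v, a, f0, df, T):
    """Entropy of the HRRP after compensating a (v, a) motion hypothesis.
    sig_freq : complex returns over one stepped-frequency burst
    f0, df   : start frequency and frequency step [Hz];  T : pulse repetition interval [s]
    """
    n = np.arange(len(sig_freq))
    f = f0 + n * df
    t = n * T
    # generic phase term for radial motion r(t) = v*t + 0.5*a*t^2
    phase = 4 * np.pi * f * (v * t + 0.5 * a * t**2) / 3e8
    hrrp = np.abs(np.fft.ifft(sig_freq * np.exp(1j * phase)))
    p = hrrp**2 / np.sum(hrrp**2)
    return -np.sum(p * np.log(p + 1e-12))

def estimate_tmp(sig_freq, v_grid, a_grid, f0, df, T):
    # step 1: coarse 2-D search over the acceleration-velocity lattice
    E = [[hrrp_entropy(sig_freq, v, a, f0, df, T) for v in v_grid] for a in a_grid]
    ia, iv = np.unravel_index(np.argmin(E), (len(a_grid), len(v_grid)))
    # step 2: finer 1-D search over velocity around the coarse minimum
    v_fine = np.linspace(v_grid[max(iv - 1, 0)], v_grid[min(iv + 1, len(v_grid) - 1)], 101)
    e_fine = [hrrp_entropy(sig_freq, v, a_grid[ia], f0, df, T) for v in v_fine]
    return v_fine[int(np.argmin(e_fine))], a_grid[ia]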

Keywords: ATR, HRRP, motion compensation, SFW, TMP.

PDF Downloads: 657
174 An Efficient Tool for Mitigating Voltage Unbalance with Reactive Power Control of Distributed Grid-Connected Photovoltaic Systems

Authors: Malinwo Estone Ayikpa

Abstract:

With the rapid increase of grid-connected PV systems over the last decades, genuine challenges have arisen for engineers and professionals of the energy field in the planning and operation of existing distribution networks with the integration of new generation sources. However, the conventional distribution network was not designed to receive generation other than the main power supply. The tools generally used to analyze such networks become inefficient and cannot take into account all the constraints related to the operation of grid-connected PV systems. Some of these constraints are voltage control difficulty, reverse power flow, and especially voltage unbalance, which may be due to the poor distribution of single-phase PV systems across the network. In order to analyze the impact of connecting small and large numbers of PV systems to distribution networks, this paper presents an efficient optimization tool that minimizes voltage unbalance in three-phase distribution networks through the active and reactive power injections of allocated single-phase and three-phase PV plants. Reactive power can be generated or absorbed using the available capacity and the adjustable power factor of the inverter, and a good reduction of voltage unbalance can be achieved by reactive power control of the PV systems. The presented tool is based on the three-phase current injection method, and the PV systems are modeled via an equivalent circuit. The primal-dual interior point method is used to obtain the optimal operating points of the systems.
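
For reference, the voltage unbalance that such a tool minimizes is commonly quantified with the symmetrical-component definition (negative- over positive-sequence voltage). A minimal sketch of that calculation, on illustrative per-unit values:

import numpy as np

def voltage_unbalance_factor(va, vb, vc):
    """VUF (%) = |V_negative| / |V_positive| * 100, from complex phase voltages."""
    a = np.exp(1j * 2 * np.pi / 3)                 # 120-degree rotation operator
    v_pos = (va + a * vb + a**2 * vc) / 3          # positive-sequence component
    v_neg = (va + a**2 * vb + a * vc) / 3          # negative-sequence component
    return 100.0 * abs(v_neg) / abs(v_pos)

# example: a slightly unbalanced set of phase voltages (per unit, illustrative only)
print(voltage_unbalance_factor(1.0,
                               0.97 * np.exp(-1j * 2 * np.pi / 3),
                               1.02 * np.exp(1j * 2 * np.pi / 3)))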

Keywords: Photovoltaic generation, primal-dual interior point method, three-phase optimal power flow, unbalanced system.

PDF Downloads: 1088
173 Reversible Binary Arithmetic for Integrated Circuit Design

Authors: D. Krishnaveni, M. Geetha Priya

Abstract:

Application of reversible logic in integrated circuits results in improved optimization of power consumption. This technology can be used in a variety of low-power applications such as quantum computing, optical computing, nanotechnology, and Complementary Metal Oxide Semiconductor (CMOS) Very Large Scale Integration (VLSI) design. Logic gates are the basic building blocks of any logic network and thus of integrated circuits. In this paper, the reversible Dual Key Gate (DKG) and Dual Key Gate Pair (DKGP) gates, each of which works singly as a full adder/full subtractor, are used to realize the basic building blocks of logic circuits. Reversible full adders/subtractors and parallel adders/subtractors are designed using other reversible gates available in the literature and compared with those based on the DKG and DKGP gates. Efficient performance of reversible logic circuits relies on the optimization of the key parameters, viz. the number of constant inputs, the number of garbage outputs, and the number of reversible gates. The full adder/subtractor and parallel adder/subtractor designs with the reversible DKGP and DKG gates result in the lowest number of constant inputs, garbage outputs, and reversible gates compared to the other designs. Thus, this paper provides a basis for building more complex arithmetic systems using these reversible logic gates, leading to enhanced performance of computing systems.
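
The defining property of such gates is that the input-to-output mapping is a bijection. A small check of this property for a generic reversible full adder built from two Peres gates (an illustrative construction, not the published DKG/DKGP truth tables):

from itertools import product

def peres(a, b, c):
    """Peres gate: (A, B, C) -> (A, A xor B, (A and B) xor C); itself reversible."""
    return a, a ^ b, (a & b) ^ c

def full_adder_reversible(a, b, cin, d):
    """Full adder on four lines built from two Peres gates; d is the constant (0) line."""
    a, p, q = peres(a, b, d)          # p = a^b, q = (a&b)^d
    p, s, cout = peres(p, cin, q)     # s = a^b^cin, cout = ((a^b)&cin) ^ (a&b) ^ d
    return a, p, s, cout

# check 1: the 4-bit mapping is a bijection (reversibility)
outputs = {full_adder_reversible(*bits) for bits in product((0, 1), repeat=4)}
assert len(outputs) == 16

# check 2: with the constant input d = 0, sum and carry are correct
for a, b, cin in product((0, 1), repeat=3):
    _, _, s, cout = full_adder_reversible(a, b, cin, 0)
    assert s == (a ^ b ^ cin) and cout == int(a + b + cin >= 2)
print("reversible full adder verified")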

Keywords: Low power CMOS, quantum computing, reversible logic gates, full adder, full subtractor, parallel adder/subtractor, basic gates, universal gates.

PDF Downloads: 1437
172 Quantification of E-Waste: A Case Study in Federal University of Espírito Santo, Brazil

Authors: Andressa S. T. Gomes, Luiza A. Souza, Luciana H. Yamane, Renato R. Siman

Abstract:

The segregation of waste electrical and electronic equipment (WEEE) at the generating source, its quali-quantitative characterization and the identification of its origin, besides being integral parts of classification reports, are crucial steps for the success of its integrated management. The aim of this paper was to quantify WEEE generation at the Federal University of Espírito Santo (UFES), Brazil, as well as to define sources, temporary storage sites, main transportation routes and destinations, the most generated types of WEEE and their recycling potential. Quantification of the WEEE generated at the University between 2010 and 2015 was performed using data provided by UFES's sector of assets management. Information on EEE and WEEE flows on the campuses was obtained through questionnaires applied to University workers. A total of 6,028 WEEE units of data processing equipment were recorded as disposed of by the University between 2010 and 2015. Among this waste, the most generated items were CRT screens, desktops, keyboards and printers. Furthermore, it was observed that this WEEE is temporarily stored in inappropriate places on the University campuses. In general, these WEEE units are donated to NGOs of the city or sold through auctions (2010 and 2013). As for recycling potential, the primary processing and subsequent sale of the printed circuit boards (PCBs) from the computers could yield up to US$ 27,839.23. The results highlight the importance of a WEEE management policy at the University.

Keywords: Solid waste, waste of electric and electronic equipment, waste management, institutional generation of solid waste.

PDF Downloads: 1568
171 A New Image Psychovisual Coding Quality Measurement Based on Region of Interest

Authors: M. Nahid, A. Bajit, A. Tamtaoui, E. H. Bouyakhf

Abstract:

To model the human visual system (HVS) in the region of interest, we propose a new objective metric adapted to wavelet foveation-based image compression quality measurement. It exploits a foveation filter implemented in the DWT domain, based on the point and region of fixation of the human eye. This model is then used to predict the visible differences between an original and a compressed image with respect to this fixation region, and yields an adapted, local error measure by discarding all peripheral errors. The technique, which we call foveation wavelet visible difference prediction (FWVDP), is demonstrated on a number of noisy images, all of which have the same local peak signal-to-noise ratio (PSNR) but visibly different errors. We show that the FWVDP reliably predicts the fixation areas of interest where error is masked, due to high image contrast, and the areas where the error is visible, due to low image contrast. The paper also suggests ways in which the FWVDP can be used to determine a visually optimal quantization strategy for foveation-based wavelet coefficients and to produce a quantitative local measure of image quality.

Keywords: Human Visual System, Image Quality, Image Compression, foveation wavelet, region of interest (ROI).

PDF Downloads: 1498
170 QoS Improvement Using Intelligent Algorithm under Dynamic Tropical Weather for Earth-Space Satellite Applications

Authors: Joseph S. Ojo, Vincent A. Akpan, Oladayo G. Ajileye, Olalekan L. Ojo

Abstract:

In this paper, an intelligent algorithm (IA) capable of adapting to dynamic tropical weather conditions is proposed based on fuzzy logic techniques. The IA effectively interacts with the quality of service (QoS) criteria, irrespective of the dynamic tropical weather, to achieve improvement of the satellite links. To achieve this, an adaptive network-based fuzzy inference system (ANFIS) has been adopted. The algorithm is capable of responding to weather fluctuations and generating the appropriate adjustment of the satellite QoS, so as to deliver efficient services to the customers. Five years (2012-2016) of one-minute integration time rainfall rate time series data have been used to derive fading based on the ITU-R P.618-12 propagation model. The data were obtained from measurements undertaken by the Communication Research Group (CRG), Physics Department, Federal University of Technology, Akure, Nigeria. The rain attenuation and signal-to-noise ratio (SNR) were derived for frequencies between the Ku and V bands and for different propagation angles and transmitting powers. The simulated results show a substantial reduction in SNR, especially for applications in the area of digital video broadcast - second generation coding and modulation satellite networks.

Keywords: Fuzzy logic, intelligent algorithm, Nigeria, QoS, satellite applications, tropical weather.

PDF Downloads: 818
169 An Optimization of Machine Parameters for Modified Horizontal Boring Tool Using Taguchi Method

Authors: Thirasak Panyaphirawat, Pairoj Sapsmarnwong, Teeratas Pornyungyuen

Abstract:

This paper presents the findings of an experimental investigation of important machining parameters for a horizontal boring tool modified to mount on a horizontal lathe machine in order to bore an over-length workpiece. To verify the usability of the modified tool, a design of experiment based on the Taguchi method is performed. The parameters investigated are spindle speed, feed rate, depth of cut and length of workpiece. A Taguchi L9 orthogonal array is selected for the four factors at three levels in order to minimize the surface roughness (Ra and Rz) of S45C steel tubes. Signal-to-noise ratio analysis and analysis of variance (ANOVA) are performed to study the effect of these parameters and to optimize the machine setting for the best surface finish. The controlled factors with the most effect are, in order, depth of cut, spindle speed, length of workpiece, and feed rate. A confirmation test is performed to verify the optimal setting obtained from the Taguchi method, and the result is satisfactory.
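
For a smaller-the-better response such as surface roughness, the Taguchi signal-to-noise ratio is S/N = -10*log10(mean(y^2)). A small sketch of ranking factor levels from an L9 table (the Ra values below are placeholders, not the paper's data):

import numpy as np

def sn_smaller_is_better(y):
    """Taguchi S/N ratio for a smaller-the-better response (e.g. Ra, Rz)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# L9(3^4) design: each row gives the level (1-3) of the four factors
l9_levels = np.array([[1,1,1,1],[1,2,2,2],[1,3,3,3],
                      [2,1,2,3],[2,2,3,1],[2,3,1,2],
                      [3,1,3,2],[3,2,1,3],[3,3,2,1]])
ra = [[2.1, 2.3], [1.8, 1.9], [2.5, 2.4], [1.6, 1.7], [2.0, 2.2],
      [1.4, 1.5], [2.2, 2.1], [1.3, 1.4], [1.9, 2.0]]      # placeholder replicates
sn = np.array([sn_smaller_is_better(r) for r in ra])

# mean S/N per level of each factor; the level with the highest mean S/N is optimal
for f, name in enumerate(["spindle speed", "feed rate", "depth of cut", "workpiece length"]):
    means = [sn[l9_levels[:, f] == lvl].mean() for lvl in (1, 2, 3)]
    print(name, "best level:", int(np.argmax(means)) + 1)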

Keywords: Design of Experiment, Taguchi Design, Optimization, Analysis of Variance, Machining Parameters, Horizontal Boring Tool.

PDF Downloads: 2706
168 Optimization of Process Parameters of Pressure Die Casting using Taguchi Methodology

Authors: Satish Kumar, Arun Kumar Gupta, Pankaj Chandna

Abstract:

The present work analyses different parameters of pressure die casting to minimize casting defects. Pressure die casting is usually applied for casting aluminium alloys. Good surface finish with the required tolerances and dimensional accuracy can be achieved by optimization of controllable process parameters such as solidification time, molten metal temperature, filling time, injection pressure and plunger velocity. Moreover, by selection of optimum process parameters, pressure die casting defects such as porosity, insufficient spread of molten material, flash etc. are also minimized. Therefore, a pressure die cast component, a carburetor housing of aluminium alloy (Al2Si2O5), has been considered. The effects of the selected process parameters on casting defects, and the subsequent setting of the parameter levels, have been accomplished by Taguchi's parameter design approach. The experiments have been performed as per the combinations of levels of the different process parameters suggested by an L18 orthogonal array. Analyses of variance have been performed on the mean and the signal-to-noise ratio to estimate the percent contribution of the different process parameters. A confidence interval has also been estimated for a 95% confidence level, and three confirmation experiments have been performed to validate the optimum levels of the different parameters. Overall, a 2.352% reduction in defects has been observed with the suggested optimum process parameters.
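
The percent contribution reported from such an ANOVA is the factor sum of squares over the total sum of squares. A hedged sketch of that computation on an orthogonal-array result (simplified, without error pooling; inputs are hypothetical):

import numpy as np

def percent_contribution(levels, sn):
    """Percent contribution of each column (factor) of an orthogonal array
    to the total variation of the S/N ratio (simplified ANOVA, no error pooling).
    levels : array (runs, factors) of level indices;  sn : S/N ratio per run
    """
    sn = np.asarray(sn, dtype=float)
    ss_total = np.sum((sn - sn.mean())**2)
    contrib = []
    for f in range(levels.shape[1]):
        ss_f = sum(np.sum(levels[:, f] == lvl) * (sn[levels[:, f] == lvl].mean() - sn.mean())**2
                   for lvl in np.unique(levels[:, f]))
        contrib.append(100.0 * ss_f / ss_total)
    return contrib

# usage: percent_contribution(l18_levels, sn_ratios) returns one percentage per factor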

Keywords: Aluminium Casting, Pressure Die Casting, Taguchi Methodology, Design of Experiments

PDF Downloads: 7335
167 Modeling of a UAV Longitudinal Dynamics through System Identification Technique

Authors: Asadullah I. Qazi, Mansoor Ahsan, Zahir Ashraf, Uzair Ahmad

Abstract:

System identification of an Unmanned Aerial Vehicle (UAV), to acquire its mathematical model, is a significant step in the process of aircraft flight automation. The need for a reliable mathematical model is an established requirement for autopilot design, flight simulator development, aircraft performance appraisal, analysis of aircraft modifications, pre-flight testing of prototype aircraft, and investigation of fatigue life and stress distribution. This research is aimed at system identification of a fixed-wing UAV by means of a specifically designed flight experiment. Purposely designed flight maneuvers were performed on the UAV, and the aircraft states were recorded during these flights. The acquired data were preprocessed for noise filtering and bias removal, followed by parameter estimation of the longitudinal dynamics transfer functions using the MATLAB System Identification Toolbox. Black-box transfer function models, in response to elevator and throttle inputs, were estimated using the least square error technique. The identification results show a high confidence level and goodness of fit between the estimated model and the actual aircraft response.
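
The paper uses the MATLAB System Identification Toolbox; the underlying least-squares step can be illustrated with a simple discrete-time ARX fit (a minimal sketch with a hypothetical model order, not the paper's exact structure):

import numpy as np

def arx_least_squares(u, y, na=2, nb=2):
    """Fit y[k] = -a1*y[k-1]-...-a_na*y[k-na] + b1*u[k-1]+...+b_nb*u[k-nb]
    by ordinary least squares (black-box identification from input/output records).
    u, y : 1-D numpy arrays of equal length (input and output records)
    """
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        rows.append(np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]]))
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]        # denominator (a) and numerator (b) coefficients

# example use: a, b = arx_least_squares(elevator_record, pitch_rate_record)
# where both records are preprocessed (filtered, bias-removed) numpy arrays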

Keywords: Black box modeling, fixed wing aircraft, least square error, longitudinal dynamics, system identification.

PDF Downloads: 1138
166 Subjective Evaluation of Spectral and Time Domain Cascading Algorithm for Speech Enhancement for Mobile Communication

Authors: Harish Chander, Balwinder Singh, Ravinder Khanna

Abstract:

In this paper, we present a comparative subjective analysis of the Improved Minima Controlled Recursive Averaging (IMCRA) algorithm, the Kalman filter, and the cascade of the IMCRA and Kalman filter algorithms. The performance of speech enhancement algorithms can be assessed in two different ways. One is objective evaluation, in which the speech quality parameters are computed. The second is a subjective listening test, in which the processed speech signal is presented to listeners who judge the quality of speech on certain parameters. The comparative objective evaluation of these algorithms, in terms of global SNR, segmental SNR and Perceptual Evaluation of Speech Quality (PESQ), was previously analyzed by the authors, who reported that the cascaded algorithms give a substantial increase in the objective parameters. Since subjective evaluation is the real test of the quality of speech enhancement algorithms, the claimed superiority of the cascaded algorithms over the individual IMCRA and Kalman algorithms is tested through subjective analysis in this paper. The results of the subjective listening tests confirm that the cascaded algorithms perform better under all types of noise conditions.

Keywords: Speech enhancement, spectral domain, time domain, PESQ, subjective analysis, objective analysis.

PDF Downloads: 1231
165 Development of Electrospun Membranes with Defined Collagen and Polyethylene Oxide Architectures Reinforced with Medium and High Intensity Statins

Authors: S. Jaramillo, Y. Montoya, W. Agudelo, J. Bustamante

Abstract:

Cardiovascular diseases (CVD) are conditions affecting the heart and blood vessels; among them are pathologies such as coronary or peripheral heart disease, caused by narrowing of the vessel wall (atherosclerosis), which is related to the accumulation of Low-Density Lipoproteins (LDL) in the arterial walls and leads to a progressive reduction of the lumen of the vessel and alterations in blood perfusion. Currently, the main therapeutic strategy for this type of alteration is drug treatment with statins, which inhibit the enzyme 3-hydroxy-3-methyl-glutaryl-CoA reductase (HMG-CoA reductase), responsible for modulating the rate of production of cholesterol and other isoprenoids in the mevalonate pathway. Inhibition of this enzyme induces the expression of LDL receptors in the liver, increasing their number on the surface of liver cells and reducing the plasma concentration of cholesterol. On the other hand, when a blood vessel presents stenosis, a surgical procedure with vascular implants is indicated; these implants are used to restore circulation in the arterial or venous bed. Among the materials used for the development of vascular implants are Dacron® and Teflon®, which re-seal the circulatory circuit but, due to their low biocompatibility, do not have the ability to promote remodeling and tissue regeneration processes. Based on this, the present research proposes the development of a hydrolyzed collagen and polyethylene oxide electrospun membrane reinforced with medium- and high-intensity statins, so that future research can exploit its microarchitecture to favor tissue remodeling processes.

Keywords: Atherosclerosis, medium and high-intensity statins, microarchitecture, electrospun membrane.

PDF Downloads: 647
164 Rapid Determination of Biochemical Oxygen Demand

Authors: Mayur Milan Kale, Indu Mehrotra

Abstract:

Biochemical Oxygen Demand (BOD) is a measure of the oxygen used in the bacteria-mediated oxidation of organic substances in water and wastewater. Theoretically, an infinite time is required for complete biochemical oxidation of organic matter, but in practice the measurement is made over a 5-day test period at 20 °C, or a 3-day period at 27 °C, with or without dilution. Researchers have worked to further reduce the time of measurement. The objective of this paper is to review advances made in BOD measurement, primarily to minimize the measurement time and overcome the measurement difficulties. The literature on four such techniques, namely BOD-BART™, biosensors, the ferricyanide-mediated approach, and the luminous bacteria immobilized chip method, is surveyed. The basic principle, method of determination, data validation, and the advantages and disadvantages of each of the methods are presented. In the BOD-BART™ method, the time lag for the system to change from an oxidative to a reductive state is calculated. Biosensors are biological sensing elements coupled with a transducer which produces a signal proportional to the analyte concentration; each microbial species has its metabolic deficiencies, and co-immobilization of bacteria using a sol-gel biosensor increases the range of substrates. In the ferricyanide-mediated approach, ferricyanide is used as the electron acceptor instead of oxygen. In the luminous bacterial cells-immobilized chip method, the bacterial bioluminescence encoded by the lux genes is observed, and the physiological response, measured as a reduction or emission of light, is correlated to BOD. There is scope to further probe into the rapid estimation of BOD.

Keywords: BOD, Four methods, Rapid estimation

PDF Downloads: 3641
163 Peak Data Rate Enhancement Using Switched Micro-Macro Diversity in Cellular Multiple-Input-Multiple-Output Systems

Authors: Jihad S. Daba, J. P. Dubois, Yvette Antar

Abstract:

With the exponential growth of cellular users, a new generation of cellular networks is needed to enhance the required peak data rates. The co-channel interference between neighboring base stations inhibits peak data rate increase. To overcome this interference, multi-cell cooperation known as coordinated multipoint transmission is proposed. Such a solution makes use of multiple-input-multiple-output (MIMO) systems under two different structures: Micro- and macro-diversity. In this paper, we study the capacity and bit error rate in cellular networks using MIMO technology. We analyse both micro- and macro-diversity schemes and develop a hybrid model that switches between macro- and micro-diversity in the case of hard handoff based on a cut-off range of signal-to-noise ratio values. We conclude that our hybrid switched micro-macro MIMO system outperforms classical MIMO systems at the cost of increased hardware and software complexity.
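
A minimal Monte-Carlo sketch of the switching idea (illustrative assumptions only: i.i.d. Rayleigh fading, two toy antenna configurations standing in for micro- and macro-diversity, and a simple SNR cut-off; not the authors' cellular model):

import numpy as np

rng = np.random.default_rng(0)

def ergodic_capacity(nt, nr, snr_lin, trials=2000):
    """Ergodic MIMO capacity E[log2 det(I + (snr/nt) * H H^H)] over Rayleigh fading."""
    caps = []
    for _ in range(trials):
        h = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
        caps.append(np.log2(np.linalg.det(np.eye(nr) + (snr_lin / nt) * h @ h.conj().T).real))
    return float(np.mean(caps))

def switched_capacity(snr_db, cutoff_db=5.0):
    """Toy switching rule: co-located 4x4 micro-diversity above the SNR cut-off,
    distributed 2x2 macro-diversity below it."""
    snr_lin = 10 ** (snr_db / 10)
    return ergodic_capacity(4, 4, snr_lin) if snr_db >= cutoff_db else ergodic_capacity(2, 2, snr_lin)

print(switched_capacity(10.0), switched_capacity(0.0))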

Keywords: Cooperative multipoint transmission, ergodic capacity, hard handoff, macro-diversity, micro-diversity, multiple-input-multiple-output systems, MIMO, orthogonal frequency division multiplexing, OFDM.

PDF Downloads: 1091
162 Very Large Scale Integration Architecture of Finite Impulse Response Filter Implementation Using Retiming Technique

Authors: S. Jalaja, A. M. Vijaya Prakash

Abstract:

A recursive combination of an algorithm based on Karatsuba multiplication is exploited to design a generalized transpose and parallel Finite Impulse Response (FIR) filter. Mid-range Karatsuba multiplication and a carry save adder based on Karatsuba multiplication reduce the time complexity of higher-order multiplication, implemented up to n bits. As a result, we design a modified N-tap transpose and parallel symmetric FIR filter structure using the Karatsuba algorithm. The mathematical formulation of the FFA filter is derived. The proposed architecture involves a significantly smaller area-delay product (ADP) than the existing block implementation. By adopting the retiming technique, the hardware cost is reduced further. The filter architecture is designed using a 90 nm technology library and is implemented using the Cadence EDA tool. The synthesis results show better performance for different word lengths and block sizes. The design achieves switching activity reduction and low power consumption, evaluated with and without retiming for different combinations of the circuit. The proposed structure achieves more than half the power reduction, with and without retiming, compared to the earlier design structure. As a proof of concept, for a block size of 16 and a filter length of 64, the CKA method achieves 51% and 70% less power with the retiming technique, and the CSA method achieves 57% and 77% less power with the retiming technique, compared to the previously proposed design.
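
The recursive Karatsuba step that underlies the multiplier (three half-size products instead of four) can be summarised in a few lines of software, purely as an algorithmic reference for the hardware scheme described above:

def karatsuba(x, y, bits=32):
    """Multiply two unsigned integers with the Karatsuba recursion."""
    if bits <= 8:                                      # base case: direct multiplication
        return x * y
    half = bits // 2
    mask = (1 << half) - 1
    xh, xl = x >> half, x & mask
    yh, yl = y >> half, y & mask
    p_hi = karatsuba(xh, yh, bits - half)              # high halves
    p_lo = karatsuba(xl, yl, half)                     # low halves
    p_mid = karatsuba(xh + xl, yh + yl, bits - half + 1)   # (xh+xl)*(yh+yl)
    return (p_hi << (2 * half)) + ((p_mid - p_hi - p_lo) << half) + p_lo

assert karatsuba(0xDEAD, 0xBEEF, 16) == 0xDEAD * 0xBEEF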

Keywords: Carry save adder Karatsuba multiplication, mid-range Karatsuba multiplication, modified FFA, transposed filter, retiming.

PDF Downloads: 910
161 Improved Rake Receiver Based On the Signal Sign Separation in Maximal Ratio Combining Technique for Ultra-Wideband Wireless Communication Systems

Authors: Rashid A. Fayadh, F. Malek, Hilal A. Fadhil, Norshafinash Saudin

Abstract:

When receiving high data rates in ultra-wideband (UWB) technology for many users, multiple-user interference and inter-symbol interference are obstacles in multi-path reception, since rake receivers are designed to collect many resolvable paths, possibly more than a hundred. Rake receiver implementation structures have been proposed with increasing complexity in order to obtain better performance in indoor or outdoor multi-path receivers by reducing the bit error rate (BER), and several rake structures have been proposed in the past to reduce the number of resolvable paths that must be combined and estimated. To this aim, we suggest two improved rake receivers based on signal sign separation in the maximal ratio combiner (MRC), called the positive-negative MRC selective rake (P-N/MRC-S-rake) and the positive-negative MRC partial rake (P-N/MRC-P-rake) receivers. These receivers were introduced to reduce the complexity, with fewer fingers, while improving the performance with a lower BER. Before the decision circuit, a comparator compares the positive quantity with the negative quantity to decide whether the transmitted bit is 1 or 0. The BER was obtained by MATLAB simulation in multi-path environments for impulse radio time-hopping binary phase shift keying (TH-BPSK) modulation, and the results were compared with those of conventional rake receivers.
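
A schematic reading of the sign-separation idea for one BPSK bit (a hedged sketch, not the authors' exact receiver): weight the selected finger outputs as in MRC, accumulate the positive and negative weighted quantities separately, and let the comparator decide the bit.

import numpy as np

def pn_mrc_decision(finger_outputs, channel_gains):
    """Sign-separated maximal-ratio combining decision for one BPSK bit.
    finger_outputs : real despread outputs of the selected rake fingers
    channel_gains  : estimated path gains used as MRC weights
    """
    weighted = np.asarray(channel_gains) * np.asarray(finger_outputs)   # MRC weighting
    positive = weighted[weighted > 0].sum()                              # positive quantity
    negative = -weighted[weighted < 0].sum()                             # negative quantity (magnitude)
    return 1 if positive >= negative else 0                              # comparator decision

# example: five fingers, mostly agreeing that the transmitted bit was '1'
print(pn_mrc_decision([0.9, 0.4, -0.2, 0.7, 0.1], [1.0, 0.6, 0.3, 0.8, 0.2]))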

Keywords: Selective and partial rake receivers, positive and negative signal separation, maximal ratio combiner, bit error rate performance.

PDF Downloads: 1901
160 Spread Spectrum Code Estimation by Particle Swarm Algorithm

Authors: Vahid R. Asghari, Mehrdad Ardebilipour

Abstract:

In the context of spectrum surveillance, a new method to recover the code of a spread spectrum signal is presented, where the receiver has no knowledge of the transmitter's spreading sequence. In our previous paper, we used a genetic algorithm (GA) to recover the spreading code. Although genetic algorithms (GAs) are well known for their robustness in solving complex optimization problems, increasing the length of the code often leads to an unacceptably slow convergence speed. To solve this problem, we introduce Particle Swarm Optimization (PSO) into code estimation in spread spectrum communication systems. In the search process for code estimation, the PSO algorithm has the merits of rapid convergence to the global optimum, without being trapped in local suboptima, and good robustness to noise. In this paper, we describe how to implement PSO as a component of a search algorithm for code estimation. Swarm intelligence offers a number of advantages due to the use of mobile agents, among them scalability, fault tolerance, adaptation, speed, modularity, autonomy, and parallelism. These properties make swarm intelligence very attractive for spread spectrum code estimation, and also make it suitable for a variety of other kinds of channels. Our results compare swarm-based algorithms with genetic algorithms, and also show the performance of the PSO algorithm in the code estimation process.
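
A compact PSO loop of the general kind described (the fitness function below is a toy correlation with a noisy observation, purely to show the mechanics; the paper's fitness is defined on the received spread spectrum signal):

import numpy as np

rng = np.random.default_rng(1)

def pso_code_estimate(fitness, code_len, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Generic PSO: real-valued positions are hardened to a +/-1 chip sequence
    and scored by the supplied fitness function (higher is better)."""
    x = rng.uniform(-1, 1, (n_particles, code_len))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([fitness(np.sign(p)) for p in x])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = np.clip(x + v, -1, 1)                                    # position update
        vals = np.array([fitness(np.sign(p)) for p in x])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return np.sign(gbest)

# toy fitness: correlation with a noisy observation of the unknown spreading code
true_code = np.sign(rng.uniform(-1, 1, 31))
observed = true_code + 0.8 * rng.normal(size=31)
estimate = pso_code_estimate(lambda c: float(np.dot(c, observed)), 31)
print("chips recovered:", int(np.sum(estimate == true_code)), "/ 31")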

Keywords: Code estimation, Particle Swarm Optimization (PSO), spread spectrum.

PDF Downloads: 2136
159 Complex Condition Monitoring System of Aircraft Gas Turbine Engine

Authors: A. M. Pashayev, D. D. Askerov, C. Ardil, R. A. Sadiqov, P. S. Abdullayev

Abstract:

Research shows that the application of probability-statistical methods, especially at the early stages of aviation Gas Turbine Engine (GTE) technical condition diagnosing, when the flight information is fuzzy, limited and uncertain, is unfounded. Hence, the efficiency of applying Soft Computing technology at these diagnosing stages, using Fuzzy Logic and Neural Network methods, is considered. For this purpose, fuzzy multiple linear and non-linear models (fuzzy regression equations), obtained from statistical fuzzy data, are trained with high accuracy. To make the GTE technical condition model more adequate, the dynamics of changes in the skewness and kurtosis coefficients are analysed. Studies of the changes in the skewness and kurtosis coefficient values show that the distributions of the GTE work and output parameters of the multiple linear and non-linear generalised models are identified in the presence of measurement noise using a new recursive Least Squares Method (LSM). The developed GTE condition monitoring system provides stage-by-stage estimation of the engine's technical condition. As an application of the given technique, the technical condition of a new operating aviation engine was estimated.

Keywords: aviation gas turbine engine, technical condition, fuzzy logic, neural networks, fuzzy statistics

PDF Downloads: 2545
158 Sonochemically Prepared SnO2 Quantum Dots as a Selective and Low Temperature CO Sensor

Authors: S. Mosadegh Sedghi, Y. Mortazavi, A. Khodadadi, O. Alizadeh Sahraei, M. Vesali Naseh

Abstract:

In this study, a low-temperature sensor highly selective to CO in the presence of methane is fabricated using 4 nm SnO2 quantum dots (QDs) prepared by sonication-assisted precipitation. An SnCl4 aqueous solution was precipitated with ammonia under sonication, which continued for 2 h. Part of the sample was then dried and calcined at 400 °C for 1.5 h and characterized by XRD and BET. The average particle size and specific surface area of the SnO2 QDs, as well as their sensing properties, were compared with those of SnO2 nanoparticles prepared by the conventional sol-gel method. The BET surface areas of the sonochemically as-prepared product and of the sample calcined at 400 °C for 1.5 h are 257 m²/g and 212 m²/g respectively, while the specific surface area of the SnO2 nanoparticles prepared by the conventional sol-gel method is about 80 m²/g. XRD spectra revealed that a pure crystalline SnO2 phase is formed for both the as-prepared and the calcined SnO2 QD samples. However, for the sample prepared by the sol-gel method and calcined at 400 °C, SnO crystals are detected along with those of SnO2. The SnO2 quantum dots show exceedingly high sensitivity to CO at concentrations of 100, 300 and 1000 ppm over the whole temperature range (25-350 °C). At 50 °C a sensitivity of 27 was obtained for 1000 ppm CO, which increases to a maximum of 147 when the temperature rises to 225 °C and then drops off, while the maximum sensitivity of the SnO2 sample prepared by the sol-gel method was 47.2, obtained at 300 °C. At the same time, no sensitivity to methane is observed over the whole temperature range for the SnO2 QDs. The response and recovery times of the sensor decrease sharply with temperature, while the high selectivity to CO does not deteriorate.

Keywords: Sonochemical, SnO2 QDs, SnO2 gas sensor

PDF Downloads: 2248
157 3D Star Skeleton for Fast Human Posture Representation

Authors: Sungkuk Chun, Kwangjin Hong, Keechul Jung

Abstract:

In this paper, we propose an improved 3D star skeleton technique, a skeletonization suitable for human posture representation that reflects the 3D information of the posture. Moreover, the proposed technique is simple and can therefore be performed in real time. Existing skeleton construction techniques, such as distance transformation, Voronoi diagrams, and thinning, focus on the precision of the skeleton information. Therefore, those techniques are not applicable to real-time posture recognition, since they are computationally expensive and highly susceptible to boundary noise. Although a 2D star skeleton was proposed to overcome these problems, it also has limitations in describing the 3D information of the posture. To represent human posture effectively, the constructed skeleton should take the 3D information of the posture into account. The proposed 3D star skeleton contains 3D data of the human body and focuses on human action and posture recognition. Our 3D star skeleton uses 8 projection maps which contain 2D silhouette information and depth data of the human surface, and the extremal points can be extracted as the features of the 3D star skeleton without searching the whole boundary of the object. Therefore, in execution time, our 3D star skeleton is faster than the "greedy" 3D star skeleton that uses all boundary points on the surface. Moreover, our method can offer a more accurate posture skeleton than the existing star skeleton, since the 3D data of the object are taken into account. Additionally, we build a codebook, a collection of representative 3D star skeletons for 7 postures, to recognize which posture a constructed skeleton represents.
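
The basic star-skeleton construction, which the paper extends with 8 projection maps and depth data, takes the extremal points of a silhouette as local maxima of the boundary distance from the centroid. A minimal 2-D sketch of that step (illustrative only):

import numpy as np

def star_skeleton_extremes(boundary, smooth=5):
    """Extremal points of a silhouette: local maxima of the distance from the
    centroid along the (ordered, closed) boundary.
    boundary : (N, 2) array of boundary points in traversal order
    """
    boundary = np.asarray(boundary, dtype=float)
    centroid = boundary.mean(axis=0)
    dist = np.linalg.norm(boundary - centroid, axis=1)
    # smooth the distance signal with circular padding to suppress boundary noise
    kernel = np.ones(smooth) / smooth
    padded = np.r_[dist[-smooth:], dist, dist[:smooth]]
    dist_s = np.convolve(padded, kernel, mode="same")[smooth:-smooth]
    prev_, next_ = np.roll(dist_s, 1), np.roll(dist_s, -1)
    peaks = (dist_s > prev_) & (dist_s >= next_)        # circular local maxima
    return boundary[peaks], centroid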

Keywords: computer vision, gesture recognition, skeletonization, human posture representation.

PDF Downloads: 2122
156 Design and Optimization for a Compliant Gripper with Force Regulation Mechanism

Authors: Nhat Linh Ho, Thanh-Phong Dao, Shyh-Chour Huang, Hieu Giang Le

Abstract:

This paper presents the design and optimization of a compliant gripper. The gripper is constructed based on the concept of a compliant mechanism with flexure hinges. A passive force regulation mechanism is presented to control the grasping force on a micro-sized object instead of using a force sensor. The force regulation mechanism is designed using planar springs. The gripper is expected to achieve a large range of displacement in order to handle objects of various sizes. First, the statics and dynamics of the gripper are investigated using finite element analysis in the ANSYS software. The design parameters of the gripper are then optimized via the Taguchi method. An L9 orthogonal array is used to establish the experimental matrix. Subsequently, the signal-to-noise ratio is analyzed to find the optimal solution. Finally, the response surface methodology is employed to model the relationship between the design parameters and the output displacement of the gripper. The design of experiment method is then used for a sensitivity analysis to determine the effect of each parameter on the displacement. The results show that the compliant gripper can move with a large displacement of 213.51 mm and that the force regulation mechanism is suitable for high-precision positioning systems.

Keywords: Flexure hinge, compliant mechanism, compliant gripper, force regulation mechanism, Taguchi method, response surface methodology, design of experiment.

PDF Downloads: 1613
155 Temporal Signal Processing by Bayesian Inference Approach for Detection of Abrupt Variation of Statistical Characteristics of Noisy Signals

Authors: Farhad Asadi, Hossein Sadati

Abstract:

In fields such as neuroscience, and especially in the cognitive modeling of mental processes, handling uncertainty in the temporal structure of a signal is vital. In this paper, Bayesian online inference for estimating the location of change points in a signal is constructed. The method separates the observed signal into independent series and studies the change and variation of the data regime locally, through the related statistical characteristics. We give conditions on simulations of the method when the statistical characteristics of the signals vary, and provide empirical evidence of the performance of the method. It is verified that the correlation between the series around the change-point location, and characteristics such as the signal-to-noise ratio and the mean value of the signal, are important factors affecting how accurately the change-point location is found. One of the main contributions of this study is the characterization of these influences of the signal's statistical characteristics on finding abrupt variations in the signal. Two different structures are used for the simulations: in the first case, one abrupt change in a temporal section of the signal is considered, with variable position, and in the second case, multiple variations are considered. Finally, the influence of the statistical characteristics on the location of the change point is explained in detail in the simulation results with different artificial signals.
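
A minimal sketch of the underlying idea for a single change in the mean of a Gaussian series (known noise variance and a flat prior assumed for brevity; the paper's online formulation handles richer statistics and multiple changes):

import numpy as np

def changepoint_posterior(x, sigma=1.0):
    """Pseudo-posterior over the location of a single change in mean,
    using segment sample means (profile likelihood) and a flat prior."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    loglik = np.full(n, -np.inf)
    for tau in range(2, n - 1):                          # keep both segments non-trivial
        left, right = x[:tau], x[tau:]
        rss = np.sum((left - left.mean())**2) + np.sum((right - right.mean())**2)
        loglik[tau] = -0.5 * rss / sigma**2
    w = np.exp(loglik - np.max(loglik[2:n - 1]))
    return w / w.sum()

# example: mean shifts from 0 to 1.5 at t = 60; the SNR of the shift controls how
# sharply the posterior peaks around the true location, as discussed in the abstract
rng = np.random.default_rng(2)
series = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(1.5, 1.0, 40)])
posterior = changepoint_posterior(series)
print("MAP change-point location:", int(np.argmax(posterior)))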

Keywords: Time series, fluctuation in statistical characteristics, optimal learning.

PDF Downloads: 564
154 Experimental Study on the Influence of Tool Materials on the Drilling of Thick Stacked Plates of 2219 Aluminum Alloy

Authors: G. H. Li, M. Liu, H. J. Qi, Q. Zhu, W. Z. He

Abstract:

Drilling and riveting processes are widely used in the assembly of carrier rockets, which makes the efficiency and quality of drilling an important factor affecting the assembly process. In view of the problems existing in the drilling of thick stacked plates (thickness larger than 10 mm) for carrier rockets, such as drill breakage, loud noise and burrs, an experimental study of the influence of the tool material on drilling was carried out. The cutting force was measured with a piezoelectric dynamometer, the hole diameter was measured with a profile projector, and the burrs were observed and measured with a digital stereo microscope. From these measurements, the effects of the tool material on drilling were analyzed in terms of drilling force, hole diameter, and burr formation. The results show that, compared with a carbide drill and a coated carbide drill, the drilling force of a high speed steel drill is larger. However, the use of high speed steel also has some advantages: a higher number of holes can be obtained, the burr height is small, the hole exit is smooth with fewer slim burrs, and the tool wears but does not fracture. Therefore, the high speed steel tool is suitable for the drilling of thick stacked plates of 2219 aluminum alloy.

Keywords: 2219 aluminum alloy, thick stacked plate, drilling, tool material.

PDF Downloads: 1283
153 Laser Ultrasonic Imaging Based on Synthetic Aperture Focusing Technique Algorithm

Authors: Sundara Subramanian Karuppasamy, Che Hua Yang

Abstract:

In this work, the laser ultrasound technique has been used for analyzing and imaging inner defects in metal blocks. To detect defects in blocks, researchers have traditionally used piezoelectric transducers for the generation and reception of ultrasonic signals. These transducers can be configured into sparse and phased arrays, but both configurations have drawbacks, including the requirement of many transducers, time-consuming calculations, limited bandwidth, and confined image resolution. Here, we focus on a non-contact method for generating and receiving the ultrasound to examine inner defects in aluminum blocks. A Q-switched pulsed laser is used for the generation, and the reception is done using a Laser Doppler Vibrometer (LDV). Based on the Doppler effect, the LDV provides a rapid, high spatial resolution way of sensing ultrasonic waves. From the LDV, a series of scanning points is selected which serve as the phased array elements. A side-drilled hole of 10 mm diameter with a depth of 25 mm is introduced, and the defect is interrogated by the linear array of scanning points obtained from the LDV. With the aid of the Synthetic Aperture Focusing Technique (SAFT) algorithm, based on the time-shifting principle, the inspection images are generated from the A-scan data acquired by the 1-D linear phased array elements. Thus the defect can be precisely detected with good resolution.
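
The time-shifting principle of SAFT amounts to a delay-and-sum over the scan positions. A minimal 2-D sketch under simplifying assumptions (uniform sound speed, a linear scan line, nearest-sample interpolation):

import numpy as np

def saft_image(ascans, scan_x, t, c, x_grid, z_grid):
    """Delay-and-sum SAFT reconstruction.
    ascans : array (n_positions, n_samples) of A-scan amplitudes
    scan_x : x coordinate of each scan position on the surface [m]
    t      : sample time axis [s];  c : sound speed [m/s]
    x_grid, z_grid : lateral and depth coordinates of the image pixels [m]
    """
    dt = t[1] - t[0]
    img = np.zeros((len(z_grid), len(x_grid)))
    for i, xp in enumerate(x_grid):
        for j, zp in enumerate(z_grid):
            # two-way travel time from each scan position to the image point and back
            tof = 2.0 * np.sqrt((scan_x - xp)**2 + zp**2) / c
            idx = np.round((tof - t[0]) / dt).astype(int)
            valid = (idx >= 0) & (idx < ascans.shape[1])
            img[j, i] = np.abs(np.sum(ascans[np.nonzero(valid)[0], idx[valid]]))
    return img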

Keywords: Laser ultrasonics, linear phased array, nondestructive testing, synthetic aperture focusing technique, ultrasonic imaging.

PDF Downloads: 952
152 Developing a New Vibration Analysis Calculative Method for Esfahan Subway Train and Railways Design, Manufacturing, and Construction

Authors: Omid A. Zargar

Abstract:

The simulated mass-and-spring evaluation method for subway and railway construction and installation systems has wide application in the rail industry. This kind of design should optimize all related parameters to reduce the amount of vibration in cities, residential areas, historical zones and other critical locations. The finite element method can analyze such applications with excellent accuracy, but a simple, fast and user-friendly evaluation method is always needed in industrial subway applications. In addition, process parameter optimization is essential in the railway industry in order to achieve an optimal railway design with maximum safety, reliability and performance, and to reduce vibration and the related maintenance costs as far as possible. In this paper, a simple but useful simulated mass-and-spring evaluation system is developed for the Esfahan subway construction. In addition, some related recent patents and innovations in the worldwide rail industry, such as the suspension-mass tuned vibration reducer, the short-sleeper vibration attenuation fastener and the airtight track vibration-noise reducing fastener, are discussed in detail.
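
For the simulated mass-and-spring evaluation, the basic check is that the track system's natural frequency stays well away from the train's operating (excitation) frequency. A small sketch with purely illustrative numbers (not Esfahan design values):

import numpy as np

def natural_frequency_hz(k, m):
    """Undamped natural frequency f_n = (1 / 2*pi) * sqrt(k / m) of a mass-spring element."""
    return np.sqrt(k / m) / (2.0 * np.pi)

def operating_frequency_hz(speed_kmh, sleeper_spacing_m):
    """Excitation frequency from wheel passage over evenly spaced sleepers."""
    return (speed_kmh / 3.6) / sleeper_spacing_m

# illustrative values: 600 kg track mass on a 2.0e7 N/m resilient (e.g. polyurethane)
# layer, train at 80 km/h, sleepers every 0.6 m
f_n = natural_frequency_hz(2.0e7, 600.0)
f_op = operating_frequency_hz(80.0, 0.6)
print(f"natural {f_n:.1f} Hz vs operating {f_op:.1f} Hz, ratio {f_op / f_n:.2f}")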

Keywords: Subway construction engineering, natural frequency, operation frequency, vibration analysis, polyurethane layer.

PDF Downloads: 2359
151 Spatial-Temporal Clustering Characteristics of Dengue in the Northern Region of Sri Lanka, 2010-2013

Authors: Sumiko Anno, Keiji Imaoka, Takeo Tadono, Tamotsu Igarashi, Subramaniam Sivaganesh, Selvam Kannathasan, Vaithehi Kumaran, Sinnathamby Noble Surendran

Abstract:

Dengue outbreaks are affected by biological, ecological, socio-economic and demographic factors that vary over time and space. These factors have been examined separately and still require systematic clarification. The present study aimed to investigate the spatial-temporal clustering relationships between these factors and dengue outbreaks in the northern region of Sri Lanka. Remote sensing (RS) data gathered from several satellites were used to develop an index comprising rainfall, humidity and temperature data. RS data gathered by ALOS/AVNIR-2 were used to detect urbanization, and a digital land cover map was used to extract land cover information. Other data on relevant factors and dengue outbreaks were collected from institutions and existing databases. The analyzed RS data and databases were integrated into geographic information systems, enabling temporal analysis, spatial statistical analysis and space-time clustering analysis. Our results show that increases in the number of combined ecological, socio-economic and demographic factors that are above average, or present, contribute to significantly high rates of space-time dengue clusters.

Keywords: ALOS/AVNIR-2, Dengue, Space-time clustering analysis, Sri Lanka.

PDF Downloads: 2284
150 Accuracy of Autonomy Navigation of Unmanned Aircraft Systems through Imagery

Authors: Sidney A. Lima, Hermann J. H. Kux, Elcio H. Shiguemori

Abstract:

Unmanned Aircraft Systems (UAS) usually navigate through the Global Navigation Satellite System (GNSS) associated with an Inertial Navigation System (INS). However, GNSS can have its accuracy degraded at any time, or the GNSS signal may be lost altogether. In addition, there is the possibility of malicious interference, known as jamming. An image navigation system can therefore solve the autonomy problem, because if the GNSS is disabled or degraded, the image navigation system continues to provide coordinate information to the INS, preserving the autonomy of the system. This work aims to evaluate the accuracy of positioning through photogrammetry concepts. The methodology uses orthophotos and Digital Surface Models (DSM) as a reference to represent the object space, and photographs obtained during the flight to represent the image space. For the calculation of the coordinates of the perspective center and the camera attitudes, it is necessary to know the coordinates of homologous points in the object space (orthophoto coordinates and DSM altitude) and in the image space (column and line of the photograph). If the homologous points can be identified automatically and in real time, the coordinates and attitudes can be calculated with their respective accuracies. With the methodology applied in this work, maximum errors in the order of 0.5 m in positioning and 0.6° in camera attitude are observed, so navigation through imagery can reach values equal to or better than those of GNSS receivers without differential correction. Therefore, navigating through imagery is a good alternative for enabling autonomous navigation.

Keywords: Autonomy, navigation, security, photogrammetry, remote sensing, spatial resection, UAS.

PDF Downloads: 1321
149 Using Field Indices of Rill and Gully for Erosion Estimation and Sediment Analysis (Case Study: Menderjan Watershed in Isfahan Province, Iran)

Authors: Masoud Nasri, Sadat Feiznia, Mohammad Jafari, Hasan Ahmadi

Abstract:

Today, incorrect land use and land use changes, excessive grazing, unsuitable use of agricultural land, plowing on steep slopes, road construction, building construction, mine excavation, etc. have caused increases in soil erosion and sediment yield. For erosion and sediment estimation, one can use statistical and empirical methods, which require a land unit map and maps of the effective factors. However, these empirical methods are usually time consuming and do not give accurate estimates of erosion. In this study, we applied GIS techniques to estimate erosion and sediment in the Menderjan watershed, upstream of the Zayandehrud river in central Iran. Erosion facies in each land unit were defined on the basis of the land use, geology and land unit maps using GIS. The UTM coordinates of each erosion indicator showing larger erosion amounts, such as rills and gullies, were recorded with GPS and entered into the GIS. The frequency of erosion indicators in each land unit and land use, and their sediment yields, were calculated. In addition, using trend analysis of sediment yield changes at the watershed outlet (the Menderjan hydrometric gauge station), the related parameters and estimation errors were calculated. The results of this study, in view of the implemented watershed management projects, can be used for more rapid and more accurate estimation of erosion than traditional methods. These results can also be used for regional erosion assessment and remote sensing image processing.

Keywords: Erosion and sedimentation, Gully, Rill, GIS, GPS, Menderjan Watershed

PDF Downloads: 1908
148 Solar Radiation Time Series Prediction

Authors: Cameron Hamilton, Walter Potter, Gerrit Hoogenboom, Ronald McClendon, Will Hobbs

Abstract:

A model was constructed to predict the amount of solar radiation that will reach the surface of the earth at a given location one hour into the future. This project was supported by the Southern Company to determine at what specific times during a given day of the year solar panels can be relied upon to produce energy in sufficient quantities. Due to their ability to act as universal function approximators, artificial neural networks were used to estimate the nonlinear pattern of solar radiation, using measurements of weather conditions collected at the Griffin, Georgia weather station as inputs. A number of network configurations and training strategies were evaluated, though a multilayer perceptron with a variety of hidden nodes trained with the resilient propagation algorithm consistently yielded the most accurate predictions. In addition, a modeled direct normal irradiance field and adjacent weather station data were used to bolster prediction accuracy. In later trials, the solar radiation data were preprocessed with a discrete wavelet transform with the aim of removing noise from the measurements. The current model provides predictions of solar radiation with a mean square error of 0.0042, though ongoing efforts are being made to further improve the model's accuracy.
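
A compact analogue of the described setup using scikit-learn (a hedged sketch on synthetic stand-in data with hypothetical feature names; the paper uses Griffin, Georgia weather-station measurements and resilient propagation, which scikit-learn does not provide, so its default Adam optimizer is used here instead):

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def make_dataset(weather, radiation):
    """Pair each hour's weather measurements with the NEXT hour's solar radiation."""
    X = weather[:-1]              # e.g. columns: temperature, humidity, cloud cover, hour of day
    y = radiation[1:]             # target: radiation one hour ahead
    return X, y

# toy stand-in data; in practice these come from the weather-station records
rng = np.random.default_rng(3)
weather = rng.random((2000, 4))
radiation = weather[:, 0] * 800 + rng.normal(0, 20, 2000)     # synthetic relationship

X, y = make_dataset(weather, radiation)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0))
model.fit(X[:1500], y[:1500])
mse = np.mean((model.predict(X[1500:]) - y[1500:])**2)
print("hold-out mean square error:", round(float(mse), 4))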

Keywords: Artificial Neural Networks, Resilient Propagation, Solar Radiation, Time Series Forecasting.

PDF Downloads: 2763