Search results for: error compensation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1373


263 Adaptive Control Strategy of Robot Polishing Force Based on Position Impedance

Authors: Wang Zhan-Xi, Zhang Yi-Ming, Chen Hang, Wang Gang

Abstract:

Manual polishing suffers from high labor intensity, low production efficiency and difficulty in guaranteeing consistent polishing quality. Replacing manual polishing with robot polishing can effectively avoid these problems. Polishing force directly affects polishing quality, so accurate tracking and control of the polishing force is one of the most important conditions for improving the accuracy of robot polishing. Traditional force control strategies struggle to cope with the strong coupling between force control and position control during robot polishing. Therefore, based on an analysis of force-based and position-based impedance control, this paper proposes an adaptive controller. Built on force-feedback active compliance control, the controller adaptively estimates the stiffness and position of the external environment and eliminates the steady-state force error produced by traditional impedance control. Simulation results show that the adaptive controller adapts well to changing environmental position and stiffness, and can accurately track and control the polishing force.
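
As a rough, non-authoritative illustration of the position-based impedance idea described above, the sketch below filters the force error through a second-order target impedance and adds a simple on-line stiffness estimate; all plant values, gains and the adaptation rule are illustrative assumptions, not the authors' controller.

```python
# Rough 1-D sketch of position-based impedance control with an on-line
# estimate of the environment stiffness (illustrative parameters only).
M, B, K = 1.0, 60.0, 300.0        # target impedance (assumed values)
k_env, x_env = 4e4, 0.010         # true (unknown to the controller) environment
f_d, dt = 20.0, 1e-3              # desired contact force [N], sample time [s]

k_hat = 1e4                       # initial stiffness estimate
x_ref = 0.012                     # initial reference position (inside the surface)
e = e_dot = 0.0
x_prev = f_prev = None

for _ in range(8000):
    x = x_ref + e                                  # commanded tool position
    f_e = k_env * max(x - x_env, 0.0)              # measured contact force
    # stiffness estimate from measured force/position increments while in contact
    if f_e > 0.0 and x_prev is not None and abs(x - x_prev) > 1e-7:
        k_hat += 0.1 * ((f_e - f_prev) / (x - x_prev) - k_hat)
    x_prev, f_prev = x, f_e
    # shift the reference using k_hat so the force error vanishes in steady state
    x_ref += 5e-4 * (f_d - f_e) / max(k_hat, 1e3)
    # target impedance dynamics: M*e_dd + B*e_d + K*e = f_d - f_e
    e_ddot = (f_d - f_e - B * e_dot - K * e) / M
    e_dot += e_ddot * dt
    e += e_dot * dt

print(f"final contact force ~ {f_e:.2f} N (target {f_d} N)")
```

The integral-like shift of the reference position is what removes the steady-state force error that a fixed-impedance controller would leave behind.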

Keywords: robot polishing, force feedback, impedance control, adaptive control

262 Auto Tuning PID Controller based on Improved Genetic Algorithm for Reverse Osmosis Plant

Authors: Jin-Sung Kim, Jin-Hwan Kim, Ji-Mo Park, Sung-Man Park, Won-Yong Choe, Hoon Heo

Abstract:

Optimal control of a Reverse Osmosis (RO) plant is studied in this paper by combining the auto-tuning concept with a PID controller. A control scheme built around an auto-tuning stochastic technique based on an improved Genetic Algorithm (GA) is proposed. For better evaluation of the process within the GA, a newly defined objective function based on the root mean square error is used. To further improve GA performance, greater purity and a longer period of the random numbers used in the genetic operations are sought. The main improvement replaces the uniform-distribution random number generator of the conventional GA with a newly designed hybrid generator composed of a Cauchy distribution and a linear congruential generator, which provides independent and distinct random numbers at each step of the genetic operations. The performance of the proposed GA-tuned controller is compared with that of conventional controllers via simulation.
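
A minimal sketch of such a hybrid generator is given below: a linear congruential generator produces uniforms, which are mapped through the inverse Cauchy CDF. The LCG constants are ordinary textbook (Numerical Recipes) values and the mutation usage is hypothetical; neither is taken from the paper.

```python
import math

# Sketch of a hybrid random generator: a linear congruential generator (LCG)
# produces uniforms, which are mapped through the inverse Cauchy CDF.
class HybridCauchyLCG:
    def __init__(self, seed=12345, x0=0.0, gamma=1.0):
        self.state = seed
        self.x0, self.gamma = x0, gamma          # Cauchy location and scale
        self.a, self.c, self.m = 1664525, 1013904223, 2**32  # textbook LCG constants

    def uniform(self):
        self.state = (self.a * self.state + self.c) % self.m
        return self.state / self.m               # uniform in [0, 1)

    def cauchy(self):
        u = self.uniform()
        # inverse CDF of the Cauchy distribution
        return self.x0 + self.gamma * math.tan(math.pi * (u - 0.5))

# Example: a Cauchy-distributed mutation perturbation for a GA-tuned PID gain
rng = HybridCauchyLCG(seed=2007)
kp = 1.5
mutated_kp = kp + 0.05 * rng.cauchy()
print(round(mutated_kp, 4))
```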

Keywords: Genetic Algorithm, Auto tuning, Hybrid random number generator, Reverse Osmosis, PID controller

261 A Grid Synchronization Method Based on Adaptive Notch Filter for SPV System with Modified MPPT

Authors: Priyanka Chaudhary, M. Rizwan

Abstract:

This paper presents a grid synchronization technique based on an adaptive notch filter for an SPV (Solar Photovoltaic) system, together with an MPPT (Maximum Power Point Tracking) technique. An efficient grid synchronization technique offers reliable detection of the components of the grid signal, such as phase and frequency, and also acts as a barrier against harmonics and other disturbances in the grid signal. It provides a reference phase signal synchronized with the grid voltage so that the system complies with grid codes and power quality standards; hence, the grid synchronization unit plays an important role in grid-connected SPV systems. Since the output of the PV array fluctuates with meteorological parameters such as irradiance, temperature and wind, MPPT control is required to track the maximum power point of the PV array and maintain a constant DC voltage at the VSC (Voltage Source Converter) input. In this work, a variable step-size P&O (Perturb and Observe) MPPT technique with a DC/DC boost converter is used in the first stage of the system. The algorithm divides the dPpv/dVpv curve of the PV panel into three separate zones, i.e. zone 0, zone 1 and zone 2: a fine tracking step size is used in zone 0, while zones 1 and 2 require a large step size in order to obtain high tracking speed. Further, an adaptive notch filter (ANF) based control technique is proposed for the VSC in the PV generation system. The ANF approach synchronizes the interfaced PV system with the grid to maintain the amplitude, phase and frequency parameters as well as to improve power quality, and offers compensation of harmonic currents and reactive power with both linear and nonlinear loads. A PI controller is also implemented and presented in this paper to maintain a constant DC link voltage. The complete system has been designed, developed and simulated using the SimPowerSystems and Simulink toolboxes of MATLAB. The performance of the three-phase grid-connected solar photovoltaic system has been analyzed in terms of parameters such as PV output power, PV voltage, PV current, DC link voltage, PCC (Point of Common Coupling) voltage, grid voltage, grid current, voltage source converter current and the power supplied by the voltage source converter. The results obtained from the proposed system are found satisfactory.
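
As a hedged sketch of the zone-based step selection (the zone boundaries, step sizes and toy I-V curve below are assumptions, not the paper's values), a variable step-size P&O update might look like this:

```python
# Sketch of a variable step-size Perturb & Observe MPPT that switches the
# voltage-perturbation step according to zones of the dP/dV curve.
def variable_step_po(v, p, v_prev, p_prev,
                     fine_step=0.1, coarse_step=1.0, zone0_threshold=0.5):
    """Return the next PV reference voltage given two consecutive (V, P) samples."""
    dv, dp = v - v_prev, p - p_prev
    if dv == 0:
        return v + fine_step                       # force a perturbation
    slope = dp / dv                                # approximate dP/dV
    # zone 0: near the maximum power point -> fine step
    # zones 1 and 2: far from the MPP on either side -> coarse step
    step = fine_step if abs(slope) < zone0_threshold else coarse_step
    # classic P&O direction rule: keep moving if power increased, otherwise reverse
    direction = 1.0 if dp * dv > 0 else -1.0
    return v + direction * step

# toy usage with a made-up P(V) = V * I(V) curve standing in for the PV panel
def pv_power(v):
    i = max(8.0 - 0.02 * v - 0.0005 * v**3 / 40.0, 0.0)   # hypothetical I-V curve
    return v * i

v_prev, v = 20.0, 20.5
for _ in range(60):
    v_next = variable_step_po(v, pv_power(v), v_prev, pv_power(v_prev))
    v_prev, v = v, v_next
print(f"operating voltage after 60 P&O steps: {v:.2f} V")
```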

Keywords: Solar photovoltaic systems, MPPT, voltage source converter, grid synchronization technique.

260 Long Short-Term Memory Based Model for Modeling Nicotine Consumption Using an Electronic Cigarette and Internet of Things Devices

Authors: Hamdi Amroun, Yacine Benziani, Mehdi Ammi

Abstract:

In this paper, we investigate whether nicotine concentration can be predicted accurately by using a network of smart objects and an e-cigarette. The approach consists, first, of recognizing factors that influence smoking cessation, such as physical activity and participant behavior (using both a smartphone and a smartwatch), and then of predicting the configuration of the e-cigarette (in terms of nicotine concentration, power, and resistance). The study uses a network of common connected objects, a smartwatch, a smartphone, and an e-cigarette, carried by the participants during an uncontrolled experiment. The data obtained from the sensors of the three devices were used to train a Long Short-Term Memory (LSTM) network. Results show that our LSTM-based model predicts the configuration of the e-cigarette in terms of nicotine concentration, power, and resistance with root mean square error percentages of 12.9%, 9.15%, and 11.84%, respectively. This study can help users better control their nicotine consumption and offers an intelligent configuration of the e-cigarette.
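
A minimal sketch of an LSTM regressor with three outputs is shown below, using PyTorch on synthetic stand-in data; the tensor shapes, hyperparameters and sensor features are assumptions and do not reproduce the study's model.

```python
import torch
import torch.nn as nn

# Minimal LSTM regressor sketch: sequences of sensor features -> three targets
# (nicotine concentration, power, resistance).
class ConfigLSTM(nn.Module):
    def __init__(self, n_features=12, hidden=64, n_targets=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_targets)

    def forward(self, x):                 # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # regress from the last time step

model = ConfigLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# synthetic stand-in data: 256 windows of 30 time steps with 12 features
x = torch.randn(256, 30, 12)
y = torch.randn(256, 3)

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print(f"final training MSE: {loss.item():.4f}")
```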

Keywords: IoT, activity recognition, automatic classification, unconstrained environment, deep neural networks.

259 Monthly River Flow Prediction Using a Nonlinear Prediction Method

Authors: N. H. Adenan, M. S. M. Noorani

Abstract:

River flow prediction is an essential tool for proper management of water resources and the optimal distribution of water to consumers. This study presents an analysis and prediction using a nonlinear prediction method on monthly river flow data for Tanjung Tualang from 1976 to 2006. The nonlinear prediction method involves phase space reconstruction and a local linear approximation approach. The phase space reconstruction embeds the one-dimensional observed series (287 months of data) in a multidimensional phase space to reveal the dynamics of the system, and the reconstructed phase space is then used to predict the next 72 months. Prediction performance, measured by the correlation coefficient (CC) and the root mean square error (RMSE), is compared for the nonlinear prediction method, ARIMA and SVM. The comparison shows that the prediction results of the nonlinear prediction method are better than those of ARIMA and SVM. Therefore, the results of this study could be used to develop an efficient water management system to optimize the allocation of water resources.
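
The sketch below illustrates the delay-embedding and local-approximation idea in its simplest form (a zeroth-order neighbour-average predictor); the embedding dimension, delay and neighbour count are arbitrary assumptions, and the paper's local linear fit is replaced by a neighbour average.

```python
import numpy as np

# Nonlinear prediction by phase space (delay) reconstruction and local approximation:
# embed the series, find nearest neighbours of the current state, average their successors.
def delay_embed(series, dim=3, tau=1):
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])

def local_predict(series, dim=3, tau=1, k=5):
    states = delay_embed(series, dim, tau)
    current = states[-1]
    candidates = states[:-1]                       # states whose successor is known
    dists = np.linalg.norm(candidates - current, axis=1)
    neighbours = np.argsort(dists)[:k]
    successors = series[neighbours + (dim - 1) * tau + 1]
    return successors.mean()

rng = np.random.default_rng(0)
flow = np.sin(np.arange(300) * 2 * np.pi / 12) + 0.1 * rng.standard_normal(300)
print(f"one-step-ahead prediction of the next value: {local_predict(flow):.3f}")
```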

Keywords: River flow, nonlinear prediction method, phase space, local linear approximation.

258 Analysis of Residual Strain and Stress Distributions in High Speed Milled Specimens using an Indentation Method

Authors: Felipe V. Díaz, Claudio A. Mammana, Armando P. M. Guidobono, Raúl E. Bolmaro

Abstract:

Through an analysis of the residual strain and stress distributions obtained at the surface of high-speed milled specimens of AA 6082-T6 aluminium alloy, the performance of an improved indentation method is evaluated. The method integrates a special indentation device into a universal measuring machine. This device introduces elongated indents, which diminishes the absolute error of measurement. Notably, the method offers the great advantage of avoiding both dedicated equipment and highly qualified personnel, and their inherently high costs. In this work, the cutting tool geometry and high-speed parameters are selected to introduce reduced plastic damage. The stability of the shapes adopted by the residual strain and stress distributions is evaluated by varying the depth of cut. The results show that the strain and stress distributions remain unchanged, compressive and small. Moreover, these distributions reveal a similar asymmetry when the gradients corresponding to conventional and climb cutting zones are compared.

Keywords: Residual strain, residual stress, high speed milling, indentation methods, aluminium alloys.

257 Trade Openness and Its Effects on Economic Growth in Selected South Asian Countries: A Panel Data Study

Authors: Samra Bajwa, Muhammad W. Siddiqi

Abstract:

The study investigates the causal link between trade openness and economic growth for four South Asian countries over the periods 1972-1985 and 1986-2007, in order to examine the scenario before and after the implementation of SAARC. Panel cointegration and FMOLS techniques are employed for short-run and long-run estimates. In 1972-85, short-run unidirectional causality from GDP to openness is found, whereas in 1986-2007 there is bi-directional causality between GDP and openness. The long-run elasticity between GDP and openness carries a negative sign in 1972-85, indicating a long-run negative relationship, while in 1986-2007 the elasticity is positive, indicating positive causation between GDP and openness. It can therefore be concluded that the overall situation of the selected countries improved after the implementation of SAARC. The coefficient of the error-correction term also suggests that short-term equilibrium adjustments are driven by adjustment back to the long-run equilibrium.

Keywords: Causality, Economic Growth, Panel Co-integration, SAARC, Trade Openness.

256 A Hybrid Feature Selection by Resampling, Chi squared and Consistency Evaluation Techniques

Authors: Amir-Massoud Bidgoli, Mehdi Naseri Parsa

Abstract:

In this paper, a combined feature selection method is proposed which takes advantage of sample domain filtering, resampling and feature subset evaluation to reduce the dimensions of huge datasets and select reliable features. The method exploits both the feature space and the sample domain to improve the feature selection process, and uses a combination of chi-squared and Consistency attribute evaluation to seek reliable features. It consists of two phases: the first phase filters and resamples the sample domain, and the second phase adopts a hybrid procedure to find the optimal feature space by applying chi-squared and Consistency subset evaluation together with genetic search. Experiments on various-sized datasets from the UCI Repository of Machine Learning databases show that the performance of five classifiers (Naïve Bayes, Logistic, Multilayer Perceptron, Best-First Decision Tree and JRIP) improves simultaneously and that the classification error for these classifiers decreases considerably. The experiments also show that this method outperforms other feature selection methods.
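
As a partial, hedged sketch of the resampling and chi-squared stages only (the consistency evaluation and genetic search of the second phase are not reproduced, and the dataset is just an example), one might proceed as follows with scikit-learn:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.utils import resample

X, y = load_breast_cancer(return_X_y=True)

# phase 1 (simplified): resample the sample domain
X_res, y_res = resample(X, y, n_samples=len(y), random_state=0)

# phase 2 (partial): chi-squared ranking requires non-negative features
X_pos = X_res - X_res.min(axis=0)
selector = SelectKBest(chi2, k=10).fit(X_pos, y_res)

selected = np.flatnonzero(selector.get_support())
print("indices of the 10 highest-scoring features:", selected)
```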

Keywords: feature selection, resampling, reliable features, Consistency Subset Evaluation.

255 A Review and Comparative Analysis on Cluster Ensemble Methods

Authors: S. Sarumathi, P. Ranjetha, C. Saraswathy, M. Vaishnavi, S. Geetha

Abstract:

Clustering is an unsupervised learning technique for aggregating data objects into meaningful classes so that intra-cluster similarity is maximized and inter-cluster similarity is minimized in data mining. However, no single clustering algorithm proves to be the most effective at producing the best result in all cases. As a result, a new and challenging technique known as the cluster ensemble approach has blossomed in order to address this problem, and it has proved a successful approach to the cluster analysis issue. The cluster ensemble's main goal is to combine similar clustering solutions in a way that achieves high precision while also improving on the quality of the individual clusterings. Because of the massive and rapid creation of new approaches in the field of data mining, the ongoing interest in inventing novel algorithms necessitates a thorough examination of current techniques and future innovation. This paper presents a comparative analysis of various cluster ensemble approaches, including their methodologies, formal working processes, and standard accuracy and error rates. The community of clustering practitioners will benefit from this exploratory and clear review, which will aid in determining the most appropriate solution to the problem at hand.

Keywords: Clustering, cluster ensemble methods, consensus function, data mining, unsupervised learning.

254 Relative Radiometric Correction of Cloudy Multitemporal Satellite Imagery

Authors: Seema Biday, Udhav Bhosle

Abstract:

Repeated observation of a given area over time yields potential for many forms of change detection analysis. These repeated observations are confounded in terms of radiometric consistency due to changes in sensor calibration over time, differences in illumination and observation angles, and variation in atmospheric effects. This paper demonstrates the applicability of an empirical relative radiometric normalization method to a set of multitemporal cloudy images acquired by the Resourcesat-1 LISS III sensor. The objective of this study is to detect and remove cloud cover and to normalize the images radiometrically. Cloud detection is achieved using the Average Brightness Threshold (ABT) algorithm. The detected cloud is removed and replaced with data from other images of the same area. After cloud removal, the proposed normalization method is applied to reduce the radiometric influence caused by non-surface factors: landscape elements whose reflectance values are nearly constant over time, i.e. the subset of non-changing pixels, are identified using a frequency-based correlation technique. The quality of the radiometric normalization is statistically assessed by the R² value and the mean square error (MSE) between each pair of analogous bands.
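
A minimal sketch of the normalization step for one band pair is shown below, assuming synthetic images and a placeholder no-change mask (the paper's frequency-based correlation test is not reproduced): fit a linear gain/offset on the pseudo-invariant pixels, apply it, and report R² and MSE.

```python
import numpy as np

rng = np.random.default_rng(1)
reference = rng.uniform(20, 200, size=(100, 100))               # reference-date band
subject = 0.8 * reference + 15 + rng.normal(0, 2, (100, 100))   # later-date band

# placeholder no-change mask (in practice: pixels stable across dates)
mask = np.abs(subject - subject.mean()) < subject.std()

gain, offset = np.polyfit(subject[mask], reference[mask], deg=1)
normalized = gain * subject + offset

residual = reference - normalized
mse = np.mean(residual ** 2)
ss_res = np.sum(residual ** 2)
ss_tot = np.sum((reference - reference.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"gain={gain:.3f}, offset={offset:.2f}, R^2={r2:.3f}, MSE={mse:.2f}")
```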

Keywords: Correlation, Frequency domain, Multitemporal, Relative Radiometric Correction

253 Adaptation Learning Speed Control for a High- Performance Induction Motor using Neural Networks

Authors: M. Zerikat, S. Chekroun

Abstract:

This paper proposes an effective adaptation learning algorithm based on artificial neural networks for speed control of an induction motor assumed to operate in a high-performance drives environment. The structure consists of a neural network controller and an algorithm for changing the NN weights so that the motor speed accurately tracks the reference command. The paper also uses a realistic and practical scheme to estimate and adaptively learn the noise content in the speed-load torque characteristic of the motor. The validity of the proposed controller is verified through a laboratory implementation and through computer simulations in MATLAB. The scheme is also tested for its tracking properties using different types of reference signals, and its performance and robustness are evaluated under a variety of operating conditions of the induction motor drive. The obtained results demonstrate the effectiveness of the proposed control scheme: system performance, both in terms of steady-state speed error and dynamic behaviour, was found to be excellent, with no overshoot.

Keywords: Electric drive, Induction motor, speed control, Adaptive control, neural network, High Performance.

252 Coding based Synchronization Algorithm for Secondary Synchronization Channel in WCDMA

Authors: Deng Liao, Dongyu Qiu, Ahmed K. Elhakeem

Abstract:

A new code synchronization algorithm is proposed in this paper for the secondary cell-search stage in wideband CDMA systems. Rather than using the Cyclically Permutable (CP) code in the Secondary Synchronization Channel (S-SCH) to simultaneously determine the frame boundary and scrambling code group, the new synchronization algorithm implements the same function with less system complexity and a shorter Mean Acquisition Time (MAT). The Secondary Synchronization Code (SSC) is redesigned by splitting it into two sub-sequences: the scrambling code group information is treated as data bits, and simple time-diversity BCH coding is used for further reliability. This avoids involved and time-costly Reed-Solomon (RS) code computations and comparisons. Analysis and simulation results show that the Synchronization Error Rate (SER) yielded by the new algorithm in Rayleigh fading channels is close to that of the conventional algorithm in the standard. The new synchronization algorithm reduces system complexity, shortens the average cell-search time and can be implemented in the slot-based cell-search pipeline. By exploiting antenna diversity and pipelining the correlation processes, the new algorithm also lends itself to multiple antenna systems.

Keywords: WCDMA cell-search, synchronization algorithm, secondary synchronization channel, antenna diversity.

251 Fast Search Method for Large Video Database Using Histogram Features and Temporal Division

Authors: Feifei Lee, Qiu Chen, Koji Kotani, Tadahiro Ohmi

Abstract:

In this paper, we propose an improved fast search algorithm that uses combined histogram features and a temporal division method to retrieve short MPEG video clips from a large video database. Two types of histogram features are used to generate more robust features. The first is based on the adjacent pixel intensity difference quantization (APIDQ) algorithm, which had previously been applied reliably to human face recognition; an APIDQ histogram is utilized as the feature vector of the frame image. The other is an ordinal feature, which is robust to color distortion. Combined with active search [4], a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on 6 hours of video, searching for 200 given MPEG video clips, each 30 seconds long. Experimental results show that the proposed algorithm can detect a similar video clip in merely 120 ms, and an Equal Error Rate (EER) of 1% is achieved, which is more accurate and robust than the conventional fast video search algorithm.
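
The following is a simplified, hedged sketch of an APIDQ-style frame feature: intensity differences between each pixel and its right/lower neighbours are quantized into a few bins and the normalized bin counts serve as the feature vector. The bin edges and neighbourhood are assumptions, not the published APIDQ design.

```python
import numpy as np

def apidq_histogram(frame, edges=(-64, -16, -4, 0, 4, 16, 64)):
    frame = frame.astype(np.int16)
    d_right = (frame[:, 1:] - frame[:, :-1]).ravel()
    d_down = (frame[1:, :] - frame[:-1, :]).ravel()
    diffs = np.concatenate([d_right, d_down])
    bins = np.digitize(diffs, edges)               # quantize the differences
    hist = np.bincount(bins, minlength=len(edges) + 1).astype(float)
    return hist / hist.sum()                       # normalized histogram feature

rng = np.random.default_rng(0)
frame_a = rng.integers(0, 256, (120, 160), dtype=np.uint8)
frame_b = np.clip(frame_a.astype(int) + 5, 0, 255).astype(np.uint8)  # brightened copy

h_a, h_b = apidq_histogram(frame_a), apidq_histogram(frame_b)
print("L1 distance between frame features:", float(np.abs(h_a - h_b).sum()))
```

Because the feature is built from intensity differences, a uniform brightness shift barely changes it, which is the kind of robustness a frame-matching search relies on.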

Keywords: Fast search, Adjacent pixel intensity difference quantization (APIDQ), DC image, Histogram feature.

250 Computer Aided Design of Reshaping Process of Circular Pipes into Square Pipes

Authors: Parviz Alinezhad, Ali Sanati, Koorosh Naser Momtahen

Abstract:

Square pipes (pipes with square cross sections) are used for various industrial purposes, such as machine structure components and housing/building elements, and their utilization is extending rapidly and widely. Hence, the output of these pipes is increasing and new application fields are continually developing. Owing to various recent demands, the products have to satisfy difficult specifications with high dimensional accuracy. The design of the reshaping process for pipes with square cross sections, however, is performed by trial and error based on expert experience. In this paper, a computer-aided simulation is developed based on the 2-D elastic-plastic method, with consideration of shear deformation, to analyze the reshaping process. The effect of various parameters, such as the diameter of the circular pipe and the mechanical properties of the metal, on product dimensions and quality can be evaluated using this simulation. Moreover, the design of the reshaping process, including determination of the shrinkage of the cross section, the necessary number of stands, the radius of the rolls and the height of the pipe at each stand, is investigated. Further, it is shown that there is good agreement between the results of the design method and the experimental results.

Keywords: Circular Pipes, Square Pipes, Shear Deformation, Reshaping Process, Numerical Simulation.

249 A Multigrid Approach for Three-Dimensional Inverse Heat Conduction Problems

Authors: Jianhua Zhou, Yuwen Zhang

Abstract:

A two-step multigrid approach is proposed to solve the inverse heat conduction problem in a 3-D object under laser irradiation. In the first step, the location of the laser center is estimated using a coarse and uniform grid system. In the second step, the front-surface temperature is recovered in good accuracy using a multiple grid system in which fine mesh is used at laser spot center to capture the drastic temperature rise in this region but coarse mesh is employed in the peripheral region to reduce the total number of sensors required. The effectiveness of the two-step approach and the multiple grid system are demonstrated by the illustrative inverse solutions. If the measurement data for the temperature and heat flux on the back surface do not contain random error, the proposed multigrid approach can yield more accurate inverse solutions. When the back-surface measurement data contain random noise, accurate inverse solutions cannot be obtained if both temperature and heat flux are measured on the back surface.

Keywords: Conduction, inverse problems, conjugated gradient method, laser.

248 Simulation and Design of Single Fed Circularly Polarized Triangular Microstrip Antenna with Wide Band Tuning Stub

Authors: R. Irani, A. Ghavidel, F. Hodjat Kashani

Abstract:

Recently, several designs of single-fed circularly polarized microstrip antennas have been studied, but relatively few designs for achieving circular polarization with a triangular microstrip antenna are available. Existing designs of single-fed circularly polarized triangular microstrip antennas typically use an equilateral triangular patch with a slit or a horizontal slot on the patch, or add a narrow-band stub on an edge or vertex of the triangular patch. Placing a narrow-band tuning stub at the middle of one edge of the triangle makes it possible to compensate for fabrication errors and substrate material tolerances simply by adjusting the stub length. The disadvantage of this method, however, is the very long stub (approximately 1/3 of the length of a triangle edge). In this paper, a wide-band stub is applied instead of a narrow-band stub, so that the stub length is reduced to around 1/10 of a triangle edge; in addition, changing the aperture angle of the stub provides more freedom for designing and producing the circularly polarized wave.

Keywords: Circular polarization, Microstrip antenna, single feed, wide band stub.

247 Reliability Evaluation of Composite Electric Power System Based On Latin Hypercube Sampling

Authors: R. Ashok Bakkiyaraj, N. Kumarappan

Abstract:

This paper investigates the suitability of Latin Hypercube Sampling (LHS) for composite electric power system reliability analysis. Each sample generated by LHS is mapped into an equivalent system state and used for evaluating the annualized system and load point indices. A DC load-flow based state evaluation model is solved for each sampled contingency state. The indices evaluated are loss of load probability, loss of load expectation, expected demand not served and expected energy not supplied. The application of LHS is illustrated through case studies carried out on the RBTS and IEEE-RTS test systems. The results obtained are compared with non-sequential Monte Carlo simulation and state enumeration analytical approaches. An error analysis is also carried out to check the ability of the LHS method to capture the distributions of the reliability indices. It is found that the LHS approach estimates the indices closer to their actual values and gives tighter bounds than non-sequential Monte Carlo simulation.
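
A minimal Latin Hypercube Sampling sketch is given below: each dimension of the unit cube is split into equal strata, one point is drawn per stratum, and the strata are shuffled independently per dimension. The mapping to component outage states via a forced outage rate is only a hypothetical illustration, not the paper's state model.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    rng = np.random.default_rng(rng)
    # one uniform draw inside each of the n strata, per dimension
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    # independently permute the strata in every dimension
    for d in range(n_dims):
        u[:, d] = u[rng.permutation(n_samples), d]
    return u

# example: sample outage states of 5 components with an assumed forced outage rate of 0.05
samples = latin_hypercube(n_samples=1000, n_dims=5, rng=42)
outage_states = samples < 0.05          # True = component on outage
print("empirical outage probability per component:", outage_states.mean(axis=0))
```

Because of the stratification, the empirical outage frequencies match the target probability exactly, which is the variance-reduction property motivating LHS over plain Monte Carlo sampling.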

Keywords: Composite power system, Latin Hypercube sampling, Monte Carlo simulation, Reliability evaluation, Variance analysis.

246 Detection of Ultrasonic Images in the Presence of a Random Number of Scatterers: A Statistical Learning Approach

Authors: J. P. Dubois, O. M. Abdul-Latif

Abstract:

Support Vector Machine (SVM) is a statistical learning tool that was initially developed by Vapnik in 1979 and later extended into the more complex concept of structural risk minimization (SRM). SVM is playing an increasing role in detection problems across various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM is applied to the detection of medical ultrasound images in the presence of partially developed speckle noise. The simulation was done for single-look and multi-look speckle models to give a complete overview of, and insight into, the proposed SVM-based detector. The structure of the SVM was derived and applied to clinical ultrasound images, and its performance in terms of the mean square error (MSE) metric was calculated. We show that the SVM-detected ultrasound images have a very low MSE and are of good quality, and that the quality of the processed speckled images improves for the multi-look model. Furthermore, the contrast of the SVM-detected images is higher than that of the original non-noisy images, indicating that the SVM approach increases the distance between the pixel reflectivity levels (the detection hypotheses) in the original images.

Keywords: LS-SVM, medical ultrasound imaging, partially developed speckle, multi-look model.

245 SVM-Based Detection of SAR Images in Partially Developed Speckle Noise

Authors: J. P. Dubois, O. M. Abdul-Latif

Abstract:

Support Vector Machine (SVM) is a statistical learning tool that was initially developed by Vapnik in 1979 and later extended into the more complex concept of structural risk minimization (SRM). SVM is playing an increasing role in detection problems across various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM is applied to the detection of SAR (synthetic aperture radar) images in the presence of partially developed speckle noise. The simulation was done for single-look and multi-look speckle models to give a complete overview of, and insight into, the proposed SVM-based detector. The structure of the SVM was derived and applied to real SAR images, and its performance in terms of the mean square error (MSE) metric was calculated. We show that the SVM-detected SAR images have a very low MSE and are of good quality, and that the quality of the processed speckled images improves for the multi-look model. Furthermore, the contrast of the SVM-detected images is higher than that of the original non-noisy images, indicating that the SVM approach increases the distance between the pixel reflectivity levels (the detection hypotheses) in the original images.

Keywords: Least Square-Support Vector Machine, Synthetic Aperture Radar, Partially Developed Speckle, Multi-Look Model.

244 Image Features Comparison-Based Position Estimation Method Using a Camera Sensor

Authors: Jinseon Song, Yongwan Park

Abstract:

In this paper, we propose a method that estimates a user's position from a single camera, based on a pre-built image database. Previous positioning approaches calculate distance from the signal arrival time, as in GPS (Global Positioning System) or RF (Radio Frequency) systems. However, these methods have a weakness: they suffer from a large error range under signal interference. One solution is to estimate position with a camera sensor, but a single camera has difficulty obtaining relative position data, and a stereo camera has difficulty providing real-time position data because of the large amount of image data. In this research, we first build an image database of the space in which the positioning service is to be provided, using a single camera. Next, we judge similarity through image matching between the database images and the image transmitted by the user. Finally, we determine the user's position from the position of the most similar database image. To verify the proposed method, we experiment in real indoor and outdoor environments. The proposed method covers a wide positioning range and can determine not only the user's position but also the direction.
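
A hedged sketch of the matching step is given below: describe each database image with local features, match the query against every entry, and return the stored pose of the best match. ORB is used here as a freely available stand-in for the SURF features named in the keywords, and the synthetic images, distance threshold and pose table are purely illustrative.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def synthetic_view(offset):
    """Generate a toy 'scene' image so the sketch runs without real photos."""
    img = np.full((240, 320), 30, np.uint8)
    cv2.rectangle(img, (40 + offset, 60), (120 + offset, 140), 200, -1)
    cv2.circle(img, (220 + offset, 120), 35, 120, -1)
    cv2.putText(img, "LAB-3", (60 + offset, 200), cv2.FONT_HERSHEY_SIMPLEX, 1.5, 255, 3)
    return img

# database: (x, y) position where each view was captured -> image (hypothetical poses)
database = {(1.0, 4.5): synthetic_view(0), (3.0, 4.5): synthetic_view(60)}

def estimate_position(query_img):
    _, q_des = orb.detectAndCompute(query_img, None)
    best_pose, best_count = None, -1
    for pose, img in database.items():
        _, des = orb.detectAndCompute(img, None)
        if q_des is None or des is None:
            continue                                     # no features found
        matches = matcher.match(q_des, des)
        good = [m for m in matches if m.distance < 40]   # assumed distance threshold
        if len(good) > best_count:
            best_pose, best_count = pose, len(good)
    return best_pose, best_count

query = synthetic_view(5)            # the user's transmitted image, close to the first view
print(estimate_position(query))
```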

Keywords: Positioning, Distance, Camera, Features, SURF (Speeded-Up Robust Features), Database, Estimation.

243 Multi-Objective Multi-Mode Resource-Constrained Project Scheduling Problem by Preemptive Fuzzy Goal Programming

Authors: Phruksaphanrat B.

Abstract:

This research proposes a preemptive fuzzy goal programming model for the multi-objective multi-mode resource-constrained project scheduling problem. The objectives of the problem are minimization of the total time and the total cost of the project. The objective in a multi-mode resource-constrained project scheduling problem is often minimization of the makespan; however, time and cost should be considered simultaneously, with different levels of priority. Moreover, not all elements of a project's cost are included in the conventional cost objective function, and an incomplete total project cost causes errors in determining the project schedule. In this research, preemptive fuzzy goal programming is presented to solve the multi-objective multi-mode resource-constrained project scheduling problem. It can find a compromise solution to the problem and is also flexible enough to be adjusted to find a variety of alternative solutions.

Keywords: Multi-mode resource constrained project scheduling problem, Fuzzy set, Goal programming, Preemptive fuzzy goal programming.

242 An Improved Learning Algorithm based on the Conjugate Gradient Method for Back Propagation Neural Networks

Authors: N. M. Nawi, M. R. Ransing, R. S. Ransing

Abstract:

The conjugate gradient optimization algorithm usually used for nonlinear least squares is presented and combined with a modified back-propagation algorithm, yielding a new fast-training multilayer perceptron (MLP) algorithm (CGFR/AG). The approach presented in the paper consists of three steps: (1) modifying the standard back-propagation algorithm by introducing a gain-variation term in the activation function, (2) calculating the gradient of the error with respect to the weights and gain values, and (3) determining the new search direction by exploiting the information calculated in step (2) as well as the previous search direction. The proposed method improves the training efficiency of the back-propagation algorithm by adaptively modifying the initial search direction. The performance of the proposed method is demonstrated by comparison with the conjugate gradient algorithm from a neural network toolbox on the chosen benchmarks. The results show that the number of iterations required by the proposed method to converge is less than 20% of that required by the standard conjugate gradient and neural network toolbox algorithms.
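
As a generic, hedged illustration of the "previous search direction" update referred to above, the sketch below runs Fletcher-Reeves conjugate gradient on a toy loss; the MLP weights, gain-variation term and line search of CGFR/AG are not reproduced.

```python
import numpy as np

def loss(w):
    return 0.5 * w[0] ** 2 + 2.0 * w[1] ** 2 + 0.5 * (w[0] - 1) ** 4

def grad(w):
    return np.array([w[0] + 2.0 * (w[0] - 1) ** 3, 4.0 * w[1]])

w = np.array([3.0, -2.0])
g = grad(w)
d = -g                                   # first direction: steepest descent
for k in range(50):
    # crude backtracking line search along d (a stand-in for the paper's search)
    step = 1.0
    while loss(w + step * d) > loss(w) and step > 1e-8:
        step *= 0.5
    w = w + step * d
    g_new = grad(w)
    beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves coefficient
    d = -g_new + beta * d                # new direction reuses the previous one
    g = g_new
print(f"minimizer estimate {w}, loss {loss(w):.3e}")
```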

Keywords: Back-propagation, activation function, conjugate gradient, search direction, gain variation.

241 Management of Air Pollutants from Point Sources

Authors: N. Lokeshwari, G. Srinikethan, V. S. Hegde

Abstract:

Monitoring is essential for assessing the effectiveness of air pollution control actions. The goal of an air quality information system is, through monitoring, to keep authorities, major polluters and the public informed of short- and long-term changes in air quality, thereby helping to raise awareness. Mathematical models are the best tools available for prediction in air quality management. The main objective of this work was to apply a model that predicts the concentration levels of different pollutants at any instant of time. In this study, the concentration distributions of industrial air pollutants such as nitrogen dioxide (NO2), sulphur dioxide (SO2) and total suspended particulates (TSP) are determined using a Gaussian model, and the effect of wind speed and direction on the pollutant concentration within the affected area is evaluated. To determine the efficiency and percentage error of the modeling, a data validation process was carried out. Air quality sampling was conducted to obtain the existing air quality around a factory, and the concentrations of pollutants in a plume were found to be inversely proportional to wind velocity. The resultant ground-level concentrations were then compared with the quality standards to determine whether there could be a negative impact on health. This study concludes that pollutant concentrations can be predicted well using the Gaussian model. A database management system is developed for the air data of the Hubli-Dharwad region.
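
A minimal sketch of the Gaussian plume formula for ground-level concentration downwind of a point source is given below; the dispersion-coefficient curves are generic textbook-style values for a neutral stability class and the emission figures are made up, so none of this reproduces the study's parameters.

```python
import numpy as np

def gaussian_plume(Q, u, x, y, z, H):
    """Concentration [g/m^3] at (x, y, z) for emission rate Q [g/s], wind speed u [m/s],
    effective stack height H [m]; x downwind, y crosswind, z above ground."""
    sigma_y = 0.08 * x * (1 + 0.0001 * x) ** -0.5   # assumed dispersion curves
    sigma_z = 0.06 * x * (1 + 0.0015 * x) ** -0.5
    lateral = np.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (np.exp(-(z - H) ** 2 / (2 * sigma_z ** 2)) +
                np.exp(-(z + H) ** 2 / (2 * sigma_z ** 2)))   # ground reflection term
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# ground-level centre-line concentration 1 km downwind of a 40 m stack (illustrative)
c = gaussian_plume(Q=10.0, u=4.0, x=1000.0, y=0.0, z=0.0, H=40.0)
print(f"concentration: {c * 1e6:.1f} ug/m^3")
```

The 1/u factor in the formula is the inverse proportionality to wind velocity that the abstract reports observing.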

Keywords: DBMS, NO2, SO2, Wind rose plots.

240 Comparison of ANFIS and ANN for Estimation of Biochemical Oxygen Demand Parameter in Surface Water

Authors: S. Areerachakul

Abstract:

Nowadays, several techniques, such as the Fuzzy Inference System (FIS) and the Neural Network (NN), are employed to develop predictive models for estimating water quality parameters. The main objective of this study is to compare the predictive ability of an Adaptive Neuro-Fuzzy Inference System (ANFIS) model and an Artificial Neural Network (ANN) model for estimating the Biochemical Oxygen Demand (BOD) from data collected at 11 sampling sites of the Saen Saep canal in Bangkok, Thailand. The data were obtained from the Department of Drainage and Sewerage, Bangkok Metropolitan Administration, during 2004-2011. Five water quality parameters, namely Dissolved Oxygen (DO), Chemical Oxygen Demand (COD), Ammonia Nitrogen (NH3N), Nitrate Nitrogen (NO3N), and Total Coliform bacteria (T-coliform), are used as inputs to the models; these water quality indices affect the biochemical oxygen demand. The experimental results indicate that the ANN model provides a higher correlation coefficient (R=0.73) and a lower root mean square error (RMSE=4.53) than the corresponding ANFIS model.

Keywords: adaptive neuro-fuzzy inference system, artificial neural network, biochemical oxygen demand, surface water.

239 Long Term Evolution Multiple-Input Multiple-Output Network in Unmanned Air Vehicles Platform

Authors: Ashagrie Getnet Flattie

Abstract:

Line-of-sight (LOS) availability, data rates, quality, and flexible network service are limited by the fact that, for the duration of any given connection, the link experiences severe variation in signal strength due to fading and path loss. Wireless systems face major challenges in achieving wide coverage and capacity, without degrading system performance, while providing access to data everywhere, all the time. In this paper, the cell coverage and edge rate of different Multiple-Input Multiple-Output (MIMO) schemes in a 20 MHz Long Term Evolution (LTE) system on an Unmanned Air Vehicle (UAV) platform are investigated. After some background on the enormous potential of UAV, MIMO, and LTE in wireless links, the paper presents a system model that attempts to realize the benefits of incorporating MIMO into a UAV platform. The performances of three MIMO LTE schemes are compared with that of the 4x4 MIMO LTE-on-UAV scheme in order to evaluate the improvement in cell radius, BER, and data throughput of the system in different morphologies. The results show that significant gains in bit error rate (BER), data rate, and coverage can be achieved using the presented scenario.

Keywords: BER, LTE, MIMO, path loss, UAV.

238 Palmprint Recognition by Wavelet Transform with Competitive Index and PCA

Authors: Deepti Tamrakar, Pritee Khanna

Abstract:

This manuscript presents palmprint recognition with high accuracy by combining different texture extraction approaches. The Region of Interest (ROI) is decomposed into different frequency-time sub-bands by a wavelet transform of up to two levels, and only the level-two approximation image is selected; this is known as the Approximate Image ROI (AIROI) and carries information about the principal lines of the palm. The Competitive Index is used as the palmprint feature: six Gabor filters of different orientations are convolved with the palmprint image to extract orientation information, and a winner-take-all strategy selects the dominant orientation for each pixel, which constitutes the Competitive Index. Further, PCA is applied to select highly uncorrelated Competitive Index features, to reduce the dimensionality of the feature vector, and to project the features onto the eigenspace. The similarity of two palmprints is measured by the Euclidean distance metric. The algorithm is tested on the Hong Kong PolyU palmprint database, and AIROIs from different wavelet filter families are also tested with the Competitive Index and PCA. The AIROI of the db7 wavelet filter achieves an Equal Error Rate (EER) of 0.0152% and a Genuine Acceptance Rate (GAR) of 99.67% on the Hong Kong PolyU palm database.

Keywords: DWT, EER, Euclidean Distance, Gabor filter, PCA, ROI.

237 Impulse Response Shortening for Discrete Multitone Transceivers using Convex Optimization Approach

Authors: Ejaz Khan, Conor Heneghan

Abstract:

In this paper, we propose a new criterion for solving the channel shortening problem in multi-carrier systems. In a discrete multitone receiver, a time-domain equalizer (TEQ) reduces intersymbol interference (ISI) by shortening the effective duration of the channel impulse response. The minimum mean square error (MMSE) method for TEQ design does not give satisfactory results. In [1], a new criterion is introduced for partially equalizing severe ISI channels to reduce the cyclic prefix overhead of the discrete multitone transceiver (DMT), assuming a fixed transmission bandwidth. Due to a specific constraint in that method (a unit-norm constraint on the target impulse response (TIR)), the freedom to choose the optimum TIR vector is reduced; better results can be obtained by avoiding the unit-norm constraint on the TIR. In this paper, we change the cost function proposed in [1] to one of maximizing a determinant subject to a linear matrix inequality (LMI) and a quadratic constraint, and solve the resulting optimization problem. The usefulness of the proposed method is shown with the help of simulations.

Keywords: Equalizer, target impulse response, convex optimization, matrix inequality.

236 Regionalization of IDF Curves with L-Moments for Storm Events

Authors: Noratiqah Mohd Ariff, Abdul Aziz Jemain, Mohd Aftar Abu Bakar

Abstract:

The construction of Intensity-Duration-Frequency (IDF) curves is one of the most common and useful tools for designing hydraulic structures and for providing a mathematical relationship between rainfall characteristics. IDF curves, especially those in Peninsular Malaysia, are often built using moving windows of rainfall. However, these windows do not represent actual rainfall events, since the duration of the rainfall is prefixed. Hence, instead of using moving windows, this study aims to find regionalized distributions for IDF curves of extreme rainfall based on storm events. A homogeneity test is performed on the annual maxima of storm intensities to identify homogeneous regions of storms in Peninsular Malaysia. The L-moment method is then used to regionalize the Generalized Extreme Value (GEV) distribution of these annual maxima, and IDF curves are subsequently constructed using the regional distributions. The differences between the IDF curves obtained and those found using at-site GEV distributions are assessed through the coefficient of variation of the root mean square error, the mean percentage difference and the coefficient of determination. The small differences imply that the construction of IDF curves could be simplified by finding a general probability distribution for each region. This will also help in constructing IDF curves for sites with no rainfall station.
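
The sketch below computes the sample L-moments (via the standard unbiased probability-weighted-moment estimators) that such a regional GEV fit starts from; the GEV fitting itself and the regional weighting step are not shown, and the Gumbel-distributed stand-in data are synthetic.

```python
import numpy as np

def sample_l_moments(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1 = b0                      # L-location
    l2 = 2 * b1 - b0             # L-scale
    l3 = 6 * b2 - 6 * b1 + b0    # third L-moment
    return l1, l2, l3 / l2       # returns (l1, l2, L-skewness t3)

rng = np.random.default_rng(7)
annual_max_intensity = rng.gumbel(loc=50, scale=12, size=40)   # stand-in station data
l1, l2, t3 = sample_l_moments(annual_max_intensity)
print(f"l1={l1:.2f}, l2={l2:.2f}, t3={t3:.3f} (Gumbel theory: t3 ~ 0.17)")
```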

Keywords: IDF curves, L-moments, regionalization, storm events.

235 Optical Signal-To-Noise Ratio Monitoring Based on Delay Tap Sampling Using Artificial Neural Network

Authors: Feng Wang, Shencheng Ni, Shuying Han, Shanhong You

Abstract:

With the development of optical communication, optical performance monitoring (OPM) has received more and more attention. Since the optical signal-to-noise ratio (OSNR) is directly related to the bit error rate (BER), it is one of the important parameters in optical networks. Recently, artificial neural networks (ANNs), which have strong learning and generalization abilities, have developed greatly. In this paper, a method of OSNR monitoring based on delay-tap sampling (DTS) and an ANN is proposed. The DTS technique is used to extract the eigenvalues of the signal, which are then input into the ANN to realize OSNR monitoring. Experiments on 10 Gb/s non-return-to-zero (NRZ) on-off keying (OOK), 20 Gb/s pulse amplitude modulation (PAM4) and 20 Gb/s return-to-zero (RZ) differential phase-shift keying (DPSK) systems demonstrate OSNR monitoring based on the proposed method. The experimental results show that the OSNR monitoring range is from 15 to 30 dB, and the root-mean-square errors (RMSEs) for the 10 Gb/s NRZ-OOK, 20 Gb/s PAM4 and 20 Gb/s RZ-DPSK systems are 0.36 dB, 0.45 dB and 0.48 dB, respectively. The impact of chromatic dispersion (CD) on the accuracy of OSNR monitoring is also investigated in the three experimental systems mentioned above.

Keywords: Artificial neural network, ANN, chromatic dispersion, delay-tap sampling, optical signal-to-noise ratio, OSNR.

234 Monitoring Blood Pressure Using Regression Techniques

Authors: Qasem Qananwah, Ahmad Dagamseh, Hiam AlQuran, Khalid Shaker Ibrahim

Abstract:

Blood pressure gives physicians deep insight into the cardiovascular system, and the determination of individual blood pressure is a standard clinical procedure for assessing cardiovascular problems. Conventional techniques to measure blood pressure (e.g. the cuff method) allow only a limited number of readings over a given period (e.g. every 5-10 minutes). Additionally, these systems disturb the blood flow, impeding continuous blood pressure monitoring, especially in emergency cases or for critically ill persons. In this paper, the most important statistical features of the photoplethysmogram (PPG) signal were extracted to estimate blood pressure noninvasively. PPG signals from more than 40 subjects were measured and analyzed, and 12 features were extracted. The features were fed to principal component analysis (PCA) to find the most important independent features with the highest correlation with blood pressure. The results show that the mean stiffness index and the standard deviation of the beat-to-beat heart rate were the most important features. A model representing both features for Systolic Blood Pressure (SBP) and Diastolic Blood Pressure (DBP) was obtained using a statistical regression technique; surface fitting was used to best fit the data series. The results show that the error in estimating SBP is 4.95% and in estimating DBP is 3.99%.
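
A minimal sketch of the feature-reduction plus regression idea is given below, using scikit-learn on synthetic stand-in data; the 12 hypothetical PPG features, the two-component PCA and the linear model are assumptions and do not reproduce the study's features or its surface-fitting model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n_subjects, n_features = 40, 12
ppg_features = rng.normal(size=(n_subjects, n_features))
ppg_features[:, 0] *= 3.0          # give two features extra variance so PCA keeps them
ppg_features[:, 3] *= 2.0
# synthetic targets loosely tied to two directions of the feature space
sbp = 120 + 8 * ppg_features[:, 0] - 5 * ppg_features[:, 3] + rng.normal(0, 2, n_subjects)
dbp = 80 + 4 * ppg_features[:, 0] - 3 * ppg_features[:, 3] + rng.normal(0, 2, n_subjects)

model = make_pipeline(PCA(n_components=2), LinearRegression())
model.fit(ppg_features, np.column_stack([sbp, dbp]))
predicted = model.predict(ppg_features)

mape = np.mean(np.abs(predicted - np.column_stack([sbp, dbp]))
               / np.column_stack([sbp, dbp]), axis=0) * 100
print(f"in-sample percentage error: SBP {mape[0]:.2f}%, DBP {mape[1]:.2f}%")
```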

Keywords: Blood pressure, noninvasive optical system, PCA, continuous monitoring.
