Search results for: Error probability
298 Enhancing Spatial Interpolation: A Multi-Layer Inverse Distance Weighting Model for Complex Regression and Classification Tasks in Spatial Data Analysis
Authors: Yakin Hajlaoui, Richard Labib, Jean-François Plante, Michel Gamache
Abstract:
This study presents the Multi-Layer Inverse Distance Weighting Model (ML-IDW), inspired by the mathematical formulation of both multi-layer neural networks (ML-NNs) and the Inverse Distance Weighting (IDW) model. ML-IDW leverages the processing capabilities of ML-NNs, characterized by compositions of learnable non-linear functions applied to input features, and incorporates IDW's ability to learn anisotropic spatial dependencies, presenting a promising solution for nonlinear spatial interpolation and learning from complex spatial data. We employ gradient descent and backpropagation to train ML-IDW. The performance of the proposed model is compared against conventional spatial interpolation models such as Kriging and standard IDW on regression and classification tasks using simulated spatial datasets of varying complexity. Our results highlight the efficacy of ML-IDW, particularly in handling complex spatial datasets, exhibiting lower mean square error in regression and higher F1 scores in classification.
Keywords: Deep Learning, Multi-Layer Neural Networks, Gradient Descent, Spatial Interpolation, Inverse Distance Weighting.
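For context, classical IDW predicts at an unobserved location by a distance-weighted average of the observed samples, which is the baseline that ML-IDW generalizes with learnable layers. Below is a minimal NumPy sketch of standard IDW (the power parameter p and the toy data are illustrative assumptions, not the authors' ML-IDW):

```python
import numpy as np

def idw_interpolate(coords, values, targets, p=2.0, eps=1e-12):
    """Classical inverse distance weighting: each prediction is a
    weighted average of observed values, with weights ~ 1/d^p."""
    d = np.linalg.norm(targets[:, None, :] - coords[None, :, :], axis=2)
    w = 1.0 / (d ** p + eps)          # eps guards against zero distance
    return (w @ values) / w.sum(axis=1)

# Toy example: interpolate a smooth field from 50 scattered samples.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 1, size=(50, 2))
values = np.sin(2 * np.pi * coords[:, 0]) + coords[:, 1]
targets = rng.uniform(0, 1, size=(5, 2))
print(idw_interpolate(coords, values, targets))
```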
297 Analysis of Residual Strain and Stress Distributions in High Speed Milled Specimens using an Indentation Method
Authors: Felipe V. Díaz, Claudio A. Mammana, Armando P. M. Guidobono, Raúl E. Bolmaro
Abstract:
Through an analysis of the residual strain and stress distributions obtained at the surface of high speed milled specimens of AA 6082-T6 aluminium alloy, the performance of an improved indentation method is evaluated. This method integrates a special indentation device into a universal measuring machine. The device introduces elongated indents, which diminishes the absolute error of measurement. Notably, the method avoids the need for specialized equipment and highly qualified personnel, along with their inherent high costs. In this work, the cutting tool geometry and high speed parameters are selected to introduce reduced plastic damage. By varying the depth of cut, the stability of the shapes adopted by the residual strain and stress distributions is evaluated. The results show that the strain and stress distributions remain unchanged, compressive, and small. Moreover, these distributions reveal a similar asymmetry when the gradients corresponding to conventional and climb cutting zones are compared.
Keywords: Residual strain, residual stress, high speed milling, indentation methods, aluminium alloys.
296 Probabilistic Crash Prediction and Prevention of Vehicle Crash
Authors: Lavanya Annadi, Fahimeh Jafari
Abstract:
Transportation brings immense benefits to society, but it also has its costs: the cost of infrastructure, personnel, and equipment, but also the loss of life and property in traffic accidents, delays in travel due to traffic congestion, and various indirect costs. This research aims to predict the crash probability of vehicles in the United States using machine learning, considering natural and structural factors and excluding behavioral causes such as overspeeding. These factors range from meteorological elements, such as weather conditions, precipitation, visibility, wind speed, wind direction, temperature, pressure, and humidity, to human-made road structure components, such as bumps, roundabouts, no-exit roads, turning loops, and give-way junctions. The probabilities are categorized into ten distinct classes. All predictions are based on multiclass classification, a supervised learning setting. The study considers all crashes across all states collected by the US government. The crash probability was determined by employing the multinomial expected value, and a classification label was assigned accordingly. We applied three classification models: multiclass logistic regression, random forest, and XGBoost. The numerical results show that XGBoost achieved a 75.2% accuracy rate, which indicates the extent to which natural and structural factors contribute to crashes. The paper provides in-depth insights through exploratory data analysis.
Keywords: Road safety, crash prediction, exploratory analysis, machine learning.
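As a sketch of the multiclass setup described above, the snippet below trains two of the named classifier families on synthetic stand-in data with ten classes (scikit-learn is assumed here as a stand-in for the authors' pipeline; the features and any resulting accuracy are illustrative, not the paper's results):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for weather/road-structure features and the ten
# crash-probability classes described in the abstract.
X, y = make_classification(n_samples=5000, n_features=12, n_informative=8,
                           n_classes=10, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=200, random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, accuracy_score(y_te, model.predict(X_te)))
```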
295 Assessment and Uncertainty Analysis of ROSA/LSTF Test on Pressurized Water Reactor 1.9% Vessel Upper Head Small-Break Loss-of-Coolant Accident
Authors: Takeshi Takeda
Abstract:
An experiment utilizing the ROSA/LSTF (rig of safety assessment/large-scale test facility) simulated a 1.9% vessel upper head small-break loss-of-coolant accident with an accident management (AM) measure under total failure of the high-pressure injection system of the emergency core cooling system in a pressurized water reactor. Steam generator (SG) secondary-side depressurization, the AM measure, was started by fully opening the relief valves in both SGs when the maximum core exit temperature rose to 623 K. A large increase took place in the cladding surface temperature of the simulated fuel rods on account of a late and slow response of the core exit thermocouples during core boil-off. The author analyzed the LSTF test with reference to the matrix of an integral effect test for the validation of a thermal-hydraulic system code. Problems remained in predicting the primary coolant distribution and the core exit temperature with the RELAP5/MOD3.3 code. The uncertainty analysis results of the RELAP5 code confirmed that the sample size used in the order-statistics approach influences both the peak cladding temperature obtained with 95% probability at a 95% confidence level and the Spearman's rank correlation coefficient.
Keywords: LSTF, LOCA, uncertainty analysis, RELAP5.
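The 95%-probability/95%-confidence statement above is conventionally tied to an order-statistics sample size through Wilks' formula. As a worked example (a generic textbook calculation, not taken from the paper), the first-order version yields the familiar minimum of 59 code runs:

```python
# First-order Wilks formula: with n random code runs, the sample maximum
# bounds the 95th percentile with 95% confidence once 1 - 0.95**n >= 0.95.
beta, gamma = 0.95, 0.95   # coverage probability, confidence level
n = 1
while 1 - beta**n < gamma:
    n += 1
print(n)  # 59 -> the classic minimum sample size for a 95%/95% estimate
```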
294 Trade Openness and Its Effects on Economic Growth in Selected South Asian Countries: A Panel Data Study
Authors: Samra Bajwa, Muhammad W. Siddiqi
Abstract:
The study investigates the causal link between trade openness and economic growth for four South Asian countries over the periods 1972-1985 and 1986-2007, to examine the scenario before and after the implementation of SAARC. Panel cointegration and FMOLS techniques are employed for the short-run and long-run estimates. In 1972-85, short-run unidirectional causality from GDP to openness is found, whereas in 1986-2007 there exists bi-directional causality between GDP and openness. The long-run elasticity between GDP and openness carries a negative sign in 1972-85, indicating a negative long-run relationship, while in 1986-2007 the elasticity is positive, indicating positive causation between GDP and openness. It can therefore be concluded that the overall situation of the selected countries improved after the implementation of SAARC. Moreover, the long-run coefficient on the error term suggests that short-term equilibrium adjustments are driven by adjustment back to the long-run equilibrium.
Keywords: Causality, Economic Growth, Panel Co-integration, SAARC, Trade Openness.
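The short-run causality tests described above are Granger-type tests. The sketch below runs one on synthetic stand-in series with statsmodels (the data-generating process and lag order are illustrative assumptions, not the study's panel estimator):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic stand-in: openness "leads" GDP by two periods.
rng = np.random.default_rng(1)
openness = rng.normal(size=202).cumsum()
gdp = openness[:-2] + rng.normal(scale=0.5, size=200)
data = pd.DataFrame({"gdp": gdp, "openness": openness[2:]})

# Null hypothesis: the second column does NOT Granger-cause the first.
grangercausalitytests(data[["gdp", "openness"]], maxlag=3)
```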
293 Reconstitute Information about Discontinued Water Quality Variables in the Nile Delta Monitoring Network Using Two Record Extension Techniques
Authors: Bahaa Khalil, Taha B. M. J. Ouarda, André St-Hilaire
Abstract:
The world economic crises and budget constraints have caused authorities, especially those in developing countries, to rationalize water quality monitoring activities. Rationalization consists of reducing the number of monitoring sites, the number of samples, and/or the number of water quality variables measured. The reduction in water quality variables is usually based on correlation. If two variables exhibit high correlation, it is an indication that some of the information produced may be redundant. Consequently, one variable can be discontinued while the other continues to be measured. Later, the ordinary least squares (OLS) regression technique is employed to reconstitute information about the discontinued variable by using the continuously measured one as an explanatory variable. In this paper, two record extension techniques are employed to reconstitute information about discontinued water quality variables: OLS and the Line of Organic Correlation (LOC). An empirical experiment is conducted using water quality records from the Nile Delta water quality monitoring network in Egypt. The record extension techniques are compared for their ability to predict different statistical parameters of the discontinued variables. Results show that OLS is better at estimating individual water quality records; however, the results indicate an underestimation of the variance in the extended records. The LOC technique is superior in preserving the characteristics of the entire distribution and avoids underestimating the variance. It is concluded from this study that OLS can be used for the substitution of missing values, while LOC is preferable for inferring statements about the probability distribution.
Keywords: Record extension, record augmentation, monitoring networks, water quality indicators.
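The two estimators differ only in their slope: OLS uses r·sy/sx while LOC uses sign(r)·sy/sx, which is why OLS shrinks the extended-record variance by a factor of r² while LOC preserves it. A minimal NumPy sketch on synthetic data (the series are illustrative, not the Nile Delta records):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(10, 2, size=300)               # continuously measured variable
y = 1.5 * x + rng.normal(0, 1.5, size=300)    # discontinued variable

r = np.corrcoef(x, y)[0, 1]
sx, sy = x.std(), y.std()

b_ols = r * sy / sx                           # OLS slope
b_loc = np.sign(r) * sy / sx                  # LOC (organic correlation) slope

for name, b in (("OLS", b_ols), ("LOC", b_loc)):
    a = y.mean() - b * x.mean()
    yhat = a + b * x
    print(name, "variance ratio:", yhat.var() / y.var())
# OLS shrinks the variance by a factor r**2; LOC preserves it by design.
```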
292 A Hybrid Feature Selection by Resampling, Chi squared and Consistency Evaluation Techniques
Authors: Amir-Massoud Bidgoli, Mehdi Naseri Parsa
Abstract:
In this paper, a combined feature selection method is proposed that takes advantage of sample domain filtering, resampling, and feature subset evaluation to reduce the dimensionality of huge datasets and select reliable features. The method utilizes both the feature space and the sample domain to improve the feature selection process, and it uses a combination of Chi squared and Consistency attribute evaluation methods to seek reliable features. The method consists of two phases: the first phase filters and resamples the sample domain, and the second phase adopts a hybrid procedure to find the optimal feature space by applying Chi squared, Consistency subset evaluation, and genetic search. Experiments on datasets of various sizes from the UCI Repository of Machine Learning databases show that the performance of five classifiers (Naïve Bayes, Logistic, Multilayer Perceptron, Best First Decision Tree, and JRIP) improves simultaneously and that the classification error for these classifiers decreases considerably. The experiments also show that this method outperforms other feature selection methods.
Keywords: Feature selection, resampling, reliable features, Consistency Subset Evaluation.
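As a sketch of the Chi squared half of the hybrid evaluation, the snippet below ranks features by the chi-squared statistic with scikit-learn (an assumed stand-in for the paper's pipeline; the dataset, k, and classifier are illustrative, and the resampling, consistency, and genetic-search stages are not shown):

```python
from sklearn.datasets import load_digits
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB

X, y = load_digits(return_X_y=True)   # chi2 requires non-negative features

selector = SelectKBest(chi2, k=32).fit(X, y)   # keep the 32 top-ranked features
X_sel = selector.transform(X)

print("all features :", cross_val_score(MultinomialNB(), X, y, cv=5).mean())
print("chi2 subset  :", cross_val_score(MultinomialNB(), X_sel, y, cv=5).mean())
```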
291 Macular Ganglion Cell Inner Plexiform Layer Thinning in Patients with Visual Field Defect that Respects the Vertical Meridian
Authors: Hye-Young Shin, Chan Kee Park
Abstract:
Background: To compare the thinning patterns of the ganglion cell-inner plexiform layer (GCIPL) and peripapillary retinal nerve fiber layer (pRNFL) as measured using Cirrus high-definition optical coherence tomography (HD-OCT) in patients with visual field (VF) defects that respect the vertical meridian. Methods: Twenty eyes of eleven patients with VF defects that respect the vertical meridian were enrolled retrospectively. The thicknesses of the macular GCIPL and pRNFL were measured using Cirrus HD-OCT. The 5% and 1% thinning area index (TAI) was calculated as the proportion of abnormally thin sectors at the 5% and 1% probability level within the area corresponding to the affected VF. The 5% and 1% TAI were compared between the GCIPL and pRNFL measurements. Results: The color-coded GCIPL deviation map showed a characteristic vertical thinning pattern of the GCIPL, which is also seen in the VF of patients with brain lesions. The 5% and 1% TAI were significantly higher in the GCIPL measurements than in the pRNFL measurements (all P < 0.01). Conclusions: Macular GCIPL analysis clearly visualized a characteristic topographic pattern of retinal ganglion cell (RGC) loss in patients with VF defects that respect the vertical meridian, unlike pRNFL measurements. Macular GCIPL measurements provide more valuable information than pRNFL measurements for detecting the loss of RGCs in patients with retrograde degeneration of the optic nerve fibers.
Keywords: Brain lesion, macular ganglion cell-inner plexiform layer, spectral-domain optical coherence tomography.
290 A Review and Comparative Analysis on Cluster Ensemble Methods
Authors: S. Sarumathi, P. Ranjetha, C. Saraswathy, M. Vaishnavi, S. Geetha
Abstract:
Clustering is an unsupervised learning technique in data mining that aggregates data objects into meaningful classes such that intra-cluster similarity is maximized and inter-cluster similarity is minimized. However, no single clustering algorithm proves to be the most effective at producing the best result in all settings. As a result, the cluster ensemble approach has emerged to address this problem, and it has proved a successful approach to the cluster analysis issue. The cluster ensemble's main goal is to combine individual clustering solutions in a way that preserves precision while improving on the quality of any single clustering. Because of the massive and rapid creation of new approaches in the field of data mining, the ongoing interest in inventing novel algorithms necessitates a thorough examination of current techniques and future innovation. This paper presents a comparative analysis of various cluster ensemble approaches, including their methodologies, formal working processes, and standard accuracy and error rates. The community of clustering practitioners will benefit from this exploratory and clear study, which will aid in determining the most appropriate solution to the problem at hand.
Keywords: Clustering, cluster ensemble methods, consensus function, data mining, unsupervised learning.
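A common concrete instance of the consensus-function idea reviewed here is evidence accumulation: run a base clusterer many times, count how often each pair of points co-clusters, and then cluster the resulting co-association matrix. A minimal sketch (k-means with random restarts and average-linkage consensus are illustrative choices, not a specific method from the survey):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=150, centers=3, random_state=0)
n = len(X)

# Co-association matrix: how often each pair of points lands in the same
# cluster across an ensemble of k-means runs with random restarts.
coassoc = np.zeros((n, n))
for seed in range(20):
    labels = KMeans(n_clusters=3, n_init=1, random_state=seed).fit_predict(X)
    coassoc += labels[:, None] == labels[None, :]
coassoc /= 20

# Consensus function: average-linkage clustering of the co-association
# distances yields the final ensemble partition.
dist = squareform(1 - coassoc, checks=False)
consensus = fcluster(linkage(dist, method="average"), t=3, criterion="maxclust")
print(consensus[:10])
```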
289 Relative Radiometric Correction of Cloudy Multitemporal Satellite Imagery
Authors: Seema Biday, Udhav Bhosle
Abstract:
Repeated observation of a given area over time yields potential for many forms of change detection analysis. These repeated observations are confounded in terms of radiometric consistency due to changes in sensor calibration over time, differences in illumination, observation angles, and variation in atmospheric effects. This paper demonstrates the applicability of an empirical relative radiometric normalization method to a set of multitemporal cloudy images acquired by the Resourcesat1 LISS III sensor. The objective of this study is to detect and remove cloud cover and to normalize the images radiometrically. Cloud detection is achieved using the Average Brightness Threshold (ABT) algorithm. The detected cloud is removed and replaced with data from another image of the same area. After cloud removal, the proposed normalization method is applied to reduce the radiometric influence caused by non-surface factors. This process identifies landscape elements whose reflectance values are nearly constant over time, i.e., the subset of non-changing pixels is identified using a frequency-based correlation technique. The quality of the radiometric normalization is statistically assessed by the R² value and the mean square error (MSE) between each pair of analogous bands.
Keywords: Correlation, frequency domain, multitemporal, relative radiometric correction.
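Once the non-changing pixels are identified, relative normalization reduces to fitting a linear gain/offset between the subject and reference bands and applying the inverse mapping. A minimal sketch with synthetic stand-in data (the gain, offset, and noise levels are illustrative assumptions, not LISS III values):

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stand-in: a reference band and a subject band related by a
# linear radiometric shift (gain/offset) plus noise, over invariant pixels.
reference = rng.uniform(20, 200, size=10000)
subject = 0.8 * reference + 15 + rng.normal(0, 3, size=10000)

# Regress reference on subject over the invariant pixels, then apply the
# fitted mapping to normalize the subject image to the reference.
gain, offset = np.polyfit(subject, reference, 1)
normalized = gain * subject + offset

mse = np.mean((normalized - reference) ** 2)
r2 = np.corrcoef(normalized, reference)[0, 1] ** 2
print(f"MSE={mse:.2f}, R^2={r2:.3f}")
```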
288 Adaptation Learning Speed Control for a High-Performance Induction Motor using Neural Networks
Authors: M. Zerikat, S. Chekroun
Abstract:
This paper proposes an effective adaptation learning algorithm based on artificial neural networks for speed control of an induction motor assumed to operate in a high-performance drives environment. The control structure consists of a neural network controller and an algorithm for changing the NN weights so that the motor speed accurately tracks the reference command. The paper also uses a realistic and practical scheme to estimate and adaptively learn the noise content in the speed-load torque characteristic of the motor. The proposed controller is verified through a laboratory implementation and through computer simulations in Matlab. The tracking behaviour is also tested using different types of reference signals. The performance and robustness of the proposed control scheme were evaluated under a variety of operating conditions of the induction motor drive. The results demonstrate the effectiveness of the proposed control scheme: performance in terms of both steady-state speed error and dynamic response was found to be excellent, with no overshoot.
Keywords: Electric drive, induction motor, speed control, adaptive control, neural network, high performance.
287 Coding based Synchronization Algorithm for Secondary Synchronization Channel in WCDMA
Authors: Deng Liao, Dongyu Qiu, Ahmed K. Elhakeem
Abstract:
A new code synchronization algorithm is proposed in this paper for the secondary cell-search stage in wideband CDMA systems. Rather than using the Cyclically Permutable (CP) code in the Secondary Synchronization Channel (S-SCH) to simultaneously determine the frame boundary and scrambling code group, the new synchronization algorithm implements the same function with less system complexity and a shorter Mean Acquisition Time (MAT). The Secondary Synchronization Code (SSC) is redesigned by splitting it into two sub-sequences. We treat the scrambling code group information as data bits and apply simple time-diversity BCH coding for further reliability, which avoids involved and time-costly Reed-Solomon (RS) code computations and comparisons. Analysis and simulation results show that the Synchronization Error Rate (SER) yielded by the new algorithm in Rayleigh fading channels is close to that of the conventional algorithm in the standard. The new synchronization algorithm reduces system complexity, shortens the average cell-search time, and can be implemented in the slot-based cell-search pipeline. By exploiting antenna diversity and pipelining the correlation processes, the algorithm also lends itself to multiple-antenna systems.
Keywords: WCDMA cell-search, synchronization algorithm, secondary synchronization channel, antenna diversity.
286 Fast Search Method for Large Video Database Using Histogram Features and Temporal Division
Authors: Feifei Lee, Qiu Chen, Koji Kotani, Tadahiro Ohmi
Abstract:
In this paper, we propose an improved fast search algorithm that uses combined histogram features and a temporal division method to find short MPEG video clips in a large video database. Two types of histogram features are combined to generate more robust features. The first is based on the adjacent pixel intensity difference quantization (APIDQ) algorithm, which had previously been applied reliably to human face recognition; an APIDQ histogram is utilized as the feature vector of each frame image. The other is an ordinal feature, which is robust to color distortion. Combined with active search [4], a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on 6 hours of video by searching for 200 given MPEG video clips, each 30 seconds long. Experimental results show that the proposed algorithm can detect a similar video clip in merely 120 ms with an Equal Error Rate (EER) of 1%, which is more accurate and robust than the conventional fast video search algorithm.
Keywords: Fast search, adjacent pixel intensity difference quantization (APIDQ), DC image, histogram feature.
285 Computer Aided Design of Reshaping Process of Circular Pipes into Square Pipes
Authors: Parviz Alinezhad, Ali Sanati, Koorosh Naser Momtahen
Abstract:
Square pipes (pipes with square cross sections) are used for various industrial purposes, such as machine structure components and housing/building elements. Their utilization is extending rapidly and widely; hence, the output of such pipes is increasing and new application fields are continually developing. Due to various recent demands, the products have to satisfy exacting specifications with high dimensional accuracy. The design of the reshaping process for pipes with square cross sections, however, is performed by trial and error, based on the expert's experience. In this paper, a computer-aided simulation is developed based on the 2-D elastic-plastic method with consideration of shear deformation to analyze the reshaping process. The effect of various parameters, such as the diameter of the circular pipe and the mechanical properties of the metal, on product dimensions and quality can be evaluated using this simulation. Moreover, the design of the reshaping process, including determination of the cross-section shrinkage, the necessary number of stands, the roll radii, and the pipe height at each stand, is investigated. Further, it is shown that there is good agreement between the results of the design method and the experimental results.
Keywords: Circular pipes, square pipes, shear deformation, reshaping process, numerical simulation.
284 A Multigrid Approach for Three-Dimensional Inverse Heat Conduction Problems
Authors: Jianhua Zhou, Yuwen Zhang
Abstract:
A two-step multigrid approach is proposed to solve the inverse heat conduction problem in a 3-D object under laser irradiation. In the first step, the location of the laser center is estimated using a coarse, uniform grid system. In the second step, the front-surface temperature is recovered with good accuracy using a multiple grid system in which a fine mesh is used at the laser spot center to capture the drastic temperature rise in this region, while a coarse mesh is employed in the peripheral region to reduce the total number of sensors required. The effectiveness of the two-step approach and the multiple grid system is demonstrated by the illustrative inverse solutions. If the measurement data for the temperature and heat flux on the back surface do not contain random error, the proposed multigrid approach can yield more accurate inverse solutions. When the back-surface measurement data contain random noise, accurate inverse solutions cannot be obtained if both temperature and heat flux are measured on the back surface.
Keywords: Conduction, inverse problems, conjugate gradient method, laser.
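The conjugate gradient method named in the keywords is typically the linear-system workhorse inside such inverse solvers. A minimal, self-contained sketch for a symmetric positive-definite system (the random test matrix is illustrative; this is the generic algorithm, not the paper's 3-D solver):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Minimal conjugate gradient solver for symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    d = r.copy()             # search direction
    rs = r @ r
    for _ in range(max_iter or len(b)):
        Ad = A @ d
        alpha = rs / (d @ Ad)
        x += alpha * d
        r -= alpha * Ad
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs) * d   # conjugate update of the direction
        rs = rs_new
    return x

# Quick check on a random SPD system.
rng = np.random.default_rng(4)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.normal(size=50)
print(np.allclose(conjugate_gradient(A, b), np.linalg.solve(A, b)))
```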
283 Simulation and Design of Single Fed Circularly Polarized Triangular Microstrip Antenna with Wide Band Tuning Stub
Authors: R. Irani, A. Ghavidel, F. Hodjat Kashani
Abstract:
Recently, several designs of single-fed circularly polarized microstrip antennas have been studied, but relatively few designs achieve circular polarization using a triangular microstrip antenna. Existing designs of single-fed circularly polarized triangular microstrip antennas typically use an equilateral triangular patch with a slit or a horizontal slot on the patch, or add a narrow-band stub on an edge or a vertex of the triangular patch. Using a narrow-band tuning stub in the middle of an edge of the triangle makes it easier to compensate for possible fabrication errors and substrate-material variations simply by adjusting the stub length. The disadvantage of this method, however, is the long stub (approximately 1/3 of the triangle edge length). In this paper, a wide-band stub is applied instead; with this approach, the stub length is reduced to around 1/10 of the triangle edge, and changing the aperture angle of the stub provides additional freedom for designing and producing a circularly polarized wave.
Keywords: Circular polarization, microstrip antenna, single feed, wide band stub.
282 Detection of Ultrasonic Images in the Presence of a Random Number of Scatterers: A Statistical Learning Approach
Authors: J. P. Dubois, O. M. Abdul-Latif
Abstract:
Support Vector Machine (SVM) is a statistical learning tool that was initially introduced by Vapnik in 1979 and later developed into the more general concept of structural risk minimization (SRM). SVMs play an increasing role in detection problems across various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM was applied to the detection of medical ultrasound images in the presence of partially developed speckle noise. The simulation was performed for single-look and multi-look speckle models to give a complete overview of, and insight into, the proposed SVM-based detector. The structure of the SVM was derived and applied to clinical ultrasound images, and its performance in terms of the mean square error (MSE) metric was calculated. We show that the SVM-detected ultrasound images have a very low MSE and are of good quality, and that the quality of the processed speckled images improves for the multi-look model. Furthermore, the contrast of the SVM-detected images was higher than that of the original non-noisy images, indicating that the SVM approach increased the distance between the pixel reflectivity levels (the detection hypotheses) in the original images.
Keywords: LS-SVM, medical ultrasound imaging, partially developed speckle, multi-look model.
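As a rough stand-in for the LS-SVM detector (scikit-learn has no LS-SVM, so an RBF support vector regressor is assumed here), the sketch below fits an SVM to a signal corrupted by multiplicative, speckle-like noise and reports the MSE improvement; the noise model and hyperparameters are illustrative:

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.svm import SVR

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 400)
clean = np.sin(2 * np.pi * 3 * t)                    # clean reflectivity profile
speckled = clean * rng.gamma(4, 1 / 4, size=t.size)  # unit-mean multiplicative noise

# Fit an RBF support vector regressor to the noisy samples and use its
# prediction as the despeckled estimate.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(t[:, None], speckled)
estimate = svr.predict(t[:, None])

print("MSE noisy   :", mean_squared_error(clean, speckled))
print("MSE SVM est.:", mean_squared_error(clean, estimate))
```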
281 SVM-Based Detection of SAR Images in Partially Developed Speckle Noise
Authors: J. P. Dubois, O. M. Abdul-Latif
Abstract:
Support Vector Machine (SVM) is a statistical learning tool that was initially introduced by Vapnik in 1979 and later developed into the more general concept of structural risk minimization (SRM). SVMs play an increasing role in detection problems across various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM was applied to the detection of SAR (synthetic aperture radar) images in the presence of partially developed speckle noise. The simulation was performed for single-look and multi-look speckle models to give a complete overview of, and insight into, the proposed SVM-based detector. The structure of the SVM was derived and applied to real SAR images, and its performance in terms of the mean square error (MSE) metric was calculated. We show that the SVM-detected SAR images have a very low MSE and are of good quality, and that the quality of the processed speckled images improves for the multi-look model. Furthermore, the contrast of the SVM-detected images was higher than that of the original non-noisy images, indicating that the SVM approach increased the distance between the pixel reflectivity levels (the detection hypotheses) in the original images.
Keywords: Least Squares Support Vector Machine, Synthetic Aperture Radar, partially developed speckle, multi-look model.
280 Image Features Comparison-Based Position Estimation Method Using a Camera Sensor
Authors: Jinseon Song, Yongwan Park
Abstract:
In this paper, we propose a method that estimates a user's position from a single camera, based on a pre-built image database. Previous positioning systems calculate distance from the arrival time of signals, as in GPS (Global Positioning System) and RF (Radio Frequency) systems. However, these methods have a weakness: they suffer large error ranges under signal interference. Our solution is to estimate position with a camera sensor. A single camera, though, makes it difficult to obtain relative position data, and a stereo camera struggles to provide real-time position data because of the large volume of image data. First, we build an image database, using a single camera, of the space where the positioning service is provided. Next, we judge similarity by matching the image transmitted by the user against the database images. Finally, we take the position of the most similar database image as the user's position. We verified the proposed method experimentally in real indoor and outdoor environments. The proposed method offers a wide positioning range and can verify not only the user's position but also the direction.
Keywords: Positioning, distance, camera, features, SURF (Speeded-Up Robust Features), database, estimation.
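A minimal sketch of the database-matching step with OpenCV (ORB is used here as a freely available stand-in for SURF, and the synthesized "location" images and the mean-distance matching score are illustrative assumptions, not the authors' setup):

```python
import cv2
import numpy as np

def orb_descriptors(img):
    orb = cv2.ORB_create()
    _, des = orb.detectAndCompute(img, None)
    return des

# Hypothetical database: one image per known location (synthesized here;
# in practice these are photos captured across the service area).
rng = np.random.default_rng(6)
database = {f"location_{i}": rng.integers(0, 255, (240, 320), dtype=np.uint8)
            for i in range(3)}
query = database["location_1"].copy()      # the user's transmitted image

bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def match_score(des_q, des_db):
    """Lower is better: mean Hamming distance of cross-checked matches."""
    if des_q is None or des_db is None:
        return np.inf
    matches = bf.match(des_q, des_db)
    return np.mean([m.distance for m in matches]) if matches else np.inf

des_q = orb_descriptors(query)
best = min(database, key=lambda k: match_score(des_q, orb_descriptors(database[k])))
print("estimated position:", best)         # -> location_1
```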
279 Formant Tracking Linear Prediction Model using HMMs for Noisy Speech Processing
Authors: Zaineb Ben Messaoud, Dorra Gargouri, Saida Zribi, Ahmed Ben Hamida
Abstract:
This paper presents a formant-tracking linear prediction (FTLP) model for speech processing in noise. The main focus of this work is the detection of formant trajectories based on Hidden Markov Models (HMMs), for improved formant estimation in noise. The approach proposed in this paper provides a systematic framework for modelling and utilizing a time-sequence of spectral peaks that satisfies continuity constraints on the parameters; within this framework, the peaks are modelled by the LP parameters. The formant-tracking LP model estimation is composed of three stages: (1) a pre-cleaning multi-band spectral subtraction stage to reduce the effect of residual noise on the formants; (2) an estimation stage, where an initial estimate of the LP model of speech is obtained for each frame; and (3) formant classification using probability models of the formants and Viterbi decoders. Evaluation of the estimated formant-tracking LP model in a Gaussian white noise background demonstrates that the proposed combination of the initial noise reduction stage with formant tracking and variable-order LPC analysis results in a significant reduction in errors and distortions. Performance was evaluated on noisy natural vowels extracted from French and English vocabulary speech signals at an SNR of 10 dB; in each case, the estimated formants are compared to reference formants.
Keywords: Formant estimation, HMM, multi-band spectral subtraction, variable-order LPC coding, white Gaussian noise.
278 Multi-Objective Multi-Mode Resource-Constrained Project Scheduling Problem by Preemptive Fuzzy Goal Programming
Authors: Phruksaphanrat B.
Abstract:
This research proposes a preemptive fuzzy goal programming model for the multi-objective multi-mode resource-constrained project scheduling problem. The objectives of the problem are minimization of the total time and the total cost of the project. The objective in a multi-mode resource-constrained project scheduling problem is often minimization of the makespan; however, time and cost should be considered at the same time, with different levels of priority. Moreover, not all cost elements of a project are included in the conventional cost objective function, and an incomplete total project cost causes errors in the resulting project schedule. In this research, preemptive fuzzy goal programming is presented to solve the multi-objective multi-mode resource-constrained project scheduling problem. It finds a compromise solution to the problem and is also flexible enough to be adjusted to generate a variety of alternative solutions.
Keywords: Multi-mode resource constrained project scheduling problem, Fuzzy set, Goal programming, Preemptive fuzzy goal programming.
277 Power Production Performance of Different Wave Energy Converters in the Southwestern Black Sea
Authors: Ajab G. Majidi, Bilal Bingölbali, Adem Akpınar
Abstract:
This study investigates the amount of energy (the economic wave energy potential) that can be obtained from existing wave energy converters in the high-potential region of the Black Sea, and the converters' performance at different depths in the region. The data needed for this purpose were obtained using the calibrated, nested-layered SWAN wave model (version 41.01AB), forced with Climate Forecast System Reanalysis (CFSR) winds from 1979 to 2009. A wave dataset with a time interval of 2 hours was accumulated for a sub-grid domain around Karaburun beach in Arnavutkoy, a district of Istanbul. Annual sea-state characteristic matrices were calculated over 31 years for five different depths along a line perpendicular to the coastline. From the power matrices of different wave energy converter systems and the characteristic matrices for each possible installation depth, probability distribution tables of the significant wave height against the mean wave period or wave energy period were calculated. Using the relationship between these tables, the energy that each wave energy converter system can produce at each depth under the present wave climate was determined. The economically feasible potential of the coastal zone is thus revealed, and the effect of depth on the energy converter systems is presented. The Oceantic at 50, 75, and 100 m depths and the Oyster at 5 and 25 m depths show the best performance. Over the 31-year period, 1998 was the most dynamic year and 1989 the least.
Keywords: Annual power production, Black Sea, efficiency, power production performance, wave energy converter.
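The annual production estimate described above is, in essence, an element-wise product of a sea-state occurrence table and a converter power matrix, summed over all (Hs, Te) bins. A toy numerical sketch (both matrices are invented for illustration, not a real converter's data):

```python
import numpy as np

# Hypothetical 3x4 example: occurrence probabilities of sea states binned
# by significant wave height Hs (rows) and energy period Te (columns),
# and a converter power matrix (kW) on the same bins.
occurrence = np.array([[0.10, 0.15, 0.10, 0.05],
                       [0.08, 0.20, 0.12, 0.05],
                       [0.02, 0.05, 0.05, 0.03]])   # sums to 1.0
power_kw   = np.array([[ 20,  45,  60,  50],
                       [ 80, 150, 190, 160],
                       [140, 260, 320, 280]])

mean_power_kw = np.sum(occurrence * power_kw)   # expected power output
annual_mwh = mean_power_kw * 8766 / 1000        # average year has 8766 hours
print(f"mean power {mean_power_kw:.1f} kW, annual {annual_mwh:.0f} MWh")
```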
276 An Improved Learning Algorithm based on the Conjugate Gradient Method for Back Propagation Neural Networks
Authors: N. M. Nawi, M. R. Ransing, R. S. Ransing
Abstract:
The conjugate gradient optimization algorithm, usually used for nonlinear least squares, is presented and combined with the modified back propagation algorithm, yielding a new fast training algorithm for multilayer perceptrons (MLPs), called CGFR/AG. The approach presented in the paper consists of three steps: (1) modification of the standard back propagation algorithm by introducing a gain variation term in the activation function; (2) calculation of the gradient of the error with respect to the weights and gain values; and (3) determination of the new search direction by exploiting the information calculated by gradient descent in step (2) as well as the previous search direction. The proposed method improves the training efficiency of the back propagation algorithm by adaptively modifying the initial search direction. The performance of the proposed method is demonstrated by comparison with the conjugate gradient algorithm from the neural network toolbox on the chosen benchmarks. The results show that the number of iterations required by the proposed method to converge is less than 20% of that required by the standard conjugate gradient and neural network toolbox algorithms.
Keywords: Back-propagation, activation function, conjugate gradient, search direction, gain variation.
275 Statically Fused Unbiased Converted Measurements Kalman Filter
Authors: Zhengkun Guo, Yanbin Li, Wenqing Wang, Bo Zou
Abstract:
Active radar and sonar systems often report Doppler measurements in addition to position measurements such as range and bearing. A tracker can perform better by making full use of the Doppler measurements. However, due to the high nonlinearity of the Doppler measurements with respect to the target state in Cartesian coordinates, those measurements are not always fully exploited. This paper focuses on processing Doppler measurements together with position measurements in polar coordinates. The Statically Fused Converted Position and Doppler Measurements Kalman Filter (SF-CMKF) with additive debiased measurement conversion has been presented previously. However, the exact compensation for the bias of the measurement conversion is multiplicative and depends on the statistics of the cosine of the angle measurement errors. As a result, the consistency and performance of the SF-CMKF may be suboptimal in large-angle-error situations. In this paper, multiplicative unbiased position and Doppler measurement conversions for two-dimensional (polar-to-Cartesian) tracking are derived, and the SF-CMKF is improved by using those conversions. Monte Carlo simulations are presented to demonstrate the statistical consistency of the multiplicative unbiased conversion and the superior performance of the modified SF-CMKF (SF-UCMKF).
Keywords: Measurement conversion, Doppler, Kalman filter, estimation, tracking.
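To see why the compensation is multiplicative: for Gaussian bearing noise with variance σθ², E[cos θ̃] = exp(-σθ²/2) =: λ, so the classical conversion x = r·cos θ is biased low by the factor λ, and dividing by λ removes the bias. A small Monte Carlo sketch of this position-conversion step (parameters are illustrative; the Doppler conversion and the filter itself are not shown):

```python
import numpy as np

rng = np.random.default_rng(7)
r_true, theta_true = 1000.0, np.deg2rad(30)
sigma_th = np.deg2rad(3)

r = r_true + rng.normal(0, 5, size=200_000)              # noisy range
th = theta_true + rng.normal(0, sigma_th, size=200_000)  # noisy bearing

naive_x = r * np.cos(th)             # classical polar-to-Cartesian conversion
lam = np.exp(-sigma_th**2 / 2)       # E[cos(angle error)]
unbiased_x = naive_x / lam           # multiplicative debiasing

print("true x    :", r_true * np.cos(theta_true))
print("naive mean:", naive_x.mean())       # biased low by the factor lam
print("debiased  :", unbiased_x.mean())
```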
274 Management of Air Pollutants from Point Sources
Authors: N. Lokeshwari, G. Srinikethan, V. S. Hegde
Abstract:
Monitoring is essential for assessing the effectiveness of air pollution control actions. The goal of an air quality information system is, through monitoring, to keep authorities, major polluters, and the public informed on the short- and long-term changes in air quality, thereby helping to raise awareness. Mathematical models are the best tools available for prediction in air quality management. The main objective of the work was to apply a model that predicts the concentration levels of different pollutants at any instant of time. In this study, the distributions of air pollutant concentrations from industries, namely nitrogen dioxide (NO2), sulphur dioxide (SO2), and total suspended particulates (TSP), are determined using a Gaussian model. In addition, the effect of wind speed and direction on the pollutant concentrations within the affected area was evaluated. To determine the efficiency and percentage error of the modeling, the data were validated. Air quality sampling was conducted to characterize the existing air quality around a factory, and the pollutant concentrations in a plume were found to be inversely proportional to wind velocity. The resulting ground-level concentrations were then compared to the quality standards to determine whether there could be a negative impact on health. This study concludes that pollutant concentrations can be predicted reliably using the Gaussian model. A database management system was developed for the air data of the Hubli-Dharwad region.
Keywords: DBMS, NO2, SO2, Wind rose plots.
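For reference, the Gaussian plume model referred to above computes the concentration downwind of a point source as a product of a dilution term and lateral/vertical Gaussian spreads, with an image-source term for ground reflection. A minimal sketch (the power-law dispersion coefficients and the numbers in the example call are illustrative assumptions, not the paper's site data):

```python
import numpy as np

def gaussian_plume(Q, u, x, y, z, H, a=0.22, b=0.20):
    """Ground-reflected Gaussian plume concentration (g/m^3).
    Q: emission rate (g/s); u: wind speed (m/s); H: effective stack
    height (m); sigma_y/sigma_z grow with downwind distance x through
    simple power laws with assumed, stability-dependent coefficients."""
    sy, sz = a * x**0.9, b * x**0.85
    lateral = np.exp(-y**2 / (2 * sy**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sz**2))
                + np.exp(-(z + H)**2 / (2 * sz**2)))   # image source term
    return Q / (2 * np.pi * u * sy * sz) * lateral * vertical

# Ground-level SO2 concentration on the plume centerline, 500 m downwind.
print(gaussian_plume(Q=80.0, u=4.0, x=500.0, y=0.0, z=0.0, H=40.0))
```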
273 Application of Interferometric Techniques for Quality Control of Oils Used in the Food Industry
Authors: Andres Piña, Amy Meléndez, Pablo Cano, Tomas Cahuich
Abstract:
The purpose of this project is to propose a quick and environmentally friendly alternative for measuring the quality of oils used in the food industry. There is evidence that repeated and indiscriminate use of oils in food processing causes physicochemical changes, with the formation of potentially toxic compounds that can affect the health of consumers and cause organoleptic changes. To assess the quality of oils, non-destructive optical techniques such as interferometry offer a rapid alternative to the use of reagents, relying only on the interaction of light with the oil. In this project, we used interferograms of oil samples placed under different heating conditions to establish the changes in their quality. The interferograms were obtained by means of a Mach-Zehnder interferometer using a 10 mW HeNe laser beam at 632.8 nm. Each interferogram was captured and analyzed, and the full width at half maximum (FWHM) was measured using the Amcap and ImageJ software. The FWHM values were organized into three groups. The average of the FWHMs of group A behaves almost linearly, so the exposure time is probably not relevant when the oil is kept at constant temperature. Group B exhibits a slightly exponential trend as the temperature rises between 373 K and 393 K. A Student's t-test indicates, at the 95% confidence level (α = 0.05), a variation in the molecular composition of the two samples. Furthermore, we found a correlation between the iodine indices (physicochemical analysis) and the interferograms (optical analysis) of group C. Based on these results, this project highlights the importance of the quality of the oils used in the food industry and shows how interferometry can be a useful tool for this purpose.
Keywords: Food industry, interferometric, oils, quality control.
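Since the analysis above hinges on FWHM measurements of fringe profiles, here is a minimal sketch of how a full width at half maximum can be computed from a sampled intensity profile (the Gaussian test profile is illustrative; ImageJ was the tool actually used):

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked profile, using
    linear interpolation at the two half-maximum crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # Interpolate the left and right crossings around the peak.
    xl = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    xr = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return xr - xl

x = np.linspace(-5, 5, 1001)
sigma = 1.0
y = np.exp(-x**2 / (2 * sigma**2))   # Gaussian fringe profile
print(fwhm(x, y))                    # ~2.355 * sigma for a Gaussian
```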
272 Comparison of ANFIS and ANN for Estimation of Biochemical Oxygen Demand Parameter in Surface Water
Authors: S. Areerachakul
Abstract:
Nowadays, several techniques, such as fuzzy inference systems (FIS) and neural networks (NN), are employed to develop predictive models for estimating water quality parameters. The main objective of this study is to compare the predictive ability of an Adaptive Neuro-Fuzzy Inference System (ANFIS) model and an Artificial Neural Network (ANN) model for estimating the Biochemical Oxygen Demand (BOD) from data collected at 11 sampling sites along the Saen Saep canal in Bangkok, Thailand. The data were obtained from the Department of Drainage and Sewerage, Bangkok Metropolitan Administration, during 2004-2011. Five water quality parameters, namely Dissolved Oxygen (DO), Chemical Oxygen Demand (COD), Ammonia Nitrogen (NH3N), Nitrate Nitrogen (NO3N), and Total Coliform bacteria (T-coliform), are used as inputs to the models; these water quality indices affect the biochemical oxygen demand. The experimental results indicate that the ANN model provides a higher correlation coefficient (R = 0.73) and a lower root mean square error (RMSE = 4.53) than the corresponding ANFIS model.
Keywords: Adaptive neuro-fuzzy inference system, artificial neural network, biochemical oxygen demand, surface water.
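A minimal sketch of the ANN half of the comparison, using a small multilayer perceptron on synthetic stand-in data and reporting the same R and RMSE metrics (scikit-learn, the network size, and the generated data are illustrative assumptions, not the study's setup):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the five inputs (DO, COD, NH3N, NO3N, T-coliform)
# and the BOD target described in the abstract.
X, y = make_regression(n_samples=600, n_features=5, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

scaler = StandardScaler().fit(X_tr)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
ann.fit(scaler.transform(X_tr), y_tr)

pred = ann.predict(scaler.transform(X_te))
rmse = np.sqrt(mean_squared_error(y_te, pred))
r = np.corrcoef(y_te, pred)[0, 1]
print(f"R={r:.2f}, RMSE={rmse:.2f}")
```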
271 Long Term Evolution Multiple-Input Multiple-Output Network in Unmanned Air Vehicles Platform
Authors: Ashagrie Getnet Flattie
Abstract:
Line-of-sight (LOS) availability, data rates, quality, and flexible network service are limited by the fact that, for the duration of any given connection, links experience severe variation in signal strength due to fading and path loss. Wireless systems face major challenges in achieving wide coverage and capacity without degrading system performance while providing data access everywhere, all the time. In this paper, the cell coverage and edge rate of different multiple-input multiple-output (MIMO) schemes in a 20 MHz Long Term Evolution (LTE) system on an Unmanned Air Vehicle (UAV) platform are investigated. After some background on the enormous potential of UAVs, MIMO, and LTE in wireless links, the paper presents a system model that attempts to realize the various benefits of MIMO incorporated into a UAV platform. The performance of three MIMO LTE schemes is compared with that of the 4x4 MIMO LTE UAV scheme to evaluate the improvement in cell radius, BER, and data throughput of the system in different morphologies. The results show that significant performance gains in bit error rate (BER), data rate, and coverage can be achieved with the presented scenario.
Keywords: BER, LTE, MIMO, path loss, UAV.
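The throughput gain from adding antennas can be illustrated with the standard ergodic MIMO capacity formula, C = log2 det(I + (SNR/Nt)·HH^H), averaged over i.i.d. Rayleigh channels; this is a generic textbook sketch, not the paper's LTE link-level simulation:

```python
import numpy as np

def mimo_capacity(nt, nr, snr_db, trials=2000, seed=8):
    """Mean ergodic capacity (bit/s/Hz) of an i.i.d. Rayleigh MIMO channel
    with equal power allocation: log2 det(I + (SNR/nt) H H^H)."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    total = 0.0
    for _ in range(trials):
        H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
        total += np.log2(np.linalg.det(np.eye(nr) + snr / nt * H @ H.conj().T)).real
    return total / trials

for nt, nr in [(1, 1), (2, 2), (4, 4)]:
    print(f"{nt}x{nr}: {mimo_capacity(nt, nr, snr_db=10):.1f} bit/s/Hz")
```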
270 Palmprint Recognition by Wavelet Transform with Competitive Index and PCA
Authors: Deepti Tamrakar, Pritee Khanna
Abstract:
This manuscript presents palmprint recognition that combines different texture extraction approaches with high accuracy. The region of interest (ROI) is decomposed into frequency-time sub-bands by a wavelet transform up to two levels, and only the approximation image at level two is selected; this is known as the Approximate Image ROI (AIROI) and carries the information of the principal lines of the palm. The Competitive Index is used as the palmprint feature: six Gabor filters of different orientations are convolved with the palmprint image to extract orientation information, and a winner-take-all strategy selects the dominant orientation for each pixel, which constitutes the Competitive Index. Further, PCA is applied to select highly uncorrelated Competitive Index features, to reduce the dimensionality of the feature vector, and to project the features onto the eigenspace. The similarity of two palmprints is measured by the Euclidean distance metric. The algorithm is tested on the Hong Kong PolyU palmprint database, and AIROIs from different wavelet filter families are also tested with the Competitive Index and PCA. The AIROI of the db7 wavelet filter achieves an Equal Error Rate (EER) of 0.0152% and a Genuine Acceptance Rate (GAR) of 99.67% on the Hong Kong PolyU palm database.
Keywords: DWT, EER, Euclidean distance, Gabor filter, PCA, ROI.
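A minimal sketch of the winner-take-all Competitive Index step with an OpenCV Gabor bank (the kernel size and filter parameters are illustrative assumptions, a synthetic stripe pattern stands in for a real palmprint ROI, and the wavelet and PCA stages are not shown):

```python
import cv2
import numpy as np

def competitive_index(img, n_orientations=6):
    """Per-pixel winner-take-all over a bank of oriented Gabor filters:
    each pixel is labeled with the orientation of its strongest response."""
    responses = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        kern = cv2.getGaborKernel((17, 17), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5, psi=0)
        responses.append(cv2.filter2D(img.astype(np.float32), -1, kern))
    return np.argmax(np.stack(responses), axis=0)  # dominant orientation index

# Synthetic stand-in for a palmprint ROI (an oriented stripe pattern).
xx, yy = np.meshgrid(np.arange(128), np.arange(128))
roi = (127 * (1 + np.sin(0.3 * (xx + yy)))).astype(np.uint8)
print(np.bincount(competitive_index(roi).ravel(), minlength=6))
```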
269 Impulse Response Shortening for Discrete Multitone Transceivers using Convex Optimization Approach
Authors: Ejaz Khan, Conor Heneghan
Abstract:
In this paper, we propose a new criterion for solving the channel shortening problem in multi-carrier systems. In a discrete multitone receiver, a time-domain equalizer (TEQ) reduces intersymbol interference (ISI) by shortening the effective duration of the channel impulse response. The minimum mean square error (MMSE) method for TEQ design does not give satisfactory results. In [1], a new criterion was introduced for partially equalizing severe ISI channels to reduce the cyclic prefix overhead of the discrete multitone transceiver (DMT), assuming a fixed transmission bandwidth. Due to a specific constraint in that method (a unit-norm constraint on the target impulse response (TIR)), the freedom to choose the optimum TIR vector is reduced, and better results can be obtained by avoiding the unit-norm constraint. In this paper, we change the cost function proposed in [1] to that of maximizing a determinant subject to a linear matrix inequality (LMI) and a quadratic constraint, and we solve the resulting optimization problem. The usefulness of the proposed method is shown with the help of simulations.
Keywords: Equalizer, target impulse response, convex optimization, matrix inequality.