Search results for: trial and error.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1385

275 Trade Openness and Its Effects on Economic Growth in Selected South Asian Countries: A Panel Data Study

Authors: Samra Bajwa, Muhammad W. Siddiqi

Abstract:

The study investigates the causal link between trade openness and economic growth for four South Asian countries over the periods 1972-1985 and 1986-2007, to compare the scenarios before and after the implementation of SAARC. Panel cointegration and FMOLS techniques are employed for the short-run and long-run estimates. In 1972-1985, short-run unidirectional causality from GDP to openness is found, whereas in 1986-2007 there is bi-directional causality between GDP and openness. The long-run elasticity between GDP and openness carries a negative sign in 1972-1985, indicating a negative long-run relationship, while in 1986-2007 it carries a positive sign, indicating positive causation between GDP and openness. It can therefore be concluded that the overall situation of the selected countries improved after the implementation of SAARC. The coefficient of the error-correction term further suggests that short-term equilibrium adjustments are driven by adjustment back to the long-run equilibrium.
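
As a rough illustration of the kind of causality checks the abstract describes, the sketch below runs an Engle-Granger cointegration test and a Granger-causality test on a single synthetic country series with statsmodels; the paper's panel cointegration and FMOLS estimators are not reproduced, and the variable names and data are placeholders.

```python
# Minimal sketch (not the paper's panel FMOLS): pairwise cointegration and
# Granger-causality checks between GDP and trade openness for one country.
import numpy as np
from statsmodels.tsa.stattools import coint, grangercausalitytests

rng = np.random.default_rng(0)
n = 36                                            # e.g. annual observations 1972-2007
gdp = np.cumsum(rng.normal(0.03, 0.02, n))        # synthetic log-GDP series
openness = 0.5 * gdp + rng.normal(0, 0.05, n)     # synthetic openness series

# Engle-Granger test: a small p-value suggests a long-run (cointegrating) link.
t_stat, p_value, _ = coint(gdp, openness)
print(f"cointegration p-value: {p_value:.3f}")

# Granger test: does openness (column 2) help predict GDP (column 1)?
data = np.column_stack([gdp, openness])
results = grangercausalitytests(data, maxlag=2)
```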

Keywords: Causality, Economic Growth, Panel Co-integration, SAARC, Trade Openness.

274 A Hybrid Feature Selection by Resampling, Chi squared and Consistency Evaluation Techniques

Authors: Amir-Massoud Bidgoli, Mehdi Naseri Parsa

Abstract:

In this paper, a combined feature selection method is proposed which takes advantage of sample-domain filtering, resampling, and feature subset evaluation to reduce the dimensionality of huge datasets and select reliable features. The method uses both the feature space and the sample domain to improve the feature selection process, and combines Chi-squared with Consistency attribute evaluation to seek reliable features. It consists of two phases: the first filters and resamples the sample domain, and the second adopts a hybrid procedure to find the optimal feature space by applying Chi-squared and Consistency subset evaluation together with genetic search. Experiments on variously sized datasets from the UCI Repository of Machine Learning databases show that the performance of five classifiers (Naïve Bayes, Logistic, Multilayer Perceptron, Best-First Decision Tree and JRIP) improves simultaneously and that their classification error decreases considerably. The experiments also show that the method outperforms other feature selection methods.
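
A minimal sketch of two of the ingredients named above, resampling of the sample domain followed by Chi-squared feature ranking, is given below using scikit-learn; the Consistency subset evaluation and genetic search stages are not reproduced, and the dataset and parameter values are illustrative only.

```python
# Minimal sketch: bootstrap resampling of the sample domain, then Chi-squared
# feature ranking, evaluated with one of the classifiers named in the abstract.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.utils import resample
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)

# Phase 1: resample the sample domain (here a simple bootstrap resample).
X_rs, y_rs = resample(X, y, n_samples=len(y), random_state=0)

# Phase 2: keep the k features with the highest Chi-squared score
# (chi2 requires non-negative feature values, which holds for digits).
selector = SelectKBest(chi2, k=20).fit(X_rs, y_rs)
X_sel = selector.transform(X)

print("selected feature indices:", selector.get_support(indices=True))
print("NB accuracy:", cross_val_score(GaussianNB(), X_sel, y, cv=5).mean())
```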

Keywords: feature selection, resampling, reliable features, Consistency Subset Evaluation.

273 A Review and Comparative Analysis on Cluster Ensemble Methods

Authors: S. Sarumathi, P. Ranjetha, C. Saraswathy, M. Vaishnavi, S. Geetha

Abstract:

Clustering is an unsupervised learning technique in data mining that aggregates data objects into meaningful classes so that intra-cluster similarity is maximized and inter-cluster similarity is minimized. However, no single clustering algorithm proves to be the most effective at producing the best result. As a consequence, a challenging new technique known as the cluster ensemble approach has emerged to address this problem and has proved a successful approach to cluster analysis. The main goal of a cluster ensemble is to combine similar clustering solutions in a way that preserves precision while improving the quality of the individual data clusterings. Because of the massive and rapid creation of new approaches in the field of data mining, the ongoing interest in inventing novel algorithms necessitates a thorough examination of current techniques and future innovation. This paper presents a comparative analysis of various cluster ensemble approaches, including their methodologies, formal working processes, and standard accuracy and error rates. Clustering practitioners will benefit from this exploratory and clear review, which will aid in determining the most appropriate solution to the problem at hand.
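
One common consensus function surveyed in this literature is the co-association (evidence accumulation) approach; the sketch below builds a co-association matrix from repeated k-means runs and clusters it with average linkage. It is a generic illustration, not any specific method from the review, and all data and parameters are synthetic.

```python
# Minimal sketch of a co-association consensus function for a cluster ensemble.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

X, _ = make_blobs(n_samples=200, centers=3, random_state=0)
n = len(X)

# Ensemble generation: several base clusterings with different seeds.
co_assoc = np.zeros((n, n))
n_runs = 20
for seed in range(n_runs):
    labels = KMeans(n_clusters=3, n_init=10, random_state=seed).fit_predict(X)
    co_assoc += (labels[:, None] == labels[None, :])
co_assoc /= n_runs

# Consensus function: average-linkage clustering of the co-association
# similarity matrix, converted to a distance matrix.
dist = 1.0 - co_assoc
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
consensus_labels = fcluster(Z, t=3, criterion="maxclust")
print(np.bincount(consensus_labels))
```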

Keywords: Clustering, cluster ensemble methods, consensus function, data mining, unsupervised learning.

272 Relative Radiometric Correction of Cloudy Multitemporal Satellite Imagery

Authors: Seema Biday, Udhav Bhosle

Abstract:

Repeated observation of a given area over time yields potential for many forms of change detection analysis. These repeated observations are confounded in terms of radiometric consistency due to changes in sensor calibration over time, differences in illumination and observation angles, and variation in atmospheric effects. This paper demonstrates the applicability of an empirical relative radiometric normalization method to a set of multitemporal cloudy images acquired by the Resourcesat-1 LISS III sensor. The objective of this study is to detect and remove cloud cover and to normalize the images radiometrically. Cloud detection is achieved using the Average Brightness Threshold (ABT) algorithm. The detected cloud is removed and replaced with data from other images of the same area. After cloud removal, the proposed normalization method is applied to reduce the radiometric influence caused by non-surface factors. This process identifies landscape elements whose reflectance values are nearly constant over time, i.e. the subset of non-changing pixels is identified using a frequency-based correlation technique. The quality of the radiometric normalization is statistically assessed by the R2 value and the mean square error (MSE) between each pair of analogous bands.
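
A minimal sketch of the normalization step on one band pair is shown below: near-invariant pixels are selected, a linear gain/offset is fitted by least squares, and R2 and MSE are reported. The cloud-removal and frequency-domain pixel-selection steps of the paper are not reproduced, and the image data are synthetic placeholders.

```python
# Minimal sketch of relative radiometric normalization for one band pair.
import numpy as np

rng = np.random.default_rng(1)
reference = rng.uniform(50, 200, size=(100, 100))                    # reference-date band
subject = 0.9 * reference + 12 + rng.normal(0, 3, reference.shape)   # target-date band

# Identify pseudo-invariant pixels (here: smallest relative change).
change = np.abs(subject - reference) / reference
mask = change < np.quantile(change, 0.2)

# Fit a subject -> reference linear mapping on the invariant subset.
gain, offset = np.polyfit(subject[mask], reference[mask], deg=1)
normalized = gain * subject + offset

residual = reference[mask] - (gain * subject[mask] + offset)
mse = np.mean(residual ** 2)
r2 = 1 - residual.var() / reference[mask].var()
print(f"gain={gain:.3f}, offset={offset:.2f}, R^2={r2:.3f}, MSE={mse:.2f}")
```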

Keywords: Correlation, Frequency domain, Multitemporal, Relative Radiometric Correction

271 Adaptation Learning Speed Control for a High-Performance Induction Motor using Neural Networks

Authors: M. Zerikat, S. Chekroun

Abstract:

This paper proposes an effective adaptation learning algorithm based on artificial neural networks for the speed control of an induction motor assumed to operate in a high-performance drives environment. The structure consists of a neural network controller and an algorithm for changing the NN weights so that the motor speed accurately tracks the reference command. The paper also uses a realistic and practical scheme to estimate and adaptively learn the noise content in the speed-load torque characteristic of the motor. The validity of the proposed controller is verified through a laboratory implementation and through computational simulations in Matlab. The scheme is also tested for its tracking property using different types of reference signals, and the performance and robustness of the proposed control scheme are evaluated under a variety of operating conditions of the induction motor drive. The obtained results demonstrate the effectiveness of the proposed control scheme: the system performance, both in steady-state speed error and under dynamic conditions, is excellent and exhibits no overshoot.

Keywords: Electric drive, Induction motor, speed control, Adaptive control, neural network, High Performance.

270 Coding based Synchronization Algorithm for Secondary Synchronization Channel in WCDMA

Authors: Deng Liao, Dongyu Qiu, Ahmed K. Elhakeem

Abstract:

A new code synchronization algorithm is proposed in this paper for the secondary cell-search stage in wideband CDMA systems. Rather than using the Cyclically Permutable (CP) code in the Secondary Synchronization Channel (S-SCH) to simultaneously determine the frame boundary and scrambling code group, the new synchronization algorithm implements the same function with less system complexity and a lower Mean Acquisition Time (MAT). The Secondary Synchronization Code (SSC) is redesigned by splitting it into two sub-sequences. We treat the scrambling code group information as data bits and use simple time-diversity BCH coding for further reliability, which avoids involved and time-costly Reed-Solomon (RS) code computations and comparisons. Analysis and simulation results show that the Synchronization Error Rate (SER) yielded by the new algorithm in Rayleigh fading channels is close to that of the conventional algorithm in the standard. This new synchronization algorithm reduces system complexity, shortens the average cell-search time and can be implemented in the slot-based cell-search pipeline. By exploiting antenna diversity and pipelining the correlation processes, the new algorithm also lends itself to multiple-antenna systems.

Keywords: WCDMA cell-search, synchronization algorithm, secondary synchronization channel, antenna diversity.

269 Fast Search Method for Large Video Database Using Histogram Features and Temporal Division

Authors: Feifei Lee, Qiu Chen, Koji Kotani, Tadahiro Ohmi

Abstract:

In this paper, we propose an improved fast search algorithm that uses combined histogram features and a temporal division method to retrieve short MPEG video clips from a large video database. Two types of histogram features are used to generate more robust features. The first is based on the adjacent pixel intensity difference quantization (APIDQ) algorithm, which has previously been applied reliably to human face recognition; an APIDQ histogram is utilized as the feature vector of the frame image. The second is an ordinal feature, which is robust to color distortion. Combined with active search [4], a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on 6 hours of video by searching for 200 given MPEG video clips, each 30 seconds long. Experimental results show that the proposed algorithm can detect a similar video clip in merely 120 ms, and that an Equal Error Rate (EER) of 1% is achieved, which is more accurate and robust than the conventional fast video search algorithm.
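
The sketch below illustrates the general idea of histogram-based clip search with a sliding temporal window; plain intensity histograms and an exhaustive scan stand in for the APIDQ/ordinal features and active-search pruning of the paper, and the frame data are synthetic.

```python
# Minimal sketch of histogram-feature clip search over a frame database.
import numpy as np

rng = np.random.default_rng(2)
db_frames = rng.integers(0, 256, size=(5400, 32, 32))   # synthetic database frames
query = db_frames[1200:1230]                            # a 30-frame "clip"

def hist_feature(frame, bins=16):
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()

db_feats = np.array([hist_feature(f) for f in db_frames])
q_feats = np.array([hist_feature(f) for f in query])

# Slide the query over the database and accumulate L1 histogram distances.
L = len(q_feats)
scores = np.array([np.abs(db_feats[s:s + L] - q_feats).sum()
                   for s in range(len(db_feats) - L + 1)])
print("best match starts at frame", int(scores.argmin()))
```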

Keywords: Fast search, Adjacent pixel intensity difference quantization (APIDQ), DC image, Histogram feature.

268 A Multigrid Approach for Three-Dimensional Inverse Heat Conduction Problems

Authors: Jianhua Zhou, Yuwen Zhang

Abstract:

A two-step multigrid approach is proposed to solve the inverse heat conduction problem in a 3-D object under laser irradiation. In the first step, the location of the laser center is estimated using a coarse and uniform grid system. In the second step, the front-surface temperature is recovered in good accuracy using a multiple grid system in which fine mesh is used at laser spot center to capture the drastic temperature rise in this region but coarse mesh is employed in the peripheral region to reduce the total number of sensors required. The effectiveness of the two-step approach and the multiple grid system are demonstrated by the illustrative inverse solutions. If the measurement data for the temperature and heat flux on the back surface do not contain random error, the proposed multigrid approach can yield more accurate inverse solutions. When the back-surface measurement data contain random noise, accurate inverse solutions cannot be obtained if both temperature and heat flux are measured on the back surface.

Keywords: Conduction, inverse problems, conjugated gradient method, laser.

267 Simulation and Design of Single Fed Circularly Polarized Triangular Microstrip Antenna with Wide Band Tuning Stub

Authors: R. Irani, A. Ghavidel, F. Hodjat Kashani

Abstract:

Recently, several designs of single-fed circularly polarized microstrip antennas have been studied, but relatively few designs for achieving circular polarization with a triangular microstrip antenna are available. Typical existing designs of single-fed circularly polarized triangular microstrip antennas use an equilateral triangular patch with a slit or a horizontal slot on the patch, or add a narrow-band stub on an edge or a vertex of the triangular patch. Using a narrow-band tuning stub in the middle of one edge of the triangle makes it easier to compensate for possible fabrication and substrate-material errors by simply adjusting the stub length; the disadvantage of this method, however, is the very long stub required (approximately 1/3 of the triangle edge length). In this paper, a wide-band stub is applied instead of a narrow-band one, so that the stub length is reduced to around 1/10 of the triangle edge; in addition, changing the aperture angle of the stub provides more flexibility for designing and producing a circularly polarized wave.

Keywords: Circular polarization, Microstrip antenna, single feed, wide band stub.

266 Development of an Impregnated Diamond Bit with an Improved Rate of Penetration

Authors: Tim Dunne, Weicheng Li, Chris Cheng, Qi Peng

Abstract:

Deeper petroleum reservoirs are more challenging to exploit due to the high hardness and abrasive characteristics of the formations. A cutting structure that consists of particulate diamond impregnated in a supporting matrix is found to be effective. Diamond impregnated bits are favored in these applications due to the higher thermal stability of the matrix material. The diamond particles scour or abrade away concentric grooves while the rock formation adjacent to the grooves is fractured and removed. The matrix material supporting the diamond will wear away, leaving the superficial dull diamonds to fall out. The matrix material wear will expose other embedded intact sharp diamonds to continue the operation. Minimizing the erosion effect on the matrix is an important design consideration, as the life of the bit can be extended by preventing early diamond pull-out. A careful balancing of the key parameters, such as diamond concentration, tungsten carbide and metal binder must be considered during development. Described herein is the design of experiment for developing and lab testing 8 unique samples. ASTM B611 wear testing was performed to benchmark the material performance against baseline products, with further scanning electron microscopy and microhardness evaluations. The recipe S5 with diamond 25/35 mesh size, narrow size distribution, high concentration blended with fine tungsten carbide and Co-Cu-Fe-P metal binder has the best performance, which shows 19% improvement in the ASTM B611 wear test compared with the reference material. In the field trial, the rate of penetration (ROP) is measured as 15 m/h, compared to 9.5, 7.8, and 6.8 m/h of other commercial impregnated bits in the same formation. A second round of optimizing recipe S5 for a higher wear resistance is further reported.

Keywords: Diamond containing material, grit hot press insert, impregnated diamond, insert, rate of penetration, ultrahard formation.

265 Reliability Evaluation of Composite Electric Power System Based On Latin Hypercube Sampling

Authors: R. Ashok Bakkiyaraj, N. Kumarappan

Abstract:

This paper investigates the suitability of Latin Hypercube Sampling (LHS) for composite electric power system reliability analysis. Each sample generated by LHS is mapped into an equivalent system state and used to evaluate the annualized system and load-point indices. A DC load-flow-based state evaluation model is solved for each sampled contingency state. The indices evaluated are loss of load probability, loss of load expectation, expected demand not served and expected energy not supplied. The application of LHS is illustrated through case studies carried out on the RBTS and IEEE-RTS test systems. The results obtained are compared with non-sequential Monte Carlo simulation and with the state enumeration analytical approach. An error analysis is also carried out to check the ability of the LHS method to capture the distributions of the reliability indices. It is found that the LHS approach estimates the indices closer to their actual values and gives tighter bounds than non-sequential Monte Carlo simulation.
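
As a toy illustration of LHS-based state sampling (requiring SciPy 1.7 or later for the qmc module), the sketch below samples generator outage states by Latin Hypercube and estimates a loss-of-load probability with a simple capacity check; the DC load-flow contingency evaluation and the test systems of the paper are not modelled, and the unit data are invented.

```python
# Minimal sketch: LHS sampling of generator states and an LOLP estimate.
import numpy as np
from scipy.stats import qmc

capacities = np.array([40, 40, 30, 30, 20, 20])                  # MW, illustrative
outage_rates = np.array([0.03, 0.03, 0.04, 0.04, 0.05, 0.05])    # forced outage rates
load = 150.0                                                     # MW system load

n_samples = 20000
sampler = qmc.LatinHypercube(d=len(capacities), seed=0)
u = sampler.random(n_samples)                                    # uniform samples in [0,1)^d

# A unit is on outage when its uniform sample falls below its outage rate.
available = np.where(u < outage_rates, 0.0, capacities)
lolp = np.mean(available.sum(axis=1) < load)
print(f"estimated LOLP: {lolp:.4f}")
```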

Keywords: Composite power system, Latin Hypercube sampling, Monte Carlo simulation, Reliability evaluation, Variance analysis.

264 Detection of Ultrasonic Images in the Presence of a Random Number of Scatterers: A Statistical Learning Approach

Authors: J. P. Dubois, O. M. Abdul-Latif

Abstract:

Support Vector Machine (SVM) is a statistical learning tool that was initially developed by Vapnik in 1979 and later extended into the more general concept of structural risk minimization (SRM). SVM plays an increasing role in detection problems across various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM was applied to the detection of medical ultrasound images in the presence of partially developed speckle noise. The simulation was performed for single-look and multi-look speckle models to give a complete overview of, and insight into, the proposed SVM-based detector. The structure of the SVM was derived and applied to clinical ultrasound images, and its performance in terms of the mean square error (MSE) metric was calculated. We showed that the SVM-detected ultrasound images have a very low MSE and are of good quality. The quality of the processed speckled images improved for the multi-look model. Furthermore, the contrast of the SVM-detected images was higher than that of the original non-noisy images, indicating that the SVM approach increased the distance between the pixel reflectivity levels (the detection hypotheses) in the original images.
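
A minimal sketch of the general idea, detecting pixel reflectivity levels under multiplicative speckle with a support vector classifier and scoring the result by MSE, is shown below; it uses a standard SVC rather than the LS-SVM formulation, omits the single-/multi-look speckle models of the paper, and runs on synthetic images.

```python
# Minimal sketch of SVM-based pixel detection under speckle noise.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
levels = np.array([0.3, 0.8])                         # two pixel reflectivity levels
truth = levels[rng.integers(0, 2, size=(64, 64))]     # synthetic noiseless image
speckled = truth * rng.exponential(1.0, truth.shape)  # multiplicative speckle

# Train on a labelled subset of pixels (intensity as the only feature).
idx = rng.choice(truth.size, 1000, replace=False)
X_train = speckled.ravel()[idx].reshape(-1, 1)
y_train = (truth.ravel()[idx] == levels[1]).astype(int)

clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
detected = levels[clf.predict(speckled.reshape(-1, 1))].reshape(truth.shape)
print("MSE:", mean_squared_error(truth.ravel(), detected.ravel()))
```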

Keywords: LS-SVM, medical ultrasound imaging, partially developed speckle, multi-look model.

263 SVM-Based Detection of SAR Images in Partially Developed Speckle Noise

Authors: J. P. Dubois, O. M. Abdul-Latif

Abstract:

Support Vector Machine (SVM) is a statistical learning tool that was initially developed by Vapnik in 1979 and later extended into the more general concept of structural risk minimization (SRM). SVM plays an increasing role in detection problems across various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM was applied to the detection of SAR (synthetic aperture radar) images in the presence of partially developed speckle noise. The simulation was performed for single-look and multi-look speckle models to give a complete overview of, and insight into, the proposed SVM-based detector. The structure of the SVM was derived and applied to real SAR images, and its performance in terms of the mean square error (MSE) metric was calculated. We showed that the SVM-detected SAR images have a very low MSE and are of good quality. The quality of the processed speckled images improved for the multi-look model. Furthermore, the contrast of the SVM-detected images was higher than that of the original non-noisy images, indicating that the SVM approach increased the distance between the pixel reflectivity levels (the detection hypotheses) in the original images.

Keywords: Least Square-Support Vector Machine, Synthetic Aperture Radar, Partially Developed Speckle, Multi-Look Model.

262 Image Features Comparison-Based Position Estimation Method Using a Camera Sensor

Authors: Jinseon Song, Yongwan Park

Abstract:

In this paper, we propose a method that estimates a user's position from a single camera, based on a pre-built image database. Previous positioning methods calculate distance from the arrival time of signals, as in GPS (Global Positioning System) or RF (Radio Frequency) systems; however, they suffer from large error ranges caused by signal interference. Estimating position with a camera sensor addresses this, but a single camera makes it difficult to obtain relative position data, and a stereo camera struggles to provide real-time position data because of the large amount of image data involved. In this research, we first build an image database of the space in which the positioning service is to be provided, using a single camera. Next, we judge similarity through image matching between the database images and the image transmitted by the user. Finally, we determine the position of the user from the position of the most similar database image. To verify the proposed method, we ran experiments in real indoor and outdoor environments. The proposed method offers a wide positioning range and can determine not only the position of the user but also the direction.
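
The sketch below shows the image-matching step with OpenCV, using ORB features as a freely available stand-in for the SURF features named in the keywords; the file paths, the database, and the position lookup table are hypothetical placeholders, and the direction estimation step is not shown.

```python
# Minimal sketch: match a query image against a small image database and
# report the stored position of the best-matching database image.
import cv2
import numpy as np

query = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)            # user image (placeholder path)
db_paths = ["db_001.jpg", "db_002.jpg", "db_003.jpg"]            # database images (placeholders)
db_positions = {"db_001.jpg": (10.0, 4.0),                       # known capture positions
                "db_002.jpg": (12.5, 4.0),
                "db_003.jpg": (15.0, 6.5)}

orb = cv2.ORB_create(nfeatures=1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
_, q_desc = orb.detectAndCompute(query, None)

best_path, best_score = None, -1
for path in db_paths:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = orb.detectAndCompute(img, None)
    if desc is None or q_desc is None:
        continue
    matches = bf.match(q_desc, desc)
    score = len([m for m in matches if m.distance < 40])  # crude similarity score
    if score > best_score:
        best_path, best_score = path, score

print("estimated position:", db_positions.get(best_path))
```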

Keywords: Positioning, Distance, Camera, Features, SURF (Speed-Up Robust Features), Database, Estimation.

261 Multi-Objective Multi-Mode Resource-Constrained Project Scheduling Problem by Preemptive Fuzzy Goal Programming

Authors: Phruksaphanrat B.

Abstract:

This research proposes a preemptive fuzzy goal programming model for the multi-objective multi-mode resource-constrained project scheduling problem. The objectives of the problem are minimization of the total time and the total cost of the project. The objective in a multi-mode resource-constrained project scheduling problem is often minimization of the makespan; however, both time and cost should be considered at the same time, with different priority levels. Moreover, not all elements of the cost functions of a project are included in the conventional cost objective function, and an incomplete total project cost causes errors in finding the project schedule. In this research, preemptive fuzzy goal programming is presented to solve the multi-objective multi-mode resource-constrained project scheduling problem. It can find a compromise solution to the problem and is also flexible in adjusting to find a variety of alternative solutions.

Keywords: Multi-mode resource constrained project scheduling problem, Fuzzy set, Goal programming, Preemptive fuzzy goal programming.

260 An Improved Learning Algorithm based on the Conjugate Gradient Method for Back Propagation Neural Networks

Authors: N. M. Nawi, M. R. Ransing, R. S. Ransing

Abstract:

The conjugate gradient optimization algorithm, usually used for nonlinear least squares, is presented and combined with the modified back-propagation algorithm, yielding a new fast training algorithm for multilayer perceptrons (MLP), denoted CGFR/AG. The approach presented in the paper consists of three steps: (1) modifying the standard back-propagation algorithm by introducing a gain-variation term in the activation function, (2) calculating the gradient descent of the error with respect to the weights and gain values, and (3) determining the new search direction by exploiting the information calculated by gradient descent in step (2) as well as the previous search direction. The proposed method improves the training efficiency of the back-propagation algorithm by adaptively modifying the initial search direction. The performance of the proposed method is demonstrated by comparing it with the conjugate gradient algorithm from the neural network toolbox on a chosen benchmark. The results show that the number of iterations required by the proposed method to converge is less than 20% of that required by the standard conjugate gradient and neural network toolbox algorithms.
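
For orientation, the sketch below trains a tiny one-hidden-layer MLP with SciPy's generic conjugate-gradient optimizer; this is a stand-in for illustration only and does not implement the paper's gain-variation term or its adaptive search-direction update, and the toy dataset is synthetic.

```python
# Minimal sketch: conjugate-gradient training of a small MLP on a toy problem.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]               # toy regression target

n_in, n_hid = 2, 8
shapes = [(n_in, n_hid), (n_hid,), (n_hid, 1), (1,)]
sizes = [int(np.prod(s)) for s in shapes]

def unpack(w):
    parts, i = [], 0
    for s, sz in zip(shapes, sizes):
        parts.append(w[i:i + sz].reshape(s))
        i += sz
    return parts

def loss(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)                      # hidden-layer activation
    out = (h @ W2 + b2).ravel()
    return np.mean((out - y) ** 2)                # mean squared error

w0 = rng.normal(0, 0.5, sum(sizes))
res = minimize(loss, w0, method="CG", options={"maxiter": 200})
print("final training MSE:", res.fun)
```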

Keywords: Back-propagation, activation function, conjugate gradient, search direction, gain variation.

259 Statically Fused Unbiased Converted Measurements Kalman Filter

Authors: Zhengkun Guo, Yanbin Li, Wenqing Wang, Bo Zou

Abstract:

Active radar and sonar systems often report Doppler measurements in addition to position measurements such as range and bearing, and a tracker can perform better by making full use of the Doppler measurements. However, due to the high nonlinearity of the Doppler measurements with respect to the target state in Cartesian coordinates, those measurements are not always fully exploited. This paper focuses on processing Doppler measurements as well as position measurements in polar coordinates. The Statically Fused Converted Position and Doppler Measurements Kalman Filter (SF-CMKF) with additive debiased measurement conversion has been presented previously. However, the exact compensation for the bias of the measurement conversion is multiplicative and depends on the statistics of the cosine of the angle measurement errors; as a result, the consistency and performance of the SF-CMKF may be suboptimal in large angle-error situations. In this paper, the multiplicative unbiased position and Doppler measurement conversion for two-dimensional (polar-to-Cartesian) tracking is derived, and the SF-CMKF is improved by using this conversion. Monte Carlo simulations are presented to demonstrate the statistical consistency of the multiplicative unbiased conversion and the superior performance of the modified SF-CMKF (SF-UCMKF).
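
A minimal sketch of the classical multiplicative debiasing of a polar position measurement is given below (the Doppler conversion and the statically fused filter itself are not shown); it assumes Gaussian bearing noise, for which the compensation factor is the expected cosine of the bearing error.

```python
# Minimal sketch of multiplicative unbiased polar-to-Cartesian conversion.
import numpy as np

def unbiased_polar_to_cartesian(r, b, sigma_b):
    """Convert range r and bearing b (rad) with bearing-noise std sigma_b."""
    lam = np.exp(-sigma_b ** 2 / 2.0)   # E[cos(bearing error)] for Gaussian noise
    x = r * np.cos(b) / lam             # multiplicative bias compensation
    y = r * np.sin(b) / lam
    return x, y

# Example: large bearing error, where the bias correction matters most.
r_true, b_true, sigma_b = 10000.0, np.deg2rad(30.0), np.deg2rad(3.0)
rng = np.random.default_rng(5)
b_meas = b_true + rng.normal(0, sigma_b, 100000)
x_u, _ = unbiased_polar_to_cartesian(r_true, b_meas, sigma_b)
print("mean converted x:", x_u.mean(), "true x:", r_true * np.cos(b_true))
```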

Keywords: Measurement conversion, Doppler, Kalman filter, estimation, tracking.

258 Management of Air Pollutants from Point Sources

Authors: N. Lokeshwari, G. Srinikethan, V. S. Hegde

Abstract:

Monitoring is essential for assessing the effectiveness of air pollution control actions. The goal of an air quality information system is, through monitoring, to keep authorities, major polluters and the public informed about short- and long-term changes in air quality, thereby helping to raise awareness. Mathematical models are the best tools available for prediction in air quality management. The main objective of this work was to apply a model that predicts the concentration levels of different pollutants at any instant of time. In this study, the distributions of air pollutant concentrations from industries, namely nitrogen dioxide (NO2), sulphur dioxide (SO2) and total suspended particulates (TSP), are determined using a Gaussian model. In addition, the effects of wind speed and direction on the pollutant concentrations within the affected area were evaluated. To determine the efficiency and percentage error of the modeling, the data were validated. Air quality sampling was conducted to characterise the existing air quality around a factory, and the concentrations of pollutants in the plume were found to be inversely proportional to wind velocity. The resulting ground-level concentrations were then compared with the quality standards to determine whether there could be a negative impact on health. This study concludes that pollutant concentrations can be significantly predicted using the Gaussian model. A database management system was developed for the air data of the Hubli-Dharwad region.
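
A minimal sketch of a Gaussian plume calculation for a single point source is shown below; the dispersion-coefficient growth rates used are placeholder values (in practice they depend on atmospheric stability class and downwind distance), and the source parameters are illustrative.

```python
# Minimal sketch of a Gaussian plume concentration estimate.
import numpy as np

def plume_concentration(Q, u, x, y, z, H, a=0.08, b=0.06):
    """Gaussian plume concentration (g/m^3) with ground reflection.

    Q: emission rate (g/s), u: wind speed (m/s), H: effective stack height (m),
    x: downwind, y: crosswind, z: vertical receptor coordinates (m).
    """
    sigma_y = a * x                       # placeholder growth of lateral spread
    sigma_z = b * x                       # placeholder growth of vertical spread
    coeff = Q / (2 * np.pi * u * sigma_y * sigma_z)
    cross = np.exp(-y ** 2 / (2 * sigma_y ** 2))
    vert = (np.exp(-(z - H) ** 2 / (2 * sigma_z ** 2)) +
            np.exp(-(z + H) ** 2 / (2 * sigma_z ** 2)))   # ground-reflection term
    return coeff * cross * vert

# Concentration 1 km downwind, on the plume centreline, at ground level.
print(plume_concentration(Q=100.0, u=4.0, x=1000.0, y=0.0, z=0.0, H=50.0))
```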

Keywords: DBMS, NO2, SO2, Wind rose plots.

257 Biological Methods to Control Parasitic Weed Phelipanche ramosa L. Pomel in the Field Tomato Crop

Authors: F. Lops, G. Disciglio, A. Carlucci, G. Gatta, L. Frabboni, A. Tarantino, E. Tarantino

Abstract:

Phelipanche ramosa L. Pomel is a root holoparasitic weed of many crops, particularly of the tomato (Lycopersicum esculentum L.) crop. In Italy, the Phelipanche problem is increasing, both in density and in acreage. The biological control of this parasitic weed involves the use of living organisms, including numerous fungi and bacteria, that can infect the parasitic weed while potentially improving crop growth. This paper deals with biocontrol using microorganisms, including arbuscular mycorrhizal (AM) fungi and fungal pathogens such as Fusarium oxysporum spp. Colonization of crop roots by AM fungi can protect crops against parasitic weeds by reducing their seed germination and attachment, while F. oxysporum, isolated from diseased broomrape tubercles, has proved to be highly virulent on P. ramosa. The experimental trial was carried out in an open field in Foggia province (Apulia Region, Southern Italy) during the spring-summer season of 2016, in order to evaluate the effect of four biological treatments (AM fungi and F. oxysporum applied to the soil alone or combined, and the Rizosum Max® product), compared with an untreated control, in reducing P. ramosa infestation in the processing tomato crop. The principal result of this study under field conditions, in contrast to those reported previously under laboratory and greenhouse conditions, is that neither AM fungi nor F. oxysporum reduced the number of emerged shoots of P. ramosa, probably because of the low efficacy of these agents against the parasite in the field. On the contrary, the Rizosum Max® product, containing AM fungi and some rhizosphere bacteria combined with several minerals and organic substances, appears to be the most effective in reducing P. ramosa infestation.

Keywords: Arbuscular mycorrhizal fungi, biocontrol methods, Phelipanche ramosa, F. oxysporum spp.

256 Comparison of ANFIS and ANN for Estimation of Biochemical Oxygen Demand Parameter in Surface Water

Authors: S. Areerachakul

Abstract:

Nowadays, several techniques, such as Fuzzy Inference Systems (FIS) and Neural Networks (NN), are employed for developing predictive models to estimate water quality parameters. The main objective of this study is to compare the predictive ability of an Adaptive Neuro-Fuzzy Inference System (ANFIS) model with that of an Artificial Neural Network (ANN) model for estimating Biochemical Oxygen Demand (BOD), using data from 11 sampling sites along the Saen Saep canal in Bangkok, Thailand. The data were obtained from the Department of Drainage and Sewerage, Bangkok Metropolitan Administration, during 2004-2011. Five water quality parameters, namely Dissolved Oxygen (DO), Chemical Oxygen Demand (COD), Ammonia Nitrogen (NH3N), Nitrate Nitrogen (NO3N), and Total Coliform bacteria (T-coliform), are used as inputs to the models, as these water quality indices affect the biochemical oxygen demand. The experimental results indicate that the ANN model provides a higher correlation coefficient (R = 0.73) and a lower root mean square error (RMSE = 4.53) than the corresponding ANFIS model.
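
The sketch below shows the ANN side of such a comparison: an MLP regressor mapping five water-quality inputs to BOD, evaluated with R and RMSE. The data are synthetic stand-ins, not the Saen Saep canal measurements, and the network architecture is illustrative.

```python
# Minimal sketch: ANN regression of BOD from five water-quality inputs.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n = 300
X = rng.uniform(0, 1, (n, 5))        # DO, COD, NH3N, NO3N, T-coliform (scaled)
bod = 2 + 3 * X[:, 1] - 1.5 * X[:, 0] + rng.normal(0, 0.3, n)  # synthetic BOD

X_tr, X_te, y_tr, y_te = train_test_split(X, bod, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_tr), y_tr)
pred = model.predict(scaler.transform(X_te))

r = np.corrcoef(y_te, pred)[0, 1]
rmse = np.sqrt(np.mean((y_te - pred) ** 2))
print(f"R = {r:.2f}, RMSE = {rmse:.2f}")
```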

Keywords: adaptive neuro-fuzzy inference system, artificial neural network, biochemical oxygen demand, surface water.

255 Effect of Different Methods to Control the Parasitic Weed Phelipanche ramosa (L.) Pomel in Tomato Crop

Authors: G. Disciglio, F. Lops, A. Carlucci, G. Gatta, A. Tarantino, E. Tarantino

Abstract:

Phelipanche ramosa is the most damaging obligate flowering parasitic weed of a wide range of cultivated plant species. The semi-arid regions of the world are considered the main centers of this parasitic plant, which causes heavy infestation owing to its production of large numbers of seeds (up to 200,000) that remain viable for extended periods (up to 20 years). In this study, 13 treatments for the control of Phelipanche were carried out, comprising agronomic, chemical, and biological treatments and the use of resistant plants. In 2014, a trial was performed at the Department of Agriculture, Food and Environment, University of Foggia (southern Italy), on processing tomato (cv. 'Docet') grown in pots filled with soil taken from a field heavily infested by P. ramosa. The tomato seedlings were transplanted on May 8, 2014, into a sandy-clay soil (USDA). A randomized block design with 3 replicates (pots) was adopted. During the growing cycle of the tomato, at 70, 75, 81 and 88 days after transplantation, the number of P. ramosa shoots emerged in each pot was determined. The tomato fruit were harvested on August 8, 2014, and their quantitative and qualitative parameters were determined. All of the data were subjected to analysis of variance (ANOVA) using the JMP software (SAS Institute Inc., Cary, NC, USA), and means were compared with Tukey's test. The data show that none of the treatments studied provided complete control of P. ramosa. However, the virulence of the attacks was mitigated by some of the treatments tried: the Radicon biostimulant, compost activated with Fusarium, mineral nitrogen fertilizer, sulfur, enzone, and the resistant tomato genotype. It is assumed that these effects could be improved by combining some of these treatments with each other, especially for a gradual and continuing reduction of the seed bank of the parasite in the soil.

Keywords: Control methods, Phelipanche ramosa, tomato crop.

254 Long Term Evolution Multiple-Input Multiple-Output Network in Unmanned Air Vehicles Platform

Authors: Ashagrie Getnet Flattie

Abstract:

Line-of-sight (LOS) availability, data rates, quality of service, and flexible network service are limited by the fact that, for the duration of any given connection, the link experiences severe variations in signal strength due to fading and path loss. Wireless systems face major challenges in achieving wide coverage and capacity without degrading system performance while providing data access everywhere, all the time. In this paper, the cell coverage and edge rate of different multiple-input multiple-output (MIMO) schemes in a 20 MHz Long Term Evolution (LTE) system on an Unmanned Air Vehicle (UAV) platform are investigated. After some background on the enormous potential of UAVs, MIMO, and LTE in wireless links, the paper presents a system model that attempts to realize the various benefits of MIMO when incorporated into a UAV platform. The performances of three MIMO LTE schemes are compared with that of a 4x4 MIMO LTE UAV scheme to evaluate the improvement in cell radius, BER, and data throughput of the system in different morphologies. The results show that significant performance gains, such as in bit error rate (BER), data rate, and coverage, can be achieved with the presented scenario.
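
To illustrate how a BER figure of this kind is typically obtained, the sketch below runs a Monte Carlo BER simulation for a 2x2 MIMO link with BPSK and zero-forcing detection over a flat Rayleigh channel; the LTE waveform, the UAV link budget and the 4x4 configurations of the paper are not modelled.

```python
# Minimal sketch: BER vs SNR for 2x2 BPSK MIMO with zero-forcing detection.
import numpy as np

rng = np.random.default_rng(11)
n_tx, n_rx, n_sym = 2, 2, 200000

for snr_db in (0, 5, 10, 15, 20):
    snr = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, (n_tx, n_sym))
    x = 2 * bits - 1                                   # BPSK symbols (+/-1)
    H = (rng.normal(size=(n_rx, n_tx, n_sym)) +
         1j * rng.normal(size=(n_rx, n_tx, n_sym))) / np.sqrt(2)
    noise = (rng.normal(size=(n_rx, n_sym)) +
             1j * rng.normal(size=(n_rx, n_sym))) / np.sqrt(2 * snr)
    y = np.einsum("rts,ts->rs", H, x) + noise          # per-symbol channel use

    # Zero-forcing detection: invert each 2x2 channel realisation.
    H_t = np.moveaxis(H, -1, 0)                        # (n_sym, n_rx, n_tx)
    x_hat = np.linalg.solve(H_t, np.moveaxis(y, -1, 0)[..., None])[..., 0]
    detected = (x_hat.real.T > 0).astype(int)
    ber = np.mean(detected != bits)
    print(f"SNR {snr_db:2d} dB -> BER {ber:.4f}")
```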

Keywords: BER, LTE, MIMO, path loss, UAV.

253 Palmprint Recognition by Wavelet Transform with Competitive Index and PCA

Authors: Deepti Tamrakar, Pritee Khanna

Abstract:

This manuscript presents palmprint recognition with high accuracy by combining different texture extraction approaches. The Region of Interest (ROI) is decomposed into different frequency-time sub-bands by a two-level wavelet transform, and only the level-two approximation image, known as the Approximate Image ROI (AIROI), is retained; this AIROI carries the information of the principal lines of the palm. The Competitive Index is used as the palmprint feature: six Gabor filters of different orientations are convolved with the palmprint image to extract orientation information, and a winner-take-all strategy selects the dominant orientation for each pixel, which constitutes the Competitive Index. Further, PCA is applied to select highly uncorrelated Competitive Index features, to reduce the dimensionality of the feature vector, and to project the features onto the eigenspace. The similarity of two palmprints is measured by the Euclidean distance metric. The algorithm is tested on the Hong Kong PolyU palmprint database, and AIROIs from different wavelet filter families are also tested with the Competitive Index and PCA. The AIROI of the db7 wavelet filter achieves an Equal Error Rate (EER) of 0.0152% and a Genuine Acceptance Rate (GAR) of 99.67% on the Hong Kong PolyU palm database.

Keywords: DWT, EER, Euclidean Distance, Gabor filter, PCA, ROI.

252 Impulse Response Shortening for Discrete Multitone Transceivers using Convex Optimization Approach

Authors: Ejaz Khan, Conor Heneghan

Abstract:

In this paper, we propose a new criterion for solving the channel-shortening problem in multi-carrier systems. In a discrete multitone receiver, a time-domain equalizer (TEQ) reduces intersymbol interference (ISI) by shortening the effective duration of the channel impulse response. The minimum mean square error (MMSE) method for TEQ design does not give satisfactory results. In [1], a new criterion is introduced for partially equalizing severe ISI channels to reduce the cyclic prefix overhead of the discrete multitone transceiver (DMT), assuming a fixed transmission bandwidth. Due to a specific constraint in that method (a unit-norm constraint on the target impulse response, TIR), the freedom to choose the optimum TIR vector is reduced; better results can be obtained by avoiding this constraint. In this paper, we change the cost function proposed in [1] to one of maximizing a determinant subject to a linear matrix inequality (LMI) and a quadratic constraint, and we solve the resulting optimization problem. The usefulness of the proposed method is shown with the help of simulations.

Keywords: Equalizer, target impulse response, convex optimization, matrix inequality.

251 Regionalization of IDF Curves with L-Moments for Storm Events

Authors: Noratiqah Mohd Ariff, Abdul Aziz Jemain, Mohd Aftar Abu Bakar

Abstract:

The construction of Intensity-Duration-Frequency (IDF) curves is one of the most common and useful tools for designing hydraulic structures and for providing a mathematical relationship between rainfall characteristics. IDF curves, especially those in Peninsular Malaysia, are often built using moving windows of rainfall. However, these windows do not represent actual rainfall events, since the duration of the rainfall is prefixed. Hence, instead of using moving windows, this study aims to find regionalized distributions for IDF curves of extreme rainfall based on storm events. A homogeneity test is performed on the annual maxima of storm intensities to identify homogeneous regions of storms in Peninsular Malaysia. The L-moment method is then used to regionalize the Generalized Extreme Value (GEV) distribution of these annual maxima, and subsequently IDF curves are constructed using the regional distributions. The differences between the IDF curves obtained and those found using at-site GEV distributions are assessed through the coefficient of variation of the root mean square error, the mean percentage difference, and the coefficient of determination. The small differences imply that the construction of IDF curves can be simplified by finding a general probability distribution for each region. This will also help in constructing IDF curves for sites with no rainfall station.
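
A minimal sketch of the L-moment machinery is shown below: sample L-moments are computed from probability-weighted moments and GEV parameters are obtained from the standard Hosking approximation. It is a generic illustration (not the paper's regionalization procedure), the sign convention for the shape parameter follows the L-moment literature, and the annual-maximum data are synthetic.

```python
# Minimal sketch: sample L-moments and GEV parameters via L-moment relations.
import numpy as np
from math import gamma, log

def sample_l_moments(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()                                           # probability-weighted moments
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2                                  # mean, L-scale, L-skewness

def gev_from_l_moments(l1, l2, t3):
    c = 2.0 / (3.0 + t3) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c ** 2                        # shape (Hosking's approximation)
    alpha = l2 * k / ((1 - 2 ** (-k)) * gamma(1 + k))       # scale
    xi = l1 - alpha * (1 - gamma(1 + k)) / k                # location
    return xi, alpha, k

rng = np.random.default_rng(7)
annual_max = rng.gumbel(loc=50, scale=12, size=40)          # synthetic annual maxima (mm)
l1, l2, t3 = sample_l_moments(annual_max)
print("GEV (location, scale, shape):", gev_from_l_moments(l1, l2, t3))
```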

Keywords: IDF curves, L-moments, regionalization, storm events.

250 Optical Signal-To-Noise Ratio Monitoring Based on Delay Tap Sampling Using Artificial Neural Network

Authors: Feng Wang, Shencheng Ni, Shuying Han, Shanhong You

Abstract:

With the development of optical communication, optical performance monitoring (OPM) has received more and more attention. Since the optical signal-to-noise ratio (OSNR) is directly related to the bit error rate (BER), it is one of the important parameters in optical networks. Recently, artificial neural networks (ANNs), which have strong learning and generalization abilities, have been greatly developed. In this paper, a method of OSNR monitoring based on delay-tap sampling (DTS) and an ANN is proposed. The DTS technique is used to extract the eigenvalues of the signal, which are then input into the ANN to realize OSNR monitoring. Experiments on 10 Gb/s non-return-to-zero (NRZ) on-off keying (OOK), 20 Gb/s pulse amplitude modulation (PAM4) and 20 Gb/s return-to-zero (RZ) differential phase-shift keying (DPSK) systems demonstrate OSNR monitoring based on the proposed method. The experimental results show that the OSNR monitoring range is from 15 to 30 dB and that the root-mean-square errors (RMSEs) for the 10 Gb/s NRZ-OOK, 20 Gb/s PAM4 and 20 Gb/s RZ-DPSK systems are 0.36 dB, 0.45 dB and 0.48 dB, respectively. The impact of chromatic dispersion (CD) on the accuracy of OSNR monitoring is also investigated in the three experimental systems.

Keywords: Artificial neural network, ANN, chromatic dispersion, delay-tap sampling, optical signal-to-noise ratio, OSNR.

249 Monitoring Blood Pressure Using Regression Techniques

Authors: Qasem Qananwah, Ahmad Dagamseh, Hiam AlQuran, Khalid Shaker Ibrahim

Abstract:

Blood pressure gives physicians deep insight into the cardiovascular system, and the determination of an individual's blood pressure is a standard clinical procedure for assessing cardiovascular problems. The conventional techniques to measure blood pressure (e.g. the cuff method) allow only a limited number of readings over a certain period (e.g. every 5-10 minutes). Additionally, these systems disturb blood flow, impeding continuous blood pressure monitoring, especially in emergency cases or for critically ill persons. In this paper, the most important statistical features of the photoplethysmogram (PPG) signal were extracted to estimate blood pressure noninvasively. PPG signals from more than 40 subjects were measured and analyzed, and 12 features were extracted. The features were fed to principal component analysis (PCA) to find the most important independent features having the highest correlation with blood pressure. The results show that the mean stiffness index and the standard deviation of the beat-to-beat heart rate were the most important features. A model representing both features for Systolic Blood Pressure (SBP) and Diastolic Blood Pressure (DBP) was obtained using a statistical regression technique, with surface fitting used to best fit the series of data. The results show that the error in estimating the SBP is 4.95% and in estimating the DBP is 3.99%.
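
The sketch below illustrates the feature-reduction and regression steps described above, applying PCA to a feature matrix and fitting an ordinary least-squares model for SBP; the feature matrix is a synthetic stand-in for the measured PPG features, and a plain linear fit replaces the surface-fitting step.

```python
# Minimal sketch: PCA on PPG-derived features followed by regression for SBP.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n_subjects, n_features = 40, 12
features = rng.normal(0, 1, (n_subjects, n_features))      # 12 PPG-derived features
sbp = 120 + 8 * features[:, 0] - 5 * features[:, 3] + rng.normal(0, 3, n_subjects)

X_tr, X_te, y_tr, y_te = train_test_split(features, sbp, test_size=0.25,
                                          random_state=0)

pca = PCA(n_components=2).fit(X_tr)          # keep the two dominant components
reg = LinearRegression().fit(pca.transform(X_tr), y_tr)
pred = reg.predict(pca.transform(X_te))

mean_abs_pct_error = np.mean(np.abs(pred - y_te) / y_te) * 100
print(f"SBP estimation error: {mean_abs_pct_error:.2f}%")
```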

Keywords: Blood pressure, noninvasive optical system, PCA, continuous monitoring.

248 Minimum-Fuel Optimal Trajectory for Reusable First-Stage Rocket Landing Using Particle Swarm Optimization

Authors: Kevin Spencer G. Anglim, Zhenyu Zhang, Qingbin Gao

Abstract:

Reusable launch vehicles (RLVs) present a more environmentally-friendly approach to accessing space when compared to traditional launch vehicles that are discarded after each flight. This paper studies the recyclable nature of RLVs by presenting a solution method for determining minimum-fuel optimal trajectories using principles from optimal control theory and particle swarm optimization (PSO). This problem is formulated as a minimum-landing error powered descent problem where it is desired to move the RLV from a fixed set of initial conditions to three different sets of terminal conditions. However, unlike other powered descent studies, this paper considers the highly nonlinear effects caused by atmospheric drag, which are often ignored for studies on the Moon or on Mars. Rather than optimizing the controls directly, the throttle control is assumed to be bang-off-bang with a predetermined thrust direction for each phase of flight. The PSO method is verified in a one-dimensional comparison study, and it is then applied to the two-dimensional cases, the results of which are illustrated.
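
For reference, the sketch below implements a basic particle swarm optimizer on a toy cost function to illustrate the search mechanism only; the powered-descent dynamics, atmospheric drag model and bang-off-bang thrust structure of the paper are not modelled, and the cost function and PSO parameters are invented.

```python
# Minimal sketch of a basic particle swarm optimizer on a toy objective.
import numpy as np

def cost(x):
    # Toy stand-in for a fuel/landing-error objective (minimum at [1, 2]).
    return (x[..., 0] - 1.0) ** 2 + (x[..., 1] - 2.0) ** 2

rng = np.random.default_rng(9)
n_particles, dim, iters = 30, 2, 100
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration weights

pos = rng.uniform(-10, 10, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = cost(pos)
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = cost(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best solution:", gbest, "cost:", cost(gbest))
```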

Keywords: Minimum-fuel optimal trajectory, particle swarm optimization, reusable rocket, SpaceX.

247 Determination of Stress-Strain Characteristics of Railhead Steel using Image Analysis

Authors: Bandula-Heva, T., Dhanasekar, M.

Abstract:

The true stress-strain curve of railhead steel is required to investigate the behaviour of the railhead under wheel loading through elasto-plastic finite element (FE) analysis. To reduce the rate of wear, the railhead material is hardened through annealing and quenching. Australian standard rail sections are not fully hardened and hence suffer from a non-uniform distribution of material properties; using average properties in the FE modelling can potentially induce errors in the predicted plastic strains. Coupons obtained at varying depths of the railhead were therefore tested under axial tension, and the strains were measured using strain gauges as well as an image analysis technique known as Particle Image Velocimetry (PIV). The head-hardened steel exhibits three distinct zones of yield strength; expressed as the ratio of the average yield strength given in the standard (σyr = 780 MPa), with the corresponding depth expressed as a fraction of the head-hardened zone along the axis of symmetry, these are: (1.17 σyr, 20%), (1.06 σyr, 20%-80%) and (0.71 σyr, > 80%). The stress-strain curves exhibit a limited plastic zone, with fracture occurring at strains of less than 0.1.
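
A minimal sketch of converting engineering stress-strain data (as obtained from such coupon tests) into true stress-strain, valid up to the onset of necking under the constant-volume assumption, is given below; the numerical values are illustrative, not the paper's measurements.

```python
# Minimal sketch: engineering-to-true stress-strain conversion.
import numpy as np

eng_strain = np.array([0.000, 0.002, 0.010, 0.030, 0.060, 0.090])
eng_stress = np.array([0.0, 420.0, 800.0, 880.0, 930.0, 960.0])   # MPa, illustrative

true_strain = np.log(1.0 + eng_strain)           # logarithmic (true) strain
true_stress = eng_stress * (1.0 + eng_strain)    # constant-volume assumption

for es, ts in zip(true_strain, true_stress):
    print(f"true strain {es:.4f} -> true stress {ts:.1f} MPa")
```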

Keywords: Stress-Strain Curve, Tensile Test, Particle Image Velocimetry, Railhead Metal Properties

246 Vegetation Index-Deduced Crop Coefficient of Wheat (Triticum aestivum) Using Remote Sensing: Case Study on Four Basins of Golestan Province, Iran

Authors: Hoda Zolfagharnejad, Behnam Kamkar, Omid Abdi

Abstract:

The crop coefficient (Kc) is an important factor in the estimation of evapotranspiration and is also used to determine the irrigation schedule. This study determined the monthly Kc of winter wheat (Triticum aestivum L.) using five vegetation indices (VIs): the Normalized Difference Vegetation Index (NDVI), Difference Vegetation Index (DVI), Soil Adjusted Vegetation Index (SAVI), Infrared Percentage Vegetation Index (IPVI), and Ratio Vegetation Index (RVI), over four basins in Golestan province, Iran. Fourteen Landsat-8 images corresponding to the crop growth stages were used to estimate the monthly Kc of wheat. The VIs were calculated from the red and near-infrared bands of the Landsat-8 images using Geographical Information System (GIS) software. The best VI was chosen by establishing regression relationships between these VIs and the FAO Kc, as well as the Kc modified for the study area by previous research, based on R² and the Root Mean Square Error (RMSE). The results showed that the locally modified SAVI, with R² = 0.767 and RMSE = 0.174, was the best index to produce monthly wheat Kc maps.
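
A minimal sketch of the index computation and the Kc-SAVI regression is shown below, using the standard definitions of the five indices from red and near-infrared reflectance; the reflectance and reference Kc values are synthetic stand-ins for the Landsat-8 derived data.

```python
# Minimal sketch: vegetation indices from red/NIR and a linear Kc-SAVI fit.
import numpy as np

rng = np.random.default_rng(10)
red = rng.uniform(0.05, 0.20, 14)          # one value per image date (synthetic)
nir = rng.uniform(0.20, 0.60, 14)
kc_ref = 0.3 + 1.1 * (nir - red)           # synthetic reference Kc

L = 0.5                                    # SAVI soil-adjustment factor
ndvi = (nir - red) / (nir + red)
dvi = nir - red
savi = (nir - red) / (nir + red + L) * (1 + L)
ipvi = nir / (nir + red)
rvi = nir / red

slope, intercept = np.polyfit(savi, kc_ref, deg=1)
kc_est = slope * savi + intercept
rmse = np.sqrt(np.mean((kc_est - kc_ref) ** 2))
r2 = 1 - np.sum((kc_est - kc_ref) ** 2) / np.sum((kc_ref - kc_ref.mean()) ** 2)
print(f"Kc = {slope:.2f} * SAVI + {intercept:.2f}, R^2={r2:.3f}, RMSE={rmse:.3f}")
```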

Keywords: Crop coefficient, remote sensing, vegetation indices, wheat.
