Search results for: computational error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2298

1098 A Detection Method of Faults in Railway Pantographs Based on Dynamic Phase Plots

Authors: G. Santamato, M. Solazzi, A. Frisoli

Abstract:

Systems for detecting damage in railway pantographs effectively reduce maintenance costs and improve time scheduling. In this paper, we present an approach to designing a monitoring tool that meets strong customer requirements such as portability and ease of use. The pantograph has been modeled to estimate its dynamical properties, since no data are available. With the aim of focusing on suspension health, a two degrees of freedom (DOF) scheme has been adopted. Parameters have been calculated by means of analytical dynamics, and a Finite Element Method (FEM) modal analysis verified the model with acceptable error. The detection strategy seeks alterations in phase-plot topology induced by defects. In order to test the suitability of the method, leakage in the dashpot was simulated on the lumped model. The results are interesting because changes in the phase plots are more appreciable than frequency shifts. Further calculations as well as experimental tests will support future developments of this smart strategy.
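
A minimal sketch of the kind of lumped model described above, assuming hypothetical parameter values and excitation (the paper does not publish its model data): a two-DOF mass-spring-damper is integrated with nominal and reduced dashpot damping, and the collector-mass phase plots are compared.

```python
# Two-DOF mass-spring-damper sketch; all parameter values are hypothetical placeholders.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

def pantograph(state, t, c1):
    x1, v1, x2, v2 = state                            # collector (m1) and frame (m2) coordinates
    m1, m2, k1, k2, c2 = 10.0, 30.0, 5e3, 8e3, 60.0
    f = 50.0 * np.sin(2 * np.pi * 2.0 * t)            # assumed contact-wire excitation
    a1 = (-k1 * (x1 - x2) - c1 * (v1 - v2) + f) / m1
    a2 = (k1 * (x1 - x2) + c1 * (v1 - v2) - k2 * x2 - c2 * v2) / m2
    return [v1, a1, v2, a2]

t = np.linspace(0, 10, 5000)
healthy = odeint(pantograph, [0, 0, 0, 0], t, args=(120.0,))   # nominal damping
leaky   = odeint(pantograph, [0, 0, 0, 0], t, args=(40.0,))    # simulated dashpot leakage

# Phase plot of the collector mass: a defect shows up as a change in the orbit's
# topology rather than as a clear frequency shift.
plt.plot(healthy[:, 0], healthy[:, 1], label="healthy")
plt.plot(leaky[:, 0], leaky[:, 1], label="leaky dashpot")
plt.xlabel("x1 [m]"); plt.ylabel("v1 [m/s]"); plt.legend(); plt.show()
```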

Keywords: Pantograph models, phase-plots, structural health monitoring, vibration-based condition monitoring.

1097 Predicting Extrusion Process Parameters Using Neural Networks

Authors: Sachin Man Bajimaya, SangChul Park, Gi-Nam Wang

Abstract:

The objective of this paper is to estimate realistic principal extrusion process parameters by means of an artificial neural network. Conventionally, finite element analysis is used to derive process parameters; however, finite element analysis of the extrusion model does not consider the manufacturing process constraints in its modeling, so the process parameters obtained through such an analysis remain highly theoretical. Alternatively, process development in industrial extrusion is to a great extent based on trial and error and often involves full-size experiments, which are both expensive and time-consuming. Artificial neural network-based estimation of the extrusion process parameters prior to plant execution helps to make the actual extrusion operation more efficient because more realistic parameters may be obtained, and it thus bridges the gap between simulation and the real manufacturing execution system. In this work, a suitable neural network is designed and trained using an appropriate learning algorithm; the trained network is then used to predict the manufacturing process parameters.
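
A small illustrative sketch of the approach, not the paper's network: a feed-forward network maps a few assumed billet/process descriptors to two synthetic extrusion parameters.

```python
# Hypothetical features and targets; the paper does not specify its topology or data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform([10, 300, 0.5], [100, 500, 5.0], size=(200, 3))   # e.g. ratio, temp, ram speed
y = np.column_stack([0.8 * X[:, 0] + 0.02 * X[:, 1],              # synthetic "ram pressure"
                     0.01 * X[:, 1] * X[:, 2]])                   # synthetic "exit temperature rise"

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))
model.fit(X, y)
print(model.predict(X[:3]))   # predicted process parameters for new inputs
```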

Keywords: Artificial Neural Network (ANN), Indirect Extrusion, Finite Element Analysis, MES.

1096 Improved Processing Speed for Text Watermarking Algorithm in Color Images

Authors: Hamza A. Al-Sewadi, Akram N. A. Aldakari

Abstract:

Copyright protection and ownership proof of digital multimedia are achieved nowadays by digital watermarking techniques. A text watermarking algorithm for protecting the property rights and ownership judgment of color images is proposed in this paper. Embedding is achieved by inserting text elements randomly into the color image as noise. The YIQ image processing model is found to be faster than other image processing methods and hence is adopted for the embedding process. An optional choice of encrypting the text watermark before embedding is also suggested (in case it is required by some applications), where the text can be encrypted using any enciphering technique, adding more difficulty for attackers. Experiments resulted in an embedding speed of more than double that of the other systems considered (such as the least significant bit and separate color code methods), and a fairly acceptable level of peak signal-to-noise ratio (PSNR) with low mean square error values for watermarking purposes.
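
A hedged sketch of the general idea (the exact embedding rule and key handling are not specified in the abstract): convert RGB to YIQ, add the watermark bits to the Y channel at pseudo-random positions, convert back, and report the PSNR.

```python
import numpy as np

RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])

def embed(rgb, text, key=1234, strength=2.0):
    # "key" and "strength" are illustrative parameters, not the paper's.
    yiq = rgb.astype(float) @ RGB2YIQ.T
    bits = np.unpackbits(np.frombuffer(text.encode(), dtype=np.uint8))
    luma = yiq[..., 0]                                          # Y channel (view into yiq)
    pos = np.random.default_rng(key).choice(luma.size, size=bits.size, replace=False)
    rows, cols = np.unravel_index(pos, luma.shape)
    luma[rows, cols] += strength * (2.0 * bits - 1.0)           # +/- strength acts as noise
    return np.clip(yiq @ np.linalg.inv(RGB2YIQ).T, 0, 255)      # back to RGB

def psnr(a, b):
    mse = np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

img = np.random.randint(0, 256, (256, 256, 3))
marked = embed(img, "owner: Alice")
print("PSNR [dB]:", psnr(img, marked))
```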

Keywords: Steganography, watermarking, private keys, time complexity measurements.

1095 Personal Authentication Using FDOST in Finger Knuckle-Print Biometrics

Authors: N. B. Mahesh Kumar, K. Premalatha

Abstract:

The inherent skin patterns created at the joints on the finger exterior are referred to as the finger knuckle-print. It can be exploited to identify a person uniquely because the finger knuckle-print is rich in texture. In the biometric system, the region of interest is used for the feature extraction algorithm. In this paper, local and global features are extracted separately. The Fast Discrete Orthonormal Stockwell Transform (FDOST) is used to extract the local features, and the global feature is obtained by extending the size of the FDOST to infinity. The two features are fused to increase recognition accuracy: a matching distance is calculated for each feature individually, and the two distances are then merged to obtain the final matching distance. The proposed scheme gives better performance in terms of equal error rate and correct recognition rate.

Keywords: Hamming distance, Instantaneous phase, Region of Interest, Recognition accuracy.

1094 Comparison between Haar and Daubechies Wavelet Transformations on FPGA Technology

Authors: Fatma H. Elfouly, Mohamed I. Mahmoud, Moawad I. M. Dessouky, Salah Deyab

Abstract:

Recently, Field Programmable Gate Array (FPGA) technology has offered the potential of designing high performance systems at low cost. The discrete wavelet transform has gained a reputation as a very effective signal analysis tool for many practical applications. However, due to its computation-intensive nature, current implementations of the transform fall short of meeting the real-time processing requirements of most applications. The objective of this paper is to implement the Haar and Daubechies wavelets using FPGA technology. In addition, the Bit Error Rate (BER) between the input audio signal and the reconstructed output signal is calculated for each wavelet. The BER shows that the implementations execute the wavelet transform correctly and satisfy the perfect reconstruction conditions. The design procedure has been explained and carried out using state-of-the-art Electronic Design Automation (EDA) tools for system design on FPGA. Simulation, synthesis, and implementation on the FPGA target technology have been carried out.
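
A software-only sketch of the verification step, using PyWavelets purely for illustration (the paper's implementation is an FPGA design): forward and inverse Haar/Daubechies transforms of a stand-in audio frame, followed by a bit error rate check.

```python
import numpy as np
import pywt

x = np.random.randint(0, 2 ** 16, 1024).astype(np.uint16)        # stand-in 16-bit audio frame

def ber_after_reconstruction(signal, wavelet):
    coeffs = pywt.wavedec(signal.astype(float), wavelet, level=3)
    rec = np.clip(np.rint(pywt.waverec(coeffs, wavelet)), 0, 65535).astype(np.uint16)
    tx_bits = np.unpackbits(signal.view(np.uint8))
    rx_bits = np.unpackbits(rec[: signal.size].view(np.uint8))
    return np.mean(tx_bits != rx_bits)                            # fraction of differing bits

for w in ("haar", "db4"):
    print(w, "BER:", ber_after_reconstruction(x, w))              # ~0 under perfect reconstruction
```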

Keywords: Daubechies wavelet, discrete wavelet transform, Haar wavelet, Xilinx FPGA.

1093 Modeling the Effect of Spacer Orientation on Heat Transfer in Membrane Distillation

Authors: M. Shakaib, M. Ehtesham-ul Haq, I. Ahmed, R.M. Yunus

Abstract:

Computational fluid dynamics (CFD) simulations carried out in this paper show that spacer orientation has a major influence on temperature patterns and heat transfer rates. The local heat flux values vary significantly from high to very low values at each filament when the spacer touches the membrane surface. The heat flux profile is more uniform when the spacer filaments are not in contact with the membrane, making this arrangement more beneficial. Temperature polarization is also found to be lower in this case than in the empty channel.

Keywords: heat transfer, membrane distillation, spacer, temperature polarization.

1092 Effectiveness of Contourlet vs Wavelet Transform on Medical Image Compression: a Comparative Study

Authors: Negar Riazifar, Mehran Yazdi

Abstract:

The Discrete Wavelet Transform (DWT) has proven far superior to the earlier Discrete Cosine Transform (DCT) and standard JPEG in natural as well as medical image compression. Due to its localization properties in both the spatial and transform domains, the quantization error introduced by the DWT does not propagate globally as in the DCT. Moreover, the DWT is a global approach that avoids block artifacts such as those in JPEG. However, recent reports on natural image compression have shown the superior performance of the contourlet transform, a new extension of the wavelet transform to two dimensions using nonseparable and directional filter banks, compared to the DWT. This is mostly due to the optimality of the contourlet transform in representing edges that are smooth curves. In this work, we investigate this claim for medical images, especially CT images, which has not been reported yet. To do so, we propose a compression scheme in the transform domain and compare the performance of the DWT and the contourlet transform in terms of PSNR for different compression ratios (CRs) using this scheme. The results obtained using different types of computed tomography images show that the DWT still performs well at lower CRs, but the contourlet transform performs better at higher CRs.

Keywords: Computed Tomography (CT), DWT, Discrete Contourlet Transform, Image Compression.

1091 Statistical Analysis of First Order Plus Dead-time System using Operational Matrix

Authors: Pham Luu Trung Duong, Moonyong Lee

Abstract:

To increase the precision and reliability of automatic control systems, we have to take into account random factors affecting the control system. Thus, an operational matrix technique is used for the statistical analysis of a first order plus dead-time system with a uniformly distributed random parameter. Examples with deterministic and stochastic disturbances are considered to demonstrate the validity of the method. A comparison with the Monte Carlo method is made to show the computational effectiveness of the method.
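
For concreteness, a sketch of the Monte Carlo baseline that the operational-matrix results are compared against, assuming a hypothetical first order plus dead-time model with a uniformly distributed time constant.

```python
# FOPDT step response K*(1 - exp(-(t - L)/tau)) for t >= L; nominal values are assumed.
import numpy as np

t = np.linspace(0, 10, 201)
rng = np.random.default_rng(0)
K, L = 1.0, 0.5                                      # gain and dead time (fixed here)
tau = rng.uniform(0.8, 1.2, size=5000)               # uniformly distributed random time constant

def step_response(tau_i):
    y = K * (1.0 - np.exp(-(t - L) / tau_i))
    return np.where(t >= L, y, 0.0)

samples = np.array([step_response(ti) for ti in tau])
mean, var = samples.mean(axis=0), samples.var(axis=0)  # output statistics vs. time
print("mean at t = 2 s:", mean[t.searchsorted(2.0)])
print("variance at t = 2 s:", var[t.searchsorted(2.0)])
```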

Keywords: First order plus dead-time, Operational matrix, Statistical analysis, Walsh function.

1090 Neural Network Ensemble-based Solar Power Generation Short-Term Forecasting

Authors: A. Chaouachi, R.M. Kamel, R. Ichikawa, H. Hayashi, K. Nagasaka

Abstract:

This paper presents the applicability of artificial neural networks to 24-hour-ahead solar power generation forecasting for a 20 kW photovoltaic system; the developed forecast is suitable for reliable microgrid energy management. In total, four neural networks were proposed: a multi-layer perceptron, a radial basis function network, a recurrent network, and a neural network ensemble consisting of bagged networks. The forecasting reliability of the proposed neural networks was evaluated in terms of forecasting error performance using statistical and graphical methods. The experimental results showed that all the proposed networks achieved acceptable forecasting accuracy. In comparison, the neural network ensemble gives the highest forecasting precision among the networks considered. In fact, each network of the ensemble over-fits to some extent, and this diversity enhances noise tolerance and forecasting generalization performance compared to the conventional networks.
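
A minimal sketch of the ensemble idea (bagged networks whose predictions are averaged); the feature set, sizes, and data below are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform([0, 10, 0], [1000, 35, 23], size=(500, 3))        # irradiance, temperature, hour
y = 0.018 * X[:, 0] * (1 - 0.004 * (X[:, 1] - 25)) + rng.normal(0, 0.2, 500)  # synthetic kW output

# "estimator" is the scikit-learn >= 1.2 name ("base_estimator" in older versions).
ensemble = BaggingRegressor(
    estimator=MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=1),
    n_estimators=10, random_state=1)
ensemble.fit(X, y)

mask = y > 0.5                                                     # avoid near-zero denominators
mape = np.mean(np.abs((y[mask] - ensemble.predict(X)[mask]) / y[mask])) * 100
print(f"in-sample MAPE: {mape:.1f}%")
```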

Keywords: Neural network ensemble, Solar power generation, 24 hour forecasting, Comparative study

1089 A Bayesian Network Reliability Modeling for FlexRay Systems

Authors: Kuen-Long Leu, Yung-Yuan Chen, Chin-Long Wey, Jwu-E Chen, Chung-Hsien Hsu

Abstract:

The increasing importance of FlexRay systems in the automotive domain continues to inspire related research. One primary issue is verifying the reliability of FlexRay systems, either from the protocol aspect or from the system design aspect. However, research rarely discusses the effect of network topology on system reliability. In this paper, we illustrate how to model the reliability of FlexRay systems with various network topologies using a well-known probabilistic reasoning technique, the Bayesian network. In this illustration, we especially investigate the effectiveness of the error containment built into the star topology and of the fault-tolerant midpoint synchronization algorithm adopted in the FlexRay communication protocol. Through a FlexRay steer-by-wire case study, the influence of different topologies on the failure probability of the FlexRay steer-by-wire system is demonstrated. The notable value of this research is to show that Bayesian network inference is a powerful and feasible method for the reliability assessment of FlexRay systems.

Keywords: Bayesian Network, FlexRay, fault tolerance, network topology, reliability.

1088 Intelligent Agent Approach to the Control of Critical Infrastructure Networks

Authors: James D. Gadze, Niki Pissinou, Kia Makki

Abstract:

In this paper, we propose an intelligent agent approach to control the electric power grid at a smaller granularity in order to give it self-healing capabilities. We develop a method using the influence model to transform transmission substations into information processing, analyzing, and decision making (intelligent behavior) units. We also develop a wireless communication method to deliver real-time uncorrupted information to an intelligent controller in a power system environment. A combined networking and information theoretic approach is adopted to meet both the delay and error probability requirements. We use a mobile agent approach to optimize the achievable information rate vector and the distribution of rates to users (sensors). We develop the concept and the quantitative tools required to create cooperating semi-autonomous subsystems, which puts the electric grid on the path towards an intelligent and self-healing system.

Keywords: Mobile agent, power system operation and control, real time, wireless communication.

1087 Gait Recognition System: Bundle Rectangle Approach

Authors: Edward Guillen, Daniel Padilla, Adriana Hernandez, Kenneth Barner

Abstract:

Biometric methods include recognition techniques such as fingerprint, iris, hand geometry, voice, face, ear, and gait. The gait recognition approach has some advantages; for example, it does not require the prior cooperation of the observed subject, and it can record many biometric features for deeper analysis, but most research proposals have a high computational cost. This paper presents a gait recognition system with feature subtraction on a bundle rectangle drawn over the observed person. Statistical results on a database of 500 videos are shown.

Keywords: Authentication, Biometrics, Gait Recognition, Human Identification, Security.

1086 Application of Artificial Neural Network to Forecast Actual Cost of a Project to Improve Earned Value Management System

Authors: Seyed Hossein Iranmanesh, Mansoureh Zarezadeh

Abstract:

This paper presents an application of an Artificial Neural Network (ANN) to forecast the actual cost of a project based on the earned value management system (EVMS). For this purpose, some projects were randomly selected from a standard data set, and the necessary progress data, such as actual cost, actual percent complete, baseline cost, and percent complete, were produced for five periods of each project. Then an ANN with five inputs, five outputs, and one hidden layer is trained to produce forecasted actual costs. The comparison between real and forecasted data shows good performance based on the Mean Absolute Percentage Error (MAPE) criterion. This approach could be applied to forecast project cost more accurately and decrease the risk of project cost overrun, and it is therefore beneficial for planning preventive actions.
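
A short sketch of the evaluation criterion: MAPE between actual and ANN-forecast costs over the five periods, with placeholder numbers rather than the paper's data.

```python
import numpy as np

actual_cost   = np.array([120.0, 245.0, 380.0, 520.0, 700.0])   # per period (placeholder units)
forecast_cost = np.array([115.0, 250.0, 372.0, 540.0, 690.0])   # ANN output (placeholder)

# Mean Absolute Percentage Error over the five periods
mape = np.mean(np.abs((actual_cost - forecast_cost) / actual_cost)) * 100
print(f"MAPE = {mape:.2f}%")
```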

Keywords: Earned Value Management System (EVMS), Artificial Neural Network (ANN), Estimate At Completion, Forecasting Methods, Project Performance Measurement.

1085 Quantifying and Adjusting the Effects of Publication Bias in Continuous Meta-Analysis

Authors: N.R.N. Idris

Abstract:

This study uses simulated meta-analyses to assess the effects of publication bias on meta-analysis estimates and to evaluate the efficacy of the trim and fill method in adjusting for these biases. The estimated effect sizes and their standard errors were evaluated in terms of statistical bias and coverage probability. The results demonstrate that if publication bias is not adjusted for, it could lead to up to 40% bias in the treatment effect estimates. Use of the trim and fill method could reduce the bias in the overall estimate by more than half. The method is optimal in the presence of moderate underlying bias but has minimal effect in the presence of low or severe publication bias. Additionally, the trim and fill method improves the coverage probability by more than half when subjected to the same level of publication bias as the unadjusted data. The method, however, tends to produce false positive results and will incorrectly adjust the data for publication bias up to 45% of the time. Nonetheless, the bias introduced into the estimates by this adjustment is minimal.
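
A brief sketch of the two evaluation measures used above, computed from placeholder simulated estimates: percentage relative bias of the pooled effect and coverage probability of its 95% confidence interval.

```python
import numpy as np

true_effect = 0.5
rng = np.random.default_rng(0)
est = rng.normal(0.45, 0.05, size=1000)            # pooled estimate from each simulated meta-analysis
se = np.full_like(est, 0.05)                       # corresponding standard errors (placeholder)

rel_bias = 100 * (est.mean() - true_effect) / true_effect          # percentage relative bias
lo, hi = est - 1.96 * se, est + 1.96 * se
coverage = np.mean((lo <= true_effect) & (true_effect <= hi))      # coverage probability of 95% CI

print(f"percentage relative bias: {rel_bias:.1f}%")
print(f"coverage probability: {coverage:.2f}")
```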

Keywords: Publication bias, Trim and Fill method, percentage relative bias, coverage probability

1084 Blind Identification Channel Using Higher Order Cumulants with Application to Equalization for MC−CDMA System

Authors: Mohammed Zidane, Said Safi, Mohamed Sabri, Ahmed Boumezzough

Abstract:

In this paper, we propose an algorithm based on higher order cumulants for blind impulse response identification of radio frequency channels and downlink MC-CDMA system equalization. In order to test its efficiency, we compare it with another algorithm proposed in the literature, considering a theoretical channel, the Proakis 'B' channel, and a practical frequency-selective fading channel, the Broadband Radio Access Network (BRAN C) channel standardized for MC-CDMA systems, both excited by non-Gaussian sequences. For the MC-CDMA part, we use the Minimum Mean Square Error (MMSE) equalizer after channel identification to correct the channel distortion. Simulation results, in a noisy environment and for different signal-to-noise ratios (SNR), are presented to illustrate the accuracy of the proposed algorithm.

Keywords: Blind identification and equalization, Higher Order Cumulants, (MC−CDMA) system, MMSE equalizer.

1083 Real Time Compensation of Machining Errors for Machine Tools NC based on Systematic Dispersion

Authors: M. Rahou, A. Cheikh, F. Sebaa

Abstract:

Manufacturing tolerancing is intended to determine the intermediate geometrical and dimensional states of a part during its manufacturing process. These manufacturing dimensions serve to satisfy not only the functional requirements given in the definition drawing, but also the manufacturing constraints, for example geometrical defects of the machine, vibration, and wear of the cutting tool. In this paper, an experimental study of the influence of cutting tool wear (systematic dispersion) is presented. The study was carried out in three stages. The first stage allows machining without elimination of dispersions (random and systematic), giving the manufacturing tolerances according to the total dispersions. In the second stage, the results of the first stage are filtered so as to obtain the tolerances according to the random dispersions. Finally, from the two previous stages, the systematic dispersions are derived. The objective of this study is to model, by the least squares method, the manufacturing error due to systematic dispersion. An approach for optimizing the manufacturing tolerances was then developed for machining on a CNC machine tool.
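
An illustrative sketch of the modeling step, with invented measurements: the systematic part of the machining error (drift with the number of machined parts due to tool wear) is fitted by least squares.

```python
import numpy as np

part_index = np.arange(1, 31)                                    # successive machined parts
# Invented measured errors [um]: a wear-driven drift plus random dispersion
error_um = 2.0 + 0.35 * part_index + np.random.default_rng(2).normal(0, 0.8, 30)

# Least-squares fit of a linear wear model  e(n) = a*n + b
a, b = np.polyfit(part_index, error_um, deg=1)
print(f"systematic dispersion model: e(n) = {a:.3f}*n + {b:.3f} um")

# Real-time compensation would subtract the modeled error from the programmed dimension.
compensation = -(a * part_index + b)
```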

Keywords: Dispersions, Compensation, modeling, manufacturing Tolerance, machine tool.

1082 An Energy Efficient Digital Baseband for Batteryless Remote Control

Authors: Wei-Da Toh, Yuan Gao, Minkyu Je

Abstract:

In this paper, an energy efficient digital baseband circuit for a piezoelectric (PE) harvester-powered batteryless remote control system is presented. A pulse-mode PE harvester, which provides energy for only a short duration, is adopted to replace the conventional chemical battery in the wireless remote controller. The transmitter digital baseband repeats the control command transmission once the digital circuit is initiated by the power-on-reset. A power efficient data frame format is proposed to maximize the transmission repetition time. By using the proposed frame format and the receiver clock and data recovery method, the receiver baseband is able to decode the command even when the received data contains 20% errors. The proposed transmitter and receiver basebands are implemented on an FPGA, and simulation results are presented.

Keywords: Clock and Data Recovery (CDR), Correlator, Digital Baseband, Gold Code, Power-On-Reset.

1081 Structural Evaluation of Airfield Pavement Using Finite Element Analysis Based Methodology

Authors: Richard Ji

Abstract:

Nondestructive deflection testing has been widely accepted as a cost-effective tool for evaluating the structural condition of airfield pavements. Backcalculation of pavement layer moduli can be used to characterize the existing pavement condition in order to compute the load-bearing capacity of the pavement. This paper presents an improved best-fit backcalculation methodology based on deflection predictions obtained using the finite element method (FEM). The best-fit approach is based on minimizing the squared error between falling weight deflectometer (FWD) measured deflections and FEM predicted deflections. The concrete elastic modulus and the modulus of subgrade reaction were then backcalculated using Heavy Weight Deflectometer (HWD) deflections collected at the National Airport Pavement Testing Facility (NAPTF) test site. The method is an alternative and more versatile approach for considering concrete slab geometry and HWD testing locations compared to currently available methods.
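
A hedged sketch of the best-fit idea: layer parameters are adjusted so that predicted deflections match the measured deflections in the least-squares sense. The deflection predictor below is a stand-in closed-form surrogate, not the paper's FEM model, and all numbers are placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

sensor_offsets = np.array([0.0, 0.3, 0.6, 0.9, 1.2, 1.5])         # m from load centre
measured = np.array([310.0, 260.0, 210.0, 165.0, 130.0, 100.0])    # microns (placeholder HWD basin)

def predicted_deflections(params):
    E, k = params                                                   # slab modulus, subgrade reaction
    # Hypothetical surrogate response; a real implementation would call the FEM model here.
    return 1e8 / (E * (1.0 + sensor_offsets) ** 2 + 1e3 * k)

def residuals(params):
    return predicted_deflections(params) - measured                 # error to be minimized

fit = least_squares(residuals, x0=[30000.0, 80.0],
                    bounds=([1000.0, 10.0], [1e6, 500.0]))
print("backcalculated E, k:", fit.x, " squared error:", np.sum(fit.fun ** 2))
```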

Keywords: Nondestructive testing, Pavement moduli backcalculation, Finite Element Method, FEM, concrete pavements.

1080 Variable Step-Size Affine Projection Algorithm With a Weighted and Regularized Projection Matrix

Authors: Tao Dai, Andy Adler, Behnam Shahrrava

Abstract:

This paper presents a forgetting factor scheme for variable step-size affine projection algorithms (APA). The proposed scheme uses a forgetting-processed input matrix as the projection matrix of the pseudo-inverse to estimate system deviation. This method introduces temporal weights into the projection matrix, which is typically a better model of the real error's behavior than homogeneous temporal weights. Regularization overcomes the ill-conditioning introduced by both the forgetting process and the increasing size of the input matrix. The algorithm is tested in independent trials with coloured input signals and various parameter combinations. Results show that the proposed algorithm is superior in terms of convergence rate and misadjustment compared to existing algorithms. As a special case, a variable step-size NLMS with a forgetting factor is also presented in this paper.
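
A generic sketch of an affine projection update with an exponentially weighted (forgetting-factor) and regularized projection matrix; it illustrates the family of algorithms discussed above, not the paper's exact variable step-size rule.

```python
import numpy as np

def apa_weighted(x, d, order=16, proj=4, mu=0.5, lam=0.95, delta=1e-3):
    """x: input signal, d: desired signal; returns the adapted filter taps."""
    w = np.zeros(order)
    lam_w = lam ** np.arange(proj)                     # temporal weights, newest column first
    for n in range(order + proj, len(x)):
        # Columns of X are the last `proj` input regressors (newest first).
        X = np.column_stack([x[n - k - order + 1:n - k + 1][::-1] for k in range(proj)])
        D = d[n - proj + 1:n + 1][::-1]
        E = D - X.T @ w                                # a-priori errors over the projection window
        G = X * lam_w                                  # forgetting-weighted projection matrix
        w += mu * G @ np.linalg.solve(G.T @ G + delta * np.eye(proj), E)   # regularized update
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
h = 0.3 * rng.standard_normal(16)                      # assumed unknown system
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat = apa_weighted(x, d)
print("coefficient error norm:", np.linalg.norm(w_hat - h))
```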

Keywords: Adaptive signal processing, affine projection algorithms, variable step-size adaptive algorithms, regularization.

1079 Explorative Data Mining of Constructivist Learning Experiences and Activities with Multiple Dimensions

Authors: Patrick Wessa, Bart Baesens

Abstract:

This paper discusses the use of explorative data mining tools that allow the educator to explore new relationships between reported learning experiences and actual activities, even if there are multiple dimensions with a large number of measured items. The underlying technology is based on the so-called Compendium Platform for Reproducible Computing (http://www.freestatistics.org), which was built on top of the computational R Framework (http://www.wessa.net).

Keywords: Reproducible computing, data mining, explorative data analysis, compendium technology, computer assisted education

1078 An Improved Transfer Logic of the Two-Path Algorithm for Acoustic Echo Cancellation

Authors: Chang Liu, Zishu He

Abstract:

Adaptive echo cancellers with the two-path algorithm are applied to avoid false adaptation during double-talk situations. In the two-path algorithm, several transfer logic solutions have been proposed to control the filter update. This paper presents an improved transfer logic solution that improves the convergence speed of the two-path algorithm and allows a reduction in memory elements and computational complexity. Simulation results show the improved performance of the proposed solution.
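
A generic sketch of a two-path (background/foreground) transfer decision, not the paper's improved logic: the background filter's coefficients are copied to the foreground filter only when the background path has produced clearly better echo reduction for several consecutive blocks.

```python
import numpy as np

def maybe_transfer(e_bg, e_fg, hold_state, margin_db=3.0, hold_blocks=3):
    """e_bg, e_fg: residual-echo blocks of the background and foreground filters."""
    # Relative echo reduction of the background path over the current block
    gain_db = 10 * np.log10((np.mean(e_fg ** 2) + 1e-12) / (np.mean(e_bg ** 2) + 1e-12))
    hold_state = hold_state + 1 if gain_db > margin_db else 0
    transfer = hold_state >= hold_blocks        # copy background -> foreground when True
    return transfer, hold_state

# Illustrative call with synthetic residuals
state = 0
transfer, state = maybe_transfer(0.1 * np.random.randn(256), np.random.randn(256), state)
```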

Keywords: Acoustic echo cancellation, Echo return loss enhancement (ERLE), Two-path algorithm, Transfer logic

1077 Evaluating per-user Fairness of Goal-Oriented Parallel Computer Job Scheduling Policies

Authors: Sangsuree Vasupongayya

Abstract:

A fair share objective has recently been included in the goal-oriented parallel computer job scheduling policy. However, previous work only presented the overall scheduling performance, so the per-user performance of the policy is still lacking. In this work, the details of per-user fair share performance under the Tradeoff-fs(Tx:avgX) policy are further evaluated. A basic fair share priority backfill policy, namely RelShare(1d), is also studied. The performance of all policies is collected using an event-driven simulator with three real job traces as input. The experimental results show that high-demand users usually benefit under most policies because their jobs are either large or numerous. In the large-job case, executing one job may result in over-share during that period; in the other case, the jobs may be backfilled for performance. However, users with a mixture of jobs may suffer because, while the smaller jobs are executing, the priority of the remaining jobs from the same user will be lower. Further analysis does not show any significant impact from users with many jobs or users with a large runtime approximation error.

Keywords: deviation, fair share, discrepancy search, priority scheduling.

1076 NonStationary CMA for Decision Feedback Equalization of Markovian Time Varying Channels

Authors: S. Cherif, M. Turki-Hadj Alouane

Abstract:

In this paper, we propose a modified version of the Constant Modulus Algorithm (CMA) tailored to the blind Decision Feedback Equalizer (DFE) for first-order Markovian time-varying channels. The proposed NonStationary CMA (NSCMA) is designed so that it explicitly takes into account the Markovian structure of the channel nonstationarity. Hence, unlike the classical CMA, the NSCMA is not blind with respect to the channel time variations. This greatly helps the equalizer in the case of realistic channels and avoids frequent transmission of training sequences. This paper develops a theoretical analysis of the steady-state performance of the CMA and the NSCMA for DFEs in a time-varying context, and approximate expressions for the mean square errors are derived. We prove that, in the steady state, the NSCMA exhibits better performance than the classical CMA. These new results are confirmed by simulation. Through an experimental study, we demonstrate that the Bit Error Rate (BER) is reduced by the NSCMA-DFE, and that the BER improvement achieved by the NSCMA-DFE grows as the channel time variations become more severe.
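
For reference, a sketch of the classical CMA update that the NSCMA modifies; the Markovian channel-tracking term is the paper's contribution and is not reproduced here.

```python
import numpy as np

def cma_step(w, u, mu=1e-3, R2=1.0):
    """One CMA adaptation step: w are equalizer taps, u the regressor (newest sample first)."""
    y = np.vdot(w, u)                       # equalizer output y = w^H u
    e = y * (np.abs(y) ** 2 - R2)           # constant-modulus error
    return w - mu * e.conjugate() * u       # stochastic-gradient update of the taps

# Dispersion constant for a given constellation: R2 = E[|a|^4] / E[|a|^2]  (QPSK shown)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
R2 = np.mean(np.abs(qpsk) ** 4) / np.mean(np.abs(qpsk) ** 2)

w = np.zeros(8, dtype=complex); w[0] = 1.0                    # centre-spike initialization
u = (np.random.randn(8) + 1j * np.random.randn(8)) / np.sqrt(2)
w = cma_step(w, u, R2=R2)
```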

Keywords: Time varying channel, Markov model, Blind DFE, CMA, NSCMA.

1075 Development of Analytical Model of Bending Force during 3-Roller Conical Bending Process and Its Experimental Verification

Authors: Mahesh Chudasama, Harit Raval

Abstract:

Conical sections and shells made from metal plates are widely used in various industrial applications, and the 3-roller conical bending process is preferably used to produce them. The bending mechanics involved in the process are complex, and little work has been done in this area. In the present paper, an analytical model is developed to predict the bending force acting during the 3-roller conical bending process. To verify the developed model, conical bending experiments were performed and the analytical and experimental results compared. The force predicted by the analytical model is in close agreement with the experimental results, with a prediction error of ±10%, so the model gives quite satisfactory results. The present model is also compared with a previously published bending force prediction model and is found to give better results. The developed model can be used to estimate the bending force during the 3-roller bending process and can be useful to designers of 3-roller conical bending machines.

Keywords: Bending-force, Experimental-verification, Internal-moment, Roll-bending.

1074 Building Gabor Filters from Retinal Responses

Authors: Johannes Partzsch, Christian Mayr, Rene Schuffny

Abstract:

Starting from a biologically inspired framework, Gabor filters were built up from retinal filters via LMSE algorithms. A subset of retinal filter kernels was chosen to form a particular Gabor filter using a weighted sum. One-dimensional optimization approaches were shown to be inappropriate for the problem. All model parameters were fixed with biological or image processing constraints. Detailed analysis of the optimization procedure led to the introduction of a minimization constraint. Finally, quantization of the weighting factors was investigated. This resulted in an optimized cascaded structure of a Gabor filter bank implementation with lower computational cost.

Keywords: Gabor filter, image processing, optimization

1073 Ultra High Speed Approach for Document Skew Detection and Correction Based On Centre of Gravity

Authors: Seyyed Yasser Hashemi

Abstract:

Skew detection and correction (SDC) has a direct effect on the efficiency and accuracy of document segmentation and analysis and is thus considered a very important step in the document analysis field. Skew is a major problem in document analysis for every language; for Arabic/Persian scripts, the problem is more severe because of special features of these languages. In this paper, an efficient and fast algorithm for Document Skew Detection (DSD) based on segmentation and the Center of Gravity (COG) is proposed. The algorithm was examined on 150 Arabic/Persian and English documents, and the SDC process was completed successfully for 93 percent of the documents with an error of less than 1°. The algorithm shows better results for English documents than for Arabic/Persian documents. The proposed method also gives favorable results for handwritten, printed, and complicated documents such as newspapers and journals, even at very low quality and resolution.
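
A simplified sketch of the centre-of-gravity idea, not the exact published algorithm: the binarized page is split into vertical strips, the ink centre of gravity of each strip is computed, and a line fitted through those centres gives the skew angle.

```python
import numpy as np

def estimate_skew_deg(binary_img, strips=16):
    """binary_img: 2-D array with 1 for ink pixels, 0 for background."""
    h, w = binary_img.shape
    xs, ys = [], []
    for s in range(strips):
        strip = binary_img[:, s * w // strips:(s + 1) * w // strips]
        rows, cols = np.nonzero(strip)
        if rows.size == 0:
            continue                                     # skip empty strips
        ys.append(rows.mean())                           # vertical COG of the strip
        xs.append(s * w // strips + cols.mean())         # horizontal COG in page coordinates
    slope = np.polyfit(xs, ys, deg=1)[0]                 # line through the strip centres
    return np.degrees(np.arctan(slope))

# Correction would then rotate the image by -estimate_skew_deg(img).
```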

Keywords: Arabic/Persian document, Baseline, Centre of gravity, Document segmentation, Skew detection and correction.

1072 Economic Loss due to Ganoderma Disease in Oil Palm

Authors: K. Assis, K. P. Chong, A. S. Idris, C. M. Ho

Abstract:

Oil palm, or Elaeis guineensis, is considered the golden crop in Malaysia, but the oil palm industry in this country is now facing its most devastating disease, Ganoderma basal stem rot. The objective of this paper is to analyze the economic loss due to this disease. Three commercial oil palm sites were selected for collecting the data required for the economic analysis. The yield parameter used to measure the loss was the total weight of fresh fruit bunches over six months. The predictors include disease severity, change in disease severity, number of infected neighbor palms, age of palm, planting generation, topography, and first-order interaction variables. The yield loss estimation model was identified using a backward elimination based regression method. Diagnostic checking was conducted on the residuals of the best yield loss model, and the mean absolute percentage error (MAPE) was used to measure the forecast performance of the model. The best yield loss model was then used to estimate the economic loss using the current monthly price of fresh fruit bunches at the mill gate.

Keywords: Ganoderma, oil palm, regression model, yield loss, economic loss.

1071 The Accuracy of the Flight Derivative Estimates Derived from Flight Data

Authors: Jung-hoon Lee, Eung Tai Kim, Byung-hee Chang, In-hee Hwang, Dae-sung Lee

Abstract:

The accuracy of the stability and control derivatives of a light aircraft estimated from flight test data was evaluated. The light aircraft, named ChangGong-91, is the first aircraft certified by the Korean government. The output error method, a maximum likelihood estimation technique that considers measurement noise only, was used to analyze the measured aircraft responses. Multi-step control inputs were applied in order to excite the short-period mode for the longitudinal motion and the Dutch-roll mode for the lateral-directional motion. The estimated stability/control derivatives of ChangGong-91 were analyzed for the assessment of handling qualities by comparing them with those of similar aircraft. The accuracy of the flight derivative estimates derived from the flight test measurements was examined in terms of engineering judgment, scatter, and the Cramer-Rao bound, and turned out to be satisfactory with minor defects.

Keywords: Light Aircraft, Flight Test, Accuracy, Engineering Judgment, Scatter, Cramer-Rao Bound

1070 C-LNRD: A Cross-Layered Neighbor Route Discovery for Effective Packet Communication in Wireless Sensor Network

Authors: K. Kalaikumar, E. Baburaj

Abstract:

One of the problems to be addressed in wireless sensor networks is cross-layer communication. A cross-layer architecture shares information across the layers, ensuring Quality of Service (QoS). With this shared information, the MAC protocol adapts its functionality, such as route selection, to the changing sensor network environment. However, time slot assignment and neighbour route selection duration for the cross layer have not been addressed. Time-varying physical layer communication over the cross layer causes a high traffic load in the sensor network. Though the traffic load can be reduced using a cross-layer optimization procedure, the computational cost is high. To improve communication efficacy in the sensor network, a self-determined time slot based Cross-Layered Neighbour Route Discovery (C-LNRD) method is presented in this paper. In the presented work, the initial step is to discover the route in the sensor network using Dynamic Source Routing based Medium Access Control (MAC) sub-layers. This process considers MAC layer operation with dynamic route neighbour table discovery. The discovered route path for packet communication then employs the Broad Route Distributed Time Slot Assignment method on the cross-layered sensor network system. Broad Route means time slotting over the varying lengths of the route paths. During packet communication in this sensor network, packet transmission is adjusted over different times with varying ranges to control the traffic rate. Finally, a Rayleigh fading model is developed in C-LNRD to characterize the performance of the sensor network communication structure. The main task of the Rayleigh fading model is to measure the power level of each communication under the MAC sub-layer; the minimized power level helps reduce the computational cost of packet communication in the sensor network. Experiments are conducted on factors such as the power level of packet communication, neighbour route discovery time, and information (i.e., packet) propagation speed.

Keywords: Medium access control, neighbour route discovery, wireless sensor network, Rayleigh fading, distributed time slot assignment

1069 Comparison of Newton Raphson and Gauss Seidel Methods for Power Flow Analysis

Authors: H. Abaali, T. Talbi, R.Skouri

Abstract:

This paper presents a comparative study of the Gauss-Seidel and Newton-Raphson (polar coordinates) methods for power flow analysis. The effectiveness of these methods is evaluated and tested on different IEEE bus test systems on the basis of number of iterations, computational time, tolerance value, and convergence.
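
A minimal Gauss-Seidel power-flow sketch for PQ buses (bus 0 as slack), to make the iteration being compared concrete; the admittance matrix and injections form a toy 3-bus example, not one of the IEEE test systems used in the paper.

```python
import numpy as np

Ybus = np.array([[ 3 - 9j, -2 + 6j, -1 + 3j],
                 [-2 + 6j,  3 - 9j, -1 + 3j],
                 [-1 + 3j, -1 + 3j,  2 - 6j]])
S = np.array([0.0 + 0j, -0.5 - 0.2j, -0.3 - 0.1j])    # net injected power (pu), bus 0 = slack
V = np.array([1.05 + 0j, 1.0 + 0j, 1.0 + 0j])          # flat start, slack voltage fixed

for iteration in range(100):
    V_old = V.copy()
    for i in range(1, len(V)):                          # Gauss-Seidel update of each PQ bus
        sigma = Ybus[i] @ V - Ybus[i, i] * V[i]         # sum of Y_ij * V_j for j != i
        V[i] = (np.conj(S[i] / V[i]) - sigma) / Ybus[i, i]
    if np.max(np.abs(V - V_old)) < 1e-6:                # convergence tolerance
        print(f"converged in {iteration + 1} iterations:", np.round(V, 4))
        break
```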

Keywords: Convergence time, Gauss-Seidel Method, Newton-Raphson Method, number of iteration, power flow analysis.
