Search results for: attribution error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1258

418 Investigation of Roll off Factor in Pulse Shaping Filter on Maximal Ratio Combining for CDMA 2000 System

Authors: G. S. Walia, H. P. Singh, Padma D.

Abstract:

The invention of 3G technology made it possible to integrate a wide variety of communication services. Code Division Multiple Access 2000 operates at chip rates of 1.2288 or 3.6864 Mcps (1x or 3x systems). It is a 3G system that offers high bandwidth and wireless broadband services, but its efficiency is lowered by factors such as fading, interference, scattering, and absorption. This paper investigates the effect of maximal ratio combining (MRC) diversity and of the roll-off factor of the Root Raised Cosine (RRC) filter for the BPSK and QPSK modulation schemes. With a proper pulse-shaping technique, data can be transmitted with minimal inter-symbol interference (ISI) within a limited bandwidth. Bit error rate (BER) performance is analyzed by applying the diversity technique while varying the roll-off factor for BPSK and QPSK. Increasing the roll-off factor reduces ISI, and diversity reduces fading.
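
The pulse-shaping step the abstract refers to can be made concrete with a short sketch of the root raised cosine impulse response; the filter span, samples per symbol, and roll-off values below are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def rrc_pulse(beta, span, sps):
    """Root raised cosine impulse response.

    beta : roll-off factor (0 < beta <= 1)
    span : filter length in symbol periods
    sps  : samples per symbol (symbol period normalized to T = 1)
    """
    T = 1.0
    t = np.arange(-span / 2, span / 2 + 1.0 / sps, 1.0 / sps)
    h = np.zeros_like(t)
    for i, ti in enumerate(t):
        if np.isclose(ti, 0.0):
            h[i] = 1.0 + beta * (4.0 / np.pi - 1.0)
        elif np.isclose(abs(ti), T / (4.0 * beta)):
            h[i] = (beta / np.sqrt(2.0)) * (
                (1 + 2.0 / np.pi) * np.sin(np.pi / (4.0 * beta))
                + (1 - 2.0 / np.pi) * np.cos(np.pi / (4.0 * beta)))
        else:
            num = (np.sin(np.pi * ti * (1 - beta))
                   + 4 * beta * ti * np.cos(np.pi * ti * (1 + beta)))
            den = np.pi * ti * (1 - (4 * beta * ti) ** 2)
            h[i] = num / den
    return h / np.sqrt(np.sum(h ** 2))      # unit-energy normalization

# A larger roll-off widens the occupied bandwidth but damps the pulse tails,
# which is the mechanism by which it reduces inter-symbol interference.
for beta in (0.22, 0.35, 0.5):
    h = rrc_pulse(beta, span=8, sps=8)
    print(beta, np.max(np.abs(h[: len(h) // 4])))   # outer-tail amplitude shrinks with beta
```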

Keywords: CDMA2000, Diversity, Root Raised Cosine, Roll off factor.

417 Face Recognition Based On Vector Quantization Using Fuzzy Neuro Clustering

Authors: Elizabeth B. Varghese, M. Wilscy

Abstract:

A face recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame. Many algorithms have been proposed for face recognition. Vector Quantization (VQ) based face recognition is a novel approach. Here, a new codebook generation method for VQ-based face recognition using Integrated Adaptive Fuzzy Clustering (IAFC) is proposed. IAFC is a fuzzy neural network that incorporates a fuzzy learning rule into a competitive neural network. The performance of the proposed algorithm is demonstrated using the publicly available AT&T database, the Yale database, the Indian Face database, and a small face database, the DCSKU database, created in our lab. On all the databases the proposed approach achieved a higher recognition rate than most of the existing methods. The proposed codebook is also better than the existing methods in terms of Equal Error Rate (EER).

Keywords: Face Recognition, Vector Quantization, Integrated Adaptive Fuzzy Clustering, Self Organization Map.

416 MOSFET Based ADC for Accurate Positioning of Control Valves in Industry

Authors: K. Diwakar, N. Vasudevan, C. Senthilpari

Abstract:

This paper presents a MOSFET-based analog-to-digital converter that is simple in design, has high resolution, and offers a conversion rate better than that of a dual-slope ADC. It has no DAC to limit its performance, introduces no conversion error, operates over a wide range of inputs, and never becomes unstable. One of the industrial applications where the proposed high-resolution MOSFET ADC can be used is the positioning of control valves in a multi-channel data acquisition and control system (DACS) that uses stepper motors as actuators of the control valves. It is observed that in a DACS with ten control valves, a positional accuracy of 0.02% can be achieved with a data update period of 250 ms and stepper motors with a maximum pulse rate of 20 kpulses per second and a minimum pulse width of 2.5 μs. The accuracy reported so far by other authors is 0.2%, with an update period of 255 ms and an 8-bit DAC. The accuracy of the proposed configuration is limited by the available precision of the stepper motor, not by the MOSFET-based ADC.
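
The quoted 0.02% positional accuracy follows directly from the stepper limits stated above; a quick arithmetic check (illustrative only):

```python
# Consistency check for the quoted positional accuracy, using only the numbers
# given in the abstract.
max_pulse_rate = 20_000        # stepper pulses per second
update_period = 0.250          # data update period in seconds
steps_per_update = max_pulse_rate * update_period    # 5000 steps per update
resolution = 1.0 / steps_per_update                  # 0.0002 -> 0.02 % of full travel
print(steps_per_update, f"{resolution:.2%}")
```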

Keywords: MOSFET based ADC, Actuators, Positional accuracy, Stepper Motors.

415 Adaptive Noise Reduction Algorithm for Speech Enhancement

Authors: M. Kalamani, S. Valarmathy, M. Krishnamoorthi

Abstract:

In this paper, a Least Mean Square (LMS) adaptive noise reduction algorithm is proposed to enhance the speech signal from noisy speech. The speech signal is enhanced by varying the step size as a function of the input signal. Objective and subjective measures are made under various noises for the proposed and existing algorithms. The experimental results show that the proposed LMS adaptive noise reduction algorithm reduces the Mean Square Error (MSE) and Log Spectral Distance (LSD) compared with the earlier methods under various noise conditions and input SNR levels. In addition, the proposed algorithm increases the Peak Signal to Noise Ratio (PSNR) and segmental SNR improvement (ΔSNRseg) values and improves the Mean Opinion Score (MOS) compared with various existing LMS adaptive noise reduction algorithms. These results indicate that the proposed algorithm reduces speech distortion and residual noise compared with the existing methods.
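
The abstract does not spell out the exact step-size rule, so the sketch below uses the standard normalized-LMS scaling (step size inversely proportional to the instantaneous reference power) as a stand-in, to show how an input-dependent step size fits into an adaptive noise canceller; the toy signals and parameters are assumptions.

```python
import numpy as np

def variable_step_lms(noisy, noise_ref, order=16, mu0=0.5, eps=1e-6):
    """Adaptive noise canceller whose effective step size mu0 / (eps + ||x||^2)
    shrinks when the reference input is strong (normalized-LMS style scaling,
    used here as a stand-in for the paper's step-size rule)."""
    w = np.zeros(order)
    enhanced = np.zeros(len(noisy))
    for n in range(order, len(noisy)):
        x = noise_ref[n - order + 1:n + 1][::-1]   # current + past reference samples
        e = noisy[n] - w @ x                       # enhanced-speech sample (error)
        w += (mu0 / (eps + x @ x)) * e * x         # input-dependent step size
        enhanced[n] = e
    return enhanced

# Toy usage: a tone buried in coloured noise, with a noise-only reference channel.
rng = np.random.default_rng(0)
t = np.arange(8000) / 8000.0
clean = np.sin(2 * np.pi * 440 * t)
ref = rng.standard_normal(t.size)
noise = np.convolve(ref, np.ones(8) / 8)[:t.size]  # coloured version reaching the microphone
enhanced = variable_step_lms(clean + noise, ref)
print(np.mean((enhanced[1000:] - clean[1000:]) ** 2))  # residual mean square error
```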

Keywords: LMS, speech enhancement, speech quality, residual noise.

414 Analysis of Joint Source Channel LDPC Coding for Correlated Sources Transmission over Noisy Channels

Authors: Marwa Ben Abdessalem, Amin Zribi, Ammar Bouallègue

Abstract:

In this paper, a Joint Source Channel (JSC) coding scheme based on LDPC codes is investigated. We consider two concatenated LDPC codes: one compresses a correlated source and the second protects it against channel degradations. The original information can be reconstructed at the receiver by a joint decoder, in which the source decoder and the channel decoder run in parallel and exchange extrinsic information. We investigate the performance of the JSC LDPC code in terms of Bit Error Rate (BER) for transmission over an Additive White Gaussian Noise (AWGN) channel and for different source and channel rate parameters. We emphasize how JSC LDPC presents a performance tradeoff depending on the channel state and on the source correlation. We show that JSC LDPC is an efficient solution for relatively low Signal-to-Noise Ratio (SNR) channels, especially with highly correlated sources. Finally, a source-channel rate optimization has to be applied to guarantee the best JSC LDPC system performance for a given channel.

Keywords: AWGN channel, belief propagation, joint source channel coding, LDPC codes.

413 An Improved Conjugate Gradient Based Learning Algorithm for Back Propagation Neural Networks

Authors: N. M. Nawi, R. S. Ransing, M. R. Ransing

Abstract:

The conjugate gradient optimization algorithm is combined with the modified back propagation algorithm to yield a computationally efficient algorithm for training multilayer perceptron (MLP) networks (CGFR/AG). The computational efficiency is enhanced by adaptively modifying the initial search direction, as described in the following steps: (1) the standard back propagation algorithm is modified by introducing a gain variation term in the activation function, (2) the gradient of the error is calculated with respect to the weights and gain values, and (3) a new search direction is determined using the information calculated in step (2). The performance of the proposed method is demonstrated by comparing its accuracy and computation time with those of the conjugate gradient algorithm used in the MATLAB Neural Network Toolbox. The results show that the computational efficiency of the proposed method is better than that of the standard conjugate gradient algorithm.

Keywords: Adaptive gain variation, back-propagation, activation function, conjugate gradient, search direction.

412 Speech Intelligibility Improvement Using Variable Level Decomposition DWT

Authors: Samba Raju Chiluveru, Manoj Tripathy

Abstract:

Intelligibility is an essential characteristic of a speech signal; it determines how well the information in the signal can be understood. Background noise in the environment can deteriorate the intelligibility of recorded speech. In this paper, we present a simple variance-subtracted, variable-level discrete wavelet transform that improves the intelligibility of speech. The proposed algorithm does not require an explicit estimation of the noise, i.e., prior knowledge of the noise; hence, it is easy to implement and reduces the computational burden. The proposed algorithm selects a separate decomposition level for each frame based on signal-dominant and noise-dominant criteria. The performance of the proposed algorithm is evaluated with the speech intelligibility measure STOI, and the results obtained are compared with universal Discrete Wavelet Transform (DWT) thresholding and Minimum Mean Square Error (MMSE) methods. The experimental results reveal that the proposed scheme outperforms the competing methods.

Keywords: Discrete Wavelet Transform, speech intelligibility, STOI, standard deviation.

411 A Pole Radius Varying Notch Filter with Transient Suppression for Electrocardiogram

Authors: Ramesh Rajagopalan, Adam Dahlstrom

Abstract:

Noise removal techniques play a vital role in the performance of electrocardiographic (ECG) signal processing systems. ECG signals can be corrupted by various kinds of noise such as baseline wander, electromyographic interference, and powerline interference. One of the significant challenges in ECG signal processing is the degradation caused by additive 50 or 60 Hz powerline interference. This work investigates the removal of powerline interference and the suppression of the transient response when filtering noise-corrupted ECG signals. We demonstrate the effectiveness of an infinite impulse response (IIR) notch filter with a time-varying pole radius for improving the transient behavior. The temporary change in the pole radius of the filter diminishes the transient. Simulation results show that the proposed IIR filter with time-varying pole radius outperforms traditional IIR notch filters in terms of mean square error and transient suppression.
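
A minimal sketch of the idea, assuming a second-order notch whose pole radius ramps exponentially from a small start-up value to its steady-state value; the specific radius trajectory, sampling rate, and test signal below are assumptions, not the paper's.

```python
import numpy as np

def notch_time_varying_r(x, fs, f0=50.0, r_final=0.98, r0=0.8, tau=0.05):
    """Second-order IIR notch at f0 whose pole radius r[n] ramps from r0 to
    r_final with time constant tau (seconds). Keeping r small at start-up
    shortens the transient; the steady-state notch is as sharp as a fixed-r
    filter."""
    w0 = 2 * np.pi * f0 / fs
    n = np.arange(len(x))
    r = r_final - (r_final - r0) * np.exp(-n / (tau * fs))   # pole-radius schedule
    y = np.zeros(len(x))
    x1 = x2 = y1 = y2 = 0.0
    for i in range(len(x)):
        # zeros fixed on the unit circle at +/- w0, poles at radius r[i]
        yi = (x[i] - 2 * np.cos(w0) * x1 + x2
              + 2 * r[i] * np.cos(w0) * y1 - r[i] ** 2 * y2)
        y[i] = yi
        x2, x1 = x1, x[i]
        y2, y1 = y1, yi
    return y

# Toy ECG-like signal: 1 Hz pulses plus 50 Hz interference.
fs = 500
t = np.arange(0, 5, 1 / fs)
ecg = np.where((t % 1.0) < 0.04, 1.0, 0.0) + 0.3 * np.sin(2 * np.pi * 50 * t)
print(notch_time_varying_r(ecg, fs)[:10])
```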

Keywords: Notch filter, ECG, transient, pole radius.

410 Comparison between Haar and Daubechies Wavelet Transforms on FPGA Technology

Authors: Mohamed I. Mahmoud, Moawad I. M. Dessouky, Salah Deyab, Fatma H. Elfouly

Abstract:

Recently, Field Programmable Gate Array (FPGA) technology has offered the potential of designing high-performance systems at low cost. The discrete wavelet transform has gained the reputation of being a very effective signal analysis tool for many practical applications. However, due to its computation-intensive nature, current implementations of the transform fall short of meeting the real-time processing requirements of most applications. The objective of this paper is to implement the Haar and Daubechies wavelets using FPGA technology and to compare the two. The Bit Error Rate (BER) between the input audio signal and the reconstructed output signal is calculated for each wavelet. It is seen that the BER using the Daubechies wavelet is less than that using the Haar wavelet. The design procedure is explained and carried out using state-of-the-art Electronic Design Automation (EDA) tools for system design on FPGA. Simulation, synthesis, and implementation on the FPGA target technology have been carried out.
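
For reference, a one-level floating-point Haar analysis/synthesis pair is sketched below; in double-precision software the reconstruction is essentially exact, so any nonzero BER measured on the FPGA would presumably come from the finite-precision hardware arithmetic, which this sketch does not model.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: pairwise averages (approximation) and
    differences (detail), scaled for orthonormality."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt: interleave the reconstructed even/odd samples."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

audio = np.random.default_rng(1).standard_normal(1024)   # stand-in audio frame
a, d = haar_dwt(audio)
print(np.max(np.abs(haar_idwt(a, d) - audio)))            # ~1e-16: exact in float
```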

Keywords: Daubechies wavelet, discrete wavelet transform, Haar wavelet, Xilinx FPGA.

409 Estimating Shortest Circuit Path Length Complexity

Authors: Azam Beg, P. W. Chandana Prasad, S.M.N.A Senenayake

Abstract:

When binary decision diagrams are formed from uniformly distributed Monte Carlo data for a large number of variables, the complexity of the decision diagrams exhibits a predictable relationship to the number of variables and minterms. In the present work, a neural network model has been used to analyze the pattern of shortest path length for a larger number of Monte Carlo data points. The neural model shows strong descriptive power for the ISCAS benchmark data, with an RMS error of 0.102 for the shortest path length complexity. Therefore, the model can be considered a method of predicting path length complexities; this is expected to lead to minimum time complexity of very large-scale integrated circuits and related computer-aided design tools that use binary decision diagrams.

Keywords: Monte Carlo circuit simulation data, binary decision diagrams, neural network modeling, shortest path length estimation

408 An Efficient Collocation Method for Solving the Variable-Order Time-Fractional Partial Differential Equations Arising from the Physical Phenomenon

Authors: Haniye Dehestani, Yadollah Ordokhani

Abstract:

In this work, we present an efficient approach for solving variable-order time-fractional partial differential equations based on Legendre and Laguerre polynomials. First, we introduce pseudo-operational matrices of integer and variable fractional order of integration by using some properties of the Riemann-Liouville fractional integral. These matrices are then applied, together with the collocation method and Legendre-Laguerre functions, to solve variable-order time-fractional partial differential equations. An estimate of the error is also presented. Finally, we investigate numerical examples arising in physics to demonstrate the accuracy of the present method. A comparison of the results obtained by the present method with the exact solution and with other methods reveals that the method is very effective.

Keywords: Collocation method, fractional partial differential equations, Legendre-Laguerre functions, pseudo-operational matrix of integration.

407 Design of Encoding Calculator Software for Huffman and Shannon-Fano Algorithms

Authors: Wilson Chanhemo, Henry R. Mgombelo, Omar F. Hamad, T. Marwala

Abstract:

This paper presents the design of source encoding calculator software that applies two famous algorithms from the field of information theory: the Shannon-Fano and Huffman schemes. The design makes it easy to realize the algorithms without the cumbersome, tedious, and error-prone manual mechanism of encoding signals for transmission. The work describes the design of the software, how it works, a comparison with related works, its efficiency, its usefulness in information technology studies, and the future prospects of the software for engineers, students, technicians, and others. The designed "Encodia" software has been developed, tested, and found to meet the intended requirements. It is expected that this application will help students and teaching staff in their daily information-theory-related tasks. Work is ongoing to modify this tool so that it can also be more useful in research activities on source coding.
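
The Huffman half of what such a calculator computes is the classic greedy merge of the two least-probable nodes; a minimal sketch (not the Encodia implementation) is shown below. Shannon-Fano instead splits the sorted symbol set from the top down, which generally yields a slightly longer average code length.

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build {symbol: bitstring} from a frequency table by repeatedly merging
    the two least-probable nodes (the classic Huffman procedure)."""
    heap = [[weight, [sym, ""]] for sym, weight in freqs.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                                  # degenerate one-symbol source
        return {heap[0][1][0]: "0"}
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]                     # prefix the lighter subtree with 0
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]                     # ... and the heavier subtree with 1
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heapq.heappop(heap)[1:])

text = "this is an example for huffman encoding"
book = huffman_code(Counter(text))
bits = "".join(book[ch] for ch in text)
print(len(bits), "coded bits vs", 8 * len(text), "bits uncoded")
```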

Keywords: Coding techniques, Coding algorithms, Coding efficiency, Encodia, Encoding software.

406 A Simplified Single Correlator Rake Receiver for CDMA Communications

Authors: K. Murali Krishna, Abhijit Mitra, C. Ardil

Abstract:

This paper presents a single-correlator RAKE receiver for direct sequence code division multiple access (DS-CDMA) systems. In conventional RAKE receivers, multiple correlators are used to despread the multipath signals, which are then aligned and combined in a later stage before a bit decision is made. The simplified receiver structure presented here uses a single correlator and a single code sequence generator to recover the multipaths. Modified Walsh-Hadamard codes are used for data spreading, providing better decorrelation of the multipath signals. The main advantage of this receiver structure is that it requires only a single correlator and a code generator, in contrast to the conventional RAKE receiver concept with multiple correlators. The results show that the proposed receiver achieves better bit error rates than the conventional one when more than one multipath component is present.
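
The baseline Walsh-Hadamard spreading codes mentioned above come from the Sylvester construction; the paper's modification is not described in the abstract, so only the standard construction is sketched here.

```python
import numpy as np

def walsh_hadamard(order):
    """Sylvester construction of a 2^order x 2^order Hadamard matrix; its rows
    are the (unmodified) Walsh-Hadamard spreading codes."""
    H = np.array([[1]])
    for _ in range(order):
        H = np.block([[H, H], [H, -H]])
    return H

H = walsh_hadamard(3)          # 8 chips per code
print(H @ H.T)                 # 8 * I: the rows are mutually orthogonal
```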

Keywords: RAKE receiver, Code division multiple access, Modified Walsh-Hadamard codes, Single correlator.

405 Suitable Die Shaping for a Rectangular Shape Bottle by Application of FEM and AI Technique

Authors: N. Ploysook, R. Rugsaj, C. Suvanjumrat

Abstract:

A characteristic requirement for producing rectangular bottles is a uniform wall thickness of the plastic bottle. Die shaping is a good technique for controlling the wall thickness of bottles. The finite element method (FEM), an advanced technique for simulating the blowing of a parison into a rectangular bottle, was used to reduce the plastic wasted by the trial-and-error approach to die shaping and parison control. An artificial intelligence (AI) technique comprising an artificial neural network and a genetic algorithm was selected to optimize the die gap shape from the FEM results. The AI technique could find a suitable die gap shape for parison blow molding that does not depend on a parison control method to produce rectangular bottles with uniform walls. In particular, this approach can be used with inexpensive blow molding machines without a parison controller, and will therefore reduce production cost in the bottle blow molding process.

Keywords: AI, bottle, die shaping, FEM.

404 A Fuzzy Time Series Forecasting Model for Multi-Variate Forecasting Analysis with Fuzzy C-Means Clustering

Authors: Emrah Bulut, Okan Duru, Shigeru Yoshida

Abstract:

In this study, the fuzzy integrated logical forecasting method (FILF) is extended to multi-variate systems by using a vector autoregressive model. The fuzzy time series forecasting (FTSF) method was introduced by Song and Chissom [1]-[2] and later improved by Chen. Unlike the existing literature, the proposed model is compared not only with previous FTS models but also with conventional time series methods such as the classical vector autoregressive model. The cluster optimization is based on the C-means clustering method. An empirical study is performed on the prediction of the chartering rates of a group of dry bulk cargo ships. The root mean squared error (RMSE) metric is used to compare the results of the methods, and the proposed method outperforms both the traditional FTS methods and the classical time series methods.

Keywords: C-means clustering, Fuzzy time series, Multi-variate design

403 A Usability Testing Approach to Evaluate User-Interfaces in Business Administration

Authors: Salaheddin Odeh, Ibrahim O. Adwan

Abstract:

This interdisciplinary study investigates the evaluation of user interfaces in business administration. The study is implemented on two computerized business administration systems with two distinct user interfaces, so that differences between the two systems can be determined. Both systems, a commercial one and a prototype developed for the purpose of this study, deal with ordering supplies, tendering procedures, issuing purchase orders, controlling the movement of stock against the actual balances on the shelves, and editing the corresponding tabulations. In the second, suggested system, modern computer graphics and multimedia issues were taken into consideration to cover the drawbacks of the first system. To highlight differences between the two investigated systems with respect to chosen standard quality criteria, the study employs various statistical techniques and methods to evaluate the users' interaction with both systems. The study variables are divided into two groups: independent variables representing the interfaces of the two systems, and dependent variables embracing efficiency, effectiveness, satisfaction, error rate, etc.

Keywords: Evaluation and usability testing, software prototyping, statistical methods, user-interface design.

402 Energy Consumption and Economic Growth in South Asian Countries: A Co-integrated Panel Analysis

Authors: S. Noor, M. W. Siddiqi

Abstract:

This study examines the causal link between energy use and economic growth for five South Asian countries over the period 1971-2006. Panel cointegration, an error correction model (ECM), and fully modified OLS (FMOLS) are applied for short- and long-run estimates. In the short run, unidirectional causality from per capita GDP to per capita energy consumption is found, but not vice versa. In the long run, a one percent increase in per capita energy consumption tends to decrease per capita GDP by 0.13 percent, i.e., energy use discourages economic growth. This short- and long-run relationship indicates an energy shortage crisis in South Asia, due to increased energy use coupled with insufficient energy supply. Besides this, the estimated coefficient of the long-run error correction term suggests that short-term adjustments to equilibrium are driven by adjustment back to the long-run equilibrium. Moreover, per capita energy consumption is responsive to adjustment back to equilibrium, and this takes approximately 59 years, which indicates a long-run feedback between the two variables.

Keywords: Energy consumption, Income, Panel co-integration, Causality.

401 A Prediction-Based Reversible Watermarking for MRI Images

Authors: Nuha Omran Abokhdair, Azizah Bt Abdul Manaf

Abstract:

Reversible watermarking is a special branch of image watermarking that is able to recover the original image after extracting the watermark from it. In this paper, an adaptive prediction-based reversible watermarking scheme is presented in order to increase the payload capacity of MRI medical images. The scheme divides the image into two parts, the Region of Interest (ROI) and the Region of Non-Interest (RONI). Two bits are embedded in each embeddable pixel of the RONI and one bit is embedded in each embeddable pixel of the ROI. The experimental results demonstrate that the proposed scheme achieves high embedding capacity. This is mainly due to two reasons. First, the pixels that would otherwise be excluded from data embedding due to overflow/underflow are used for data embedding. Second, the large location map that would need to be added to the watermark data as overhead is eliminated, which prevents a loss of embedding capacity. Moreover, the scheme provides good visual quality in the watermarked image.
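
The prediction-error expansion step named in the keywords can be illustrated on a single pixel; the predictor value used below is hypothetical, and the paper's ROI/RONI embedding rules and overflow handling are omitted.

```python
def pee_embed_pixel(x, pred, bit):
    """Prediction-error expansion on one pixel: the error e = x - pred is
    expanded to 2e + bit, so the bit can be recovered from the parity of the
    marked error and the pixel restored exactly (generic PEE step only)."""
    e = int(x) - int(pred)
    return pred + 2 * e + bit

def pee_extract_pixel(x_marked, pred):
    """Recover (original pixel, embedded bit) from a PEE-marked pixel."""
    e2 = int(x_marked) - int(pred)
    bit = e2 & 1
    return pred + (e2 - bit) // 2, bit

x, pred = 120, 118                       # hypothetical pixel and predicted value
xm = pee_embed_pixel(x, pred, 1)
print(xm, pee_extract_pixel(xm, pred))   # 123 (120, 1): pixel and bit recovered exactly
```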

Keywords: Medical image watermarking, reversible watermarking, Difference Expansion, Prediction-Error Expansion.

400 PIIN Suppression Using Random Diagonal Code for Spectral Amplitude Coding Optical CDMA System

Authors: Hilal Adnan Fadhil, Syed Alwei, R. Badlishah Ahmad

Abstract:

A new code for spectral-amplitude coding optical code-division multiple-access systems, called the Random Diagonal (RD) code, is proposed. This code is constructed using a code segment and a data segment. One of the important properties of this code is that the cross-correlation at the data segment is always zero, which means that phase-induced intensity noise (PIIN) is reduced. For the performance analysis, the effects of phase-induced intensity noise, shot noise, and thermal noise are considered simultaneously. Bit error rate (BER) performance is compared with the Hadamard and Modified Frequency Hopping (MFH) codes. It is shown that the system using the new code matrices not only suppresses PIIN but also allows a larger number of active users compared with other codes. Simulation results show that, for point-to-point transmission with three encoded channels, the RD code has better BER performance than the other codes; it is also found that at 0 dBm the PIIN levels are 10⁻¹⁰ and 10⁻¹¹ for RD and MFH, respectively.

Keywords: OCDMA, MFH, PIIN, and BER.

399 A Novel Low Power Very Low Voltage High Performance Current Mirror

Authors: Khalil Monfaredi, Hassan Faraji Baghtash, Majid Abbasi

Abstract:

In this paper, a novel current mirror with high output impedance, low input impedance, wide bandwidth, and a very simple structure is presented, whose input and output voltage requirements are lower than those of a simple current mirror. These features are achieved with a very simple structure that avoids extra large node impedances, ensuring high-bandwidth operation. The circuit's principle of operation is discussed and compared with the simple and low-voltage cascode (LVC) current mirrors. Outstanding features such as high output impedance (~384 kΩ), low input impedance (~6.4 Ω), wide bandwidth (~178 MHz), low input voltage (~362 mV), low output voltage (~38 mV), and low current transfer error (~4%), all at 50 μA, make it an excellent choice for high-performance applications. Simulation results in a BSIM 0.35 μm CMOS technology with HSPICE are given in comparison with the simple and LVC current mirrors to verify and validate the performance of the proposed current mirror.

Keywords: Analog circuits, Current mirror, high frequency, Low power, Low voltage.

398 Range-Free Localization Schemes for Wireless Sensor Networks

Authors: R. Khadim, M. Erritali, A. Maaden

Abstract:

Localization of nodes is one of the key issues in Wireless Sensor Networks (WSNs) and has gained wide attention in recent years. Existing localization techniques can be generally categorized into two types: range-based and range-free. Compared with range-based schemes, range-free schemes are more cost-effective because no additional ranging devices are needed. As a result, we focus our research on range-free schemes. In this paper we study three range-free localization algorithms to compare their localization error and energy consumption. The centroid algorithm requires that a normal node have at least three neighbor anchors, while the DV-Hop algorithm does not have this requirement. The third studied algorithm is the amorphous algorithm, which is similar to DV-Hop; the idea is to calculate the hop distance between two nodes instead of the linear distance between them. The simulation results show that the localization accuracy of the amorphous algorithm is higher than that of the other algorithms, while its energy consumption does not increase too much.
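
Of the three schemes, the centroid algorithm is the simplest to sketch: the node averages the coordinates of the anchors it can hear (hence the three-anchor requirement). DV-Hop and the amorphous algorithm instead convert hop counts into distance estimates before positioning and are not shown; the anchor layout below is an assumption for illustration.

```python
import numpy as np

def centroid_localize(anchor_positions):
    """Range-free centroid localization: a node estimates its position as the
    mean of the coordinates of the anchors it can hear."""
    anchors = np.asarray(anchor_positions, dtype=float)
    if len(anchors) < 3:
        raise ValueError("centroid scheme needs at least 3 neighbour anchors")
    return anchors.mean(axis=0)

true_pos = np.array([4.0, 6.0])                      # hypothetical node position
heard_anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]  # anchors within radio range
est = centroid_localize(heard_anchors)
print(est, np.linalg.norm(est - true_pos))            # estimate and localization error
```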

Keywords: Wireless Sensor Networks, Node Localization, Centroid Algorithm, DV–Hop Algorithm, Amorphous Algorithm.

397 A Comparison of Real Valued Transforms for Image Compression

Authors: Shivali D. Kulkarni, Ameya K. Naik, Nitin S. Nagori

Abstract:

In this paper we present simulation results for the application of a bandwidth-efficient algorithm (the mapping algorithm) to an image transmission system. The system considers three different real-valued transforms to generate energy-compact coefficients. Results are first presented for gray-scale and color image transmission in the absence of noise. The system performs best when the discrete cosine transform is used. The performance of the system is dominated more by the size of the transform block than by the number of coefficients transmitted or the number of bits used to represent each coefficient. Similar results are obtained in the presence of additive white Gaussian noise; varying values of the bit error rate have very little or no impact on the performance of the algorithm. Optimum results are obtained for the system using an 8x8 transform block and transmitting 15 coefficients from each block using 8 bits per coefficient.
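
The reported optimum (an 8x8 transform block with 15 retained coefficients) can be mimicked with a small DCT sketch; a simple low-frequency mask stands in for a zigzag scan, the 8-bit coefficient quantization is omitted, and the test block is synthetic.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, order=4):
    """8x8 DCT transform coding: keep only the low-frequency coefficients with
    index sum i + j <= order (15 coefficients for order=4, a stand-in for a
    zigzag scan) and reconstruct the block."""
    coeffs = dctn(block, norm="ortho")
    i, j = np.indices(block.shape)
    return idctn(np.where(i + j <= order, coeffs, 0.0), norm="ortho")

# Smooth synthetic 8x8 block (illustration only), plus PSNR of the reconstruction.
x = np.arange(8.0)
block = 60.0 + 12.0 * np.add.outer(x, x)
rec = compress_block(block)
mse = np.mean((block - rec) ** 2)
print(f"PSNR = {10 * np.log10(255.0 ** 2 / mse):.1f} dB")   # peak signal-to-noise ratio
```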

Keywords: Additive white Gaussian noise channel, mapping algorithm, peak signal to noise ratio, transform encoding.

396 PID Control Design Based on Genetic Algorithm with Integrator Anti-Windup for Automatic Voltage Regulator and Speed Governor of Brushless Synchronous Generator

Authors: O. S. Ebrahim, M. A. Badr, Kh. H. Gharib, H. K. Temraz

Abstract:

This paper presents a methodology based on a genetic algorithm (GA) to tune the parameters of the proportional-integral-derivative (PID) controllers used in the automatic voltage regulator (AVR) and speed governor of a brushless synchronous generator driven by a three-stage steam turbine. The parameter tuning is posed as a nonlinear optimization problem solved by the GA to minimize the integral of absolute error (IAE). The problem of integrator windup due to physical system limitations is solved using a simple anti-windup scheme. The obtained controllers are compared to those designed using the classical Ziegler-Nichols technique and constrained optimization. The results show a distinct superiority of the proposed method.
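
A minimal sketch of the IAE fitness function such a GA would minimize, with integrator clamping as the anti-windup scheme; the first-order plant, actuator limit, and setpoint below are illustrative assumptions and not the paper's turbine/generator model.

```python
import numpy as np

def iae_of_pid(kp, ki, kd, u_max=1.0, dt=0.01, t_end=10.0):
    """Integral of absolute error for a unit step on a toy first-order plant
    dy/dt = (-y + u) / tau, with the integrator frozen whenever the actuator
    saturates (simple anti-windup). A GA would call this as its fitness."""
    tau, y, integ, prev_e, iae = 1.0, 0.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y                                   # setpoint = 1
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        if abs(u) > u_max:                            # actuator limit hit:
            u = np.clip(u, -u_max, u_max)             # saturate the output...
        else:
            integ += e * dt                           # ...and integrate only when unsaturated
        y += dt * (-y + u) / tau
        prev_e = e
        iae += abs(e) * dt
    return iae

# A GA would search (kp, ki, kd) to minimize this; compare two candidate gain sets:
print(iae_of_pid(2.0, 1.0, 0.1), iae_of_pid(0.5, 0.2, 0.0))
```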

Keywords: Brushless synchronous generator, Genetic Algorithm, GA, Proportional-Integral-Differential control, PID control, automatic voltage regulator, AVR.

395 Design of a Pneumonia Ontology for Diagnosis Decision Support System

Authors: Sabrina Azzi, Michal Iglewski, Véronique Nabelsi

Abstract:

Diagnostic error is frequent and is one of the most important safety problems today. One of the main objectives of our work is to propose an ontological representation that takes the diagnostic criteria into account in order to improve diagnosis. We chose pneumonia because it is one of the diseases frequently affected by diagnostic errors and it has harmful effects on patients. To achieve our aim, we use a semi-automated method to integrate diverse knowledge sources, including publicly available pneumonia disease guidelines from international repositories, biomedical ontologies, and electronic health records. We follow the principles of the Open Biomedical Ontologies (OBO) Foundry. The resulting ontology covers symptoms and signs, all types of pneumonia, antecedents, pathogens, and diagnostic testing. The first evaluation results show that most of the terms are covered by the ontology. This work is still in progress and represents a first and major step toward the development of a diagnosis decision support system for pneumonia.

Keywords: Clinical decision support system, diagnostic errors, ontology, pneumonia.

394 Digital Redesign of Interval Systems via Particle Swarm Optimization

Authors: Chen-Chien Hsu, Chun-Hui Gao

Abstract:

In this paper, a PSO-based approach is proposed to derive a digital controller for redesigned digital systems having an interval plant, based on the resemblance of the extremal gain/phase margins. By combining the interval plant and a controller as an interval system, the extremal GM/PM associated with the loop transfer function can be obtained. The design problem is then formulated as the optimization of an aggregated error function revealing the deviation of the extremal GM/PM between the redesigned digital system and its continuous counterpart, which is subsequently minimized by the proposed PSO to obtain an optimal set of parameters for the digital controller. Computer simulations have shown that the frequency responses of the redesigned digital system having an interval plant bear a better resemblance to those of its continuous-time counterpart when a PSO-derived digital controller is incorporated than when existing open-loop discretization methods are used.

Keywords: Digital redesign, Extremal systems, Particle swarm optimization, Uncertain interval systems

393 Continual Learning Using Data Generation for Hyperspectral Remote Sensing Scene Classification

Authors: Samiah Alammari, Nassim Ammour

Abstract:

When a massive number of tasks is provided successively to a deep learning process, good model performance requires preserving the data of the previous tasks so the model can be retrained for each upcoming classification task. Otherwise, the model performs poorly due to the catastrophic forgetting phenomenon. To overcome this shortcoming, we developed a successful continual learning deep model for remote sensing hyperspectral image region classification. The proposed neural network architecture encapsulates two trainable subnetworks. The first module adapts its weights by minimizing the discrimination error between the land-cover classes during learning of the new task, and the second module learns how to replicate the data of the previous tasks by discovering the latent data structure of the new task dataset. We conduct experiments on the Indian Pines hyperspectral image (HSI) dataset. The results confirm the capability of the proposed method.

Keywords: Continual learning, data reconstruction, remote sensing, hyperspectral image segmentation.

392 Identification of Optimum Parameters of Deep Drawing of a Cylindrical Workpiece using Neural Network and Genetic Algorithm

Authors: D. Singh, R. Yousefi, M. Boroushaki

Abstract:

Intelligent deep drawing is an instrumental research field in sheet metal forming. A set of 28 experimental data points is employed in this paper to investigate the roles of die radius, punch radius, friction coefficient, and drawing ratio in the deep drawing of axisymmetric workpieces. The paper focuses on an evolutionary neural network, specifically error back-propagation in collaboration with a genetic algorithm. The neural network encompasses a number of different functional nodes defined through established principles. The input parameters, i.e., punch radius, die radius, friction coefficient, and drawing ratio, are fed to the network; thereafter, the material outputs at two critical points are accurately calculated. The output of the network is used by the genetic algorithm to establish the best parameters, leading to the most uniform thickness in the product. This research achieved satisfactory results based on the demonstrated neural networks.

Keywords: Deep-drawing, Neural network, Genetic algorithm, Sheet metal forming.

391 Granger Causal Nexus between Financial Development and Energy Consumption: Evidence from Cross Country Panel Data

Authors: Rudra P. Pradhan

Abstract:

This paper examines the Granger causal nexus between financial development and energy consumption in a group of 35 Financial Action Task Force (FATF) countries over the period 1988-2012. The study uses two financial development indicators, private sector credit and stock market capitalization, and seven energy consumption indicators: coal, oil, gas, electricity, hydroelectric, nuclear, and biomass. Using panel cointegration tests, the study finds that financial development and energy consumption are cointegrated, indicating the presence of a long-run relationship between the two. Using a panel vector error correction model (VECM), the study detects both bidirectional and unidirectional causality between financial development and energy consumption. The variation in this causality is due to the use of different proxies for both financial development and energy consumption. The policy implication of this study is that economic policies should recognize the differences in the financial development-energy consumption nexus in order to maintain sustainable development in the selected 35 FATF countries.

Keywords: Financial development, energy consumption, Panel VECM, FATF countries.

390 A Comparison and Analysis of Name Matching Algorithms

Authors: Chakkrit Snae

Abstract:

Names are important in many societies, even technologically oriented ones that use, for example, ID systems to identify individual people. Names such as surnames are the most important, as they are used in many processes, such as identifying people and genealogical research. On the other hand, variation in names can be a major problem for identifying and searching for people, e.g., in web search or for security reasons. Name matching presumes a priori that a recorded name written in one alphabet reflects the phonetic identity of two samples, or some transcription error in copying a previously recorded name; to this we add the assumption that the two names refer to the same person. This paper describes name variations and gives a basic description of various name matching algorithms developed to overcome name variation and to find reasonable variants of names, which can be used to reduce mismatches in record linkage and name search. The implementation contains algorithms for computing a range of fuzzy matches based on different types of algorithms, e.g., composite and hybrid methods, and allows us to test and measure the algorithms for accuracy. NYSIIS, LIG2 and Phonex have been shown to perform well and provide sufficient flexibility to be included in the linkage/matching process for optimising name searching.
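
NYSIIS, LIG2 and Phonex are phonetic encoders and are not reproduced here; the sketch below shows only a generic edit-distance similarity of the kind used in such a fuzzy-matching stage, with hypothetical name pairs for illustration.

```python
def levenshtein(a, b):
    """Edit distance between two names (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def name_similarity(a, b):
    """Normalized similarity in [0, 1]; 1.0 means identical after lowercasing."""
    a, b = a.lower(), b.lower()
    return 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)

# Spelling variants of the same surname score high; unrelated names score low.
for pair in [("Meyer", "Meier"), ("Snae", "Snai"), ("Smith", "Jones")]:
    print(pair, round(name_similarity(*pair), 2))
```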

Keywords: Data mining, name matching algorithm, nominal data, searching system.

389 Kinetic Modeling of Transesterification of Triacetin Using Synthesized Ion Exchange Resin (SIERs)

Authors: Hafizuddin W. Yussof, Syamsutajri S. Bahri, Adam P. Harvey

Abstract:

Strong anion exchange resins with QN+OH- functionality have the potential to be developed and employed as heterogeneous catalysts for transesterification, as they are chemically stable against leaching of the functional group. Nine different SIERs (SIER1-9) with QN+OH- were prepared by suspension polymerization of vinylbenzyl chloride-divinylbenzene (VBC-DVB) copolymers in the presence of n-heptane (a pore-forming agent). The amine group was successfully grafted onto the polymeric resin beads through functionalization with trimethylamine. These SIERs were then used as catalysts for the transesterification of triacetin with methanol. Sets of differential equations representing the Langmuir-Hinshelwood-Hougen-Watson (LHHW) and Eley-Rideal (ER) models of the transesterification reaction were developed, and these kinetic models were fitted to the experimental data. Overall, the synthesized ion exchange resin-catalyzed reaction was better described by the Eley-Rideal model than by the LHHW model, with sums of squared error (SSE) of 0.742 and 0.996, respectively.
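
A sketch of how the SSE criterion is evaluated and minimized for one candidate model, assuming a generic Eley-Rideal rate form, 1:1 stoichiometry, and synthetic concentration data generated purely for illustration; none of the constants or data below are from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def er_rate(cA, cB, k, K):
    """Generic Eley-Rideal rate: adsorbed species A reacting with B from the
    fluid phase, r = k*K*cA*cB / (1 + K*cA)."""
    return k * K * cA * cB / (1.0 + K * cA)

def simulate(params, t_eval, cA0=0.5, cB0=3.0):
    k, K = np.abs(params)                   # keep constants positive during the search
    def ode(t, c):
        r = er_rate(c[0], c[1], k, K)
        return [-r, -r]                     # 1:1 stoichiometry assumed for the sketch
    sol = solve_ivp(ode, (0.0, t_eval[-1]), [cA0, cB0], t_eval=t_eval)
    return sol.y[0]                         # concentration profile of species A

# Synthetic "measurements" generated from known constants plus noise, purely to
# show how the sum-of-squared-error criterion is evaluated and minimized.
t = np.linspace(0.0, 60.0, 13)
rng = np.random.default_rng(2)
data = simulate((0.05, 2.0), t) + rng.normal(0.0, 0.005, t.size)

sse = lambda p: float(np.sum((simulate(p, t) - data) ** 2))
fit = minimize(sse, x0=[0.01, 1.0], method="Nelder-Mead")
print(np.abs(fit.x), "SSE =", sse(fit.x))
```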

Keywords: Anion exchange resin, Eley-Rideal, Langmuir-Hinshelwood-Hougen-Watson, transesterification.
