Search results for: Least Squares Error.
400 Speech Intelligibility Improvement Using Variable Level Decomposition DWT
Authors: Samba Raju Chiluveru, Manoj Tripathy
Abstract:
Intelligibility is an essential characteristic of a speech signal that aids the understanding of the information the signal carries. Background noise in the environment can deteriorate the intelligibility of recorded speech. In this paper, we present a simple variance-subtracted, variable-level discrete wavelet transform that improves speech intelligibility. The proposed algorithm does not require an explicit estimation of the noise, i.e., prior knowledge of the noise; hence, it is easy to implement and it reduces the computational burden. The algorithm selects a separate decomposition level for each frame based on signal-dominant and noise-dominant criteria. The performance of the proposed algorithm is evaluated with the short-time objective intelligibility (STOI) measure, and the results obtained are compared with universal discrete wavelet transform (DWT) thresholding and Minimum Mean Square Error (MMSE) methods. The experimental results reveal that the proposed scheme outperforms the competing methods.
Keywords: Discrete Wavelet Transform, speech intelligibility, STOI, standard deviation.
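The universal DWT thresholding referred to above as a baseline can be sketched in a few lines; the following is a minimal illustration using the PyWavelets package, assuming a fixed frame length, a db4 wavelet, a fixed decomposition level and a median-based noise estimate, none of which are settings reported by the authors (their method instead picks the level per frame).

```python
# Hedged sketch of the universal-threshold DWT baseline (not the proposed
# variable-level algorithm). Frame length, wavelet and level are assumptions.
import numpy as np
import pywt

def denoise_frame(frame, wavelet="db4", level=4):
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate (MAD)
    thr = sigma * np.sqrt(2.0 * np.log(len(frame)))      # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(frame)]

def denoise_signal(x, frame_len=512):
    frames = [x[i:i + frame_len] for i in range(0, len(x), frame_len)]
    return np.concatenate([denoise_frame(f) for f in frames if len(f) > 1])
```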
399 A Pole Radius Varying Notch Filter with Transient Suppression for Electrocardiogram
Authors: Ramesh Rajagopalan, Adam Dahlstrom
Abstract:
Noise removal techniques play a vital role in the performance of electrocardiographic (ECG) signal processing systems. ECG signals can be corrupted by various kinds of noise, such as baseline wander, electromyographic interference, and powerline interference. One of the significant challenges in ECG signal processing is the degradation caused by additive 50 or 60 Hz powerline interference. This work investigates the removal of powerline interference and the suppression of the transient response when filtering noise-corrupted ECG signals. We demonstrate the effectiveness of an infinite impulse response (IIR) notch filter with a time-varying pole radius for improving the transient behavior: the temporary change in the pole radius of the filter diminishes the transient. Simulation results show that the proposed IIR filter with time-varying pole radius outperforms traditional IIR notch filters in terms of mean square error and transient suppression.
Keywords: Notch filter, ECG, transient, pole radius.
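As a rough illustration of the idea, the sketch below implements a second-order notch whose pole radius is ramped over the first samples before settling at its steady-state value; the ramp profile, radii, sampling rate and 60 Hz notch frequency are assumptions for illustration, not the authors' design.

```python
# Hedged sketch: 60 Hz IIR notch whose pole radius ramps from r_start to
# r_final to shorten the start-up transient; all values are illustrative.
import numpy as np

def notch_varying_radius(x, fs=360.0, f0=60.0, r_start=0.7, r_final=0.99, ramp=200):
    w0 = 2.0 * np.pi * f0 / fs
    y = np.zeros(len(x))
    x1 = x2 = y1 = y2 = 0.0
    for n, xn in enumerate(x):
        r = r_final if n >= ramp else r_start + (r_final - r_start) * n / ramp
        # H(z) = (1 - 2cos(w0) z^-1 + z^-2) / (1 - 2r cos(w0) z^-1 + r^2 z^-2)
        yn = xn - 2.0 * np.cos(w0) * x1 + x2 + 2.0 * r * np.cos(w0) * y1 - r * r * y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y[n] = yn
    return y
```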
398 Comparison between Haar and Daubechies Wavelet Transformations on FPGA Technology
Authors: Mohamed I. Mahmoud, Moawad I. M. Dessouky, Salah Deyab, Fatma H. Elfouly
Abstract:
Recently, Field Programmable Gate Array (FPGA) technology has offered the potential of designing high performance systems at low cost. The discrete wavelet transform has gained the reputation of being a very effective signal analysis tool for many practical applications. However, due to its computation-intensive nature, current implementations of the transform fall short of meeting the real-time processing requirements of most applications. The objectives of this paper are to implement the Haar and Daubechies wavelets using FPGA technology. In addition, a comparison between the Haar and Daubechies wavelets is carried out. The Bit Error Rate (BER) between the input audio signal and the reconstructed output signal is calculated for each wavelet. It is seen that the BER obtained with the Daubechies wavelet is lower than that of the Haar wavelet. The design procedure has been explained and realized using state-of-the-art Electronic Design Automation (EDA) tools for system design on FPGA. Simulation, synthesis and implementation on the FPGA target technology have been carried out.
Keywords: Daubechies wavelet, discrete wavelet transform, Haar wavelet, Xilinx FPGA.
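A floating-point reference for this kind of comparison (not an FPGA design) can be put together with PyWavelets: round the wavelet coefficients to a fixed number of fractional bits, reconstruct, and count differing bits against a 16-bit view of the input. The signal, quantization depth and decomposition level below are assumptions for illustration.

```python
# Hedged sketch (software reference only): BER of a wavelet round-trip with
# coefficient rounding, compared for Haar and Daubechies (db4) wavelets.
import numpy as np
import pywt

def ber_after_roundtrip(x, wavelet, frac_bits=8):
    coeffs = pywt.wavedec(x, wavelet, level=3)
    q = [np.round(c * 2**frac_bits) / 2**frac_bits for c in coeffs]  # quantize coefficients
    y = pywt.waverec(q, wavelet)[: len(x)]
    xi = np.round(x * 32767).astype(np.int16)                        # 16-bit PCM view
    yi = np.round(np.clip(y, -1, 1) * 32767).astype(np.int16)
    diff = np.bitwise_xor(xi.view(np.uint16), yi.view(np.uint16))
    return np.unpackbits(diff.view(np.uint8)).mean()                 # fraction of flipped bits

t = np.linspace(0, 1, 4096, endpoint=False)
x = 0.5 * np.sin(2 * np.pi * 440 * t)                                # toy "audio" signal
for w in ("haar", "db4"):
    print(w, ber_after_roundtrip(x, w))
```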
397 Estimating Shortest Circuit Path Length Complexity
Authors: Azam Beg, P. W. Chandana Prasad, S.M.N.A Senenayake
Abstract:
When binary decision diagrams are formed from uniformly distributed Monte Carlo data for a large number of variables, the complexity of the decision diagrams exhibits a predictable relationship to the number of variables and minterms. In the present work, a neural network model has been used to analyze the pattern of shortest path length for a larger number of Monte Carlo data points. The neural model shows a strong descriptive power for the ISCAS benchmark data, with an RMS error of 0.102 for the shortest path length complexity. Therefore, the model can be considered a method of predicting path length complexities; this is expected to lead to minimum time complexity of very large-scale integrated circuits and of related computer-aided design tools that use binary decision diagrams.
Keywords: Monte Carlo circuit simulation data, binary decision diagrams, neural network modeling, shortest path length estimation.
396 An Efficient Collocation Method for Solving the Variable-Order Time-Fractional Partial Differential Equations Arising from the Physical Phenomenon
Authors: Haniye Dehestani, Yadollah Ordokhani
Abstract:
In this work, we present an efficient approach for solving variable-order time-fractional partial differential equations, based on Legendre and Laguerre polynomials. First, we introduce the pseudo-operational matrices of integer-order and variable fractional-order integration by using some properties of the Riemann-Liouville fractional integral. These matrices are then applied, together with the collocation method and Legendre-Laguerre functions, to solve variable-order time-fractional partial differential equations. An estimation of the error is also presented. Finally, we investigate numerical examples arising in physics to demonstrate the accuracy of the present method. Comparison of the results obtained by the present method with the exact solution and with other methods reveals that the method is very effective.
Keywords: Collocation method, fractional partial differential equations, Legendre-Laguerre functions, pseudo-operational matrix of integration.
395 Design of Encoding Calculator Software for Huffman and Shannon-Fano Algorithms
Authors: Wilson Chanhemo, Henry R. Mgombelo, Omar F. Hamad, T. Marwala
Abstract:
This paper presents the design of source encoding calculator software which applies two famous algorithms from the field of information theory: the Shannon-Fano and Huffman schemes. This design helps to realize the algorithms easily, without resorting to a cumbersome, tedious and error-prone manual encoding of the signals for transmission. The work describes the design of the software, how it works, a comparison with related works, its efficiency, its usefulness in information technology studies, and the future prospects of the software for engineers, students, technicians and the like. The designed "Encodia" software has been developed, tested and found to meet the intended requirements. It is expected that this application will help students and teaching staff in their daily information theory related tasks. Work is ongoing to modify this tool so that it is also useful in research on source coding.
Keywords: Coding techniques, coding algorithms, coding efficiency, Encodia, encoding software.
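For reference, the Huffman side of such a calculator reduces to building a prefix code from symbol frequencies with a min-heap; the sketch below is a minimal illustration in Python and is not taken from the "Encodia" implementation (function and variable names are hypothetical), and the Shannon-Fano procedure is omitted for brevity.

```python
# Hedged sketch: Huffman code construction from symbol frequencies.
import heapq
from collections import Counter

def huffman_codes(text):
    freq = Counter(text)
    if len(freq) == 1:                              # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Each heap entry: (weight, unique tie-breaker, {symbol: code-so-far}).
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, i2, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, i2, merged))
    return heap[0][2]

msg = "this is an example for huffman encoding"
codes = huffman_codes(msg)
encoded = "".join(codes[ch] for ch in msg)
print(len(encoded), "coded bits vs", 8 * len(msg), "bits uncoded")
```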
394 A Simplified Single Correlator Rake Receiver for CDMA Communications
Authors: K. Murali Krishna, Abhijit Mitra, C. Ardil
Abstract:
This paper presents a single correlator RAKE receiver for direct sequence code division multiple access (DS-CDMA) systems. In conventional RAKE receivers, multiple correlators are used to despread the multipath signals and then to align and combine those signals in a later stage before making a bit decision. The simplified receiver structure presented here uses a single correlator and a single code sequence generator to recover the multipaths. Modified Walsh-Hadamard codes are used for data spreading, as they provide better decorrelation of the multipath signals. The main advantage of this receiver structure is that it requires only a single correlator and a code generator, in contrast to the conventional RAKE receiver concept with multiple correlators. Results show that the proposed receiver achieves better bit error rates than the conventional one when more than one multipath component is present.
Keywords: RAKE receiver, Code division multiple access, Modified Walsh-Hadamard codes, Single correlator.
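To make the spreading/despreading terminology concrete, the sketch below spreads a bit stream with one Walsh-Hadamard row and recovers it with a single correlator over an AWGN channel; it uses ordinary (not modified) Walsh-Hadamard codes and omits the multipath alignment and combining stage, so it is only a generic illustration, not the proposed receiver.

```python
# Hedged sketch: DS spreading with a Walsh-Hadamard row and single-correlator
# despreading over AWGN; no multipath recovery is modeled here.
import numpy as np
from scipy.linalg import hadamard

N = 16                                                # spreading factor
code = hadamard(N)[5]                                 # one +/-1 Walsh-Hadamard row
bits = np.random.randint(0, 2, 1000) * 2 - 1          # BPSK symbols
tx = np.repeat(bits, N) * np.tile(code, len(bits))    # spread signal
rx = tx + 0.8 * np.random.randn(tx.size)              # AWGN channel
corr = (rx.reshape(-1, N) * code).sum(axis=1)         # single correlator per symbol
decisions = np.where(corr >= 0, 1, -1)
print("bit error rate:", np.mean(decisions != bits))
```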
393 Suitable Die Shaping for a Rectangular Shape Bottle by Application of FEM and AI Technique
Authors: N. Ploysook, R. Rugsaj, C. Suvanjumrat
Abstract:
The characteristic requirement for producing rectangular bottles is a uniform thickness of the plastic bottle wall, and die shaping is an effective technique for controlling that thickness. The finite element method (FEM) was used to simulate blowing the parison into a rectangular bottle, reducing the plastic wasted by trial-and-error die shaping and parison control. An artificial intelligence (AI) technique comprising an artificial neural network and a genetic algorithm was selected to optimize the die gap shape from the FEM results. The AI technique could optimize a suitable die gap shape for parison blow molding that does not depend on a parison control method to produce rectangular bottles with uniform walls. In particular, this application can be used with inexpensive blow molding machines without a parison controller, thereby reducing production cost in the bottle blow molding process.
Keywords: AI, bottle, die shaping, FEM.
392 A Fuzzy Time Series Forecasting Model for Multi-Variate Forecasting Analysis with Fuzzy C-Means Clustering
Authors: Emrah Bulut, Okan Duru, Shigeru Yoshida
Abstract:
In this study, a fuzzy integrated logical forecasting method (FILF) is extended to multi-variate systems by using a vector autoregressive model. The fuzzy time series forecasting (FTSF) method was introduced by Song and Chissom [1], [2] and later improved by Chen. Unlike the existing literature, the proposed model is compared not only with previous FTS models but also with conventional time series methods such as the classical vector autoregressive model. The cluster optimization is based on the C-means clustering method. An empirical study is performed for the prediction of the chartering rates of a group of dry bulk cargo ships. The root mean squared error (RMSE) metric is used to compare the results of the methods, and the proposed method outperforms both traditional FTS methods and classical time series methods.
Keywords: C-means clustering, Fuzzy time series, Multi-variate design
391 A Usability Testing Approach to Evaluate User-Interfaces in Business Administration
Authors: Salaheddin Odeh, Ibrahim O. Adwan
Abstract:
This interdisciplinary study evaluates user-interfaces in business administration. The study is implemented on two computerized business administration systems with two distinctive user-interfaces, so that differences between the two systems can be determined. Both systems, a commercial one and a prototype developed for the purpose of this study, deal with ordering supplies, tendering procedures, issuing purchase orders, controlling the movement of stock against the actual balances on the shelves, and editing the corresponding tabulations. In the second system (the suggested prototype), modern computer graphics and multimedia aspects were taken into consideration to overcome the drawbacks of the first system. To highlight differences between the two investigated systems with respect to chosen standard quality criteria, the study employs various statistical techniques and methods to evaluate the users' interaction with both systems. The study variables are divided into two groups: independent variables representing the interfaces of the two systems, and dependent variables covering efficiency, effectiveness, satisfaction, error rate, etc.
Keywords: Evaluation and usability testing, software prototyping, statistical methods, user-interface design.
390 Energy Consumption and Economic Growth in South Asian Countries: A Co-integrated Panel Analysis
Authors: S. Noor, M. W. Siddiqi
Abstract:
This study examines the causal link between energy use and economic growth for five South Asian countries over the period 1971-2006. Panel cointegration, an error correction model (ECM) and fully modified OLS (FMOLS) are applied for the short- and long-run estimates. In the short run, unidirectional causality from per capita GDP to per capita energy consumption is found, but not vice versa. In the long run, a one percent increase in per capita energy consumption tends to decrease per capita GDP by 0.13 percent, i.e., energy use discourages economic growth. These short- and long-run relationships indicate an energy shortage crisis in South Asia, due to increased energy use coupled with an insufficient energy supply. Moreover, the estimated long-run coefficient of the error term suggests that short-term deviations are corrected by adjustment back to the long-run equilibrium; per capita energy consumption is responsive to this adjustment, which takes approximately 59 years, indicating long-run feedback between the two variables.
Keywords: Energy consumption, Income, Panel co-integration, Causality.
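The single-country building blocks of such an analysis are available in statsmodels; the sketch below runs an Engle-Granger cointegration test and a simple two-step error correction regression on synthetic series standing in for log per-capita GDP and energy use. The panel estimators actually used in the paper (panel cointegration tests, FMOLS) are not reproduced, and the data and variable names are assumptions.

```python
# Hedged sketch: Engle-Granger cointegration test plus a two-step ECM for one
# country on synthetic data; not the paper's panel estimators.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
gdp = np.cumsum(rng.normal(0.02, 0.05, 36))          # log per-capita GDP (random walk)
energy = 0.8 * gdp + rng.normal(0, 0.05, 36)         # cointegrated energy series

t_stat, p_value, _ = coint(energy, gdp)              # Engle-Granger test
print("cointegration p-value:", p_value)

long_run = sm.OLS(energy, sm.add_constant(gdp)).fit()    # long-run relation
ect = long_run.resid[:-1]                                # lagged error-correction term
X = sm.add_constant(np.column_stack([np.diff(gdp), ect]))
ecm = sm.OLS(np.diff(energy), X).fit()
print(ecm.params)    # constant, short-run GDP effect, speed of adjustment
```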
389 A Prediction-Based Reversible Watermarking for MRI Images
Authors: Nuha Omran Abokhdair, Azizah Bt Abdul Manaf
Abstract:
Reversible watermarking is a special branch of image watermarking that is able to recover the original image after extracting the watermark from it. In this paper, an adaptive prediction-based reversible watermarking scheme is presented in order to increase the payload capacity of MRI medical images. The scheme divides the image into two parts, the Region of Interest (ROI) and the Region of Non-Interest (RONI). Two bits are embedded in each embeddable pixel of the RONI and one bit is embedded in each embeddable pixel of the ROI. The experimental results demonstrate that the proposed scheme achieves high embedding capacity. This is mainly due to two reasons. First, the pixels that would otherwise be excluded from data embedding due to overflow/underflow are used for embedding. Second, the large location map that would need to be added to the watermark data as overhead is eliminated, preventing a loss of embedding capacity. Moreover, the scheme provides good visual quality in the watermarked image.
Keywords: Medical image watermarking, reversible watermarking, Difference Expansion, Prediction-Error Expansion.
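For orientation, the prediction-error expansion idea named in the keywords can be sketched as follows: each payload pixel's value is predicted from a neighbour, the prediction error is doubled, and the watermark bit is placed in the freed least significant bit; the process is inverted exactly at extraction. The sketch uses a left-neighbour predictor, lets even columns serve only as predictors, and assumes no over/underflow, so it omits the ROI/RONI split, the adaptive predictor and the overflow handling of the proposed scheme.

```python
# Hedged sketch of prediction-error expansion (PEE): odd columns carry the
# payload, even columns stay unchanged and act as predictors. Assumes no
# pixel over/underflows (a real scheme must handle that explicitly).
import numpy as np

def pee_embed(img, bits):
    out = img.astype(np.int32)
    k = 0
    rows, cols = img.shape
    for r in range(rows):
        for c in range(1, cols, 2):
            if k == len(bits):
                return out.astype(np.uint8)
            pred = int(out[r, c - 1])             # unmodified even-column neighbour
            err = int(out[r, c]) - pred
            out[r, c] = pred + 2 * err + bits[k]  # expanded error carries the bit
            k += 1
    return out.astype(np.uint8)

def pee_extract(marked, n_bits):
    rec = marked.astype(np.int32)
    bits = []
    rows, cols = marked.shape
    for r in range(rows):
        for c in range(1, cols, 2):
            if len(bits) == n_bits:
                return bits, rec.astype(np.uint8)
            pred = int(rec[r, c - 1])
            err = int(rec[r, c]) - pred
            bits.append(err & 1)                  # recover the embedded bit
            rec[r, c] = pred + (err >> 1)         # restore the original pixel
    return bits, rec.astype(np.uint8)
```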
388 PIIN Suppression Using Random Diagonal Code for Spectral Amplitude Coding Optical CDMA System
Authors: Hilal Adnan Fadhil, Syed Alwei, R. Badlishah Ahmad
Abstract:
A new code for spectral-amplitude coding optical code-division multiple-access systems is proposed, called the Random Diagonal (RD) code. This code is constructed using a code segment and a data segment. One of the important properties of this code is that the cross correlation at the data segment is always zero, which means that Phase Induced Intensity Noise (PIIN) is reduced. For the performance analysis, the effects of phase-induced intensity noise, shot noise, and thermal noise are considered simultaneously. The bit-error rate (BER) performance is compared with the Hadamard and Modified Frequency Hopping (MFH) codes. It is shown that the system using the new code matrices not only suppresses PIIN but also allows a larger number of active users compared with other codes. Simulation results show that, for point-to-point transmission with three encoded channels, the RD code has better BER performance than the other codes; it is also found that at 0 dBm the PIIN-limited error rates are 10^-10 and 10^-11 for RD and MFH, respectively.
Keywords: OCDMA, MFH, PIIN, BER.
387 A Novel Low Power Very Low Voltage High Performance Current Mirror
Authors: Khalil Monfaredi, Hassan Faraji Baghtash, Majid Abbasi
Abstract:
In this paper, a novel current mirror with high output impedance, low input impedance, wide bandwidth and a very simple structure, whose input and output voltage requirements are less than those of a simple current mirror, is presented. These features are achieved with a very simple structure that avoids extra large node impedances, ensuring high-bandwidth operation. The circuit's principle of operation is discussed and compared with the simple and low-voltage cascode (LVC) current mirrors. Outstanding features such as high output impedance (~384 kΩ), low input impedance (~6.4 Ω), wide bandwidth (~178 MHz), low input voltage (~362 mV), low output voltage (~38 mV) and low current transfer error (~4%), all at 50 μA, make it an excellent choice for high performance applications. Simulation results in 0.35 μm CMOS technology (BSIM models) with HSPICE are given in comparison with the simple and LVC current mirrors to verify and validate the performance of the proposed current mirror.
Keywords: Analog circuits, Current mirror, high frequency, Low power, Low voltage.
386 Range-Free Localization Schemes for Wireless Sensor Networks
Authors: R. Khadim, M. Erritali, A. Maaden
Abstract:
Localization of nodes is one of the key issues of Wireless Sensor Networks (WSN) that has gained wide attention in recent years. The existing localization techniques can be generally categorized into two types: range-based and range-free. Compared with range-based schemes, range-free schemes are more cost-effective, because no additional ranging devices are needed. As a result, we focus our research on the range-free schemes. In this paper we study three range-free localization algorithms in order to compare the localization error and energy consumption of each. The Centroid algorithm requires that a normal node have at least three neighboring anchors, while the DV-Hop algorithm does not have this requirement. The third algorithm studied is the Amorphous algorithm, which is similar to DV-Hop; the idea is to count the hop distance between two nodes instead of the linear distance between them. The simulation results show that the localization accuracy of the Amorphous algorithm is higher than that of the other algorithms, while its energy consumption does not increase too much.
Keywords: Wireless Sensor Networks, Node Localization, Centroid Algorithm, DV-Hop Algorithm, Amorphous Algorithm.
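Of the three schemes, the Centroid algorithm is the simplest to state: an unknown node takes the mean position of all anchors it can hear. The sketch below illustrates it on random data; the radio range, anchor layout and node position are assumptions, and DV-Hop and Amorphous are not reproduced.

```python
# Hedged sketch of Centroid localization; all scenario parameters are
# illustrative assumptions.
import numpy as np

def centroid_localize(node, anchors, radio_range):
    d = np.linalg.norm(anchors - node, axis=1)
    heard = anchors[d <= radio_range]
    if len(heard) < 3:                     # Centroid needs at least three anchors
        return None
    return heard.mean(axis=0)

rng = np.random.default_rng(1)
anchors = rng.uniform(0, 100, size=(20, 2))     # anchor positions in a 100x100 field
true_pos = np.array([42.0, 57.0])
est = centroid_localize(true_pos, anchors, radio_range=30.0)
if est is not None:
    print("localization error:", np.linalg.norm(est - true_pos))
```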
385 A Comparison of Real Valued Transforms for Image Compression
Authors: Shivali D. Kulkarni, Ameya K. Naik, Nitin S. Nagori
Abstract:
In this paper we present simulation results for the application of a bandwidth efficient algorithm (mapping algorithm) to an image transmission system. This system considers three different real valued transforms to generate energy-compact coefficients. First, results are presented for gray scale and color image transmission in the absence of noise. It is seen that the system performs best when the discrete cosine transform is used. Also, the performance of the system is dominated more by the size of the transform block than by the number of coefficients transmitted or the number of bits used to represent each coefficient. Similar results are obtained in the presence of additive white Gaussian noise: varying values of the bit error rate have very little or no impact on the performance of the algorithm. Optimum results are obtained with an 8x8 transform block, transmitting 15 coefficients from each block using 8 bits each.
Keywords: Additive white Gaussian noise channel, mapping algorithm, peak signal to noise ratio, transform encoding.
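A block-transform baseline along these lines is easy to reproduce: take the DCT of each 8x8 block, keep 15 coefficients, invert, and measure PSNR. The sketch below keeps the 15 largest-magnitude coefficients per block and skips the 8-bit quantization and the mapping algorithm itself, so the coefficient-selection rule and the test image are assumptions rather than the paper's procedure.

```python
# Hedged sketch: 8x8 block DCT keeping 15 coefficients per block, then PSNR.
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, keep=15):
    coeff = dctn(block, norm="ortho")
    thresh = np.sort(np.abs(coeff).ravel())[-keep]   # keep-th largest magnitude
    coeff[np.abs(coeff) < thresh] = 0.0
    return idctn(coeff, norm="ortho")

def compress_image(img, block=8, keep=15):
    h = img.shape[0] // block * block
    w = img.shape[1] // block * block
    out = np.zeros((h, w))
    for r in range(0, h, block):
        for c in range(0, w, block):
            out[r:r+block, c:c+block] = compress_block(img[r:r+block, c:c+block].astype(float), keep)
    return out

img = np.random.rand(64, 64) * 255                   # stand-in for a grayscale image
rec = compress_image(img)
mse = np.mean((img[:rec.shape[0], :rec.shape[1]] - rec) ** 2)
print("PSNR (dB):", 10 * np.log10(255.0 ** 2 / mse))
```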
384 PID Control Design Based on Genetic Algorithm with Integrator Anti-Windup for Automatic Voltage Regulator and Speed Governor of Brushless Synchronous Generator
Authors: O. S. Ebrahim, M. A. Badr, Kh. H. Gharib, H. K. Temraz
Abstract:
This paper presents a methodology based on a genetic algorithm (GA) to tune the parameters of the proportional-integral-differential (PID) controllers utilized in the automatic voltage regulator (AVR) and speed governor of a brushless synchronous generator driven by a three-stage steam turbine. The parameter tuning is formulated as a nonlinear optimization problem solved by the GA to minimize the integral of absolute error (IAE). The problem of integral windup due to physical system limitations is handled using a simple anti-windup scheme. The obtained controllers are compared with those designed using the classical Ziegler-Nichols technique and constrained optimization. Results show the distinct superiority of the proposed method.
Keywords: Brushless synchronous generator, Genetic Algorithm, GA, Proportional-Integral-Differential control, PID control, automatic voltage regulator, AVR.
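The fitness function such a GA evaluates can be illustrated with a small simulation: a discrete PID with clamping anti-windup driving a plant, scored by the IAE of the step response. In the sketch below the plant is a generic first-order lag and the actuator limit, sampling time and gains are illustrative assumptions, not the AVR or governor models of the paper.

```python
# Hedged sketch: IAE cost of a PID with clamping anti-windup on a first-order
# plant; this is the kind of scalar a GA individual would be scored with.
import numpy as np

def iae_cost(gains, dt=0.01, t_end=5.0, u_max=1.5, tau=0.5):
    kp, ki, kd = gains
    y = integ = prev_err = iae = 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y                                # unit step reference
        u_unsat = kp * err + ki * integ + kd * (err - prev_err) / dt
        u = float(np.clip(u_unsat, -u_max, u_max))
        if u == u_unsat:                             # anti-windup: freeze integrator when saturated
            integ += err * dt
        y += dt * (u - y) / tau                      # first-order plant: dy/dt = (u - y)/tau
        prev_err = err
        iae += abs(err) * dt
    return iae

print(iae_cost((2.0, 1.0, 0.1)))                     # one candidate gain set
```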
383 Design of a Pneumonia Ontology for Diagnosis Decision Support System
Authors: Sabrina Azzi, Michal Iglewski, Véronique Nabelsi
Abstract:
Diagnostic errors are frequent and are among the most important safety problems today. One of the main objectives of our work is to propose an ontological representation that takes the diagnostic criteria into account in order to improve diagnosis. We chose pneumonia since it is one of the diseases frequently affected by diagnostic errors, and these errors have harmful effects on patients. To achieve our aim, we use a semi-automated method to integrate diverse knowledge sources that include publicly available pneumonia disease guidelines from international repositories, biomedical ontologies and electronic health records. We follow the principles of the Open Biomedical Ontologies (OBO) Foundry. The resulting ontology covers symptoms and signs, all the types of pneumonia, antecedents, pathogens, and diagnostic testing. The first evaluation results show that most of the terms are covered by the ontology. This work is still in progress and represents a first and major step toward the development of a diagnosis decision support system for pneumonia.
Keywords: Clinical decision support system, diagnostic errors, ontology, pneumonia.
382 Digital Redesign of Interval Systems via Particle Swarm Optimization
Authors: Chen-Chien Hsu, Chun-Hui Gao
Abstract:
In this paper, a PSO-based approach is proposed to derive a digital controller for redesigned digital systems having an interval plant, based on resemblance of the extremal gain/phase margins. By combining the interval plant and a controller as an interval system, the extremal GM/PM associated with the loop transfer function can be obtained. The design problem is then formulated as an optimization problem over an aggregated error function revealing the deviation of the extremal GM/PM between the redesigned digital system and its continuous counterpart, which is subsequently minimized by the proposed PSO to obtain an optimal set of parameters for the digital controller. Computer simulations have shown that the frequency responses of the redesigned digital system having an interval plant bear a better resemblance to its continuous-time counterpart when a PSO-derived digital controller is incorporated than when existing open-loop discretization methods are used.
Keywords: Digital redesign, extremal systems, particle swarm optimization, uncertain interval systems.
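The optimizer itself is standard; a minimal particle swarm loop is sketched below, with the paper's aggregated GM/PM deviation replaced by a placeholder cost (a sphere function), so the swarm size, inertia and acceleration constants are illustrative assumptions only.

```python
# Hedged sketch: a minimal PSO; the cost function here is a placeholder, not
# the GM/PM deviation used in the paper.
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5, 5)):
    rng = np.random.default_rng(0)
    x = rng.uniform(*bounds, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(cost, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, *bounds)
        vals = np.apply_along_axis(cost, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best, err = pso(lambda p: float(np.sum(p ** 2)), dim=3)   # placeholder cost
print(best, err)
```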
381 Continual Learning Using Data Generation for Hyperspectral Remote Sensing Scene Classification
Authors: Samiah Alammari, Nassim Ammour
Abstract:
When a massive number of tasks is provided successively to a deep learning process, good model performance requires preserving the data of previous tasks in order to retrain the model for each upcoming classification; otherwise, the model performs poorly due to the catastrophic forgetting phenomenon. To overcome this shortcoming, we developed a successful continual learning deep model for remote sensing hyperspectral image region classification. The proposed neural network architecture encapsulates two trainable subnetworks. The first module adapts its weights by minimizing the discrimination error between the land-cover classes during the new task learning, while the second module learns to replicate the data of the previous tasks by discovering the latent data structure of the new task dataset. We conduct experiments on the Indian Pines hyperspectral image (HSI) dataset. The results confirm the capability of the proposed method.
Keywords: Continual learning, data reconstruction, remote sensing, hyperspectral image segmentation.
380 Identification of Optimum Parameters of Deep Drawing of a Cylindrical Workpiece using Neural Network and Genetic Algorithm
Authors: D. Singh, R. Yousefi, M. Boroushaki
Abstract:
Intelligent deep drawing is an instrumental research field in sheet metal forming. A set of 28 different experimental data points has been employed in this paper, investigating the roles of die radius, punch radius, friction coefficient and drawing ratio in the deep drawing of axisymmetric workpieces. This paper focuses on an evolutionary neural network, specifically error back-propagation in collaboration with a genetic algorithm. The neural network encompasses a number of different functional nodes defined through established principles. The input parameters, i.e., punch radius, die radius, friction coefficient and drawing ratio, are fed to the network; thereafter, the material outputs at two critical points are accurately calculated. The output of the network is used by the genetic algorithm to establish the parameters leading to the most uniform thickness in the product. This research achieved satisfactory results based on the demonstrated performance of the neural networks.
Keywords: Deep-drawing, Neural network, Genetic algorithm, Sheet metal forming.
379 Granger Causal Nexus between Financial Development and Energy Consumption: Evidence from Cross Country Panel Data
Authors: Rudra P. Pradhan
Abstract:
This paper examines the Granger causal nexus between financial development and energy consumption in a group of 35 Financial Action Task Force (FATF) countries over the period 1988-2012. The study uses two financial development indicators, private sector credit and stock market capitalization, and seven energy consumption indicators: coal, oil, gas, electricity, hydroelectric, nuclear and biomass. Using panel cointegration tests, the study finds that financial development and energy consumption are cointegrated, indicating the presence of a long-run relationship between the two. Using a panel vector error correction model (VECM), the study detects both bidirectional and unidirectional causality between financial development and energy consumption. The variation in this causality is due to the use of different proxies for both financial development and energy consumption. The policy implication of this study is that economic policies should recognize the differences in the financial development-energy consumption nexus in order to maintain sustainable development in the selected 35 FATF countries.
Keywords: Financial development, energy consumption, panel VECM, FATF countries.
378 A Comparison and Analysis of Name Matching Algorithms
Authors: Chakkrit Snae
Abstract:
Names are important in many societies, even in technologically oriented ones which use, e.g., ID systems to identify individual people. Names such as surnames are the most important, as they are used in many processes such as the identification of people and genealogical research. On the other hand, variation of names can be a major problem for identifying and searching for people, e.g., in web search or for security reasons. Name matching presumes a priori that a recorded name written in one alphabet reflects the phonetic identity of two samples, or some transcription error in copying a previously recorded name, and adds to this the assumption that the two names refer to the same person. This paper describes name variations and gives a basic description of various name matching algorithms developed to overcome name variation and to find reasonable variants of names, which can be used to reduce mismatches in record linkage and name search. The implementation contains algorithms for computing a range of fuzzy matching scores, based on different types of algorithms, e.g., composite and hybrid methods, allowing us to test and measure the algorithms for accuracy. NYSIIS, LIG2 and Phonex have been shown to perform well and provide sufficient flexibility to be included in the linkage/matching process for optimising name searching.
Keywords: Data mining, name matching algorithm, nominal data, searching system.
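To make the phonetic-versus-string-distance distinction concrete, the sketch below pairs a simplified Soundex code with an edit-similarity ratio from the Python standard library; the NYSIIS, LIG2 and Phonex algorithms discussed in the paper are not reproduced, and this Soundex omits the special H/W rule of the full specification.

```python
# Hedged sketch: simplified Soundex plus a string-similarity ratio for name
# matching; not the algorithms evaluated in the paper.
from difflib import SequenceMatcher

_CODES = {c: str(d) for d, letters in enumerate(
    ("BFPV", "CGJKQSXZ", "DT", "L", "MN", "R"), start=1) for c in letters}

def soundex(name):
    name = "".join(ch for ch in name.upper() if ch.isalpha())
    if not name:
        return ""
    digits = [_CODES.get(ch, "0") for ch in name]
    code, prev = name[0], digits[0]
    for d in digits[1:]:
        if d != "0" and d != prev:
            code += d
        prev = d
    return (code + "000")[:4]

def match(a, b):
    phonetic_equal = soundex(a) == soundex(b)
    similarity = SequenceMatcher(None, a.lower(), b.lower()).ratio()
    return phonetic_equal, similarity

print(match("Meyer", "Meier"))      # phonetically equal, high string similarity
print(match("Smith", "Smythe"))
```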
377 Kinetic Modeling of Transesterification of Triacetin Using Synthesized Ion Exchange Resin (SIERs)
Authors: Hafizuddin W. Yussof, Syamsutajri S. Bahri, Adam P. Harvey
Abstract:
Strong anion exchange resins with QN+OH- functionality have the potential to be developed and employed as heterogeneous catalysts for transesterification, as they are chemically stable against leaching of the functional group. Nine different SIERs (SIER1-9) with QN+OH- were prepared by suspension polymerization of vinylbenzyl chloride-divinylbenzene (VBC-DVB) copolymers in the presence of n-heptane (pore-forming agent). The amine group was successfully grafted onto the polymeric resin beads through functionalization with trimethylamine. These SIERs were then used as catalysts for the transesterification of triacetin with methanol. A set of differential equations representing the Langmuir-Hinshelwood-Hougen-Watson (LHHW) and Eley-Rideal (ER) models for the transesterification reaction was developed, and these kinetic models were fitted to the experimental data. Overall, the synthesized ion exchange resin-catalyzed reaction was better described by the Eley-Rideal model than by the LHHW model, with sums of squared errors (SSE) of 0.742 and 0.996, respectively.
Keywords: Anion exchange resin, Eley-Rideal, Langmuir-Hinshelwood-Hougen-Watson, transesterification.
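The model discrimination step described above amounts to least-squares fitting of each candidate rate expression and comparing the resulting SSE values. The sketch below fits a pseudo-first-order surrogate model to synthetic conversion-versus-time data with SciPy and reports the SSE; the data, the surrogate rate law and the parameter guess are illustrative assumptions, not the paper's LHHW or ER expressions.

```python
# Hedged sketch: least-squares fit of a candidate kinetic model and its SSE;
# data and model are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_first_order(t, k):
    return 1.0 - np.exp(-k * t)                              # triacetin conversion vs. time

t_data = np.array([0, 10, 20, 40, 60, 90, 120.0])            # min (synthetic)
x_data = np.array([0, 0.18, 0.33, 0.55, 0.69, 0.81, 0.88])   # conversion (synthetic)

popt, _ = curve_fit(pseudo_first_order, t_data, x_data, p0=[0.01])
sse = np.sum((x_data - pseudo_first_order(t_data, *popt)) ** 2)
print("fitted k:", popt[0], "SSE:", sse)
```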
376 Electrical Impedance Imaging Using Eddy Current
Authors: A. Ambia, T. Takemae, Y. Kosugi, M. Hongo
Abstract:
Electric impedance imaging is a method of reconstructing the spatial distribution of electrical conductivity inside a subject. In this paper, a new method of electrical impedance imaging using eddy currents is proposed. The eddy current distribution in the body depends on the conductivity distribution and the magnetic field pattern. By changing the position of the magnetic core, a set of voltage differences is measured with a pair of electrodes. This set of voltage differences is used in the image reconstruction of the conductivity distribution. The least square error minimization method is used as the reconstruction algorithm, and the back projection algorithm is used to obtain two-dimensional images. Based on this principle, a measurement system was developed and model experiments were performed with a saline-filled phantom. The shape of each model in the reconstructed image is similar to that of the corresponding model. From the results of these experiments, it is confirmed that the proposed method is applicable to the realization of electrical impedance imaging.
Keywords: Back projection algorithm, electrical impedance tomography, eddy current, magnetic inductance tomography.
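The least-square-error step of such a reconstruction can be written in one line once a sensitivity matrix relating conductivity changes to voltage differences is available; in the sketch below that matrix is random for illustration, so it stands in for, rather than reproduces, the coil/electrode model of the paper.

```python
# Hedged sketch of the least-squares reconstruction step; J is a placeholder
# sensitivity matrix, not a physical coil/electrode model.
import numpy as np

rng = np.random.default_rng(2)
n_meas, n_pixels = 128, 100
J = rng.normal(size=(n_meas, n_pixels))            # placeholder sensitivity matrix
s_true = np.zeros(n_pixels)
s_true[40:45] = 1.0                                # a small conductive anomaly
v = J @ s_true + 0.01 * rng.normal(size=n_meas)    # "measured" voltage differences

s_est, *_ = np.linalg.lstsq(J, v, rcond=None)      # minimize ||J s - v||
print("reconstruction error:", np.linalg.norm(s_est - s_true))
```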
375 Gain Tuning Fuzzy Controller for an Optical Disk Drive
Authors: Shiuh-Jer Huang, Ming-Tien Su
Abstract:
Since the driving speed and control accuracy of commercial optical disk drives are increasing significantly, an efficient controller is needed to monitor the track seeking and following operations of the servo system in order to achieve the desired data extraction response. The nonlinear behaviors of the actuator and servo system of the optical disk drive influence the laser spot positioning. Here, a model-free fuzzy control scheme is employed to design the track seeking servo controller for a DC-motor-driven optical disk drive system. In addition, the sliding mode control strategy is introduced into the fuzzy control structure to construct a 1-D adaptive fuzzy rule intelligent controller, which simplifies the implementation and improves the control performance. The experimental results show that the steady state error of track seeking with this fuzzy controller can be maintained within the track width (1.6 μm). It can be used in the track seeking and track following servo control operations.
Keywords: Fuzzy control, gain tuning, optical disk drive.
374 Solution of Density Dependent Nonlinear Reaction-Diffusion Equation Using Differential Quadrature Method
Authors: Gülnihal Meral
Abstract:
In this study, the density dependent nonlinear reaction-diffusion equation, which arises in insect dispersal models, is solved using the combined application of the differential quadrature method (DQM) and the implicit Euler method. The polynomial-based DQM is used to discretize the spatial derivatives of the problem. The resulting time-dependent nonlinear system of ordinary differential equations (ODEs) is solved using the implicit Euler method. The computations are carried out for a Cauchy problem defined by a one-dimensional density dependent nonlinear reaction-diffusion equation which has an exact solution. The DQM solution is found to be in very good agreement with the exact solution in terms of maximum absolute error. The DQM solution exhibits superior accuracy at large time levels tending to steady state. Furthermore, using an implicit method in the solution procedure leads to stable solutions, and larger time steps can be used.
Keywords: Density Dependent Nonlinear Reaction-Diffusion Equation, Differential Quadrature Method, Implicit Euler Method.
373 A Study of Islamic Stock Indices and Macroeconomic Variables
Authors: Mohammad Irfan
Abstract:
The purpose of this paper is to investigate the relationship between key macroeconomic variables and the Islamic stock market in India. The study is based on time series data for the financial years 2009-2015 and explores the consistency of the relationship between macroeconomic variables and the Shariah indices. The Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests are employed to check the stationarity of the data. The study establishes the long-run relationship between the Shariah indices and the macroeconomic variables by using the Johansen cointegration test. BSE Shariah and Nifty Shariah exhibit unidirectional Granger causality. The outcome of the vector error correction model (VECM) significantly confirms the applicability of the best fitted model. Thus, the Islamic stock indices are working effectively for the development of the Indian economy, which suggests keeping an eye on the Islamic stock market, as it is expected to interact more with other macroeconomic variables in the future.
Keywords: Indian Shariah indices, macroeconomic variables, co-integration, Granger causality, vector error correction model.
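Two of the named tests are available directly in statsmodels; the sketch below runs an ADF check on first differences and a pairwise Granger causality test on synthetic monthly series standing in for a Shariah index and one macroeconomic variable. The data are fabricated for illustration only, and the Johansen and VECM steps of the study are not reproduced here.

```python
# Hedged sketch: ADF stationarity check and a pairwise Granger causality test
# with statsmodels on synthetic series; not the paper's data or full workflow.
import numpy as np
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

rng = np.random.default_rng(3)
macro = np.cumsum(rng.normal(size=84))                  # I(1) macroeconomic series
index = 0.5 * np.roll(macro, 1) + rng.normal(size=84)   # index lags the macro series
index[0] = 0.0

for name, series in (("macro", macro), ("index", index)):
    stat, pvalue = adfuller(np.diff(series))[:2]        # test the first difference
    print(name, "ADF p-value on first difference:", round(pvalue, 4))

# Does `macro` Granger-cause `index`? Column order: (caused, causing).
data = np.column_stack([np.diff(index), np.diff(macro)])
grangercausalitytests(data, maxlag=2)
```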
372 Small Signal Stability Enhancement for Hybrid Power Systems by SVC
Authors: Ali Dehghani, Mojtaba Hakimzadeh, Amir Habibi, Navid Mehdizadeh Afroozi
Abstract:
In this paper, an isolated wind-diesel hybrid power system is considered for a reactive power control study; it has an induction generator for wind power conversion and a synchronous alternator with an automatic voltage regulator (AVR) for the diesel unit. The dynamic voltage stability evaluation is based on small signal analysis considering a Static VAR Compensator (SVC) and an IEEE Type-I excitation system. It is shown that a variable reactive power source such as the SVC is crucial to meet the varying reactive power demand of the induction generator and the load, and to achieve excellent voltage regulation of the system with minimum fluctuations. The integral square error (ISE) criterion is used to evaluate the optimum setting of the gain parameters. Finally, the dynamic responses of the considered power system with the optimum gain settings are also presented.
Keywords: SVC, Small Signal Stability, Reactive Power, Control, Hybrid System.
371 A Detection Method of Faults in Railway Pantographs Based on Dynamic Phase Plots
Authors: G. Santamato, M. Solazzi, A. Frisoli
Abstract:
Systems for the detection of damage in railway pantographs effectively reduce the cost of maintenance and improve time scheduling. In this paper, we present an approach to designing a monitoring tool that fits strong customer requirements such as portability and ease of use. The pantograph has been modeled to estimate its dynamical properties, since no data are available. With the aim of focusing on suspension health, a two degrees of freedom (DOF) scheme has been adopted, and the parameters have been calculated by means of analytical dynamics. A Finite Element Method (FEM) modal analysis verified the lumped model with acceptable error. The detection strategy looks for alterations in phase-plot topology induced by defects. In order to test the suitability of the method, leakage in the dashpot was simulated on the lumped model. The results are interesting because changes in the phase plots are more noticeable than frequency shifts. Further calculations as well as experimental tests will support future developments of this smart strategy.
Keywords: Pantograph models, phase-plots, structural health monitoring, vibration-based condition monitoring.
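To illustrate what such a lumped model and its phase plots look like, the sketch below integrates a generic two-DOF mass-spring-damper system with SciPy for a healthy and a "leaking dashpot" damping value and reports simple phase-plane statistics; all masses, stiffnesses, damping values and the excitation are assumptions for illustration, not the identified pantograph parameters.

```python
# Hedged sketch: a generic two-DOF lumped model; the phase-plot data (x1, v1)
# of the upper mass is what a defect would alter. All parameters are
# illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

m1, m2, k1, k2 = 6.0, 10.0, 5e3, 8e3              # masses [kg], stiffnesses [N/m]

def rhs(t, y, c1, c2):
    x1, v1, x2, v2 = y
    f = 50.0 * np.sin(2 * np.pi * 5.0 * t)        # contact-wire excitation on mass 1
    a1 = (f - k1 * (x1 - x2) - c1 * (v1 - v2)) / m1
    a2 = (k1 * (x1 - x2) + c1 * (v1 - v2) - k2 * x2 - c2 * v2) / m2
    return [v1, a1, v2, a2]

t_eval = np.linspace(0, 5, 5000)
healthy = solve_ivp(rhs, (0, 5), [0, 0, 0, 0], t_eval=t_eval, args=(60.0, 120.0))
leaking = solve_ivp(rhs, (0, 5), [0, 0, 0, 0], t_eval=t_eval, args=(10.0, 120.0))
for label, sol in (("healthy", healthy), ("dashpot leak", leaking)):
    x1, v1 = sol.y[0], sol.y[1]                   # phase-plot coordinates
    print(label, "max |x1|:", np.abs(x1).max(), "max |v1|:", np.abs(v1).max())
```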