Search results for: error information
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5018

3488 Range-Free Localization Schemes for Wireless Sensor Networks

Authors: R. Khadim, M. Erritali, A. Maaden

Abstract:

Localization of nodes is one of the key issues of Wireless Sensor Networks (WSN) that has gained wide attention in recent years. Existing localization techniques can be generally categorized into two types: range-based and range-free. Compared with range-based schemes, range-free schemes are more cost-effective because no additional ranging devices are needed. As a result, we focus our research on range-free schemes. In this paper we study three range-free localization algorithms and compare the localization error and energy consumption of each. The Centroid algorithm requires that a normal node have at least three neighbor anchors, while the DV-Hop algorithm does not have this requirement. The third algorithm studied, the Amorphous algorithm, is similar to DV-Hop: the idea is to derive the distance between two nodes from their hop count instead of measuring the linear distance between them. The simulation results show that the localization accuracy of the Amorphous algorithm is higher than that of the other algorithms, while its energy consumption does not increase too much.
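
As a rough illustration of the simplest of the three schemes, the sketch below (Python, not from the paper) estimates a node's position as the centroid of the anchors it can hear and reports the resulting localization error; the coordinates are made-up values.

```python
# Illustrative sketch (not the authors' code): centroid localization of an
# unknown node from the positions of anchor nodes within radio range.
import numpy as np

def centroid_estimate(anchor_positions):
    """Estimate a node position as the centroid of its neighbor anchors.

    anchor_positions: (k, 2) array of anchors heard by the node; the
    Centroid scheme assumes k >= 3.
    """
    anchors = np.asarray(anchor_positions, dtype=float)
    if anchors.shape[0] < 3:
        raise ValueError("Centroid localization needs at least 3 neighbor anchors")
    return anchors.mean(axis=0)

# Example: a node hearing three anchors; the localization error is the
# Euclidean distance between the true and estimated positions.
true_pos = np.array([4.0, 5.0])
estimate = centroid_estimate([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
print("estimate:", estimate, "error:", np.linalg.norm(estimate - true_pos))
```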

Keywords: Wireless Sensor Networks, Node Localization, Centroid Algorithm, DV–Hop Algorithm, Amorphous Algorithm.

3487 A Comparison of Real Valued Transforms for Image Compression

Authors: Shivali D. Kulkarni, Ameya K. Naik, Nitin S. Nagori

Abstract:

In this paper we present simulation results for the application of a bandwidth-efficient algorithm (mapping algorithm) to an image transmission system. This system considers three different real-valued transforms to generate energy-compact coefficients. First, results are presented for gray-scale and color image transmission in the absence of noise. It is seen that the system performs best when the discrete cosine transform is used. Also, the performance of the system is dominated more by the size of the transform block than by the number of coefficients transmitted or the number of bits used to represent each coefficient. Similar results are obtained in the presence of additive white Gaussian noise; varying values of the bit error rate have very little or no impact on the performance of the algorithm. Optimum results are obtained with an 8x8 transform block, transmitting 15 coefficients from each block using 8 bits per coefficient.
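
To make the block-transform setup concrete, here is a minimal Python sketch (not the authors' mapping algorithm) of 8x8 DCT block coding that keeps roughly 15 coefficients per block; the magnitude-based selection and the absence of 8-bit quantization are simplifying assumptions.

```python
# Minimal sketch of 8x8 block DCT compression keeping ~15 coefficients per block.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(block):
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def compress_block(block, keep=15):
    coeffs = dct2(block)
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]   # keep the `keep` largest-magnitude coefficients
    mask = np.abs(coeffs) >= thresh
    return idct2(coeffs * mask)

image = np.random.rand(256, 256)                      # stand-in for a gray-scale image
recon = np.zeros_like(image)
for r in range(0, 256, 8):
    for c in range(0, 256, 8):
        recon[r:r+8, c:c+8] = compress_block(image[r:r+8, c:c+8])
```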

Keywords: Additive white Gaussian noise channel, mapping algorithm, peak signal to noise ratio, transform encoding.

3486 PID Control Design Based on Genetic Algorithm with Integrator Anti-Windup for Automatic Voltage Regulator and Speed Governor of Brushless Synchronous Generator

Authors: O. S. Ebrahim, M. A. Badr, Kh. H. Gharib, H. K. Temraz

Abstract:

This paper presents a methodology based on a genetic algorithm (GA) to tune the parameters of the proportional-integral-differential (PID) controllers used in the automatic voltage regulator (AVR) and speed governor of a brushless synchronous generator driven by a three-stage steam turbine. The parameter tuning is posed as a nonlinear optimization problem solved by the GA to minimize the integral of absolute error (IAE). The problem of integral windup due to physical system limitations is solved using a simple anti-windup scheme. The obtained controllers are compared to those designed using the classical Ziegler-Nichols technique and constrained optimization. Results show the distinct superiority of the proposed method.
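
The sketch below illustrates, under stated assumptions, the kind of fitness evaluation such a GA would minimize: a PID loop with a saturation-aware (anti-windup) integrator is simulated on a hypothetical first-order plant and scored by the IAE. The plant, actuator limits and gains are placeholders, not the paper's generator model.

```python
# Hedged illustration of an IAE-based fitness function with integrator anti-windup.
import numpy as np

def iae_fitness(kp, ki, kd, dt=0.01, t_end=5.0, u_min=-1.0, u_max=1.0):
    n = int(t_end / dt)
    y, integ, prev_err, iae = 0.0, 0.0, 0.0, 0.0
    for _ in range(n):
        err = 1.0 - y                          # unit step reference
        deriv = (err - prev_err) / dt
        u_unsat = kp * err + ki * integ + kd * deriv
        u = min(max(u_unsat, u_min), u_max)    # actuator saturation
        if u == u_unsat:                       # simple anti-windup: freeze the
            integ += err * dt                  # integrator while saturated
        y += dt * (-y + u) / 0.5               # hypothetical first-order plant, tau = 0.5 s
        prev_err = err
        iae += abs(err) * dt
    return iae                                 # the GA minimizes this value

print(iae_fitness(2.0, 1.0, 0.1))
```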

Keywords: Brushless synchronous generator, Genetic Algorithm, GA, Proportional-Integral-Differential control, PID control, automatic voltage regulator, AVR.

3485 Dimensional Modeling of HIV Data Using Open Source

Authors: Charles D. Otine, Samuel B. Kucel, Lena Trojer

Abstract:

The choice of data modeling technique for an information system is determined by the objective of the resulting data model. Dimensional modeling is the preferred technique for data destined for data warehouses and data mining, producing data models that ease analysis and queries, in contrast with entity relationship modeling. The establishment of data warehouses as components of information system landscapes in many organizations has driven the development of dimensional modeling. However, this development has been reported far more for commercial database management systems than for open source ones, making it less affordable for those in resource-constrained settings. This paper presents dimensional modeling of HIV patient information using open source modeling tools. It takes advantage of the fact that the regions most affected by HIV (sub-Saharan Africa) are heavily resource constrained while holding large quantities of HIV data. Two HIV data source systems were studied to identify appropriate dimensions and facts; these were then modeled using two open source dimensional modeling tools. Use of open source would reduce the software costs of dimensional modeling and in turn make data warehousing and data mining more feasible even in resource-constrained settings where data are available.

Keywords: Database, Data Mining, Data Warehouse, Dimensional Modeling, Open Source.

3484 Finding Equilibrium in Transport Networks by Simulation and Investigation of Behaviors

Authors: Gábor Szűcs, Gyula Sallai

Abstract:

The goal of this paper is to find the Wardrop equilibrium in transport networks under uncertainty, where the uncertainty comes from lack of information. We use a simulation tool to find the equilibrium, which gives only an approximate solution, but this is sufficient for large networks as well. In order to take the uncertainty into account, we have developed an interval-based procedure for finding the paths with minimal cost using the Dempster-Shafer theory. Furthermore, we have investigated the users' behaviors using a game theory approach, because their path choices influence the costs of the other users' paths.

Keywords: Dempster-Shafer theory, S-O and U-O transportation network, uncertainty of information, Wardrop equilibrium.

3483 Forecasting Stock Price Manipulation in Capital Market

Authors: F. Rahnamay Roodposhti, M. Falah Shams, H. Kordlouie

Abstract:

The aim of this article is to extend and develop econometric and network-structure-based methods able to detect price manipulation on the Tehran Stock Exchange; its principal goal is to offer a model for predicting price manipulation on that exchange. To do so, by applying a separation method, a sample of 397 companies listed on the Tehran Stock Exchange was selected, and information on their prices and trading volumes during the years 2001 to 2009 was collected. By performing the runs test, skewness test and duration correlation test, the selected companies were divided into two sets: manipulated and non-manipulated companies. In the next stage, by investigating the cumulative return process and trading volume of the manipulated companies, the starting date of price manipulation was identified. Using the logit model, an artificial neural network and multiple discriminant analysis, together with information on company size, information clarity, the P/E ratio and stock liquidity one year prior to manipulation, models for forecasting price manipulation of the stocks of companies listed on the Tehran Stock Exchange were designed. Finally, the forecasting power of the models was studied using the test set data. The forecasting power on the test set was 92.1% for the logit model, 94.1% for the artificial neural network and 90.2% for the multiple discriminant analysis model; therefore, all three models have high power to forecast price manipulation, and there is no considerable difference among their forecasting powers.
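
A minimal, illustration-only sketch of the logit step is given below; the four features mirror those named in the abstract (company size, information clarity, P/E ratio, stock liquidity), but the data are synthetic and the printed score has no relation to the paper's 92.1%.

```python
# Illustrative logit (logistic regression) classifier on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(397, 4))                 # size, clarity, P/E, liquidity (synthetic)
y = (X @ np.array([0.8, -1.2, 0.5, -0.7]) + rng.normal(size=397) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("forecasting power on the test set:", model.score(X_te, y_te))
```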

Keywords: Price Manipulation, Liquidity, Size of Company, Floating Stock, Information Clarity

3482 Design of a Pneumonia Ontology for Diagnosis Decision Support System

Authors: Sabrina Azzi, Michal Iglewski, Véronique Nabelsi

Abstract:

Diagnostic error is frequent and one of the most important safety problems today. One of the main objectives of our work is to propose an ontological representation that takes the diagnostic criteria into account in order to improve diagnosis. We chose pneumonia because it is one of the diseases most frequently affected by diagnostic errors, with harmful effects on patients. To achieve our aim, we use a semi-automated method to integrate diverse knowledge sources that include publicly available pneumonia disease guidelines from international repositories, biomedical ontologies and electronic health records. We follow the principles of the Open Biomedical Ontologies (OBO) Foundry. The resulting ontology covers symptoms and signs, all the types of pneumonia, antecedents, pathogens, and diagnostic testing. The first evaluation results show that most of the terms are covered by the ontology. This work is still in progress and represents a first and major step toward the development of a diagnosis decision support system for pneumonia.

Keywords: Clinical decision support system, diagnostic errors, ontology, pneumonia.

3481 Digital Redesign of Interval Systems via Particle Swarm Optimization

Authors: Chen-Chien Hsu, Chun-Hui Gao

Abstract:

In this paper, a PSO-based approach is proposed to derive a digital controller for redesigned digital systems having an interval plant, based on resemblance of the extremal gain/phase margins. By combining the interval plant and a controller as an interval system, the extremal GM/PM associated with the loop transfer function can be obtained. The design problem is then formulated as an optimization problem over an aggregated error function expressing the deviation of the extremal GM/PM between the redesigned digital system and its continuous counterpart, and subsequently optimized by the proposed PSO to obtain an optimal set of parameters for the digital controller. Computer simulations have shown that the frequency responses of the redesigned digital system having an interval plant bear a closer resemblance to those of its continuous-time counterpart when the PSO-derived digital controller is incorporated than when existing open-loop discretization methods are used.
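
For readers unfamiliar with PSO, the following generic sketch shows the update loop that would drive such a search; the quadratic objective is only a stand-in for the paper's aggregated GM/PM error function.

```python
# Minimal particle swarm optimization sketch for minimizing an aggregated error function.
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, (n_particles, dim))       # positions (controller parameters)
    v = np.zeros_like(x)                             # velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, err = pso(lambda p: np.sum((p - 0.3) ** 2), dim=3)   # toy aggregated error
print(best, err)
```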

Keywords: Digital redesign, Extremal systems, Particle swarm optimization, Uncertain interval systems

3480 Exploring More Productive Ways of Working

Authors: Jenna Ruostela, Antti Lönnqvist

Abstract:

'New ways of working' refers to non-traditional work practices, settings and locations supported by information and communication technologies (ICT) that supplement or replace traditional ways of working. It questions the contemporary work practices and settings still very much used in knowledge-intensive organizations today. In this study, new ways of working are seen to consist of two elements: the work environment (physical, virtual and social) and work practices. This study aims to gather the scattered information together and deepen the understanding of new ways of working. Moreover, the objective is to provide some evidence on the still unclear productivity impacts of new ways of working using a case study approach.

Keywords: Knowledge work, new ways of working, productivity, work environment.

3479 Continual Learning Using Data Generation for Hyperspectral Remote Sensing Scene Classification

Authors: Samiah Alammari, Nassim Ammour

Abstract:

When a massive number of tasks is provided successively to a deep learning process, good model performance requires preserving the data of previous tasks in order to retrain the model for each upcoming classification; otherwise, the model performs poorly due to the catastrophic forgetting phenomenon. To overcome this shortcoming, we developed a successful continual learning deep model for remote sensing hyperspectral image region classification. The proposed neural network architecture encapsulates two trainable subnetworks. The first module adapts its weights by minimizing the discrimination error between the land-cover classes during the new task learning, while the second module learns to replicate the data of the previous tasks by discovering the latent data structure of the new task dataset. We conduct experiments on the Indian Pines hyperspectral image (HSI) dataset. The results confirm the capability of the proposed method.

Keywords: Continual learning, data reconstruction, remote sensing, hyperspectral image segmentation.

3478 Identification of Optimum Parameters of Deep Drawing of a Cylindrical Workpiece using Neural Network and Genetic Algorithm

Authors: D. Singh, R. Yousefi, M. Boroushaki

Abstract:

Intelligent deep drawing is an instrumental research field in sheet metal forming. A set of 28 different experimental data points is employed in this paper to investigate the roles of die radius, punch radius, friction coefficient and drawing ratio in the deep drawing of axisymmetric workpieces. This paper focuses on an evolutionary neural network, specifically error back-propagation in collaboration with a genetic algorithm. The neural network comprises a number of different functional nodes defined through established principles. The input parameters, i.e., punch radius, die radius, friction coefficient and drawing ratio, are fed to the network, and the material outputs at two critical points are then accurately calculated. The output of the network is used by the genetic algorithm to establish the parameters leading to the most uniform thickness in the product. This research achieved satisfactory results, demonstrating the suitability of neural networks for this task.

Keywords: Deep-drawing, Neural network, Genetic algorithm, Sheet metal forming.

3477 Granger Causal Nexus between Financial Development and Energy Consumption: Evidence from Cross Country Panel Data

Authors: Rudra P. Pradhan

Abstract:

This paper examines the Granger causal nexus between financial development and energy consumption in the group of 35 Financial Action Task Force (FATF) countries over the period 1988-2012. The study uses two financial development indicators, private sector credit and stock market capitalization, and seven energy consumption indicators: coal, oil, gas, electricity, hydroelectric, nuclear and biomass. Using panel cointegration tests, the study finds that financial development and energy consumption are cointegrated, indicating the presence of a long-run relationship between the two. Using a panel vector error correction model (VECM), the study detects both bidirectional and unidirectional causality between financial development and energy consumption. The variation in this causality is due to the use of different proxies for both financial development and energy consumption. The policy implication of this study is that economic policies should recognize the differences in the financial development-energy consumption nexus in order to maintain sustainable development in the selected 35 FATF countries.
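
A hedged sketch of the econometric machinery (Johansen cointegration test followed by a VECM fit) is shown below using statsmodels on synthetic two-variable data; it only indicates the workflow and does not reproduce the FATF panel results.

```python
# Sketch only: Johansen cointegration test and VECM fit on synthetic series.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(0)
trend = np.cumsum(rng.normal(size=100))                     # shared stochastic trend
data = pd.DataFrame({
    "financial_development": trend + rng.normal(scale=0.5, size=100),
    "energy_consumption": 0.8 * trend + rng.normal(scale=0.5, size=100),
})

johansen = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", johansen.lr1)                    # compare with johansen.cvt

vecm_res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print(vecm_res.summary())
```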

Keywords: Financial development, energy consumption, Panel VECM, FATF countries.

3476 Exploration of Least Significant Bit Based Watermarking and Its Robustness against Salt and Pepper Noise

Authors: Kamaldeep Joshi, Rajkumar Yadav, Sachin Allwadhi

Abstract:

Image steganography is the best aspect of information hiding: the information is hidden within an image and the image travels openly on the Internet. The Least Significant Bit (LSB) method is one of the most popular methods of image steganography. In this method, the information bit is hidden at the LSB of the image pixel. In the one-bit LSB steganography method, the total number of pixels and the total number of message bits are equal to each other. In this paper, the LSB method of image steganography is used for watermarking, an application of steganography. The watermark contains 80*88 pixels and each pixel requires 8 bits for its binary equivalent, so the total number of bits required to hide the watermark is 80*88*8 (56,320). The experiment was performed on standard 256*256 and 512*512 size images. After the watermark insertion, histogram analysis was performed. A salt-and-pepper noise factor of 0.02 was added to the stego image in order to evaluate the robustness of the method. The watermark was successfully retrieved after the insertion of noise. An experiment was performed to assess the imperceptibility of the stego image and the retrieved watermark. It is clear that the LSB watermarking scheme is robust to salt and pepper noise.
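
A simplified sketch of one-bit LSB embedding and extraction is given below (plain NumPy, not the authors' code); it places the 56,320 watermark bits into the least significant bits of the first pixels of a stand-in 256*256 host image.

```python
# Simplified one-bit LSB watermark embedding/extraction.
import numpy as np

def embed_lsb(cover, bits):
    stego = cover.copy().ravel()
    stego[:len(bits)] = (stego[:len(bits)] & 0xFE) | bits   # overwrite the LSBs
    return stego.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    return stego.ravel()[:n_bits] & 1

cover = np.random.randint(0, 256, (256, 256), dtype=np.uint8)     # stand-in host image
watermark = np.random.randint(0, 2, 80 * 88 * 8, dtype=np.uint8)  # 56,320 bits (80x88, 8 bpp)
stego = embed_lsb(cover, watermark)
assert np.array_equal(extract_lsb(stego, watermark.size), watermark)
```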

Keywords: LSB, watermarking, salt and pepper, PSNR.

3475 A Comparison and Analysis of Name Matching Algorithms

Authors: Chakkrit Snae

Abstract:

Names are important in many societies, even in technologically oriented ones which use, e.g., ID systems to identify individual people. Names such as surnames are the most important, as they are used in many processes, such as the identification of people and genealogical research. On the other hand, variation in names can be a major problem for the identification of and search for people, e.g. in web search or for security reasons. Name matching presumes a priori that a recorded name written in one alphabet reflects the phonetic identity of two samples, or some transcription error in copying a previously recorded name, together with the assumption that the two names refer to the same person. This paper describes name variations and gives a basic description of various name matching algorithms developed to overcome name variation and to find reasonable variants of names, which can be used to resolve mismatches in record linkage and name search. The implementation contains algorithms for computing a range of fuzzy matches based on different types of algorithms, e.g. composite and hybrid methods, and allows us to test and measure the algorithms for accuracy. NYSIIS, LIG2 and Phonex have been shown to perform well and provide sufficient flexibility to be included in the linkage/matching process for optimising name searching.
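
As a flavor of phonetic name matching, the sketch below implements a basic Soundex code; Soundex is a simpler relative of the codes compared in the paper (NYSIIS, LIG2, Phonex) and is shown only for illustration.

```python
# Basic Soundex phonetic code: spelling variants of a name map to the same code.
def soundex(name):
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    name = name.upper()
    out, prev = name[0], codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            out += code
        if ch not in "HW":          # H and W do not reset the previous code
            prev = code
    return (out + "000")[:4]

# Two spelling variants of the same surname map to the same code.
print(soundex("Robert"), soundex("Rupert"))   # R163 R163
```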

Keywords: Data mining, name matching algorithm, nominal data, searching system.

3474 Kinetic Modeling of Transesterification of Triacetin Using Synthesized Ion Exchange Resin (SIERs)

Authors: Hafizuddin W. Yussof, Syamsutajri S. Bahri, Adam P. Harvey

Abstract:

Strong anion exchange resins with QN+OH- have the potential to be developed and employed as heterogeneous catalysts for transesterification, as they are chemically stable against leaching of the functional group. Nine different SIERs (SIER1-9) with QN+OH- were prepared by suspension polymerization of vinylbenzyl chloride-divinylbenzene (VBC-DVB) copolymers in the presence of n-heptane (a pore-forming agent). The amine group was successfully grafted onto the polymeric resin beads through functionalization with trimethylamine. These SIERs were then used as catalysts for the transesterification of triacetin with methanol. Sets of differential equations representing the Langmuir-Hinshelwood-Hougen-Watson (LHHW) and Eley-Rideal (ER) models for the transesterification reaction were developed, and these kinetic models were fitted to the experimental data. Overall, the synthesized ion exchange resin-catalyzed reaction was better described by the Eley-Rideal model than by the LHHW model, with sums of squared errors (SSE) of 0.742 and 0.996, respectively.
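
The model-discrimination step can be illustrated by fitting two candidate rate expressions to data and comparing their SSE values, as in the hedged sketch below; the rate forms and data are simplified placeholders rather than the paper's full LHHW and ER equations.

```python
# Hedged sketch: fit two candidate rate expressions by minimizing the SSE.
import numpy as np
from scipy.optimize import minimize

conc = np.linspace(0.1, 2.0, 15)                        # substrate concentration (arbitrary units)
rate_obs = 0.9 * conc / (1 + 1.5 * conc) + np.random.default_rng(2).normal(0, 0.01, 15)

def sse(params, model):
    return np.sum((model(conc, *params) - rate_obs) ** 2)

eley_rideal = lambda c, k, K: k * c / (1 + K * c)       # one adsorbed species (placeholder form)
lhhw_like   = lambda c, k, K: k * c / (1 + K * c) ** 2  # two adsorbed species (placeholder form)

for name, model in [("Eley-Rideal", eley_rideal), ("LHHW-like", lhhw_like)]:
    res = minimize(sse, x0=[1.0, 1.0], args=(model,))
    print(name, "SSE =", round(res.fun, 4))             # keep the model with the lower SSE
```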

Keywords: Anion exchange resin, Eley-Rideal, Langmuir-Hinshelwood-Hougen-Watson, transesterification.

3473 Electrical Impedance Imaging Using Eddy Current

Authors: A. Ambia, T. Takemae, Y. Kosugi, M. Hongo

Abstract:

Electrical impedance imaging is a method of reconstructing the spatial distribution of electrical conductivity inside a subject. In this paper, a new method of electrical impedance imaging using eddy currents is proposed. The eddy current distribution in the body depends on the conductivity distribution and the magnetic field pattern. By changing the position of a magnetic core, a set of voltage differences is measured with a pair of electrodes. This set of voltage differences is used in the image reconstruction of the conductivity distribution. The least-squares error minimization method is used as the reconstruction algorithm, and the back projection algorithm is used to obtain two-dimensional images. Based on this principle, a measurement system was developed and some model experiments were performed with a saline-filled phantom. The shape of each model in the reconstructed image is similar to that of the corresponding model. From the results of these experiments, it is confirmed that the proposed method is applicable to the realization of electrical impedance imaging.

Keywords: Back projection algorithm, electrical impedance tomography, eddy current, magnetic inductance tomography.

3472 Comparison of Pore Space Features by Thin Sections and X-Ray Microtomography

Authors: H. Alves, J. T. Assis, M. Geraldes, I. Lima, R. T. Lopes

Abstract:

Microtomographic images and thin section (TS) images were analyzed and compared with respect to parameters of geological interest such as porosity and its distribution along the samples. The results show that microtomography (CT) analysis, although limited by its resolution, provides interesting information about the distribution of porosity (homogeneous or not) and can also quantify the connected and non-connected pores, i.e., total porosity. TS analysis has no limitation concerning resolution, but is limited by the experimental data available (a few glass sheets for analysis) and can give information only about the connected pores, i.e., effective porosity. The two methods have their own virtues and flaws, but when paired together they complement one another, making for a more reliable and complete analysis.

Keywords: Microtomography, petrographical microscopy, sediments, thin sections.

3471 Gain Tuning Fuzzy Controller for an Optical Disk Drive

Authors: Shiuh-Jer Huang, Ming-Tien Su

Abstract:

Since the driving speed and control accuracy of commercial optical disk drives are increasing significantly, an efficient controller is needed to monitor the track seeking and following operations of the servo system in order to achieve the desired data extraction response. The nonlinear behaviors of the actuator and servo system of the optical disk drive influence the laser spot positioning. Here, a model-free fuzzy control scheme is employed to design the track seeking servo controller for a d.c. motor-driven optical disk drive system. In addition, the sliding mode control strategy is introduced into the fuzzy control structure to construct a one-dimensional adaptive fuzzy-rule intelligent controller, simplifying the implementation and improving the control performance. The experimental results show that the steady-state error of track seeking with this fuzzy controller can be maintained within the track width (1.6 μm). It can be used in track seeking and track following servo control operations.

Keywords: Fuzzy control, gain tuning and optical disk drive.

3470 Solution of Density Dependent Nonlinear Reaction-Diffusion Equation Using Differential Quadrature Method

Authors: Gülnihal Meral

Abstract:

In this study, the density-dependent nonlinear reaction-diffusion equation, which arises in insect dispersal models, is solved using the combined application of the differential quadrature method (DQM) and the implicit Euler method. The polynomial-based DQM is used to discretize the spatial derivatives of the problem. The resulting time-dependent nonlinear system of ordinary differential equations (ODEs) is solved using the implicit Euler method. The computations are carried out for a Cauchy problem defined by a one-dimensional density-dependent nonlinear reaction-diffusion equation which has an exact solution. The DQM solution is found to be in very good agreement with the exact solution in terms of maximum absolute error, and it exhibits superior accuracy at large time levels tending to the steady state. Furthermore, using an implicit method in the solution procedure leads to stable solutions, and larger time steps can be used.

Keywords: Density Dependent Nonlinear Reaction-Diffusion Equation, Differential Quadrature Method, Implicit Euler Method.

3469 A Study of Islamic Stock Indices and Macroeconomic Variables

Authors: Mohammad Irfan

Abstract:

The purpose of this paper is to investigate the relationship between key macroeconomic variables and the Islamic stock market in India. The study is based on time series data for the financial years 2009-2015 and explores the consistency of the relationship between macroeconomic variables and Shariah indices. The ADF (Augmented Dickey-Fuller) and PP (Phillips-Perron) tests are employed to check the stationarity of the data. The study establishes the long-run relationship between Shariah indices and macroeconomic variables by using the Johansen cointegration test. BSE Shariah and Nifty Shariah exhibit unidirectional Granger causality. The outcome of the VECM significantly confirms the applicability of the best-fitted model. Thus, Islamic stock indices are working effectively for the development of the Indian economy, and the results suggest keeping an eye on the Islamic stock market, which is likely to interact more with other macroeconomic variables in the future.
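
The stationarity check mentioned above can be illustrated with the ADF test from statsmodels, as in the short sketch below on a synthetic index series (illustration only, not the Shariah index data).

```python
# Small sketch of an ADF stationarity check on a synthetic level series and its first difference.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
index_series = np.cumsum(rng.normal(size=250))      # random-walk-like level series

for label, series in [("level", index_series),
                      ("first difference", np.diff(index_series))]:
    stat, pvalue = adfuller(series)[:2]
    print(f"ADF on {label}: statistic={stat:.2f}, p-value={pvalue:.3f}")
```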

Keywords: Indian shariah indices, macroeconomic variables, co-integration, Granger causality, Vector error correction model.

3468 Smart and Connected Aircraft Cabin: A Balancing Act between Operational Cabin Management, Airline Business and Passenger Expectations

Authors: Ralf God, Lothar Kerschgens, Leonardo Goratti, Steven Lemaire

Abstract:

Ubiquitous connectivity is a reality and a basic need for users on the ground. Connectivity in the cabin is also becoming increasingly important for passengers during air travel. Wireless sensor networks that provide information to cabin management systems are being used by airlines to optimize cabin crew workload. In networked cabin systems, communications and digitally transmitted data must be managed by airlines in every direction. Security and privacy, information processing and knowledge management are the current and future requirements for a smart and connected cabin.

Keywords: Smart and connected cabin management, Internet of Things, power management, airline business.

3467 Small Signal Stability Enhancement for Hybrid Power Systems by SVC

Authors: Ali Dehghani, Mojtaba Hakimzadeh, Amir Habibi, Navid Mehdizadeh Afroozi

Abstract:

In this paper, an isolated wind-diesel hybrid power system is considered for a reactive power control study, with an induction generator for wind power conversion and a synchronous alternator with an automatic voltage regulator (AVR) for the diesel unit. The dynamic voltage stability evaluation is based on small signal analysis considering a Static VAR Compensator (SVC) and an IEEE type-I excitation system. It is shown that a variable reactive power source such as the SVC is crucial to meet the varying reactive power demand of the induction generator and load and to achieve excellent voltage regulation of the system with minimum fluctuations. The integral square error (ISE) criterion can be used to evaluate the optimum setting of the gain parameters. Finally, the dynamic responses of the power system considered with the optimum gain settings are also presented.

Keywords: SVC, Small Signal Stability, Reactive Power, Control, Hybrid System.

3466 Low-complexity Integer Frequency Offset Synchronization for OFDMA System

Authors: Young-Jae Kim, Young-Hwan You

Abstract:

This paper presents an integer frequency offset (IFO) estimation scheme for the 3GPP Long Term Evolution (LTE) downlink system. Firstly, the conventional joint detection method for IFO and sector cell index (CID) information is introduced. Secondly, an IFO estimation scheme without explicit sector CID information is proposed; it can operate jointly with the proposed IFO estimation and reduces the time delay in comparison with the conventional joint method. The proposed method is also computationally efficient and has almost the same performance as the conventional method over the Pedestrian and Vehicular channel models.

Keywords: LTE, OFDMA, primary synchronization signal (PSS), IFO, CID

3465 A Detection Method of Faults in Railway Pantographs Based on Dynamic Phase Plots

Authors: G. Santamato, M. Solazzi, A. Frisoli

Abstract:

Systems for the detection of damage in railway pantographs effectively reduce the cost of maintenance and improve time scheduling. In this paper, we present an approach to designing a monitoring tool that fits strong customer requirements such as portability and ease of use. The pantograph has been modeled to estimate its dynamical properties, since no measured data are available. With the aim of focusing on suspension health, a two Degrees of Freedom (DOF) scheme has been adopted. Parameters have been calculated by means of analytical dynamics. A Finite Element Method (FEM) modal analysis verified the former model with an acceptable error. The detection strategy seeks alterations of the phase-plot topology induced by defects. In order to test the suitability of the method, leakage in the dashpot was simulated on the lumped model. The results are interesting because changes in the phase plots are more appreciable than a frequency shift. Further calculations as well as experimental tests will support future developments of this smart strategy.

Keywords: Pantograph models, phase-plots, structural health monitoring, vibration-based condition monitoring.

3464 Predicting Extrusion Process Parameters Using Neural Networks

Authors: Sachin Man Bajimaya, SangChul Park, Gi-Nam Wang

Abstract:

The objective of this paper is to estimate realistic principal extrusion process parameters by means of an artificial neural network. Conventionally, finite element analysis is used to derive process parameters. However, finite element analysis of the extrusion model does not consider the manufacturing process constraints in its modeling; therefore, the process parameters obtained through such an analysis remain highly theoretical. Alternatively, process development in industrial extrusion is to a great extent based on trial and error and often involves full-size experiments, which are both expensive and time-consuming. Artificial neural network-based estimation of the extrusion process parameters prior to plant execution helps to make the actual extrusion operation more efficient because more realistic parameters may be obtained; it thus bridges the gap between simulation and the real manufacturing execution system. In this work, a suitable neural network is designed and trained using an appropriate learning algorithm. The trained network is then used to predict the manufacturing process parameters.

Keywords: Artificial Neural Network (ANN), Indirect Extrusion, Finite Element Analysis, MES.

3463 Improved Processing Speed for Text Watermarking Algorithm in Color Images

Authors: Hamza A. Al-Sewadi, Akram N. A. Aldakari

Abstract:

Copyright protection and ownership proof of digital multimedia are achieved nowadays by digital watermarking techniques. A text watermarking algorithm for protecting the property rights and supporting ownership judgment of color images is proposed in this paper. Embedding is achieved by inserting text elements randomly into the color image as noise. The YIQ image processing model is found to be faster than other image processing models and hence is adopted for the embedding process. An optional choice of encrypting the text watermark before embedding is also suggested (in case it is required by some applications), where the text can be encrypted using any enciphering technique, adding more difficulty for attackers. Experiments resulted in an embedding speed improvement of more than double the speed of the other systems considered (such as the least significant bit method and separate color code methods), and a fairly acceptable level of peak signal-to-noise ratio (PSNR) with low mean square error values for watermarking purposes.
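
The color-space step can be sketched as follows; the NTSC-style YIQ matrix used here is an assumed standard transform, and the paper's embedding and optional encryption stages are not reproduced.

```python
# Rough sketch of the RGB-to-YIQ conversion used before embedding text as noise.
import numpy as np

RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])   # assumed NTSC YIQ coefficients

def rgb_to_yiq(image):
    """image: (H, W, 3) float array in [0, 1] -> YIQ planes of the same shape."""
    return image @ RGB_TO_YIQ.T

def yiq_to_rgb(image):
    return image @ np.linalg.inv(RGB_TO_YIQ).T

rgb = np.random.rand(64, 64, 3)                 # stand-in color image
yiq = rgb_to_yiq(rgb)
assert np.allclose(yiq_to_rgb(yiq), rgb)        # round trip is lossless in float
```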

Keywords: Steganography, watermarking, private keys, time complexity measurements.

3462 Personal Authentication Using FDOST in Finger Knuckle-Print Biometrics

Authors: N. B. Mahesh Kumar, K. Premalatha

Abstract:

The inherent skin patterns created at the joints on the outer surface of the fingers are referred to as the finger knuckle-print. It can be exploited to identify a person uniquely because the finger knuckle-print is very rich in texture. In a biometric system, a region of interest is used for the feature extraction algorithm. In this paper, local and global features are extracted separately. The Fast Discrete Orthonormal Stockwell Transform is used to extract the local features, and the global feature is obtained by extending the size of the Fast Discrete Orthonormal Stockwell Transform to infinity. The two features are fused to increase the recognition accuracy. A matching distance is calculated for each of the features individually, and the two distances are then merged to obtain the final matching distance. The proposed scheme gives better performance in terms of equal error rate and correct recognition rate.

Keywords: Hamming distance, Instantaneous phase, Region of Interest, Recognition accuracy.

3461 Comparison between Haar and Daubechies Wavelet Transformations on FPGA Technology

Authors: Fatma H. Elfouly, Mohamed I. Mahmoud, Moawad I. M. Dessouky, Salah Deyab

Abstract:

Recently, Field Programmable Gate Array (FPGA) technology has offered the potential of designing high-performance systems at low cost. The discrete wavelet transform has gained the reputation of being a very effective signal analysis tool for many practical applications. However, due to its computation-intensive nature, current implementations of the transform fall short of meeting the real-time processing requirements of most applications. The objective of this paper is to implement the Haar and Daubechies wavelets using FPGA technology. In addition, the Bit Error Rate (BER) between the input audio signal and the reconstructed output signal is calculated for each wavelet. The BER shows that the implementations execute the wavelet transform correctly and satisfy the perfect reconstruction conditions. The design procedure has been explained and carried out using state-of-the-art Electronic Design Automation (EDA) tools for system design on FPGA. Simulation, synthesis and implementation on the target FPGA technology have been carried out.
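
For reference, a single-level Haar DWT and the BER check can be sketched in a few lines of NumPy (a software illustration only, not the FPGA implementation); perfect reconstruction shows up as a BER of zero.

```python
# Reference sketch: single-level Haar DWT/IDWT and a bit error rate (BER) check.
import numpy as np

def haar_dwt(signal):
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)    # low-pass (average) coefficients
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)    # high-pass (difference) coefficients
    return approx, detail

def haar_idwt(approx, detail):
    out = np.empty(2 * approx.size)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

audio = np.random.randint(-2**15, 2**15, 1024)           # stand-in 16-bit audio frame
rec = np.rint(haar_idwt(*haar_dwt(audio))).astype(int)
bits_in = np.unpackbits(audio.astype(np.int16).view(np.uint8))
bits_out = np.unpackbits(rec.astype(np.int16).view(np.uint8))
print("BER:", np.mean(bits_in != bits_out))               # 0.0 => perfect reconstruction
```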

Keywords: Daubechies wavelet, discrete wavelet transform, Haar wavelet, Xilinx FPGA.

3460 A New Method of Adaptation in Integrated Learning Environment

Authors: Ildar Galeev, Renat Mustaphin, C. Ardil

Abstract:

A new method of adaptation in a partially integrated learning environment that includes an electronic textbook (ET) and an integrated tutoring system (ITS) is described. The adaptation algorithm is described in detail. It includes: establishment of the interconnections between operations and concepts; estimation of the concept mastering level (for all concepts); estimation of the student's non-mastery level, at the current learning step, of the information on each page of the ET; and creation of a rank-ordered list of links to the e-manual pages containing information that requires repeated work.

Keywords: Adaptation, Integrated Learning Environment, Integrated Tutoring System, Electronic Textbook.

3459 Effectiveness of Contourlet vs Wavelet Transform on Medical Image Compression: a Comparative Study

Authors: Negar Riazifar, Mehran Yazdi

Abstract:

The Discrete Wavelet Transform (DWT) has proved far superior to the earlier Discrete Cosine Transform (DCT) and standard JPEG in natural as well as medical image compression. Due to its localization properties in both the spatial and transform domains, the quantization error introduced in the DWT does not propagate globally as it does in the DCT. Moreover, the DWT is a global approach that avoids block artifacts such as those in JPEG. However, recent reports on natural image compression have shown the superior performance of the contourlet transform, a new extension of the wavelet transform to two dimensions using non-separable and directional filter banks, compared to the DWT. This is mostly due to the optimality of the contourlet in representing edges when they are smooth curves. In this work, we investigate this for medical images, especially CT images, which has not been reported yet. To do so, we propose a compression scheme in the transform domain and compare the performance of both the DWT and the contourlet transform in terms of PSNR for different compression ratios (CR) using this scheme. The results obtained using different types of computed tomography images show that the DWT still performs well at lower CRs, but the contourlet transform performs better at higher CRs.
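
The comparison metric itself is simple to state; the sketch below computes PSNR for an 8-bit image (illustration only, the DWT and contourlet transforms are not reimplemented here).

```python
# Short sketch of the PSNR metric used for the DWT-vs-contourlet comparison.
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

ct_slice = np.random.randint(0, 256, (512, 512)).astype(np.uint8)   # stand-in CT image
noisy = np.clip(ct_slice + np.random.normal(0, 5, ct_slice.shape), 0, 255).astype(np.uint8)
print("PSNR (dB):", round(psnr(ct_slice, noisy), 2))
```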

Keywords: Computed Tomography (CT), DWT, Discrete Contourlet Transform, Image Compression.
