Search results for: precise time domain expanding algorithm.

9525 DHT-LMS Algorithm for Sensorineural Loss Patients

Authors: Sunitha S. L., V. Udayashankara

Abstract:

Hearing impairment is the number one chronic disability affecting many people in the world. Background noise is particularly damaging to speech intelligibility for people with hearing loss, especially sensorineural loss patients. Several investigations on speech intelligibility have demonstrated that sensorineural loss patients need a 5-15 dB higher SNR than normal hearing subjects. This paper describes a Discrete Hartley Transform power normalized Least Mean Square algorithm (DHT-LMS) to improve the SNR and the convergence rate of the Least Mean Square (LMS) algorithm for sensorineural loss patients. The DHT transforms n real numbers to n real numbers and has the convenient property of being its own inverse. It can be used effectively for noise cancellation with less convergence time. The simulated results show superior characteristics, improving the SNR by at least 9 dB for an input SNR of 0 dB and giving a faster convergence rate (eigenvalue ratio 12) compared to the time domain method and DFT-LMS.
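
The DHT-LMS structure described above lends itself to a compact sketch. The following is a minimal, illustrative transform-domain power-normalized LMS noise canceller; the filter length, step size, smoothing constant and the FFT identity DHT(x) = Re(FFT(x)) - Im(FFT(x)) are assumptions for illustration, not parameters taken from the paper.

```python
import numpy as np

def dht(x):
    # Discrete Hartley Transform via the FFT identity H = Re(F) - Im(F)
    X = np.fft.fft(x)
    return X.real - X.imag

def dht_lms(noisy, reference, N=32, mu=0.05, beta=0.9, eps=1e-8):
    """Adaptive noise canceller: `reference` is the noise reference input,
    `noisy` is speech plus correlated noise; returns the enhanced (error) signal."""
    w = np.zeros(N)               # weights in the Hartley domain
    power = np.full(N, eps)       # running power estimate per transformed tap
    buf = np.zeros(N)
    out = np.zeros(len(noisy))
    for n in range(len(noisy)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        u = dht(buf)                               # transform the tap vector
        power = beta * power + (1 - beta) * u**2   # per-bin power estimate
        y = np.dot(w, u)                           # noise estimate
        e = noisy[n] - y                           # enhanced signal sample
        w += mu * e * u / (power + eps)            # power-normalized LMS update
        out[n] = e
    return out

rng = np.random.default_rng(0)
noise = rng.normal(size=4000)
speech = np.sin(2 * np.pi * 0.01 * np.arange(4000))
enhanced = dht_lms(speech + 0.5 * noise, noise)
print(np.mean((enhanced - speech)[-500:] ** 2))   # residual noise power after adaptation
```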

Keywords: Hearing Impairment, DHT-LMS, Convergence rate, SNR improvement.

9524 Enhanced Shell Sorting Algorithm

Authors: Basit Shahzad, Muhammad Tanvir Afzal

Abstract:

Many algorithms are available for sorting unordered elements. The most important among them are Bubble sort, Heap sort, Insertion sort and Shell sort. These algorithms have their own pros and cons. Shell sort, an enhanced version of insertion sort, reduces the number of swaps of the elements being sorted to minimize complexity and time compared to insertion sort. Shell sort improves the efficiency of insertion sort by quickly shifting values to their destination. Average sort time is O(n^1.25), while worst-case time is O(n^1.5). It performs several iterations; in each iteration it swaps some elements of the array in such a way that in the last iteration, when the value of h is one, the number of swaps is reduced. Donald L. Shell invented a formula to calculate the value of 'h'. This work focuses on identifying improvements in the conventional Shell sort algorithm. The 'Enhanced Shell Sort algorithm' is an improvement in the way the value of 'h' is calculated. It has been observed that by applying this algorithm, the number of swaps can be reduced by up to 60 percent compared to the existing algorithm. In some other cases this enhancement was found to be faster than the existing algorithms.
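
For reference, here is conventional Shell sort with Shell's original gap schedule h -> h // 2. The paper's enhancement changes only how h is computed; since that formula is not given in the abstract, the halving schedule below is the conventional one and marks where an alternative gap formula would be plugged in.

```python
def shell_sort(a):
    n = len(a)
    h = n // 2                      # Shell's original choice of the gap h
    while h > 0:
        # gapped insertion sort: every h-th subsequence is insertion-sorted
        for i in range(h, n):
            key = a[i]
            j = i
            while j >= h and a[j - h] > key:
                a[j] = a[j - h]     # shift larger elements h positions right
                j -= h
            a[j] = key
        h //= 2                     # smaller gap next; the last pass uses h == 1
    return a

print(shell_sort([23, 5, 42, 8, 16, 4, 15]))   # [4, 5, 8, 15, 16, 23, 42]
```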

Keywords: Algorithm, Computation, Shell, Sorting.

9523 Use of Hierarchical Temporal Memory Algorithm in Heart Attack Detection

Authors: Tesnim Charrad, Kaouther Nouira, Ahmed Ferchichi

Abstract:

In order to reduce the number of deaths due to heart problems, we propose the use of the Hierarchical Temporal Memory (HTM) algorithm, a real-time anomaly detection algorithm. HTM is a cortical learning algorithm modeled on the neocortex; in other words, it is based on a conceptual theory of how the human brain may work. It is powerful in predicting unusual patterns, anomaly detection and classification. In this paper, HTM has been implemented and tested on ECG datasets in order to detect cardiac anomalies. Experiments showed good performance in terms of specificity, sensitivity and execution time.

Keywords: HTM, Real time anomaly detection, ECG, Cardiac Anomalies.

9522 Convergence Analysis of an Alternative Gradient Algorithm for Non-Negative Matrix Factorization

Authors: Chenxue Yang, Mao Ye, Zijian Liu, Tao Li, Jiao Bao

Abstract:

Non-negative matrix factorization (NMF) is a useful computational method to find basis information of multivariate nonnegative data. A popular approach to solving the NMF problem is the multiplicative update (MU) algorithm, but it has some defects, so the columnwise alternating gradient (cAG) algorithm was proposed. In this paper, we analyze the convergence of the cAG algorithm and show its advantages over the MU algorithm. The stability of the equilibrium point is used to prove the convergence of the cAG algorithm. A classic model is used to obtain the equilibrium point, and invariant sets are constructed to guarantee the integrity of the stability. Finally, the convergence conditions of the cAG algorithm are obtained, which help reduce the evaluation time and are confirmed in the experiments. Using the same method, it is verified that the MU algorithm has a zero divisor and converges at zero. In addition, the convergence conditions of the MU algorithm at zero are similar to those of the cAG algorithm at non-zero points. However, it is meaningless to discuss convergence at zero, which is not usually the result we want from NMF. Thus, we theoretically illustrate the advantages of the cAG algorithm.
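
For orientation, the baseline MU rules that the cAG algorithm is compared against are shown below (minimizing ||V - WH||_F^2). The small epsilon guarding the denominators reflects exactly the zero-divisor defect of MU discussed above; the cAG algorithm itself (columnwise alternating gradient steps) is not reproduced here, and the rank, iteration count and seed are illustrative.

```python
import numpy as np

def nmf_mu(V, r, iters=200, eps=1e-10, seed=0):
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update of H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update of W
    return W, H

V = np.abs(np.random.default_rng(1).random((20, 15)))
W, H = nmf_mu(V, r=4)
print(np.linalg.norm(V - W @ H))   # reconstruction error decreases with iterations
```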

Keywords: Non-negative matrix factorizations, convergence, cAG algorithm, equilibrium point, stability.

9521 Image Adaptive Watermarking with Visual Model in Orthogonal Polynomials based Transformation Domain

Authors: Krishnamoorthi R., Sheba Kezia Malarchelvi P. D.

Abstract:

In this paper, an image adaptive, invisible digital watermarking algorithm with Orthogonal Polynomials based Transformation (OPT) is proposed for copyright protection of digital images. The proposed algorithm utilizes a visual model to determine the watermarking strength necessary to invisibly embed the watermark in the mid-frequency AC coefficients of the cover image, chosen with a secret key. The visual model is designed to generate a Just Noticeable Distortion (JND) mask by analyzing low-level image characteristics such as textures, edges and luminance of the cover image in the orthogonal polynomials based transformation domain. Since the secret key is required for both embedding and extraction of the watermark, it is not possible for an unauthorized user to extract the embedded watermark. The proposed scheme is robust to common image processing distortions such as filtering, JPEG compression and additive noise. Experimental results show that the quality of OPT domain watermarked images is better than that of their DCT counterparts.

Keywords: Orthogonal Polynomials based Transformation, Digital Watermarking, Copyright Protection, Visual model.

9520 An Analysis of Genetic Algorithm Based Test Data Compression Using Modified PRL Coding

Authors: K. S. Neelukumari, K. B. Jayanthi

Abstract:

In this paper, genetic-algorithm-based test data compression is targeted at improving the compression ratio and reducing the computation time. The genetic algorithm is based on extended pattern run-length coding. The test set contains a large number of X values that can be effectively exploited to improve test data compression. In this coding method, a reference pattern is set and its compatibility is checked. For this process, a genetic algorithm is proposed to reduce the computation time of the encoding algorithm. This coding technique encodes the 2n compatible patterns or the inversely compatible patterns into a single test data segment or multiple test data segments. The experimental results show that the compression ratio is improved and the computation time is reduced.
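
The compatibility test at the heart of pattern run-length style coding is simple to state; the sketch below illustrates it on made-up segments containing don't-care 'X' bits (the segment strings and helper names are illustrative, not the paper's encoding).

```python
# Two test-data segments are compatible if they agree at every position where
# neither has a don't-care 'X', and inversely compatible if every pair of
# specified bits is opposite.
def compatible(a, b):
    return all(x == 'X' or y == 'X' or x == y for x, y in zip(a, b))

def inversely_compatible(a, b):
    return all(x == 'X' or y == 'X' or x != y for x, y in zip(a, b))

print(compatible("1X0X", "100X"))            # True  ('X' matches anything)
print(inversely_compatible("1X0X", "0X1X"))  # True  (specified bits are flipped)
print(compatible("1X0X", "0X0X"))            # False (first bits differ)
```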

Keywords: Backtracking, test data compression (TDC), x-filling, x-propagating and genetic algorithm.

9519 Improved Simultaneous Performance in the Time Domain and in the Frequency Domain

Authors: Azeddine Ghodbane, David Bensoussan, Maher Hammami

Abstract:

In this study, we introduce an alternative adaptive architecture that enhances both time and frequency performance, effectively mitigating the effects of disturbances at the plant input and of external disturbances affecting the output. To facilitate superior performance in both the time and frequency domains, we have developed a user-friendly interactive design method using the GeoGebra platform.

Keywords: Control theory, decentralized control, sensitivity theory, input-output stability theory, robust multivariable feedback control design.

9518 A Watermarking Scheme for MP3 Audio Files

Authors: Dimitrios Koukopoulos, Yiannis Stamatiou

Abstract:

In this work, we present, to the best of our knowledge for the first time, an efficient digital watermarking scheme for MPEG audio layer 3 files that operates directly in the compressed data domain, while manipulating the time and subband/channel domain. In addition, it does not need the original signal to detect the watermark. Our scheme was implemented with special care for the efficient use of the two limited resources of computer systems: time and space. It offers the industrial user the capability of watermark embedding and detection in a time immediately comparable to the real playing time of the original audio file, which depends on the MPEG compression, while the end user/audience does not perceive any artifacts or delays when hearing the watermarked audio file. Furthermore, it overcomes the vulnerability to compression/recompression attacks of algorithms operating in the PCM data domain, as it places the watermark in the scale factors domain and not in the digitized sound data. The strength of our scheme, which allows it to be used successfully for both authentication and copyright protection, relies on the fact that ownership of the audio file is established not simply by detecting the bit pattern that comprises the watermark itself, but by showing that the legal owner knows a hard-to-compute property of the watermark.

Keywords: Audio watermarking, mpeg audio layer 3, hard instance generation, NP-completeness.

9517 Differentiation of Heart Rate Time Series from Electroencephalogram and Noise

Authors: V. I. Thajudin Ahamed, P. Dhanasekaran, Paul Joseph K.

Abstract:

Analysis of heart rate variability (HRV) has become a popular non-invasive tool for assessing the activity of the autonomic nervous system. Most of the methods were borrowed from general time series analysis. Currently used methods are time domain, frequency domain, geometrical and fractal methods. A new technique, which searches for pattern repeatability in a time series, is proposed for quantifying heart rate (HR) time series. These indices, termed the pattern repeatability measure and the pattern repeatability ratio, are able to distinguish HR data clearly from noise and electroencephalogram (EEG) data. The results of analysis using these measures give an insight into the fundamental difference between the composition of HR time series and that of EEG and noise.
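
For reference, the sketch below computes sample entropy, the classical pattern-repeatability statistic mentioned in the keywords. It is shown as a related baseline, not as the paper's proposed pattern repeatability measure; the template length m, tolerance factor and synthetic data are conventional, illustrative choices.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """-log of the ratio of template matches of length m+1 to length m."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    def count_matches(m):
        templates = np.array([x[i:i + m] for i in range(len(x) - m)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)      # Chebyshev distance within tolerance r
        return count
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rr_intervals = np.random.default_rng(0).normal(0.8, 0.05, 300)  # synthetic HR data
print(sample_entropy(rr_intervals))
```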

Keywords: Approximate entropy, heart rate variability, noise, pattern repeatability, and sample entropy.

9516 A Methodological Approach for Detecting Burst Noise in the Time Domain

Authors: Liu Dan, Wang Xue, Wang Guiqin, Qian Zhihong

Abstract:

Burst noise is a destructive kind of noise frequently found in semiconductor devices and ICs, yet detecting and removing it has proved challenging for IC designers and users. Based on the properties of burst noise, a methodological approach is proposed in this paper by which burst noise can be analysed and detected in the time domain. The principles and properties of burst noise are expounded first. Afterwards, the feasibility of burst noise detection by means of the wavelet transform in the time domain is corroborated, and the multi-resolution characteristics of Gaussian noise, burst noise and blurred burst noise are discussed in detail through computer simulation. Furthermore, a practical method for choosing the parameters of the wavelet transform is obtained through extensive experiments and data statistics. The methodology is expected to be useful in a wide variety of applications.
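
A hedged sketch of the general idea: decompose the signal with a discrete wavelet transform and flag detail coefficients that exceed a threshold, which tends to localize step-like burst (popcorn) noise in time. The wavelet 'db4', decomposition level and 5-sigma threshold are illustrative choices, not the parameters determined in the paper.

```python
import numpy as np
import pywt

def detect_bursts(x, wavelet='db4', level=3, k=5.0):
    coeffs = pywt.wavedec(x, wavelet, level=level)   # [cA_L, cD_L, ..., cD_1]
    flags = []
    for d in coeffs[1:]:                             # detail coefficients only
        sigma = np.median(np.abs(d)) / 0.6745        # robust noise scale estimate
        flags.append(np.abs(d) > k * sigma)          # mark outlying coefficients
    return flags                                     # per-level burst indicators

# synthetic example: Gaussian noise plus a step-like burst in the middle
rng = np.random.default_rng(0)
x = rng.normal(0, 1, 1024)
x[500:520] += 8.0
print([int(f.sum()) for f in detect_bursts(x)])      # per-level counts of flagged coefficients
```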

Keywords: Burst noise, detection, wavelet transform

9515 Fast Factored DCT-LMS Speech Enhancement for Performance Enhancement of Digital Hearing Aid

Authors: Sunitha S. L., V. Udayashankara

Abstract:

Background noise is particularly damaging to speech intelligibility for people with hearing loss, especially sensorineural loss patients. Several investigations on speech intelligibility have demonstrated that sensorineural loss patients need a 5-15 dB higher SNR than normal hearing subjects. This paper describes a Discrete Cosine Transform power normalized Least Mean Square algorithm to improve the SNR and the convergence rate of the LMS for sensorineural loss patients. Since it requires only real arithmetic, it achieves a faster convergence rate than the time domain LMS, and the transformation also improves the eigenvalue distribution of the input autocorrelation matrix of the LMS filter. The DCT has good orthonormality, separability, and energy compaction properties. Although the DCT does not separate frequencies, it is a powerful signal decorrelator. It is a real-valued function and thus can be used effectively in real-time operation. The advantages of DCT-LMS compared to the standard LMS algorithm are shown via SNR and eigenvalue ratio computations. Exploiting the symmetry of the basis functions, the DCT transform matrix [A_N] can be factored into a series of ±1 butterflies and rotation angles. This factorization results in one of the fastest DCT implementations. There are different ways to obtain such factorizations; this work uses the fast factored DCT algorithm developed by Chen and colleagues. The computer simulation results show the superior convergence characteristics of the proposed algorithm, improving the SNR by at least 10 dB for input SNRs less than or equal to 0 dB, with faster convergence speed and better time and frequency characteristics.

Keywords: Hearing Impairment, DCT Adaptive filter, Sensorineural loss patients, Convergence rate.

9514 Denoising and Compression in Wavelet Domain via Projection onto Approximation Coefficients

Authors: Mario Mastriani

Abstract:

We describe a new filtering approach in the wavelet domain for image denoising and compression, based on the projection of the detail subband coefficients (resulting from the splitting procedure typical of the wavelet domain) onto the approximation subband coefficients (which are much less noisy). The new algorithm is called Projection Onto Approximation Coefficients (POAC). As a result of this approach, only the approximation subband coefficients and three scalars are stored and/or transmitted over the channel. Moreover, with the elimination of the detail subband coefficients, we obtain a higher compression rate. Experimental results demonstrate that our approach compares favorably to more typical methods of denoising and compression in the wavelet domain.
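
One way to read the POAC idea in code: after a one-level 2-D wavelet split, each detail subband is replaced by its least-squares projection onto the approximation subband, so only the approximation coefficients and three scalars need to be kept. The sketch below uses PyWavelets and a Haar wavelet as illustrative choices and is not the authors' exact procedure.

```python
import numpy as np
import pywt

def poac_encode(image):
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'haar')
    a = cA.ravel()
    denom = a @ a
    # one projection scalar per detail subband
    scalars = [(d.ravel() @ a) / denom for d in (cH, cV, cD)]
    return cA, scalars

def poac_decode(cA, scalars):
    # each detail subband is approximated by alpha * cA
    cH, cV, cD = (alpha * cA for alpha in scalars)
    return pywt.idwt2((cA, (cH, cV, cD)), 'haar')

img = np.random.default_rng(0).random((64, 64))
cA, s = poac_encode(img)
rec = poac_decode(cA, s)
print(rec.shape, s)   # reconstructed image and the three stored scalars
```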

Keywords: Compression, denoising, projections, wavelets.

9513 An Efficient Watermarking Method for MP3 Audio Files

Authors: Dimitrios Koukopoulos, Yiannis Stamatiou

Abstract:

In this work, we present, to the best of our knowledge for the first time, an efficient digital watermarking scheme for MPEG audio layer 3 files that operates directly in the compressed data domain, while manipulating the time and subband/channel domain. In addition, it does not need the original signal to detect the watermark. Our scheme was implemented with special care for the efficient use of the two limited resources of computer systems: time and space. It offers the industrial user the capability of watermark embedding and detection in a time immediately comparable to the real playing time of the original audio file, which depends on the MPEG compression, while the end user/audience does not perceive any artifacts or delays when hearing the watermarked audio file. Furthermore, it overcomes the vulnerability to compression/recompression attacks of algorithms operating in the PCM data domain, as it places the watermark in the scale factors domain and not in the digitized sound data. The strength of our scheme, which allows it to be used successfully for both authentication and copyright protection, relies on the fact that ownership of the audio file is established not simply by detecting the bit pattern that comprises the watermark itself, but by showing that the legal owner knows a hard-to-compute property of the watermark.

Keywords: Audio watermarking, mpeg audio layer 3, hard instance generation, NP-completeness.

9512 Real-time Detection of Space Manipulator Self-collision

Authors: Zhang Xiaodong, Tang Zixin, Liu Xin

Abstract:

In order to avoid self-collision of space manipulators during operation, a real-time detection method is proposed in this paper. Each link of the manipulator is enveloped by a cylindrical surface, and an algorithm for detecting collisions between cylinders is then analyzed. Using this algorithm, collisions between the manipulator's own links can be detected in real time during operation. To ensure the safety of the operation, a safety threshold is designed. The simulation and experiment results verify the effectiveness of the proposed algorithm for a 7-DOF space manipulator.
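
A sketch of the cylinder-enveloping idea: model each link as a capsule (a cylinder with a radius around an axis segment) and report a potential self-collision when the minimum distance between two axis segments falls below the sum of the radii plus a safety threshold. The endpoints, radii and threshold below are illustrative, and degenerate (zero-length) axes are not handled; this is not the paper's exact detection algorithm.

```python
import numpy as np

def segment_distance(p1, q1, p2, q2):
    """Minimum distance between segments p1-q1 and p2-q2 (non-degenerate)."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, b = d1 @ d1, d2 @ d2, d1 @ d2
    c, f = d1 @ r, d2 @ r
    denom = a * e - b * b
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
    t = (b * s + f) / e
    if t < 0.0:
        t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
    elif t > 1.0:
        t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return np.linalg.norm((p1 + s * d1) - (p2 + t * d2))

def links_collide(a0, a1, ra, b0, b1, rb, safety=0.05):
    return segment_distance(a0, a1, b0, b1) < ra + rb + safety

a0, a1 = np.array([0., 0., 0.]), np.array([1., 0., 0.])
b0, b1 = np.array([0.5, 0.2, 0.]), np.array([0.5, 1.0, 0.])
print(links_collide(a0, a1, 0.1, b0, b1, 0.1))   # True: axes 0.2 apart, radii + safety = 0.25
```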

Keywords: Space manipulator, Collision detection, Self-collision, Real-time collision detection.

9511 Similarity Measure Functions for Strategy-Based Biometrics

Authors: Roman V. Yampolskiy, Venu Govindaraju

Abstract:

The functioning of a biometric system depends in large part on the performance of the similarity measure function. Frequently, a generalized similarity distance measure such as Euclidean distance or Mahalanobis distance is applied to the task of matching biometric feature vectors. However, the accuracy of a biometric system can often be greatly improved by designing a customized matching algorithm optimized for a particular biometric application. In this paper we propose a tailored similarity measure function for behavioral biometric systems based on expert knowledge of the feature-level data in the domain. We compare the performance of the proposed matching algorithm to that of other well-known similarity distance functions and demonstrate its superiority with respect to the chosen domain.

Keywords: Behavioral Biometrics, Euclidean Distance, Matching, Similarity Measure.

9510 An Improved Method to Compute Sparse Graphs for Traveling Salesman Problem

Authors: Y. Wang

Abstract:

The Traveling Salesman Problem (TSP) is NP-hard in combinatorial optimization. Research shows that algorithms for the TSP on sparse graphs have shorter computation times than those working on complete graphs. We present an improved iterative algorithm to compute sparse graphs for the TSP from frequency graphs computed with frequency quadrilaterals. The iterative algorithm is enhanced by adjusting two of its parameters. The computation time of the algorithm is O(C N_max n^2), where C is the number of iterations, N_max is the maximum number of frequency quadrilaterals containing each edge, and n is the scale of the TSP. The experimental results show that the computed sparse graphs generally have fewer than 5n edges for most of these Euclidean instances. Moreover, the maximum degree and minimum degree of the vertices in the sparse graphs do not differ much. Thus, the computation time of methods resolving the TSP on these sparse graphs will be greatly reduced.

Keywords: Frequency quadrilateral, iterative algorithm, sparse graph, traveling salesman problem.

9509 A Genetic Algorithm with Priority Selection for the Traveling Salesman Problem

Authors: Cha-Hwa Lin, Je-Wei Hu

Abstract:

The conventional GA combined with a local search algorithm, such as 2-OPT, forms a hybrid genetic algorithm (HGA) for the traveling salesman problem (TSP). However, geometric properties, which are problem-specific knowledge, can be used to improve the search process of the HGA. Some tour segments (edges) of a TSP are fine, while some may be too long to appear in a short tour. This knowledge can constrain GAs to work with fine tour segments and to consider long tour segments less often. Consequently, a new algorithm is proposed, called the intelligent-OPT hybrid genetic algorithm (IOHGA), to improve the GA and the 2-OPT algorithm in order to reduce the search time for the optimal solution. Based on the geometric properties, all tour segments are assigned two-level priorities to distinguish between good and bad genes. A simulation study was conducted to evaluate the performance of the IOHGA. The experimental results indicate that, in general, the IOHGA obtains near-optimal solutions in less time and with better accuracy than the hybrid genetic algorithm with simulated annealing (HGA(SA)).
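
For context, the 2-OPT move used inside such hybrid GAs reverses a tour segment whenever doing so shortens the tour. The sketch below shows only this standard local search component on random points; the IOHGA's priority scheme for tour segments is not reproduced.

```python
import math, random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]  # reverse one segment
                if tour_length(candidate, pts) < tour_length(tour, pts):
                    tour, improved = candidate, True
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(12)]
tour = list(range(len(pts)))
print(tour_length(tour, pts), tour_length(two_opt(tour, pts), pts))
```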

Keywords: Traveling salesman problem, hybrid genetic algorithm, priority selection, 2-OPT.

9508 Comparing Hilditch, Rosenfeld, Zhang-Suen, and Nagendraprasad-Wang-Gupta Thinning

Authors: Anastasia Rita Widiarti

Abstract:

This paper compares the Hilditch, Rosenfeld, Zhang-Suen, and Nagendraprasad-Wang-Gupta (NWG) thinning algorithms for Javanese character image recognition. Thinning is an effective process when the focus is not on the size of the pattern, but rather on the relative position of the strokes in the pattern. The research analyzes the thinning of 60 Javanese characters. Time-wise, the Zhang-Suen algorithm gives the best results, with an average processing time of 0.00455188 seconds. But if we look at the percentage of pixels that meet the one-pixel thickness criterion, the Rosenfeld algorithm gives the best results, with a 99.98% success rate. In terms of the number of pixels erased, the NWG algorithm gives the best results, erasing 84.12% of pixels on average. It can be concluded that the Hilditch algorithm performs least successfully compared to the other three algorithms.
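
As a reference point, here is a compact implementation of the Zhang-Suen algorithm (the fastest of the four in the comparison above) on a binary numpy image with foreground value 1. Border pixels are left untouched for simplicity; the test image is illustrative.

```python
import numpy as np

def zhang_suen(img):
    img = img.copy().astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):                      # two sub-iterations per pass
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] == 0:
                        continue
                    # neighbours P2..P9, clockwise starting from north
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                    b = sum(p)                                        # non-zero neighbours
                    a = sum(p[i] == 0 and p[(i+1) % 8] == 1 for i in range(8))
                    if step == 0:
                        cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                    else:
                        cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_delete.append((y, x))
            for y, x in to_delete:
                img[y, x] = 0
            changed = changed or bool(to_delete)
    return img

square = np.zeros((12, 12), np.uint8)
square[3:9, 3:9] = 1
print(zhang_suen(square))   # reduced to a roughly one-pixel-thick skeleton
```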

Keywords: Hilditch algorithm, Nagendraprasad-Wang-Gupta algorithm, Rosenfeld algorithm, Thinning, Zhang-Suen algorithm.

9507 Fast Complex Valued Time Delay Neural Networks

Authors: Hazem M. El-Bakry, Qiangfu Zhao

Abstract:

Here, a new idea to speed up the operation of complex-valued time delay neural networks is presented. The whole data set is collected into a long vector and then tested as one input pattern. The proposed fast complex-valued time delay neural network uses cross correlation in the frequency domain between the tested data and the input weights of the neural network. It is proved mathematically that the number of computation steps required by the presented fast complex-valued time delay neural network is less than that needed by classical time delay neural networks. Simulation results using MATLAB confirm the theoretical computations.
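
The core trick referenced above is that cross correlation computed through the FFT costs O(n log n) instead of O(n^2) directly, which is where the claimed speed-up comes from. The snippet below only verifies that identity for complex-valued data; it is not a neural network implementation, and the vector lengths and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=256) + 1j * rng.normal(size=256)   # complex-valued input
w = rng.normal(size=256) + 1j * rng.normal(size=256)   # complex-valued weights

# circular cross correlation: direct definition vs. frequency-domain product
direct = np.array([np.sum(np.conj(w) * np.roll(x, -k)) for k in range(len(x))])
via_fft = np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(w)))

print(np.allclose(direct, via_fft))   # True
```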

Keywords: Fast Complex Valued Time Delay Neural Networks, Cross Correlation, Frequency Domain

9506 A Distributed Group Mutual Exclusion Algorithm for Soft Real Time Systems

Authors: Abhishek Swaroop, Awadhesh Kumar Singh

Abstract:

The group mutual exclusion (GME) problem is an interesting generalization of the mutual exclusion problem. Several solutions of the GME problem have been proposed for message-passing distributed systems. However, none of these solutions is suitable for real-time distributed systems. In this paper, we propose a token-based distributed algorithm for the GME problem in soft real-time distributed systems. The algorithm uses the concepts of a priority queue, a dynamic request set and the process state. The algorithm uses a first-come-first-served approach in selecting the next session type among requests at the same priority level and satisfies the concurrent occupancy property. The algorithm allows all n processors to be inside their critical section (CS) provided they request the same session. A performance analysis and correctness proof of the algorithm are also included in the paper.

Keywords: Concurrency, Group mutual exclusion, Priority, Request set, Token.

9505 SDVAR Algorithm for Detecting Fraud in Telecommunications

Authors: Fatimah Almah Saaid, Darfiana Nur, Robert King

Abstract:

This paper presents a procedure for estimating a VAR model using the Sequential Discounting VAR (SDVAR) algorithm for online model learning, in order to detect fraudulent acts using telecommunications call detail records (CDR). The volatility of the VAR is observed, allowing for non-linearity, outliers and change points, based on the work of [1]. This paper extends their procedure from univariate to multivariate time series. A simulation and a case study on detecting telecommunications fraud using CDR illustrate the use of the algorithm in the bivariate setting.
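
As a rough illustration of the discounting idea, the sketch below updates VAR(1) coefficients recursively with a forgetting factor so that recent observations dominate, and flags observations with large one-step prediction errors. The forgetting factor, threshold and synthetic data are illustrative; the SDVAR algorithm of the paper has its own specific recursions, which are not reproduced here.

```python
import numpy as np

def discounted_var1(Y, lam=0.97, k=4.0):
    d = Y.shape[1]
    A = np.zeros((d, d))                          # VAR(1) coefficient matrix
    P = np.eye(d) * 1e3                           # inverse information matrix
    flags = []
    for t in range(1, len(Y)):
        x, y = Y[t - 1], Y[t]
        err = y - A @ x                           # one-step prediction error
        flags.append(np.linalg.norm(err) > k)     # crude anomaly flag
        K = P @ x / (lam + x @ P @ x)             # gain vector
        A += np.outer(err, K)                     # recursive coefficient update
        P = (P - np.outer(K, x @ P)) / lam        # discounted covariance update
    return A, flags

rng = np.random.default_rng(0)
Y = rng.normal(size=(200, 2)).cumsum(axis=0) * 0.1   # synthetic bivariate series
A, flags = discounted_var1(Y)
print(A, sum(flags))
```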

Keywords: Telecommunications Fraud, SDVAR Algorithm, Multivariate time series, Vector Autoregressive, Change points.

9504 An Efficient Multi Join Algorithm Utilizing a Lattice of Double Indices

Authors: Hanan A. M. Abd Alla, Lilac A. E. Al-Safadi

Abstract:

In this paper, a novel multi-join algorithm to join multiple relations is introduced. The algorithm is based on a hash-based join of two relations that produces a double index. This is done by scanning the two relations once; but instead of moving the records into buckets, a double index is built, which eliminates the collisions that can occur with a complete hash algorithm. The double index is divided into join buckets of similar categories from the two relations. The algorithm then joins buckets with similar keys to produce joined buckets, which leads, at the end, to a complete join index of the two relations without actually joining the relations themselves. The time complexity required to build the join index of two categories is O(m log m), where m is the size of each category, totaling O(n log m) for all buckets. The join index is used to materialize the joined relation if required. Otherwise, it is used along with the join indices of other relations to build a lattice to be used in multi-join operations with minimal I/O requirements. The lattice of join indices can be fitted into main memory to reduce the time complexity of the multi-join algorithm.
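
A minimal illustration of the idea of building a join index without materializing the joined relation: scan both relations once, bucket record positions by join key, and keep only (row-id, row-id) pairs. The relation contents and field names below are made up, and the simple dictionary bucketing is a stand-in for the paper's double-index structure.

```python
from collections import defaultdict

def build_join_index(rel_a, rel_b, key_a, key_b):
    buckets = defaultdict(lambda: ([], []))
    for i, row in enumerate(rel_a):          # single scan of relation A
        buckets[row[key_a]][0].append(i)
    for j, row in enumerate(rel_b):          # single scan of relation B
        buckets[row[key_b]][1].append(j)
    # join index: pairs of row ids whose keys match, grouped per bucket
    return [(i, j) for ids_a, ids_b in buckets.values()
                   for i in ids_a for j in ids_b]

orders    = [{"cust": 1, "item": "pen"}, {"cust": 2, "item": "ink"},
             {"cust": 1, "item": "pad"}]
customers = [{"id": 1, "name": "Ann"}, {"id": 2, "name": "Bo"}]
print(build_join_index(orders, customers, "cust", "id"))
# [(0, 0), (2, 0), (1, 1)] -- can be materialized later or reused in multi-joins
```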

Keywords: Multi join, Relation, Lattice, Join indices.

9503 Efficient Spectral Analysis of Quasi Stationary Time Series

Authors: Khalid M. Aamir, Mohammad A. Maud

Abstract:

The Power Spectral Density (PSD) of quasi-stationary processes can be efficiently estimated using the short-time Fourier transform (STFT). In this paper, an algorithm is proposed that computes the PSD of a quasi-stationary process efficiently using an offline autoregressive model order estimation algorithm, a recursive parameter estimation technique and a modified sliding window discrete Fourier transform algorithm. The main difference between this algorithm and the STFT is that the sliding window (SW) and the window for spectral estimation (WSA) are defined separately. The WSA is updated and its PSD computed only when a change in statistics is detected in the SW. The computational complexity of the proposed algorithm is found to be less than that of the standard STFT technique.
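
The sliding-window DFT recurrence that makes per-sample spectrum updates cheap is shown below: when one sample leaves the window and one enters, every bin is updated in O(1) instead of recomputing an N-point DFT. This illustrates only the building block, not the change-detection logic of the proposed algorithm; the window length and test signal are arbitrary.

```python
import numpy as np

def sliding_dft(x, N):
    X = np.fft.fft(x[:N]).astype(complex)        # spectrum of the first window
    twiddle = np.exp(2j * np.pi * np.arange(N) / N)
    spectra = [X.copy()]
    for n in range(N, len(x)):
        X = (X - x[n - N] + x[n]) * twiddle      # slide the window by one sample
        spectra.append(X.copy())
    return spectra

rng = np.random.default_rng(0)
x = rng.normal(size=64)
N = 16
S = sliding_dft(x, N)
print(np.allclose(S[-1], np.fft.fft(x[-N:])))    # matches a direct DFT of the last window
```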

Keywords: Power Spectral Density (PSD), quasi-stationary time series, short time Fourier Transform, Sliding window DFT.

9502 The Lubrication Regimes Recognition of a Pressure-Fed Journal Bearing by Time and Frequency Domain Analysis of Acoustic Emission Signals

Authors: S. Hosseini, M. Ahmadi Najafabadi, M. Akhlaghi

Abstract:

The health of journal bearings is very important in preventing unforeseen breakdowns in rotary machines, and poor lubrication is one of the most important factors producing bearing failures. Hydrodynamic lubrication (HL), mixed lubrication (ML), and boundary lubrication (BL) are the three regimes of journal bearing lubrication. This paper uses the acoustic emission (AE) measurement technique to correlate features of the AE signals with the three lubrication regimes. The transitions from HL to ML are detected based on operating factors such as rotating speed, load, and inlet oil pressure, using time domain and time-frequency domain signal analysis techniques, and metal-to-metal contacts between the sliding surfaces of the journal and the bearing are then identified. It is found that there is a significant difference between the theoretical and experimental operating values obtained for defining the lubrication regions.

Keywords: Acoustic emission technique, pressure fed journal bearing, time and frequency signal analysis, metal-to-metal contact.

9501 Analysis and Simulation of TM Fields in Waveguides with Arbitrary Cross-Section Shapes by Means of Evolutionary Equations of Time-Domain Electromagnetic Theory

Authors: Ömer Aktaş, Olga A. Suvorova, Oleg Tretyakov

Abstract:

The boundary value problem on a non-canonical, arbitrarily shaped contour is solved with a numerically effective method called the Analytical Regularization Method (ARM) to calculate propagation parameters. As a result of regularization, the equation of the first kind is reduced to an infinite system of linear algebraic equations of the second kind in the space L2. This system can be solved numerically to the desired accuracy using the truncation method. Parameters such as the cut-off wavenumber and cut-off frequency are then used in the waveguide evolutionary equations of time-domain electromagnetic theory to illustrate the real-valued TM fields in lossy and lossless media.

Keywords: Arbitrary cross section waveguide, analytical regularization method, evolutionary equations of electromagnetic theory of time-domain, TM field.

9500 Improving the Performance of Back-Propagation Training Algorithm by Using ANN

Authors: Vishnu Pratap Singh Kirar

Abstract:

An Artificial Neural Network (ANN) can be trained using back propagation (BP), the most widely used algorithm for supervised learning with multi-layered feed-forward networks. Efficient learning by the BP algorithm is required for many practical applications. The BP algorithm calculates the weight changes of the artificial neural network, and a common approach is to use a two-term algorithm consisting of a learning rate (LR) and a momentum factor (MF). The major drawbacks of the two-term BP learning algorithm are the problems of local minima and slow convergence speed, which limit its scope for real-time applications. Recently, the addition of an extra term, called a proportional factor (PF), to the two-term BP algorithm was proposed. The third term increases the speed of the BP algorithm. However, the PF term also reduces the convergence of the BP algorithm, and criteria for evaluating convergence are required to facilitate the application of the three-term BP algorithm. Although these two issues seem to be closely related, as described later, we summarize various improvements to overcome the drawbacks. Here we compare the different convergence behaviors of the new three-term BP algorithm.
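
The two-term update the abstract starts from is written out explicitly below: the weight change combines a learning-rate term and a momentum term. The extra proportional-factor (PF) term of the three-term algorithm is only indicated by a placeholder comment, since its exact form is not given in the abstract; the learning rate, momentum factor and toy gradient are illustrative.

```python
import numpy as np

def two_term_update(w, grad, prev_delta, lr=0.1, mf=0.9):
    delta = -lr * grad + mf * prev_delta   # learning-rate term + momentum term
    # a three-term BP would add here: + pf * <proportional term from the cited work>
    return w + delta, delta

w = np.array([0.5, -0.3])
prev = np.zeros_like(w)
grad = np.array([0.2, -0.1])               # gradient of the error w.r.t. w
for _ in range(3):
    w, prev = two_term_update(w, grad, prev)
    print(w)
```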

Keywords: Neural Network, Backpropagation, Local Minima, Fast Convergence Rate.

9499 Lexicon-Based Sentiment Analysis for Stock Movement Prediction

Authors: Zane Turner, Kevin Labille, Susan Gauch

Abstract:

Sentiment analysis is a broad and expanding field that aims to extract and classify opinions from textual data. Lexicon-based approaches rely on a sentiment lexicon, i.e., a list of words each mapped to a sentiment score, to rate the sentiment of a text chunk. Our work focuses on predicting stock price change using a sentiment lexicon built from financial conference call logs. We present a method to generate a sentiment lexicon based upon an existing probabilistic approach. By using a domain-specific lexicon, we outperform traditional techniques and demonstrate that domain-specific sentiment lexicons provide higher accuracy than generic sentiment lexicons when predicting stock price change.
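
The lexicon-based scoring step in its simplest form is sketched below: sum the per-word sentiment scores of a text chunk and use the sign as the predicted direction. The tiny lexicon and example sentences are made up; the paper builds its lexicon from financial conference call logs with a probabilistic method.

```python
lexicon = {"growth": 0.8, "beat": 0.6, "strong": 0.5,
           "decline": -0.7, "miss": -0.6, "weak": -0.5}

def sentiment_score(text):
    tokens = text.lower().split()
    return sum(lexicon.get(tok, 0.0) for tok in tokens)   # unknown words score 0

calls = ["strong quarter with revenue growth", "weak guidance and earnings miss"]
for c in calls:
    s = sentiment_score(c)
    print(f"{s:+.2f}  predict {'up' if s > 0 else 'down'}: {c}")
```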

Keywords: Lexicon, sentiment analysis, stock movement prediction, computational finance.

9498 Dynamic Response of Nano Spherical Shell Subjected to Thermo-Mechanical Shock Using Nonlocal Elasticity Theory

Authors: J. Ranjbarn, A. Alibeigloo

Abstract:

In this paper, we present an analytical method for the analysis of a nano-scale spherical shell subjected to thermo-mechanical shocks, based on nonlocal elasticity theory. The thermo-mechanical properties of the nano-sphere are assumed to be temperature dependent. The governing partial differential equation of motion is solved analytically by using the Laplace transform for the time domain and a power series for the spatial domain. The results in the Laplace domain are transferred to the time domain by employing the fast inverse Laplace transform (FLIT) method. The accuracy of the present approach is assessed by comparing the numerical results with results published in the literature. Furthermore, the effects of the nonlocal parameter and wall thickness on the dynamic characteristics of the nano-sphere are studied.

Keywords: Nano-scale spherical shell, nonlocal elasticity theory, thermomechanical shock.

9497 A Hybrid Algorithm for Collaborative Transportation Planning among Carriers

Authors: Elham Jelodari Mamaghani, Christian Prins, Haoxun Chen

Abstract:

In this paper, we concentrate on collaborative transportation planning (CTP) among multiple carriers with pickup and delivery requests and time windows. This problem is a vehicle routing problem with constraints from standard vehicle routing problems and new constraints from a real-world application. In the problem, each carrier has a finite number of vehicles, and each request is a pickup and delivery request with a time window. Moreover, each carrier has reserved requests, which must be served by itself, whereas its exchangeable requests can be outsourced to and served by other carriers. This collaboration among carriers can help them reduce their total transportation costs. A mixed integer programming model is proposed for the problem. To solve the model, a hybrid algorithm that combines a Genetic Algorithm and Simulated Annealing (GASA) is proposed, taking advantage of both at the same time. After tuning the parameters of the algorithm with the Taguchi method, experiments are conducted and experimental results are provided for the hybrid algorithm. The results are compared with those obtained by a commercial solver; the comparison indicates that GASA significantly outperforms the commercial solver.
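
A sketch of how a GA and SA are typically hybridized, in the spirit of the GASA described above: candidate solutions produced by the GA replace the incumbent according to the Metropolis acceptance rule, with the temperature cooled over generations. The routing-specific encoding, pickup-and-delivery constraints and Taguchi-tuned parameters of the paper are not reproduced; the costs, temperature schedule and seed are illustrative.

```python
import math, random

def metropolis_accept(parent_cost, child_cost, temperature):
    if child_cost <= parent_cost:
        return True                                  # always keep improvements
    return random.random() < math.exp((parent_cost - child_cost) / temperature)

random.seed(0)
T, cooling = 100.0, 0.95
parent_cost = 500.0
for gen in range(20):
    child_cost = parent_cost + random.uniform(-20, 15)   # stand-in for a GA offspring's cost
    if metropolis_accept(parent_cost, child_cost, T):
        parent_cost = child_cost
    T *= cooling                                         # anneal the temperature
print(round(parent_cost, 1))
```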

Keywords: Centralized collaborative transportation, collaborative transportation with pickup and delivery, collaborative transportation with time windows, hybrid algorithm of GA and SA.

9496 Lexicon-Based Sentiment Analysis for Stock Movement Prediction

Authors: Zane Turner, Kevin Labille, Susan Gauch

Abstract:

Sentiment analysis is a broad and expanding field that aims to extract and classify opinions from textual data. Lexicon-based approaches rely on a sentiment lexicon, i.e., a list of words each mapped to a sentiment score, to rate the sentiment of a text chunk. Our work focuses on predicting stock price change using a sentiment lexicon built from financial conference call logs. We introduce a method to generate a sentiment lexicon based upon an existing probabilistic approach. By using a domain-specific lexicon, we outperform traditional techniques and demonstrate that domain-specific sentiment lexicons provide higher accuracy than generic sentiment lexicons when predicting stock price change.

Keywords: Computational finance, sentiment analysis, sentiment lexicon, stock movement prediction.
