Search results for: precise time domain expanding algorithm.
9594 Coding based Synchronization Algorithm for Secondary Synchronization Channel in WCDMA
Authors: Deng Liao, Dongyu Qiu, Ahmed K. Elhakeem
Abstract:
A new code synchronization algorithm is proposed in this paper for the secondary cell-search stage in wideband CDMA systems. Rather than using the Cyclically Permutable (CP) code in the Secondary Synchronization Channel (S-SCH) to simultaneously determine the frame boundary and scrambling code group, the new synchronization algorithm implements the same function with lower system complexity and a shorter Mean Acquisition Time (MAT). The Secondary Synchronization Code (SSC) is redesigned by splitting it into two sub-sequences. We treat the scrambling code group information as data bits and use simple time-diversity BCH coding for further reliability, which avoids involved and time-costly Reed-Solomon (RS) code computations and comparisons. Analysis and simulation results show that the Synchronization Error Rate (SER) yielded by the new algorithm in Rayleigh fading channels is close to that of the conventional algorithm in the standard. The new algorithm reduces system complexity, shortens the average cell-search time, and can be implemented in the slot-based cell-search pipeline. By exploiting antenna diversity and pipelining the correlation processes, it also lends itself to multiple-antenna systems.
Keywords: WCDMA cell-search, synchronization algorithm, secondary synchronization channel, antenna diversity.
9593 Enhanced Shell Sorting Algorithm
Authors: Basit Shahzad, Muhammad Tanvir Afzal
Abstract:
Many algorithms are available for sorting unordered elements; the most important of them are Bubble sort, Heap sort, Insertion sort, and Shell sort, each with its own pros and cons. Shell sort, an enhanced version of insertion sort, reduces the number of swaps of the elements being sorted to lower the complexity and running time compared with insertion sort. It improves the efficiency of insertion sort by quickly shifting values toward their destination: the average sort time is O(n^1.25), while the worst-case time is O(n^1.5). The algorithm performs several iterations, in each of which it swaps some elements of the array so that by the last iteration, when the gap value h is one, few swaps remain. Donald L. Shell proposed a formula to calculate the value of h. This work identifies an improvement to the conventional Shell sort: the Enhanced Shell Sort algorithm modifies how the value of h is calculated. It has been observed that applying this algorithm can reduce the number of swaps by up to 60 percent compared to the existing algorithm, and in some cases the enhancement was found to be faster than the existing algorithms.
Keywords: Algorithm, Computation, Shell, Sorting.
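The abstract does not reproduce the enhanced formula for h, but the structure it describes, repeated gapped insertion passes with a shrinking gap so that the final h = 1 pass needs few swaps, can be sketched as follows. The halving gap sequence used here is only a placeholder for whatever rule the authors propose.

```python
def shell_sort(a, gaps=None):
    """Shell sort with a pluggable gap sequence.

    The paper's enhanced formula for h is not given in the abstract, so this
    sketch defaults to Shell's original halving sequence; pass a custom
    `gaps` iterable to experiment with other rules.
    """
    n = len(a)
    if gaps is None:
        gaps, h = [], n // 2
        while h > 0:
            gaps.append(h)
            h //= 2
    for h in gaps:
        # Gapped insertion sort: elements h apart are kept ordered,
        # so later passes (smaller h) need fewer swaps.
        for i in range(h, n):
            key, j = a[i], i
            while j >= h and a[j - h] > key:
                a[j] = a[j - h]
                j -= h
            a[j] = key
    return a

if __name__ == "__main__":
    print(shell_sort([23, 5, 42, 8, 16, 4, 15]))
```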
9592 Lower Bound of Time Span Product for a General Class of Signals in Fractional Fourier Domain
Authors: Sukrit Shankar, Chetana Shanta Patsa, Jaydev Sharma
Abstract:
The Fractional Fourier Transform is a generalization of the classical Fourier Transform, often interpreted as a rotation in the time-frequency plane. While the product of time and frequency spans yields the Uncertainty Principle for the classical Fourier domain, no corresponding Uncertainty Principle has so far been established in the Fractional Fourier domain for a general class of finite-energy signals. Although the lower bound for the product of time and Fractional Fourier spans has been derived for real signals, a tighter lower bound for a general class of signals is of practical importance, especially for the analysis of signals containing chirps. We therefore formulate a mathematical derivation that gives the lower bound of the time and Fractional Fourier span product. The relation proves to be of utmost importance when taking the Fractional Fourier Transform with adaptive time and Fractional Fourier span resolutions for a varied class of complex signals.
Keywords: Fractional Fourier Transform, uncertainty principle, Fractional Fourier Span, amplitude, phase.
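For orientation, the classical bound and its widely quoted fractional analogue can be written as below; this is the relation known from the FrFT literature for rotation angle α under the standard convention, given only as background and not as the tighter bound the paper derives.

```latex
% Classical time--frequency uncertainty for a unit-energy signal x(t):
%   \Delta t \,\Delta\omega \ge \tfrac{1}{2}.
% In the fractional Fourier domain of rotation angle \alpha (standard
% FrFT convention), the corresponding span product obeys
\[
  \Delta t \,\Delta u_{\alpha} \;\ge\; \frac{\lvert\sin\alpha\rvert}{2},
\]
% which reduces to the classical bound at \alpha = \pi/2 (the ordinary
% Fourier transform) and shrinks as \alpha \to 0, where the fractional
% domain coincides with the time domain itself.
```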
9591 On Constructing Approximate Convex Hull
Authors: M. Zahid Hossain, M. Ashraful Amin
Abstract:
Convex hull algorithms have been extensively studied in the literature, principally because of their wide range of applications in different areas. This article presents an efficient algorithm to construct an approximate convex hull from a set of n points in the plane in O(n + k) time, where k is the approximation error control parameter. The proposed algorithm is suitable for applications that are willing to trade some accuracy for reduced computation time, such as animation and interaction in computer graphics, where rapid, real-time rendering is indispensable.
Keywords: Convex hull, Approximation algorithm, Computational geometry, Linear time.
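The paper's own construction is not detailed in the abstract; the strip-based approximation of Bentley, Faust, and Preparata, sketched below, is a common way to obtain an approximate hull in roughly O(n + k) time and gives a feel for how the parameter k trades accuracy for speed. It is an illustration of the general idea, not the authors' algorithm.

```python
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def monotone_chain(pts):
    """Exact hull of the (few) candidate points via Andrew's monotone chain."""
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def approx_hull(points, k):
    """Strip-based approximate hull: k vertical strips, keep the lowest and
    highest point per strip, then hull the small candidate set."""
    xs = [p[0] for p in points]
    xmin, xmax = min(xs), max(xs)
    if xmin == xmax:
        return [min(points), max(points)]
    width = (xmax - xmin) / k
    lo, hi = [None] * k, [None] * k
    for p in points:
        i = min(int((p[0] - xmin) / width), k - 1)
        if lo[i] is None or p[1] < lo[i][1]:
            lo[i] = p
        if hi[i] is None or p[1] > hi[i][1]:
            hi[i] = p
    cand = {p for p in lo + hi if p is not None}
    cand.update((min(points), max(points)))   # keep extreme-x points exactly
    return monotone_chain(sorted(cand))

if __name__ == "__main__":
    import random
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(10000)]
    print(len(approx_hull(pts, k=32)), "approximate hull vertices")
```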
9590 Denoising and Compression in Wavelet Domain via Projection onto Approximation Coefficients
Authors: Mario Mastriani
Abstract:
We describe a new filtering approach in the wavelet domain for image denoising and compression, based on projecting the detail-subband coefficients (produced by the usual splitting procedure in the wavelet domain) onto the approximation-subband coefficients, which are much less noisy. The new algorithm is called Projection Onto Approximation Coefficients (POAC). As a result of this approach, only the approximation-subband coefficients and three scalars are stored and/or transmitted over the channel. Moreover, eliminating the detail-subband coefficients yields a higher compression rate. Experimental results demonstrate that our approach compares favorably with more typical methods of denoising and compression in the wavelet domain.
Keywords: Compression, denoising, projections, wavelets.
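The abstract states that only the approximation coefficients and three scalars survive. One plausible reading, sketched below purely as an illustration and not as the authors' exact procedure, is that each detail subband is replaced by its least-squares projection onto the approximation subband, so one scalar per detail band suffices at the decoder.

```python
import numpy as np
import pywt

def poac_encode(img, wavelet="haar"):
    """Illustrative projection of each detail subband onto the approximation
    subband: keep cA plus one projection scalar per detail band."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), wavelet)
    a = cA.ravel()
    denom = float(a @ a) + 1e-12
    scalars = [float(d.ravel() @ a) / denom for d in (cH, cV, cD)]
    return cA, scalars

def poac_decode(cA, scalars, wavelet="haar"):
    """Rebuild each detail subband as scalar * cA and invert the DWT."""
    details = tuple(s * cA for s in scalars)
    return pywt.idwt2((cA, details), wavelet)

if __name__ == "__main__":
    img = np.random.rand(64, 64)
    cA, scalars = poac_encode(img)
    rec = poac_decode(cA, scalars)
    print(rec.shape, scalars)
```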
9589 Use of Hierarchical Temporal Memory Algorithm in Heart Attack Detection
Authors: Tesnim Charrad, Kaouther Nouira, Ahmed Ferchichi
Abstract:
In order to reduce the number of deaths due to heart problems, we propose the use of the Hierarchical Temporal Memory (HTM) algorithm, a real-time anomaly detection algorithm. HTM is a cortical learning algorithm modeled on the neocortex and used for anomaly detection; in other words, it is based on a conceptual theory of how the human brain may work. It is powerful for predicting unusual patterns, anomaly detection, and classification. In this paper, HTM has been implemented and tested on ECG datasets in order to detect cardiac anomalies. Experiments showed good performance in terms of specificity, sensitivity, and execution time.
Keywords: HTM, real-time anomaly detection, ECG, cardiac anomalies.
9588 Similarity Measure Functions for Strategy-Based Biometrics
Authors: Roman V. Yampolskiy, Venu Govindaraju
Abstract:
The functioning of a biometric system depends in large part on the performance of the similarity measure function. Frequently, a generalized similarity measure such as the Euclidean distance or the Mahalanobis distance is applied to the task of matching biometric feature vectors. However, the accuracy of a biometric system can often be greatly improved by designing a customized matching algorithm optimized for a particular biometric application. In this paper we propose a tailored similarity measure function for behavioral biometric systems based on expert knowledge of the feature-level data in the domain. We compare the performance of the proposed matching algorithm with that of other well-known similarity distance functions and demonstrate its superiority with respect to the chosen domain.
Keywords: Behavioral Biometrics, Euclidean Distance, Matching, Similarity Measure.
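As background for the two generic measures the abstract mentions, the sketch below computes both distances between feature vectors with NumPy. It shows only the baseline measures that the paper's customized function is compared against; the gallery data and covariance estimate are illustrative placeholders.

```python
import numpy as np

def euclidean(u, v):
    return float(np.linalg.norm(u - v))

def mahalanobis(u, v, cov):
    """Mahalanobis distance; `cov` is the feature covariance matrix
    estimated from enrollment data."""
    diff = u - v
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gallery = rng.normal(size=(100, 5))        # enrolled feature vectors
    cov = np.cov(gallery, rowvar=False)        # feature covariance estimate
    probe, template = gallery[0], gallery[1]
    print(euclidean(probe, template), mahalanobis(probe, template, cov))
```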
9587 Improved Simultaneous Performance in the Time Domain and in the Frequency Domain
Authors: Azeddine Ghodbane, David Bensoussan, Maher Hammami
Abstract:
In this study, we introduce an alternative adaptive architecture that enhances both time-domain and frequency-domain performance while mitigating the effects of disturbances at the plant input and of external disturbances affecting the output. To facilitate superior performance in both domains, we have developed a user-friendly interactive design method using the GeoGebra platform.
Keywords: Control theory, decentralized control, sensitivity theory, input-output stability theory, robust multivariable feedback control design.
9586 A Watermarking Scheme for MP3 Audio Files
Authors: Dimitrios Koukopoulos, Yiannis Stamatiou
Abstract:
In this work we present, to the best of our knowledge for the first time, an efficient digital watermarking scheme for MPEG audio layer 3 files that operates directly in the compressed data domain while manipulating the time and subband/channel domain. In addition, it does not need the original signal to detect the watermark. Our scheme was implemented with special care for the efficient use of the two limited resources of computer systems: time and space. It offers the industrial user watermark embedding and detection in a time comparable to the real playing time of the original audio file (which depends on the MPEG compression), while the end user/audience does not perceive any artifacts or delays when hearing the watermarked audio file. Furthermore, it overcomes the vulnerability of algorithms operating in the PCM data domain to compression/recompression attacks, since it places the watermark in the scale-factors domain and not in the digitized sound data. The strength of our scheme, which allows it to be used successfully for both authentication and copyright protection, relies on the fact that ownership of the audio file is established not simply by detecting the bit pattern that comprises the watermark itself, but by showing that the legal owner knows a hard-to-compute property of the watermark.
Keywords: Audio watermarking, mpeg audio layer 3, hard instance generation, NP-completeness.
9585 An Analysis of Genetic Algorithm Based Test Data Compression Using Modified PRL Coding
Authors: K. S. Neelukumari, K. B. Jayanthi
Abstract:
In this paper, genetic-algorithm-based test data compression is targeted at improving the compression ratio and reducing the computation time. The genetic algorithm is based on extended pattern run-length coding. The test set contains a large number of X (don't-care) values that can be effectively exploited to improve test data compression. In this coding method, a reference pattern is set and its compatibility is checked; a genetic algorithm is proposed to reduce the computation time of the encoding algorithm. The coding technique encodes 2n compatible or inversely compatible patterns into a single test data segment or multiple test data segments. The experimental results show that the compression ratio is improved and the computation time is reduced.
Keywords: Backtracking, test data compression (TDC), X-filling, X-propagating, genetic algorithm.
9584 A Methodological Approach for Detecting Burst Noise in the Time Domain
Authors: Liu Dan, Wang Xue, Wang Guiqin, Qian Zhihong
Abstract:
Burst noise is a destructive kind of noise frequently found in semiconductor devices and ICs, yet detecting and removing it has proved challenging for IC designers and users. Based on the properties of burst noise, a methodological approach is proposed in this paper by which burst noise can be analysed and detected in the time domain. Principles and properties of burst noise are expounded first; afterwards, the feasibility of burst noise detection by means of the wavelet transform in the time domain is corroborated, and the multi-resolution characteristics of Gaussian noise, burst noise, and blurred burst noise are discussed in detail through computer simulation. Furthermore, a practical method for choosing the parameters of the wavelet transform is obtained through extensive experiments and data statistics. The methodology is expected to find use in a wide variety of applications.
Keywords: Burst noise, detection, wavelet transform.
9583 Fast Factored DCT-LMS Speech Enhancement for Performance Enhancement of Digital Hearing Aid
Authors: Sunitha S. L., V. Udayashankara
Abstract:
Background noise is particularly damaging to speech intelligibility for people with hearing loss, especially for patients with sensorineural loss. Several investigations on speech intelligibility have demonstrated that sensorineural-loss patients need a 5-15 dB higher SNR than normal-hearing subjects. This paper describes a Discrete Cosine Transform power-normalized Least Mean Square (DCT-LMS) algorithm to improve the SNR and the convergence rate of the LMS for sensorineural-loss patients. Since it requires only real arithmetic, it achieves a faster convergence rate than the time-domain LMS, and the transformation improves the eigenvalue distribution of the input autocorrelation matrix of the LMS filter. The DCT has good orthonormality, separability, and energy-compaction properties. Although the DCT does not separate frequencies, it is a powerful signal decorrelator; it is real-valued and thus can be used effectively in real-time operation. The advantages of the DCT-LMS over the standard LMS algorithm are shown via SNR and eigenvalue-ratio computations. Exploiting the symmetry of the basis functions, the DCT transform matrix [AN] can be factored into a series of ±1 butterflies and rotation angles; this factorization yields one of the fastest DCT implementations. There are different ways to obtain such factorizations, and this work uses the fast factored DCT algorithm developed by Chen et al. The computer simulation results show the superior convergence characteristics of the proposed algorithm: the SNR is improved by at least 10 dB for input SNRs less than or equal to 0 dB, with faster convergence speed and better time and frequency characteristics.
Keywords: Hearing impairment, DCT adaptive filter, sensorineural loss patients, convergence rate.
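A minimal transform-domain LMS sketch is given below to make the structure concrete: each tap vector is rotated by a DCT, per-bin power estimates normalize the step size, and the weights adapt on the decorrelated coefficients. It uses SciPy's generic DCT rather than the fast factored butterfly implementation the paper adopts, and the step-size and smoothing constants are illustrative.

```python
import numpy as np
from scipy.fft import dct

def dct_lms(x, d, M=16, mu=0.05, beta=0.9, eps=1e-8):
    """Transform-domain (DCT) power-normalized LMS.
    x: noisy input, d: desired/reference signal, M: filter length."""
    w = np.zeros(M)          # weights in the DCT domain
    p = np.full(M, eps)      # running per-bin power estimates
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    buf = np.zeros(M)
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]                       # current tap vector
        z = dct(buf, norm="ortho")          # decorrelating transform
        p = beta * p + (1.0 - beta) * z**2  # power normalization
        y[n] = w @ z
        e[n] = d[n] - y[n]
        w += mu * e[n] * z / (p + eps)      # normalized update per bin
    return y, e, w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clean = np.sin(2 * np.pi * 0.05 * np.arange(2000))
    noisy = clean + 0.5 * rng.normal(size=2000)
    y, e, _ = dct_lms(noisy, clean)
```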
9582 Differentiation of Heart Rate Time Series from Electroencephalogram and Noise
Authors: V. I. Thajudin Ahamed, P. Dhanasekaran, Paul Joseph K.
Abstract:
Analysis of heart rate variability (HRV) has become a popular non-invasive tool for assessing the activity of the autonomic nervous system. Most of the methods used have been borrowed from general time series analysis; the currently used methods are time-domain, frequency-domain, geometrical, and fractal methods. A new technique, which searches for pattern repeatability in a time series, is proposed for quantifying heart rate (HR) time series. This set of indices, termed the pattern repeatability measure and the pattern repeatability ratio, is able to distinguish HR data clearly from noise and from the electroencephalogram (EEG). The results of analysis using these measures give an insight into the fundamental difference between the composition of HR time series and that of EEG and noise.
Keywords: Approximate entropy, heart rate variability, noise, pattern repeatability, sample entropy.
9581 An Efficient Watermarking Method for MP3 Audio Files
Authors: Dimitrios Koukopoulos, Yiannis Stamatiou
Abstract:
In this work we present, to the best of our knowledge for the first time, an efficient digital watermarking scheme for MPEG audio layer 3 files that operates directly in the compressed data domain while manipulating the time and subband/channel domain. In addition, it does not need the original signal to detect the watermark. Our scheme was implemented with special care for the efficient use of the two limited resources of computer systems: time and space. It offers the industrial user watermark embedding and detection in a time comparable to the real playing time of the original audio file (which depends on the MPEG compression), while the end user/audience does not perceive any artifacts or delays when hearing the watermarked audio file. Furthermore, it overcomes the vulnerability of algorithms operating in the PCM data domain to compression/recompression attacks, since it places the watermark in the scale-factors domain and not in the digitized sound data. The strength of our scheme, which allows it to be used successfully for both authentication and copyright protection, relies on the fact that ownership of the audio file is established not simply by detecting the bit pattern that comprises the watermark itself, but by showing that the legal owner knows a hard-to-compute property of the watermark.
Keywords: Audio watermarking, mpeg audio layer 3, hard instance generation, NP-completeness.
9580 Lexicon-Based Sentiment Analysis for Stock Movement Prediction
Authors: Zane Turner, Kevin Labille, Susan Gauch
Abstract:
Sentiment analysis is a broad and expanding field that aims to extract and classify opinions from textual data. Lexicon-based approaches are based on the use of a sentiment lexicon, i.e., a list of words each mapped to a sentiment score, to rate the sentiment of a text chunk. Our work focuses on predicting stock price change using a sentiment lexicon built from financial conference call logs. We present a method to generate a sentiment lexicon based upon an existing probabilistic approach. By using a domain-specific lexicon, we outperform traditional techniques and demonstrate that domain-specific sentiment lexicons provide higher accuracy than generic sentiment lexicons when predicting stock price change.
Keywords: Lexicon, sentiment analysis, stock movement prediction, computational finance.
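To illustrate the lexicon-based scoring step described above, the sketch below averages word-level sentiment scores over a text chunk and maps the result to a movement signal. The tiny lexicon and the threshold are placeholders, not the conference-call lexicon built in the paper.

```python
import re

# Toy lexicon: word -> sentiment score (placeholder values only).
LEXICON = {"growth": 0.8, "beat": 0.6, "strong": 0.5,
           "miss": -0.7, "decline": -0.6, "weak": -0.5}

def sentiment_score(text, lexicon=LEXICON):
    """Average the scores of lexicon words found in the text chunk."""
    tokens = re.findall(r"[a-z']+", text.lower())
    hits = [lexicon[t] for t in tokens if t in lexicon]
    return sum(hits) / len(hits) if hits else 0.0

def predict_direction(text, threshold=0.0):
    """Map the aggregate score to an up/down movement signal."""
    return "up" if sentiment_score(text) > threshold else "down"

if __name__ == "__main__":
    call = "Revenue growth was strong this quarter despite a weak margin."
    print(sentiment_score(call), predict_direction(call))
```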
9579 Lexicon-Based Sentiment Analysis for Stock Movement Prediction
Authors: Zane Turner, Kevin Labille, Susan Gauch
Abstract:
Sentiment analysis is a broad and expanding field that aims to extract and classify opinions from textual data. Lexicon-based approaches are based on the use of a sentiment lexicon, i.e., a list of words each mapped to a sentiment score, to rate the sentiment of a text chunk. Our work focuses on predicting stock price change using a sentiment lexicon built from financial conference call logs. We introduce a method to generate a sentiment lexicon based upon an existing probabilistic approach. By using a domain-specific lexicon, we outperform traditional techniques and demonstrate that domain-specific sentiment lexicons provide higher accuracy than generic sentiment lexicons when predicting stock price change.
Keywords: Computational finance, sentiment analysis, sentiment lexicon, stock movement prediction.
9578 A Genetic Algorithm with Priority Selection for the Traveling Salesman Problem
Authors: Cha-Hwa Lin, Je-Wei Hu
Abstract:
The conventional GA combined with a local search algorithm, such as 2-OPT, forms a hybrid genetic algorithm (HGA) for the traveling salesman problem (TSP). However, geometric properties, which constitute problem-specific knowledge, can be used to improve the search process of the HGA. Some tour segments (edges) of a TSP are good, while others may be too long to appear in a short tour. This knowledge can constrain the GA to work with good tour segments and to consider long tour segments less often. Consequently, a new algorithm is proposed, called the intelligent-OPT hybrid genetic algorithm (IOHGA), to improve the GA and the 2-OPT algorithm in order to reduce the search time for the optimal solution. Based on the geometric properties, all tour segments are assigned two-level priorities to distinguish between good and bad genes. A simulation study was conducted to evaluate the performance of the IOHGA. The experimental results indicate that in general the IOHGA obtains near-optimal solutions in less time and with better accuracy than a hybrid genetic algorithm with simulated annealing (HGA(SA)).
Keywords: Traveling salesman problem, hybrid genetic algorithm, priority selection, 2-OPT.
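For reference, the 2-OPT local search used as the HGA's improvement step works by reversing a tour segment whenever doing so shortens the tour; a compact sketch with Euclidean edge lengths follows. The priority-selection logic of the IOHGA itself is not reproduced here.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(tour, pts):
    return sum(dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Repeatedly reverse segments while doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[(j + 1) % len(tour)]
                # Gain of replacing edges (a,b) and (c,d) by (a,c) and (b,d).
                delta = (dist(pts[a], pts[c]) + dist(pts[b], pts[d])
                         - dist(pts[a], pts[b]) - dist(pts[c], pts[d]))
                if delta < -1e-12:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour
```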
9577 Real-time Detection of Space Manipulator Self-collision
Authors: Zhang Xiaodong, Tang Zixin, Liu Xin
Abstract:
In order to avoid self-collision of space manipulators during operation, a real-time detection method is proposed in this paper. The manipulator links are enclosed in cylindrical enveloping surfaces, and a detection algorithm for collision between cylinders is analyzed. Using this algorithm, self-collision between manipulator links can be detected in real time during the operation. To ensure the safety of the operation, a safety threshold is designed. Simulation and experiment results verify the effectiveness of the proposed algorithm on a 7-DOF space manipulator.
Keywords: Space manipulator, collision detection, self-collision, real-time collision detection.
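Cylinder-cylinder collision checking of this kind typically reduces to comparing the distance between the two cylinder axes (line segments) with the sum of the radii plus a safety threshold. The sketch below uses a standard clamped segment-segment closest-point routine; it illustrates the geometric idea, not the paper's exact model, and the radii and threshold values are placeholders.

```python
import numpy as np

def seg_seg_distance(p1, q1, p2, q2, eps=1e-12):
    """Minimum distance between 3-D segments [p1,q1] and [p2,q2]."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    if a <= eps and e <= eps:                 # both segments are points
        return float(np.linalg.norm(r))
    if a <= eps:                              # first segment is a point
        s, t = 0.0, np.clip(f / e, 0.0, 1.0)
    else:
        c = d1 @ r
        if e <= eps:                          # second segment is a point
            s, t = np.clip(-c / a, 0.0, 1.0), 0.0
        else:
            b = d1 @ d2
            denom = a * e - b * b
            s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > eps else 0.0
            t = (b * s + f) / e
            if t < 0.0:
                t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
            elif t > 1.0:
                t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return float(np.linalg.norm((p1 + s * d1) - (p2 + t * d2)))

def cylinders_collide(axis1, r1, axis2, r2, safety=0.02):
    """Two links modelled as cylinders are flagged as colliding when their
    axes come closer than the sum of the radii plus a safety threshold."""
    return seg_seg_distance(*axis1, *axis2) < r1 + r2 + safety

if __name__ == "__main__":
    link1 = (np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
    link2 = (np.array([0.5, 0.1, 0.0]), np.array([0.5, 1.0, 0.0]))
    print(cylinders_collide(link1, 0.05, link2, 0.05))   # True: axes 0.1 apart
```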
9576 An Improved Method to Compute Sparse Graphs for Traveling Salesman Problem
Authors: Y. Wang
Abstract:
The traveling salesman problem (TSP) is NP-hard in combinatorial optimization. Research shows that TSP algorithms running on sparse graphs have shorter computation times than those running on the corresponding complete graphs. We present an improved iterative algorithm to compute sparse graphs for the TSP using frequency graphs computed with frequency quadrilaterals. The iterative algorithm is enhanced by adjusting two of its parameters. The computation time of the algorithm is O(C*Nmax*n^2), where C is the number of iterations, Nmax is the maximum number of frequency quadrilaterals containing each edge, and n is the size of the TSP instance. The experimental results show that the computed sparse graphs generally have fewer than 5n edges for most of the Euclidean instances considered. Moreover, the maximum and minimum vertex degrees in the sparse graphs do not differ much. Thus, the computation time of methods that resolve the TSP on these sparse graphs will be greatly reduced.
Keywords: Frequency quadrilateral, iterative algorithm, sparse graph, traveling salesman problem.
9575 Comparing Hilditch, Rosenfeld, Zhang-Suen, and Nagendraprasad-Wang-Gupta Thinning
Authors: Anastasia Rita Widiarti
Abstract:
This paper compares the Hilditch, Rosenfeld, Zhang-Suen, and Nagendraprasad-Wang-Gupta (NWG) thinning algorithms for Javanese character image recognition. Thinning is an effective process when the focus is not on the size of the pattern but rather on the relative position of the strokes in the pattern. The research analyzes the thinning of 60 Javanese characters. Time-wise, the Zhang-Suen algorithm gives the best results, with an average processing time of 0.00455188 seconds. If we instead look at the percentage of pixels that meet the one-pixel-thickness criterion, the Rosenfeld algorithm gives the best results, with a 99.98% success rate. In terms of the number of pixels erased, the NWG algorithm gives the best results, erasing 84.12% of pixels on average. It can be concluded that the Hilditch algorithm performs least successfully compared to the other three algorithms.
Keywords: Hilditch algorithm, Nagendraprasad-Wang-Gupta algorithm, Rosenfeld algorithm, thinning, Zhang-Suen algorithm.
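Of the four methods compared, Zhang-Suen is the most compact to state: two alternating sub-iterations delete boundary pixels that satisfy neighbour-count, transition-count, and connectivity conditions until nothing changes. A NumPy sketch of the standard algorithm follows; it assumes a binary image with foreground pixels equal to 1 and is independent of the paper's Javanese character data.

```python
import numpy as np

def _neighbours(img, y, x):
    # P2..P9, clockwise starting from the pixel directly above (y-1, x).
    return [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
            img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]

def zhang_suen(image):
    """Zhang-Suen thinning of a binary image (foreground = 1)."""
    img = np.pad(image.astype(np.uint8), 1)   # zero border avoids bounds checks
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            ys, xs = np.nonzero(img)
            for y, x in zip(ys, xs):
                P = _neighbours(img, y, x)
                B = sum(P)                                   # nonzero neighbours
                A = sum((P[i] == 0) and (P[(i + 1) % 8] == 1) for i in range(8))
                if 2 <= B <= 6 and A == 1:
                    if step == 0 and P[0]*P[2]*P[4] == 0 and P[2]*P[4]*P[6] == 0:
                        to_delete.append((y, x))
                    if step == 1 and P[0]*P[2]*P[6] == 0 and P[0]*P[4]*P[6] == 0:
                        to_delete.append((y, x))
            for y, x in to_delete:
                img[y, x] = 0
            changed = changed or bool(to_delete)
    return img[1:-1, 1:-1]
```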
9574 A Distributed Group Mutual Exclusion Algorithm for Soft Real Time Systems
Authors: Abhishek Swaroop, Awadhesh Kumar Singh
Abstract:
The group mutual exclusion (GME) problem is an interesting generalization of the mutual exclusion problem. Several solutions to the GME problem have been proposed for message-passing distributed systems; however, none of them is suitable for real-time distributed systems. In this paper, we propose a token-based distributed algorithm for the GME problem in soft real-time distributed systems. The algorithm uses the concepts of a priority queue, a dynamic request set, and the process state. It uses a first-come-first-served approach in selecting the next session type among requests of the same priority level and satisfies the concurrent occupancy property. The algorithm allows all n processors to be inside their critical sections simultaneously provided they request the same session. A performance analysis and a correctness proof of the algorithm are also included in the paper.
Keywords: Concurrency, group mutual exclusion, priority, request set, token.
9573 Fast Complex Valued Time Delay Neural Networks
Authors: Hazem M. El-Bakry, Qiangfu Zhao
Abstract:
Here, a new idea to speed up the operation of complex-valued time delay neural networks is presented: the whole data set is collected into one long vector and then tested as a single input pattern. The proposed fast complex-valued time delay neural network uses cross-correlation in the frequency domain between the tested data and the input weights of the network. It is proved mathematically that the number of computation steps required by the presented fast complex-valued time delay neural network is less than that needed by classical time delay neural networks. Simulation results using MATLAB confirm the theoretical computations.
Keywords: Fast complex-valued time delay neural networks, cross-correlation, frequency domain.
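The speed-up rests on the standard identity that cross-correlation in the time domain becomes an element-wise product in the frequency domain. The NumPy sketch below (standing in for the MATLAB simulations mentioned above) compares the direct and FFT-based computations for complex-valued data and weights.

```python
import numpy as np

def xcorr_fft(x, w):
    """Circular cross-correlation of x with weights w via the FFT:
    corr = IFFT( FFT(x) * conj(FFT(w)) ), O(N log N) instead of O(N^2)."""
    n = len(x)
    return np.fft.ifft(np.fft.fft(x, n) * np.conj(np.fft.fft(w, n)))

def xcorr_direct(x, w):
    """Direct circular cross-correlation, for checking the FFT version."""
    n = len(x)
    return np.array([sum(x[(k + m) % n] * np.conj(w[m]) for m in range(n))
                     for k in range(n)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=256) + 1j * rng.normal(size=256)   # complex-valued input
    w = rng.normal(size=256) + 1j * rng.normal(size=256)   # complex-valued weights
    print(np.allclose(xcorr_fft(x, w), xcorr_direct(x, w)))   # True
```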
9572 SDVAR Algorithm for Detecting Fraud in Telecommunications
Authors: Fatimah Almah Saaid, Darfiana Nur, Robert King
Abstract:
This paper presents a procedure for estimating a VAR model using the Sequential Discounting VAR (SDVAR) algorithm for online model learning, in order to detect fraudulent acts from telecommunications call detail records (CDR). The volatility of the VAR is observed, allowing for non-linearity, outliers, and change points, following the work of [1]. This paper extends their procedure from univariate to multivariate time series. A simulation and a case study on detecting telecommunications fraud using CDR illustrate the use of the algorithm in the bivariate setting.
Keywords: Telecommunications fraud, SDVAR algorithm, multivariate time series, vector autoregressive, change points.
9571 Improving the Performance of Back-Propagation Training Algorithm by Using ANN
Authors: Vishnu Pratap Singh Kirar
Abstract:
An Artificial Neural Network (ANN) can be trained using back-propagation (BP), the most widely used algorithm for supervised learning with multi-layered feed-forward networks. Efficient learning by the BP algorithm is required for many practical applications. The BP algorithm calculates the weight changes of the network, and a common approach is to use a two-term update consisting of a learning rate (LR) and a momentum factor (MF). The major drawbacks of the two-term BP learning algorithm are the problems of local minima and slow convergence, which limit its scope for real-time applications. Recently, the addition of an extra term, called a proportional factor (PF), to the two-term BP algorithm was proposed. The third term increases the speed of the BP algorithm; however, it can also hamper convergence, so criteria for evaluating convergence are required to facilitate the application of the three-term BP algorithm. Although speed and convergence are closely related, we summarize, as described later, various improvements proposed to overcome these drawbacks and compare the different convergence behaviors of the new three-term BP algorithm.
Keywords: Neural Network, Backpropagation, Local Minima, Fast Convergence Rate.
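As a concrete reading of the two-term versus three-term update, the sketch below shows a weight change built from a learning-rate term, a momentum term, and a proportional-factor term taken proportional to the current output error. The exact form of the third term in the cited proposal may differ, so this is only an illustrative formulation on a toy single-weight problem.

```python
import numpy as np

def three_term_update(w, grad, prev_dw, error, lr=0.1, mf=0.8, pf=0.01):
    """One weight update of a three-term back-propagation rule:
    dw = -lr * dE/dw + mf * previous dw + pf * e
    (learning rate, momentum factor, proportional factor)."""
    dw = -lr * grad + mf * prev_dw + pf * error
    return w + dw, dw

if __name__ == "__main__":
    # Toy single neuron y = w * x trained towards target t, to show the update in use.
    w, prev_dw = 0.0, 0.0
    x, t = 2.0, 1.0
    for _ in range(200):
        y = w * x
        e = t - y                 # output error
        grad = -e * x             # dE/dw for E = 0.5 * e^2
        w, prev_dw = three_term_update(w, grad, prev_dw, e)
    print(round(w, 3))            # approaches t / x = 0.5
```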
9570 Analysis and Simulation of TM Fields in Waveguides with Arbitrary Cross-Section Shapes by Means of Evolutionary Equations of Time-Domain Electromagnetic Theory
Authors: Ömer Aktaş, Olga A. Suvorova, Oleg Tretyakov
Abstract:
The boundary value problem on a non-canonical, arbitrarily shaped contour is solved with a numerically effective method, the Analytical Regularization Method (ARM), to calculate the propagation parameters. As a result of the regularization, the first-kind equation is reduced to an infinite system of linear algebraic equations of the second kind in the space L2; this system can be solved numerically to the desired accuracy by using the truncation method. The resulting parameters, the cut-off wavenumber and cut-off frequency, are then used in the waveguide evolutionary equations of time-domain electromagnetic theory to illustrate the real-valued TM fields in lossy and lossless media.
Keywords: Arbitrary cross-section waveguide, analytical regularization method, evolutionary equations of time-domain electromagnetic theory, TM field.
9569 An Efficient Multi Join Algorithm Utilizing a Lattice of Double Indices
Authors: Hanan A. M. Abd Alla, Lilac A. E. Al-Safadi
Abstract:
In this paper, a novel multi-join algorithm for joining multiple relations is introduced. The algorithm is based on a hash-based join of two relations that produces a double index. This is done by scanning the two relations once; instead of moving the records into buckets, a double index is built, which eliminates the collisions that can occur in a complete hash algorithm. The double index is divided into join buckets of similar categories from the two relations, and the algorithm then joins buckets with similar keys to produce joined buckets. This ultimately yields a complete join index of the two relations without actually joining the relations themselves. The time complexity required to build the join index of two categories is O(m log m), where m is the size of each category, giving a total of O(n log m) over all buckets. The join index is used to materialize the joined relation if required; otherwise, it is used along with the join indices of other relations to build a lattice for multi-join operations with minimal I/O requirements. The lattice of join indices can be fitted into main memory to reduce the time complexity of the multi-join algorithm.
Keywords: Multi-join, relation, lattice, join indices.
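To make the idea of a join index concrete, the sketch below builds, in one scan of each relation, a mapping from join key to the row identifiers on both sides; materializing the join, or combining the result with other indices, can then work from these identifier pairs alone. This is a generic join-index illustration, not the authors' double-index structure.

```python
from collections import defaultdict

def build_join_index(R, S, key_r, key_s):
    """Scan each relation once and record, per join key, the row ids
    of matching tuples on both sides (a simple join index)."""
    index = defaultdict(lambda: ([], []))
    for rid, row in enumerate(R):
        index[row[key_r]][0].append(rid)
    for sid, row in enumerate(S):
        index[row[key_s]][1].append(sid)
    # Keep only keys present in both relations.
    return {k: v for k, v in index.items() if v[0] and v[1]}

def materialize(R, S, join_index):
    """Optionally expand the join index into the joined relation."""
    return [{**R[r], **S[s]}
            for r_ids, s_ids in join_index.values()
            for r in r_ids for s in s_ids]

if __name__ == "__main__":
    R = [{"id": 1, "dept": "A"}, {"id": 2, "dept": "B"}]
    S = [{"dept": "A", "name": "x"}, {"dept": "A", "name": "y"}]
    idx = build_join_index(R, S, "dept", "dept")
    print(idx)                      # {'A': ([0], [0, 1])}
    print(materialize(R, S, idx))
```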
9568 Dynamic Response of Nano Spherical Shell Subjected to Thermo-Mechanical Shock Using Nonlocal Elasticity Theory
Authors: J. Ranjbarn, A. Alibeigloo
Abstract:
In this paper, we present an analytical method for the analysis of a nano-scale spherical shell subjected to thermo-mechanical shocks based on nonlocal elasticity theory. The thermo-mechanical properties of the nano-sphere are assumed to be temperature dependent. The governing partial differential equation of motion is solved analytically by using a Laplace transform in the time domain and a power series in the spatial domain. The results in the Laplace domain are transferred to the time domain by employing the fast inverse Laplace transform (FLIT) method. The accuracy of the present approach is assessed by comparing the numerical results with results published in the literature. Furthermore, the effects of the nonlocal parameter and the wall thickness on the dynamic characteristics of the nano-sphere are studied.
Keywords: Nano-scale spherical shell, nonlocal elasticity theory, thermo-mechanical shock.
9567 The Lubrication Regimes Recognition of a Pressure-Fed Journal Bearing by Time and Frequency Domain Analysis of Acoustic Emission Signals
Authors: S. Hosseini, M. Ahmadi Najafabadi, M. Akhlaghi
Abstract:
The health of journal bearings is very important in preventing unforeseen breakdowns in rotary machines, and poor lubrication is one of the most important factors producing bearing failures. Hydrodynamic lubrication (HL), mixed lubrication (ML), and boundary lubrication (BL) are the three lubrication regimes of a journal bearing. This paper uses the acoustic emission (AE) measurement technique to correlate features of the AE signals with the three lubrication regimes. The transitions from HL to ML under operating factors such as rotating speed, load, and inlet oil pressure are detected with time-domain and time-frequency-domain signal analysis techniques, and metal-to-metal contacts between the sliding surfaces of the journal and the bearing are then identified. It is found that there is a significant difference between the theoretical and experimental operating values obtained for defining the lubrication regions.
Keywords: Acoustic emission technique, pressure-fed journal bearing, time and frequency signal analysis, metal-to-metal contact.
9566 Efficient Spectral Analysis of Quasi Stationary Time Series
Authors: Khalid M. Aamir, Mohammad A. Maud
Abstract:
The Power Spectral Density (PSD) of quasi-stationary processes can be efficiently estimated using the short-time Fourier transform (STFT). In this paper, an algorithm is proposed that computes the PSD of a quasi-stationary process efficiently using an offline autoregressive model order estimation algorithm, a recursive parameter estimation technique, and a modified sliding-window discrete Fourier transform algorithm. The main difference between this algorithm and the STFT is that the sliding window (SW) and the window for spectral estimation (WSA) are defined separately: the WSA is updated and its PSD is computed only when a change in the statistics is detected within the SW. The computational complexity of the proposed algorithm is found to be lower than that of the standard STFT technique.
Keywords: Power Spectral Density (PSD), quasi-stationary time series, short-time Fourier transform, sliding-window DFT.
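The "modified sliding window DFT" mentioned above builds on the standard sliding-DFT recurrence, in which each new sample updates every frequency bin in O(1) instead of recomputing a full transform. A sketch of that basic recurrence (not the authors' modification) follows.

```python
import numpy as np

def sliding_dft(x, N):
    """Yield the N-point DFT of each length-N window of x using the
    sliding-DFT recurrence X_k <- (X_k + x[n] - x[n-N]) * exp(j*2*pi*k/N)."""
    k = np.arange(N)
    twiddle = np.exp(2j * np.pi * k / N)
    X = np.fft.fft(x[:N])                # initialize with the first window
    yield X.copy()
    for n in range(N, len(x)):
        X = (X + x[n] - x[n - N]) * twiddle
        yield X.copy()

if __name__ == "__main__":
    x = np.random.default_rng(0).normal(size=64)
    N = 16
    last = None
    for X in sliding_dft(x, N):
        last = X
    # Cross-check the final window against a direct FFT.
    print(np.allclose(last, np.fft.fft(x[-N:])))   # True
```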
9565 Data Hiding in Images in Discrete Wavelet Domain Using PMM
Authors: Souvik Bhattacharyya, Gautam Sanyal
Abstract:
Over the last two decades, owing to the hostile environment of the internet, concerns about the confidentiality of information have increased at a phenomenal rate. To safeguard information from attacks, a number of data/information hiding methods have evolved, mostly in the spatial and transform domains. In spatial-domain data hiding techniques, the information is embedded directly in the image plane itself; in transform-domain techniques, the image is first changed from the spatial domain to some other domain and the secret information is then embedded, so that it remains more secure against attack. Information hiding algorithms in the time or spatial domain have high capacity but relatively low robustness; in contrast, algorithms in transform domains such as the DCT and DWT have a certain robustness against some multimedia processing. In this work, the authors propose a novel steganographic method for hiding information in the transform domain of a gray-scale image. The proposed approach converts the gray-level image into the transform domain using a discrete integer wavelet obtained through the lifting scheme: it performs a 2-D lifting wavelet decomposition with the lifted Haar wavelet of the cover image and computes the approximation coefficient matrix CA and the detail coefficient matrices CH, CV, and CD. The next step is to apply the Pixel Mapping Method (PMM) to those coefficients to form the stego image. The aim of this paper is to propose a high-capacity image steganography technique that uses the pixel mapping method in the integer wavelet domain with acceptable levels of imperceptibility and distortion in the cover image and a high level of overall security. The solution is independent of the nature of the data to be hidden and produces a stego image with minimum degradation.
Keywords: Cover image, Pixel Mapping Method (PMM), stego image, integer wavelet transform.