Search results for: Vector Quantization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 791

401 Least Square-SVM Detector for Wireless BPSK in Multi-Environmental Noise

Authors: J. P. Dubois, Omar M. Abdul-Latif

Abstract:

Support Vector Machine (SVM) is a statistical learning tool built on the principle of structural risk minimization (SRM). In this paper, SVM is applied to signal detection in communication systems in the presence of channel noise in various environments: Rayleigh fading, additive white Gaussian background noise (AWGN), and interference noise generalized as additive colored Gaussian noise (ACGN). The structure and performance of SVM in terms of the bit error rate (BER) metric are derived and simulated for these advanced stochastic noise models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of SVM is then compared to a conventional optimal model-based detector for binary signaling driven by binary phase shift keying (BPSK) modulation. We show that the SVM performance is superior to that of conventional matched filter-, innovation filter-, and Wiener filter-driven detectors, even in the presence of random Doppler carrier deviation, especially in low SNR (signal-to-noise ratio) ranges. For large SNR, the performance of the SVM is similar to that of the classical detectors. However, the convergence between SVM and maximum likelihood detection occurs at a higher SNR as the noise environment becomes more hostile.
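
As a minimal, illustrative sketch of such a detector (not the paper's implementation), the following snippet trains a least-squares SVM, solved as a single linear KKT system per the Suykens LS-SVM formulation, on noisy BPSK samples over AWGN and compares its BER with a plain matched-filter (sign) decision. The SNR, kernel width, regularization constant and sample sizes are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared distances -> RBF Gram matrix.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def bpsk_awgn(n, snr_db):
    # BPSK symbols (+1/-1) observed through AWGN at a given SNR (dB).
    s = 2.0 * rng.integers(0, 2, n) - 1.0
    sigma = 10 ** (-snr_db / 20)
    return (s + sigma * rng.standard_normal(n)).reshape(-1, 1), s

X_tr, y_tr = bpsk_awgn(200, 4)
X_te, y_te = bpsk_awgn(10_000, 4)

# LS-SVM: solve one linear KKT system instead of a quadratic program.
C = 10.0
K = rbf_kernel(X_tr, X_tr)
m = len(y_tr)
A = np.block([[np.zeros((1, 1)), np.ones((1, m))],
              [np.ones((m, 1)), K + np.eye(m) / C]])
sol = np.linalg.solve(A, np.concatenate(([0.0], y_tr)))
b, alpha = sol[0], sol[1:]

y_hat = np.sign(rbf_kernel(X_te, X_tr) @ alpha + b)
ber_svm = np.mean(y_hat != y_te)
ber_mf = np.mean(np.sign(X_te.ravel()) != y_te)  # matched-filter baseline
print(f"BER  LS-SVM: {ber_svm:.4f}   matched filter: {ber_mf:.4f}")
```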

Keywords: Colour noise, Doppler shift, innovation filter, least square-support vector machine, matched filter, Rayleigh fading, Wiener filter.

400 Improved Feature Processing for Iris Biometric Authentication System

Authors: Somnath Dey, Debasis Samanta

Abstract:

Iris-based biometric authentication is gaining importance in recent times. Iris biometric processing, however, is a complex and computationally very expensive process. In the overall processing of iris biometrics in an iris-based biometric authentication system, feature processing is an important task. In feature processing, we extract iris features, which are ultimately used in matching. Since there is a large number of iris features and computational time increases with the number of features, it is a challenge to develop an iris processing system with as few features as possible without compromising correctness. In this paper, we address this issue and present an approach to the feature extraction and feature matching process. We apply the Daubechies D4 wavelet with 4 levels to extract features from iris images. These features are encoded with 2 bits by quantizing into 4 quantization levels. With our proposed approach it is possible to represent an iris template with only 304 bits, whereas existing approaches require as many as 1024 bits. In addition, we assign different weights to different iris regions when comparing two iris templates, which significantly increases the accuracy. Further, we match the iris templates based on a weighted similarity measure. Experimental results on several iris databases substantiate the efficacy of our approach.
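
A hedged sketch of this feature-processing pipeline with PyWavelets: here D4 is interpreted as the four-tap Daubechies filter ('db2' in PyWavelets naming), and the choice of the level-4 approximation band and quartile quantization thresholds are illustrative assumptions, not the paper's exact design.

```python
import numpy as np
import pywt

# Hypothetical normalized iris strip (rows x columns) after segmentation.
iris = np.random.default_rng(1).random((32, 256))

# 4-level decomposition with the 4-tap Daubechies filter ('db2' in pywt).
coeffs = pywt.wavedec2(iris, wavelet="db2", level=4)
features = coeffs[0].ravel()   # level-4 approximation as the feature set

# 2-bit quantization: 4 levels at the quartiles of the feature distribution.
edges = np.quantile(features, [0.25, 0.5, 0.75])
codes = np.digitize(features, edges)                  # values in {0,1,2,3}
bits = np.unpackbits(codes.astype(np.uint8)[:, None], axis=1)[:, -2:]
print("template length:", bits.size, "bits")          # 2 bits per feature
```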

Keywords: Iris recognition, biometric, feature processing, pattern recognition, pattern matching.

399 Statistical Distributions of the Lapped Transform Coefficients for Images

Authors: Vijay Kumar Nath, Deepika Hazarika, Anil Mahanta

Abstract:

Discrete Cosine Transform (DCT) based transform coding is very popular in image, video and speech compression due to its good energy compaction and decorrelating properties. However, at low bit rates, the reconstructed images generally suffer from visually annoying blocking artifacts as a result of coarse quantization. The lapped transform was proposed as an alternative to the DCT with reduced blocking artifacts and increased coding gain. Lapped transforms are popular for their good performance, robustness against oversmoothing and availability of fast implementation algorithms. However, no proper study has been reported in the literature regarding the statistical distributions of block Lapped Orthogonal Transform (LOT) and Lapped Biorthogonal Transform (LBT) coefficients. This study performs two goodness-of-fit tests, the Kolmogorov-Smirnov (KS) test and the χ²-test, to determine the distribution that best fits the LOT and LBT coefficients. The experimental results show that the distribution of a majority of the significant AC coefficients can be modeled by the Generalized Gaussian distribution. Knowledge of the statistical distribution of transform coefficients greatly helps in the design of optimal quantizers, which may lead to minimum distortion and hence optimal coding efficiency.
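
The following sketch runs both goodness-of-fit tests with SciPy's generalized Gaussian (gennorm); the synthetic coefficients, bin count and shape parameter are placeholders for an actual LOT/LBT subband.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for the significant AC coefficients of one subband.
coeffs = stats.gennorm.rvs(beta=0.8, scale=5.0, size=4000, random_state=2)

# Fit a Generalized Gaussian and run the KS test against the fitted CDF.
beta, loc, scale = stats.gennorm.fit(coeffs)
ks = stats.kstest(coeffs, "gennorm", args=(beta, loc, scale))

# Chi-square test on k equiprobable bins; ddof=3 for the 3 fitted parameters.
k = 20
edges = stats.gennorm.ppf(np.linspace(0, 1, k + 1), beta, loc, scale)
observed, _ = np.histogram(coeffs, bins=edges)
chi2 = stats.chisquare(observed, f_exp=np.full(k, len(coeffs) / k), ddof=3)
print(f"KS p-value: {ks.pvalue:.3f}   chi2 p-value: {chi2.pvalue:.3f}")
```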

Keywords: Lapped orthogonal transform, Lapped biorthogonal transform, Image compression, KS test, χ²-test.

398 Optimization Approach to Estimate Hammerstein–Wiener Nonlinear Blocks in Presence of Noise and Disturbance

Authors: Leili Esmaeilani, Jafar Ghaisari, Mohsen Ahmadian

Abstract:

The Hammerstein–Wiener model is a block-oriented model in which a linear dynamic system is surrounded by two static nonlinearities at its input and output, and it can be used to model various processes. This paper presents an optimization approach to the Hammerstein–Wiener system identification problem. The method relies on reformulating the identification problem, solving it as a constrained quadratic problem, and analysing its solutions. During the formulation of the problem, the effects on the resulting equations of adding noise to both the input and output signals of the nonlinear blocks and a disturbance to the linear block are discussed. Additionally, a possible parametric form of the matrix operations that reduces the equation size is presented. To analyse the possible solutions of the resulting system of equations, a method is presented that reduces the difference between the number of equations and the number of unknown variables by formulating and importing existing knowledge about the nonlinear functions. The resulting equations are applied to an example H–W system to validate the results and illustrate the proposed method.
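
For intuition, here is a minimal simulation of the block structure (static nonlinearity → linear dynamic block → static nonlinearity); all blocks are hypothetical stand-ins, not the system identified in the paper.

```python
import numpy as np
from scipy.signal import lfilter

# Hammerstein-Wiener: u -> f(.) -> G(q) -> g(.) -> y, with illustrative blocks.
f = lambda u: u + 0.3 * u**2                 # input static nonlinearity
g = lambda x: np.tanh(x)                     # output static nonlinearity
b, a = [0.2, 0.1], [1.0, -0.7]               # linear block G(q) = B(q)/A(q)

rng = np.random.default_rng(3)
u = rng.uniform(-1, 1, 500)                  # excitation signal
x = lfilter(b, a, f(u))                      # intermediate (unmeasured) signal
y = g(x) + 0.01 * rng.standard_normal(500)   # measured output with noise
```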

Keywords: Identification, Hammerstein-Wiener, optimization, quantization.

397 Numerical Analysis of the SIR-SI Differential Equations with Application to Dengue Disease Mapping in Kuala Lumpur, Malaysia

Authors: N. A. Samat, D. F. Percy

Abstract:

The main aim of this study is to describe and introduce a method of numerical analysis for obtaining approximate solutions of the SIR-SI differential equations (susceptible-infective-recovered for human populations; susceptible-infective for vector populations) that represent a model for dengue disease transmission. Firstly, we describe the ordinary differential equations for the SIR-SI disease transmission model. Then, we introduce the numerical analysis of solutions of this continuous time, discrete space SIR-SI model by simplifying the continuous time scale to a densely populated, discrete time scale. This is followed by the application of this numerical analysis to the estimation of relative risk using continuous time, discrete space dengue data for Kuala Lumpur, Malaysia. Finally, we present the results of the analysis, comparing and displaying the results in graphs, tables and maps. The numerical analysis of solutions that we implemented offers a useful and potentially superior model for estimating relative risks based on continuous time, discrete space data for vector-borne infectious diseases, specifically dengue disease.
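
A compact sketch of the deterministic SIR-SI system integrated numerically with SciPy; all parameter values and population sizes are illustrative placeholders, not the Kuala Lumpur estimates.

```python
import numpy as np
from scipy.integrate import odeint

# SIR-SI sketch: S, I, R for humans; Sv, Iv for the mosquito vector.
def sir_si(y, t, beta_h, beta_v, gamma, mu_v, Nh, Nv):
    S, I, R, Sv, Iv = y
    dS = -beta_h * S * Iv / Nh                     # humans infected by vectors
    dI = beta_h * S * Iv / Nh - gamma * I
    dR = gamma * I
    dSv = mu_v * Nv - beta_v * Sv * I / Nh - mu_v * Sv
    dIv = beta_v * Sv * I / Nh - mu_v * Iv         # vectors infected by humans
    return [dS, dI, dR, dSv, dIv]

Nh, Nv = 10_000, 50_000
y0 = [Nh - 10, 10, 0, Nv, 0]                       # 10 initial human cases
t = np.linspace(0, 120, 121)                       # days, dense discrete grid
sol = odeint(sir_si, y0, t, args=(0.3, 0.2, 0.1, 1 / 14, Nh, Nv))
```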

Keywords: Dengue disease, disease mapping, numerical analysis, SIR-SI differential equations.

396 A Review on Thermal Conductivity of Bio-Based Carbon Nanotubes

Authors: Gloria A. Adewumi, Andrew C. Eloka-Eboka, Freddie L. Inambao

Abstract:

Bio-based carbon nanotubes (CNTs) have received considerable research attention due to their comparative advantages of high stability, simplicity of use, low toxicity and overall environmental friendliness. New potentials for improvement in heat transfer applications are presented due to their high aspect ratio, high thermal conductivity and large specific surface area. Phonons have been identified as being responsible for thermal conduction in carbon nanotubes. Therefore, understanding the mechanism of heat conduction in CNTs involves investigating the differences between the various phonon modes and knowing which phonon modes play the dominant role. In this review, a number of different studies are referenced, and in addition, the role of the phonon relaxation rate, mainly controlled by boundary scattering and the three-phonon Umklapp scattering process, is investigated. Results show that the phonon modes are sensitive to a number of nanotube conditions such as diameter, length, temperature, defects and axial strain. At low temperatures (<100 K) the thermal conductivity increases with increasing temperature. A small nanotube size causes phonon quantization, which is evident in the thermal conductivity at low temperatures.

Keywords: Carbon nanotubes, phonons, thermal conductivity, umklapp process.

395 A Machine Learning Approach for Earthquake Prediction in Various Zones Based on Solar Activity

Authors: Viacheslav Shkuratskyy, Aminu Bello Usman, Michael O’Dea, Mujeeb Ur Rehman, Saifur Rahman Sabuj

Abstract:

This paper examines relationships between solar activity and earthquakes by applying machine learning techniques: K-nearest neighbour, support vector regression, random forest regression, and a long short-term memory network. Data from the SILSO World Data Center, the NOAA National Center, the GOES satellite, NASA OMNIWeb, and the United States Geological Survey were used for the experiment. The 23rd and 24th solar cycles, daily sunspot number, solar wind velocity, proton density, and proton temperature were all included in the dataset. The study also examined sunspots, solar wind, and solar flares, which all reflect solar activity, as well as the earthquake frequency distribution by magnitude and depth. The findings showed that the long short-term memory network model predicts earthquakes more accurately than the other models applied in the study, and that solar activity is more likely to affect earthquakes of lower magnitude and shallow depth than earthquakes of magnitude 5.5 or larger at intermediate and deep depths.

Keywords: K-Nearest Neighbour, Support Vector Regression, Random Forest Regression, Long Short-Term Memory Network, earthquakes, solar activity, sunspot number, solar wind, solar flares.

394 The Relationship of Private Savings and Economic Growth: Case of Croatia

Authors: Irena Palić

Abstract:

The main objective of the research in this paper is to empirically assess the causal relationship between private savings and economic growth in the Republic of Croatia. Household savings are approximated by household deposits in banks, while domestic income is approximated by industrial production volume indices. A Vector Autoregression (VAR) model and Granger causality tests are used in order to analyse the relationship between private savings and economic growth. Since ADF unit root tests have shown that both series are non-stationary in levels, the series are first differenced in order to become stationary. Therefore, the VAR model is estimated with the percentage change in private savings and the percentage change in domestic income, which can be interpreted as economic growth in the case of a positive percentage change in domestic income. The Granger causality test has shown that there is no causal relationship between private savings and economic growth in Croatia. The impulse response functions have shown that the impact of a shock in domestic income on the change in private savings is stronger than the impact of private savings on growth. Variance decompositions show that both economic growth and the change in private savings explain the largest part of their own forecast variance. The research has shown that the link between private savings and economic growth in Croatia is weak, which is in line with relevant empirical research in small open economies.
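
A minimal statsmodels sketch of this workflow (differencing, VAR estimation, Granger causality, impulse responses, variance decomposition), run on synthetic stand-in series rather than the Croatian deposit and industrial production data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Synthetic non-stationary stand-ins for household deposits and income.
rng = np.random.default_rng(4)
df = pd.DataFrame({
    "savings": rng.standard_normal(120).cumsum() + 100,
    "income": rng.standard_normal(120).cumsum() + 100,
})
growth = df.pct_change().dropna() * 100   # percentage changes (stationary)

model = VAR(growth)
res = model.fit(maxlags=4, ic="aic")      # lag order chosen by AIC
gc = res.test_causality("income", ["savings"], kind="f")
print(gc.summary())                       # Granger causality F-test
irf = res.irf(10)                         # impulse responses, 10 periods
fevd = res.fevd(10)                       # forecast error variance decomp.
```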

Keywords: Economic growth, Granger causality, innovation analysis, private savings, Vector Autoregression model.

393 Simulation of Lid Cavity Flow in Rectangular, Half-Circular and Beer Bucket Shapes using Quasi-Molecular Modeling

Authors: S. Kulsri, M. Jaroensutasinee, K. Jaroensutasinee

Abstract:

We developed a new method based on quasi-molecular modeling to simulate the cavity flow in three cavity shapes: rectangular, half-circular and beer-bucket, in cgs units. Each quasi-molecule was a group of particles that interacted in a fashion entirely analogous to classical Newtonian molecular interactions. When a cavity flow was simulated, the instantaneous velocity vector fields were obtained by using an inverse distance weighted interpolation method. In all three cavity shapes, the fluid motion rotated counter-clockwise. The velocity vector fields of the three cavity shapes showed a primary vortex located near the upstream corners at times t ~ 0.500 s, t ~ 0.450 s and t ~ 0.350 s, respectively. The configurational kinetic energy of the cavities increased with time until it reached a maximum at t ~ 0.02 s, and then decreased as time increased. The rectangular cavity system showed the lowest kinetic energy, while the half-circular cavity system showed the highest. The kinetic energy of the rectangular, beer-bucket and half-circular cavities fluctuated about stable average values of 35.62 × 10³, 38.04 × 10³ and 40.80 × 10³ ergs/particle, respectively. This indicates that the half-circular shape is the most suitable for a shrimp pond, since the water flows best in it compared with the rectangular and beer-bucket shapes.
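
The inverse distance weighted interpolation step can be sketched in a few lines; the particle positions, velocities and the power parameter below are illustrative.

```python
import numpy as np

# Inverse distance weighted (IDW) interpolation of particle velocities onto
# a grid point, as used to recover the instantaneous velocity vector fields.
def idw_velocity(grid_pt, particle_pos, particle_vel, power=2.0, eps=1e-12):
    d = np.linalg.norm(particle_pos - grid_pt, axis=1)
    w = 1.0 / (d**power + eps)               # closer particles weigh more
    return (w[:, None] * particle_vel).sum(0) / w.sum()

rng = np.random.default_rng(5)
pos = rng.random((200, 2))                   # particle positions in the cavity
vel = rng.standard_normal((200, 2))          # particle velocities
v = idw_velocity(np.array([0.5, 0.5]), pos, vel)
print("interpolated velocity at cavity center:", v)
```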

Keywords: Quasi-molecular modelling, particle modelling, lid driven cavity flow.

392 Feature Reduction of Nearest Neighbor Classifiers using Genetic Algorithm

Authors: M. Analoui, M. Fadavi Amiri

Abstract:

The design of a pattern classifier includes an attempt to select, among a set of possible features, a minimum subset of weakly correlated features that better discriminate the pattern classes. This is usually a difficult task in practice, normally requiring the application of heuristic knowledge about the specific problem domain. The selection and quality of the features representing each pattern have a considerable bearing on the success of subsequent pattern classification. Feature extraction is the process of deriving new features from the original features in order to reduce the cost of feature measurement, increase classifier efficiency, and allow higher classification accuracy. Many current feature extraction techniques involve linear transformations of the original pattern vectors to new vectors of lower dimensionality. While this is useful for data visualization and increasing classification efficiency, it does not necessarily reduce the number of features that must be measured, since each new feature may be a linear combination of all of the features in the original pattern vector. In this paper a new approach to feature extraction is presented in which feature selection, feature extraction, and classifier training are performed simultaneously using a genetic algorithm. In this approach each feature value is first normalized by a linear equation, then scaled by the associated weight prior to training, testing, and classification. A k-NN classifier is used to evaluate each set of feature weights. The genetic algorithm optimizes a vector of feature weights, which are used to scale the individual features in the original pattern vectors in either a linear or a nonlinear fashion. With this approach, the number of features used in classification can be greatly reduced.
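
A toy version of the simultaneous weight-optimization idea: a small genetic algorithm evolves a feature-weight vector whose fitness is k-NN cross-validation accuracy, so near-zero weights effectively deselect features. The population size, rates and dataset are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_wine(return_X_y=True)
X = (X - X.mean(0)) / X.std(0)                 # normalize each feature
rng = np.random.default_rng(6)

def fitness(w):
    # k-NN accuracy on weight-scaled features evaluates each chromosome.
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X * w, y, cv=3).mean()

pop = rng.random((20, X.shape[1]))
for gen in range(30):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]    # keep the fittest half
    mask = rng.integers(0, 2, (10, X.shape[1]))
    children = np.where(mask, parents[rng.integers(0, 10, 10)],
                        parents[rng.integers(0, 10, 10)])    # crossover
    children += 0.1 * rng.standard_normal(children.shape)    # mutation
    pop = np.vstack([parents, np.clip(children, 0, None)])

best = pop[np.argmax([fitness(w) for w in pop])]
print("features effectively pruned:", (best < 0.05).sum())
```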

Keywords: Feature reduction, genetic algorithm, pattern classification, nearest neighbor rule classifiers (k-NNR).

391 A New Image Psychovisual Coding Quality Measurement based Region of Interest

Authors: M. Nahid, A. Bajit, A. Tamtaoui, E. H. Bouyakhf

Abstract:

To model the human visual system (HVS) in the region of interest, we propose a new objective quality metric adapted to foveated wavelet-based image compression, which exploits a foveation filter implemented in the DWT domain and based on the point and region of fixation of the human eye. This model is then used to predict the visible differences between an original and a compressed image with respect to this fixation region, and it yields an adapted, local error measure by removing all peripheral errors. The technique, which we call foveation wavelet visible difference prediction (FWVDP), is demonstrated on a number of noisy images, all of which have the same local peak signal-to-noise ratio (PSNR) but visibly different errors. We show that the FWVDP reliably predicts the fixation areas of interest where the error is masked, due to high image contrast, and the areas where the error is visible, due to low image contrast. The paper also suggests ways in which the FWVDP can be used to determine a visually optimal quantization strategy for foveation-based wavelet coefficients and to produce a quantitative local measure of image quality.

Keywords: Human visual system, image quality, image compression, foveation wavelet, region of interest (ROI).

390 Enhancing Temporal Extrapolation of Wind Speed Using a Hybrid Technique: A Case Study in West Coast of Denmark

Authors: B. Elshafei, X. Mao

Abstract:

The demand for renewable energy is increasing significantly, and major investments are being made in the wind power generation industry as a leading source of clean energy. The wind energy sector is entirely dependent on, and driven by, the prediction of wind speed, which by the nature of wind is highly stochastic and widely random. This study employs deep multi-fidelity Gaussian process regression to predict wind speeds over medium-term time horizons. Data from the RUNE experiment on the west coast of Denmark were provided by the Technical University of Denmark and represent the wind speed across the study area for the period between December 2015 and March 2016. The study investigates the effect of pre-processing the data by denoising the signal using the empirical wavelet transform (EWT), and of engaging the vector components of wind speed to increase the number of input data layers for data fusion using deep multi-fidelity Gaussian process regression (GPR). The outcomes were compared using the root mean square error (RMSE). The results demonstrated a significant increase in prediction accuracy: using the vector components of the wind speed as additional predictors yields more accurate predictions than strategies that ignore them, reflecting the importance of including all sub-data and of pre-processing signals in wind speed forecasting models.
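
As a rough single-fidelity stand-in for the paper's deep multi-fidelity GPR, the sketch below feeds the u/v wind components alongside time into scikit-learn's Gaussian process regressor and scores the next-step speed prediction by RMSE; the synthetic signal and kernel are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic wind record standing in for the RUNE measurements.
rng = np.random.default_rng(7)
t = np.linspace(0, 100, 500)
u = np.sin(0.3 * t) + 0.1 * rng.standard_normal(t.size)   # east-west comp.
v = np.cos(0.2 * t) + 0.1 * rng.standard_normal(t.size)   # north-south comp.
speed = np.hypot(u, v)

X = np.column_stack([t[:-1], u[:-1], v[:-1]])   # predictors incl. components
y = speed[1:]                                    # next-step wind speed

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X[:400], y[:400])
rmse = np.sqrt(np.mean((gpr.predict(X[400:]) - y[400:]) ** 2))
print(f"RMSE on held-out horizon: {rmse:.3f}")
```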

Keywords: Data fusion, Gaussian process regression, signal denoise, temporal extrapolation.

389 Comparative Evaluation of Accuracy of Selected Machine Learning Classification Techniques for Diagnosis of Cancer: A Data Mining Approach

Authors: Rajvir Kaur, Jeewani Anupama Ginige

Abstract:

With recent trends in Big Data and advancements in Information and Communication Technologies, the healthcare industry is transitioning from being clinician-oriented to technology-oriented. Many people around the world die of cancer because the disease was not diagnosed at an early stage. Nowadays, computational methods in the form of Machine Learning (ML) are used to develop automated decision support systems that can diagnose cancer with high confidence in a timely manner. This paper carries out a comparative evaluation of a selected set of ML classifiers on two existing datasets: breast cancer and cervical cancer. The ML classifiers compared in this study are Decision Tree (DT), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Logistic Regression, Ensemble (Bagged Tree) and Artificial Neural Networks (ANN). The evaluation is carried out based on the standard evaluation metrics of Precision (P), Recall (R), F1-score and Accuracy. The experimental results show that the ANN achieved the highest accuracy (99.4%) when tested with the breast cancer dataset. On the other hand, when these ML classifiers were tested with the cervical cancer dataset, the Ensemble (Bagged Tree) technique gave better accuracy (93.1%) than the other classifiers.

Keywords: Artificial neural networks, breast cancer, cancer dataset, classifiers, cervical cancer, F-score, logistic regression, machine learning, precision, recall, support vector machine.

388 Efficiency of Robust Heuristic Gradient Based Enumerative and Tunneling Algorithms for Constrained Integer Programming Problems

Authors: Vijaya K. Srivastava, Davide Spinello

Abstract:

This paper presents the performance of two robust gradient-based heuristic optimization procedures, based on 3^n enumeration and on a tunneling approach, for seeking the global optimum of constrained integer problems. Both procedures consist of two distinct phases for locating the global optimum of integer problems with a linear or non-linear objective function subject to linear or non-linear constraints. In both procedures, in the first phase, a local minimum of the function is found using the gradient approach coupled with hemstitching moves when a constraint is violated, in order to return the search to the feasible region. In the second phase, the first procedure examines the 3^n integer combinations on the boundary of, and within, the hypercube volume encompassing the result from the first phase, while the second procedure constructs a tunneling function at the local minimum of the first phase so as to find another point on the other side of the barrier where the function value is approximately the same. In the next cycle, the search for the global optimum recommences in both procedures using this newly found point as the starting vector. The search continues, repeated for various step sizes along the function gradient as well as along the vector normal to the violated constraints, until no improvement in the optimum value is found. The results from both proposed optimization methods are presented and compared with those provided by the popular MS Excel Solver included in the MS Office suite, and with other published results.
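
A sketch of the tunneling phase, assuming the classical Levy-Montalvo tunneling function (the abstract does not specify its exact form): after phase 1 finds a local minimum, the deflated objective is minimized to escape its basin.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    return np.sin(3 * x[0]) + 0.1 * x[0] ** 2     # toy multimodal objective

def tunnel(x, x_star, f_star, lam=1.0):
    # Levy-Montalvo form: a pole at x_star deflates the found minimum, so a
    # zero/negative value marks a point with f(x) <= f_star in a new basin.
    return (f(x) - f_star) / (np.sum((x - x_star) ** 2) ** lam)

x_star = minimize(f, x0=np.array([0.5])).x        # phase 1: local minimum
f_star = f(x_star)

# Phase 2: minimize the tunneling function from a perturbed start to move
# past the barrier surrounding x_star.
res = minimize(lambda x: tunnel(x, x_star, f_star), x0=x_star + 0.8)
print("start vector for the next local search:", res.x)
```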

Keywords: Constrained integer problems, enumerative search algorithm, Heuristic algorithm, tunneling algorithm.

387 Parallel Pipelined Conjugate Gradient Algorithm on Heterogeneous Platforms

Authors: Sergey Kopysov, Nikita Nedozhogin, Leonid Tonkov

Abstract:

The article presents a parallel iterative solver for large sparse linear systems which can be used on a heterogeneous platform. Traditionally, the problem of solving linear systems does not scale well on clusters containing multiple Central Processing Units (multi-CPU clusters) or multiple Graphics Processing Units (multi-GPU clusters). For example, most attempts to implement the classical conjugate gradient method at best completed in the same amount of time as the problem was enlarged. The paper proposes the pipelined variant of the conjugate gradient method (PCG), a formulation that is potentially better suited for hybrid CPU/GPU computing since it requires only one synchronization point per iteration, instead of two for standard CG (Conjugate Gradient). The standard and pipelined CG methods need the vector entries generated by the current GPU and by other GPUs for the matrix-vector product, so communication between GPUs becomes a major performance bottleneck on a multi-GPU cluster. The article presents an approach to minimize the communication between the parallel parts of the algorithm. Additionally, computation and communication can be overlapped to reduce the impact of data exchange. Using the pipelined version of the CG method with one synchronization point, the possibility of asynchronous calculations and communications, and load balancing between the CPU and GPU, makes solving large linear systems scalable. The algorithm is implemented with the combined use of MPI, OpenMP and CUDA. We show that an almost optimal speedup may be reached on 8 CPUs/2 GPUs (relative to single-GPU execution). The parallelized solver achieves a speedup of up to 5.49 times on 16 NVIDIA Tesla GPUs, as compared to one GPU.
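
A serial NumPy sketch of the pipelined CG recurrences (the unpreconditioned Ghysels-Vanroose form, which the paper's PCG resembles): the two dot products are grouped so that a distributed implementation could fuse them into the single global reduction per iteration and overlap it with the matrix-vector product.

```python
import numpy as np

def pipelined_cg(A, b, tol=1e-8, maxit=500):
    x = np.zeros_like(b)
    r = b - A @ x
    w = A @ r
    z, s, p = (np.zeros_like(b) for _ in range(3))
    gamma_old = alpha = None
    bnorm = np.linalg.norm(b)
    for _ in range(maxit):
        gamma, delta = r @ r, w @ r   # both dot products: in a parallel run,
        if np.sqrt(gamma) < tol * bnorm:   # one fused global reduction
            break
        n = A @ w                     # matvec can overlap that reduction
        if gamma_old is None:
            beta, alpha = 0.0, gamma / delta
        else:
            beta = gamma / gamma_old
            alpha = gamma / (delta - beta * gamma / alpha)
        z = n + beta * z              # z tracks A*s
        s = w + beta * s              # s tracks A*p
        p = r + beta * p              # search direction
        x = x + alpha * p
        r = r - alpha * s
        w = w - alpha * z             # w tracks A*r without a second matvec
        gamma_old = gamma
    return x

A = np.diag(np.arange(1.0, 101.0))    # simple SPD test matrix
b = np.ones(100)
x = pipelined_cg(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```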

Keywords: Conjugate Gradient, GPU, parallel programming, pipelined algorithm.

386 Evaluation of Ensemble Classifiers for Intrusion Detection

Authors: M. Govindarajan

Abstract:

One of the major developments in machine learning in the past decade is the ensemble method, which finds a highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed: a homogeneous ensemble classifier using bagging and a heterogeneous ensemble classifier using arcing, and their performance is analyzed in terms of accuracy. A classifier ensemble is designed using a Radial Basis Function (RBF) network and a Support Vector Machine (SVM) as base classifiers. The feasibility and the benefits of the proposed approaches are demonstrated by means of standard intrusion detection datasets. The main originality of the proposed approach lies in three main parts: the preprocessing phase, the classification phase, and the combining phase. A wide range of comparative experiments is conducted on standard intrusion detection datasets. The performance of the proposed homogeneous and heterogeneous ensemble classifiers is compared to the performance of other standard homogeneous and heterogeneous ensemble methods: the standard homogeneous ensemble methods include error-correcting output codes (ECOC) and Dagging, and the heterogeneous ensemble methods include majority voting and stacking. The proposed ensemble methods provide a significant improvement in accuracy compared to individual classifiers; the proposed bagged RBF and SVM perform significantly better than ECOC and Dagging, and the proposed hybrid RBF-SVM performs significantly better than voting and stacking. Heterogeneous models also exhibit better results than homogeneous models on standard intrusion detection datasets.
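
A scikit-learn sketch of the two ensemble styles on synthetic stand-in data; since scikit-learn has no RBF network, an MLP substitutes for the RBF base classifier, bagging illustrates the homogeneous ensemble, and hard voting illustrates a hybrid one (arcing and stacking are omitted).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for an intrusion detection dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Homogeneous ensemble: bagging one base learner (SVM here).
bagged_svm = BaggingClassifier(SVC(), n_estimators=10, random_state=0)
bagged_svm.fit(X_tr, y_tr)

# Heterogeneous ensemble: majority voting over different base learners
# (the MLP stands in for the paper's RBF network).
hybrid = VotingClassifier([("rbf_net", MLPClassifier(max_iter=500)),
                           ("svm", SVC())], voting="hard")
hybrid.fit(X_tr, y_tr)
print("bagged SVM:", bagged_svm.score(X_te, y_te),
      " hybrid:", hybrid.score(X_te, y_te))
```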

Keywords: Data mining, ensemble, radial basis function, support vector machine, accuracy.

385 Development of a Jacobian Model for a 4-Axis Indigenously Developed SCARA System

Authors: T.C.Manjunath, C. Ardil

Abstract:

This paper deals with the development of a Jacobian model for a 4-axis SCARA robot arm indigenously developed in the laboratory. This model is used to study the relation between the velocities and the forces in the robot while it performs pick-and-place operations.
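
A symbolic sketch of such a Jacobian for a generic R-R-P-R SCARA layout with SymPy; the link lengths, joint arrangement and task coordinates below are generic assumptions rather than the indigenous arm's actual parameters.

```python
import sympy as sp

# Generic 4-axis SCARA: two planar revolute joints, a vertical prismatic
# joint d3, and a tool-roll joint theta4.
t1, t2, d3, t4, a1, a2 = sp.symbols("theta1 theta2 d3 theta4 a1 a2")

x = a1 * sp.cos(t1) + a2 * sp.cos(t1 + t2)
y = a1 * sp.sin(t1) + a2 * sp.sin(t1 + t2)
z = -d3                                 # tool drops with the prismatic joint
roll = t1 + t2 + t4                     # tool orientation about z

J = sp.Matrix([x, y, z, roll]).jacobian([t1, t2, d3, t4])
sp.pprint(sp.simplify(J))

# Velocity relation: xdot = J * qdot; static force relation: tau = J.T * F.
```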

Keywords: SCARA, Jacobian, tool configuration vector, computer control, Visual Basic, interfacing, drivers.

384 Dengue Disease Mapping with Standardized Morbidity Ratio and Poisson-gamma Model: An Analysis of Dengue Disease in Perak, Malaysia

Authors: N. A. Samat, S. H. Mohd Imam Ma’arof

Abstract:

Dengue disease is an infectious vector-borne viral disease that is commonly found in tropical and sub-tropical regions around the world, especially in urban and semi-urban areas, including in Malaysia. There is currently no available vaccine or chemotherapy for the prevention or treatment of dengue disease, so prevention and treatment depend on vector surveillance and control measures. Disease risk mapping has been recognized as an important tool in prevention and control strategies for diseases. The choice of statistical model used for relative risk estimation is important, as a good model will subsequently produce a good disease risk map. Therefore, the aim of this study is to estimate the relative risk for dengue disease based initially on the most common statistic used in disease mapping, the Standardized Morbidity Ratio (SMR), and on one of the earliest applications of Bayesian methodology, the Poisson-gamma model. This paper begins by providing a review of the SMR method, which we then apply to dengue data for Perak, Malaysia. We then fit an extension of the SMR method, the Poisson-gamma model. Both sets of results are displayed and compared using graphs, tables and maps. The results of the analysis show that the latter method gives better relative risk estimates than the SMR. The Poisson-gamma model has been demonstrated to overcome the problem the SMR faces when there are no observed dengue cases in certain regions. However, covariate adjustment in this model is difficult, and there is no possibility of allowing for spatial correlation between risks in adjacent areas. The drawbacks of this model have motivated many researchers to propose other alternative methods for estimating the risk.
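
A numerical sketch of both estimators on made-up areal counts: the SMR is observed over expected cases, and the Poisson-gamma posterior mean shrinks it toward the prior mean, which removes the degenerate zero estimates the abstract mentions.

```python
import numpy as np

# Made-up placeholders for district-level dengue counts and populations.
obs = np.array([12, 0, 45, 7, 23])             # observed cases per region
pop = np.array([50e3, 8e3, 120e3, 30e3, 90e3])
exp = pop * obs.sum() / pop.sum()              # expected cases (indirect std.)

smr = obs / exp                                # zero cases -> SMR of exactly 0

# Poisson-gamma: with a Gamma(a, b) prior on relative risk, the posterior is
# Gamma(a + obs, b + exp), whose mean shrinks the SMR toward a/b.
a, b = 1.0, 1.0                                # illustrative prior
risk_pg = (a + obs) / (b + exp)
print("SMR:          ", np.round(smr, 2))
print("Poisson-gamma:", np.round(risk_pg, 2))
```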

Keywords: Dengue disease, Disease mapping, Standardized Morbidity Ratio, Poisson-gamma model, Relative risk.

383 Semantic Enhanced Social Media Sentiments for Stock Market Prediction

Authors: K. Nirmala Devi, V. Murali Bhaskaran

Abstract:

Traditional document representation for classification follows the Bag of Words (BoW) approach to represent term weights. The conventional method uses the Vector Space Model (VSM) to exploit the statistical information of the terms in the documents, but it fails to capture both the semantic information and the order of the terms. The phrase-based approach preserves the order of the terms in the documents, but not the semantics behind the words. Therefore, a semantic concept-based approach is used in this paper to enhance the semantics by incorporating ontology information. In this paper a novel method is proposed to forecast the intraday directional movement of stock market prices based on sentiments from Twitter and Money Control news articles. Stock market forecasting is a very difficult and highly complicated task because it is affected by many factors such as economic conditions, political events and investor sentiment. Stock market series are generally dynamic, nonparametric, noisy and chaotic by nature. Sentiment analysis, along with the wisdom of crowds, can automatically compute the collective intelligence of future performance in many areas such as stock markets, box office sales and election outcomes. The proposed method utilizes collective sentiments for the stock market to predict directional movements of the stock price. Using the Granger causality test, we show that the collective sentiments in the above social media have powerful predictive ability for up/down stock price movements.

Keywords: Bag of Words, collective sentiments, ontology, semantic relations, sentiments, social media, stock prediction, Twitter, Vector Space Model, wisdom of crowds.

382 Transformations between Bivariate Polynomial Bases

Authors: Dimitris Varsamis, Nicholas Karampetakis

Abstract:

It is well known that any interpolating polynomial p(x, y) in the vector space P_{n,m} of two-variable polynomials with degree less than n in x and less than m in y has various representations that depend on the basis of P_{n,m} that we select, i.e. the monomial, Newton, Lagrange basis, etc. The aim of this short note is twofold: a) to present transformations between the coordinates of the polynomial p(x, y) in the aforementioned bases, and b) to present transformations between these bases.
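
A NumPy sketch of one such transformation, monomial to Lagrange coordinates on a tensor grid: in the Lagrange basis the coordinates of p are its values at the nodes, so the change of basis is a bivariate Vandermonde evaluation (the nodes and degrees below are arbitrary).

```python
import numpy as np

n, m = 3, 2                                    # degree < 3 in x, < 2 in y
xs, ys = np.array([0.0, 1.0, 2.0]), np.array([0.0, 1.0])

# p(x, y) = sum_{i<n, j<m} c[i, j] * x^i * y^j, arbitrary coefficients.
c = np.arange(1.0, n * m + 1).reshape(n, m)

Vx = xs[:, None] ** np.arange(n)               # n x n Vandermonde in x
Vy = ys[:, None] ** np.arange(m)               # m x m Vandermonde in y
lagrange_coords = Vx @ c @ Vy.T                # values p(x_k, y_l) on the grid

# Inverse transform: Lagrange coordinates back to monomial coefficients.
c_back = np.linalg.solve(Vx, np.linalg.solve(Vy, lagrange_coords.T).T)
np.testing.assert_allclose(c_back, c)          # round trip recovers c
```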

Keywords: Bivariate interpolation polynomial, Polynomial basis, Transformations.

381 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment

Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee

Abstract:

Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches focusing on Recognizing Textual Entailment (RTE). These include models based on lexical similarities, models based on formal reasoning, and most recently deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches. RNNs are known to be well suited for sequence modeling, whilst CNNs are suited for the extraction of n-gram features through their filters and can learn ranges of relations via the pooling mechanism. We combine the strengths of RNNs and CNNs as stated above to present a unified model for the RTE task. Our model combines relation vectors computed from the phrasal representations of each sentence with the final encoded sentence representations. Firstly, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations, from which the first relation vector is computed. Secondly, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short Term Memory (Bi-LSTM) network to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used, in the same fashion as an attention mechanism over the Bi-LSTM outputs, to yield the final sentence representations for classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.
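
A minimal PyTorch sketch of the encoder stack described above (embedding → convolution for phrase features → Bi-LSTM); the dimensions are illustrative, and the relation-vector and attention steps are omitted for brevity.

```python
import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    def __init__(self, vocab=10_000, emb=128, phrase=128, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        # Width-3 convolution extracts n-gram (phrase-level) features.
        self.conv = nn.Conv1d(emb, phrase, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(phrase, hidden, bidirectional=True,
                              batch_first=True)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        e = self.embed(tokens).transpose(1, 2)  # (batch, emb, seq)
        phrases = torch.relu(self.conv(e)).transpose(1, 2)
        out, _ = self.bilstm(phrases)           # (batch, seq, 2*hidden)
        return phrases, out                     # phrase and sentence features

enc = SentenceEncoder()
premise = torch.randint(0, 10_000, (4, 20))     # a toy batch of 4 sentences
phrases, states = enc(premise)
print(phrases.shape, states.shape)
```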

Keywords: Deep neural models, natural language inference, recognizing textual entailment, sentence-to-sentence relation.

380 Exchange Rate Volatility, Its Determinants and Effects on the Manufacturing Sector in Nigeria

Authors: Chimaobi V. Okolo, Onyinye S. Ugwuanyi, Kenneth A. Okpala

Abstract:

This study evaluated the effect of exchange rate volatility on the manufacturing sector of Nigeria. The flow and stock market theories of exchange rate determination were adopted, considering macroeconomic determinants such as the balance of trade, trade openness, and net international investment. Furthermore, the influence of changes in the parallel exchange rate, official exchange rate and real effective exchange rate on manufacturing sector output was modeled. Vector autoregression techniques and a vector error correction mechanism were adopted to explore the macroeconomic determinants of exchange rate fluctuation in Nigeria and to examine the influence of exchange rate volatility on manufacturing sector output. The exchange rate showed an unstable and volatile movement in Nigeria. The official exchange rate significantly impacted the manufacturing sector, and a shock to previous manufacturing sector output caused 60.76% of the fluctuation in manufacturing sector output in Nigeria. Trade balance, trade openness and net international investment did not significantly determine the exchange rate in Nigeria; however, own shock accounted for about 95% of the variation of exchange rate fluctuation in both the short run and the long run. Among the other macroeconomic variables, net international investment accounted for about 2.85% of the variation of the real effective exchange rate fluctuation in the short run and the long run. Monetary authorities should maintain exchange rate stability through proper management so as to encourage local production, and the government should formulate and implement policies that develop other sectors of the economy, as this will widen the country's revenue base, reduce over-reliance on the oil sector for foreign exchange earnings and, in turn, reduce the shocks on the domestic economy.

Keywords: Exchange rate volatility, exchange rate determinants, manufacturing sector, official exchange rate, parallel exchange rate, real effective exchange rate.

379 New Adaptive Linear Discriminant Analysis for Face Recognition with SVM

Authors: Mehdi Ghayoumi

Abstract:

We have applied a new accelerated algorithm for linear discriminant analysis (LDA) to face recognition with a support vector machine. The new algorithm has the advantage of optimal selection of the step size. The gradient descent method and the new algorithm have been implemented in software and evaluated on the Yale Face Database B. The eigenfaces obtained by these approaches have been used to train a k-NN classifier. The recognition rate of the new algorithm is compared with that of the gradient descent method.

Keywords: LDA, adaptive, SVM, face recognition.

378 Image Transmission via Iterative Cellular-Turbo System

Authors: Ersin Gose, Kenan Buyukatak, Onur Osman, Osman N. Ucan

Abstract:

To compress 2-D images while improving bit error performance and also enhancing the images, a new scheme, called the Iterative Cellular-Turbo System (IC-TS), is introduced. In IC-TS, the original image is partitioned into 2^N quantization levels, where N is the number of bit planes. Each of the N bit planes is then coded by a Turbo encoder and transmitted over an Additive White Gaussian Noise (AWGN) channel. At the receiver side, the bit planes are re-assembled taking into consideration the neighborhood relationships of pixels in 2-D images. Each of the noisy bit-plane values of the image is evaluated iteratively using the IC-TS structure, which is composed of an equalization block, the Iterative Cellular Image Processing Algorithm (ICIPA) and a Turbo decoder. In IC-TS, there is an iterative feedback link between ICIPA and the Turbo decoder. ICIPA uses the mean and standard deviation of the estimated values of each pixel neighborhood. The scheme achieves highly satisfactory results in both Bit Error Rate (BER) and image enhancement performance for Signal-to-Noise Ratio (SNR) values below -1 dB, compared to the traditional turbo coding scheme and 2-D filtering applied separately. Compression can also be achieved with IC-TS systems: less memory storage is used and the data rate is increased up to N-1 times by simply choosing any number of bit slices, sacrificing resolution. Hence, it is concluded that the IC-TS system is a promising approach for 2-D image transmission, recovery of noisy signals and image compression.
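
The bit-plane partitioning at the heart of the scheme is easy to sketch; the Turbo coding, AWGN channel and ICIPA stages are omitted here.

```python
import numpy as np

# Bit-plane decomposition/reassembly: an 8-bit image becomes N = 8 binary
# planes, each of which would be Turbo-encoded separately.
rng = np.random.default_rng(8)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)

planes = [(img >> k) & 1 for k in range(8)]          # LSB plane first
rebuilt = sum(p.astype(np.uint16) << k for k, p in enumerate(planes))
assert np.array_equal(rebuilt.astype(np.uint8), img)

# Dropping low-order planes trades resolution for rate (the compression mode).
coarse = sum(p.astype(np.uint16) << k for k, p in enumerate(planes) if k >= 4)
```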

Keywords: Iterative Cellular Image Processing Algorithm (ICIPA), Turbo Coding, Iterative Cellular Turbo System (IC-TS), Image Compression.

377 Impedance of an Encircling Coil due to a Cylindrical Tube with Varying Properties

Authors: Valentina Koliskina

Abstract:

The change in impedance of an encircling coil is obtained in the present paper for the case where the electric conductivity and magnetic permeability of a metal cylindrical tube depend on the radial coordinate. The system of equations for the vector potential is solved by means of the Fourier cosine transform. The solution is expressed in terms of an improper integral containing modified Bessel functions of complex order.

Keywords: Eddy currents, magnetic permeability, Bessel functions.

376 Machine Learning Techniques in Bank Credit Analysis

Authors: Fernanda M. Assef, Maria Teresinha A. Steiner

Abstract:

The aim of this paper is to compare and discuss classifier algorithm options for credit risk assessment by applying different Machine Learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank, where 2,600 clients are classified as non-defaulters, 1,551 are classified as defaulters and 1,281 are temporarily defaulters, meaning that the clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR) and Support Vector Machines (SVM). For each method, different parameters were analyzed in order to obtain different results, and the best of each technique was compared. Initially, the data were coded with thermometer coding (for numerical attributes) or dummy coding (for nominal attributes). The methods were then evaluated for each parameter, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives and true negatives. This comparison showed that the best method, in terms of accuracy, was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters and 75.37% for the temporarily defaulter classification). However, the best accuracy does not always represent the best technique. For instance, in the classification of temporarily defaulters, this technique was surpassed in terms of false positives by SVM, which had the lowest rate (0.07%) of false positive classifications. All these intrinsic details are discussed considering the results found, and an overview of what was presented is given in the conclusion of this study.

Keywords: Artificial Neural Networks, ANNs, classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines.

375 Analysis of Filtering in Stochastic Systems on Continuous-Time Memory Observations in the Presence of Anomalous Noises

Authors: S. Rozhkova, O. Rozhkova, A. Harlova, V. Lasukov

Abstract:

For the optimal mean-square unbiased filter, in the case where anomalous noises act in the observation memory channel, we prove that the filter is insensitive to inaccurate knowledge of the anomalous noise intensity matrix and that it is equivalent to a truncated filter built only from the non-anomalous components of the observation vector.

Keywords: Mathematical expectation, filtration, anomalous noise, memory.

374 Rice cDNA Encoding PROLM is Capable of Rescuing Salt Sensitive Yeast Phenotypes G19 and Axt3K from Salt Stress

Authors: Prasad Senadheera, Younousse Saidi, Frans JM Maathuis

Abstract:

A rice seed expression (cDNA) library in the Lambda ZAP II® phage, constructed from developing grain 10-20 days after flowering, was transformed into yeast for functional complementation assays in three salt-sensitive yeast mutants: S. cerevisiae strains CY162, G19 and Axt3K. Transformed G19 and Axt3K cells carrying the pYES vector with cDNA inserts showed enhanced tolerance compared to those with the empty pYES vector. Sequencing of the cDNA inserts revealed that they encode putative proteins with sequence homologous to the rice putative protein PROLM24 (Os06g31070), a prolamin precursor. Expression of this cDNA did not affect yeast growth in the absence of salt. Axt3K and G19 strains expressing PROLM24 were able to grow at up to 400 mM and 600 mM NaCl, respectively. Similarly, the Axt3K mutant expressing PROLM24 showed a comparatively higher growth rate in medium with excess LiCl (50 mM). The observation that expression of PROLM24 rescued the salt-sensitive phenotypes of G19 and Axt3K indicates the existence of a regulatory system that ameliorates the effect of salt stress in the transformed yeast mutants. However, the exact function of the cDNA sequence, which shows partial sequence homology to yeast UTR1, is not clear. Although UTR1 is involved in ferrous uptake and iron homeostasis in yeast cells, there is no evidence of its role in Na+ homeostasis. The absence of transmembrane regions in the Os06g31070 protein indicates that salt tolerance is achieved not through direct functional complementation of the mutant genes but through an alternative mechanism.

Keywords: Rice seed expression, salt stress, prolamin, salinity tolerance, Oryza sativa.

373 Neural Network Based Predictive DTC Algorithm for Induction Motors

Authors: N. Vahdatifar, S. S. Mortazavi, R. Kianinezhad

Abstract:

In this paper, a neural network based predictive direct torque control (DTC) algorithm is proposed as an alternative to classical approaches. An appropriate feed-forward network is chosen and, based on its estimate of the electromagnetic torque derivative, the optimal stator voltage vector to be applied to the induction motor (by the inverter) is determined. Moreover, an appropriate torque and flux observer is proposed.

Keywords: Neural networks, predictive DTC.

372 A Perceptually Optimized Wavelet Embedded Zero Tree Image Coder

Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf

Abstract:

In this paper, we propose a Perceptually Optimized Embedded ZeroTree Image Coder (POEZIC) that introduces a perceptual weighting of the wavelet transform coefficients prior to SPIHT encoding, in order to reach a targeted bit rate with improved perceptual quality relative to the coding quality obtained using the SPIHT algorithm alone. The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the HVS, which plays an important role in our POEZIC quality assessment. Our POEZIC coder is based on a vision model that incorporates various masking effects of human visual system (HVS) perception. Thus, our coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) luminance masking and contrast masking, 2) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting, and 3) the Wavelet Error Sensitivity (WES), used to reduce the perceptual quantization errors. The new perceptually optimized codec has the same complexity as the original SPIHT technique. However, the experimental results show that our coder demonstrates very good performance in terms of quality measurement.

Keywords: DWT, linear-phase 9/7 filter, 9/7 wavelet error sensitivity (WES), CSF implementation approaches, just noticeable difference (JND), luminance masking, contrast masking, standard SPIHT, objective quality measure, probability score (PS).
