Search results for: gaussian selection operator
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2982

2922 Weighted Rank Regression with Adaptive Penalty Function

Authors: Kang-Mo Jung

Abstract:

The use of regularization in statistical methods has become popular. The least absolute shrinkage and selection operator (LASSO) framework has become the standard tool for sparse regression. However, it is well known that the LASSO is sensitive to outliers and leverage points. We propose a new robust estimator composed of a weighted loss function of the pairwise differences of residuals and an adaptive penalty function that regulates the tuning parameter for each variable. Rank regression is resistant to regression outliers, but not to leverage points. By adopting a weighted loss function, the proposed method is robust to leverage points in the predictor variables. Furthermore, the adaptive penalty function yields good statistical properties in variable selection, such as the oracle property and consistency. We develop an efficient algorithm to compute the proposed estimator using basic functions in the R language, with the optimal tuning parameter chosen by the Bayesian information criterion (BIC). Numerical simulation shows that the proposed estimator is effective for analyzing both real and contaminated data sets.
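
As a point of reference for the framework described above, the sketch below shows a LASSO fit with a BIC-chosen tuning parameter in Python with scikit-learn. It illustrates only the baseline being robustified, not the authors' weighted rank estimator, and the data are synthetic.

```python
# Minimal sketch: a BIC-tuned LASSO baseline, NOT the authors' weighted
# rank estimator; it only illustrates choosing the tuning parameter by BIC.
import numpy as np
from sklearn.linear_model import LassoLarsIC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
beta = np.array([3.0, 1.5, 0, 0, 2.0, 0, 0, 0, 0, 0])   # sparse ground truth
y = X @ beta + rng.normal(scale=1.0, size=200)

model = LassoLarsIC(criterion="bic").fit(X, y)           # BIC selects alpha
print("selected alpha:", model.alpha_)
print("nonzero coefficients:", np.flatnonzero(model.coef_))
```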

Keywords: adaptive penalty function, robust penalized regression, variable selection, weighted rank regression

Procedia PDF Downloads 450
2921 Determining of Importance Level of Factors Affecting Job Selection with the Method of AHP

Authors: Nurullah Ekmekci, Ömer Akkaya, Kazım Karaboğa, Mahmut Tekin

Abstract:

Job selection is one of the most important decisions people make, affecting how useful they can be to themselves and to society. There are many criteria to consider in job selection, and this multitude of criteria makes it a multi-criteria decision-making (MCDM) problem. In this study, job selection is treated as a multi-criteria decision-making problem and solved by the Analytic Hierarchy Process (AHP), one of the multi-criteria decision-making methods. A survey containing five job selection criteria (ease of finding a job, salary, social security, the reputation of the job in the community, and the degree of difficulty of the work) drawn from the many possible criteria, and four job alternatives (being an academician, working in the civil service, working in the private sector, and running one's own business), was administered to students of the Selcuk University Faculty of Economics and Administrative Sciences. As a result of the pairwise comparisons, the highest-weighted criteria in job selection and the most coveted job preferences were identified.
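
For readers unfamiliar with the mechanics of AHP, the following minimal sketch derives criterion weights from a pairwise comparison matrix via the principal eigenvector and checks Saaty's consistency ratio. The matrix is a hypothetical three-criterion example, not the survey data of this study.

```python
# Minimal AHP sketch: priority weights from a pairwise comparison matrix.
import numpy as np

A = np.array([[1,   3,   5  ],
              [1/3, 1,   2  ],
              [1/5, 1/2, 1  ]], dtype=float)   # hypothetical judgments

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                   # normalized criterion weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)           # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]            # Saaty's random index
print("weights:", w.round(3), "consistency ratio:", round(ci / ri, 3))
```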

Keywords: analytical hierarchy process, job selection, multi-criteria, decision making

Procedia PDF Downloads 384
2920 Category-Based Theory of the Optimum Signal Approximation Clarifying the Importance of Parallel Worlds in the Recognition of Human and Application to Secure Signal Communication with Feedback

Authors: Takuro Kida, Yuichi Kida

Abstract:

We present, in mathematical form, the basis of a new class of algorithms that treats a historical cause of persistent discrimination in the world, together with its resolution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. With respect to a matrix-operator filter bank in which the matrix-operator analysis filter bank H and the matrix-operator sampling filter bank S are given, we first introduce a detailed algorithm to derive the optimum matrix-operator synthesis filter bank Z that simultaneously minimizes all worst-case measures of the matrix-operator error signals E(ω) = F(ω) − Y(ω) between the matrix-operator input signals F(ω) and the matrix-operator output signals Y(ω) of the filter bank. Feedback is then introduced into this approximation theory, and it is shown that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge of signal prediction. Secondly, the concept of category from mathematics is applied to the above optimum signal approximation, and it is indicated that the category-based approximation theory applies to a set-theoretic consideration of human recognition. Based on this discussion, it is shown naturally why narrow perception that tends to create isolation shows an apparent advantage in the short term, and why such narrow thinking often becomes intimate with discriminatory action in a human group. Throughout these considerations, it is argued that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception in which we share the set of invisible error signals, including the words and the consciousness of both worlds.

Keywords: signal prediction, pseudo inverse matrix, artificial intelligence, conditional optimization

Procedia PDF Downloads 142
2919 Lee-Carter Mortality Forecasting Method with Dynamic Normal Inverse Gaussian Mortality Index

Authors: Funda Kul, İsmail Gür

Abstract:

Pension scheme providers have to price mortality risk using accurate mortality forecasting methods. Many mortality forecasting methods have been constructed and used in the literature. The Lee-Carter model was the first to consider stochastic improvement trends in life expectancy, and it is still widely used. In the Lee-Carter model, forecasting is carried out through the mortality index, which is conventionally assumed to follow an ARIMA time series model. In this paper, we propose and use a dynamic normal inverse Gaussian distribution to model the mortality index in the Lee-Carter model. Using population mortality data for Italy, France, and Turkey, the model's forecasting capability is investigated, and a comparative analysis with other models is carried out using some well-known benchmarking criteria.
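
The classical Lee-Carter decomposition that the paper builds on can be sketched as follows: the log mortality surface is centered by the age pattern a_x, and the leading SVD component gives the age sensitivities b_x and the mortality index k_t, whose dynamics this paper models with a dynamic normal inverse Gaussian distribution instead of ARIMA. The data below are synthetic and for shape only.

```python
# Minimal Lee-Carter sketch: estimate a_x, b_x, k_t from log death rates
# via SVD, under the usual identification constraints sum(b_x)=1, sum(k_t)=0.
import numpy as np

rng = np.random.default_rng(1)
ages, years = 20, 40
log_m = (-6 + 0.08 * np.arange(ages)[:, None]
         - 0.01 * np.arange(years)[None, :]
         + 0.02 * rng.normal(size=(ages, years)))    # fake log mortality surface

a_x = log_m.mean(axis=1)                             # age pattern
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
b_x = U[:, 0] / U[:, 0].sum()                        # normalized sensitivities
k_t = s[0] * Vt[0] * U[:, 0].sum()                   # mortality index
k_t -= k_t.mean()                                    # enforce sum(k_t) = 0
print("b_x:", b_x.round(3), "\nk_t:", k_t.round(2))
```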

Keywords: mortality, forecasting, lee-carter model, normal inverse gaussian distribution

Procedia PDF Downloads 345
2918 Operator Splitting Scheme for the Inverse Nagumo Equation

Authors: Sharon-Yasotha Veerayah-Mcgregor, Valipuram Manoranjan

Abstract:

A backward or inverse problem is known to be ill-posed due to the instability that emerges with even a slight change in the conditions of the problem. Therefore, only a limited number of numerical approaches are available to solve a backward problem. This paper considers the Nagumo equation, which describes impulse propagation in nerve axons and also models population growth with the Allee effect. A creative operator splitting numerical scheme is constructed to solve the inverse Nagumo equation. Computational simulations are used to verify that this scheme is stable, accurate, and efficient.
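
To illustrate the splitting idea (though not the paper's inverse scheme, which is more delicate), here is a minimal Lie splitting step for the forward Nagumo equation u_t = D u_xx + u(1 − u)(u − a): the reaction and diffusion operators are advanced sequentially within each time step. All parameters are illustrative.

```python
# Sketch of Lie (sequential) operator splitting for the FORWARD Nagumo
# equation; the inverse problem treated in the paper needs extra care.
import numpy as np

D, a = 1.0, 0.3
nx, dx, dt, nt = 200, 0.5, 0.05, 400
u = 1.0 / (1.0 + np.exp(np.linspace(-10, 10, nx)))   # smooth front initial data

for _ in range(nt):
    # Step 1: reaction u' = u(1-u)(u-a), one explicit Euler substep
    u = u + dt * u * (1 - u) * (u - a)
    # Step 2: diffusion u_t = D u_xx, explicit finite differences,
    # homogeneous Neumann boundaries via edge padding
    up = np.pad(u, 1, mode="edge")
    u = u + dt * D * (up[2:] - 2 * u + up[:-2]) / dx**2

print("front position ~", np.argmin(np.abs(u - 0.5)) * dx)
```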

Keywords: inverse/backward equation, operator-splitting, Nagumo equation, ill-posed, finite-difference

Procedia PDF Downloads 80
2917 Extended Intuitionistic Fuzzy VIKOR Method in Group Decision Making: The Case of Vendor Selection Decision

Authors: Nastaran Hajiheydari, Mohammad Soltani Delgosha

Abstract:

Vendor (supplier) selection is a group decision-making (GDM) process in which, based on predetermined criteria, the experts' preferences are elicited in order to rank the suppliers and choose the most desirable one. In real business environments, our attitudes and choices are made in uncertain and indecisive situations and cannot be expressed in a crisp framework. Intuitionistic fuzzy sets (IFSs) can handle such situations best. The VIKOR method was developed to solve multi-criteria decision-making (MCDM) problems. This method, which determines a compromise feasible solution with respect to the conflicting criteria, introduces a multi-criteria ranking index based on a particular measure of 'closeness' to the 'ideal solution'. Until now, there has been little investigation of VIKOR with IFSs; therefore, we extend intuitionistic fuzzy (IF) VIKOR to solve the vendor selection problem in an IF GDM environment. The present study develops an IF VIKOR method for a GDM situation. A model is presented to calculate the criterion weights based on an entropy measure. Then, the interval-valued intuitionistic fuzzy weighted geometric (IFWG) operator is utilized to obtain the total decision matrix. In the next stage, an approach based on the positive ideal intuitionistic fuzzy number (PIIFN) and the negative ideal intuitionistic fuzzy number (NIIFN) is developed. Finally, the application of the proposed method to a vendor selection problem is illustrated.
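
For orientation, the classical crisp VIKOR ranking step that the paper extends to the intuitionistic fuzzy setting can be sketched as follows; the decision matrix and weights are hypothetical.

```python
# Sketch of classical (crisp) VIKOR, a stand-in for the IF extension.
# Rows = vendors, columns = benefit criteria.
import numpy as np

F = np.array([[7, 8, 6],
              [9, 6, 7],
              [6, 9, 8]], dtype=float)
w = np.array([0.5, 0.3, 0.2])            # criterion weights
v = 0.5                                  # weight of the "majority" strategy

f_star, f_minus = F.max(axis=0), F.min(axis=0)              # ideal / anti-ideal
S = ((f_star - F) / (f_star - f_minus) * w).sum(axis=1)     # group utility
R = ((f_star - F) / (f_star - f_minus) * w).max(axis=1)     # individual regret
Q = (v * (S - S.min()) / (S.max() - S.min())
     + (1 - v) * (R - R.min()) / (R.max() - R.min()))       # compromise index
print("VIKOR ranking (best first):", np.argsort(Q))
```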

Keywords: group decision making, intuitionistic fuzzy set, intuitionistic fuzzy entropy measure, vendor selection, VIKOR

Procedia PDF Downloads 141
2916 Facility Anomaly Detection with Gaussian Mixture Model

Authors: Sunghoon Park, Hank Kim, Jinwon An, Sungzoon Cho

Abstract:

The Internet of Things allows one to collect data from facilities, which can then be used to monitor them and even predict malfunctions in advance. Conventional quality control methods focus on setting a normal range for a sensor value, defined between a lower control limit and an upper control limit, and declaring anything falling outside it an anomaly. However, interactions among sensor values are ignored, leading to suboptimal performance. We propose a multivariate approach that takes many sensor values into account at the same time. In particular, a Gaussian mixture model is used, trained to maximize the likelihood using the expectation-maximization algorithm. The number of Gaussian component distributions is determined by the Bayesian information criterion. The negative log-likelihood value is used as an anomaly score. A typical usage scenario is as follows: for each instance of sensor values from a facility, an anomaly score is computed; if it is larger than a threshold, an alarm goes off and a human expert intervenes and checks the system. Real-world data from a building energy system were used to test the model.
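
The pipeline described above maps directly onto scikit-learn; a minimal sketch with synthetic sensor data (component count chosen by BIC, negative log-likelihood as the anomaly score) might look like this.

```python
# Minimal sketch of GMM-based anomaly scoring with BIC model selection.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 4))                 # normal sensor vectors

models = [GaussianMixture(n_components=k, random_state=0).fit(X_train)
          for k in range(1, 8)]
gmm = min(models, key=lambda m: m.bic(X_train))      # BIC picks the model

scores = -gmm.score_samples(X_train)                 # negative log-likelihood
threshold = np.quantile(scores, 0.99)                # e.g. alarm on top 1%

x_new = rng.normal(size=(1, 4)) + 5.0                # shifted => anomalous
print("alarm:", -gmm.score_samples(x_new)[0] > threshold)
```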

Keywords: facility anomaly detection, gaussian mixture model, anomaly score, expectation maximization algorithm

Procedia PDF Downloads 262
2915 Machine Learning Assisted Prediction of Sintered Density of Binary W(Mo) Alloys

Authors: Hexiong Liu

Abstract:

Powder metallurgy is the optimal method for the consolidation and preparation of W(Mo) alloys, which exhibit excellent application prospects at high temperatures. The properties of W(Mo) alloys are closely related to the sintered density. However, controlling the sintered density and porosity of these alloys is still challenging. In the past, regulation methods mainly relied on time-consuming and costly trial-and-error experiments. In this study, the sintering data for more than a dozen W(Mo) alloys constituted a small-scale dataset covering both solid-phase and liquid-phase sintering. Simple descriptors were used to predict the sintered density of W(Mo) alloys based on a descriptor selection strategy and machine learning (ML) methods, where the ML algorithms included least absolute shrinkage and selection operator (Lasso) regression, k-nearest neighbors (k-NN), random forest (RF), and multi-layer perceptron (MLP). The results showed that the interpretable descriptors extracted by our proposed selection strategy and the MLP neural network achieved a high prediction accuracy (R > 0.950). By further predicting the sintered density of W(Mo) alloys under different sintering processes, the error between the predicted and experimental values was less than 0.063, confirming the application potential of the model.
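
The model-comparison step can be sketched with the four regressors named above, cross-validated on a synthetic stand-in dataset; the descriptors and data here are placeholders, not the authors' interpretable descriptors.

```python
# Sketch: cross-validate Lasso, k-NN, RF, and MLP regressors on fake data.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(60, 5))          # placeholder descriptors
y = 0.7 + 0.2 * X[:, 0] + 0.1 * X[:, 1] ** 2 + 0.01 * rng.normal(size=60)

models = {
    "Lasso": Lasso(alpha=1e-3),
    "k-NN": KNeighborsRegressor(n_neighbors=5),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "MLP": MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                        random_state=0),
}
for name, m in models.items():
    r = cross_val_score(m, X, y, cv=5, scoring="r2")
    print(f"{name}: R^2 = {r.mean():.3f} +/- {r.std():.3f}")
```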

Keywords: sintered density, machine learning, interpretable descriptors, W(Mo) alloy

Procedia PDF Downloads 66
2914 Generalization of Tsallis Entropy from a Q-Deformed Arithmetic

Authors: J. Juan Peña, J. Morales, J. García-Ravelo, J. García-Martínez

Abstract:

It is known that, by introducing alternative forms of the exponential and logarithmic functions, the Tsallis entropy Sq is itself a generalization of the Shannon entropy S. In this work, starting from a deformation through a scaling function applied to the differential operator, it is possible to generate a q-deformed calculus as well as a q-deformed arithmetic, which not only generalizes the exponential and logarithmic functions but also any other standard function. The updated q-deformed differential operator leads to an updated integral operator under which functions are integrated together with a weight function. For each differentiable function, it is possible to identify its q-deformed partner, which is useful for generalizing other algebraic relations proper to the original functions. As an application of this proposal, a generalization of the exponential and logarithmic functions is studied in such a way that their relationship with the thermodynamic functions, particularly the entropy, yields a q-deformed expression of these. As a result, from a particular scaling function applied to the differential operator, a q-deformed arithmetic is obtained, leading to the generalization of the Tsallis entropy.
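
For reference, the standard q-deformed logarithm and exponential and the Tsallis entropy that this work generalizes can be written as

```latex
\ln_q x = \frac{x^{1-q} - 1}{1 - q}, \qquad
e_q^{x} = \bigl[\, 1 + (1 - q)\,x \,\bigr]_{+}^{\frac{1}{1-q}}, \qquad
S_q = k\,\frac{1 - \sum_i p_i^{\,q}}{q - 1},
```

and in the limit q → 1 these reduce to ln x, eˣ, and the Shannon entropy S = −k Σᵢ pᵢ ln pᵢ.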

Keywords: q-calculus, q-deformed arithmetic, entropy, exponential functions, thermodynamic functions

Procedia PDF Downloads 62
2913 Selection Standards for National Teams: Theory and Practice

Authors: Alexey Kulik

Abstract:

This article deals with selection standards for national sport teams. The author examines the legal framework for selection criteria and suggests using the most honest criteria.

Keywords: national teams, standards of forming teams, selection standards, sport legislations

Procedia PDF Downloads 498
2912 Gaussian Probability Density for Forest Fire Detection Using Satellite Imagery

Authors: S. Benkraouda, Z. Djelloul-Khedda, B. Yagoubi

Abstract:

We present a method for early detection of forest fires from a thermal infrared satellite image, using a matrix of probabilities of belonging. The principle of the method is to compare a theoretical mathematical model with an experimental one. We treat each line of the image matrix as a realization of a non-stationary random process. Since the distribution of pixels in the satellite image is statistically dependent, we divide these lines into small stationary and ergodic intervals in order to characterize the image by an adequate mathematical model. A standard deviation is chosen to generate random variables so that each interval behaves naturally like white Gaussian noise. The latter is selected as the mathematical model representing the large majority of pixels, which can be considered the image background. Before modeling the image, we apply a few preprocessing steps; then the parameters of the theoretical Gaussian model are extracted from the modeled image. These parameters are used to calculate the probability that each interval of the modeled image belongs to the theoretical Gaussian model. High-intensity pixels are regarded as foreign elements to this model, so they have a low probability, while pixels belonging to the image background have a high probability. Finally, we present the inverse of the matrix of interval probabilities for better fire detection.
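
A rough sketch of the core scoring step, under simplifying assumptions (a single Gaussian background fitted per scan line, no preprocessing), might look like this; the data and interval width are made up.

```python
# Sketch: score short intervals of one image row by their likelihood
# under a fitted background Gaussian; inverted score highlights hot spots.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
line = rng.normal(100, 5, size=512)          # one image row, background only
line[300:308] += 60                          # hot pixels from a fire front

mu, sigma = np.median(line), line.std()      # rough background Gaussian fit
intervals = line.reshape(64, 8)              # short, near-stationary chunks
loglik = norm.logpdf(intervals, mu, sigma).mean(axis=1)
fire_score = -loglik                         # low probability => high score
print("most suspect interval index:", np.argmax(fire_score))
```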

Keywords: forest fire, forest fire detection, satellite image, normal distribution, theoretical gaussian model, thermal infrared matrix image

Procedia PDF Downloads 129
2911 Additive White Gaussian Noise Filtering from ECG by Wiener Filter and Median Filter: A Comparative Study

Authors: Hossein Javidnia, Salehe Taheri

Abstract:

The electrocardiogram (ECG) is the recording of the heart's electrical potential versus time. ECG signals are often contaminated with noise such as baseline wander and muscle noise. As these signals have been widely used in clinical studies to detect heart diseases, it is essential to filter out such noise. In this paper, we compare the performance of Wiener filtering and median filtering for removing additive white Gaussian (AWG) noise, at signal-to-noise ratios (SNRs) ranging from 3 to 5 dB, applied to samples of long-term ECG recordings. The root mean square error (RMSE) and coefficient of determination (R²) between the filtered ECG and the original ECG were used as filter performance indicators. Experimental results show that the Wiener filter has better noise-filtering performance than the median filter.
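
The comparison itself is straightforward with SciPy's built-in filters; the sketch below uses a synthetic waveform in place of the long-term ECG recordings and reports RMSE only.

```python
# Sketch: add AWG noise at a target SNR, then compare Wiener vs. median.
import numpy as np
from scipy.signal import wiener, medfilt

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 5000)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.4 * np.sin(2 * np.pi * 12 * t)  # stand-in

snr_db = 4.0
noise_power = ecg.var() / 10 ** (snr_db / 10)        # SNR definition
noisy = ecg + rng.normal(scale=np.sqrt(noise_power), size=ecg.size)

for name, filtered in [("Wiener", wiener(noisy, mysize=15)),
                       ("Median", medfilt(noisy, kernel_size=15))]:
    rmse = np.sqrt(np.mean((filtered - ecg) ** 2))
    print(f"{name} RMSE: {rmse:.4f}")
```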

Keywords: ECG noise filtering, Wiener filtering, median filtering, Gaussian noise, filtering performance

Procedia PDF Downloads 514
2910 The Role of Recruitment and Selection in Financial Performance of Enterprises in Kosovo

Authors: Arta Jashari, Enver Kutllovci

Abstract:

The purpose of this study is to examine the relationship between recruitment and selection practices and performance in medium-sized service enterprises in Kosovo. A total of 110 managers from the public and private sectors were surveyed. Our empirical results show that enterprises in Kosovo use recruitment and selection practices and know how important it is to have the right people, with skills and knowledge matching the job requirements. Pearson correlation analysis provides evidence that recruitment and selection practices positively and significantly influence financial performance. Our results also show a significant relationship between managers' education and the use of recruitment and selection practices. From these results we conclude that, with good recruitment and selection, the organization will be filled with a pool of potentially qualified candidates able to fulfill the enterprise's objectives.

Keywords: human resource, Kosovo, recruitment and selection, performance

Procedia PDF Downloads 146
2909 Hybrid Feature Selection Method for Sentiment Classification of Movie Reviews

Authors: Vishnu Goyal, Basant Agarwal

Abstract:

Sentiment analysis research provides methods for identifying people's opinions written in blogs, reviews, social networking websites, etc. Sentiment analysis seeks to understand what opinion people hold about a given entity, object, or thing. Sentiment analysis research can be broadly categorized into three types of approaches: semantic orientation, machine learning, and lexicon-based approaches. Feature selection methods improve the performance of machine learning algorithms by eliminating irrelevant features. The information gain feature selection method has been considered the best method for sentiment analysis; however, it has the drawback of requiring a threshold to be selected. Therefore, in this paper, we propose a hybrid feature selection method comprising information gain and a newly proposed feature selection method. Initially, features are selected using information gain (IG), and then noisy features are eliminated using the proposed feature selection method. Experimental results show the efficiency of the proposed hybrid method.
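
A minimal sketch of the two-stage idea follows, with scikit-learn's mutual information score standing in for information gain and a simple redundancy screen standing in for the paper's proposed second-stage filter (which is the paper's own contribution and is not reproduced here).

```python
# Sketch: stage 1 ranks features by an IG-style score; stage 2 drops
# features that are redundant with already-kept ones.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=500, n_features=40, n_informative=8,
                           random_state=0)

ig = mutual_info_classif(X, y, random_state=0)     # stage 1: IG-style score
top = np.argsort(ig)[::-1][:15]                    # keep the top 15 features

keep = []                                          # stage 2: redundancy screen
for j in top:
    if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < 0.8 for k in keep):
        keep.append(j)
print("selected feature indices:", keep)
```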

Keywords: feature selection, sentiment analysis, hybrid feature selection

Procedia PDF Downloads 318
2908 Item-Trait Pattern Recognition of Replenished Items in Multidimensional Computerized Adaptive Testing

Authors: Jianan Sun, Ziwen Ye

Abstract:

Multidimensional computerized adaptive testing (MCAT) is a popular research topic in psychometrics. It is important for practitioners to know clearly the item-trait patterns of administered items when a test like MCAT is operated. Item-trait pattern recognition refers to detecting which latent traits in a psychological test are measured by each of the specified items. If the item-trait patterns of the replenished items in an MCAT item pool are well detected, the interpretability of the items improves, which in turn helps the abilities of examinees taking the MCAT to be estimated accurately. This research addresses the item-trait pattern recognition problem for replenished items in an MCAT item pool from the perspective of statistical variable selection. The popular multidimensional item response theory model, the multidimensional two-parameter logistic model, is assumed to fit the MCAT response data. The proposed method uses the least absolute shrinkage and selection operator (LASSO) to detect the item-trait patterns of replenished items based on the item responses and the ability estimates of examinees collected from a designed MCAT procedure. The proposed method has several advantages. First, it does not strictly depend on the relative order between the replenished items and the selected operational items, so the replenished items can be mixed into the operational items in any reasonable order, for example to satisfy content constraints or other test requirements. Second, the LASSO improves the interpretability of the multidimensional replenished items in MCAT. Third, the method exploits the shrinkage idea for variable selection, so it can help check the item quality and key dimensional features of replenished items, and it saves time and labor in response data collection compared with the traditional factor analysis method. Moreover, the proposed method ensures that the dimensions of the replenished items are recognized consistently with the dimensions of the operational items in the MCAT item pool. Simulation studies are conducted to investigate the performance of the proposed method under different conditions, varying the dimensionality of the item pool, latent trait correlation, item discrimination, test length, and item selection criteria in MCAT. Results show that the proposed method can accurately detect the item-trait patterns of replenished items in two-dimensional and three-dimensional item pools. Selecting enough operational items by Bayesian A-optimality from an item pool of highly discriminating items improves the recognition accuracy of the item-trait patterns of replenished items. The pattern recognition accuracy under conditions with correlated traits is better than under independent traits, especially for item pools consisting of comparatively low-discriminating items. To sum up, the proposed data-driven method based on the LASSO can accurately and efficiently detect the item-trait patterns of replenished items in MCAT.
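
The core detection step can be sketched for a single replenished item as an L1-penalized logistic regression of the item's responses on the examinees' ability estimates; nonzero coefficients indicate the traits the item measures. The simulation below uses a known sparse pattern and illustrates only the variable-selection idea, not the authors' full procedure.

```python
# Sketch: LASSO-style detection of one item's trait pattern.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 3
theta = rng.normal(size=(n, d))                 # ability estimates, 3 traits
a_true = np.array([1.2, 0.0, 0.9])              # item loads on traits 1 and 3
p = 1 / (1 + np.exp(-(theta @ a_true - 0.5)))   # 2PL-style response probability
u = rng.binomial(1, p)                          # observed 0/1 item responses

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(theta, u)                             # L1 shrinks irrelevant traits
pattern = (np.abs(lasso.coef_[0]) > 1e-6).astype(int)
print("detected item-trait pattern:", pattern)  # expected: [1 0 1]
```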

Keywords: item-trait pattern recognition, least absolute shrinkage and selection operator, multidimensional computerized adaptive testing, variable selection

Procedia PDF Downloads 116
2907 Importance of Location Selection of an Energy Storage System in a Smart Grid

Authors: Vanaja Rao

Abstract:

In recent times, the need to integrate renewable energy sources (RES) into a smart grid has been on the rise. As a result, the associated energy storage systems are known to play important roles in sustaining the efficient operation of RES such as wind and solar power. This paper investigates the importance of location selection for energy storage systems (ESSs) in a smart grid. Three ESS location scenarios are studied and analyzed: 1) near the generation/source, 2) in the middle of the grid, and 3) near the demand/consumption. The aim is to assist any distribution network operator (DNO) in deploying ESSs in a power network, which significantly helps reduce planning costs and time and avoids the damages incurred by installing them at an incorrect location in a smart grid. To this end, the scenarios outlined above are modelled and analyzed with the National Grid's datasets of energy generation and consumption in the UK power network. The outcome of this analysis provides a better overview for ESS location selection in a smart grid, ensuring power system stability and security along with optimal usage of the ESSs.

Keywords: distribution networks, energy storage system, energy security, location planning, power stability, smart grid

Procedia PDF Downloads 286
2906 Comparison of Crossover Types to Obtain Optimal Queries Using Adaptive Genetic Algorithm

Authors: Wafa’ Alma'Aitah, Khaled Almakadmeh

Abstract:

This study presents an information retrieval system that uses a genetic algorithm to increase retrieval efficiency. Using the vector space model, information retrieval is based on the similarity measured between a query and the documents. Documents with high similarity to the query are judged more relevant and should be retrieved first. Using genetic algorithms, each query is represented by a chromosome; these chromosomes are fed through the genetic operators of selection, crossover, and mutation until an optimized query chromosome is obtained for document retrieval. Results show that information retrieval with adaptive crossover probability, single-point crossover, and roulette-wheel selection gives the highest recall. The proposed approach is verified using 242 proceedings abstracts collected from a Saudi Arabian national conference.
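
A compact sketch of the GA loop described above, with roulette-wheel selection and single-point crossover over query term-weight chromosomes, might look like this; the document vectors are random placeholders rather than the proceedings abstracts.

```python
# Sketch: GA query optimization with roulette-wheel selection and
# single-point crossover; fitness = mean cosine similarity to relevant docs.
import numpy as np

rng = np.random.default_rng(0)
n_terms, pop_size, gens = 20, 30, 60
relevant_docs = rng.random((5, n_terms))             # vector space model rows

def fitness(q):
    sims = relevant_docs @ q / (np.linalg.norm(relevant_docs, axis=1)
                                * np.linalg.norm(q) + 1e-12)
    return sims.mean()

pop = rng.random((pop_size, n_terms))                # initial query population
for _ in range(gens):
    f = np.array([fitness(q) for q in pop])
    parents = pop[rng.choice(pop_size, size=pop_size, p=f / f.sum())]  # roulette
    cut = rng.integers(1, n_terms)                   # single-point crossover
    children = np.concatenate([parents[::2, :cut], parents[1::2, cut:]], axis=1)
    children += (rng.normal(0, 0.05, children.shape)
                 * (rng.random(children.shape) < 0.1))          # mutation
    pop = np.vstack([parents[:pop_size - len(children)], children])

best = pop[np.argmax([fitness(q) for q in pop])]
print("best query fitness:", round(fitness(best), 4))
```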

Keywords: genetic algorithm, information retrieval, optimal queries, crossover

Procedia PDF Downloads 280
2905 Competence-Based Human Resources Selection and Training: Making Decisions

Authors: O. Starineca, I. Voronchuk

Abstract:

Human resources (HR) selection and training can be implemented in various ways depending on an organization's abilities and peculiarities. We propose basing HR selection and training decisions on a competence-based approach. HR selection and the training of employees are topical, as there is room for improvement in this field; therefore, the aim of the research is to propose rational decision-making approaches for an organization's HR selection and training choices. Our proposals build on the training development and competence-based selection approaches created in previous research, i.e., the Analytic Hierarchy Process (AHP) and linear programming. A literature review on non-formal education, competence-based selection, and AHP forms our theoretical background. Some educational service providers in Latvia offer employee training, e.g., in motivation, computer skills, accounting, law, ethics, stress management, etc., that is topical for public administration. The competence-based approach is a rational basis for decision-making in both HR selection and HR training.

Keywords: competence-based selection, human resource, training, decision-making

Procedia PDF Downloads 316
2904 Supplier Selection by Bi-Objective Mixed Integer Program Approach

Authors: K.-H. Yang

Abstract:

In the past, many excellent research studies were conducted on topics related to supplier selection. Because the factors considered in supplier selection are complicated and difficult to quantify, most researchers address supplier selection with qualitative approaches. Compared with qualitative approaches, quantitative approaches are less often applied in the real world. This study applies a quantitative approach to a supplier selection problem that considers operating cost and delivery reliability. For these factors, the study applies the normalized normal constraint method to solve the bi-objective mixed integer program of the supplier selection problem.
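
As a simple quantitative illustration (using the epsilon-constraint method as a stand-in for the normalized normal constraint method actually used in the paper), a bi-objective supplier-selection MIP can be traced out as follows; all coefficients are made up.

```python
# Sketch: bi-objective supplier selection (min cost, max reliability)
# via the epsilon-constraint method with PuLP's bundled CBC solver.
import pulp

cost = [5.0, 4.2, 6.1, 3.8]          # operating cost per supplier
rel  = [0.95, 0.90, 0.99, 0.85]      # delivery reliability per supplier
need = 2                             # must pick exactly two suppliers

frontier = []
for eps in [1.75, 1.85, 1.94]:       # minimum total-reliability levels
    prob = pulp.LpProblem("supplier_selection", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(4)]
    prob += pulp.lpSum(c * xi for c, xi in zip(cost, x))          # objective
    prob += pulp.lpSum(r * xi for r, xi in zip(rel, x)) >= eps    # eps-constraint
    prob += pulp.lpSum(x) == need
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    frontier.append((eps, pulp.value(prob.objective),
                     [i for i in range(4) if x[i].value() == 1]))
print(frontier)                       # traces the cost/reliability trade-off
```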

Keywords: bi-objective MIP, normalized normal constraint method, supplier selection, quantitative approach

Procedia PDF Downloads 402
2903 Design of Orientation-Free Handler and Fuzzy Controller for Wire-Driven Heavy Object Lifting System

Authors: Bo-Wei Song, Yun-Jung Lee

Abstract:

This paper presents an intention interface and controller for a wire-driven heavy object lifting system that assists the operator in moving a heavy object. The handler is designed to allow a comfortable working posture for the operator. Moreover, as a human-assistive system, it keeps the operator in the control loop, where a fuzzy control system is used to account for human control characteristics. The effectiveness and performance of the proposed system are demonstrated by experiments.

Keywords: fuzzy controller, handler design, heavy object lifting system, human-assistive device, human-in-the-loop system

Procedia PDF Downloads 500
2902 An Automatic Speech Recognition Tool for the Filipino Language Using the HTK System

Authors: John Lorenzo Bautista, Yoon-Joong Kim

Abstract:

This paper presents the development of a Filipino speech recognition tool using the HTK system. The system was trained on a subset of the Filipino Speech Corpus developed by the DSP Laboratory of the University of the Philippines-Diliman. The speech corpus was used both in training and in testing the system by estimating the parameters of phonetic HMM-based (hidden Markov model) acoustic models. Experiments on different mixture weights were incorporated in the study. Phoneme-level word-based recognition with a 5-state HMM resulted in an average accuracy rate of 80.13% for a single-Gaussian mixture model, 81.13% after implementing a phoneme alignment, and 87.19% for the increased Gaussian-mixture-weight model. The highest accuracy rate of 88.70% was obtained from a 5-state model with 6 Gaussian mixtures.

Keywords: Filipino language, Hidden Markov Model, HTK system, speech recognition

Procedia PDF Downloads 468
2901 Unsupervised Learning and Similarity Comparison of Water Mass Characteristics with Gaussian Mixture Model for Visualizing Ocean Data

Authors: Jian-Heng Wu, Bor-Shen Lin

Abstract:

The temperature-salinity relationship is one of the most important characteristics used for identifying water masses in marine research. Temperature-salinity characteristics, however, may change dynamically with geographic location and are quite sensitive to depth at the same location. When depth is taken into consideration, it is not easy to compare the characteristics of different water masses efficiently over a wide range of ocean areas. In this paper, a Gaussian mixture model is proposed to analyze the temperature-salinity-depth characteristics of water masses, based on which comparisons between water masses may be conducted. A Gaussian mixture model can represent the distribution of a random vector and is formulated as a weighted sum of multivariate normal distributions. The temperature-salinity-depth data for different locations are first used to train a set of Gaussian mixture models individually. The distance between two Gaussian mixture models can then be defined as the weighted sum of pairwise Bhattacharyya distances among the Gaussian components. Consequently, the distance between two water masses may be measured quickly, which allows automatic and efficient comparison of water masses over a wide area. The proposed approach not only approximates the distribution of temperature, salinity, and depth directly, without prior knowledge for assuming the regression family, but can also restrict the model complexity by controlling the number of mixture components when the samples are unevenly distributed. In addition, it is critical for knowledge discovery in marine research to represent, manage, and share temperature-salinity-depth characteristics flexibly and responsively. The proposed approach has been applied to a real-time visualization system for ocean data, which facilitates the comparison of water masses by aggregating the data without degrading discriminating capability. This system provides an interface for interactively querying geographic locations with similar temperature-salinity-depth characteristics and for tracking specific patterns of water masses, such as the Kuroshio near Taiwan or those in the South China Sea.
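
The proposed distance is easy to sketch: fit a GMM per location, then sum the pairwise Bhattacharyya distances between components, weighted by the component weights. The data below are synthetic placeholders for temperature-salinity-depth samples.

```python
# Sketch: weighted-sum-of-Bhattacharyya distance between two GMMs.
import numpy as np
from sklearn.mixture import GaussianMixture

def bhattacharyya(m1, S1, m2, S2):
    """Bhattacharyya distance between two multivariate Gaussians."""
    S = 0.5 * (S1 + S2)
    d = m1 - m2
    return (d @ np.linalg.solve(S, d) / 8.0
            + 0.5 * np.log(np.linalg.det(S)
                           / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2))))

def gmm_distance(g1, g2):
    """Weighted sum of pairwise component distances, as in the abstract."""
    return sum(wi * wj * bhattacharyya(mi, Si, mj, Sj)
               for wi, mi, Si in zip(g1.weights_, g1.means_, g1.covariances_)
               for wj, mj, Sj in zip(g2.weights_, g2.means_, g2.covariances_))

rng = np.random.default_rng(0)
site_a = rng.normal([15, 34, 200], [2, 0.5, 50], size=(500, 3))  # T, S, depth
site_b = rng.normal([18, 35, 150], [2, 0.5, 50], size=(500, 3))
g_a = GaussianMixture(3, random_state=0).fit(site_a)
g_b = GaussianMixture(3, random_state=0).fit(site_b)
print("water-mass distance:", round(gmm_distance(g_a, g_b), 3))
```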

Keywords: water mass, Gaussian mixture model, data visualization, system framework

Procedia PDF Downloads 131
2900 A Genetic Algorithm for the Load Balance of Parallel Computational Fluid Dynamics Computation with Multi-Block Structured Mesh

Authors: Chunye Gong, Ming Tie, Jie Liu, Weimin Bao, Xinbiao Gan, Shengguo Li, Bo Yang, Xuguang Chen, Tiaojie Xiao, Yang Sun

Abstract:

Large-scale CFD simulation relies on high-performance parallel computing, and load balance is the key factor affecting parallel efficiency. This paper focuses on the load-balancing problem of parallel CFD simulation with structured meshes. A mathematical model for this load-balancing problem is presented. The genetic algorithm, fitness computation, and two-level encoding are designed, together with an optimal selector, a robust operator, and a local optimization operator. The properties of the presented genetic algorithm are discussed in depth, and the effects of the optimal selector, robust operator, and local optimization operator are demonstrated by experiments. Experimental results on different test sets, DLR-F4, and aircraft design applications show that the presented load-balancing algorithm is robust, converges quickly, and is useful in real engineering problems.

Keywords: genetic algorithm, load-balancing algorithm, optimal variation, local optimization

Procedia PDF Downloads 163
2899 A Case-Based Reasoning-Decision Tree Hybrid System for Stock Selection

Authors: Yaojun Wang, Yaoqing Wang

Abstract:

Stock selection is an important decision-making problem. Many machine learning and data mining technologies have been employed to build automatic stock-selection systems. A profitable stock-selection system should consider both the stock's investment value and the market timing. In this paper, we present a hybrid system that incorporates both. The system uses a case-based reasoning (CBR) model to perform the stock classification and a decision-tree model to help with market timing and stock selection. Experiments show that the performance of this hybrid system is better than that of other techniques in terms of classification accuracy, average return, and the Sharpe ratio.

Keywords: case-based reasoning, decision tree, stock selection, machine learning

Procedia PDF Downloads 404
2898 A Time-Varying and Non-Stationary Convolution Spectral Mixture Kernel for Gaussian Process

Authors: Kai Chen, Shuguang Cui, Feng Yin

Abstract:

A Gaussian process (GP) with a spectral mixture (SM) kernel demonstrates flexible non-parametric Bayesian learning ability in modeling unknown functions. In this work, a novel time-varying and non-stationary convolution spectral mixture (TN-CSM) kernel is introduced, with significantly enhanced interpretability obtained through process convolution. A way of decomposing the SM component into an auto-convolution of a base SM component and parameterizing it to be input-dependent is outlined. Performing a convolution between two base SM components then yields a novel non-stationary SM component with a better generalized expression and interpretation. The TN-CSM fully retains compatibility with the stationary SM kernel in terms of kernel form and spectral basis, which previous non-stationary kernels ignored or confused. On synthetic and real-world datasets, experiments show the time-varying characteristics of the hyper-parameters in TN-CSM and compare the learning performance of TN-CSM with popular and representative non-stationary GPs.
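
For reference, the stationary SM kernel that the TN-CSM generalizes has, in one dimension, the standard form (following Wilson and Adams)

```latex
k_{\mathrm{SM}}(\tau) \;=\; \sum_{q=1}^{Q} w_q\,
  \exp\!\left(-2\pi^2 \tau^2 \sigma_q^2\right)
  \cos\!\left(2\pi \tau \mu_q\right),
\qquad \tau = x - x',
```

where each of the Q components has weight w_q, mean frequency μ_q, and bandwidth σ_q; the TN-CSM makes these quantities input-dependent via process convolution.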

Keywords: Gaussian process, spectral mixture, non-stationary, convolution

Procedia PDF Downloads 180
2897 Human Action Recognition Using Variational Bayesian HMM with Dirichlet Process Mixture of Gaussian Wishart Emission Model

Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park

Abstract:

In this paper, we present a human action recognition method using a variational Bayesian HMM with a Dirichlet process mixture (DPM) of Gaussian-Wishart emission models (GWEM). First, we define the Bayesian HMM based on the Dirichlet process, which allows an infinite number of Gaussian-Wishart components to support continuous emission observations. Second, we consider an efficient variational Bayesian inference method that can be applied to derive the posterior distribution of the hidden variables and model parameters for the proposed model from training data. We then derive the predictive distribution that may be used to classify new actions. Third, the paper proposes a process for extracting appropriate spatio-temporal feature vectors that can be used to recognize a wide range of human behaviors from input video images. Finally, we conduct experiments to evaluate the performance of the proposed method. The experimental results show that the presented method is more effective for human action recognition than existing methods.

Keywords: human action recognition, Bayesian HMM, Dirichlet process mixture model, Gaussian-Wishart emission model, variational Bayesian inference, prior and approximate posterior distributions, KTH dataset

Procedia PDF Downloads 342
2896 Processing Big Data: An Approach Using Feature Selection

Authors: Nikat Parveen, M. Ananthi

Abstract:

Big data is one of the emerging technologies; data are collected from various sensors and then used in many fields. Data retrieval is a major issue, as there is a need to extract exactly the data required. In this paper, a large dataset is processed using feature selection. Feature selection helps choose the data that are actually needed to process and execute the task. The key value is what helps point to the exact data available in the storage space. Here, the available data are streamed, and R-Center is proposed to achieve this task.

Keywords: big data, key value, feature selection, retrieval, performance

Procedia PDF Downloads 329
2895 Gaussian Mixture Model Based Identification of Arterial Wall Movement for Computation of Distension Waveform

Authors: Ravindra B. Patil, P. Krishnamoorthy, Shriram Sethuraman

Abstract:

This work proposes a novel Gaussian mixture model (GMM) based approach for accurate tracking of the arterial wall and subsequent computation of the distension waveform using the radio frequency (RF) ultrasound signal. The approach was evaluated on ultrasound RF data acquired with a prototype ultrasound system from an artery-mimicking flow phantom. The effectiveness of the proposed algorithm is demonstrated by comparison with existing wall-tracking algorithms. The experimental results show that the proposed method provides a 20% reduction in the error margin compared with existing approaches in tracking arterial wall movement. This approach, coupled with an ultrasound system, can be used to estimate the arterial compliance parameters required for screening cardiovascular disorders.

Keywords: distension waveform, Gaussian Mixture Model, RF ultrasound, arterial wall movement

Procedia PDF Downloads 493
2894 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model

Authors: Donatella Giuliani

Abstract:

In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the firefly algorithm and the Gaussian mixture model. First, the firefly algorithm is applied in a histogram-based search for cluster means. The firefly algorithm is a stochastic global optimization technique centered on the flashing characteristics of fireflies; in this context, it is used to determine the number of clusters and the corresponding cluster means in a histogram-based segmentation approach. These means are then used in the initialization step of the parameter estimation of a Gaussian mixture model. The parametric probability density function of a Gaussian mixture model is represented as a weighted sum of Gaussian component densities, whose parameters are estimated by applying the iterative expectation-maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as the prior probabilities of the components. Applying Bayes' rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears solid and reliable even when applied to complex grayscale images. Validation was performed using several standard measures: the root mean square error (RMSE), the structural content (SC), the normalized correlation coefficient (NK), and the Davies-Bouldin (DB) index. The results strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of the maxima of the responsibilities for pixel assignment, which considerably reduces the computational cost.
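
The final assignment step reduces to a few lines with scikit-learn: fit a GMM to the gray-level intensities (ideally initialized from the firefly search, which is omitted here) and label each pixel by its maximum posterior responsibility. The image below is synthetic.

```python
# Sketch: GMM segmentation by maximum posterior responsibility.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 3000),     # dark region
                      rng.normal(130, 12, 4000),    # mid region
                      rng.normal(200, 8, 3000)])    # bright region
img = img.reshape(100, 100)

gmm = GaussianMixture(n_components=3, random_state=0)
labels = gmm.fit(img.reshape(-1, 1)).predict(img.reshape(-1, 1))
segmented = labels.reshape(img.shape)               # argmax responsibility map
print("cluster means:", np.sort(gmm.means_.ravel()).round(1))
```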

Keywords: clustering images, firefly algorithm, Gaussian mixture model, metaheuristic algorithm, image segmentation

Procedia PDF Downloads 206
2893 Using New Machine Algorithms to Classify Iranian Musical Instruments According to Temporal, Spectral and Coefficient Features

Authors: Ronak Khosravi, Mahmood Abbasi Layegh, Siamak Haghipour, Avin Esmaili

Abstract:

In this paper, a study on the classification of musical woodwind instruments was carried out using a small set of features selected from a broad range of extracted features by the sequential forward selection method. First, we extract 42 features for each record in a music database of 402 sound files belonging to five different groups: flutes (end-blown and internal duct), single-reed, double-reed (exposed and capped), triple-reed, and quadruple-reed instruments. Then, the sequential forward selection method is adopted to choose the best feature set in order to achieve very high classification accuracy. Two classification techniques, support vector machines and relevance vector machines, were tested, and an accuracy of up to 96% was achieved using 21 temporal, spectral, and coefficient features with a relevance vector machine and a Gaussian kernel function.
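
A minimal sketch of the pipeline is shown below, with scikit-learn's sequential forward selector and an RBF ("Gaussian") kernel SVM standing in for the relevance vector machine, which scikit-learn does not provide; the features and labels are synthetic placeholders for the 42 extracted features.

```python
# Sketch: sequential forward selection of 21 of 42 features, then an
# RBF-kernel SVM classifier (an RVM stand-in) evaluated by cross-validation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=402, n_features=42, n_informative=10,
                           n_classes=5, random_state=0)

svm = SVC(kernel="rbf")                       # Gaussian-kernel classifier
sfs = SequentialFeatureSelector(svm, n_features_to_select=21,
                                direction="forward", cv=3).fit(X, y)
X_sel = sfs.transform(X)
acc = cross_val_score(svm, X_sel, y, cv=5).mean()
print(f"{X_sel.shape[1]} features, CV accuracy: {acc:.3f}")
```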

Keywords: coefficient features, relevance vector machines, spectral features, support vector machines, temporal features

Procedia PDF Downloads 306