Search results for: wrapper based feature selection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 29194

28954 A New Approach of Preprocessing with SVM Optimization Based on PSO for Bearing Fault Diagnosis

Authors: Tawfik Thelaidjia, Salah Chenikher

Abstract:

Bearing fault diagnosis has attracted significant attention over the past few decades. It consists of two major parts: vibration signal feature extraction and condition classification for the extracted features. In this paper, feature extraction from faulty bearing vibration signals is performed by combining the signal’s kurtosis with features obtained by preprocessing the vibration signal samples with the Db2 discrete wavelet transform at the fifth level of decomposition. In this way, a 7-dimensional vibration signal feature vector is obtained. After feature extraction from the vibration signal, a support vector machine (SVM) is applied to automate the fault diagnosis procedure. To improve the classification accuracy for bearing fault prediction, particle swarm optimization (PSO) is employed to simultaneously optimize the SVM kernel function parameter and the penalty parameter. The results have shown the feasibility and effectiveness of the proposed approach.
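
A minimal sketch of the two steps described above, assuming a 1-D vibration sample `x` and labelled training data: the 7-D feature vector (kurtosis plus the six sub-band energies of a 5-level db2 DWT), and a small hand-rolled PSO tuning the SVM's C and gamma. The search bounds and swarm settings are illustrative guesses, not the paper's published configuration.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def bearing_features(x):
    coeffs = pywt.wavedec(x, "db2", level=5)       # [cA5, cD5, cD4, cD3, cD2, cD1]
    energies = [float(np.sum(c ** 2)) for c in coeffs]
    return np.array([kurtosis(x)] + energies)      # 7-dimensional feature vector

def pso_svm(X, y, n_particles=10, iters=20, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    lo, hi = np.array([-1.0, -4.0]), np.array([3.0, 1.0])   # log10(C), log10(gamma)
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    fitness = lambda p: cross_val_score(
        SVC(C=10 ** p[0], gamma=10 ** p[1]), X, y, cv=3).mean()
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        fit = np.array([fitness(p) for p in pos])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[pbest_fit.argmax()]
    return 10 ** gbest[0], 10 ** gbest[1]           # tuned penalty C and gamma
```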

Keywords: condition monitoring, discrete wavelet transform, fault diagnosis, kurtosis, machine learning, particle swarm optimization, roller bearing, rotating machines, support vector machine, vibration measurement

Procedia PDF Downloads 409
28953 ACBM: Attention-Based CNN and Bi-LSTM Model for Continuous Identity Authentication

Authors: Rui Mao, Heming Ji, Xiaoyu Wang

Abstract:

Keystroke dynamics are widely used in identity recognition. They have the advantage that an individual typing rhythm is difficult to imitate, and they support continuous authentication through the keyboard without extra devices. The existing keystroke dynamics authentication methods based on machine learning have drawbacks in supporting relatively complex scenarios with massive data, both in feature extraction and in model optimization. To overcome these weaknesses, an authentication model of keystroke dynamics based on deep learning is proposed. The model uses feature vectors formed from keystroke content and keystroke timing, and ensures efficient continuous authentication by coupling an attention mechanism with a combination of CNN and Bi-LSTM. The model has been tested on the open Buffalo dataset, achieving an FRR of 3.09%, an FAR of 3.03%, and an EER of 4.23%. This shows that the model is efficient and accurate for continuous authentication.
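
A minimal Keras sketch of the architecture described above: Conv1D feature extraction, a bidirectional LSTM, and self-attention over the keystroke sequence. Sequence length, feature width, and layer sizes are illustrative assumptions, not the paper's published configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, N_FEATS = 50, 4   # e.g., key code plus hold/flight times per keystroke

inp = layers.Input(shape=(SEQ_LEN, N_FEATS))
x = layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(inp)
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
x = layers.MultiHeadAttention(num_heads=2, key_dim=32)(x, x)   # self-attention
x = layers.GlobalAveragePooling1D()(x)
out = layers.Dense(1, activation="sigmoid")(x)   # genuine user vs. impostor

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```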

Keywords: keystroke dynamics, identity authentication, deep learning, CNN, LSTM

Procedia PDF Downloads 125
28952 Vision Based People Tracking System

Authors: Boukerch Haroun, Luo Qing Sheng, Li Hua Shi, Boukraa Sebti

Abstract:

In this paper, we present the design and implementation of a target tracking system where the target is a moving person in a video sequence. The system can easily be applied as a vision system for a mobile robot. It is composed of two major parts: the first detects the person in the video frame using an SVM classifier based on HOG descriptors; the second tracks the moving person using a combination of the Kalman filter and a modified version of the Camshift tracking algorithm that adds a target motion feature to the color feature. Experimental results show that the new algorithm outperforms the traditional Camshift algorithm in robustness and in cases of occlusion.
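
A minimal OpenCV sketch of the Kalman-assisted CamShift loop described above. The paper's motion-augmented back-projection is not reproduced; a plain hue-histogram back-projection stands in for it, with a comment marking where the motion feature would be fused.

```python
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)                        # state: x, y, vx, vy
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)

def track(cap, window, roi_hist):
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        kf.predict()                               # prediction bridges occlusions
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        # (the paper additionally weights backproj with a target motion feature)
        _, window = cv2.CamShift(backproj, window, term)
        cx, cy = window[0] + window[2] / 2, window[1] + window[3] / 2
        kf.correct(np.array([[cx], [cy]], np.float32))
```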

Keywords: camshift algorithm, computer vision, Kalman filter, object tracking

Procedia PDF Downloads 418
28951 A Two Tailed Secretary Problem with Multiple Criteria

Authors: Alaka Padhye, S. P. Kane

Abstract:

The following study considers some variations of the secretary problem (SP). In a multiple criteria secretary problem (MCSP), the selection of a unit is based on two independent characteristics. The number of units that appear before an observer is known, say N, the best rank of a unit being N. A unit is selected if it is better with respect to either the first or the second characteristic, or both. When the number of units is large, and due to constraints like time and cost, the observer might want to stop earlier instead of inspecting all the available units. Let the process terminate at the r2th unit, where r1 …

Keywords: joint distribution, marginal distribution, real ranks, secretary problem, selection criterion, two tailed secretary problem
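
The paper's two-criteria stopping rule is not recoverable from the truncated abstract above, so the sketch below only illustrates the classical single-criterion baseline the variation builds on: reject the first N/e units, then accept the first unit better than everything seen so far.

```python
import math
import random

def classical_secretary(N, trials=100_000):
    r = round(N / math.e)                          # observation-only phase
    wins = 0
    for _ in range(trials):
        ranks = random.sample(range(N), N)         # rank N-1 is the best unit
        threshold = max(ranks[:r], default=-1)
        chosen = next((x for x in ranks[r:] if x > threshold), ranks[-1])
        wins += chosen == N - 1
    return wins / trials

print(classical_secretary(100))   # success probability, close to 1/e = 0.368
```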

Procedia PDF Downloads 251
28950 An Adaptive Dimensionality Reduction Approach for Hyperspectral Imagery Semantic Interpretation

Authors: Akrem Sellami, Imed Riadh Farah, Basel Solaiman

Abstract:

With the development of HyperSpectral Imagery (HSI) technology, the spectral resolution of HSI has become denser, resulting in a large number of spectral bands, high correlation between neighboring bands, and high data redundancy. Semantic interpretation is therefore a challenging task for HSI analysis, due to the high dimensionality and the high correlation of the different spectral bands. This work presents a dimensionality reduction approach that overcomes these issues and improves the semantic interpretation of HSI. To preserve the spatial information, the Tensor Locality Preserving Projection (TLPP) is first applied to transform the original HSI. In the second step, knowledge is extracted based on the adjacency graph to describe the different pixels. From the TLPP transformation matrix, a weighted matrix is constructed to rank the different spectral bands by their contribution scores, and the relevant bands are adaptively selected based on this weighted matrix. The performance of the presented approach has been validated in several experiments, and the obtained results demonstrate its efficiency compared to various existing dimensionality reduction techniques. According to the experimental results, the approach can adaptively select the relevant spectral bands, improving the semantic interpretation of HSI.
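
A minimal sketch of the band-ranking step described above: given a transformation matrix W (bands × reduced dimensions) from a projection such as TLPP, score each original band by the weight it contributes and keep the top k. Computing W itself (the TLPP step) is not reproduced here, and the random W below is a hypothetical stand-in.

```python
import numpy as np

def rank_bands(W, k):
    scores = np.linalg.norm(W, axis=1)     # one contribution score per band
    order = np.argsort(scores)[::-1]       # most informative bands first
    return order[:k]

W = np.random.rand(200, 30)                # hypothetical 200-band HSI projection
print(rank_bands(W, 20))                   # indices of the 20 selected bands
```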

Keywords: band selection, dimensionality reduction, feature extraction, hyperspectral imagery, semantic interpretation

Procedia PDF Downloads 328
28949 The Influence of Noise on Aerial Image Semantic Segmentation

Authors: Pengchao Wei, Xiangzhong Fang

Abstract:

Noise is ubiquitous in this world, and denoising is an essential technology, especially in image semantic segmentation, where noise is generally categorized into two main types: feature noise and label noise. This paper focuses on modeling label noise, investigating the behavior of different types of label noise in image semantic segmentation tasks using K-Nearest-Neighbor and Convolutional Neural Network classifiers. The performance with and without label noise is evaluated and illustrated. In addition, the influence of feature noise on the image semantic segmentation task is investigated, and a feature noise reduction method is applied to mitigate its influence on the learning procedure.
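
A minimal sketch of the kind of label-noise experiment described above, assuming synthetic data: flip a fraction of training labels uniformly at random (symmetric label noise) and compare a k-NN classifier's accuracy with and without the corruption.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=2000, n_classes=3,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_with_noise(rate, rng=np.random.default_rng(0)):
    y_noisy = y_tr.copy()
    flip = rng.random(len(y_noisy)) < rate
    y_noisy[flip] = rng.integers(0, 3, flip.sum())   # symmetric label noise
    return KNeighborsClassifier().fit(X_tr, y_noisy).score(X_te, y_te)

for rate in (0.0, 0.2, 0.4):
    print(rate, accuracy_with_noise(rate))           # accuracy degrades with noise
```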

Keywords: convolutional neural network, denoising, feature noise, image semantic segmentation, k-nearest-neighbor, label noise

Procedia PDF Downloads 187
28948 Optimized Preprocessing for Accurate and Efficient Bioassay Prediction with Machine Learning Algorithms

Authors: Jeff Clarine, Chang-Shyh Peng, Daisy Sang

Abstract:

Bioassay is the measurement of the potency of a chemical substance by its effect on living animal or plant tissue. Bioassay data and chemical structures from pharmacokinetic and drug metabolism screening are mined from, and housed in, multiple databases, and bioassay predictions are calculated accordingly to determine further advancement. This paper proposes a four-step preprocessing of datasets for improving bioassay predictions. The first step is instance selection, in which the dataset is partitioned into training, testing, and validation sets. The second step is discretization, which partitions the data with the accuracy-versus-precision trade-off in mind. The third step is normalization, where data are scaled between 0 and 1 for subsequent machine learning processing. The fourth step is feature selection, where key chemical properties and attributes are generated. The streamlined results are then analyzed for the prediction of effectiveness using various machine learning tools, including Pipeline Pilot, R, Weka, and Excel. Experiments and evaluations reveal the effectiveness of various combinations of preprocessing steps and machine learning algorithms in producing more consistent and accurate predictions.
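
A minimal sklearn sketch of the four preprocessing steps described above. The dataset and all parameter choices (bin count, number of selected features) are illustrative assumptions, not the paper's settings.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import KBinsDiscretizer, MinMaxScaler
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=1000, n_features=30, random_state=0)

# Step 1: instance selection into training / test (validation split omitted)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 2: discretization (bin count trades accuracy against precision)
disc = KBinsDiscretizer(n_bins=10, encode="ordinal", strategy="uniform")
X_tr = disc.fit_transform(X_tr)

# Step 3: normalization to [0, 1]
X_tr = MinMaxScaler().fit_transform(X_tr)

# Step 4: feature selection of the most informative attributes
X_tr = SelectKBest(f_classif, k=10).fit_transform(X_tr, y_tr)
```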

Keywords: bioassay, machine learning, preprocessing, virtual screen

Procedia PDF Downloads 250
28947 Advanced Technologies and Algorithms for Efficient Portfolio Selection

Authors: Konstantinos Liagkouras, Konstantinos Metaxiotis

Abstract:

In this paper, we present a classification of the various technologies applied to the portfolio selection problem, according to the discipline and the methodological framework followed. We give a concise presentation of the emerging categories and try to identify which methods are considered obsolete and which lie at the heart of the current debate. On top of that, we provide a comparative study of the different technologies applied to efficient portfolio construction, and we suggest potential paths for future work that lie at the intersection of the presented techniques.

Keywords: portfolio selection, optimization techniques, financial models, stochastic, heuristics

Procedia PDF Downloads 401
28946 Capturing the Stress States in Video Conferences by Photoplethysmographic Pulse Detection

Authors: Jarek Krajewski, David Daxberger

Abstract:

We propose a stress detection method based on an RGB camera using heart rate detection, also known as Photoplethysmography Imaging (PPGI). This technique measures the small changes in skin colour caused by blood perfusion. A stationary lab setting with simulated video conferences is chosen, using constant light conditions and a sampling rate of 30 fps. The ground truth heart rate is measured with a common PPG system. The proposed pulse peak detection applies machine learning with brute-force feature extraction to predict heart rate pulses. The statistical analysis showed good agreement (correlation r = .79, p < 0.05) between the reference heart rate system and the proposed method. Based on these findings, the proposed method could provide a reliable, low-cost, and contactless way of measuring HR parameters in daily-life environments.
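
A minimal sketch of the contactless heart rate pipeline described above: average the green channel over the frame per time step, band-pass the trace around plausible heart rates, and count peaks. The paper's learning-based peak predictor is replaced here by simple peak picking, so this is only the signal-processing skeleton.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FPS = 30.0   # matches the paper's 30 fps sampling rate

def heart_rate(frames):                        # frames: list of HxWx3 RGB arrays
    trace = np.array([f[..., 1].mean() for f in frames])   # green channel mean
    b, a = butter(3, [0.7 / (FPS / 2), 4.0 / (FPS / 2)], btype="band")
    filtered = filtfilt(b, a, trace)           # keep 0.7-4 Hz (42-240 bpm)
    peaks, _ = find_peaks(filtered, distance=FPS / 4)
    return len(peaks) / (len(frames) / FPS) * 60.0          # beats per minute
```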

Keywords: heart rate, PPGI, machine learning, brute force feature extraction

Procedia PDF Downloads 100
28945 Segmentation of Arabic Handwritten Numeral Strings Based on Watershed Approach

Authors: Nidal F. Shilbayeh, Remah W. Al-Khatib, Sameer A. Nooh

Abstract:

Arabic offline handwriting recognition systems are considered one of the most challenging topics. Arabic handwritten numeral strings are used to automate systems that deal with numbers, such as postal codes, bank account numbers, and numbers on car plates. Segmentation of connected numerals is the main bottleneck in a handwritten numeral recognition system, and solving it can in turn increase the speed and efficiency of the recognition system. In this paper, we propose algorithms for automatic segmentation and feature extraction of Arabic handwritten numeral strings based on the watershed approach. The algorithms have been designed and implemented to achieve the main goal of segmenting and extracting strings of numeral digits written by hand, especially in the courtesy amount of bank checks. The segmentation algorithm partitions the string into multiple regions that can be associated with the properties of one or more criteria. The numeral extraction algorithm then separates the string into individual digits. Both algorithms have been tested successfully and efficiently for all types of numerals.
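
A minimal OpenCV sketch in the spirit of the watershed segmentation described above: threshold the image, seed markers inside each digit core via the distance transform, and let the watershed split touching strokes. The thresholds are illustrative, not the paper's.

```python
import cv2
import numpy as np

def segment_numerals(gray):
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
    sure_fg = np.uint8(sure_fg)
    _, markers = cv2.connectedComponents(sure_fg)    # one seed per digit core
    markers = markers + 1
    unknown = cv2.subtract(cv2.dilate(binary, None), sure_fg)
    markers[unknown == 255] = 0
    color = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    markers = cv2.watershed(color, markers)          # boundaries get label -1
    return markers                                   # region k = digit k-1
```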

Keywords: handwritten numerals, segmentation, courtesy amount, feature extraction, numeral recognition

Procedia PDF Downloads 352
28944 An Object-Based Image Resizing Approach

Authors: Chin-Chen Chang, I-Ta Lee, Tsung-Ta Ke, Wen-Kai Tai

Abstract:

Common methods for resizing images include scaling and cropping. However, both approaches introduce quality problems in the reduced images. In this paper, we propose an image resizing algorithm that separates the main objects from the background. First, we extract two feature maps, namely an enhanced visual saliency map and an improved gradient map, from an input image. We then integrate these two feature maps into an importance map. Finally, we generate the target image using the importance map. The proposed approach obtains desired results for a wide range of images.
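
A minimal sketch of the importance-map construction described above: fuse a saliency estimate with a gradient map and normalize. OpenCV's spectral-residual saliency (from opencv-contrib-python) stands in for the paper's enhanced saliency map, and the blend weight is an assumption.

```python
import cv2
import numpy as np

def importance_map(bgr, alpha=0.5):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    sal_model = cv2.saliency.StaticSaliencySpectralResidual_create()
    _, saliency = sal_model.computeSaliency(bgr)      # float32 map in [0, 1]
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    gradient = cv2.magnitude(gx, gy)
    gradient = cv2.normalize(gradient, None, 0, 1, cv2.NORM_MINMAX)
    return alpha * saliency.astype(np.float32) + (1 - alpha) * gradient
```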

Keywords: energy map, visual saliency, gradient map, seam carving

Procedia PDF Downloads 453
28943 Graph Codes - 2D Projections of Multimedia Feature Graphs for Fast and Effective Retrieval

Authors: Stefan Wagenpfeil, Felix Engel, Paul McKevitt, Matthias Hemmje

Abstract:

Multimedia indexing and retrieval is generally designed and implemented by employing feature graphs. These graphs typically contain a significant number of nodes and edges to reflect the level of detail in feature detection. A higher level of detail increases the effectiveness of the results but also leads to more complex graph structures. However, graph-traversal-based similarity algorithms are quite inefficient and computation intensive, especially for large data structures. To deliver fast and effective retrieval, an efficient similarity algorithm, particularly for large graphs, is mandatory. Hence, in this paper, we define a projection of graphs into a 2D space (Graph Codes), as well as the corresponding algorithms for indexing and retrieval. We show that calculations in this space can be performed more efficiently than graph traversals due to a simpler processing model and a high level of parallelization. As a consequence, the effectiveness of retrieval also increases substantially, as Graph Codes facilitate more levels of detail in feature fusion. Thus, Graph Codes provide a significant increase in efficiency and effectiveness, especially for multimedia indexing and retrieval, and can be applied to image, video, audio, and text information.
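
The exact Graph Code encoding is defined in the paper itself; as a rough illustration of the idea, the hedged sketch below projects a tiny feature graph onto a fixed-vocabulary 2D matrix (diagonal cells mark node occurrences, off-diagonal cells mark edge types) so that similarity becomes an elementwise comparison instead of a graph traversal. The vocabulary, encoding, and similarity measure here are all invented for illustration.

```python
import numpy as np

VOCAB = ["person", "dog", "ball", "holds", "plays_with"]   # hypothetical vocabulary
IDX = {term: i for i, term in enumerate(VOCAB)}

def graph_code(nodes, edges):
    m = np.zeros((len(VOCAB), len(VOCAB)), dtype=np.int8)
    for n in nodes:
        m[IDX[n], IDX[n]] = 1                   # node occurrence on the diagonal
    for src, rel, dst in edges:
        m[IDX[src], IDX[dst]] = IDX[rel] + 2    # relationship type off-diagonal
    return m

def similarity(a, b):
    return (a == b).mean()                      # elementwise, trivially parallel

g1 = graph_code(["person", "dog"], [("person", "holds", "ball")])
g2 = graph_code(["person", "ball"], [("person", "holds", "ball")])
print(similarity(g1, g2))
```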

Keywords: indexing, retrieval, multimedia, graph algorithm, graph code

Procedia PDF Downloads 129
28942 An Adjusted Network Information Criterion for Model Selection in Statistical Neural Network Models

Authors: Christopher Godwin Udomboso, Angela Unna Chukwu, Isaac Kwame Dontwi

Abstract:

In selecting a statistical neural network model, the Network Information Criterion (NIC) has been observed to be sample biased, because it does not account for sample size. The selection of a model from a set of fitted candidate models requires objective, data-driven criteria. In this paper, we derive and investigate the Adjusted Network Information Criterion (ANIC), based on Kullback’s symmetric divergence, which is designed to be an asymptotically unbiased estimator of the expected Kullback-Leibler information of a fitted model. The analyses show that, in general, the ANIC improves model selection across more sample sizes than the NIC does.

Keywords: statistical neural network, network information criterion, adjusted network, information criterion, transfer function

Procedia PDF Downloads 532
28941 Ranking of the Main Criteria for Contractor Selection Procedures on Major Construction Projects in Libya Using the Delphi Method

Authors: Othoman Elsayah, Naren Gupta, Binsheng Zhang

Abstract:

The construction sector constitutes one of the most important sectors in the economy of any country. Contractor selection is a critical decision undertaken by client organizations and is central to the success of any construction project. Contractor selection (CS) is a process of investigating, screening and determining whether candidate contractors have the technical and financial capability to be accepted to formally tender for construction work. The process should be conducted prior to the award of contract and is characterized by many factors, such as the contractor’s skills, experience on similar projects, track record in the industry, and financial stability. This paper evaluates the current state of knowledge in relation to the contractor selection process and presents findings from the analysis of data collected through a Delphi questionnaire survey. The survey was conducted with a group of 12 experts working in the Libyan construction industry (LCI). The paper starts by briefly explaining the general outline of the questionnaire, including the survey participation rate, the different fields the experts came from, and the business titles of the participants. It then describes the tests used to determine when the experts had reached consensus. The paper is based on research which aims to rank contractor selection criteria with specific application to major construction projects in the Libyan context. The findings of this study will be used to establish the scope of work that will form part of a PhD research project.
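
The abstract does not name its consensus tests; a statistic commonly used for this purpose in Delphi studies is Kendall's coefficient of concordance W, sketched below for m experts each ranking n criteria (1 = most important). The formula assumes no tied ranks, and the example rankings are invented.

```python
import numpy as np

def kendalls_w(ranks):                  # ranks: m experts x n criteria
    m, n = ranks.shape
    totals = ranks.sum(axis=0)          # rank sum per criterion
    S = ((totals - totals.mean()) ** 2).sum()
    return 12 * S / (m ** 2 * (n ** 3 - n))   # 0 = no agreement, 1 = full

ranks = np.array([[1, 2, 3, 4],         # hypothetical rankings by 3 experts
                  [1, 3, 2, 4],
                  [2, 1, 3, 4]])
print(kendalls_w(ranks))
```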

Keywords: contractor selection, Libyan construction industry, decision experts, Delphi technique

Procedia PDF Downloads 299
28940 Vendor Selection and Supply Quotas Determination by Using Revised Weighting Method and Multi-Objective Programming Methods

Authors: Tunjo Perič, Marin Fatović

Abstract:

In this paper, a new methodology for vendor selection and supply quota determination (VSSQD) is proposed. The VSSQD problem is solved by a model that combines the revised weighting method, for determining the objective function coefficients, with a multiple objective linear programming (MOLP) method based on cooperative game theory. The criteria used for VSSQD are (1) purchase costs and (2) the quality of the products supplied by individual vendors. The proposed methodology is tested on the example of flour purchasing for a bakery with two decision makers.
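
A minimal sketch of the weighted-sum step in the methodology above: once the weighting method has produced weights for the cost and quality objectives, supply quotas follow from a single LP. The vendor data, weights, and scalarization below are hypothetical stand-ins for the paper's revised weighting and game-theoretic MOLP.

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([2.0, 2.4, 2.1])        # price per unit of flour, 3 vendors
quality = np.array([0.7, 0.9, 0.8])     # quality score per unit
w_cost, w_quality = 0.5, 0.5            # weights from the weighting method
demand = 100.0
capacity = [60.0, 50.0, 70.0]

# Minimize weighted cost minus weighted quality, subject to meeting demand.
c = w_cost * cost - w_quality * quality
res = linprog(c, A_eq=[[1, 1, 1]], b_eq=[demand],
              bounds=[(0, cap) for cap in capacity])
print(res.x)                             # supply quota per vendor
```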

Keywords: cooperative game theory, multiple objective linear programming, revised weighting method, vendor selection

Procedia PDF Downloads 328
28939 Site Selection of CNG Station by Using FUZZY-AHP Model (Case Study: Gas Zone 4, Tehran City Iran)

Authors: Hamidrza Joodaki

Abstract:

The most complex issue in urban land use planning is site selection, which requires assessing a variety of elements and factors. Multi-criteria decision making (MCDM) methods are the best approach for dealing with such complex problems. In this paper, a combination of the analytic hierarchy process (AHP) model and fuzzy logic is used as the MCDM method to select the best site for a gas station in the 4th gas zone of Tehran. The first and most important step in the fuzzy-AHP model is the selection of criteria and sub-criteria; population, accessibility, proximity, and natural disasters were considered as the main criteria in this study. After choosing the criteria, they were weighted based on AHP using the EXPERT CHOICE software, and fuzzy logic was applied to enhance accuracy and better approach reality. Criteria layers were then produced and weighted based on the fuzzy-AHP model in GIS. Finally, the layers were integrated in the ArcGIS software, and the best site for a gas station was selected within the 4th gas zone of Tehran.
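
A minimal sketch of the AHP weighting step described above: derive criterion weights from a pairwise comparison matrix via its principal eigenvector, which is the standard AHP computation behind tools such as EXPERT CHOICE. The comparison values for population, accessibility, proximity, and natural disasters below are hypothetical.

```python
import numpy as np

A = np.array([[1,   3,   5,   7],      # pairwise judgments on Saaty's 1-9 scale
              [1/3, 1,   3,   5],
              [1/5, 1/3, 1,   3],
              [1/7, 1/5, 1/3, 1]])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, eigvals.real.argmax()])
weights = principal / principal.sum()
print(weights)    # importance of each criterion, summing to 1
```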

Keywords: multiple criteria decision making (MCDM), analytic hierarchy process (AHP), FUZZY logic, geographic information system (GIS)

Procedia PDF Downloads 327
28938 Evaluation and Selection of Construction Contractors by Polish Public Clients

Authors: Kozik Renata, Leśniak Agnieszka, Plebankiewicz Edyta

Abstract:

Contracting authorities in the public sector are obligated to apply the principles provided for in Polish law to the evaluation and selection of contractors. To analyze the contractor evaluation methods applied in practice by public clients, notices of contract award results for construction works were analyzed. The analysis shows that the procedure selected increasingly often is open competitive bidding, where the assessment of the competence of contractors is not very precise, as well as non-competitive bidding, i.e., single-source procurement. The share of procurement procedures where the only criterion is price is increasing. A solution to these problems might be the introduction of one of the forms of pre-selection of contractors. The article also briefly discusses verification systems used in EU countries for companies applying for public contracts.

Keywords: certification, contractors selection, open tendering, public investors

Procedia PDF Downloads 258
28937 Proposal of a Model Supporting Decision-Making on Information Security Risk Treatment

Authors: Ritsuko Kawasaki, Takeshi Hiromatsu

Abstract:

Management is required to understand all information security risks within an organization and to decide which risks should be treated, at what level, and at what cost. Such decision-making is not usually easy, because the various risk treatment measures must be selected at suitable application levels, and some measures may have conflicting objectives, which makes the selection even more difficult. Therefore, this paper provides a model that supports the selection of measures by applying multi-objective analysis to find an optimal solution. Additionally, a list of measures is provided to make the selection easier and more effective, without overlooking any applicable measures.
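
A minimal sketch of the underlying selection problem described above: choose the subset of security measures that maximizes total risk reduction within a cost budget. Brute force over subsets is used for clarity, and all measure data are hypothetical; the paper applies a proper multi-objective analysis rather than this single-objective simplification.

```python
from itertools import combinations

measures = {            # name: (cost, risk reduction) -- invented figures
    "firewall": (30, 40),
    "training": (20, 25),
    "encryption": (25, 35),
    "audit": (15, 15),
}
budget = 60

best = max(
    (subset for r in range(len(measures) + 1)
     for subset in combinations(measures, r)
     if sum(measures[m][0] for m in subset) <= budget),
    key=lambda s: sum(measures[m][1] for m in s),
)
print(best)             # affordable subset with the largest risk reduction
```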

Keywords: information security risk treatment, selection of risk measures, risk acceptance, multi-objective optimization

Procedia PDF Downloads 348
28936 Methodology for the Selection of Chemical Textile Products

Authors: Oscar F. Toro, Alexia Pardo Figueroa, Brigitte M. Larico

Abstract:

The development of new processes in the textile industry entails designing methodologies to select supplies that fit the requirements of these new processes. This paper presents a methodology to select chemicals that fulfill the technical specifications of a new process. The proposed methodology involves three major phases: (1) data collection on chemical products, (2) qualitative pre-selection, and (3) laboratory tests. We have applied this methodology to the selection of a binder that forms a protective film over the textile fibers and bonds them. Our findings were that five candidate products can be used in our new process: Arkofil, Elvanol, Size plus A, Size plus AC and Starch. This new methodology covers both qualitative and experimental variables and can be used to select supplies for new textile processes.

Keywords: binder, chemical products, selection methodology, textile supplies, textile fiber

Procedia PDF Downloads 264
28935 Polarity Classification of Social Media Comments in Turkish

Authors: Migena Ceyhan, Zeynep Orhan, Dimitrios Karras

Abstract:

People in modern societies continuously share their experiences, emotions, and thoughts in different areas of life. The information reaches almost everyone in real time and can have an important impact in shaping people’s way of living. This phenomenon is well recognized and used to advantage by market representatives trying to earn the most from it. Given the abundance of information, people and organizations are looking for efficient tools that filter the countless data into important information ready for analysis. This paper is a modest contribution to this field, describing the process of automatically classifying social media comments in the Turkish language as positive or negative. Once the data is gathered and preprocessed, feature sets of selected single words or groups of words are built according to the characteristics of the language used in the texts. These features are later used to train and test a system with different machine learning algorithms (Naïve Bayes, Sequential Minimal Optimization, J48, and Bayesian Linear Regression). The resulting high accuracies can provide important feedback for decision-makers to improve their business strategies accordingly.
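
A minimal sklearn sketch of the pipeline described above: word and word-pair count features feeding a Naive Bayes polarity classifier. The two example comments are invented placeholders for the paper's Turkish dataset.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

comments = ["harika bir urun, cok begendim",       # positive
            "berbat, kesinlikle tavsiye etmem"]    # negative
labels = [1, 0]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(comments, labels)
print(clf.predict(["cok guzel, tavsiye ederim"]))  # predicted polarity
```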

Keywords: feature selection, machine learning, natural language processing, sentiment analysis, social media reviews

Procedia PDF Downloads 121
28934 Pilot-free Image Transmission System of Joint Source Channel Based on Multi-Level Semantic Information

Authors: Linyu Wang, Liguo Qiao, Jianhong Xiang, Hao Xu

Abstract:

In semantic communication, existing pilot-free joint source channel coding (JSCC) wireless communication systems have unstable transmission performance and cannot effectively capture the global information and location information of images. In this paper, a pilot-free image transmission system for joint source channel coding based on multi-level semantic information (multi-level JSCC) is proposed. The transmitter of the system is composed of two networks. A feature extraction network extracts the high-level semantic features of the image, compressing the transmitted information and improving bandwidth utilization. A feature retention network preserves low-level semantic features and image details to improve communication quality. The receiver is also composed of two networks. The received high-level semantic features are passed through a feature enhancement network and fused with the low-level semantic features in the same dimension; the image dimensions are then restored through a feature recovery network, and the image location information is effectively used for image reconstruction. This paper verifies that the proposed multi-level JSCC algorithm can effectively transmit and recover image information in both AWGN and Rayleigh fading channels, and the peak signal-to-noise ratio (PSNR) is improved by 1-2 dB compared with other algorithms under the same simulation conditions.
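
A hedged Keras sketch of the two-branch design described above: a deep branch for high-level semantics, a shallow branch for low-level detail, additive Gaussian noise standing in for the AWGN channel, and a decoder that fuses both streams. All layer sizes are illustrative, not the paper's networks.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

inp = layers.Input(shape=(32, 32, 3))

# High-level semantic branch (feature extraction network)
high = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
high = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(high)

# Low-level detail branch (feature retention network)
low = layers.Conv2D(8, 3, padding="same", activation="relu")(inp)

high = layers.GaussianNoise(0.1)(high)     # stand-in for the noisy channel
low = layers.GaussianNoise(0.1)(low)

# Receiver: upsample high-level stream, fuse with detail, recover the image
up = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(high)
up = layers.Conv2DTranspose(8, 3, strides=2, padding="same", activation="relu")(up)
fused = layers.Concatenate()([up, low])
out = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(fused)

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="mse")
```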

Keywords: deep learning, JSCC, pilot-free picture transmission, multilevel semantic information, robustness

Procedia PDF Downloads 88
28933 Video Text Information Detection and Localization in Lecture Videos Using Moments

Authors: Belkacem Soundes, Guezouli Larbi

Abstract:

This paper presents a robust and accurate method for text detection and localization in lecture videos. Frame regions are classified as text or background based on visual feature analysis. However, lecture videos show significant degradation, mainly related to acquisition conditions, camera motion, and environmental changes, resulting in low-quality video. This affects the efficiency of feature extraction and description, and traditional text detection methods cannot be applied directly to lecture videos. Robust feature extraction methods dedicated to this specific video genre are therefore required for accurate text detection and extraction. The method consists of a three-step process: slide region detection and segmentation, feature extraction, and non-text filtering. For robust and effective feature extraction, moment functions of two distinct types are used: orthogonal and non-orthogonal. For the orthogonal type, both Zernike and pseudo-Zernike moments are used, whereas for the non-orthogonal type Hu moments are used. Their expressivity and description efficiency are reported and discussed. The proposed approach shows that, in general, orthogonal moments achieve higher accuracy than non-orthogonal ones, and pseudo-Zernike moments are more effective than Zernike moments, with better computation time.
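
A minimal OpenCV sketch of the non-orthogonal moment features mentioned above: the seven Hu moments of a grayscale frame region, log-scaled as is customary to compress their dynamic range. The Zernike and pseudo-Zernike computations are not reproduced here.

```python
import cv2
import numpy as np

def hu_features(region):                          # region: 8-bit grayscale patch
    m = cv2.moments(region, binaryImage=False)
    hu = cv2.HuMoments(m).flatten()               # 7 Hu invariants
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # log scale, sign kept
```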

Keywords: text detection, text localization, lecture videos, pseudo zernike moments

Procedia PDF Downloads 122
28932 Terrain Classification for Ground Robots Based on Acoustic Features

Authors: Bernd Kiefer, Abraham Gebru Tesfay, Dietrich Klakow

Abstract:

The motivation of our work is to detect the different terrain types traversed by a robot, based on acoustic data from the robot-terrain interaction. Different acoustic features and classifiers were investigated, such as Mel-frequency cepstral coefficients and gammatone frequency cepstral coefficients for feature extraction, and Gaussian mixture models and feed-forward neural networks for classification. We analyze the system’s performance by comparing our proposed techniques with other features surveyed from related work. We achieve precision and recall values between 87% and 100% per class, and an average accuracy of 95.2%. We also study the effect of varying the audio chunk size in the application phase of the models and find only a mild impact on performance.
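
A minimal sketch of the MFCC-plus-GMM variant described above: one Gaussian mixture model per terrain class, fit on that terrain's feature frames, with the highest-likelihood model winning per audio chunk. The random arrays below are synthetic stand-ins; real MFCC frames could come from, e.g., librosa.feature.mfcc.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train = {"grass": rng.normal(0.0, 1.0, (500, 13)),    # synthetic MFCC frames
         "gravel": rng.normal(1.5, 1.0, (500, 13))}   # (use real MFCCs in practice)

models = {terrain: GaussianMixture(n_components=8, random_state=0).fit(frames)
          for terrain, frames in train.items()}

def classify(chunk):                        # chunk: frames x 13 feature matrix
    return max(models, key=lambda t: models[t].score(chunk))

print(classify(rng.normal(0.0, 1.0, (50, 13))))       # -> 'grass'
```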

Keywords: acoustic features, autonomous robots, feature extraction, terrain classification

Procedia PDF Downloads 335
28931 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System

Authors: Ben Soltane Cheima, Ittansa Yonas Kelbesa

Abstract:

Speaker Identification (SI) is the task of establishing the identity of an individual based on his or her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still a need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification (CISI) system based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of Vector Quantization (VQ) and a Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of the background noise in the pre-processing step of feature extraction yields a better and more robust automatic speaker identification system. Investigating the Linde-Buzo-Gray (LBG) clustering algorithm for initializing the GMM, whose underlying parameters are estimated in the EM step, also improved the convergence rate and the system’s performance. The system further uses a relative index as a confidence measure in case of contradiction between the GMM and VQ identification results. Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
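
A minimal sketch of the VQ/GMM combination described above: an LBG-style codebook (approximated here by k-means) and a GMM are both fit per speaker on feature frames, and at test time each produces a vote. The confidence-index arbitration between contradicting votes is not reproduced, and the MFCC frames below are synthetic stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
speakers = {"alice": rng.normal(0.0, 1.0, (400, 13)),   # synthetic MFCC frames
            "bob": rng.normal(1.5, 1.0, (400, 13))}

codebooks = {s: KMeans(n_clusters=16, n_init=4, random_state=0).fit(f)
             for s, f in speakers.items()}              # LBG-like VQ codebooks
gmms = {s: GaussianMixture(n_components=8, random_state=0).fit(f)
        for s, f in speakers.items()}

def identify(frames):
    vq = max(codebooks, key=lambda s: codebooks[s].score(frames))  # lowest distortion
    gmm = max(gmms, key=lambda s: gmms[s].score(frames))           # highest likelihood
    # The paper consults a relative confidence index when the two disagree;
    # this sketch simply reports both votes.
    return {"vq": vq, "gmm": gmm}

print(identify(rng.normal(0.0, 1.0, (50, 13))))
```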

Keywords: feature extraction, speaker modeling, feature matching, Mel frequency cepstrum coefficient (MFCC), Gaussian mixture model (GMM), vector quantization (VQ), Linde-Buzo-Gray (LBG), expectation maximization (EM), pre-processing, voice activity detection (VAD), short time energy (STE), background noise statistical modeling, closed-set tex-independent speaker identification system (CISI)

Procedia PDF Downloads 280
28930 A Deep Learning Approach to Subsection Identification in Electronic Health Records

Authors: Nitin Shravan, Sudarsun Santhiappan, B. Sivaselvan

Abstract:

Subsection identification, in the context of Electronic Health Records (EHRs), is identifying the sections that are important for downstream tasks like auto-coding. In this work, we classify the text present in EHRs according to its information content, using machine learning and deep learning techniques. We first briefly describe the problem and formulate it as a text classification problem, and then discuss methods from the literature. We try two approaches: traditional feature-extraction-based machine learning methods and deep learning methods. Through experiments on a private dataset, we establish that the deep learning methods perform better than the feature-extraction-based machine learning models.

Keywords: deep learning, machine learning, semantic clinical classification, subsection identification, text classification

Procedia PDF Downloads 182
28929 Efficient Human Motion Detection Feature Set by Using Local Phase Quantization Method

Authors: Arwa Alzughaibi

Abstract:

Human motion detection is a challenging task due to a number of factors, including variable appearance, posture, and a wide range of illumination conditions and backgrounds. The first need of such a model is therefore a reliable feature set that can discriminate between a human and a non-human form with a fair amount of confidence, even under difficult conditions. With richer representations, the classification task becomes easier and improved results can be achieved. The aim of this paper is to investigate reliable and accurate human motion detection models under varying illumination levels and backgrounds. Different feature sets are tried and tested, including Histograms of Oriented Gradients (HOG), the Deformable Parts Model (DPM), Locally Decorrelated Channel Features (LDCF), and Aggregate Channel Features (ACF). We propose an efficient and reliable human motion detection approach that combines histograms of oriented gradients (HOG) and local phase quantization (LPQ) as the feature set, and implements a search pruning algorithm based on optical flow to reduce the number of false positives. Experimental results show that combining the local phase quantization descriptor with the histogram of oriented gradients performs well over a large range of illumination conditions and backgrounds compared to state-of-the-art human detectors. The area under the ROC curve (AUC) of the proposed method reached 0.781 on the UCF dataset and 0.826 on the CDW dataset, indicating that it performs comparably better than the HOG, DPM, LDCF, and ACF methods.
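
A minimal sketch of the HOG half of the feature set described above (LPQ has no standard library implementation and is omitted here): extract HOG from a detection window and score it with a linear SVM. The random training windows are stand-ins; real training uses cropped person and background windows.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def window_features(window):                  # window: 128x64 grayscale array
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

# Hypothetical training windows; labels alternate person / non-person.
rng = np.random.default_rng(0)
X = np.array([window_features(rng.random((128, 64))) for _ in range(40)])
y = np.array([0, 1] * 20)
clf = LinearSVC().fit(X, y)                   # person / non-person decision
```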

Keywords: human motion detection, histograms of oriented gradient, local phase quantization

Procedia PDF Downloads 229
28928 A Survey of Feature-Based Steganalysis for JPEG Images

Authors: Syeda Mainaaz Unnisa, Deepa Suresh

Abstract:

Due to the increased usage of public-domain channels, such as the internet, and of communication technology, there is concern about the protection of intellectual property and about security threats. This interest has led to growth in researching and implementing techniques for information hiding. Steganography is the art and science of hiding information in a private manner, such that its existence cannot be recognized. Communication using steganographic techniques makes not only the secret message but also the very presence of hidden communication invisible. Steganalysis is the art of detecting the presence of this hidden communication. Parallel to steganography, steganalysis is also gaining prominence, since the detection of hidden messages can prevent catastrophic security incidents. Steganalysis can also be incredibly helpful in identifying and revealing holes in current steganographic techniques that make them vulnerable to attacks. Through the formulation of new, effective steganalysis methods, further research can improve the resistance of the tested steganography techniques. The feature-based steganalysis method for JPEG images calculates image features using the L1 norm of the difference between a stego image and a calibrated version of the same image. This calibration helps retrieve some of the parameters of the cover image, revealing the variations between the cover and stego images and enabling more accurate detection. Applying this method to various steganographic schemes, experimental results were compared and evaluated to derive conclusions and principles for more protected JPEG steganography.
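
A hedged sketch of the calibration idea described above: crop the stego image by a few pixels (breaking the 8x8 JPEG grid), recompress it to approximate the cover, and take the L1 norm of the difference of a DCT-domain statistic as a feature. The histogram statistic and quality setting here are illustrative, not the surveyed method's exact feature set.

```python
import io
import numpy as np
from PIL import Image
from scipy.fft import dctn

def blockwise_dct_hist(gray):
    h, w = (d - d % 8 for d in gray.shape)
    coeffs = [dctn(gray[i:i+8, j:j+8], norm="ortho")
              for i in range(0, h, 8) for j in range(0, w, 8)]
    hist, _ = np.histogram(np.round(coeffs), bins=np.arange(-50, 51))
    return hist / hist.sum()

def calibration_feature(stego_img, quality=75):
    cropped = stego_img.crop((4, 4, stego_img.width, stego_img.height))
    buf = io.BytesIO()
    cropped.save(buf, "JPEG", quality=quality)      # re-JPEG the cropped image
    calibrated = Image.open(buf)
    a = blockwise_dct_hist(np.asarray(stego_img.convert("L"), float))
    b = blockwise_dct_hist(np.asarray(calibrated.convert("L"), float))
    return np.abs(a - b).sum()                       # L1 distance feature
```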

Keywords: cover image, feature-based steganalysis, information hiding, steganalysis, steganography

Procedia PDF Downloads 185
28927 Face Sketch Recognition in Forensic Application Using Scale Invariant Feature Transform and Multiscale Local Binary Patterns Fusion

Authors: Gargi Phadke, Mugdha Joshi, Shamal Salunkhe

Abstract:

Facial sketches are used as a crucial clue by criminal investigators to identify suspects when descriptions from eyewitnesses or victims are the only available evidence. A forensic artist develops a sketch showing the facial appearance of the culprit from the verbal description given by an eyewitness. In this paper, the fusion of the Scale Invariant Feature Transform (SIFT) and multiscale local binary patterns (MLBP) is proposed as a feature for recognizing forensic face sketches against a gallery of mugshot photos. This work focuses on a comparative analysis of the proposed scheme with existing algorithms under different challenges, such as illumination change and rotation. Experimental results show that the proposed scheme leads to better performance on the defined problem.
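
A minimal sketch of the two descriptors fused above: SIFT keypoint descriptors via OpenCV, and multiscale LBP histograms via uniform LBP at several radii. The paper's fusion and matching strategy is not reproduced; this only shows the feature extraction, with illustrative radii.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def sift_descriptors(gray):
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(gray, None)
    return desc                                    # one 128-D vector per keypoint

def mlbp_histogram(gray, radii=(1, 2, 3)):
    hists = []
    for r in radii:
        lbp = local_binary_pattern(gray, 8 * r, r, method="uniform")
        h, _ = np.histogram(lbp, bins=int(lbp.max()) + 1, density=True)
        hists.append(h)
    return np.concatenate(hists)                   # multiscale LBP feature
```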

Keywords: SIFT feature, MLBP, PCA, face sketch

Procedia PDF Downloads 301
28926 Unsupervised Feature Learning by Pre-Route Simulation of Auto-Encoder Behavior Model

Authors: Youngjae Jin, Daeshik Kim

Abstract:

This paper describes the cycle-accurate simulation results of weight values learned by an auto-encoder behavior model, in terms of pre-route simulation. Given these results, we visualized the first-layer representations with natural images. Many common deep learning threads have focused on learning high-level abstractions of unlabeled raw data by unsupervised feature learning. However, in handling such huge amounts of data, the computational complexity and run time of the learning methods have limited advanced research, since these algorithms were computed using only single-core CPUs. For this reason, parallel hardware, namely FPGAs, was seen as a possible solution to overcome these limitations. We adopted and simulated a ready-made auto-encoder to design a behavior model in Verilog HDL before designing the hardware. With a pre-route simulation of the auto-encoder behavior model, we obtained cycle-accurate results for the parameters of each hidden layer using MODELSIM. Cycle-accurate results are a very important factor in designing parallel digital hardware. Finally, this paper demonstrates proper operation of behavior-model-based pre-route simulation. Moreover, we visualized the learned latent representations of the first hidden layer with the Kyoto natural image dataset.

Keywords: auto-encoder, behavior model simulation, digital hardware design, pre-route simulation, Unsupervised feature learning

Procedia PDF Downloads 417
28925 Agent-Based Modeling of IoT Applications by Using Software Product Line

Authors: Asad Abbas, Muhammad Fezan Afzal, Muhammad Latif Anjum, Muhammad Azmat

Abstract:

The Internet of Things (IoT) links real objects that interact over the internet. IoT applications allow equipment to be handled and operated in accordance with environmental needs, in domains such as transportation and healthcare. IoT devices are linked together via a number of agents that act as middlemen for communications. The operation of a heat sensor, for example, differs indoors and outdoors, because agent applications work with environmental variables. In this article, we suggest using a Software Product Line (SPL) to model the features of IoT agents and applications on an XML basis. XML-based feature modelling can handle the contextual diversity within the same application domain and increases the reusability of features. For the purpose of managing contextual variability, we have embraced XML for modelling IoT applications, agents, and internet-connected devices.
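
A minimal sketch of the XML-based feature modelling described above: a tiny feature model for an IoT agent, parsed with the Python standard library to resolve which features a given deployment context activates. The schema below is invented for illustration; the paper defines its own model.

```python
import xml.etree.ElementTree as ET

FEATURE_MODEL = """
<agent name="heat-sensor">
  <feature name="sampling" mandatory="true"/>
  <feature name="outdoor-calibration" context="outdoor"/>
  <feature name="indoor-calibration" context="indoor"/>
</agent>
"""

def resolve_features(xml_text, context):
    root = ET.fromstring(xml_text)
    return [f.get("name") for f in root.findall("feature")
            if f.get("mandatory") == "true" or f.get("context") == context]

print(resolve_features(FEATURE_MODEL, "outdoor"))
# ['sampling', 'outdoor-calibration']
```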

Keywords: IoT agents, IoT applications, software product line, feature model, XML

Procedia PDF Downloads 57