Search results for: classification framework.
Paper Count: 2495

1955 Transfigurative Changes of Governmental Responsibility

Authors: Ákos Cserny

Abstract:

The area of operation of the executive power can expand either through the emergence of new areas to be influenced and their integration into that power, or at the expense of the scope of other organs vested with public authority. Such extension of the executive is acceptable within the framework of the rule of law only if, in parallel with this process, constitutional guarantees ensure that the exercise of power is kept within constitutional bounds. Failure to provide such guarantees may result in a democratic deficit and may lead to an overwhelming dominance of the executive power. The aim of this paper is therefore to present executive power and responsibility across these different dimensions.

Keywords: Confidence, constitution, executive power, liability, parliamentarism.

1954 A New History Based Method to Handle the Recurring Concept Shifts in Data Streams

Authors: Hossein Morshedlou, Ahmad Abdollahzade Barforoush

Abstract:

Recent developments in storage technology and networking architectures have made it possible for a broad range of applications to rely on data streams for quick response and accurate decision making. Because data streams are generated from real-world events, associations among the occurrences of those events naturally induce associations among the concepts of the stream. Extracting these hidden associations can be useful for predicting subsequent concepts in concept-shifting data streams. In this paper, we present a new method for learning associations among the concepts of a data stream and for predicting which concept will occur next. Knowing the next concept makes an informed update of the data model possible. The results of the conducted experiments show that the proposed method is well suited to the classification of concept-shifting data streams.
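
As an illustration only: a minimal sketch of how associations between recurring concepts could be learned once concept labels have been detected, using first-order transition counts as a stand-in for the paper's history-based method (the concept labels below are hypothetical).

```python
from collections import Counter, defaultdict

# Hypothetical sequence of concept labels detected in a data stream.
history = ["c1", "c2", "c1", "c2", "c3", "c1", "c2"]

# First-order transition counts between consecutive concepts.
transitions = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    transitions[prev][nxt] += 1

def predict_next(current):
    """Predict the most frequently observed successor of `current`."""
    counts = transitions.get(current)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("c1"))  # -> "c2": enables an informed model update
```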

Keywords: Data Stream, Classification, Concept Shift, History.

1953 An Improvement of Multi-Label Image Classification Method Based on Histogram of Oriented Gradient

Authors: Ziad Abdallah, Mohamad Oueidat, Ali El-Zaart

Abstract:

Image Multi-label Classification (IMC) assigns a label or a set of labels to an image. The high demand for image annotation and archiving on the web has led researchers to develop many algorithms for this application domain. Existing IMC techniques have two drawbacks: the elementary characteristics of the image are poorly described, and the correlation between labels is not taken into account. In this paper, we present an algorithm (MIML-HOGLPP) that addresses both limitations simultaneously. The algorithm uses the histogram of oriented gradients as the feature descriptor and applies the Label Priority Power-set as the multi-label transformation to handle label correlation. Experiments show that MIML-HOGLPP outperforms two existing techniques on several of the evaluation metrics.
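
For illustration, a hedged sketch of the two ingredients, assuming scikit-image and scikit-learn: HOG descriptors plus a plain label power-set transformation (the paper's Label Priority Power-set variant is not reproduced; all data and label names are made up).

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

# Toy multi-label data: random grayscale images, each tagged with a *set*
# of labels (all names here are hypothetical).
rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))
label_sets = [frozenset(rng.choice(["sky", "sea", "boat"],
                                   size=rng.integers(1, 3), replace=False))
              for _ in range(40)]

# Histogram-of-oriented-gradients descriptor for every image.
X = np.array([hog(im, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2)) for im in images])

# Plain label power-set: each distinct label combination becomes one class,
# so correlations between labels are preserved in the class definition.
lp = {s: i for i, s in enumerate(sorted(set(label_sets), key=sorted))}
y = np.array([lp[s] for s in label_sets])

clf = SVC().fit(X, y)
inverse = {i: set(s) for s, i in lp.items()}
print(inverse[int(clf.predict(X[:1])[0])])   # predicted label set
```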

Keywords: Data mining, information retrieval system, multi-label, problem transformation, histogram of gradients.

1952 Hybrid Approach for Software Defect Prediction Using Machine Learning with Optimization Technique

Authors: C. Manjula, Lilly Florence

Abstract:

Software technology is developing rapidly, driving the growth of various industries, and software-based applications are now widely adopted for business purposes. For any software company, developing reliable software is a challenging task because a faulty module can harm both the business and the industry's growth. There is therefore a need for techniques that can predict software defects early. Because manual prediction is too complex, automated software defect prediction techniques have been introduced. These techniques learn patterns from previous software versions and use them to find defects in the current version; they have attracted researchers because of their significant impact on industrial growth through early bug identification. Several studies have been carried out on this basis, but achieving the desired prediction performance remains challenging. To address this issue, we present a machine-learning-based hybrid technique for software defect prediction. First, a Genetic Algorithm (GA) with an improved fitness function is used to optimize the feature subsets of the data sets. These features are then processed by a Decision Tree (DT) classification model. Finally, an experimental study compares the proposed GA-DT hybrid approach with the plain DT classification technique. The results show that the hybrid approach achieves better classification accuracy.
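
A minimal sketch of the GA-plus-decision-tree idea, assuming scikit-learn and a generic fitness function (cross-validated accuracy) rather than the paper's improved one:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=5, random_state=0)

def fitness(mask):
    """Cross-validated accuracy of a decision tree on the masked features."""
    if not mask.any():
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# Plain generational GA over binary feature masks.
pop = rng.random((20, X.shape[1])) < 0.5
for _ in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]          # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        child ^= rng.random(X.shape[1]) < 0.02       # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best), "acc:", fitness(best))
```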

Keywords: Decision tree, genetic algorithm, machine learning, software defect prediction.

1951 Agent-Based Simulation and Analysis of Network-Centric Air Defense Missile Systems

Authors: Su-Yan Tang, Wei Zhang, Shan Mei, Yi-Fan Zhu

Abstract:

Network-Centric Air Defense Missile Systems (NCADMS) represent a major advance in air defense missile systems and are currently one of the major research issues in the military domain. Because knowledge of and experience with NCADMS are scarce, modeling and simulation is a more effective approach to operational analysis than equation-based methods. However, the complex dynamic interactions among entities and the flexible architectures of NCADMS place new requirements and challenges on the simulation framework and models. Agent-Based Simulation (ABS) explicitly addresses the modeling of heterogeneous individual behaviors: agents can sense and understand their surroundings, make decisions, act on the environment, and dynamically cooperate with others to perform assigned tasks, which makes ABS an effective approach for exploring the new operational characteristics emerging in NCADMS. In this paper, based on an analysis of the network-centric architecture and the new cooperative engagement strategies of NCADMS, an agent-based simulation framework was designed by extending the framework of the System Effectiveness Analysis Simulation (SEAS). The framework specifies the components, their relationships and interactions, and the structure and behavior rules of an agent in NCADMS. Scenario simulations were used to analyze the information and decision superiority and the operational advantages of NCADMS, and some suggestions for its future development are provided.

Keywords: Air defense missile systems, network-centric, agent-based simulation, simulation framework, information superiority, decision superiority, operational advantages.

1950 Investigation on Feature Extraction and Classification of Medical Images

Authors: P. Gnanasekar, A. Nagappan, S. Sharavanan, O. Saravanan, D. Vinodkumar, T. Elayabharathi, G. Karthik

Abstract:

In this paper we present an in-depth study of biomedical images, tagging them with basic extracted features (e.g., color, pixel values). Classification is performed by a nearest neighbor classifier with various distance measures, together with an automatic combination of classifier results. The process selects a subset of relevant features from a larger group of image features, which also gives a better understanding of the image by indicating which features are important; accuracy can be improved by increasing the number of selected features. Various classification approaches have evolved for medical images, such as the Support Vector Machine (SVM), used here for classifying bacterial types; Ant Colony Optimization, used to obtain optimal results thanks to its high approximation capability and fast convergence; and texture feature extraction based on Gabor wavelets.
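
A short sketch of nearest-neighbour classification under several distance measures, with scikit-learn's digits data standing in for the medical images:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Stand-in for medical images: small grayscale digit images as pixel vectors.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Nearest-neighbour classification under several distance measures.
for metric in ["euclidean", "manhattan", "chebyshev", "cosine"]:
    knn = KNeighborsClassifier(n_neighbors=1, metric=metric).fit(X_tr, y_tr)
    print(metric, round(knn.score(X_te, y_te), 3))
```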

Keywords: ACO (Ant Colony Optimization), correlogram, CCM (co-occurrence matrix), RTS (rough-set theory).

1949 A Crisis Communication Network Based on Embodied Conversational Agents System with Mobile Services

Authors: Ong Sing Goh, C. Ardil, Chun Che Fung, Kok Wai Wong, Arnold Depickere

Abstract:

In this paper, we propose a new framework that incorporates an intelligent software agent into a crisis communication portal (CCNet) in order to send alert news to subscribed users via email and mobile services such as the Short Message Service (SMS), the Multimedia Messaging Service (MMS) and the General Packet Radio Service (GPRS). The mobile content can be delivered to either a mobile phone or a Personal Digital Assistant (PDA). This research shows that, with the proposed framework, the embodied conversational agents system can answer questions intelligently through its multilayer architecture, while the extended framework delivers content through a more humanoid interface on mobile devices.

Keywords: Crisis Communication Network (CCNet), Embodied Conversational Agents (ECAs), Mobile Services, Artificial Intelligence Neural-network Identity (AINI).

1948 Non-negative Principal Component Analysis for Face Recognition

Authors: Zhang Yan, Yu Bin

Abstract:

Principal component analysis (PCA) is often combined with state-of-the-art classification algorithms to recognize human faces. However, as a global feature selection algorithm, PCA captures only the features contributing to the global characteristics of the data; it misses the features contributing to local characteristics, because each principal component contains only some level of the data's global structure. In this study, we present a novel face recognition approach using non-negative principal component analysis, which adds a non-negativity constraint to improve data locality and help elucidate latent data structures. Experiments performed on the Cambridge ORL face database demonstrate the strong performance of the algorithm in recognizing human faces compared with the PCA and NREMF approaches.
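
As a rough illustration of the non-negativity constraint, a projected power iteration that extracts one non-negative principal component; this is a common heuristic and may differ from the paper's exact NPCA algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 50))                 # toy data matrix (rows = samples)
C = np.cov(X, rowvar=False)               # sample covariance

# Projected power iteration: maximise w.T @ C @ w subject to w >= 0, ||w|| = 1.
w = np.full(C.shape[0], 1 / np.sqrt(C.shape[0]))
for _ in range(100):
    w = C @ w
    w = np.clip(w, 0, None)               # project onto the non-negative orthant
    w /= np.linalg.norm(w)

print("explained variance of the non-negative component:", w @ C @ w)
```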

Keywords: Classification, face recognition, non-negative principal component analysis (NPCA).

1947 A Case-Based Reasoning-Decision Tree Hybrid System for Stock Selection

Authors: Yaojun Wang, Yaoqing Wang

Abstract:

Stock selection is an important decision-making problem, and many machine learning and data mining technologies have been employed to build automatic stock-selection systems. A profitable stock-selection system should consider both a stock's investment value and the market timing. In this paper, we present a hybrid system that takes both into account: a case-based reasoning (CBR) model performs the stock classification, while a decision-tree model assists with market timing and stock selection. Experiments show that this hybrid system outperforms other techniques in classification accuracy, average return and Sharpe ratio.

Keywords: Case-based reasoning, decision tree, stock selection, machine learning.

1946 Identity Verification Using k-NN Classifiers and Autistic Genetic Data

Authors: Fuad M. Alkoot

Abstract:

DNA data have been used in forensics for decades; current research, however, looks at using DNA as a biometric identity verification modality, with the goal of improving the speed of identification. We investigate whether gene data originally collected for autism detection can serve identification applications, and in particular whether our data preprocessing technique yields data useful as a biometric identification tool. We experiment with the nearest neighbor classifier to identify subjects. Results show that the optimal classification rate is achieved when the test set is corrupted by normally distributed noise with zero mean and a standard deviation of 1, and the rate remains close to optimal as the noise standard deviation increases to 3. This shows that the data can be used for identity verification with high accuracy using a classifier as simple as the k-nearest neighbor (k-NN).
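
A minimal sketch of the noise-corruption experiment, assuming scikit-learn and using a generic dataset as a stand-in for the preprocessed genetic data:

```python
import numpy as np
from sklearn.datasets import load_digits  # stand-in for the genetic data
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)

# Corrupt the test set with zero-mean Gaussian noise of growing spread.
rng = np.random.default_rng(0)
for sigma in [0.0, 1.0, 2.0, 3.0]:
    noisy = X_te + rng.normal(0.0, sigma, X_te.shape)
    print(f"sigma={sigma}: classification rate={knn.score(noisy, y_te):.3f}")
```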

Keywords: Biometrics, identity verification, genetic data, k-nearest neighbor.

1945 Roof Material Detection Based on Object-Based Approach Using WorldView-2 Satellite Imagery

Authors: Ebrahim Taherzadeh, Helmi Z. M. Shafri, Kaveh Shahi

Abstract:

One of the most important tasks in urban remote sensing is the detection of impervious surfaces (IS), such as roofs and roads; detecting IS in heterogeneous areas, however, remains one of the most challenging tasks. In this study, an object-based approach for detecting concrete roofs is proposed. A new rule-based classification was developed to detect concrete roof tiles and applied to WorldView-2 imagery. The results show that the proposed rule has good potential for identifying concrete roof material from WorldView-2 images, with 85% accuracy.

Keywords: Urban remote sensing, impervious surface, object-based, roof material, concrete tile, WorldView-2.

1944 Video Classification by Partitioned Frequency Spectra of Repeating Movements

Authors: Kahraman Ayyildiz, Stefan Conrad

Abstract:

In this paper we present a system for classifying videos by their frequency spectra. Many videos contain activities with repeating movements: sports videos, home improvement videos, and videos of mechanical motion are typical examples. The motion in such videos usually repeats with a certain main frequency and several side frequencies, and transforming it into the frequency domain via the FFT reveals these frequencies. The average amplitudes of frequency intervals can then be used as features of the cyclic motion, so determining these features helps classify videos with repeating movements. We explain how to compute frequency spectra for video clips and how to use them for classification. Our approach treats the series of image moments over the frames as a function, which is then transformed into its frequency domain.
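
A compact sketch of the pipeline under stated assumptions (synthetic frames, mean intensity as the image moment): FFT the moment series and average amplitudes per frequency band to form the feature vector.

```python
import numpy as np

def moment_series(frames):
    """Mean intensity (0th-order image moment) of each frame."""
    return np.array([f.mean() for f in frames])

# Toy clip: 128 frames of synthetic repeating motion plus noise.
rng = np.random.default_rng(0)
frames = [np.full((32, 32), np.sin(2 * np.pi * 0.1 * k)) +
          rng.normal(0, 0.1, (32, 32)) for k in range(128)]

signal = moment_series(frames)
amp = np.abs(np.fft.rfft(signal - signal.mean()))   # frequency spectrum

# Average amplitude per frequency interval = feature vector for the clip.
bands = np.array_split(amp[1:], 8)
features = np.array([b.mean() for b in bands])
print(features.round(2))  # peaks in the band containing the main frequency
```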

Keywords: Action recognition, frequency feature, motion recognition, repeating movement, video classification.

1943 Pictorial Multimodal Analysis of Selected Paintings of Salvador Dali

Authors: Shaza Melies, Abeer Refky, Nihad Mansoor

Abstract:

Multimodality involves the interplay between verbal and visual components in various discourses; a painting is a form of communication between the artist and the viewer through colors, shades, objects, and the title. This paper presents how multimodality can be used to decode the verbal and visual dimensions a painting holds. For that purpose, the study applies Kress and van Leeuwen's theoretical framework of visual grammar to the multimodal semiotic resources of selected paintings of Salvador Dali, investigating their visual decoding and analyzing their social and political meanings. The paper attempts to answer the following questions: 1. How far can multimodality decode the verbal and non-verbal meanings of surrealistic art? 2. How can Kress and van Leeuwen's framework of visual grammar be applied to analyze Dali's paintings? 3. To what extent is this framework apt to deliver Dali's political and social messages? The paper reached the following findings: the framework's descriptive tools (representational, interactive, and compositional meanings) can be used to analyze the paintings' titles and visual elements, and social and political messages were delivered through the appropriate use of color, gesture, vectors, modality, and the representation of social actors.

Keywords: Multimodality, multimodal analysis, paintings analysis, Salvador Dali, visual grammar.

1942 Lean Environmental Management Integration System (LEMIS) Framework Development

Authors: Puvanasvaran, A. P., Suresh V., N. Norazlin

Abstract:

The Lean Environmental Management Integration System (LEMIS) framework integrates the core elements of lean with ISO 14001. Curiosity about the relationship between continuous improvement and the sustainability of lean implementation motivated this study. The characteristics of the ISO 14001 standard clauses and the core elements of lean principles were drawn from past studies and literature reviews, and a survey of ISO 14001-certified companies examined continual improvement under the standard. The study found a significant, positive relationship between the lean principles of value, value stream, flow, pull and perfection and the ISO 14001 requirements. LEMIS supports continuous improvement and sustainability, and the integrated system can be implemented in any manufacturing company: it raises awareness of why organizations need to sustain their environmental management system, while the lean principles streamline the company's daily activities. The study demonstrated that there is no sacrifice or trade-off between lean principles and ISO 14001 requirements. The framework developed here can be further simplified in the future, especially the method of crossing each sub-requirement of the ISO 14001 standard with the core elements of the lean principles.

Keywords: LEMIS, ISO 14001, integration, framework.

1941 ANFIS Approach for Locating Faults in Underground Cables

Authors: Magdy B. Eteiba, Wael Ismael Wahba, Shimaa Barakat

Abstract:

This paper presents a method for fault identification, classification and fault location estimation based on the Discrete Wavelet Transform and an Adaptive Network Fuzzy Inference System (ANFIS) for medium voltage cables in the distribution system.

Different faults and locations are simulated by ATP/EMTP, and selected features of the wavelet-transformed signals are used as inputs to train the ANFIS. An accurate fault classifier and locator algorithm was then designed, trained and tested using current samples only. The ANFIS outputs were compared with the real outputs, and the percentage error between them was found to be less than three percent. Hence, the proposed technique offers high accuracy in both fault classification and fault location.
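
For illustration, a hedged sketch of the wavelet feature-extraction step using PyWavelets; the ANFIS classifier/locator itself is not reproduced here, and the simulated waveform stands in for ATP/EMTP current samples.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
current = np.sin(np.linspace(0, 40 * np.pi, 2048)) + rng.normal(0, 0.05, 2048)

# Multi-level discrete wavelet decomposition of a current waveform.
coeffs = pywt.wavedec(current, "db4", level=4)

# Energy of each decomposition level as the feature vector fed to the
# fault classifier/locator (an ANFIS model in the paper).
features = np.array([np.sum(c ** 2) for c in coeffs])
print(features.round(3))
```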

Keywords: ANFIS, Fault location, Underground Cable, Wavelet Transform.

1940 An Analysis of Classification of Imbalanced Datasets by Using Synthetic Minority Over-Sampling Technique

Authors: Ghada A. Alfattni

Abstract:

Analysing unbalanced datasets is one of the challenges that practitioners in the machine learning field face, and much research has examined the effectiveness of the synthetic minority over-sampling technique (SMOTE) in addressing this issue. The aim of this study was to compare the effectiveness of SMOTE across different models on unbalanced datasets. Three classification models (logistic regression, support vector machine and nearest neighbour) were tested on multiple datasets; the same datasets were then oversampled with SMOTE and fed to the three models again to compare the performance differences. The experiments show that a higher number of nearest neighbours yields lower error rates.
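
A minimal sketch of the comparison, assuming imbalanced-learn's SMOTE and a synthetic imbalanced dataset:

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic imbalanced binary problem (95% majority class).
X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {"LR": LogisticRegression(max_iter=1000),
          "SVM": SVC(), "k-NN": KNeighborsClassifier(n_neighbors=5)}
X_sm, y_sm = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

for name, m in models.items():
    plain = balanced_accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
    smote = balanced_accuracy_score(y_te, m.fit(X_sm, y_sm).predict(X_te))
    print(f"{name}: without SMOTE {plain:.3f}, with SMOTE {smote:.3f}")
```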

Keywords: Imbalanced datasets, SMOTE, machine learning, logistic regression, support vector machine, nearest neighbour.

1939 Framework for Government ICT Projects

Authors: Manal Rayes

Abstract:

In their efforts to use information and communication technology to enhance the quality of public service delivery, national and local governments around the world are competing to introduce more ICT applications as tools to automate processes related to law enforcement and policy execution, to increase citizen orientation, trust and satisfaction, and to create one-stop shops for public services. In their implementation, e-government ICTs need to maintain transparency, participation and collaboration. Because of this diverse mix of goals and requirements, e-government systems must be designed around special considerations in order to eliminate the risks of non-compliance with government regulations, citizen dissatisfaction, or market rejection. In this article we suggest a framework with guidelines for designing government information systems that takes into account the special requirements of the public sector, and we introduce two case studies showing how applying those guidelines results in a more solid system design.

Keywords: e-government, framework, guidelines, system design.

1938 Estimation Model of Dry Docking Duration Using Data Mining

Authors: Isti Surjandari, Riara Novita

Abstract:

Maintenance is one of the most important activities in the shipyard industry, yet it is sometimes not supported by adequate services from the shipyard, and inaccurate estimates of ship maintenance duration are still common. This makes estimating ship maintenance duration crucial. This study uses a data mining approach, CART (Classification and Regression Trees), to estimate the duration of maintenance that is limited to dock works, known as dry docking. Using the volume of dock works as the input, four classes of dry docking duration were obtained, each with its own linear model and job criteria. These linear models can then be used to estimate the duration of dry docking from the job criteria.
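
A hedged sketch of the idea with scikit-learn's CART implementation: a shallow regression tree splits jobs into duration classes by work volume (the data below are made up; the study's per-class linear models are only indicated in comments).

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Hypothetical records: volume of dock works (x) vs. maintenance days (y).
rng = np.random.default_rng(0)
volume = rng.uniform(10, 500, size=(200, 1))
days = 5 + 0.05 * volume[:, 0] + rng.normal(0, 2, 200)

# A shallow CART splits the jobs into a few duration classes; a linear
# model could then be fitted within each leaf, as in the study.
tree = DecisionTreeRegressor(max_leaf_nodes=4, random_state=0)
tree.fit(volume, days)

leaf_of_job = tree.apply([[120.0]])[0]       # duration class for a new job
print("class:", leaf_of_job, "estimated days:", tree.predict([[120.0]])[0])
```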

Keywords: Classification and regression tree (CART), data mining, dry docking, maintenance duration.

1937 A Framework for the Evaluation of Infrastructures’ Serviceability

Authors: Kyonghoon Kim, Wonyoung Park, Taeil Park

Abstract:

Aging infrastructure has become a serious social problem, increasing the need for legislation of new, strict guidelines for infrastructure management. Although existing guidelines provide the basics of how to evaluate and manage the condition of infrastructure, their evaluation procedures need improvement: most focus mainly on structural condition and do not properly reflect service aspects such as performance, public demand and capacity, which are of significant value to the public. Despite their importance, these factors are often neglected in infrastructure evaluations because they are quite subjective and difficult to quantify in a rational manner. This study therefore proposes a framework to properly identify and evaluate service indicators, and shows that these indicators can be grouped into two categories and evaluated using AHP and fuzzy methods. The proposed framework is expected to assist governmental agencies in establishing effective investment strategies for infrastructure improvements.
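
As a worked illustration of the AHP step, computing indicator weights from a hypothetical pairwise comparison matrix via the principal eigenvector, plus Saaty's consistency ratio:

```python
import numpy as np

# Pairwise comparison matrix for three hypothetical service indicators
# (performance, public demand, capacity) on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# AHP weights = principal eigenvector of A, normalised to sum to 1.
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()

# Consistency ratio check (random index RI = 0.58 for a 3x3 matrix).
ci = (np.max(np.real(vals)) - len(A)) / (len(A) - 1)
print("weights:", w.round(3), "CR:", round(ci / 0.58, 3))
```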

Keywords: Infrastructure, evaluation, serviceability, fuzzy.

1936 Liver Tumor Detection by Classification through FD Enhancement of CT Image

Authors: N. Ghatwary, A. Ahmed, H. Jalab

Abstract:

In this paper, an approach to liver tumor detection in computed tomography (CT) images is presented. The detection process classifies the features of target liver cells as tumor or non-tumor. Fractional differential (FD) filtering is applied to enhance the liver CT images, with the aim of sharpening texture and edge features; a fusion method then merges the various enhanced images to produce a range of feature improvements, which increases the classification accuracy. Each image is divided into NxN non-overlapping blocks from which the desired features are extracted. A support vector machine (SVM) classifier is trained on a supplied dataset different from the tested one, and finally each block cell is classified as tumor or not. The approach was validated on a group of patients' CT liver tumor datasets, and the experimental results demonstrate the efficiency of the proposed detection technique.

Keywords: Fractional differential (FD), Computed Tomography (CT), fusion.

1935 Clustering Multivariate Empiric Characteristic Functions for Multi-Class SVM Classification

Authors: María-Dolores Cubiles-de-la-Vega, Rafael Pino-Mejías, Esther-Lydia Silva-Ramírez

Abstract:

A dissimilarity measure between the empirical characteristic functions of the subsamples associated with the different classes in a multivariate data set is proposed. This measure can be computed efficiently and depends on all the cases of each class. It may be used to find groups of similar classes, which could be joined for further analysis, or to perform an agglomerative hierarchical cluster analysis of the set of classes. The resulting tree can serve to build a family of binary classification models, offering an alternative approach to the multi-class SVM problem. We tested this dendrogram-based SVM approach against the one-against-one SVM approach on four publicly available data sets, three of them microarray data. The two performed equivalently, but the dendrogram-based solution requires a smaller number of binary SVM models.
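
A minimal sketch under stated assumptions (Iris as the data set, random evaluation points, squared-modulus averaging as the dissimilarity, average linkage): cluster the classes by the distance between their empirical characteristic functions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
T = rng.normal(size=(50, X.shape[1]))          # evaluation points t

def ecf(sample):
    """Empirical characteristic function of a class sample at points T."""
    return np.exp(1j * sample @ T.T).mean(axis=0)

phis = [ecf(X[y == c]) for c in np.unique(y)]
k = len(phis)
D = np.zeros((k, k))
for i in range(k):
    for j in range(i + 1, k):
        D[i, j] = D[j, i] = np.mean(np.abs(phis[i] - phis[j]) ** 2)

# Agglomerative clustering of the classes; the resulting tree guides
# which binary SVM models to build.
Z = linkage(squareform(D), method="average")
print(Z)
```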

Keywords: Cluster Analysis, Empiric Characteristic Function, Multi-class SVM, R.

1934 Using the Technology-Organization-Environment Framework and Zuboff’s Concepts for Understanding Environmental Sustainability and RFID: Two Case Studies

Authors: Rebecca Angeles

Abstract:

Radio frequency identification (RFID) has been recognized as a key enabler of efficient and effective supply chains. Recently, with increasing concern for environmental sustainability, researchers and practitioners have been exploring the role of RFID in supporting “green supply chains.” This qualitative study uses the technology-organization-environment framework of Tornatzky and Fleischer, and Zuboff’s concepts of automating-informating-transformating in analyzing two case studies involving RFID use: the recycling of Hewlett Packard inkjet printers and the garbage and recycling program of the City of Grand Rapids, Michigan.

Keywords: Environmental sustainability, green supply chain management, radio frequency identification, technology-organization-environment framework, Zuboff's automate-informate-transformate concepts.

1933 Classification of Business Models of Italian Bancassurance by Balance Sheet Indicators

Authors: Andrea Bellucci, Martina Tofi

Abstract:

The aim of this paper is to analyze the business models of bancassurance in Italy for the life business. The life insurance business is highly developed in the Italian market, where bank branches hold 80% of the market share. Given its maturity, the life insurance market needs to consolidate its organizational form to allow the development of the non-life business, which today collects few premiums but represents a great opportunity to enlarge bancassurance's market share by exploiting its strength in the distribution channel while the market share of independent agents is decreasing. Starting from the main bancassurance business model for the life business, this paper analyzes the performance of life companies in the Italian market through balance sheet indicators and the main discriminant variables of the business models, observing trends from 2013 to 2015 using a database managed by the Associazione Nazionale delle Imprese di Assicurazione (ANIA). The applied approach is bottom-up, starting from variables and indicators to define the classification of business models; Ward's statistical classification algorithm is employed to design the business model profiles. The result of the analysis is a representation of the main business models through their indicator profiles. This unsupervised analysis is limited by the judgmental dimension of the researchers' choices, but it makes it possible to obtain a design of effective business models.
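
A short sketch of the classification step, assuming SciPy's Ward linkage on standardized, made-up balance sheet indicators:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.preprocessing import StandardScaler

# Hypothetical balance sheet indicators per life company
# (e.g. premium growth, expense ratio, reserve ratio).
rng = np.random.default_rng(0)
indicators = rng.normal(size=(30, 3))

# Ward's agglomerative algorithm on standardized indicators.
Z = linkage(StandardScaler().fit_transform(indicators), method="ward")
profiles = fcluster(Z, t=4, criterion="maxclust")   # 4 business-model profiles
print(profiles)
```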

Keywords: Balance sheet indicators, Bancassurance, business models, ward algorithm.

1932 Target Signal Detection Using MUSIC Spectrum in Noise Environment

Authors: Sangjun Park, Sangbae Jeong, Moonsung Han, Minsoo Hahn

Abstract:

In this paper, a target signal detection method using the multiple signal classification (MUSIC) algorithm is proposed. MUSIC is a subspace-based direction-of-arrival (DOA) estimation method that detects the DOAs of multiple sources using the inverse of the eigenvalue-weighted eigenspectra. To apply the algorithm to target signal detection for GSC-based beamforming, we utilize its spectral response at the target DOA in noisy conditions. For evaluation, the performance of the proposed target signal detection method is compared with that of normalized cross-correlation (NCC), fixed beamforming, and the power ratio method. Experimental results show that the proposed algorithm significantly outperforms the conventional ones in receiver operating characteristic (ROC) curves.
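
A compact sketch of the MUSIC spectrum for a simulated uniform linear array with one source; the GSC-based detection logic of the paper is not reproduced:

```python
import numpy as np

M, snapshots = 8, 200                 # sensors in a uniform linear array
rng = np.random.default_rng(0)
theta_true = np.deg2rad(20.0)

def steering(theta):
    # Half-wavelength element spacing.
    return np.exp(-1j * np.pi * np.arange(M) * np.sin(theta))

# Simulated snapshots: one source plus white noise.
s = rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)
X = np.outer(steering(theta_true), s)
X += 0.5 * (rng.normal(size=X.shape) + 1j * rng.normal(size=X.shape))

R = X @ X.conj().T / snapshots                    # sample covariance
vals, vecs = np.linalg.eigh(R)                    # ascending eigenvalues
En = vecs[:, :-1]                                 # noise subspace (1 source)

# MUSIC pseudospectrum: peaks where steering vectors are orthogonal
# to the noise subspace.
angles = np.deg2rad(np.linspace(-90, 90, 361))
p = [1 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in angles]
print("estimated DOA:", np.rad2deg(angles[int(np.argmax(p))]), "deg")
```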

Keywords: Beamforming, direction of arrival, multiple signal classification, target signal detection.

1931 Hybrid Neural Network Methods for Lithology Identification in the Algerian Sahara

Authors: S. Chikhi, M. Batouche, H. Shout

Abstract:

In this paper, we combine a probabilistic neural method with radial basis functions to construct the lithofacies of the DF01, DF02 and DF03 wells situated in the Triassic province of Algeria (Sahara). Lithofacies determination is a crucial problem in reservoir characterization. Our objective is to facilitate the experts' work in the geological domain and to allow them to quickly obtain the structure and nature of the terrain around the borehole. The study designs a tool that supports automatic deduction from numerical data: a probabilistic formalism enhances the classification process initiated by a Self-Organizing Map procedure. From well-log data, our system produces the lithofacies of the reservoir wells concerned in a form easy to read by a geology expert, who can then identify the potential for oil production at a given source and so form the basis for estimating financial returns and economic benefits.

Keywords: Classification, Lithofacies, Probabilistic formalism, Reservoir characterization, Well-log data.

1930 Classification Based on Deep Neural Cellular Automata Model

Authors: Yasser F. Hassan

Abstract:

Deep learning is a branch of machine learning that has achieved great success in research and applications. Cellular neural networks are arrays of nonlinear analog processors, called cells, connected in a way that allows parallel computation. This paper discusses how a deep learning structure can represent a neural cellular automata model. The proposed learning technique for the cellular automata model is examined from the perspective of deep learning structure: a deep neural cellular system modifies each neuron based on the behavior of the individual and its decisions, as a result of multi-level deep structure learning. The paper presents the architecture of the model and gives simulation results of the approach. The results of the implementation enrich the deep neural cellular automata system and shed light on the formulation of the model and the learning within it.

Keywords: Cellular automata, neural cellular automata, deep learning, classification.

1929 Rank-Based Chain-Mode Ensemble for Binary Classification

Authors: Chongya Song, Kang Yen, Alexander Pons, Jin Liu

Abstract:

In machine learning, ensembling is a common methodology for improving performance over multiple base classifiers. However, true predictions are often canceled out by false ones during consensus, owing to a phenomenon called the "curse of correlation": strong interference among the predictions produced by the base classifiers. In addition, existing practices still cannot effectively mitigate the problem of imbalanced classification. Our experimental analysis indicates that both problems are caused by inherent deficiencies in the consensus approach. We therefore propose an enhanced ensemble algorithm that adopts a designed rank-based chain-mode consensus to overcome them. To evaluate the proposed algorithm, we use the well-known benchmark data set NSL-KDD (the improved version of KDDCup99 produced by the University of New Brunswick) to compare it against 8 common ensemble algorithms. Each compared ensemble classifier uses the same 22 base classifiers, so the differences in accuracy and reliability improvements over the base classifiers are truly revealed. The proposed rank-based chain-mode consensus proves to be a more effective ensemble solution than the traditional consensus approach, outperforming the 8 ensemble algorithms by 20% on almost all compared metrics, which include accuracy, precision, recall, F1-score and the area under the receiver operating characteristic curve.
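
For contrast, a minimal sketch of the traditional flat consensus (majority vote) that the rank-based chain-mode ensemble is designed to replace; the proposed consensus itself is not reproduced here:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

base = [LogisticRegression(max_iter=1000), GaussianNB(),
        DecisionTreeClassifier(random_state=0)]
preds = np.array([m.fit(X_tr, y_tr).predict(X_te) for m in base])

# Flat majority vote: correlated false votes can cancel true minority votes,
# the "curse of correlation" the rank-based chain-mode consensus targets.
vote = (preds.mean(axis=0) > 0.5).astype(int)
print("ensemble accuracy:", (vote == y_te).mean())
```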

Keywords: Consensus, curse of correlation, imbalanced classification, rank-based chain-mode ensemble.

1928 An Optimal Feature Subset Selection for Leaf Analysis

Authors: N. Valliammal, S.N. Geethalakshmi

Abstract:

This paper describes an optimal approach to feature subset selection for classifying leaves, based on a Genetic Algorithm (GA) and Kernel-Based Principal Component Analysis (KPCA). Because selecting the optimal features is highly complex, classification of leaf image data is a critical task. Shape, texture and colour features are first extracted from the leaf images and then optimized separately by the GA and by KPCA. The approach performs an intersection operation over the subsets obtained from the two optimization processes, and the most common matching subset is forwarded to train a Support Vector Machine (SVM). Our experimental results show that feature subset selection with GA and KPCA, using an SVM as the classifier, is computationally effective and improves the accuracy of the classifier.

Keywords: Optimization, feature extraction, feature subset, classification, GA, KPCA, SVM, computation.

1927 The Effect of IT Service Quality Attributes on Supply Chain Management and Performance

Authors: Narasimhaiah Gorla, Annibal Scavarda

Abstract:

Nowadays, when most leading economies are service oriented and e-business is widely used in their management, supply chain management has become one of the most studied and practiced fields. Quality plays an important role in today's business processes, so it is important to understand the impact of IT service quality on the performance of supply chains. This paper starts by analyzing the Supply Chain Operations Reference (SCOR) model and each of its five activities: Plan, Source, Make, Deliver, and Return. It then proposes a framework for analyzing the effect of IT service quality on supply chain performance. Using the proposed framework, hypotheses are framed for the direct effect of IT service quality on supply chain performance and for its indirect effect through effective supply chain management. The framework will be validated empirically through surveys of executives of various organizations and statistical analyses of the collected data.

Keywords: IT service quality, SCOR model, Supply Chain Management, Supply Chain Performance

1926 Organizational Dimensions as Determinant Factors of KM Approaches in SMEs

Authors: Mehdi Shami Zanjani, Shohreh Mehrasa, Mandana Modiri

Abstract:

In the current economy of increasing global competition, many organizations attempt to use knowledge as a means to gain sustainable competitive advantage. As with large organizations, the success of SMEs can be linked to how well they manage their knowledge; yet despite the profusion of research on knowledge management in large organizations, few studies have analyzed KM in SMEs. This research proposes a new framework showing the determinant role of organizational dimensions in KM approaches. The paper and its propositions are based on a literature review and analysis. Personalization versus codification, individualization versus institutionalization, and IT-based versus non-IT-based are highlighted as three distinct dimensions of knowledge management approaches. The study contributes to research by providing a more nuanced classification of KM approaches and guides managers on the types of KM approaches to adopt based on the size, geographical dispersion and task nature of SMEs. To the author's knowledge, the paper is the first of its kind to examine whether there are suitable configurations of KM approaches for SMEs with different dimensions. It provides valuable information that will hopefully help the SME sector to accomplish KM.

Keywords: Knowledge Management, Knowledge Management Approach, SME, Organizational Dimension.
