Search results for: object oriented classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4389


3879 An AK-Chart for the Non-Normal Data

Authors: Chia-Hau Liu, Tai-Yue Wang

Abstract:

Traditional multivariate control charts assume that measurements from manufacturing processes follow a multivariate normal distribution. However, this assumption may not hold, or may be difficult to verify, because in practice not all process measurements are normally distributed. This study develops a new multivariate control chart for monitoring processes with non-normal data. We propose a mechanism that integrates a one-class classification method with an adaptive technique. The adaptive technique improves the sensitivity of one-class classification to small shifts in statistical process control. In addition, the design provides an easy way to allocate the type I error rate, which makes it easier to implement. Finally, a simulation study and real data from industry are used to demonstrate the effectiveness of the proposed control charts.
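
A minimal sketch of the idea, assuming a one-class SVM as the one-class classifier and scikit-learn as the implementation; the control limit is set from an empirical quantile of the in-control scores so the type I error can be allocated directly. The adaptive component of the proposed chart is not reproduced here.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Phase I: in-control (possibly non-normal) multivariate data
phase1 = rng.lognormal(mean=0.0, sigma=0.5, size=(500, 3))

# One-class classifier fitted on in-control data only
occ = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(phase1)

# Monitoring statistic: the (negated) decision score; large values signal a shift
scores = -occ.decision_function(phase1)

# Allocate the type I error alpha directly via an empirical quantile
alpha = 0.005
control_limit = np.quantile(scores, 1 - alpha)

# Phase II: monitor new observations
new_obs = rng.lognormal(mean=0.3, sigma=0.5, size=(20, 3))  # small mean shift
out_of_control = -occ.decision_function(new_obs) > control_limit
print(out_of_control)
```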

Keywords: multivariate control chart, statistical process control, one-class classification method, non-normal data

Procedia PDF Downloads 421
3878 A Graph Library Development Based on the Service-Oriented Architecture: Used for Representation of the Biological Systems in the Computer Algorithms

Authors: Mehrshad Khosraviani, Sepehr Najjarpour

Abstract:

Considering the use of graph-based approaches in systems and synthetic biology, and the various types of graphs they employ, a comprehensive graph library based on the three-tier architecture (3TA) was previously introduced for full representation of biological systems. Despite that 3TA-based library, the following three reasons motivated us to redesign the graph library based on the service-oriented architecture (SOA): (1) Maintaining the accuracy of the data related to an input graph (its edges, vertices, topology, etc.) without involving the end user: with 3TA, the library files are available to end users and may be used incorrectly, so invalid graph data can be passed to the computer algorithms. With SOA, the graph registration operation is exposed as a service that encapsulates the library files; in other words, all control operations needed to register valid data become the responsibility of the services. (2) Partitioning the library product into separate parts: with 3TA, the library was provided as a whole, whereas with SOA the product can be divided into smaller services, such as an AND/OR graph drawing service, each provided individually. As a result, the end user can add only the required parts of the library to a project instead of all features. (3) Reducing complexity: with 3TA, several additional libraries had to be added to connect to the database, whereas in the SOA-based graph library the services themselves are responsible for provisioning the required resources, so the end user who wants to use the graph library is not exposed to this complexity. In the end, to make the library easier to control in the system and to restrict end-user access to the files, the service-oriented architecture (SOA) was preferred over the three-tier architecture (3TA), and the previously proposed graph library was redeveloped on that basis.

Keywords: Bio-Design Automation, Biological System, Graph Library, Service-Oriented Architecture, Systems and Synthetic Biology

Procedia PDF Downloads 310
3877 Electroencephalogram Based Alzheimer Disease Classification using Machine and Deep Learning Methods

Authors: Carlos Roncero-Parra, Alfonso Parreño-Torres, Jorge Mateo Sotos, Alejandro L. Borja

Abstract:

In this research, different methods based on machine and deep learning algorithms are presented for the classification and diagnosis of patients with mental disorders such as Alzheimer's disease. For this purpose, the signals obtained from 32 unipolar electrodes recorded by non-invasive EEG were examined, and their basic properties were obtained. More specifically, several well-known machine learning classifiers have been used, i.e., support vector machine (SVM), Bayesian linear discriminant analysis (BLDA), decision tree (DT), Gaussian Naïve Bayes (GNB), k-nearest neighbor (KNN), and convolutional neural network (CNN). A total of 668 patients from five different hospitals were studied in the period from 2011 to 2021. The best accuracy obtained was around 93% in both ADM and ADA classifications. It can be concluded that such a classification will enable the training of algorithms that can be used to identify and classify different mental disorders with high accuracy.
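
The classifier comparison described above can be prototyped along the following lines; this is a hedged scikit-learn sketch with placeholder EEG feature matrices, not the authors' pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Placeholder: one feature vector per patient derived from 32-channel EEG
rng = np.random.default_rng(1)
X = rng.normal(size=(668, 64))          # e.g. band-power features
y = rng.integers(0, 2, size=668)        # 0 = control, 1 = Alzheimer's

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "DT": DecisionTreeClassifier(max_depth=5),
    "GNB": GaussianNB(),
    "KNN": KNeighborsClassifier(n_neighbors=7),
}
for name, clf in classifiers.items():
    acc = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5)
    print(f"{name}: {acc.mean():.3f} +/- {acc.std():.3f}")
```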

Keywords: alzheimer, machine learning, deep learning, EEG

Procedia PDF Downloads 125
3876 Constraining the Potential Nickel Laterite Area Using Geographic Information System-Based Multi-Criteria Rating in Surigao Del Sur

Authors: Reiner-Ace P. Mateo, Vince Paolo F. Obille

Abstract:

The traditional method of classifying potential mineral resources requires a significant amount of time and money. This paper presents an alternative way to classify potential mineral resources with a GIS application in Surigao del Sur. The three analog map inputs integrated in GIS are a geologic map, a topographic map, and a land cover/vegetation map. The indicators used in the classification of potential nickel laterite, derived from these inputs, are a geologic indicator, namely the presence of ultramafic rock from the geologic map; a slope indicator and the presence of plateau edges from the topographic map; and areas of forest land, grassland, and shrubland from the land cover/vegetation map. The mineral potential of the area was classified from low to very high. The resulting mineral potential classification map of Surigao del Sur shows an estimated 4.63% low, 42.15% medium, 43.34% high, and 9.88% very high nickel laterite potential within its ultramafic terrains. For validation, the produced map was compared with known occurrences of nickel laterite in the area using a nickel mining tenement map with the application of remote sensing; three prominent nickel mining companies were delineated in the study area. The generated potential classification map of nickel laterite in Surigao del Sur may aid mining companies currently in the exploration phase in the study area, and the currently operating nickel mines can help validate the reliability of the mineral classification map produced.
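
A hedged sketch of the multi-criteria rating step, assuming the three indicator maps have already been rasterised onto a common grid as NumPy arrays; the thresholds, weights, and class breaks here are illustrative, not the paper's values.

```python
import numpy as np

# Illustrative rasters on a common grid (boolean = favourable condition)
shape = (400, 400)
rng = np.random.default_rng(2)
ultramafic = rng.random(shape) > 0.6          # geologic indicator
slope_deg = rng.uniform(0, 45, shape)
gentle_slope = slope_deg < 15                 # topographic indicator
landcover = rng.integers(0, 4, shape)         # 0 forest, 1 grass, 2 shrub, 3 other
favourable_cover = np.isin(landcover, [0, 1, 2])

# Simple additive rating; the weights are placeholders
score = 2 * ultramafic + 1 * gentle_slope + 1 * favourable_cover

# Classify the score (0-4) into low / medium / high / very high potential
classes = np.digitize(score, bins=[2, 3, 4])
labels = ["low", "medium", "high", "very high"]
for k, name in enumerate(labels):
    pct = 100 * np.mean(classes == k)
    print(f"{name}: {pct:.2f}% of the grid")
```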

Keywords: mineral potential classification, nickel laterites, GIS, remote sensing, Surigao del Sur

Procedia PDF Downloads 121
3875 Using Electrical Impedance Tomography to Control a Robot

Authors: Shayan Rezvanigilkolaei, Shayesteh Vefaghnematollahi

Abstract:

Electrical impedance tomography (EIT) is a non-invasive imaging technique suitable for medical applications. This paper describes an electrical impedance tomography device with the ability to navigate a robotic arm to manipulate a target object. The design of the device includes hardware and software sections to perform medical imaging and control the robotic arm. In the hardware section, an image is formed by 16 electrodes located around a container. This image is used to navigate a 3-DOF robotic arm to reach the exact location of the target object. The data set used to form the impedance image is obtained by repeated current injections and voltage measurements between all electrode pairs. After the necessary calculations to obtain the impedance, the information is transmitted to the computer. These data are processed in MATLAB, which is interfaced with EIDORS (Electrical Impedance Tomography Reconstruction Software) to reconstruct the image from the acquired data. In the next step, the coordinates of the center of the target object are calculated with the MATLAB Image Processing Toolbox (IPT). Finally, these coordinates are used to calculate the angles of each joint of the robotic arm, and the arm moves to the desired tissue on the user's command.
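
The last step, turning the object centroid into joint angles, can be sketched as below for a simple arm with a rotating base and two planar links. This is a Python sketch rather than the authors' MATLAB implementation; the link lengths and the geometric solution are illustrative assumptions, not their kinematic model.

```python
import math

L1, L2 = 0.12, 0.10   # assumed link lengths in metres

def joint_angles(x, y, z):
    """Geometric inverse kinematics for a base-yaw + two-link planar arm."""
    base = math.atan2(y, x)                    # rotate base toward the target
    r = math.hypot(x, y)                       # reach in the horizontal plane
    d2 = r * r + z * z
    cos_elbow = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(cos_elbow) > 1:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)               # elbow angle
    shoulder = math.atan2(z, r) - math.atan2(L2 * math.sin(elbow),
                                             L1 + L2 * math.cos(elbow))
    return base, shoulder, elbow

# Centroid of the target object from the reconstructed EIT image (metres)
print([round(a, 3) for a in joint_angles(0.08, 0.05, 0.03)])
```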

Keywords: electrical impedance tomography, EIT, surgeon robot, image processing of electrical impedance tomography

Procedia PDF Downloads 270
3874 Performance Enrichment of Deep Feed Forward Neural Network and Deep Belief Neural Networks for Fault Detection of Automobile Gearbox Using Vibration Signal

Authors: T. Praveenkumar, Kulpreet Singh, Divy Bhanpuriya, M. Saimurugan

Abstract:

This study analysed the classification accuracy for gearbox faults using machine learning techniques. Gearboxes are widely used for mechanical power transmission in rotating machines. Their rotating components, such as bearings, gears, and shafts, tend to wear with prolonged usage, causing fluctuating vibrations. Increasing the dependability of mechanical components like a gearbox is hampered by their sealed design, which makes visual inspection difficult. One way of detecting impending failure is to detect a change in the vibration signature. The current study applies various machine learning algorithms to these vibration signals to obtain the fault classification accuracy of an automotive 4-speed synchromesh gearbox. Experimental data in the form of vibration signals were acquired from the gearbox using a data acquisition system (DAQ). Statistical features were extracted from the acquired vibration signals under various operating conditions, and the extracted features were given as input to the algorithms for fault classification. Supervised machine learning algorithms such as support vector machines (SVM) and unsupervised algorithms such as the deep feed forward neural network (DFFNN) and deep belief network (DBN) were used for fault classification. A fusion of the DBN and DFFNN classifiers was architected to further enhance the classification accuracy and reduce the computational complexity. The fault classification accuracy for each algorithm was thoroughly studied, tabulated, and graphically analysed for the fused and individual algorithms. In conclusion, the fusion of the DBN and DFFNN algorithms yielded the better classification accuracy and was selected for fault detection due to its faster computational processing and greater efficiency.
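
A hedged sketch of the feature-extraction and classification steps using SciPy and scikit-learn; the statistical features are common choices for vibration diagnosis, and a plain MLP stands in for the paper's DFFNN/DBN fusion, which is not reproduced here.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

def vibration_features(signal):
    """Time-domain statistics often used for gearbox fault diagnosis."""
    rms = np.sqrt(np.mean(signal ** 2))
    return np.array([
        signal.mean(), signal.std(), rms,
        kurtosis(signal), skew(signal),
        signal.max() / rms,              # crest factor
    ])

# Placeholder data set: 200 vibration records, 4 fault classes
rng = np.random.default_rng(3)
signals = rng.normal(size=(200, 4096))
y = rng.integers(0, 4, size=200)
X = np.array([vibration_features(s) for s in signals])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```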

Keywords: deep belief networks, DBN, deep feed forward neural network, DFFNN, fault diagnosis, fusion of algorithm, vibration signal

Procedia PDF Downloads 111
3873 Assessment of the Spatio-Temporal Distribution of Pteridium aquilinum (Bracken Fern) Invasion on the Grassland Plateau in Nyika National Park

Authors: Andrew Kanzunguze, Lusayo Mwabumba, Jason K. Gilbertson, Dominic B. Gondwe, George Z. Nxumayo

Abstract:

Knowledge about the spatio-temporal distribution of invasive plants in protected areas provides a base from which hypotheses explaining the proliferation of plant invasions can be made, alongside the development of relevant invasive plant monitoring programs. The aim of this study was to investigate the spatio-temporal distribution of bracken fern on the grassland plateau of Nyika National Park over the past 30 years (1986-2016), as well as to determine the current extent of the invasion. Remote sensing, machine learning, and statistical modelling techniques (object-based image analysis, image classification and linear regression analysis) in geographical information systems were used to determine both the spatial and temporal distribution of bracken fern in the study area. Results revealed that bracken fern has been increasing in coverage on the Nyika plateau at an estimated annual rate of 87.3 hectares since 1986. This translates to an estimated net increase of 2,573.1 hectares, recorded from 1,788.1 hectares (1986) to 4,361.9 hectares (2016). As of 2017, bracken fern covered 20,940.7 hectares, approximately 14.3% of the entire grassland plateau. Additionally, it was observed that the fern was distributed most densely around Chelinda camp (on the central plateau) as well as in forest verges and roadsides across the plateau. Based on these results, it is recommended that ecological niche modelling approaches be employed to (i) isolate the most important factors influencing bracken fern proliferation and (ii) identify and prioritize areas requiring immediate control interventions so as to minimize bracken fern proliferation in Nyika National Park.
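
The annual expansion rate quoted above is the kind of figure that falls out of a simple linear trend fitted to per-date area estimates; a minimal sketch follows, in which only the 1986 and 2016 endpoints come from the abstract and the intermediate values are placeholders.

```python
import numpy as np

# Bracken fern extent (hectares) per classified image date.
# 1986 and 2016 values are from the study; the intermediate ones are placeholders.
years = np.array([1986, 1996, 2006, 2016])
area_ha = np.array([1788.1, 2600.0, 3500.0, 4361.9])

slope, intercept = np.polyfit(years, area_ha, deg=1)
print(f"estimated annual increase: {slope:.1f} ha/year")
print(f"net change 1986-2016: {area_ha[-1] - area_ha[0]:.1f} ha")
```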

Keywords: bracken fern, image classification, Landsat-8, Nyika National Park, spatio-temporal distribution

Procedia PDF Downloads 178
3872 Reduplication in Dhiyan: An Indo-Aryan Language of Assam

Authors: S. Sulochana Singha

Abstract:

Dhiyan or Dehan is the name of the community and language spoken by the Koch-Rajbangshi people of the Barak Valley of Assam. Ethnically, they are Mongoloids, and their language belongs to the Indo-Aryan language family. However, Dhiyan is absent from existing classifications of Indo-Aryan languages, so its placement under the Indo-Aryan family is based entirely on the typological features it shares with other Indo-Aryan languages. Typologically, Dhiyan is an agglutinating language, and it shares many features of Indo-Aryan languages, such as the presence of aspirated voiced stops, a non-tonal system, verb-person agreement, adjectives as a distinct word class, prominent tense, and subject-object-verb word order. Reduplication is a productive word-formation process in Dhiyan; besides word formation, it also expresses plurality, intensification, and distribution. Generally, reduplication in Dhiyan operates at the morphological or lexical level. Morphological reduplication in Dhiyan involves expressives, which include onomatopoeia, sound symbolism, ideophones, and imitatives. Lexical reduplication in the language can be formed by echo formations and word reduplication. Echo formation in Dhiyan involves partial repetition of the base word, through either consonant alternation or vowel alternation: consonant alternation is mainly found in onset position, while vowel alternation is mainly found in open syllables, particularly the final syllable. Word reduplication involves reduplication of nouns, interrogatives, adjectives, and numerals, which can be class-changing or class-maintaining. The process of reduplication can be partial or complete, whether lexical or morphological. The present paper is an attempt to describe some aspects of the formation, function, and usage of reduplication in Dhiyan, which is mainly spoken in ten villages on the eastern side of the Barak River in the Cachar District of Assam.

Keywords: Barak-Valley, Dhiyan, Indo-Aryan, reduplication

Procedia PDF Downloads 214
3871 Investigation of Topic Modeling-Based Semi-Supervised Interpretable Document Classifier

Authors: Dasom Kim, William Xiu Shun Wong, Yoonjin Hyun, Donghoon Lee, Minji Paek, Sungho Byun, Namgyu Kim

Abstract:

There has been much research on document classification for classifying voluminous documents automatically. Through document classification, we can assign a specific category to each unlabeled document on the basis of various machine learning algorithms. However, providing labeled documents manually requires considerable time and effort. To overcome this limitation, semi-supervised learning, which uses unlabeled documents as well as labeled documents, has been introduced. However, traditional document classifiers, whether supervised or semi-supervised, cannot sufficiently explain the reason for or the process of the classification. Thus, in this paper, we propose a methodology to visualize the major topics and class components of each document. We believe that our methodology for visualizing the topics and classes of each document can enhance the reliability and explanatory power of document classifiers.

Keywords: data mining, document classifier, text mining, topic modeling

Procedia PDF Downloads 401
3870 Analyzing and Determining the Ideal Response Force for Combatting Terrorist Groups

Authors: Erhan Turgut, Salih Ergün, Abdülkadir Öz

Abstract:

Terror is a modern war strategy that uses violence as a means of communication in order to achieve political objectives. In today's security environment, narrowing the propaganda field of terrorist organizations is the primary goal for the security forces. In this sense, providing and maintaining public support is the most necessary ability for security units. Rather than an enemy- and threat-oriented approach, a homeland-security-oriented approach is essential to ensure public support. In this study, terror is treated as a homeland security issue, and assigning law enforcement forces with military status is analyzed.

Keywords: terrorism, counter-terrorism, military status law-enforcement, terrorist groups

Procedia PDF Downloads 459
3869 Automatic Classification for the Degree of Disc Narrowing from X-Ray Images Using CNN

Authors: Kwangmin Joo

Abstract:

An automatic lumbar vertebra detection and classification method is proposed for evaluating the degree of disc narrowing. Prior to classification, deep learning based segmentation is applied to detect each individual lumbar vertebra: M-net is applied to segment the five lumbar vertebrae, and fine-tuning of the segmentation is employed to improve its accuracy. Using the features extracted in the previous step, a clustering technique, k-means clustering, is applied to estimate the degree of disc space narrowing under a four-grade scoring system. As a preliminary study, the techniques proposed in this research could help build an automatic scoring system to diagnose the severity of disc narrowing from X-ray images.

Keywords: Disc space narrowing, Degenerative disc disorders, Deep learning based segmentation, Clustering technique

Procedia PDF Downloads 124
3868 One-Shot Text Classification with Multilingual-BERT

Authors: Hsin-Yang Wang, K. M. A. Salam, Ying-Jia Lin, Daniel Tan, Tzu-Hsuan Chou, Hung-Yu Kao

Abstract:

Detecting user intent from natural language expressions has a wide variety of use cases in different natural language processing applications. Recently, few-shot training has seen a spike in usage in commercial domains. Due to the lack of significant sample features, downstream task performance has been limited or leads to unstable results across different domains. As a state-of-the-art method, the pre-trained BERT model, which gathers sentence-level information from a large text corpus, shows improvement on several NLP benchmarks. In this research, we propose a method that changes multi-class classification tasks into binary classification tasks and then uses the confidence score to rank the results. As a language model, BERT performs well on sequence data; in our experiment, we change the objective from predicting labels to finding the relations between words in sequence data. Our proposed method achieved 71.0% accuracy on an internal intent detection dataset and 63.9% accuracy on the HuffPost dataset. Acknowledgment: This work was supported by NCKU-B109-K003, a collaboration between National Cheng Kung University, Taiwan, and SoftBank Corp., Tokyo.
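
The reformulation from multi-class to binary can be sketched as follows; `score_entailment` is a hypothetical stand-in for a fine-tuned multilingual-BERT sentence-pair scorer, since the paper's model and prompts are not given here, and the label phrasing is an assumption for illustration.

```python
from typing import Callable, List, Tuple

def rank_intents(text: str,
                 candidate_labels: List[str],
                 score_entailment: Callable[[str, str], float]) -> List[Tuple[str, float]]:
    """Turn an N-way intent task into N binary (text, label) decisions and
    rank the labels by the binary model's confidence score."""
    scored = [(label, score_entailment(text, f"This is about {label}."))
              for label in candidate_labels]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Usage with a dummy scorer (a real one would wrap a multilingual-BERT pair classifier):
dummy = lambda text, hypothesis: float(len(set(text.split()) & set(hypothesis.split())))
print(rank_intents("book me a flight to Tokyo",
                   ["flight booking", "weather", "restaurant"], dummy))
```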

Keywords: OSML, BERT, text classification, one shot

Procedia PDF Downloads 99
3867 Interaction with Earth’s Surface in Remote Sensing

Authors: Spoorthi Sripad

Abstract:

Remote sensing is a powerful tool for acquiring information about the Earth's surface without direct contact, relying on the interaction of electromagnetic radiation with various materials and features. This paper explores the fundamental principle of "Interaction with Earth's Surface" in remote sensing, shedding light on the intricate processes that occur when electromagnetic waves encounter different surfaces. The absorption, reflection, and transmission of radiation generate distinct spectral signatures, allowing for the identification and classification of surface materials. The paper delves into the significance of the visible, infrared, and thermal infrared regions of the electromagnetic spectrum, highlighting how their unique interactions contribute to a wealth of applications, from land cover classification to environmental monitoring. The discussion encompasses the types of sensors and platforms used to capture these interactions, including multispectral and hyperspectral imaging systems. By examining real-world applications, such as land cover classification and environmental monitoring, the paper underscores the critical role of understanding the interaction with the Earth's surface for accurate and meaningful interpretation of remote sensing data.

Keywords: remote sensing, earth's surface interaction, electromagnetic radiation, spectral signatures, land cover classification, archeology and cultural heritage preservation

Procedia PDF Downloads 56
3866 Comparison of the Effectiveness of Tree Algorithms in Classification of Spongy Tissue Texture

Authors: Roza Dzierzak, Waldemar Wojcik, Piotr Kacejko

Abstract:

Analysis of the texture of medical images consists of determining the parameters and characteristics of the examined tissue. The main goal is to assign the analyzed area to one of two basic groups: healthy tissue or tissue with pathological changes. CT images of the thoracic and lumbar spine from 15 healthy patients and 15 patients with confirmed osteoporosis were used for the analysis. As a result, 120 samples with dimensions of 50x50 pixels were obtained. The set of features was derived from the histogram, gradient, run-length matrix, co-occurrence matrix, autoregressive model, and Haar wavelet. As a result of the image analysis, 290 textural feature descriptors were obtained. The dimension of the feature space was reduced using three selection methods: the Fisher coefficient (FC), mutual information (MI), and the minimization of classification error probability combined with the average correlation coefficients between the chosen features (POE + ACC). Each of them returned the ten features occupying the initial places in the ranking devised according to its own coefficient. The Fisher coefficient and mutual information selections returned the same features arranged in a different order; in both rankings, the 50% percentile (Perc.50%) was found in first place, and the next selected features come from the co-occurrence matrix. The sets of features selected in the selection process were evaluated using six classification tree methods: decision stump (DS), Hoeffding tree (HT), logistic model trees (LMT), random forest (RF), random tree (RT) and reduced error pruning tree (REPT). In order to assess the accuracy of the classifiers, the following parameters were used: overall classification accuracy (ACC), true positive rate (TPR, classification sensitivity), true negative rate (TNR, classification specificity), positive predictive value (PPV) and negative predictive value (NPV). Taking the classification results into account, the best results were obtained for the Hoeffding tree and logistic model trees classifiers using the set of features selected by the POE + ACC method. For the Hoeffding tree classifier, the highest values of three parameters were obtained: ACC = 90%, TPR = 93.3% and PPV = 93.3%; additionally, the values of the other two parameters, TNR = 86.7% and NPV = 86.6%, were close to the maximum values obtained for the LMT classifier. For the logistic model trees classifier, the same ACC value of 90% was obtained, with the highest values for TNR = 88.3% and NPV = 88.3%, while the other two parameters remained close to the highest, TPR = 91.7% and PPV = 91.6%. The results obtained in the experiment show that the use of classification trees is an effective method for classifying texture features, allowing the condition of the spongy tissue to be identified for healthy cases and those with osteoporosis.
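
A hedged scikit-learn sketch of the selection-then-classification pipeline described above: mutual information selects ten texture descriptors, and a tree-based classifier is scored with the same sensitivity/specificity/PPV/NPV metrics. The Hoeffding tree and the POE + ACC selection used in the study are not available in scikit-learn and are not reproduced.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Placeholder: 120 tissue samples x 290 texture descriptors, 2 classes
rng = np.random.default_rng(4)
X = rng.normal(size=(120, 290))
y = rng.integers(0, 2, size=120)          # 0 = healthy, 1 = osteoporosis

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                          stratify=y)
selector = SelectKBest(mutual_info_classif, k=10).fit(X_tr, y_tr)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(selector.transform(X_tr), y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(selector.transform(X_te))).ravel()
print("ACC:", (tp + tn) / (tp + tn + fp + fn))
print("TPR (sensitivity):", tp / (tp + fn))
print("TNR (specificity):", tn / (tn + fp))
print("PPV:", tp / (tp + fp), "NPV:", tn / (tn + fn))
```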

Keywords: classification, feature selection, texture analysis, tree algorithms

Procedia PDF Downloads 176
3865 Analysis of Matching Pursuit Features of EEG Signal for Mental Tasks Classification

Authors: Zin Mar Lwin

Abstract:

Brain-computer interface (BCI) systems have been developed for people who suffer from severe motor disabilities and find it challenging to communicate with their environment; BCI allows them to communicate in a non-muscular way. For communication between human and computer, BCI uses a type of signal called the electroencephalogram (EEG), which is recorded from the human brain by means of electrodes. The EEG signal is an important information source for understanding brain processes in non-invasive BCI. To translate human thought, the acquired EEG signal needs to be classified accurately. This paper proposes a typical EEG signal classification system, evaluated on a dataset from Purdue University. The independent component analysis (ICA) method, via EEGLAB tools, is used to remove artifacts caused by eye blinks. For feature extraction, the time and frequency features of the non-stationary EEG signals are extracted by the matching pursuit (MP) algorithm. The classification of one of five mental tasks is performed by a multi-class support vector machine (SVM). For the SVMs, comparisons have been carried out for both the 1-against-1 and 1-against-all methods.
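
The 1-against-1 versus 1-against-all comparison at the end of the abstract can be reproduced with scikit-learn's multiclass wrappers; this is a minimal sketch with placeholder matching-pursuit feature vectors rather than the paper's data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder: MP time-frequency features for five mental tasks
rng = np.random.default_rng(5)
X = rng.normal(size=(300, 40))
y = rng.integers(0, 5, size=300)

for name, wrapper in [("1-against-1", OneVsOneClassifier),
                      ("1-against-all", OneVsRestClassifier)]:
    clf = wrapper(SVC(kernel="rbf", gamma="scale"))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```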

Keywords: BCI, EEG, ICA, SVM

Procedia PDF Downloads 276
3864 Association Rules Mining and NOSQL Oriented Document in Big Data

Authors: Sarra Senhadji, Imene Benzeguimi, Zohra Yagoub

Abstract:

Big Data refers to recent technologies for manipulating voluminous and unstructured data sets from multiple sources, and NoSQL databases have emerged to handle the problem of unstructured data. Association rule mining is one of the popular data mining techniques for extracting hidden relationships from transactional databases, and the algorithm for finding association dependencies maps well onto MapReduce. The goal of our work is to reduce the time needed to generate frequent itemsets by using MapReduce and a document-oriented NoSQL database. A comparative study is given to evaluate the performance of our algorithm against the classical Apriori algorithm.
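
The map/reduce formulation of the candidate-counting step can be sketched in plain Python as below; a real deployment would run the same two functions as Hadoop MapReduce jobs over documents stored in MongoDB, and the Apriori pruning of candidates from frequent (k-1)-itemsets is omitted for brevity.

```python
from collections import defaultdict
from itertools import combinations

transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"butter", "milk"},
    {"bread", "butter"},
]
MIN_SUPPORT = 2

def map_phase(transaction, k):
    """Emit (candidate itemset, 1) pairs for every k-item subset."""
    for itemset in combinations(sorted(transaction), k):
        yield itemset, 1

def reduce_phase(pairs):
    """Sum the counts per candidate and keep the frequent ones."""
    counts = defaultdict(int)
    for itemset, count in pairs:
        counts[itemset] += count
    return {i: c for i, c in counts.items() if c >= MIN_SUPPORT}

for k in (1, 2):
    pairs = (pair for t in transactions for pair in map_phase(t, k))
    print(f"frequent {k}-itemsets:", reduce_phase(pairs))
```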

Keywords: Apriori, Association rules mining, Big Data, Data Mining, Hadoop, MapReduce, MongoDB, NoSQL

Procedia PDF Downloads 158
3863 Heart Failure Identification and Progression by Classifying Cardiac Patients

Authors: Muhammad Saqlain, Nazar Abbas Saqib, Muazzam A. Khan

Abstract:

Heart failure (HF) has become a major health problem in our society. The prevalence of HF increases with patient age, and it is a major cause of the high mortality rate in adults. Successful identification of HF and its progression can help reduce the individual and social burden of this syndrome. In this study, we use a real data set of cardiac patients to propose a classification model for the identification and progression of HF. The data set was divided into three age groups, namely young, adult, and old, and each age group was further classified into four classes according to the patient's current physical condition. Contemporary data mining classification algorithms were applied to each individual class of every age group to identify HF. The decision tree (DT) gives the highest accuracy of 90% and outperforms all other algorithms. Our model accurately diagnoses different stages of HF for each age group, and it can be very useful for the early prediction of HF.

Keywords: decision tree, heart failure, data mining, classification model

Procedia PDF Downloads 401
3862 Methods for Enhancing Ensemble Learning or Improving Classifiers of This Technique in the Analysis and Classification of Brain Signals

Authors: Seyed Mehdi Ghezi, Hesam Hasanpoor

Abstract:

This scientific article explores enhancement methods for ensemble learning with the aim of improving the performance of classifiers in the analysis and classification of brain signals. The research approach in this field consists of two main parts, each with its own strengths and weaknesses. The choice of approach depends on the specific research question and available resources. By combining these approaches and leveraging their respective strengths, researchers can enhance the accuracy and reliability of classification results, consequently advancing our understanding of the brain and its functions. The first approach focuses on utilizing machine learning methods to identify the best features among the vast array of features present in brain signals. The selection of features varies depending on the research objective, and different techniques have been employed for this purpose. For instance, the genetic algorithm has been used in some studies to identify the best features, while optimization methods have been utilized in others to identify the most influential features. Additionally, machine learning techniques have been applied to determine the influential electrodes in classification. Ensemble learning plays a crucial role in identifying the best features that contribute to learning, thereby improving the overall results. The second approach concentrates on designing and implementing methods for selecting the best classifier or utilizing meta-classifiers to enhance the final results in ensemble learning. In a different section of the research, a single classifier is used instead of multiple classifiers, employing different sets of features to improve the results. The article provides an in-depth examination of each technique, highlighting their advantages and limitations. By integrating these techniques, researchers can enhance the performance of classifiers in the analysis and classification of brain signals. This advancement in ensemble learning methodologies contributes to a better understanding of the brain and its functions, ultimately leading to improved accuracy and reliability in brain signal analysis and classification.

Keywords: ensemble learning, brain signals, classification, feature selection, machine learning, genetic algorithm, optimization methods, influential features, influential electrodes, meta-classifiers

Procedia PDF Downloads 74
3861 JavaScript Object Notation Data against eXtensible Markup Language Data in Software Applications: A Software Testing Approach

Authors: Theertha Chandroth

Abstract:

This paper presents a comparative study on how to check JSON (JavaScript Object Notation) data against XML (eXtensible Markup Language) data from a software testing point of view. JSON and XML are widely used data interchange formats, each with its own syntax and structure. The objective is to explore various techniques and methodologies for validating the comparison and integration of JSON data with XML data and vice versa. By understanding the process of checking JSON data against XML data, testers, developers, and data practitioners can ensure accurate data representation, seamless data interchange, and effective data validation.
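
One concrete way to check JSON data against XML data, sketched in Python with the standard library; the flattening rules (attributes ignored, repeated tags not handled) are simplifying assumptions for illustration, not the paper's methodology.

```python
import json
import xml.etree.ElementTree as ET

def xml_to_dict(element):
    """Recursively flatten an XML element into a dict of tag -> text/children."""
    children = list(element)
    if not children:
        return element.text
    return {child.tag: xml_to_dict(child) for child in children}

json_payload = '{"customer": {"id": "42", "name": "Ada"}}'
xml_payload = "<customer><id>42</id><name>Ada</name></customer>"

json_data = json.loads(json_payload)["customer"]
xml_data = xml_to_dict(ET.fromstring(xml_payload))

assert json_data == xml_data, f"mismatch: {json_data} != {xml_data}"
print("JSON and XML payloads carry the same data")
```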

Keywords: XML, JSON, data comparison, integration testing, Python, SQL

Procedia PDF Downloads 138
3860 Innovative Handloom Design Techniques: An Experimental Study Based on Primary Colour Gradation

Authors: Akanksha Pareek

Abstract:

The Indian handloom clusters are known for their tradition and heritage of excellent craftsmanship. Design development in Indian handloom clusters is traditionally oriented toward dobby and jacquard designs. This paper proposes practices for handloom woven design based on primary colour gradation, using basic weaves on a four-shaft loom. The innovative design ideas are inspired by nature and transferred into handloom samples to achieve colour gradation with primary colours. The design methodology strategically develops the woven samples in such a way that the traditional knowledge of the weavers is leveraged.

Keywords: handloom, weaving, colour gradation, shaft

Procedia PDF Downloads 616
3859 Common Orthodontic Indices and Classification in the United Kingdom

Authors: Ashwini Mohan, Haris Batley

Abstract:

An orthodontic index is used to rate or categorise an individual's occlusion using a numeric or alphanumeric score. Indexing of malocclusions and their correction is important in epidemiology, diagnosis, communication between clinicians as well as with their patients, and assessing treatment outcomes. Many useful indices have been put forward, but to the authors' best knowledge, no single method to this day appears to be equally suitable for epidemiologists, public health program planners and clinicians. This article describes the common clinical orthodontic indices and classifications used in the United Kingdom.

Keywords: classification, indices, orthodontics, validity

Procedia PDF Downloads 150
3858 Detection and Classification of Myocardial Infarction Using New Extracted Features from Standard 12-Lead ECG Signals

Authors: Naser Safdarian, Nader Jafarnia Dabanloo

Abstract:

In this paper, we use four features, i.e., the Q-wave integral, QRS complex integral, T-wave integral, and total integral, extracted from normal and patient ECG signals, for the detection and localization of myocardial infarction (MI) in the left ventricle of the heart. Our research focuses on the detection and localization of MI in the standard 12-lead ECG. We use the Q-wave and T-wave integrals because these features carry important information for the detection of MI. We use pattern recognition methods such as the artificial neural network (ANN) to detect and localize MI, because these methods have good accuracy for the classification of normal and abnormal signals. We use one type of radial basis function (RBF) network called the probabilistic neural network (PNN) because of its nonlinearity, together with other classifiers such as k-nearest neighbors (KNN), the multilayer perceptron (MLP), and naive Bayes classification. We used the PhysioNet database as our training and test data. We reached over 80% accuracy on the test data for localization and over 95% for detection of MI. The main advantages of our method are its simplicity and good accuracy, and the classification accuracy can be further improved by adding more features. In short, a simple method based on only four features extracted from the standard ECG is presented, with good accuracy in MI localization.

Keywords: ECG signal processing, myocardial infarction, features extraction, pattern recognition

Procedia PDF Downloads 453
3857 Domain-Specific Deep Neural Network Model for Classification of Abnormalities on Chest Radiographs

Authors: Nkechinyere Joy Olawuyi, Babajide Samuel Afolabi, Bola Ibitoye

Abstract:

This study collected a preprocessed dataset of chest radiographs and formulated a deep neural network model for detecting abnormalities. It also evaluated the performance of the formulated model and implemented a prototype of it, with the view to developing a deep neural network model that automatically classifies abnormalities in chest radiographs. To achieve the overall purpose of this research, a large set of chest X-ray images was sourced from the CheXpert dataset, an online repository of annotated chest radiographs compiled by the Machine Learning Research Group, Stanford University. The chest radiographs were preprocessed into a format that can be fed into a deep neural network; the preprocessing techniques used were standardization and normalization. The classification problem was formulated as a multi-label binary classification model, which used a convolutional neural network architecture to decide whether or not an abnormality was present in the chest radiographs. The classification model was evaluated using specificity, sensitivity, and the Area Under Curve (AUC) score as parameters. A prototype of the classification model was implemented using the Keras open-source deep learning framework in the Python programming language. Based on the AUC-ROC curve, the model was able to classify atelectasis, support devices, pleural effusion, pneumonia, a normal CXR (no finding), pneumothorax, and consolidation; however, lung opacity and cardiomegaly had probabilities of less than 0.5 and were thus classified as absent. Precision, recall, and F1 score values were 0.78; this implies that the numbers of false positives and false negatives are the same, revealing some measure of label imbalance in the dataset. The study concluded that the developed model is sufficient to classify the abnormalities present in chest radiographs as present or absent.
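
A hedged Keras sketch of the multi-label binary formulation: one sigmoid output per finding, binary cross-entropy loss, and AUC as the evaluation metric. The architecture shown is a small illustrative CNN, not the network trained on CheXpert, and the label count is an assumption.

```python
import tensorflow as tf

N_LABELS = 9          # e.g. atelectasis, pleural effusion, pneumonia, ...
IMG_SHAPE = (224, 224, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=IMG_SHAPE),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(N_LABELS, activation="sigmoid"),  # multi-label output
])

model.compile(
    optimizer="adam",
    loss="binary_crossentropy",                 # one binary decision per label
    metrics=[tf.keras.metrics.AUC(multi_label=True, name="auc")],
)
model.summary()

# A finding is reported as present when its predicted probability exceeds 0.5,
# matching the thresholding described in the abstract.
```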

Keywords: transfer learning, convolutional neural network, radiograph, classification, multi-label

Procedia PDF Downloads 125
3856 A Similarity Measure for Classification and Clustering in Image Based Medical and Text Based Banking Applications

Authors: K. P. Sandesh, M. H. Suman

Abstract:

Text processing plays an important role in information retrieval, data mining, and web search. Measuring the similarity between documents is an important operation in the text processing field. In this project, a new similarity measure is proposed. To compute the similarity between two documents with respect to a feature, the proposed measure takes the following three cases into account: (1) the feature appears in both documents; (2) the feature appears in only one document; and (3) the feature appears in neither of the documents. The proposed measure is extended to gauge the similarity between two sets of documents. The effectiveness of our measure is evaluated on several real-world data sets for text classification and clustering problems, especially in the banking and health sectors. The results show that the performance obtained by the proposed measure is better than that achieved by the other measures.
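
The abstract does not give the measure's formula; the sketch below is only an illustrative similarity function that distinguishes the three per-feature cases (present in both, in one, or in neither), with the per-case contributions as assumed placeholders rather than the authors' weights.

```python
def pairwise_similarity(doc_a, doc_b, vocabulary):
    """Illustrative similarity handling the three per-feature cases."""
    score = 0.0
    for term in vocabulary:
        in_a, in_b = term in doc_a, term in doc_b
        if in_a and in_b:
            score += 1.0      # case 1: feature appears in both documents
        elif in_a or in_b:
            score -= 0.5      # case 2: feature appears in only one document
        else:
            score += 0.1      # case 3: joint absence is weak evidence of similarity
    return score / len(vocabulary)

vocab = {"loan", "credit", "deposit", "tumour", "scan"}
print(pairwise_similarity({"loan", "credit"}, {"credit", "deposit"}, vocab))
```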

Keywords: document classification, document clustering, entropy, accuracy, classifiers, clustering algorithms

Procedia PDF Downloads 517
3855 Backstepping Sliding Mode Control

Authors: Othmane Boughazi, Abdelmadjid Boumedienne, Hachemi Glaoui

Abstract:

This work treats the modeling and simulation of the non-linear behavior of an induction motor using backstepping sliding mode control. First, the direct field-oriented control of the induction motor is derived. Then, a sliding mode control for direct field-oriented control is proposed to compensate for the uncertainties that occur in the control. Finally, the backstepping sliding mode control strategy of the induction motor drive is studied. The non-linear system is simulated in the MATLAB SIMULINK environment, and the results obtained illustrate the efficiency of the proposed control, with no overshoot, an improved rise time, and good disturbance rejection compared with the classical control law.

Keywords: induction motor, proportional-integral, sliding mode control, backstepping sliding mode control

Procedia PDF Downloads 484
3854 A Modeling Approach for Blockchain-Oriented Information Systems Design

Authors: Jiaqi Yan, Yani Shi

Abstract:

The blockchain technology is regarded as the most promising technology with the potential to trigger a technological revolution. However, beyond the bitcoin industry, we have not yet seen a large-scale application of blockchain in the domains that are supposed to be impacted, such as supply chains, financial networks, and intelligent manufacturing. The reasons lie not only in the difficulties of blockchain implementation but are also rooted in the challenges of blockchain-oriented information systems design. First, blockchain members are self-interested actors that belong to organizations with different existing information systems; as they expect different information inputs and outputs from the blockchain application, a common language protocol is needed to facilitate communication between blockchain members. Second, considering the decentralization of blockchain organization, there is no central authority to organize and coordinate the business processes, so information systems built on blockchain should support more adaptive business processes. This paper aims to address these difficulties by providing a modeling approach for blockchain-oriented information systems design. We investigate the information structure of distributed-ledger data with conceptual modeling techniques and ontology theories, and build an effective ontology mapping method for inter-organization information flows and blockchain information records. Further, we study distributed-ledger-ontology based business process modeling to support the adaptive enterprise on blockchain.

Keywords: blockchain, ontology, information systems modeling, business process

Procedia PDF Downloads 449
3853 Theoretical Discussion on the Classification of Risks in Supply Chain Management

Authors: Liane Marcia Freitas Silva, Fernando Augusto Silva Marins, Maria Silene Alexandre Leite

Abstract:

The adoption of a network structure, as in supply chains, favors increased dependence between companies and, in consequence, their vulnerability. Environmental disasters, sociopolitical and economic events, and the dynamics of supply chains raise the uncertainty of their operation, favoring the occurrence of events that can cause breaks in operations and other undesired consequences. Thus, supply chains are exposed to various risks that can influence the profitability of the companies involved, and several previous studies have proposed risk classification models in order to categorize the risks and manage them. The objective of this paper is to analyze and discuss thirty of these risk classification models by means of a theoretical survey. The research method adopted for the analysis and discussion includes three phases: the identification of the types of risks proposed in each of the thirty models, the grouping of them by equivalent concepts associated with their definitions, and the analysis of these risk groups, evaluating their similarities and differences. After these analyses, it was possible to conclude that there are, in fact, more than thirty risk types identified in the supply chain literature, but some of them are identical even though distinct terms are used to characterize them, because researchers adopt different criteria for risk classification. In short, it is observed that some types of risks are identified as risk sources for supply chains, such as demand risk, environmental risk and safety risk. On the other hand, other types of risks are identified by the consequences that they can generate for the supply chains, such as reputation risk, asset depreciation risk and competitive risk. These results are a consequence of the disagreements between researchers on risk classification, mainly about what constitutes a risk event and what is the consequence of a risk occurrence. An additional study is in development in order to clarify how the risks can be generated and which characteristics of the components of a supply chain lead to the occurrence of risk.

Keywords: risks classification, survey, supply chain management, theoretical discussion

Procedia PDF Downloads 631
3852 Discerning Divergent Nodes in Social Networks

Authors: Mehran Asadi, Afrand Agah

Abstract:

In data mining, partitioning is used as a fundamental tool for classification. With the help of partitioning, we study the structure of data, which allows us to envision decision rules that can be applied to classification trees. In this research, we used an online social network dataset and all of its attributes (e.g., node features, labels, etc.) to determine what constitutes an above-average chance of being a divergent node. We used the R statistical computing language to conduct the analyses in this report; the data were found in the UC Irvine Machine Learning Repository. This research introduces the basic concepts of classification in online social networks. In this work, we address overfitting and describe different approaches for the evaluation and performance comparison of classification methods. In classification, the main objective is to categorize different items and assign them to groups based on their properties and similarities. In data mining, recursive partitioning is used to probe the structure of a data set, which allows us to envision decision rules and apply them to classify data into several groups. Estimating densities is hard, especially in high dimensions with limited data; of course, we do not know the densities, but we can estimate them using classical techniques. First, we calculated the correlation matrix of the dataset to see if any predictors are highly correlated with one another; the correlation coefficients for the predictor variables show that density is strongly correlated with transitivity. We initialized a data frame to easily compare the quality of the resulting classification methods and utilized decision trees with k-fold cross-validation to prune the tree. A decision tree is a non-parametric classification method that uses a set of rules to predict that each observation belongs to the most commonly occurring class label of the training data. Our method aggregates many decision trees to create an optimized model that is not susceptible to overfitting. When using a decision tree, however, it is important to use cross-validation to prune the tree in order to narrow it down to the most important variables.

Keywords: online social networks, data mining, social cloud computing, interaction and collaboration

Procedia PDF Downloads 154
3851 Use of Artificial Intelligence and Two Object-Oriented Approaches (k-NN and SVM) for the Detection and Characterization of Wetlands in the Centre-Val de Loire Region, France

Authors: Bensaid A., Mostephaoui T., Nedjai R.

Abstract:

Nowadays, wetlands are the subject of contradictory debates opposing scientific, political and administrative meanings. Indeed, given their multiple services (drinking water, irrigation, hydrological regulation, mineral, plant and animal resources...), wetlands concentrate many socio-economic and biodiversity issues. In some regions, they can cover vast areas (>100 thousand ha) of the landscape, such as the Camargue area in the south of France, inside the Rhone delta. The high biological productivity of wetlands, the strong natural selection pressures and the diversity of aquatic environments have produced many species of plants and animals that are found nowhere else. These environments are tremendous carbon sinks and biodiversity reserves and, depending on their age, composition and surrounding environmental conditions, wetlands play an important role in global climate projections. Covering more than 3% of the earth's surface, wetlands have experienced since the beginning of the 1990s a tremendous revival of interest, which has resulted in the multiplication of inventories, scientific studies and management experiments. The geographical and physical characteristics of the wetlands of the Centre-Val de Loire region conceal a large number of natural habitats that harbour great biological diversity. These wetlands are still influenced by human activities, especially agriculture, which affects their layout and functioning. In this perspective, decision-makers need to delimit spatial objects (natural habitats) in a certain way to be able to take action. Wetlands are no exception to this rule, even if it is a difficult exercise to delimit a type of environment whose main characteristic is often to occupy the transition between aquatic and terrestrial environments. However, it is possible to map wetlands with databases derived from the interpretation of photos and satellite images, such as the European Corine Land Cover database, which allows the characteristic wetland types of each place to be quantified and characterized. Scientific studies have shown limitations when using high spatial resolution images (SPOT, Landsat, ASTER) for the identification and characterization of small wetlands (1 hectare), since these wetlands generally represent spatially complex features; the use of very high spatial resolution images (>3 m) is therefore necessary to map both small and large areas. Recently developed artificial intelligence (AI) and deep learning methods for satellite image processing have also shown much better performance than traditional processing based only on pixel structures. Our research work is based on spectral and textural analysis of very high resolution images (SPOT and IRC orthoimages) using two object-oriented approaches, the nearest neighbour approach (k-NN) and the support vector machine approach (SVM). The k-NN approach gave good results for the delineation of wetlands (wet marshes and moors, ponds, artificial wetlands, water body edges, mountain wetlands, river edges and brackish marshes), with a kappa index higher than 85%.
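
A hedged sketch of the final comparison step, assuming the segmented image objects have already been reduced to spectral/textural feature vectors with wetland-class labels; scikit-learn stands in for the OBIA software, and the kappa index is computed as in the abstract.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score, accuracy_score

# Placeholder: per-object spectral and textural features with wetland labels
rng = np.random.default_rng(6)
X = rng.normal(size=(1000, 12))
y = rng.integers(0, 7, size=1000)     # 7 illustrative wetland classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf", gamma="scale"))]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: kappa={cohen_kappa_score(y_te, pred):.2f}, "
          f"accuracy={accuracy_score(y_te, pred):.2f}")
```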

Keywords: land development, GIS, sand dunes, segmentation, remote sensing

Procedia PDF Downloads 69
3850 The Impacts of Gentrification in Transit-Oriented Development on Mode Choice and Equity

Authors: Steve Apell

Abstract:

Transit-oriented development (TOD) is a popular intervention for local governments endeavoring to reduce auto-dependency and the adverse effects of sprawl. At the same time, American households, such as those of the millennial generation, are shifting their residential preferences from the suburbs to the central city. These changes have intensified demand for TOD housing, which generates high rents and leads to the displacement of low-income, transit-dependent households by more affluent middle-class families. Critics argue that the effectiveness of TOD might be compromised as newer, more affluent residents drive more and use transit less; however, there has not been a comprehensive study to test this hypothesis. Using census data (1990-2012) from six metropolitan areas, this research investigated whether block groups within a one-mile radius of TOD are gentrifying. Our findings reveal that the price of housing and the number of college graduates increased more in TODs than in the metropolitan area as a whole. Similarly, the percentage of immigrants increased in TOD, while the percentages of blacks and whites declined. Most importantly, TOD residents generally commuted less by car, while transit use increased in some metropolitan areas; TOD in the south of the United States registered a higher cost of housing and less transit use. These findings have significant implications for the future of equitable and sustainable transportation policy.

Keywords: commuting, equity, gentrification, mode choice, transit oriented development

Procedia PDF Downloads 369