Search results for: automatic classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2864

2264 Temporality in Architecture and Related Knowledge

Authors: Gonca Z. Tuncbilek

Abstract:

Architectural research tends to define architecture in terms of its permanence. In this study, the term ‘temporality’ and its use in architectural discourse are revisited. The definition, proposition, and efficacy of temporality occur both in architecture and in its related knowledge. Temporary architecture not only fulfills the requirements of architectural programs but also plays a significant role in generating an environment of architectural discourse. In recent decades, there has been great interest in temporary architectural practices such as installations, exhibition spaces, pavilions, and expositions, which invite architects to experience and think about architecture. Temporary architecture occupies a significant position among architecture, the architect, and architectural discourse. By experimenting with contemporary materials, methods, and techniques, these structures propose possibilities for the architecture of the future. They offer architects a wide-ranging variety of freedoms to experience the ‘new’ in architecture. Beyond this experimentation, they can be considered agents that redefine and reform the boundaries of the architectural discipline itself. Although the definition of architecture is re-analyzed here in terms of its temporality rather than its permanence, architecture in reality still relies on historically codified types and principles of formation. The concept of type appears in several different sciences, and in many cultures and places there is a tendency to organize and understand the world in terms of classification. ‘Type’ is used as a classification tool with or without the scope of critical invention. This study considers theories of type, putting forward epistemological and discursive arguments related to the form of architecture in relation to historical and formal disciplinary knowledge. It emphasizes the importance of temporality in architecture as a creative tool for revealing a position within architectural discourse. Temporary architecture offers ‘new’ opportunities in the architectural field to be analyzed. In brief, temporary structures allow the architect freedom to experiment in architecture. Even as architecture is redefined in terms of temporality, it still relies on historically codified types (pavilions, exhibitions, expositions, and installations). The notion of architectural types and their varying interpretations are analyzed on the basis of the texts of architectural theorists since the Age of Enlightenment. To investigate the classification of type in architecture, particularly temporary architecture, it is necessary to return to the discussion of the origin of knowledge and its classification.

Keywords: classification of architecture, exhibition design, pavilion design, temporary architecture

Procedia PDF Downloads 363
2263 Roof Material Detection Based on Object-Based Approach Using WorldView-2 Satellite Imagery

Authors: Ebrahim Taherzadeh, Helmi Z. M. Shafri, Kaveh Shahi

Abstract:

One of the most important tasks in urban remote sensing is the detection of impervious surfaces (IS), such as building roofs and roads. However, detecting IS in heterogeneous areas remains one of the most challenging tasks. In this study, the detection of concrete roofs using an object-oriented approach is proposed. A new rule-based classification was developed to detect concrete roof tiles, and the proposed rule set was applied to a WorldView-2 image. Results showed that the proposed rules have good potential for predicting concrete roof material from WorldView-2 images, with 85% accuracy.
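
As an illustration of the general idea (not the study's actual rule set), a rule-based classifier of this kind reduces to a chain of per-object spectral tests; the band indices and thresholds below are hypothetical placeholders.

```python
import numpy as np

def classify_concrete_roof(image, ndvi_max=0.2, red_min=0.15, brightness_min=0.25):
    """Toy rule-based test for concrete roof pixels in a WorldView-2 scene.
    `image` is an (H, W, 8) reflectance array; thresholds are illustrative."""
    red, nir = image[..., 4], image[..., 6]       # WV-2 red and NIR1 bands
    ndvi = (nir - red) / (nir + red + 1e-9)       # rule 1: suppress vegetation
    brightness = image.mean(axis=-1)              # rule 2: bright, spectrally flat surfaces
    return (ndvi < ndvi_max) & (red > red_min) & (brightness > brightness_min)

scene = np.random.default_rng(0).random((64, 64, 8))   # stand-in reflectance cube
print(classify_concrete_roof(scene).mean())            # fraction of pixels passing the rules
```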

Keywords: object-based, roof material, concrete tile, WorldView-2

Procedia PDF Downloads 418
2262 Global Positioning System Match Characteristics as a Predictor of Badminton Players’ Group Classification

Authors: Yahaya Abdullahi, Ben Coetzee, Linda Van Den Berg

Abstract:

The study aimed at establishing the global positioning system (GPS) determined singles match characteristics that act as predictors of successful and less-successful male singles badminton players’ group classification. Twenty-two (22) male singles players (age: 23.39 ± 3.92 years; body stature: 177.11 ± 3.06 cm; body mass: 83.46 ± 14.59 kg) who represented 10 African countries participated in the study. Players were categorised as successful or less successful according to the results of five championships of the 2014/2015 season. GPS units (MinimaxX V4.0), Polar heart rate transmitter belts, and digital video cameras were used to collect match data. GPS-related variables were corrected for match duration, and independent t-tests, a cluster analysis, and a binary forward stepwise logistic regression were calculated. A receiver operating characteristic (ROC) curve was used to determine the validity of the group classification model. High-intensity accelerations per second were identified as the only GPS-determined variable that showed a significant difference between groups. Furthermore, only high-intensity accelerations per second (p=0.03) and low-intensity efforts per second (p=0.04) were identified as significant predictors of group classification, with 76.88% of players classified back into their original groups by means of the GPS-based logistic regression formula. The ROC showed a value of 0.87. The identification of these GPS-related variables as determinants of badminton performance emphasizes the importance of using badminton drills and conditioning techniques to improve not only players’ physical fitness levels but also their ability to accelerate at high intensities.
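
A minimal sketch of this modelling step, with synthetic stand-ins for the two significant predictors (the values and the 0/1 group labels below are invented, not the study's data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 22                                    # players in the study
X = np.column_stack([
    rng.normal(0.30, 0.05, n),            # high-intensity accelerations per second
    rng.normal(1.10, 0.15, n),            # low-intensity efforts per second
])
y = (X[:, 0] + 0.02 * rng.normal(size=n) > 0.30).astype(int)  # 1 = successful (synthetic rule)

model = LogisticRegression().fit(X, y)
print("back-classification rate:", model.score(X, y))              # cf. the reported 76.88%
print("ROC AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))  # cf. the reported 0.87
```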

Keywords: badminton, global positioning system, match analysis, inertial movement analysis, intensity, effort

Procedia PDF Downloads 186
2261 Automatic Assignment of Geminate and Epenthetic Vowel for Amharic Text-to-Speech System

Authors: Tadesse Anberbir, Bankole Felix, Tomio Takara

Abstract:

In the development of a text-to-speech synthesizer, automatic derivation of correct pronunciation from the grapheme form of a text is a central problem. Deriving phonological features which are not shown in orthography is particularly challenging. In the Amharic language, geminates and epenthetic vowels are crucial for proper pronunciation, but neither is shown in orthography. In this paper, we propose and integrate a morphological analyzer into an Amharic Text-to-Speech system, mainly to predict geminate and epenthetic vowel positions, and we prepare a duration modeling method. The Amharic Text-to-Speech system (AmhTTS) is a parametric and rule-based system that adopts a cepstral method and uses a source-filter model for speech production and a Log Magnitude Approximation (LMA) filter as the vocal tract filter. The naturalness of the system after employing the duration modeling was evaluated by a sentence listening test, and we achieved an average Mean Opinion Score (MOS) of 3.4 (68%), which is moderate. By modeling the duration of geminates and controlling the locations of epenthetic vowels, we are able to synthesize good-quality speech. Our system is mainly suitable to be customized for other Ethiopian languages with limited resources.

Keywords: Amharic, gemination, speech synthesis, morphology, epenthesis

Procedia PDF Downloads 79
2260 Automatic Teller Machine System Security by Using Mobile SMS Code

Authors: Husnain Mushtaq, Mary Anjum, Muhammad Aleem

Abstract:

The main objective of this paper is to develop high security for Automatic Teller Machines (ATMs). In this system, banks collect mobile numbers from their customers and then send a code to the customer’s mobile number. In most countries, existing ATMs use a magnetic card reader: the customer is identified by inserting an ATM card whose magnetic stripe holds unique information such as the card number and some security limitations. By entering a personal identification number (PIN), the customer is first authenticated and then given access to the bank account in order to withdraw cash or use other services provided by the bank. Card fraud is a further problem: once a user’s bank card is lost and the password is stolen, or a criminal simply steals a customer’s card and PIN, the criminal can withdraw all the cash in a very short time, causing great financial loss to the customer; this type of fraud has increased worldwide. To resolve this problem, we provide a solution that combines a mobile SMS code with the ATM PIN code in order to improve the verification of customers using the ATM system and to strengthen confidence in the banking sector.
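
A minimal sketch of the second-factor check described above, assuming the bank issues a short-lived random code over SMS and verifies it together with the PIN (the code length, expiry time, and function names are illustrative, not the paper's design):

```python
import hmac
import secrets
import time

def issue_sms_code(ttl_s=120):
    """Generate a 6-digit one-time code and its expiry timestamp;
    in practice the code would be sent to the customer's registered number."""
    code = f"{secrets.randbelow(10**6):06d}"
    return code, time.time() + ttl_s

def verify(entered_pin, stored_pin, entered_code, issued_code, expires_at):
    """Both factors must pass: the PIN and an unexpired SMS code."""
    pin_ok = hmac.compare_digest(entered_pin, stored_pin)      # constant-time compare
    code_ok = hmac.compare_digest(entered_code, issued_code) and time.time() < expires_at
    return pin_ok and code_ok

code, expires_at = issue_sms_code()
print(verify("4321", "4321", code, code, expires_at))   # True within the validity window
```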

Keywords: PIN, inquiry, biometric, magnetic strip, iris recognition, face recognition

Procedia PDF Downloads 359
2259 Revisiting the Swadesh Wordlist: How Long Should It Be

Authors: Feda Negesse

Abstract:

One of the most important indicators of research quality is a good data-collection instrument that can yield reliable and valid data. The Swadesh wordlist has been used for more than half a century for collecting data in comparative and historical linguistics, though arbitrariness is observed in its application and size. This research compares the classification results of the 100-item Swadesh wordlist with those of its subsets to determine whether reducing the size of the wordlist impacts its effectiveness. In the comparison, the 100-, 50-, and 40-item wordlists were used to compute lexical distances of 29 Cushitic and Semitic languages spoken in Ethiopia and neighbouring countries. Gabmap, a web-based application, was employed to compute the lexical distances and to divide the languages into related clusters. The study shows that the subsets are not as effective as the 100-item wordlist in clustering languages into smaller subgroups, but they are equally effective in dividing languages into bigger groups such as subfamilies. It is noted that the subsets may lead to an erroneous classification whereby unrelated languages form a cluster by chance which is not attested by a comparative study. The chance of getting a wrong result is higher when the subsets are used to classify languages which are not closely related. Though further study is still needed to settle the issues around the size of the Swadesh wordlist, this study indicates that the 50- and 40-item wordlists cannot be recommended as reliable substitutes for the 100-item wordlist under all circumstances. The choice seems to be determined by the objective of the researcher and the degree of affiliation among the languages to be classified.
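
The clustering step can be reproduced generically with hierarchical clustering on a lexical-distance matrix; the sketch below uses a random symmetric matrix as a stand-in for the 29-language distances and makes no claim about Gabmap's internal algorithm.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
d = rng.random((29, 29))                      # hypothetical pairwise lexical distances
dist = (d + d.T) / 2                          # make the matrix symmetric
np.fill_diagonal(dist, 0)

Z = linkage(squareform(dist), method="average")        # UPGMA-style agglomeration
subfamilies = fcluster(Z, t=2, criterion="maxclust")   # coarse two-way split
print(subfamilies)                                     # cluster label per language
```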

Keywords: classification, Cushitic, Swadesh, wordlist

Procedia PDF Downloads 293
2258 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection

Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa

Abstract:

Light detection and ranging (LiDAR) is an active remote sensing technology used for several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate, dense point clouds. The classification of airborne laser scanning (ALS) point clouds is a very important task that still remains a real challenge for many scientists. The support vector machine (SVM) is one of the most used statistical learning algorithms based on kernels. SVM is a non-parametric method, and it is recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs a robust non-linear classification of samples. The data are often not linearly separable; SVMs map the data into a higher-dimensional space in which they become linearly separable, while performing all the computations in the original space. This is one of the main reasons that SVMs are well suited for high-dimensional classification problems. Only a few training samples, called support vectors, are required. SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared to several other methods. Such properties are particularly suited to remote sensing classification problems and explain their recent adoption. In this poster, the SVM classification of ALS LiDAR data is proposed. Firstly, connected component analysis is applied for clustering the point cloud. Secondly, the resulting clusters are fed into the SVM classifier. A radial basis function (RBF) kernel is used because only a small number of parameters (C and γ) needs to be chosen, which decreases the computation time. In order to optimize the classification rates, parameter selection is explored: it consists of finding the parameters (C and γ) leading to the best overall accuracy using grid search and 5-fold cross-validation. The exploited LiDAR point cloud is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data used are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The ground class and three other classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets were selected randomly several times. The obtained results demonstrated that parameter selection can narrow the search to a restricted interval of (C, γ) that can be further explored, but it does not systematically lead to the optimal rates. The SVM classifier with tuned hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision tree. The comparison showed the superiority of the SVM classifier with parameter selection for LiDAR data over the other classifiers.
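
A minimal sketch of the grid search with 5-fold cross-validation over (C, γ), using a synthetic four-class dataset as a stand-in for the per-cluster LiDAR features (the grid values are a typical starting range, not the study's):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Stand-in for feature vectors of the connected-component clusters (4 classes).
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)

param_grid = {"C": [0.1, 1, 10, 100, 1000],
              "gamma": [1e-4, 1e-3, 1e-2, 1e-1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_)      # (C, gamma) with the best overall accuracy
print(search.best_score_)       # mean 5-fold cross-validated accuracy
```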

Keywords: classification, airborne LiDAR, parameters selection, support vector machine

Procedia PDF Downloads 144
2257 Energy Detection Based Sensing and Primary User Traffic Classification for Cognitive Radio

Authors: Urvee B. Trivedi, U. D. Dalal

Abstract:

As wireless communication services grow quickly, the problem of spectrum utilization has gradually become more serious. Cognitive radio is an emerging technology that has come out to solve today’s spectrum scarcity problem. To support the spectrum-reuse functionality, secondary users are required to sense the radio frequency environment; once the primary users are found to be active, the secondary users must vacate the channel within a certain amount of time. Therefore, spectrum sensing is of significant importance. Once sensing is done, different prediction rules apply to classify the traffic pattern of the primary user. Primary users follow two types of traffic patterns: periodic and stochastic ON-OFF patterns. A cognitive radio can learn the patterns in different channels over time. Two types of classification methods are discussed in this paper: one based on edge detection and one using the autocorrelation function. The edge detection method has high accuracy, but it cannot tolerate sensing errors. Autocorrelation-based classification is applicable in real environments, as it can tolerate some amount of sensing error.
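
For the sensing stage itself, energy detection reduces to comparing the average received power against a threshold tied to the noise floor; the sketch below is a toy illustration with an arbitrary threshold factor, not a calibrated detector.

```python
import numpy as np

def energy_detect(samples, noise_power, threshold_factor=1.1):
    """Declare the primary user present when the average energy of the
    sensed samples exceeds a multiple of the known noise power."""
    energy = np.mean(np.abs(samples) ** 2)
    return energy > threshold_factor * noise_power

rng = np.random.default_rng(0)
noise = rng.normal(0, 1, 4096)                                  # channel idle
busy = noise + 0.5 * np.sin(2 * np.pi * 0.1 * np.arange(4096))  # PU signal present
print(energy_detect(noise, 1.0), energy_detect(busy, 1.0))      # False, True
```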

Keywords: cognitive radio (CR), probability of detection (PD), probability of false alarm (PF), primary user (PU), secondary user (SU), fast Fourier transform (FFT), signal to noise ratio (SNR)

Procedia PDF Downloads 340
2256 Predictive Analytics of Student Performance Determinants

Authors: Mahtab Davari, Charles Edward Okon, Somayeh Aghanavesi

Abstract:

Every institution of learning is usually interested in the performance of its enrolled students. The level of these performances determines the approach an institution may adopt in rendering academic services. The focus of this paper is to evaluate students' academic performance in given courses of study using machine learning methods. This study evaluated various supervised machine learning classification algorithms, such as Logistic Regression (LR), Support Vector Machine, Random Forest, Decision Tree, K-Nearest Neighbors, Linear Discriminant Analysis, and Quadratic Discriminant Analysis, using selected features to predict study performance. The accuracy, precision, recall, and F1 score obtained from 5-fold cross-validation were used to determine the best classification algorithm for predicting students’ performances. SVM (using a linear kernel), LDA, and LR were identified as the best-performing machine learning methods. Also, using the LR model, this study identified students' educational habits, such as reading and paying attention in class, as strong determinants of above-average performance. Other important features include the student's academic history and work. Demographic factors such as age, gender, high school graduation, etc., had no significant effect on a student's performance.
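
A condensed sketch of this model comparison, scoring a few of the named classifiers with the same four metrics under 5-fold cross-validation (the synthetic dataset stands in for the real student records):

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=12, random_state=0)  # stand-in data
models = {"LR": LogisticRegression(max_iter=1000),
          "SVM-linear": SVC(kernel="linear"),
          "LDA": LinearDiscriminantAnalysis()}
for name, model in models.items():
    cv = cross_validate(model, X, y, cv=5,
                        scoring=["accuracy", "precision", "recall", "f1"])
    means = {k[5:]: round(v.mean(), 3) for k, v in cv.items() if k.startswith("test_")}
    print(name, means)
```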

Keywords: student performance, supervised machine learning, classification, cross-validation, prediction

Procedia PDF Downloads 119
2255 Deep Learning Approach to Trademark Design Code Identification

Authors: Girish J. Showkatramani, Arthi M. Krishna, Sashi Nareddi, Naresh Nula, Aaron Pepe, Glen Brown, Greg Gabel, Chris Doninger

Abstract:

Trademark examination and approval is a complex process that involves analysis and review of the design components of marks, such as their visual representation, as well as the textual data associated with marks, such as the mark's description. Currently, the process of identifying marks with a similar visual representation is done manually in the United States Patent and Trademark Office (USPTO) and takes a considerable amount of time. Moreover, the accuracy of these searches depends heavily on the experts determining the trademark design codes used to catalog the visual elements of the mark. In this study, we explore several methods to automate trademark design code classification. Based on recent successes of convolutional neural networks in image classification, we have used several different convolutional neural networks, such as Google’s Inception v3, Inception-ResNet-v2, and Xception. The study also looks into other techniques to augment the results from CNNs, such as using the Open Source Computer Vision Library (OpenCV) to pre-process the images. This paper reports the results of the various models trained on years of annotated trademark images.

Keywords: trademark design code, convolutional neural networks, trademark image classification, trademark image search, Inception-ResNet-v2

Procedia PDF Downloads 226
2254 Drugstore Control System Design and Realization Based on Programmable Logic Controller (PLC)

Authors: Muhammad Faheem Khakhi, Jian Yu Wang, Salman Muhammad, Muhammad Faisal Shabir

Abstract:

Population growth and China's two-child policy will boost the pharmaceutical market, which will continue to grow for some time in the future; the traditional pharmacy dispensary has been unable to meet the growing medical needs of the people. With the strong support of national policy, the automatic transformation of traditional pharmacies is the trend of the times, and the new type of intelligent pharmacy system will continue to promote the development of the pharmaceutical industry. Against this background, this paper proposes an intelligent storage and automatic drug delivery system based on PLC control; the complete design of the lower computer's control system and the host computer's software system is presented. The system can be applied to dispensing work for Chinese herbal medicines and Western medicines. Firstly, the essentials of an intelligent control system for a pharmacy are discussed. After an analysis of the requirements, the overall scheme of the system design is presented. Secondly, the paper introduces the software and hardware design of the lower computer's control system, including the selection of the PLC and of the motion control system; the problems of the human-computer interaction module and of the communication between the PC and the PLC are solved, and the program design and development of the PLC control system are completed. The design of the upper computer's software management system is described in detail: by analyzing the E-R diagram, the database is established, the communication protocol between the systems is customized, and C++ Builder is adopted to realize the interface module, supply module, main control module, etc. The paper also gives the implementation of the multi-threaded system and the communication method. Lastly, each module of the lower computer control system is tested. Then, after building a test environment, the function test of the upper computer software management system is completed. On this basis, the entire control system undergoes an overall test.

Keywords: automatic pharmacy, PLC, control system, management system, communication

Procedia PDF Downloads 302
2253 Traffic Density Measurement by Automatic Detection of the Vehicles Using Gradient Vectors from Aerial Images

Authors: Saman Ghaffarian, Ilgin Gökaşar

Abstract:

This paper presents a new automatic vehicle detection method for measuring traffic density from very high resolution aerial images. The proposed method starts by extracting road regions from the image using road vector data. Then, the road image is divided into equal sections, considering the resolution of the images. Gradient vectors of the road image are computed from the edge map of the corresponding image. The gradient vectors along each section boundary are divided into groups where the gradient vectors significantly change their directions. Finally, the number of vehicles in each section is obtained by calculating the standard deviation of the gradient vectors in each group and accepting a group as a vehicle when its standard deviation is above a predefined threshold value. The proposed method was tested on four very high resolution aerial images acquired over Istanbul, Turkey, which depict roads and vehicles with diverse characteristics. The results show the reliability of the proposed method in detecting vehicles, producing an overall F1 score of 86%.
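
A toy version of the core test, which computes gradient directions along a road strip and flags sections whose direction spread exceeds a threshold, is sketched below; the section width and threshold are illustrative, and the grouping along section boundaries is simplified to fixed-width windows.

```python
import cv2
import numpy as np

def count_vehicle_sections(road_gray, section_width=64, std_threshold=40.0):
    """Count fixed-width sections of a road strip whose gradient-direction
    spread (standard deviation, in degrees) suggests a vehicle."""
    gx = cv2.Sobel(road_gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(road_gray, cv2.CV_64F, 0, 1)
    angles = np.degrees(np.arctan2(gy, gx))
    hits = 0
    for col in range(0, road_gray.shape[1], section_width):
        section = angles[:, col:col + section_width]
        if section.size and section.std() > std_threshold:
            hits += 1
    return hits

strip = (np.random.default_rng(0).random((128, 512)) * 255).astype(np.uint8)
print(count_vehicle_sections(strip))   # sections flagged in a synthetic strip
```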

Keywords: aerial images, intelligent transportation systems, traffic density measurement, vehicle detection

Procedia PDF Downloads 374
2252 The Mineralogy of Shales from the Pilbara and How Chemical Weathering Affects the Intact Strength

Authors: Arturo Maldonado

Abstract:

In the iron ore mining industry, the intact strength of rock units is defined using the uniaxial compressive strength (UCS). This parameter is very important for the classification of shale materials, allowing the split between rock and cohesive soils based on the magnitude of UCS. For this research, it is assumed that a UCS less than or equal to 1 MPa is representative of soils. Several researchers have anticipated that the magnitude of UCS reduces as weathering progresses; also, since UCS is a directional property, its magnitude depends upon the orientation of the rock fabric. Thus, the paper presents how the UCS of shales is affected by both weathering grade and bedding orientation. The mineralogy of the shales has been defined using hyperspectral and chemical assays to identify the mineral constituents of shale and other non-shale materials. Geological classification tools have been used to define distinct lithological types, and in this manner, the author uses mineralogical datasets to recognize and isolate shales from other rock types and to develop ternary plots for fresh and weathered shales. The mineralogical classification of shales has reduced the contamination of lithology types and facilitated the study of the physical factors affecting the intact strength of shales, such as anisotropic strength due to bedding orientation. The analysis of the mineralogical characteristics of shales is perhaps the most important contribution of this paper to other researchers who may wish to explore similar methods.

Keywords: rock mechanics, mineralogy, shales, weathering, anisotropy

Procedia PDF Downloads 50
2251 Proposal for a Web System for the Control of Fungal Diseases in Grapes in Fruits Markets

Authors: Carlos Tarmeño Noriega, Igor Aguilar Alonso

Abstract:

Fungal diseases are common in vineyards; they decrease the quality of the products that can be sold, generating distrust of the customer towards the seller when buying fruit. Currently, thanks to artificial intelligence, technology allows the classification of fruits according to their characteristics. This study proposes the implementation of a control system that allows the identification of the main fungal diseases present in the Italia grape, making use of a convolutional neural network (CNN), OpenCV, and TensorFlow. The methodology used was based on a review of 20 articles related to the proposed research on quality control, classification, and recognition of fruits through artificial vision techniques.

Keywords: computer vision, convolutional neural networks, quality control, fruit market, OpenCV, TensorFlow

Procedia PDF Downloads 76
2250 Assessing the Current State of Wheelchair Accessibility in Shopping Centers and Stores in Saudi Arabia

Authors: Majed M. Mustafa, Abdulrahman A. Altassan

Abstract:

In recent years, ensuring accessibility for all individuals, particularly those with mobility impairments, has gained significant attention in Saudi Arabia. This research aims to evaluate wheelchair accessibility in shopping centers, malls, and stores across the Kingdom, highlighting its critical role in promoting inclusivity and equal access. The study will focus on the availability and quality of ramps, automatic doors, lifts, accessible restrooms, and the overall ease of navigation for wheelchair users. Utilizing a mixed-methods approach, the research will employ site assessments, user surveys, and interviews with facility managers to gather comprehensive data. Preliminary findings indicate that while some facilities have made strides in accessibility, numerous areas still require improvement. The study will provide targeted recommendations to enhance accessibility, ensuring that all users can navigate shopping environments with ease and dignity. In conclusion, this research underscores the need for continuous efforts and policy enhancements to achieve universal design standards in public spaces within Saudi Arabia.

Keywords: automatic doors, equal access, ramp quality, wheelchair accessibility

Procedia PDF Downloads 26
2249 Measurement of Susceptibility Users Using Email Phishing Attack

Authors: Cindy Sahera, Sarwono Sutikno

Abstract:

Rapid technological developments also have negative impacts, namely the increase in technology-based crime, or cybercrime. One technique that can be used to conduct cybercrime attacks is the phishing email. The issue is whether users are aware that email can be misused by others in ways that can harm them. This research was conducted to measure the susceptibility of selected targets to email abuse. The objectives of this research are to measure targets’ susceptibility and to find vulnerabilities in email recipients. Three steps were taken in this research: (1) the information gathering phase, (2) the design phase, and (3) the execution phase. The first step includes the collection of the information necessary to carry out an attack on a target. The next step is to design an attack against the target. The last step is to send phishing emails to the target. There are three levels of susceptibility: level 1 indicates low susceptibility, level 2 intermediate susceptibility, and level 3 high susceptibility. The results showed that more users are at level 1 and level 2 than at level 3, which means users are not too careless. However, this does not mean users are safe. Vulnerabilities may still occur, such as automatic location detection when opening an email and malware automatically downloaded when the user clicks a link in the email.

Keywords: cybercrime, email phishing, susceptibility, vulnerability

Procedia PDF Downloads 279
2248 An Empirical Evaluation of Performance of Machine Learning Techniques on Imbalanced Software Quality Data

Authors: Ruchika Malhotra, Megha Khanna

Abstract:

The development of change prediction models can help software practitioners in planning testing and inspection resources at the early phases of software development. However, a major challenge faced during the training process of any classification model is the imbalanced nature of software quality data. A dataset with very few instances of the minority outcome categories leads to an inefficient learning process, and a classification model developed from imbalanced data generally does not predict these minority categories correctly. Thus, for a given dataset, a minority of classes may be change-prone whereas the majority may be non-change-prone. This study explores various alternatives for adeptly handling imbalanced software quality data using different sampling methods and effective MetaCost learners. The study also analyzes and justifies the use of different performance metrics while dealing with imbalanced data. In order to empirically validate the different alternatives, the study uses change data from three application packages of the open-source Android dataset and evaluates the performance of six different machine learning techniques. The results of the study indicate an extensive improvement in the performance of the classification models when using resampling methods and robust performance measures.
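
One of the sampling alternatives, random oversampling of the minority (change-prone) class, can be sketched as below; the imbalanced-learn package and the 90/10 class split are assumptions for illustration, and AUC is used as the robust measure rather than plain accuracy.

```python
from collections import Counter
from imblearn.over_sampling import RandomOverSampler   # assumed external package
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in for class-level metrics: 10% of classes change-prone (label 1).
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_res, y_res = RandomOverSampler(random_state=0).fit_resample(X_tr, y_tr)
print(Counter(y_tr), "->", Counter(y_res))              # balanced training set

clf = RandomForestClassifier(random_state=0).fit(X_res, y_res)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```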

Keywords: change proneness, empirical validation, imbalanced learning, machine learning techniques, object-oriented metrics

Procedia PDF Downloads 415
2247 Monitoring of Cannabis Cultivation with High-Resolution Images

Authors: Levent Basayigit, Sinan Demir, Burhan Kara, Yusuf Ucar

Abstract:

Cannabis is mostly used for drug production. In some countries, an excessive amount of illegal cannabis is cultivated and sold. Most illegal cannabis cultivation occurs on lands far from settlements. On farmland, it is cultivated together with other crops: cannabis is surrounded by tall plants like corn and sunflower, or cultivated with tall crops as a mixed culture. The common method of determining illegal cultivation areas is to investigate information obtained from people. This method is not sufficient for detecting illegal cultivation in remote areas, so more effective methods are needed. Remote sensing is one of the most important technologies for monitoring plant growth on the land. The aim of this study is to monitor cannabis cultivation areas using satellite imagery, and its main purpose was to develop an applicable method for monitoring cannabis cultivation. For this purpose, cannabis was grown in plots either alone or surrounded by corn and sunflower. The morphological characteristics of cannabis were recorded twice per month during the vegetation period. A spectral signature library was created with a spectroradiometer. The parcels were monitored with high-resolution satellite imagery. By processing the satellite imagery, the cultivation areas of cannabis were classified. To separate the cannabis plots from the other plants, the multiresolution segmentation algorithm was found to be the most successful for classification. The WorldView Improved Vegetative Index (WV-VI) classification was the most accurate method for monitoring plant density. As a result, an object-based classification method and vegetation indices were sufficient for monitoring cannabis cultivation in multi-temporal WorldView images.

Keywords: cannabis, drug, remote sensing, object-based classification

Procedia PDF Downloads 266
2246 The Classification Performance in Parametric and Nonparametric Discriminant Analysis for a Class- Unbalanced Data of Diabetes Risk Groups

Authors: Lily Ingsrisawang, Tasanee Nacharoen

Abstract:

Introduction: The problem of unbalanced data sets generally appears in real-world applications. Due to unequal class distributions, many research papers have found that the performance of existing classifiers tends to be biased towards the majority class. The k-nearest neighbors’ nonparametric discriminant analysis is one method that has been proposed for classifying unbalanced classes with good performance. Hence, the methods of discriminant analysis are of interest to us in investigating misclassification error rates for class-imbalanced data of three diabetes risk groups. Objective: The purpose of this study was to compare the classification performance of parametric and nonparametric discriminant analysis in a three-class classification application on class-imbalanced data of diabetes risk groups. Methods: Data from a health project for 599 staff members in a government hospital in Bangkok were obtained for the classification problem. The staff were diagnosed into one of three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, along with the variables diabetes risk group, age, gender, cholesterol, and BMI, were analyzed and bootstrapped up to 50 and 100 samples, with 599 observations per sample, for additional estimation of the misclassification error rate. Each data set was explored for departure from multivariate normality and for equality of the covariance matrices of the three risk groups. Both the original data and the bootstrap samples showed non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors’ discriminant function were performed over the 50 and 100 bootstrap samples and applied to the original data. In finding the optimal classification rule, prior probabilities were set up both as equal proportions (0.33:0.33:0.33) and as unequal proportions with three choices: (0.90:0.05:0.05), (0.80:0.10:0.10), or (0.70:0.15:0.15). Results: The results from the 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k = 3 or k = 4 and prior probabilities of {non-risk:risk:diabetic} set to {0.90:0.05:0.05} or {0.80:0.10:0.10} gave the smallest misclassification error rate. Conclusion: The k-nearest neighbors approach is suggested for classifying three-class-imbalanced data of diabetes risk groups.
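
A compact sketch of the comparison, using a synthetic 90/5/5 dataset in place of the hospital records; note that scikit-learn's LDA/QDA accept class priors directly, while k-NN does not take priors (a simplification relative to the study's setup):

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Stand-in for the 599-record data set: three groups at 90/5/5 proportions.
X, y = make_classification(n_samples=599, n_features=6, n_informative=4,
                           n_redundant=0, n_classes=3,
                           weights=[0.90, 0.05, 0.05], random_state=0)
priors = [0.90, 0.05, 0.05]
models = {"LDA": LinearDiscriminantAnalysis(priors=priors),
          "QDA": QuadraticDiscriminantAnalysis(priors=priors),
          "3-NN": KNeighborsClassifier(n_neighbors=3)}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: misclassification rate = {1 - acc:.3f}")
```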

Keywords: error rate, bootstrap, diabetes risk groups, k-nearest neighbors

Procedia PDF Downloads 430
2245 2D Point Clouds Features from Radar for Helicopter Classification

Authors: Danilo Habermann, Aleksander Medella, Carla Cremon, Yusef Caceres

Abstract:

This paper analyzes the ability of 2D point cloud features to classify different models of helicopters using radar. The method does not need to estimate the blade length, the number of blades, or the period of their micro-Doppler signatures, nor is it necessary to generate spectrograms (or any other image based on the time and frequency domains). This work transforms a radar return signal into a 2D point cloud and extracts features from it. Three classifiers are used to distinguish 9 different helicopter models in order to analyze the performance of the features used in this work. The high accuracy obtained with each of the classifiers demonstrates that 2D point cloud features are very useful for classifying helicopters from radar signals.

Keywords: helicopter classification, point clouds features, radar, supervised classifiers

Procedia PDF Downloads 217
2244 Algorithm for Automatic Real-Time Electrooculographic Artifact Correction

Authors: Norman Sinnigen, Igor Izyurov, Marina Krylova, Hamidreza Jamalabadi, Sarah Alizadeh, Martin Walter

Abstract:

Background: EEG is a non-invasive brain activity recording technique with a high temporal resolution that allows real-time applications, such as neurofeedback. However, EEG data are susceptible to electrooculographic (EOG) and electromyographic (EMG) artifacts (e.g., jaw clenching, teeth squeezing, and forehead movements). Due to their non-stationary nature, these artifacts greatly obscure the information and power spectrum of EEG signals. Many EEG artifact correction methods are too time-consuming when applied to low-density EEG and have focused on offline processing or on handling a single type of EEG artifact. A software-only real-time method for correcting multiple types of EEG artifacts in high-density EEG remains a significant challenge. Methods: We demonstrate an improved approach for automatic real-time correction of EOG and EMG artifacts in EEG. The method was tested on three healthy subjects using 64 EEG channels (Brain Products GmbH) and a sampling rate of 1,000 Hz. Captured EEG signals were imported into MATLAB with the lab streaming layer interface, which allows buffering of EEG data. EMG artifacts were detected by channel variance and adaptive thresholding and corrected by channel interpolation. Real-time independent component analysis (ICA) was applied to correct EOG artifacts. Results: Our results demonstrate that the algorithm effectively reduces EMG artifacts, such as jaw clenching, teeth squeezing, and forehead movements, as well as EOG artifacts (horizontal and vertical eye movements) in high-density EEG, while preserving information about brain neuronal activity. The average computation time of EOG and EMG artifact correction for 80 s (80,000 data points) of 64-channel data is 300-700 ms, depending on the convergence of ICA and the type and intensity of the artifact. Conclusion: An automatic EEG artifact correction algorithm based on channel variance, adaptive thresholding, and ICA improves high-density EEG recordings contaminated with EOG and EMG artifacts in real-time.
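
The variance-plus-adaptive-threshold step can be sketched in a few lines; here the threshold adapts as a z-score over channel variances, and montage-aware channel interpolation is simplified to an average of the clean channels (both simplifications relative to the paper's pipeline):

```python
import numpy as np

def repair_high_variance_channels(eeg, z_max=3.0):
    """Flag channels whose variance is an outlier and replace them with
    the mean of the remaining channels. `eeg` is (n_channels, n_samples)."""
    var = eeg.var(axis=1)
    z = (var - var.mean()) / (var.std() + 1e-12)    # adaptive threshold via z-score
    bad = z > z_max
    if bad.any() and not bad.all():
        eeg[bad] = eeg[~bad].mean(axis=0)           # stand-in for channel interpolation
    return eeg, np.flatnonzero(bad)

rng = np.random.default_rng(0)
data = rng.normal(0, 1, (64, 1000))
data[10] *= 20                                      # simulate an EMG-contaminated channel
_, bad = repair_high_variance_channels(data)
print("repaired channels:", bad)                    # -> [10]
```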

Keywords: EEG, muscle artifacts, ocular artifacts, real-time artifact correction, real-time ICA

Procedia PDF Downloads 165
2243 Sentiment Analysis of Fake Health News Using Naive Bayes Classification Models

Authors: Danielle Shackley, Yetunde Folajimi

Abstract:

As more people turn to the internet for health-related information, there is a greater risk of finding false, inaccurate, or dangerous information. Sentiment analysis is a natural language processing technique that assigns polarity scores to text, ranging from positive through neutral to negative. In this research, we evaluate the weight of a sentiment analysis feature added to fake health news classification models. The dataset consists of existing, reliably labeled health article headlines, supplemented with health information about COVID-19 collected from social media sources. We started with data preprocessing and tested various vectorization methods, such as Count and TF-IDF vectorization. We implemented three Naive Bayes classifier models: Bernoulli, Multinomial, and Complement. To test the weight of the sentiment analysis feature on the dataset, we created benchmark Naive Bayes classification models without sentiment analysis; those same models were then reproduced with the feature added. We evaluated using the precision and accuracy scores. The initial Bernoulli model performed with 90% precision and 75.2% accuracy, while the model supplemented with sentiment labels performed with 90.4% precision and stayed constant at 75.2% accuracy. Our results show that the addition of sentiment analysis did not improve model precision by a wide margin; while there was no evidence of improvement in accuracy, we obtained a 1.9% improvement in the precision score with the Complement model. Future expansion of this work could include replicating the experiment and substituting a deep learning neural network model for the Naive Bayes.
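
The benchmark pipeline (TF-IDF features into the three Naive Bayes variants, scored by precision and accuracy) can be sketched as follows; the toy headlines and labels are invented stand-ins for the real corpus, and the sentiment feature is omitted:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score, precision_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB, ComplementNB, MultinomialNB

# Hypothetical headlines: 1 = fake health news, 0 = reliable.
texts = ["miracle cure reverses covid overnight", "cdc updates vaccine guidance",
         "doctors hide this one weird trick", "trial reports modest efficacy gains"] * 50
labels = [1, 0, 1, 0] * 50

X = TfidfVectorizer().fit_transform(texts)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
for nb in (BernoulliNB(), MultinomialNB(), ComplementNB()):
    pred = nb.fit(X_tr, y_tr).predict(X_te)
    print(type(nb).__name__,
          "precision:", round(precision_score(y_te, pred), 3),
          "accuracy:", round(accuracy_score(y_te, pred), 3))
```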

Keywords: sentiment analysis, Naive Bayes model, natural language processing, topic analysis, fake health news classification model

Procedia PDF Downloads 89
2242 Automatic Assignment of Geminate and Epenthetic Vowel for Amharic Text-to-Speech System

Authors: Tadesse Anberbir, Felix Bankole, Tomio Takara, Girma Mamo

Abstract:

In the development of a text-to-speech synthesizer, automatic derivation of correct pronunciation from the grapheme form of a text is a central problem. Particularly deriving phonological features which are not shown in orthography is challenging. In the Amharic language, geminates and epenthetic vowels are very crucial for proper pronunciation but neither is shown in orthography. In this paper, we proposed and integrated a morphological analyzer into an Amharic Text-to-Speech system, mainly to predict geminates and epenthetic vowel positions, and prepared a duration modeling method. Amharic Text-to-Speech system (AmhTTS) is a parametric and rule-based system that adopts a cepstral method and uses a source filter model for speech production and a Log Magnitude Approximation (LMA) filter as the vocal tract filter. The naturalness of the system after employing the duration modeling was evaluated by sentence listening test and we achieved an average Mean Opinion Score (MOS) 3.4 (68%) which is moderate. By modeling the duration of geminates and controlling the locations of epenthetic vowel, we are able to synthesize good quality speech. Our system is mainly suitable to be customized for other Ethiopian languages with limited resources.

Keywords: Amharic, gemination, speech synthesis, morphology, epenthesis

Procedia PDF Downloads 78
2241 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status

Authors: Rosa Figueroa, Christopher Flores

Abstract:

Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data containing documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, it is expected that documents belonging to the same category share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative to feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the Bag of Words approach. The score selected to represent the occurrence of tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first experiment used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences), and the last used a combination of unigrams and bigrams. To test the effectiveness of the extracted feature sets, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset. The remaining 80% of the dataset, together with 5-fold cross-validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than N-gram-based feature extraction. These results were confirmed on the remaining 80% of the dataset, where SW performed best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%).
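
The SW scoring recurrence itself is compact; below is a standard word-level implementation with arbitrary match/mismatch/gap weights (the weights and the example snippets are illustrative, not the study's settings):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Best local alignment score between token sequences a and b
    (classic dynamic programming, O(len(a) * len(b)) time and space)."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# Word-level similarity between two hypothetical clinical snippets.
print(smith_waterman("patient reports recent weight gain".split(),
                     "the patient reports weight gain".split()))
```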

Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm

Procedia PDF Downloads 292
2240 A Comprehensive Methodology for Voice Segmentation of Large Sets of Speech Files Recorded in Naturalistic Environments

Authors: Ana Londral, Burcu Demiray, Marcus Cheetham

Abstract:

Speech recording is a methodology used in many studies in cognitive and behaviour research. Modern advances in digital equipment have brought the possibility of continuously recording hours of speech in naturalistic environments and building rich sets of sound files. Speech analysis can then extract multiple features from these files for different scopes of research in language and communication. However, tools for analysing a large set of sound files and automatically extracting relevant features are often inaccessible to researchers who are not familiar with programming languages. Manual analysis is a common alternative, with a high cost in time and efficiency. In the analysis of long sound files, the first step is voice segmentation, i.e., detecting and labelling the segments that contain speech. We present a comprehensive methodology aiming to support researchers in voice segmentation as the first step of data analysis for a big set of sound files. Praat, an open-source software package, is suggested as a tool to run a voice detection algorithm, label segments and files, and extract other quantitative features over a folder structure containing a large number of sound files. We present the validation of our methodology on a set of 5,000 sound files collected in the daily life of a group of voluntary participants aged over 65. A smartphone was used to collect sound using the Electronically Activated Recorder (EAR), an app programmed to record 30-second sound samples randomly distributed throughout the day. Results demonstrated that automatic segmentation and labelling of files containing speech segments was 74% faster than manual analysis performed by two independent coders. Furthermore, the methodology presented allows manual adjustment of voiced segments with visualisation of the sound signal, and the automatic extraction of quantitative information on speech. In conclusion, we propose a comprehensive methodology for voice segmentation, to be used by researchers who have to work with large sets of sound files and are not familiar with programming tools.

Keywords: automatic speech analysis, behavior analysis, naturalistic environments, voice segmentation

Procedia PDF Downloads 278
2239 A Novel Method for Face Detection

Authors: H. Abas Nejad, A. R. Teymoori

Abstract:

Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised-learning-based facial expression recognition methods. This is because supervised methods cannot accommodate all the appearance variability across faces with respect to race, pose, lighting, facial biases, etc., in a limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage, and thereby bypassing those frames in emotion classification, would save computational power. In this work, we propose a lightweight neutral-vs-emotion classification engine, which acts as a preprocessor to traditional supervised emotion classification approaches. It dynamically learns the neutral appearance at Key Emotion (KE) points using a textural statistical model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on the textural statistical model. Robustness to dynamic shifts of KE points is achieved by evaluating similarities on a subset of neighborhood patches around each KE point, using prior information regarding the directionality of the specific facial action units acting on the respective KE point. As a result, the proposed method improves ER accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.

Keywords: neutral vs. emotion classification, Constrained Local Model, Procrustes analysis, Local Binary Pattern Histogram, statistical model

Procedia PDF Downloads 333
2238 Multi-Layer Perceptron and Radial Basis Function Neural Network Models for Classification of Diabetic Retinopathy Disease Using Video-Oculography Signals

Authors: Ceren Kaya, Okan Erkaymaz, Orhan Ayar, Mahmut Özer

Abstract:

Diabetes mellitus (diabetes) is a disease based on insulin hormone disorders that causes high blood glucose. Clinical findings indicate that diabetes can be diagnosed from electrophysiological signals obtained from the vital organs. 'Diabetic retinopathy' is one of the most common eye diseases resulting from diabetes, and it is the leading cause of vision loss due to structural alteration of the retinal layer vessels. In this study, features of horizontal and vertical Video-Oculography (VOG) signals are used to classify non-proliferative and proliferative diabetic retinopathy. Twenty-five features are acquired by applying the discrete wavelet transform to VOG signals taken from 21 subjects. Two models, based on the multi-layer perceptron and the radial basis function, are recommended for the diagnosis of diabetic retinopathy; the proposed models can also detect the level of the disease. We show the comparative classification performance of the proposed models: the RBF model (100%) achieves better classification performance than the MLP model (94%).

Keywords: diabetic retinopathy, discrete wavelet transform, multi-layer perceptron, radial basis function, video-oculography (VOG)

Procedia PDF Downloads 252
2237 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutrons Sources

Authors: Mustafa Alhamdi

Abstract:

An industrial application to classify gamma rays and neutron events is investigated in this study using deep machine learning. Identification using a convolutional neural network and a recursive neural network has shown a significant improvement in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on feature extraction methods, followed by classification. The features extracted from the spectrum profiles aim to find patterns and relationships that represent the actual spectrum energy in a low-dimensional space. Increasing the level of separation between classes in feature space improves the possibility of enhancing classification accuracy. The nonlinear feature extraction performed by a neural network involves a variety of transformations and mathematical optimizations, while principal component analysis depends on linear transformations to extract features and subsequently improve the classification accuracy. In this paper, the isotope spectrum information has been preprocessed by finding the frequency components relative to time and using them as a training dataset. The Fourier transform implementation to extract frequency components has been optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4. The readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, by combining the votes of many models, managed to improve the classification accuracy of the neural networks. The ability to discriminate gamma and neutron events in a single prediction approach using deep machine learning has shown high accuracy. The paper's findings show the ability to improve classification accuracy by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes. Tuning the deep machine learning models by hyperparameter optimization enhanced the separation in the latent space and provided the ability to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
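
The spectrogram preprocessing step, a windowed short-time Fourier transform of the detector trace, can be sketched as follows; the Hann window, segment length, and overlap are illustrative choices, and the synthetic trace stands in for the simulated detector output:

```python
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(0)
trace = rng.normal(size=4096) + np.sin(2 * np.pi * 0.12 * np.arange(4096))  # stand-in detector trace

# A Hann window reduces spectral leakage at the cost of some frequency resolution.
f, t, Sxx = spectrogram(trace, fs=1.0, window="hann", nperseg=256, noverlap=192)
features = np.log1p(Sxx)     # log-compressed time-frequency map used as training input
print(features.shape)        # (frequency bins, time frames)
```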

Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification

Procedia PDF Downloads 143
2236 6D Posture Estimation of Road Vehicles from Color Images

Authors: Yoshimoto Kurihara, Tad Gonsalves

Abstract:

Currently, in the field of object posture estimation, there is research on estimating the position and angle of an object by storing a 3D model of the object in a computer in advance and matching against that model. In this research, however, we have succeeded in creating a module that is much simpler, smaller in scale, and faster in operation. Our 6D pose estimation model consists of two different networks: a classification network and a regression network. From a single RGB image, the trained model estimates the class of the object in the image, the coordinates of the object, and its rotation angle in 3D space. In addition, we compared the estimation accuracy for each camera position, i.e., the angle from which the object was captured. The highest accuracy was recorded when the camera position was 75°: the accuracy of classification was about 87.3%, and that of regression was about 98.9%.

Keywords: 6D posture estimation, image recognition, deep learning, AlexNet

Procedia PDF Downloads 148
2235 An Artificial Intelligence Supported QUAL2K Model for the Simulation of Various Physiochemical Parameters of Water

Authors: Mehvish Bilal, Navneet Singh, Jasir Mushtaq

Abstract:

Water pollution puts people's health at risk, and it can also impact the ecology. For practitioners of integrated water resources management (IWRM), water quality modelling may be useful for informing decisions about pollution control (such as discharge permitting) or demand management (such as abstraction permitting). Mathematical simulation, which establishes effective relations between pollutant loads, the movement of contaminants, their sources, and water quality, is regarded as one of the best estimation tools for comprehending the current pollutant load. The current study involves the QUAL2K model, which relies on manual simulation of the various physiochemical characteristics of water. To this end, various sensors could be installed for the automatic simulation of these characteristics, and an artificial intelligence model is proposed for the automatic simulation of the water quality parameters. Water quality models have become an effective tool for identifying water contamination worldwide, as well as the ultimate fate and behavior of contaminants in the water environment. Water quality model research is primarily conducted in Europe and other industrialized countries, where theoretical underpinnings and practical research are prioritized.

Keywords: artificial intelligence, QUAL2K, simulation, physiochemical parameters

Procedia PDF Downloads 91