Search results for: climatic classification
2176 Improved Rare Species Identification Using Focal Loss Based Deep Learning Models
Authors: Chad Goldsworthy, B. Rajeswari Matam
Abstract:
The use of deep learning for species identification in camera trap images has revolutionised our ability to study, conserve and monitor species in a highly efficient and unobtrusive manner, with state-of-the-art models achieving accuracies surpassing that of manual human classification. The high imbalance of camera trap datasets, however, results in poor accuracies for minority (rare or endangered) species, owing to their small contribution to the overall model accuracy. This paper investigates the use of Focal Loss, in comparison to the traditional Cross Entropy loss function, to improve the identification of minority species in the “255 Bird Species” dataset from Kaggle. The results show that, although Focal Loss slightly decreased the accuracy of the majority species, it increased the F1-score by 0.06 and improved the identification of the bottom two, five and ten (minority) species by 37.5%, 15.7% and 10.8%, respectively, while also yielding a 2.96% improvement in overall accuracy.
Keywords: convolutional neural networks, data imbalance, deep learning, focal loss, species classification, wildlife conservation
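The loss substitution the abstract describes can be sketched in a few lines. Below is a minimal NumPy illustration of focal loss compared with cross-entropy; the probabilities and the γ = 2 setting are illustrative assumptions, not values from the paper.

```python
import numpy as np

def cross_entropy(p_true):
    """Standard cross-entropy loss for the probability assigned to the true class."""
    return -np.log(p_true)

def focal_loss(p_true, gamma=2.0, alpha=1.0):
    """Focal loss: down-weights well-classified examples by the
    modulating factor (1 - p)^gamma, so hard (often minority-class)
    examples dominate the gradient."""
    return -alpha * (1.0 - p_true) ** gamma * np.log(p_true)

# A well-classified majority-class example vs. a hard minority-class example.
p = np.array([0.95, 0.20])
ce = cross_entropy(p)
fl = focal_loss(p, gamma=2.0)

# The relative weight of the hard example grows under focal loss.
print(ce[1] / ce[0])   # cross-entropy ratio (hard / easy)
print(fl[1] / fl[0])   # focal-loss ratio is much larger
```

With α = 1, focal loss never exceeds cross-entropy, but it shrinks the easy example's loss far more than the hard one's, which is the mechanism behind the improved minority-species recall reported above.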
Procedia PDF Downloads 189
2175 Spatial Data Mining by Decision Trees
Authors: Sihem Oujdi, Hafida Belbachir
Abstract:
Existing data mining methods cannot be applied directly to spatial data because spatial specificities, such as spatial relationships, must be taken into account. This paper focuses on classification with decision trees, one of the principal data mining techniques. We propose an extension of the C4.5 algorithm for spatial data based on two different approaches: join materialization and querying the different tables on the fly. Similar work has been done on these two main approaches; the first, join materialization, favours processing time at the expense of memory space, whereas the second, querying the tables on the fly, conserves memory space at the expense of processing time. The modified C4.5 algorithm requires three input tables: a target table, a neighbour table, and a spatial join index that contains the possible spatial relationships between the objects in the target table and those in the neighbour table. The proposed algorithm is applied to spatial data in the accidentology domain. A comparative study of our approach with other work on classification by spatial decision trees is also presented.
Keywords: C4.5 algorithm, decision trees, S-CART, spatial data mining
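The splitting criterion at the heart of C4.5, spatial or not, is the gain ratio. The following is a minimal NumPy sketch of that criterion on a toy accidentology-style example; the attribute names and data are hypothetical, and the spatial-join machinery of the paper is not reproduced here.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gain_ratio(feature, labels):
    """C4.5 split criterion: information gain normalised by split information."""
    values, counts = np.unique(feature, return_counts=True)
    weights = counts / counts.sum()
    cond = sum(w * entropy(labels[feature == v]) for v, w in zip(values, weights))
    gain = entropy(labels) - cond
    split_info = -np.sum(weights * np.log2(weights))
    return gain / split_info if split_info > 0 else 0.0

# Toy example: does a 'near_junction' spatial predicate (which a spatial join
# index could supply) separate accident severity better than road surface?
near_junction = np.array([1, 1, 1, 0, 0, 0])
surface_wet   = np.array([1, 0, 1, 0, 1, 0])
severity      = np.array(["high", "high", "high", "low", "low", "low"])

print(gain_ratio(near_junction, severity))  # perfect split
print(gain_ratio(surface_wet, severity))    # much weaker split
```

In a spatial extension, candidate splits like `near_junction` would be materialised from the join index (or computed on the fly) before being scored exactly as above.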
Procedia PDF Downloads 611
2174 Farmers’ Perception and Response to Climate Change Across Agro-ecological Zones in Conflict-Ridden Communities in Cameroon
Authors: Lotsmart Fonjong
Abstract:
The livelihood of rural communities in the West African state of Cameroon, largely dictated by natural forces (rainfall, temperature, and soil), is today threatened by climate change and armed conflict. This paper investigates the extent to which rural communities are aware of climate change, how their perceptions of changes across different agro-ecological zones have affected farming practices, output, and lifestyles, on the one hand, and the extent to which local armed conflicts are confounding their adaptation efforts, on the other. The paper is based on a survey conducted among small farmers in selected localities within the forest and savanna ecological zones of the conflict-ridden Northwest and Southwest regions of Cameroon. Attention is paid to farmers’ gender, scale, and type of farming. Farmers’ perception of and response to climate change are analysed alongside local rainfall and temperature data and mobilization for climate justice. Findings highlight the fact that farmers’ perceptions generally corroborate local climatic data. Climatic instability has negatively affected farmers’ output, food prices, standards of living, and food security. However, the vulnerability of the population varies across ecological zones, gender, and crop types. While these factors also account for differences in local response and adaptation to climate change, ongoing armed conflicts in these regions have further limited opportunities for climate-driven agricultural innovations, inputs, and the exchange of information among farmers. This situation underlines how poor communities, as victims, are forced into many complex problems outside their making. It is therefore important to mainstream farmers’ perceptions and differences into policy strategies that treat both climate change and the Anglophone conflict as national security concerns for sustainable development in Cameroon.
Keywords: adaptation policies, climate change, conflict, small farmers, Cameroon
Procedia PDF Downloads 155
2173 A Robust System for Foot Arch Type Classification from Static Foot Pressure Distribution Data Using Linear Discriminant Analysis
Authors: R. Periyasamy, Deepak Joshi, Sneh Anand
Abstract:
Foot posture assessment is important for evaluating foot types that cause gait and postural defects in all age groups. Although different methods are used for classifying foot arch type in clinical and research examinations, there is no clear approach for selecting the most appropriate measurement system. The aim of this study was therefore to develop a system for evaluating foot type, as a clinical decision-making aid for diagnosing flat and normal arches, based on the Arch Index (AI) and a foot pressure distribution parameter, the Power Ratio (PR). The accuracy of the system was evaluated on 27 subjects aged 24 to 65 years. Foot area measurements (hindfoot, midfoot, and forefoot) were acquired simultaneously from foot pressure intensity images using the portable PedoPowerGraph system, and the images were analysed in the frequency domain to obtain the PR data. Using the linear discriminant analysis method, we obtained 100% classification accuracy for normal and flat feet. We observed no misclassification of foot types because foot pressure distribution data were incorporated rather than the arch index (AI) alone. We found that the mid-foot pressure distribution ratio and the arch index (AI) correlate well with foot arch type as determined by visual analysis. This paper therefore suggests that the proposed system can accurately and easily determine foot arch type from the arch index (AI) together with the mid-foot pressure distribution ratio, rather than the physical area of contact alone. Such a computational tool can help clinicians assess foot structure and cross-check a diagnosis of flat foot against the mid-foot pressure distribution.
Keywords: arch index, computational tool, static foot pressure intensity image, foot pressure distribution, linear discriminant analysis
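The classification step can be illustrated with a small two-class linear discriminant fitted on (AI, PR)-style feature vectors. The feature values below are invented for illustration only and are not the study's measurements; flat feet are merely assumed to show a higher arch index and mid-foot pressure ratio.

```python
import numpy as np

def fit_lda(X, y):
    """Two-class linear discriminant: w = S_pooled^-1 (mu1 - mu0),
    with the decision threshold at the midpoint of the projected means."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    S = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(S, mu1 - mu0)
    threshold = w @ (mu0 + mu1) / 2.0
    return w, threshold

def predict(X, w, threshold):
    return (X @ w > threshold).astype(int)

# Hypothetical feature vectors: [Arch Index, mid-foot Power Ratio].
rng = np.random.default_rng(0)
normal = rng.normal([0.21, 0.30], 0.02, size=(20, 2))   # class 0: normal arch
flat   = rng.normal([0.28, 0.45], 0.02, size=(20, 2))   # class 1: flat arch
X = np.vstack([normal, flat])
y = np.array([0] * 20 + [1] * 20)

w, t = fit_lda(X, y)
acc = (predict(X, w, t) == y).mean()
print(acc)
```

With clearly separated clusters like these, LDA separates the two arch types perfectly, mirroring the 100% accuracy the study reports for its own features.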
Procedia PDF Downloads 495
2172 Modified Naive Bayes-Based Prediction Modeling for Crop Yield Prediction
Authors: Kefaya Qaddoum
Abstract:
Most greenhouse growers desire a predictable amount of yield in order to accurately meet market requirements. The purpose of this paper is to model a simple but often satisfactory supervised classification method. The original naive Bayes classifier has a serious weakness: it retains redundant predictors. In this paper, a regularization technique is used to obtain a computationally efficient classifier based on naive Bayes. The suggested construction, which uses an L1 penalty, is capable of removing redundant predictors; a modification of the LARS algorithm is devised to solve this problem, making the method applicable to a wide range of data. In the experimental section, a study was conducted to examine the effect of redundant and irrelevant predictors and to test the method on a WSG dataset of tomato yields, where there are many more predictors than data points and there is an urgent need to predict weekly yield. Finally, the modified approach is compared with several naive Bayes variants and other classification algorithms (SVM and kNN) and is shown to perform fairly well.
Keywords: tomato yield prediction, naive Bayes, redundancy, WSG
Procedia PDF Downloads 230
2171 Earthquake Classification in Molluca Collision Zone Using Conventional Statistical Methods
Authors: H. J. Wattimanela, U. S. Passaribu, A. N. T. Puspito, S. W. Indratno
Abstract:
The Molluca Collision Zone is located at the junction of the Eurasian, Australian, Pacific, and Philippine plates. Between the Sangihe arc, west of the collision zone, and the Halmahera arc to the east, the collision is active and convex toward the Molluca Sea. This research analyses the behaviour of earthquake occurrence in the Molluca Collision Zone: the distribution of earthquakes in each partitioned region, the type of distribution of earthquake occurrence in each partitioned region, the mean occurrence of earthquakes in each partitioned region, and the correlation between the partitioned regions. We count earthquakes using a partition method and analyse their behaviour using conventional statistical methods. The data used are shallow earthquakes with magnitudes ≥ 4 SR for the period 1964-2013 in the Molluca Collision Zone. From the results, we can classify partitioned regions based on the correlation into two classes: strong and very strong. This classification can be used for an early warning system in disaster management.
Keywords: Molluca Collision Zone, partition regions, conventional statistical methods, earthquakes, classifications, disaster management
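The partition-and-correlate step can be sketched with NumPy. The yearly counts below are invented for illustration, and the correlation-strength thresholds are assumptions, not the paper's values.

```python
import numpy as np

# Hypothetical yearly counts of shallow (magnitude >= 4) earthquakes in three
# partitioned regions over ten years (illustrative numbers only).
region_a = np.array([28, 35, 30, 41, 25, 33, 38, 29, 36, 31])
region_b = region_a + np.array([1, -2, 0, 3, -1, 2, -2, 1, 0, -1])  # tracks A
region_c = np.array([33, 29, 40, 26, 35, 31, 28, 37, 30, 34])       # unrelated

corr = np.corrcoef(np.vstack([region_a, region_b, region_c]))

def classify(r):
    """Illustrative correlation-strength labels, in the spirit of the paper's
    strong / very strong partition classes (thresholds are assumptions)."""
    if r >= 0.8:
        return "very strong"
    if r >= 0.6:
        return "strong"
    return "weak"

print(corr[0, 1], classify(corr[0, 1]))  # A and B co-vary: "very strong"
print(corr[0, 2], classify(corr[0, 2]))
```

Pairs of partitions whose counts rise and fall together are flagged as strongly correlated, which is the kind of relationship an early warning system could exploit.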
Procedia PDF Downloads 495
2170 Sustainability Analysis and Quality Assessment of Rainwater Harvested from Green Roofs: A Review
Authors: Mst. Nilufa Sultana, Shatirah Akib, Muhammad Aqeel Ashraf, Mohamed Roseli Zainal Abidin
Abstract:
Most people today are aware that global climate change is not just a scientific theory but also a fact with worldwide consequences. Global climate change is driven by rapid urbanization, industrialization, high population growth, and the current vulnerability of climatic conditions. Water is becoming scarce as a result of global climate change. To mitigate the problems arising from global climate change and its drought effects, harvesting rainwater from green roofs, an environmentally friendly and versatile technology, is becoming one of the best assessment criteria and is gaining attention in Malaysia. This paper addresses the sustainability of green roofs and examines the quality of water harvested from green roofs in comparison to rainwater, considering the factors that affect the quality of such water, for example, roofing materials, climatic conditions, rainfall frequency, and the first flush. A green roof was installed at the Humid Tropic Centre (HTC), the site of a monitoring programme for the urban Stormwater Management Manual for Malaysia (MSMA) Eco-Hydrological Project in Kuala Lumpur, and the harvested rainwater was evaluated on the basis of four parameters: conductivity, dissolved oxygen (DO), pH, and temperature. These parameters were found to fall between Class I and Class III of the Interim National Water Quality Standards (INWQS) and the Water Quality Index (WQI). Some preliminary treatment, such as disinfection and filtration, would likely improve these parameters to Class I. This review clearly indicates that more research is needed on other microbiological and chemical quality parameters to ensure that the harvested water is suitable for use as potable water for domestic purposes. The change in all physical, chemical, and microbiological parameters with respect to storage time will be a major focus of future studies in this field.
Keywords: green roofs, INWQS, MSMA-SME, rainwater harvesting, water treatment, water quality parameter, WQI
Procedia PDF Downloads 533
2169 Disentangling Biological Noise in Cellular Images with a Focus on Explainability
Authors: Manik Sharma, Ganapathy Krishnamurthi
Abstract:
The cost of some drugs and medical treatments has risen so much in recent years that many patients are having to go without. One of the more surprising reasons behind this cost is how long it takes to bring new treatments to market: despite improvements in technology and science, research and development continue to lag, and finding a new treatment takes, on average, more than 10 years and costs hundreds of millions of dollars. A classification model could make researchers more efficient. If successful, we could dramatically improve the industry's ability to model cellular images according to their relevant biology, in turn greatly decreasing the cost of treatments and ensuring that these treatments reach patients faster. This work aims at solving part of this problem by creating a cellular image classification model that can decipher the genetic perturbations in a cell (occurring naturally or artificially). Another interesting question addressed is what makes the deep-learning model decide the way it does, which can further help in demystifying the mechanism of action of certain perturbations and paves the way towards the explainability of the deep-learning model.
Keywords: cellular images, genetic perturbations, deep-learning, explainability
Procedia PDF Downloads 110
2168 Detection and Classification of Rubber Tree Leaf Diseases Using Machine Learning
Authors: Kavyadevi N., Kaviya G., Gowsalya P., Janani M., Mohanraj S.
Abstract:
Hevea brasiliensis, also known as the rubber tree, is one of the foremost crop assets in the world. One of the most significant advantages of the rubber plant in terms of air oxygenation is its capacity to reduce the likelihood of an individual developing respiratory allergies such as asthma. To construct a system that can properly identify crop diseases and pests, and then build a database of insecticides for each pest and disease, treatment for the detected illness must first be provided. We primarily examine three major leaf diseases, as they are the most economically damaging: bird's eye spot, algal spot, and powdery mildew. The proposed work focuses on disease identification on rubber tree leaves, accomplished by employing a high-performing algorithm. The processing pipeline consists of input, preprocessing, image segmentation, feature extraction, and classification, replacing the time-consuming manual procedures currently used to detect the disease. As a consequence, the main ailments, underlying causes, and signs and symptoms of diseases that harm the rubber tree are covered in this study.
Keywords: image processing, python, convolution neural network (CNN), machine learning
Procedia PDF Downloads 76
2167 Classifications of Sleep Apnea (Obstructive, Central, Mixed) and Hypopnea Events Using Wavelet Packet Transform and Support Vector Machines (SVM)
Authors: Benghenia Hadj Abd El Kader
Abstract:
Sleep apnea events, whether obstructive, central, mixed, or hypopnea, are characterized by frequent breathing cessations or reductions in upper airflow during sleep. An advanced method for analyzing the patterning of biomedical signals to recognize obstructive sleep apnea and hypopnea is presented. With the aim of extracting characteristic parameters for classifying the above-stated (obstructive, central, mixed) sleep apnea and hypopnea events, the proposed method is based first on the analysis of polysomnography signals such as the electrocardiogram (ECG) and electromyogram (EMG), and then on classification of the (obstructive, central, mixed) sleep apnea and hypopnea events. The analysis is carried out using the wavelet transform technique to extract characteristic parameters, whereas classification is carried out by applying the SVM (support vector machine) technique. The obtained results show good recognition rates using these characteristic parameters.
Keywords: obstructive, central, mixed, sleep apnea, hypopnea, ECG, EMG, wavelet transform, SVM classifier
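The feature-extraction step can be illustrated with a deliberately simplified stand-in: a plain single-wavelet Haar decomposition (the paper uses the full wavelet packet transform) whose sub-band energies serve as characteristic parameters for a downstream classifier. The "airflow-like" signal below is synthetic.

```python
import numpy as np

def haar_step(x):
    """One level of the Haar wavelet transform: approximation and detail bands."""
    x = x[: len(x) // 2 * 2]                    # ensure even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def wavelet_energy_features(signal, levels=3):
    """Energy of each detail band plus the final approximation band:
    a simple characteristic-parameter vector for a classifier such as an SVM."""
    feats = []
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, d = haar_step(a)
        feats.append(np.sum(d ** 2))
    feats.append(np.sum(a ** 2))
    return np.array(feats)

# Synthetic signal: normal oscillation, then a flat cessation (apnea-like).
t = np.linspace(0, 1, 256)
normal = np.sin(2 * np.pi * 8 * t)
apnea = np.concatenate([normal[:128], np.zeros(128)])

print(wavelet_energy_features(normal))
print(wavelet_energy_features(apnea))   # lower total energy across bands
```

Because the Haar transform is orthonormal, the feature vector's total energy equals the signal's, so a cessation shows up directly as depleted sub-band energies, exactly the kind of parameter an SVM can separate.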
Procedia PDF Downloads 370
2166 Discrimination and Classification of Vestibular Neuritis Using Combined Fisher and Support Vector Machine Model
Authors: Amine Ben Slama, Aymen Mouelhi, Sondes Manoubi, Chiraz Mbarek, Hedi Trabelsi, Mounir Sayadi, Farhat Fnaiech
Abstract:
Vertigo is a sensation of feeling off balance; the cause of this symptom is very difficult to interpret and needs a complementary examination. Generally, vertigo is caused by an ear problem; some of the most common causes include benign paroxysmal positional vertigo (BPPV), Meniere's disease, and vestibular neuritis (VN). In clinical practice, different tests of the videonystagmographic (VNG) technique are used to detect the presence of vestibular neuritis. The topographical diagnosis of this disease presents a large diversity of characteristics, posing a mixture of problems for the usual etiological analysis methods. In this study, a vestibular neuritis analysis method is proposed for videonystagmography (VNG) applications, using an estimation of pupil movements in the case of uncontrolled motion to obtain efficient and reliable diagnostic results. First, the pupil displacement vectors are estimated using the Hough Transform (HT) to approximate the location of the pupil region. Then, temporal and frequency features are computed from the rotation angle variation of the pupil motion. Finally, the optimal features are selected using the Fisher criterion for discrimination and classification of the VN disease. Experimental results are analyzed using two categories: normal and pathologic. By classifying the reduced feature set using a Support Vector Machine (SVM), a classification accuracy of 94% is achieved. Compared to recent studies, the proposed expert system is extremely helpful and highly effective in resolving the problem of VNG analysis and providing an accurate diagnosis for medical devices.
Keywords: nystagmus, vestibular neuritis, videonystagmographic system, VNG, Fisher criterion, support vector machine, SVM
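The Fisher-criterion selection step can be sketched as a per-feature score for two classes. The "VNG-derived" features below are synthetic stand-ins; only the scoring formula is the named technique.

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher criterion per feature for two classes:
    J = (mu1 - mu2)^2 / (var1 + var2). Higher = more discriminative."""
    X1, X2 = X[y == 0], X[y == 1]
    num = (X1.mean(axis=0) - X2.mean(axis=0)) ** 2
    den = X1.var(axis=0) + X2.var(axis=0)
    return num / den

rng = np.random.default_rng(42)
n = 50
# Synthetic feature matrix: columns 0-1 are discriminative (e.g. rotation-angle
# statistics differing between normal and pathologic), columns 2-3 pure noise.
normal = rng.normal(0.0, 1.0, (n, 4))          # class 0
patho = rng.normal(0.0, 1.0, (n, 4))           # class 1
patho[:, 0] += 3.0
patho[:, 1] -= 3.0
X = np.vstack([normal, patho])
y = np.array([0] * n + [1] * n)

scores = fisher_scores(X, y)
top2 = sorted(int(i) for i in np.argsort(scores)[::-1][:2])
print(scores.round(2))
print(top2)   # the two discriminative features are selected
```

The reduced feature set (here `top2`) would then be passed to the SVM classifier, as in the paper's pipeline.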
Procedia PDF Downloads 134
2165 Machine Learning Techniques in Bank Credit Analysis
Authors: Fernanda M. Assef, Maria Teresinha A. Steiner
Abstract:
The aim of this paper is to compare and discuss classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study draws on a database of 5,432 companies that are clients of the bank: 2,600 clients are classified as non-defaulters, 1,551 as defaulters, and 1,281 as temporary defaulters, meaning that these clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered in a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR), and Support Vector Machines (SVM). Initially, the data were coded in thermometer code (numerical attributes) or dummy coding (nominal attributes). The methods were then evaluated over different parameter settings, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives, and true negatives. This comparison showed that the best method, in terms of accuracy, was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters, and 75.37% for temporary defaulters). However, the best accuracy does not always indicate the best technique: on the classification of temporary defaulters, this technique was surpassed, in terms of false positives, by SVM, which had the lowest false-positive rate (0.07%). All these details are discussed in light of the results found, and an overview of the findings is given in the conclusion of this study.
Keywords: artificial neural networks (ANNs), classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines
Procedia PDF Downloads 103
2164 Global Solar Irradiance: Data Imputation to Analyze Complementarity Studies of Energy in Colombia
Authors: Jeisson A. Estrella, Laura C. Herrera, Cristian A. Arenas
Abstract:
The Colombian electricity sector has been transformed through the insertion of new energy sources for electricity generation, one of them being solar energy, which is promoted by companies interested in photovoltaic technology. The study of this technology is important for electricity generation in general and for the planning of the sector from the perspective of energy complementarity. It is precisely within this last approach that the project is located: we are interested in the reliability of the electrical system when climatic phenomena such as El Niño occur, and in defining whether it is viable to replace or expand thermoelectric plants with renewable electricity generation systems. In this regard, some difficulties related to the basic information on renewable energy sources from measured data must first be solved, as these data come from automatic weather stations administered by the Institute of Hydrology, Meteorology and Environmental Studies (IDEAM) and, over the study period (2005-2019), contain significant amounts of missing data. For this reason, the overall objective of the project is to complete the global solar irradiance datasets in order to obtain time series that will allow energy complementarity analyses to be developed in a subsequent project.
The filling of the databases is done through numerical and statistical methods, which are basic techniques for undergraduate students in technical areas who are starting out as researchers.
Keywords: time series, global solar irradiance, imputed data, energy complementarity
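One of the basic numerical techniques suited to this kind of gap-filling is linear interpolation between the nearest observed values. A minimal sketch on a hypothetical daily irradiance series (the values below are invented, not IDEAM data):

```python
import numpy as np

def impute_linear(series):
    """Fill NaN gaps in a 1-D series by linear interpolation between the
    nearest observed values (leading/trailing gaps are held at the edge value)."""
    x = np.asarray(series, dtype=float)
    idx = np.arange(len(x))
    mask = ~np.isnan(x)
    filled = x.copy()
    filled[~mask] = np.interp(idx[~mask], idx[mask], x[mask])
    return filled

# Hypothetical daily global solar irradiance (kWh/m^2) with missing records.
obs = np.array([5.1, 4.8, np.nan, np.nan, 4.2, 4.5, np.nan, 5.0])
filled = impute_linear(obs)
print(filled)  # gaps filled with 4.6, 4.4 and 4.75
```

Longer gaps, or gaps correlated with weather regimes, would call for the statistical methods the abstract mentions (e.g. regression against neighbouring stations), but interpolation is the natural first baseline for short outages.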
Procedia PDF Downloads 70
2163 Machine Learning Approach for Yield Prediction in Semiconductor Production
Authors: Heramb Somthankar, Anujoy Chakraborty
Abstract:
This paper presents a classification study on yield prediction in semiconductor production using machine learning approaches. A complicated semiconductor production process is generally monitored continuously by signals acquired from sensors and measurement sites. A monitoring system contains a variety of signals, all of which contain useful information, irrelevant information, and noise. With each signal considered a feature, feature selection is used to find the most relevant signals. The open-source UCI SECOM dataset provides 1,567 such samples, of which 104 fail quality assurance. Feature extraction and selection were performed on the dataset, and the useful signals were retained for further study. Afterward, common machine learning algorithms were employed to predict whether a sample passes or fails. The most suitable algorithm is selected for prediction based on the accuracy and loss of the ML model.
Keywords: deep learning, feature extraction, feature selection, machine learning classification algorithms, semiconductor production monitoring, signal processing, time-series analysis
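Before any classifier can be trained, SECOM-style sensor matrices need cleaning: many columns are mostly missing or constant. A minimal sketch of that preprocessing (the thresholds and the tiny matrix are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def clean_features(X, max_missing=0.5):
    """SECOM-style preprocessing sketch: drop features that are mostly
    missing, mean-impute the remaining gaps, then drop constant
    (uninformative) signals."""
    missing = np.isnan(X).mean(axis=0)
    keep = missing <= max_missing
    X = X[:, keep]
    col_mean = np.nanmean(X, axis=0)
    X = np.where(np.isnan(X), col_mean, X)
    variable = X.std(axis=0) > 0
    return X[:, variable], keep, variable

nan = np.nan
X = np.array([[1.0, 7.0, nan, 3.0],
              [2.0, 7.0, nan, nan],
              [3.0, 7.0, nan, 5.0],
              [4.0, 7.0, 0.5, 4.0]])
Xc, keep, variable = clean_features(X)
print(Xc.shape)  # (4, 2): the 75%-missing column and the constant column are gone
```

The surviving, imputed columns are what feature-selection algorithms and the downstream pass/fail classifier would then operate on.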
Procedia PDF Downloads 107
2162 Pattern Recognition Based on Simulation of Chemical Senses (SCS)
Authors: Nermeen El Kashef, Yasser Fouad, Khaled Mahar
Abstract:
No AI-complete system can model the human brain or behavior without looking at the totality of the whole situation and incorporating a combination of senses. This paper proposes a pattern recognition model based on Simulation of Chemical Senses (SCS) for the separation and classification of sign language. The model is based on the human taste control strategy. The main idea of the introduced model is motivated by the facts that the tongue first clusters an input substance into its basic tastes, and the brain then recognizes its flavor. To implement this strategy, a two-level architecture is proposed, inspired by the taste system: the separation level of the architecture focuses on hand posture clustering, while the classification level recognizes the sign language. The efficiency of the proposed model is demonstrated experimentally by recognizing the American Sign Language (ASL) dataset. The recognition accuracy obtained for the numbers of ASL is 92.9 percent.
Keywords: artificial intelligence, biocybernetics, gustatory system, sign language recognition, taste sense
Procedia PDF Downloads 291
2161 Unearthing Air Traffic Control Officers Decision Instructional Patterns From Simulator Data for Application in Human Machine Teams
Authors: Zainuddin Zakaria, Sun Woh Lye
Abstract:
Despite continuous advancements in automated conflict resolution tools, the rate of adoption of automation by Air Traffic Control Officers (ATCOs) remains low. Trust in, or acceptance of, these tools and conformance to individual ATCO preferences in strategy execution for conflict resolution are two key factors that impact their use. This paper proposes a methodology to unearth and classify ATCO conflict resolution strategies from simulator data of trained and qualified ATCOs. The methodology involves the extraction of ATCO executive control actions and the establishment of a strategy resolution classification system based on ATCO radar commands and prevailing flight parameters when deconflicting a pair of aircraft. Six main strategies used to handle various categories of conflict were identified and discussed. It was found that ATCOs were about twice as likely to choose only vertical maneuvers in conflict resolution as horizontal maneuvers or a combination of both vertical and horizontal maneuvers.
Keywords: air traffic control strategies, conflict resolution, simulator data, strategy classification system
Procedia PDF Downloads 147
2160 Analysis of Sediment Distribution around Karang Sela Coral Reef Using Multibeam Backscatter
Authors: Razak Zakariya, Fazliana Mustajap, Lenny Sharinee Sakai
Abstract:
A sediment map is quite important in the marine environment; the sediment itself contains a wealth of information that can be used in other research. This study was conducted using a Reson T20 multibeam echo sounder on 15 August 2020 at Karang Sela (a coral reef area) at Pulau Bidong. The study aims to identify the sediment types around the coral reef using bathymetry and backscatter data. Sediment in the study area was collected as ground-truthing data to verify the seabed classification. A dry sieving method was used to analyze the sediment samples with a sieve shaker. PDS 2000 software was used for data acquisition, and Qimera QPS version 2.4.5 was used for processing the bathymetry data, while FMGT QPS version 7.10 processed the backscatter data. The backscatter data were then analyzed using the maximum likelihood classification tool in ArcGIS version 10.8. The results identified three types of sediment around the coral: very coarse sand, coarse sand, and medium sand.
Keywords: sediment type, MBES echo sounder, backscatter, ArcGIS
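The principle behind maximum likelihood classification of backscatter can be sketched in a few lines: fit a Gaussian per sediment class from ground-truth samples, then assign each pixel to the class with the highest likelihood. The backscatter intensities below are invented for illustration and are not the study's measurements (nor the ArcGIS implementation).

```python
import numpy as np

def fit_gaussians(samples):
    """Per-class mean and variance from ground-truth backscatter samples (dB)."""
    return {name: (np.mean(v), np.var(v)) for name, v in samples.items()}

def ml_classify(value, params):
    """Assign a value to the class with the highest Gaussian log-likelihood."""
    def loglik(x, mu, var):
        return -0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)
    return max(params, key=lambda c: loglik(value, *params[c]))

# Hypothetical backscatter intensities (dB) from grab-sample ground truthing.
truth = {
    "very coarse sand": [-14.0, -13.5, -14.5, -13.8],
    "coarse sand":      [-19.0, -18.5, -19.5, -18.8],
    "medium sand":      [-24.0, -23.5, -24.5, -23.8],
}
params = fit_gaussians(truth)
print(ml_classify(-19.1, params))  # coarse sand
print(ml_classify(-23.9, params))  # medium sand
```

Applied pixel-by-pixel over the mosaicked backscatter raster, this is the same decision rule ArcGIS's maximum likelihood tool applies (there in multiple bands with full covariance matrices).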
Procedia PDF Downloads 84
2159 Analogy in Microclimatic Parameters, Chemometric and Phytonutrient Profiles of Cultivated and Wild Ecotypes of Origanum vulgare L., across Kashmir Himalaya
Authors: Sumira Jan, Javid Iqbal Mir, Desh Beer Singh, Anil Sharma, Shafia Zaffar Faktoo
Abstract:
Background and Aims: Climatic and edaphic factors immensely influence crop quality and development. Regardless of its economic potential, the Himalayan oregano has not been subjected to phytonutrient and chemometric evaluation, and studies of its relationship with environmental conditions are scarce. The central objective of this research was to investigate microclimatic variation among wild and cultivated populations located along a microclimatic gradient in the north-western Himalaya, Kashmir, and to analyse whether such disparity is related to diverse climatic and edaphic conditions. Methods: Micrometeorological analysis and atomic absorption spectroscopy for micro-elemental analysis were carried out for the soil. HPLC was carried out to estimate variation in phytonutrients and phytochemicals. Results: Geographic variation in phytonutrients was observed between cultivated and wild populations and among populations within regions. Cultivated populations exhibited comparatively lower phytonutrient values than wild populations. Moreover, we observed greater vegetative growth of O. vulgare L. where higher pH (6-7), elevated organic carbon (2.42%), high nitrogen (97.41 kg/ha), and higher manganese (10-12 ppm) and zinc contents (0.39-0.50) produced higher phytonutrient levels. HPLC data for phytonutrients such as quercetin, beta-carotene, ascorbic acid, arbutin, and catechin revealed a direct relationship with UV-B flux (r2 = 0.82) and potassium (r2 = 0.97), displaying a parallel relationship with phytonutrient value. Conclusions: Catechin was found to be the predominant phytonutrient in all populations, with a maximum accumulation of 163.8 ppm, whereas quercetin exhibited lower values. Maximum arbutin (53.42 ppm) and quercetin (2.87 ppm) accumulated in plants thriving under intense UV-B flux. Minimal variation was demonstrated by beta-carotene and ascorbic acid.
Keywords: phytonutrient, ascorbic acid, beta carotene, quercetin, catechin
Procedia PDF Downloads 267
2158 Classification of Political Affiliations by Reduced Number of Features
Authors: Vesile Evrim, Aliyu Awwal
Abstract:
With the evolution of technology, the expression of opinions has shifted to the digital world. The domain of politics, one of the hottest topics of opinion mining research, merges here with behavior analysis for affiliation determination in text, which constitutes the subject of this paper. This study aims to classify news and blog texts as either Republican or Democrat with the minimum number of features. As an initial set, 68 features, 64 of which are Linguistic Inquiry and Word Count (LIWC) features, are tested against 14 benchmark classification algorithms. In later experiments, the dimensionality of the feature vector is reduced using 7 feature selection algorithms. The results show that the Decision Tree, Rule Induction, and M5 Rule classifiers, when used with the SVM and IGR feature selection algorithms, performed best, with up to 82.5% accuracy on the given dataset. Further tests on a single feature and on the linguistic-based feature sets showed similar results. The feature “function”, an aggregate feature of the linguistic category, emerged as the most discriminating of the 68 features, achieving 81% accuracy by itself in classifying articles as either Republican or Democrat.
Keywords: feature selection, LIWC, machine learning, politics
Procedia PDF Downloads 381
2157 Improving Fake News Detection Using K-means and Support Vector Machine Approaches
Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy
Abstract:
Fake news and false information are major challenges for all types of media, especially social media. There is a great deal of false information, fake likes, views, and duplicated accounts, as big social networks such as Facebook and Twitter have admitted. Much of the information appearing on social media is doubtful and in some cases misleading. It needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to detect false information with less computation time and complexity, the dimensionality needs to be reduced. One of the best techniques for reducing data size is feature selection, which aims to choose a feature subset from the original set to improve classification performance. In this paper, a feature selection method is proposed that integrates K-means clustering and Support Vector Machine (SVM) approaches and works in four steps. First, the similarities between all features are calculated. Then, the features are divided into several clusters. Next, the final feature set is selected from all clusters, and finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several benchmark datasets, and the outcome showed better classification of false information. The detection performance was improved in two respects: on the one hand, the detection runtime decreased, and on the other hand, the classification accuracy increased because of the elimination of redundant features and the reduction of the datasets' dimensionality.
Keywords: clustering, fake news detection, feature selection, machine learning, social media, support vector machine
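The first three of the four steps described above can be sketched with a small NumPy k-means over the feature vectors. This is a simplified stand-in: similarity is taken as Euclidean distance between feature columns, the number of clusters is fixed at two, and the final SVM classification step is omitted; the data are synthetic.

```python
import numpy as np

def kmeans2(F, iters=50):
    """Two-cluster k-means (Lloyd's algorithm) on the rows of F,
    deterministically initialised with the two mutually farthest rows."""
    d2 = ((F[:, None] - F[None]) ** 2).sum(-1)
    i, j = np.unravel_index(np.argmax(d2), d2.shape)
    centers = F[[i, j]].astype(float)
    for _ in range(iters):
        labels = np.argmin(((F[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(2):
            if np.any(labels == c):
                centers[c] = F[labels == c].mean(axis=0)
    return labels, centers

def select_features(X):
    """Steps 1-3 of the pipeline: measure feature similarity, cluster the
    features, keep the feature closest to each cluster centroid."""
    F = X.T                                   # one row per feature
    labels, centers = kmeans2(F)
    selected = []
    for c in range(2):
        members = np.where(labels == c)[0]
        d = ((F[members] - centers[c]) ** 2).sum(axis=1)
        selected.append(int(members[np.argmin(d)]))
    return sorted(selected)

rng = np.random.default_rng(3)
n = 100
base1, base2 = rng.normal(size=n), rng.normal(size=n)
# Six features in two redundant groups: columns 0-2 echo base1, 3-5 echo base2.
X = np.column_stack([base1 + 0.01 * rng.normal(size=n) for _ in range(3)] +
                    [base2 + 0.01 * rng.normal(size=n) for _ in range(3)])
picked = select_features(X)
print(picked)  # one representative from {0,1,2} and one from {3,4,5}
```

Redundant features collapse into the same cluster, so keeping one representative per cluster shrinks the feature set (and hence SVM training time) while preserving the information the classifier needs.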
Procedia PDF Downloads 176
2156 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation
Authors: Jonathan Gong
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, however, X-rays have not been widely used to detect and diagnose COVID-19; this underuse is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. Research in this field nevertheless suggests that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database, which includes images and masks of chest X-rays under the labels COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning, then uses a deep neural network to finalize feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from those used for training. The images used to train the classification model were cropped beforehand to eliminate distractions during training. The image segmentation model uses an improved U-Net architecture to extract the lung mask from the chest X-ray image; it was trained on 8577 images with a validation split of 20%. Both models are evaluated on an external dataset, and their accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
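The IoU (intersection-over-union) reported for the segmentation model is a standard overlap metric between a predicted mask and a ground-truth mask. A minimal generic sketch, not the authors' code; the toy masks below are invented for illustration:

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-union between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    inter = np.logical_and(pred, target).sum()
    return inter / union

# toy lung-mask example: two overlapping squares on a 10x10 grid
pred = np.zeros((10, 10), dtype=int)
true = np.zeros((10, 10), dtype=int)
pred[2:8, 2:8] = 1    # 36 pixels
true[4:10, 4:10] = 1  # 36 pixels; the overlap is 4x4 = 16 pixels
print(iou(pred, true))  # 16 / (36 + 36 - 16) = 0.2857...
```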
Procedia PDF Downloads 129
2155 Classification of Health Risk Factors to Predict the Risk of Falling in Older Adults
Authors: L. Lindsay, S. A. Coleman, D. Kerr, B. J. Taylor, A. Moorhead
Abstract:
Cognitive decline and frailty are apparent in older adults, leading to an increased risk of falling. Currently, health care professionals have to make difficult professional judgements regarding such risks, and hence regarding the future welfare of the ageing population. This study uses health data from The Irish Longitudinal Study on Ageing (TILDA), focusing on adults over the age of 50 years, to analyse health risk factors and predict the likelihood of falls. The prediction is based on machine learning algorithms whereby health risk factors are used as inputs to predict the likelihood of falling. Initial results show that health risk factors such as long-term health issues contribute to the number of falls. The identification of such health risk factors has the potential to inform health and social care professionals, older people, and their family members in order to mitigate daily living risks.
Keywords: classification, falls, health risk factors, machine learning, older adults
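As a hedged illustration of the "risk factors in, fall likelihood out" setup, the sketch below fits a logistic regression by gradient descent on synthetic data. The TILDA data are not reproduced here, so the features, coefficients, and labels are all fabricated assumptions; the study's actual algorithms and variables may differ.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Logistic regression trained by batch gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def predict(X, w, b):
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) >= 0.5).astype(int)

# invented risk factors: [number of long-term conditions, frailty score, age/100]
rng = np.random.default_rng(0)
n = 400
X = np.column_stack([rng.poisson(1.5, n), rng.normal(0, 1, n), rng.uniform(0.5, 0.9, n)])
# invented ground truth: more conditions and higher frailty -> more likely to fall
logit = 1.2 * X[:, 0] + 1.5 * X[:, 1] - 2.0
y = (logit + rng.normal(0, 0.5, n) > 0).astype(int)

w, b = fit_logistic(X, y)
acc = (predict(X, w, b) == y).mean()
print(acc)  # well above chance on this near-separable synthetic data
```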
Procedia PDF Downloads 146
2154 Weed Classification Using a Two-Dimensional Deep Convolutional Neural Network
Authors: Muhammad Ali Sarwar, Muhammad Farooq, Nayab Hassan, Hammad Hassan
Abstract:
Pakistan is highly recognized for its agriculture and is well known for producing substantial amounts of wheat, cotton, and sugarcane. However, some factors contribute to a decline in crop quality and a reduction in overall output. One of the main factors is the presence of weeds and their late detection. Detection is currently a manual process that demands a detailed inspection by the farmer. With timely detection of weeds, farmers can save costs and increase overall production. The focus of this research is to identify and classify the four main types of weeds (Small-Flowered Cranesbill, Chickweed, Prickly Acacia, and Black-Grass) that are prevalent in our region's major crops. In this work, we implemented three different deep learning techniques, YOLO-v5, Inception-v3, and a deep CNN, on the same dataset and concluded that the deep convolutional neural network performed best, with a classification accuracy of 97.45%. Relative to the state of the art, our proposed approach yields 2% better results. We devised the architecture efficiently so that it can be used in real time.
Keywords: deep convolution networks, Yolo, machine learning, agriculture
Procedia PDF Downloads 116
2153 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features
Authors: Bushra Zafar, Usman Qamar
Abstract:
Large sample sizes and high dimensionality reduce the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for extracting knowledge from a variety of databases; they provide supervised learning in the form of classification to design models that describe vital data classes, where the structure of the classifier is based on the class attribute. Classification efficiency and accuracy are often greatly influenced by noisy and undesirable features in real-world data sets, whose inherent nature masks quality analysis and leaves few practical approaches to use. To our knowledge, we present for the first time an approach for investigating the structure and quality of datasets through a targeted analysis of the localization of noisy and irrelevant features. Machine learning relies on feature selection as a pre-processing step, which selects a subset from the full set of features, reducing the space according to a certain evaluation criterion. The primary objective of this study is to trim down the scope of the given data sample by searching for a small set of important features which may result in good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is used, with an external classifier for discriminative feature selection. Features are selected based on their number of occurrences in the chosen chromosomes. Sample datasets have been used to demonstrate the proposed idea. The proposed method improved the average accuracy on different datasets to about 95%, and experimental results illustrate that it increases the accuracy of prediction of different diseases.
Keywords: data mining, genetic algorithm, KNN algorithms, wrapper-based feature selection
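The wrapper-based selection described above can be sketched as a small genetic algorithm whose fitness function is the leave-one-out accuracy of a 1-NN classifier on the candidate feature subset. This is a minimal illustration, not the paper's implementation: the 1-NN wrapper, the GA operators, and the toy data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_accuracy(X, y, mask):
    """Wrapper fitness: leave-one-out 1-NN accuracy on the selected features."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    d = np.linalg.norm(Xs[:, None, :] - Xs[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)  # exclude each sample from its own neighbours
    return (y[d.argmin(axis=1)] == y).mean()

def ga_select(X, y, pop=20, gens=15, p_mut=0.1):
    """Tiny genetic algorithm over binary feature masks (chromosomes)."""
    n_feat = X.shape[1]
    population = rng.random((pop, n_feat)) < 0.5
    for _ in range(gens):
        fitness = np.array([knn_accuracy(X, y, ind) for ind in population])
        parents = population[fitness.argsort()[::-1][: pop // 2]]  # elitism
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)          # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_feat) < p_mut    # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, children])
    fitness = np.array([knn_accuracy(X, y, ind) for ind in population])
    return population[fitness.argmax()]

# toy dataset: features 0 and 1 carry the class signal, 2-5 are noise
n = 120
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 6))
X[:, 0] += 2.0 * y
X[:, 1] -= 2.0 * y
best = ga_select(X, y)
print(best.nonzero()[0])  # indices of the selected features
```

Counting how often each feature appears in the fittest chromosomes gives the occurrence-based selection criterion the abstract mentions.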
Procedia PDF Downloads 315
2152 Exploring the Influence of Climate Change on Food Behavior in Medieval France: A Multi-Method Analysis of Human-Animal Interactions
Authors: Unsain Dianne, Roussel Audrey, Goude Gwenaëlle, Magniez Pierre, Storå Jan
Abstract:
This paper aims to investigate the changes in husbandry practices and meat consumption during the transition from the Medieval Climate Anomaly to the Little Ice Age in the South of France. More precisely, we will investigate breeding strategies, animal size and health status, carcass exploitation strategies, and the impact of socioeconomic status on human-environment interactions. For that purpose, we will analyze faunal remains from ten sites equally distributed between the two periods. These include consumers from different socio-economic backgrounds (peasants, city dwellers, soldiers, lords, and the Popes). The research will employ different methods used in zooarchaeology (comparative anatomy, biometry, pathology analysis, traceology, and utility indices), as well as experimental archaeology, to reconstruct and understand the changes in animal breeding and consumption practices. Their analysis will allow the determination of modifications in the animal production chain: the composition of the flocks (species, size), their management (age, sex, health status), culinary practices (strategies for the exploitation of carcasses, cooking, tastes), and the importance of trade (butchers, sales of processed animal products). The focus will also be on the social background of consumers. The aim will be to determine whether climate change had a greater impact on the most modest groups (such as peasants), whether the consequences were global and also affected the highest levels of society, or whether social and economic factors were sufficient to balance out the climatic hazards, leading to no significant changes. This study will contribute to our understanding of the impact of climate change on breeding and consumption strategies in medieval society from a historical and social point of view. It combines various research methods to provide a comprehensive analysis of the changes in human-animal interactions during different climatic periods.
Keywords: archaeology, animal economy, cooking, husbandry practices, climate change, France
Procedia PDF Downloads 57
2151 From Restraint to Obligation: The Protection of the Environment in Times of Armed Conflict
Authors: Aaron Walayat
Abstract:
Protection of the environment is one of the most developed areas of international law in the context of international humanitarian law. This paper examines the history of the protection of the environment in times of armed conflict, beginning with the traditional notion of restraint observed in antiquity and moving towards the obligation to protect the environment, examining the treaties and agreements, both binding and non-binding, which have contributed to environmental protection in war. The paper begins with a discussion of the ancient concept of restraint. This section examines the social norms in favor of protection of the environment as observed in the Bible, Greco-Roman mythology, and more contemporary literature. The study of the traditional rejection of total war establishes the social foundation from which the current legal regime has stemmed. The paper then studies the principle of restraint as codified in international humanitarian law. It mainly examines Additional Protocol I to the Geneva Conventions of 1949 and existing international law concerning civilian objects, as well as the principles of international humanitarian law governing the classification between civilian objects and military objectives. The paper then explores the environment's classification as both a military objective and a civilian object, and considers arguments in favor of classifying the whole environment as a civilian object. The paper then discusses the current legal regime surrounding the protection of the environment, including the 1868 Declaration of St. Petersburg, the 1907 Hague Convention No. IV, the Geneva Conventions, and the 1976 Environmental Modification Convention. The paper concludes by noting the movement from the codification of the principles of restraint into the various treaties, agreements, and declarations of the current regime of international humanitarian law, providing an analysis of the history and significance of international humanitarian law as a major contributor to the growing field of international environmental law.
Keywords: armed conflict, environment, legal regime, restraint
Procedia PDF Downloads 203
2150 Comparison of Support Vector Machines and Artificial Neural Network Classifiers in Characterizing Threatened Tree Species Using Eight Bands of WorldView-2 Imagery in Dukuduku Landscape, South Africa
Authors: Galal Omer, Onisimo Mutanga, Elfatih M. Abdel-Rahman, Elhadi Adam
Abstract:
Threatened tree species (TTS) play a significant role in ecosystem functioning and services, land use dynamics, and other socio-economic aspects, including ecological, economic, livelihood, security-based, and well-being benefits. The development of techniques for mapping and monitoring TTS is thus critical for understanding the functioning of ecosystems. The advent of advanced imaging systems and supervised learning algorithms has provided an opportunity to classify TTS over a fragmenting landscape. Recently, vegetation maps have been produced using advanced imaging systems such as WorldView-2 (WV-2) and robust classification algorithms such as support vector machines (SVM) and artificial neural networks (ANN). However, delineation of TTS in a fragmenting landscape using high-resolution imagery has remained elusive due to the complexity of the species structure and distribution. Therefore, the objective of the current study was to examine the utility of advanced WV-2 data for mapping TTS in the fragmenting Dukuduku indigenous forest of South Africa using SVM and ANN classification algorithms. The results showed the robustness of the two machine learning algorithms, with an overall accuracy (OA) of 77.00% (total disagreement = 23.00%) for SVM and 75.00% (total disagreement = 25.00%) for ANN using all eight bands of WV-2 (8B). This study concludes that the SVM and ANN classification algorithms with WV-2 8B have the potential to classify TTS in the Dukuduku indigenous forest, and it offers relatively accurate information that is important for forest managers making informed decisions regarding management and conservation protocols for TTS.
Keywords: artificial neural network, threatened tree species, indigenous forest, support vector machines
Procedia PDF Downloads 514
2149 The Wear Recognition on Guide Surface Based on the Feature of Radar Graph
Authors: Youhang Zhou, Weimin Zeng, Qi Xie
Abstract:
In order to solve the wear recognition problem of the machine tool guide surface, a new recognition method based on the radar-graph barycentre feature is presented in this paper. Firstly, the gray mean value, skewness, projection variance, flatness, and kurtosis of the guide surface image data are defined as primary features. Secondly, data visualization technology based on radar graphs is used: the visual barycentre graphical feature is derived from the radar plot of the multi-dimensional data. Thirdly, a classifier based on support vector machine technology is used; the radar-graph barycentre feature and the original wear features are put into the classifier separately, and the classification and experimental results are compared. The calculation and experimental results show that the method based on the radar-graph barycentre feature can detect guide surface wear effectively.
Keywords: guide surface, wear defects, feature extraction, data visualization
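The radar-graph barycentre idea can be illustrated as follows: each primary feature value is placed on a radar-plot axis, and the barycentre of the resulting polygon's vertices becomes a compact 2-D descriptor that can be fed to a classifier. A minimal sketch, assuming evenly spaced axes and a vertex-mean barycentre; the paper's exact construction may differ.

```python
import numpy as np

def radar_barycentre(values):
    """Place each feature value on a radar-plot axis at angle 2*pi*i/n and
    return the (x, y) barycentre of the resulting polygon vertices."""
    v = np.asarray(values, dtype=float)
    angles = 2 * np.pi * np.arange(len(v)) / len(v)
    pts = np.column_stack([v * np.cos(angles), v * np.sin(angles)])
    return pts.mean(axis=0)

# equal values on every axis give a symmetric polygon: barycentre at the origin
even = radar_barycentre([0.5, 0.5, 0.5, 0.5])
# five illustrative image features, e.g. [gray mean, skewness,
# projection variance, flatness, kurtosis] (invented values)
worn = radar_barycentre([0.9, 0.2, 0.7, 0.4, 0.8])
print(even)
print(worn)
```

Because the barycentre shifts whenever the feature profile becomes lopsided, it summarizes the whole multi-dimensional profile in two numbers.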
Procedia PDF Downloads 517
2148 Combined Analysis of Land use Change and Natural Flow Path in Flood Analysis
Authors: Nowbuth Manta Devi, Rasmally Mohammed Hussein
Abstract:
Flooding is one of the most devastating climate impacts that many countries are facing. Many different causes have been associated with the intensity of the floods being recorded over time: unplanned development, the low carrying capacity of drains, clogged drains, construction in flood plains, and the increasing intensity of rainfall events. While a combination of these causes can certainly aggravate flood conditions, in many cases increasing drainage capacity has not reduced flood risk to the level that was expected. The present study analyzed the extent to which land use contributes to aggravating the impacts of flooding in a city. Satellite images were analyzed over a period of 20 years at intervals of 5 years. Both unsupervised and supervised classification methods were used with the image processing module of ArcGIS. The unsupervised classification was first compared to the basemap available in ArcGIS to get a first overview of the results; these results also guided on-site data collection for the supervised classification. The island of Mauritius is small, and there are large variations in land use over small areas, both within built-up areas and in agricultural zones involving food crops. Larger plots of agricultural land under sugar cane plantation are relatively easier to identify. However, the growth stage and health of the plants vary, and this had to be verified during ground truthing. The results show that although there have been changes in land use over the span of 20 years, as expected, these were not significant enough to cause a major increase in flood risk levels. A digital elevation model was analyzed for further understanding. It could be noted that, over time, development has tampered with natural flow paths in addition to increasing the impermeable areas. This situation results in backwater flows, hence increasing flood risks.
Keywords: climate change, flood, natural flow paths, small islands
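Natural flow paths on a digital elevation model are commonly derived with the classic D8 steepest-descent rule, which assigns each cell a flow direction toward its lowest of eight neighbours. The sketch below is a generic illustration of that rule, not the study's actual workflow (which used ArcGIS); the toy DEM is invented.

```python
import numpy as np

def d8_flow_direction(dem):
    """For each interior cell of a DEM, return the (dr, dc) step toward the
    steepest-descent neighbour among the 8 adjacent cells (D8 method)."""
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    rows, cols = dem.shape
    flow = np.zeros((rows, cols, 2), dtype=int)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # drop per unit distance (diagonal neighbours are sqrt(2) away)
            drops = [(dem[r, c] - dem[r + dr, c + dc]) / np.hypot(dr, dc)
                     for dr, dc in nbrs]
            best = int(np.argmax(drops))
            if drops[best] > 0:  # only assign a direction if there is a descent
                flow[r, c] = nbrs[best]
    return flow

# a simple DEM sloping down toward the east (higher values on the left)
dem = np.array([[5, 4, 3, 2],
                [5, 4, 3, 2],
                [5, 4, 3, 2],
                [5, 4, 3, 2]], dtype=float)
flow = d8_flow_direction(dem)
print(flow[1, 1])  # interior cells drain east: step (0, 1)
```

Comparing such flow directions before and after development layers are added to the DEM is one way to see where construction has interrupted a natural flow path.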
Procedia PDF Downloads 5
2147 Classification of Echo Signals Based on Deep Learning
Authors: Aisulu Tileukulova, Zhexebay Dauren
Abstract:
Radar plays an important role because it is widely used in civil and military fields, and target detection is one of the most important radar applications. The accuracy of detecting inconspicuous aerial objects at radar facilities is lower against a background of noise. Convolutional neural networks can be used to improve the recognition of this type of aerial object. The purpose of this work is to develop an algorithm for recognizing aerial objects using convolutional neural networks and to train such a network. In this paper, the structure of the convolutional neural network (CNN) consists of different types of layers: 8 convolutional layers and 3 fully connected layers. ReLU is used as the activation function in the convolutional layers, while the last layer uses softmax. To detect a target, it is necessary to form a data set for training the neural network. We built a confusion matrix for the CNN model to measure its effectiveness. The results showed that the accuracy when testing the model was 95.7%. Classification of echo signals using a CNN shows high accuracy and significantly speeds up the process of predicting the target.
Keywords: radar, neural network, convolutional neural network, echo signals
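The confusion matrix used to measure the model's effectiveness, and the accuracy derived from it, can be computed as follows. A generic sketch with invented labels, not the paper's radar data:

```python
import numpy as np

def confusion_matrix(true, pred, n_classes):
    """cm[i, j] counts samples whose true class is i and predicted class is j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(true, pred):
        cm[t, p] += 1
    return cm

# toy two-class example: 0 = noise, 1 = aerial object
true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])
pred = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])
cm = confusion_matrix(true, pred, 2)
accuracy = np.trace(cm) / cm.sum()  # correct predictions lie on the diagonal
print(cm)
print(accuracy)  # 8 of 10 correct -> 0.8
```

Off-diagonal cells show exactly which classes the network confuses, which is why the matrix is more informative than a single accuracy figure.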
Procedia PDF Downloads 352