Search results for: canopy characters classification
2024 Earthquake Classification in Molluca Collision Zone Using Conventional Statistical Methods
Authors: H. J. Wattimanela, U. S. Passaribu, A. N. T. Puspito, S. W. Indratno
Abstract:
The Molluca Collision Zone is located at the junction of the Eurasian, Australian, Pacific, and Philippine plates. Between the Sangihe arc to the west of the collision zone and the Halmahera arc to the east, the collision is active and convex toward the Molluca Sea. This research analyzes the behavior of earthquake occurrence in the Molluca Collision Zone: the distribution of earthquakes in each partition region, the type of distribution of earthquake occurrences per partition region, the mean occurrence of earthquakes in each partition region, and the correlation between the partition regions. We calculate the number of earthquakes using a partition method and analyze their behavior using conventional statistical methods. The data used are shallow earthquakes with magnitudes ≥ 4 on the Richter scale for the period 1964-2013 in the Molluca Collision Zone. From the results, we can classify the partitioned regions, based on their correlation, into two classes: strong and very strong. This classification can be used for an early warning system in disaster management.
Keywords: molluca collision zone, partition regions, conventional statistical methods, earthquakes, classifications, disaster management
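The correlation-based grouping described above can be sketched as follows. This is a minimal illustration, not the authors' code: the yearly counts and the 0.6/0.8 class thresholds are invented for demonstration only.

```python
import numpy as np

def classify_correlations(counts, strong=0.6, very_strong=0.8):
    """Classify each pair of partition regions by the Pearson correlation
    of their yearly earthquake counts. Thresholds are illustrative."""
    r = np.corrcoef(counts)  # pairwise correlation matrix between regions
    classes = {}
    n = counts.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if abs(r[i, j]) >= very_strong:
                classes[(i, j)] = "very strong"
            elif abs(r[i, j]) >= strong:
                classes[(i, j)] = "strong"
            else:
                classes[(i, j)] = "weak"
    return classes

# Toy yearly counts for three partition regions (rows = regions)
counts = np.array([
    [12, 15, 9, 20, 18],
    [11, 16, 10, 19, 17],   # closely tracks region 0
    [5, 2, 14, 3, 6],       # largely unrelated
])
labels = classify_correlations(counts)
```

Region pairs whose seismicity rises and falls together end up in the "very strong" class, which is the grouping the abstract proposes for early warning.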
Procedia PDF Downloads 499
2023 Weeds Density Affects Yield and Quality of Wheat Crop under Different Crop Densities
Authors: Ijaz Ahmad
Abstract:
Weed competition is one of the major biotic constraints on wheat crop productivity. Avena fatua L. and Silybum marianum (L.) Gaertn. are among the worst weeds of wheat, greatly deteriorating wheat quality and subsequently reducing its market value. In this connection, two-year experiments were conducted in 2018 and 2019. Wheat was sown at different seeding rates (80, 100, 120 and 140 kg ha⁻¹) with different weed ratios (A. fatua : S. marianum) of 1:8, 2:7, 3:6, 4:5, 5:4, 6:3, 7:2, 8:1 and 0:0, respectively. Weed interference and wheat density were inversely related: wheat seeded at 140 kg ha⁻¹ suffered minimal weed interference. Yield losses were 17.5% at a weed ratio of 1:8 but 7.2% at 8:1. Across wheat densities, the highest percentage losses were recorded at 80 kg ha⁻¹ and the lowest at 140 kg ha⁻¹. Because of the large leaf canopy of S. marianum, other species cannot sustain their growth beneath it. It is therefore concluded that S. marianum is the main cause of reductions in the yield-related parameters, followed by A. fatua and the other weeds. Owing to the morphological mimicry of A. fatua with the wheat crop during the vegetative growth stage, it cannot be easily distinguished; managing A. fatua and S. marianum before seed setting is therefore recommended to reduce future weed problems. Based on the current studies, sowing wheat seed at 140 kg ha⁻¹ is recommended to better compete with all the field weeds.
Keywords: fat content, holly thistle, protein content, weed competition, wheat, wild oat
Procedia PDF Downloads 207
2022 Disentangling Biological Noise in Cellular Images with a Focus on Explainability
Authors: Manik Sharma, Ganapathy Krishnamurthi
Abstract:
The cost of some drugs and medical treatments has risen so much in recent years that many patients are having to go without. One of the more surprising reasons behind the cost is how long it takes to bring new treatments to market. Despite improvements in technology and science, research and development continues to lag; in fact, finding a new treatment takes, on average, more than 10 years and costs hundreds of millions of dollars. A successful classification project could make researchers more efficient and dramatically improve the industry's ability to model cellular images according to their relevant biology, in turn greatly decreasing the cost of treatments and ensuring they reach patients faster. This work aims at solving a part of this problem by creating a cellular image classification model that can decipher the genetic perturbations in cells (occurring naturally or artificially). Another interesting question addressed is what makes the deep-learning model decide in a particular fashion, which can further help in demystifying the mechanism of action of certain perturbations and paves the way towards the explainability of the deep-learning model.
Keywords: cellular images, genetic perturbations, deep-learning, explainability
Procedia PDF Downloads 113
2021 Detection and Classification of Rubber Tree Leaf Diseases Using Machine Learning
Authors: Kavyadevi N., Kaviya G., Gowsalya P., Janani M., Mohanraj S.
Abstract:
Hevea brasiliensis, also known as the rubber tree, is one of the world's foremost crop assets. One of the most significant advantages of the rubber plant in terms of air oxygenation is its capacity to reduce the likelihood of an individual developing respiratory allergies like asthma. To construct a system that can properly identify crop diseases and pests, and then create a database of insecticides for each pest and disease, treatment must first be provided for the illness that has been detected. This article primarily examines three major leaf diseases, chosen for their economic impact: bird's eye spot, algal spot and powdery mildew. The proposed work focuses on disease identification on rubber tree leaves, accomplished by employing one of the superior algorithms. The processing technique follows the stages of input, preprocessing, image segmentation, feature extraction, and classification, replacing the time-consuming procedures currently used to detect the sickness. As a consequence, the main ailments, their underlying causes, and the signs and symptoms of the diseases that harm the rubber tree are covered in this study.
Keywords: image processing, python, convolution neural network (CNN), machine learning
Procedia PDF Downloads 77
2020 Classifications of Sleep Apnea (Obstructive, Central, Mixed) and Hypopnea Events Using Wavelet Packet Transform and Support Vector Machines (SVM)
Authors: Benghenia Hadj Abd El Kader
Abstract:
Sleep apnea events, whether obstructive, central, mixed or hypopnea, are characterized by frequent breathing cessations or reductions in upper airflow during sleep. An advanced method for analyzing the patterning of biomedical signals to recognize obstructive sleep apnea and hypopnea is presented. To extract the characteristic parameters used for classifying the above-stated (obstructive, central, mixed) sleep apnea and hypopnea events, the proposed method is based first on the analysis of polysomnography signals such as the electrocardiogram (ECG) and electromyogram (EMG), followed by classification of the (obstructive, central, mixed) sleep apnea and hypopnea events. The analysis is carried out using the wavelet transform technique to extract the characteristic parameters, whereas classification is carried out by applying the SVM (support vector machine) technique. The obtained results show good recognition rates using these characteristic parameters.
Keywords: obstructive, central, mixed, sleep apnea, hypopnea, ECG, EMG, wavelet transform, SVM classifier
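The feature-extraction step can be illustrated with a simplified stand-in: a plain Haar wavelet decomposition (rather than the full wavelet packet transform used in the paper) whose per-level detail energies serve as characteristic parameters. The signals and the number of levels below are invented for the sketch.

```python
import numpy as np

def haar_step(x):
    """One level of the Haar wavelet transform: approximation and detail."""
    x = x[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (detail)
    return a, d

def wavelet_energy_features(signal, levels=3):
    """Relative energy in each detail band -- a simplified stand-in for the
    wavelet packet features extracted from the ECG/EMG signals."""
    feats = []
    a = np.asarray(signal, dtype=float)
    total = np.sum(a ** 2)
    for _ in range(levels):
        a, d = haar_step(a)
        feats.append(np.sum(d ** 2) / total)
    return np.array(feats)

# A slow oscillation vs. the same signal with added high-frequency content
t = np.linspace(0, 1, 256, endpoint=False)
slow = np.sin(2 * np.pi * 2 * t)
fast = slow + 0.5 * np.sin(2 * np.pi * 60 * t)
f_slow = wavelet_energy_features(slow)
f_fast = wavelet_energy_features(fast)
```

Feature vectors like these would then be fed to the SVM classifier; high-frequency breathing disturbances show up as extra energy in the finer detail bands.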
Procedia PDF Downloads 371
2019 Discrimination and Classification of Vestibular Neuritis Using Combined Fisher and Support Vector Machine Model
Authors: Amine Ben Slama, Aymen Mouelhi, Sondes Manoubi, Chiraz Mbarek, Hedi Trabelsi, Mounir Sayadi, Farhat Fnaiech
Abstract:
Vertigo is a sensation of feeling off balance; the cause of this symptom is very difficult to interpret and needs a complementary exam. Generally, vertigo is caused by an ear problem; some of the most common causes include benign paroxysmal positional vertigo (BPPV), Meniere's disease and vestibular neuritis (VN). In clinical practice, different tests of the videonystagmographic (VNG) technique are used to detect the presence of vestibular neuritis. The topographical diagnosis of this disease presents a large diversity in its characteristics, which poses a mixture of problems for the usual etiological analysis methods. In this study, a vestibular neuritis analysis method is proposed for videonystagmography applications, using an estimation of pupil movements in the case of uncontrolled motion to obtain efficient and reliable diagnostic results. First, an estimation of the pupil displacement vectors using the Hough Transform (HT) is performed to approximate the location of the pupil region. Then, temporal and frequency features are computed from the rotation angle variation of the pupil motion. Finally, optimized features are selected using the Fisher criterion for discrimination and classification of the VN disease. Experimental results are analyzed using two categories: normal and pathologic. By classifying the reduced features with a Support Vector Machine (SVM), a classification accuracy of 94% is achieved. Compared to recent studies, the proposed expert system is extremely helpful and highly effective in resolving the problem of VNG analysis and providing an accurate diagnosis for medical devices.
Keywords: nystagmus, vestibular neuritis, videonystagmographic system, VNG, Fisher criterion, support vector machine, SVM
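The Fisher-criterion selection step can be sketched as follows. The data are synthetic, and the two-class Fisher score shown is a standard formulation, not necessarily the exact variant used in the study.

```python
import numpy as np

def fisher_score(X, y):
    """Fisher criterion per feature: squared between-class mean separation
    over the summed within-class variances (two-class case)."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12
    return num / den

rng = np.random.default_rng(0)
n = 100
y = np.repeat([0, 1], n)
# Feature 0 separates normal from pathologic; feature 1 is pure noise
f0 = np.concatenate([rng.normal(0, 1, n), rng.normal(3, 1, n)])
f1 = rng.normal(0, 1, 2 * n)
X = np.column_stack([f0, f1])
scores = fisher_score(X, y)
best = int(np.argmax(scores))
```

Features with the highest scores are kept as the reduced set passed to the SVM.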
Procedia PDF Downloads 139
2018 Machine Learning Techniques in Bank Credit Analysis
Authors: Fernanda M. Assef, Maria Teresinha A. Steiner
Abstract:
The aim of this paper is to compare and discuss classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank: 2,600 clients are classified as non-defaulters, 1,551 as defaulters and 1,281 as temporarily defaulters, meaning that these clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR) and Support Vector Machines (SVM). For each method, different parameters were analyzed in order to obtain different results, from which the best of each technique was compared. Initially, the data were coded in thermometer code (numerical attributes) or dummy coding (nominal attributes). The methods were then evaluated for each parameter, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives and true negatives. This comparison showed that the best method in terms of accuracy was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters and 75.37% for temporarily defaulters). However, the best accuracy does not always indicate the best technique: in the classification of temporarily defaulters, ANN-RBF was surpassed in terms of false positives by SVM, which had the lowest rate (0.07%) of false positive classifications.
All these details are discussed in light of the results found, and an overview of what was presented is given in the conclusion of this study.
Keywords: artificial neural networks (ANNs), classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines
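The one-against-all evaluation in terms of true/false positives and negatives can be sketched as below; the labels are toy values, not the bank's data.

```python
def one_vs_all_metrics(y_true, y_pred, positive):
    """Confusion counts with one class treated as positive, all others negative."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    acc = (tp + tn) / len(y_true)
    return {"tp": tp, "fp": fp, "fn": fn, "tn": tn, "accuracy": acc}

# Toy labels: defaulter (D), non-defaulter (N), temporarily defaulter (T)
y_true = ["N", "N", "D", "T", "D", "N", "T", "D"]
y_pred = ["N", "D", "D", "T", "D", "N", "N", "D"]
m = one_vs_all_metrics(y_true, y_pred, positive="D")
```

Running this once per class ("D", "N", "T") reproduces the one-against-all scheme the abstract describes; as it notes, a model can win on accuracy while losing on false positives for a particular class.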
Procedia PDF Downloads 104
2017 Machine Learning Approach for Yield Prediction in Semiconductor Production
Authors: Heramb Somthankar, Anujoy Chakraborty
Abstract:
This paper presents a classification study on yield prediction in semiconductor production using machine learning approaches. A complex semiconductor production process is generally monitored continuously through signals acquired from sensors and measurement sites. A monitoring system yields a variety of signals containing useful information, irrelevant information, and noise. With each signal considered a feature, feature selection is used to find the most relevant signals. The open-source UCI SECOM dataset provides 1,567 such samples, of which 104 fail quality assurance. Feature extraction and selection are performed on the dataset, and the useful signals are considered for further study. Afterward, common machine learning algorithms are employed to predict whether a sample passes or fails, and the most suitable algorithm is selected based on the accuracy and loss of the ML model.
Keywords: deep learning, feature extraction, feature selection, machine learning classification algorithms, semiconductor production monitoring, signal processing, time-series analysis
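As a hedged illustration of the screening idea (not the paper's actual feature-selection method), a first pass over SECOM-style sensor data often drops channels that are near-constant or mostly missing before any heavier selection is applied. The thresholds and data below are invented.

```python
import numpy as np

def screen_features(X, var_threshold=1e-8, max_nan_frac=0.3):
    """Drop sensor channels that are near-constant or mostly missing --
    a first screening pass before heavier feature selection."""
    nan_frac = np.isnan(X).mean(axis=0)
    var = np.nanvar(X, axis=0)
    keep = (nan_frac <= max_nan_frac) & (var > var_threshold)
    return np.where(keep)[0]

rng = np.random.default_rng(1)
X = np.column_stack([
    rng.normal(size=50),                           # informative, varying channel
    np.full(50, 3.14),                             # constant channel (no information)
    np.where(rng.random(50) < 0.6, np.nan, 1.0),   # mostly-missing channel
])
kept = screen_features(X)
```

Only the varying, well-populated channel survives; the reduced set is what the classification algorithms would then be trained on.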
Procedia PDF Downloads 110
2016 Pattern Recognition Based on Simulation of Chemical Senses (SCS)
Authors: Nermeen El Kashef, Yasser Fouad, Khaled Mahar
Abstract:
No AI-complete system can model the human brain or behavior without looking at the totality of the whole situation and incorporating a combination of senses. This paper proposes a pattern recognition model based on Simulation of Chemical Senses (SCS) for the separation and classification of sign language. The model is based on the human taste-control strategy: the tongue first clusters an input substance into its basic tastes, and then the brain recognizes its flavor. To implement this strategy, a two-level architecture inspired by the taste system is proposed. The separation level of the architecture clusters hand postures, while the classification level recognizes the sign language. The efficiency of the proposed model is demonstrated experimentally by recognizing the American Sign Language (ASL) dataset; the recognition accuracy obtained for ASL numbers is 92.9 percent.
Keywords: artificial intelligence, biocybernetics, gustatory system, sign language recognition, taste sense
Procedia PDF Downloads 295
2015 Unearthing Air Traffic Control Officers Decision Instructional Patterns From Simulator Data for Application in Human Machine Teams
Authors: Zainuddin Zakaria, Sun Woh Lye
Abstract:
Despite continuous advancements in automated conflict resolution tools, adoption of automation by Air Traffic Control Officers (ATCOs) remains low. Trust in or acceptance of these tools, and conformance to individual ATCO preferences in strategy execution for conflict resolution, are two key factors that affect their use. This paper proposes a methodology to unearth and classify ATCO conflict resolution strategies from simulator data of trained and qualified ATCOs. The methodology involves the extraction of ATCO executive control actions and the establishment of a strategy resolution classification system based on ATCO radar commands and prevailing flight parameters when deconflicting a pair of aircraft. Six main strategies used to handle various categories of conflict were identified and discussed. It was found that ATCOs were about twice as likely to choose only vertical maneuvers in conflict resolution as horizontal maneuvers or a combination of vertical and horizontal maneuvers.
Keywords: air traffic control strategies, conflict resolution, simulator data, strategy classification system
Procedia PDF Downloads 149
2014 Presence and Absence: The Use of Photographs in Paris, Texas
Authors: Yi-Ting Wang, Wen-Shu Lai
Abstract:
The subject of this paper is the photography in the 1983 film Paris, Texas, directed by Wim Wenders. Wenders is well known as a film director as well as a photographer, and photographs appear as an element in many of his films. Some of these photographs serve as details within the films, while others play important roles relevant to the story. This paper considers photographs in film as a specific type of text, the output of both still photography and the film itself. In Paris, Texas, three sets of important photographs appear whose symbolic meanings are as dialectical as their text types. The relationship between these photos and the storyline is both dependent and isolated. The film's images fly by and progress into other images, while the photos in the film serve a unique narrative function by stopping the continuously flowing images, thus providing the viewer a space for imagination and contemplation. They are more than just artistic forms; they also contain multiple meanings. The photographs in Paris, Texas play the role of both presence and absence according to their shifting meanings. There are references to their presence: photographs exist between film time and narrative time, so in terms of the interaction between the characters in the film, photographs are a common symbol of the beginning and end of the characters' journeys. In terms of the audience, the film's photographs are a link in the viewing frame structure, through which the creative motivation of the film director can be explored. Photographs also point to the absence of certain objects: the scenes in the photos represent an imaginary map of emotion. The town of Paris, Texas is therefore isolated from the physical presence of the photograph, and is far more abstract than the reality in the film.
This paper embraces the ambiguous nature of photography and demonstrates its presence and absence in film with regard to the meaning of text. It is worth reflecting that the temporary nature of the interpretation of the film's photographs is far greater than that of any other type of photographic text: the characteristics of the text cause the interpretation to change along with variations in the interpretation process, making their meaning a dynamic process. The presence or absence of the photographs in the context of Paris, Texas also demonstrates the presence and absence of the creator, time, the truth, and the imagination. The film becomes more complete as a result of the revelation of the photographs, while the intertextual connection between the two forms simultaneously provides multiple possibilities for interpreting the photographs in the film.
Keywords: film, Paris, Texas, photography, Wim Wenders
Procedia PDF Downloads 320
2013 Building Envelope Engineering and Typologies for Complex Architectures: Composition and Functional Methodologies
Authors: Massimiliano Nastri
Abstract:
The study examines façade systems according to their constitutive and typological characters, as well as functional and applicative requirements such as expressive, constructive, and interactive criteria towards environmental, perceptive, and energy conditions. The envelope systems are understood as instruments of mediation, interchange, and dynamic interaction with environmental conditions. The façades are observed through the sustainable concept of eco-efficient envelopes: selective and multi-purpose filters, adaptable and adjustable according to environmental performance.
Keywords: typologies of façades, environmental and energy sustainability, interaction and perceptive mediation, technical skins
Procedia PDF Downloads 153
2012 Modeling the Present Economic and Social Alienation of Working Class in South Africa in the Musical Production ‘From Marikana to Mahagonny’ at Durban University of Technology (DUT)
Authors: Pamela Tancsik
Abstract:
The 2018 stage production ‘From Marikana to Mahagonny’ began with a prologue in the form of the award-winning documentary ‘Miners Shot Down’ by Rehad Desai, followed by Brecht/Weill's song play, or scenic cantata, ‘Mahagonny’, premièred in Baden-Baden in 1927. The central directorial concept of the DUT musical production ‘From Marikana to Mahagonny’ was to show a connection between the socio-political alienation of mineworkers in present-day South Africa and Brecht's alienation effect in his scenic cantata ‘Mahagonny’. Marikana is a mining town about 50 km west of South Africa's capital, Pretoria. Mahagonny is a fantasy name for a utopian mining town in the United States. The characters, setting, and lyrics refer to America, with songs like ‘Benares’ and ‘Moon of Alabama’ and the use of typically American elements such as dollars, saloons, and the telephone. The six singing characters in ‘Mahagonny’ all have typical American names: Charlie, Billy, Bobby, Jimmy, and the two girls they meet later are called Jessie and Bessie. The four men set off to seek Mahagonny. For them, it is the ultimate dream destination, promising the fulfilment of all their desires: girls, alcohol, and dollars; in short, materialistic goals. Instead of finding a paradise, they experience how money, the practice of exploitative capitalism, and the lack of any morality and humanity destroy their lives. In the end, Mahagonny is demolished by a hurricane, an event which happened in 1926 in the United States. ‘God’ in person arrives, disillusioned and bitter, complaining about violent and immoral mankind, and in the end sends them all to hell. Charlie, Billy, Bobby, and Jimmy reply that this punishment means nothing to them because they have already been in hell for a long time: hell on earth is a reality, so the threat of hell after life is meaningless.
Human life was also taken during the stand-off between striking mineworkers and the South African police on 16 August 2012. Miners from the Lonmin Platinum Mine went on an illegal strike, equipped with bush knives and spears. They were striking because their living conditions had never improved; they still lived in muddy shacks with no running water and electricity. Wages were as low as R4,000 (South African Rands), equivalent to just over 200 Euro per month. By August 2012, the negotiations between Lonmin management and the mineworkers' unions, asking for a minimum wage of R12,500 per month, had failed. Police were sent in by the government, and when the miners did not withdraw, the police shot at them. 34 were killed, some by bullets in their backs while running away and trying to hide behind rocks. In the musical play ‘From Marikana to Mahagonny’, audiences in South Africa are confronted with a documentary about Marikana, followed by Brecht/Weill's scenic cantata, highlighting the tragic parallels between the Mahagonny story and characters from 1927 America and the Lonmin workers in South Africa today, showing that in 95 years capitalism has not changed.
Keywords: alienation, brecht/Weill, mahagonny, marikana/South Africa, musical theatre
Procedia PDF Downloads 98
2011 Analysis of Sediment Distribution around Karang Sela Coral Reef Using Multibeam Backscatter
Authors: Razak Zakariya, Fazliana Mustajap, Lenny Sharinee Sakai
Abstract:
A sediment map is quite important in the marine environment, as the sediment itself contains a wealth of information that can be used in other research. This study was conducted using a Reson T20 multibeam echo sounder on 15 August 2020 at Karang Sela (a coral reef area) at Pulau Bidong. The study aims to identify the sediment types around the coral reef using bathymetry and backscatter data. Sediment in the study area was collected as ground-truthing data to verify the seabed classification. A dry sieving method with a sieve shaker was used to analyze the sediment samples. PDS 2000 software was used for data acquisition, Qimera QPS version 2.4.5 for processing the bathymetry data, and FMGT QPS version 7.10 for processing the backscatter data. The backscatter data were then analyzed using the maximum likelihood classification tool in ArcGIS version 10.8. The results identified three types of sediment around the coral reef: very coarse sand, coarse sand, and medium sand.
Keywords: sediment type, MBES echo sounder, backscatter, ArcGIS
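The maximum likelihood classification step (performed in the study with ArcGIS) can be sketched in one dimension: each backscatter value is assigned to the sediment class whose Gaussian likelihood is highest. The class means and standard deviations below are made-up illustrative values, not measurements from the survey.

```python
import numpy as np

def ml_classify(x, class_stats):
    """Assign each backscatter value to the class with the highest Gaussian
    log-likelihood (a 1-D analogue of maximum likelihood classification)."""
    names = list(class_stats)
    logls = []
    for name in names:
        mu, sigma = class_stats[name]
        logls.append(-np.log(sigma) - (x - mu) ** 2 / (2 * sigma ** 2))
    idx = np.argmax(np.column_stack(logls), axis=1)
    return [names[i] for i in idx]

# Illustrative mean/std backscatter (dB) per sediment class -- invented values
stats = {
    "very coarse sand": (-14.0, 2.0),
    "coarse sand": (-20.0, 2.0),
    "medium sand": (-26.0, 2.0),
}
labels = ml_classify(np.array([-13.5, -21.0, -27.0]), stats)
```

In the real workflow the likelihoods are multivariate and the class statistics come from the ground-truth sieve samples, but the decision rule is the same.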
Procedia PDF Downloads 87
2010 Classification of Political Affiliations by Reduced Number of Features
Authors: Vesile Evrim, Aliyu Awwal
Abstract:
With the evolution of technology, the expression of opinions has shifted to the digital world. The domain of politics, one of the hottest topics of opinion mining research, is here merged with behavior analysis for determining affiliation in text, which constitutes the subject of this paper. This study aims to classify news/blog text as either Republican or Democrat with the minimum number of features. As an initial set, 68 features, 64 of which are Linguistic Inquiry and Word Count (LIWC) features, are tested against 14 benchmark classification algorithms. In later experiments, the dimension of the feature vector is reduced using 7 feature selection algorithms. The results show that the Decision Tree, Rule Induction and M5 Rule classifiers, when used with the SVM and IGR feature selection algorithms, performed best, with up to 82.5% accuracy on the given dataset. Further tests on a single feature and on the linguistic-based feature sets showed similar results. The feature "function", an aggregate feature of the linguistic category, is found to be the most differentiating feature among the 68, classifying articles as Republican or Democrat with 81% accuracy by itself.
Keywords: feature selection, LIWC, machine learning, politics
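IGR-style selection rests on information gain, the reduction in label entropy from splitting on a feature. A minimal sketch with invented data follows (plain information gain rather than the gain ratio, for brevity).

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label list."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Reduction in label entropy after splitting on a discrete feature --
    the quantity behind IGR-style feature selection."""
    n = len(labels)
    base = entropy(labels)
    split = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        split += len(subset) / n * entropy(subset)
    return base - split

# Toy data: high use of "function" words perfectly splits the two parties,
# while a random feature tells us almost nothing
labels = ["Rep", "Rep", "Dem", "Dem", "Rep", "Dem"]
function_high = ["y", "y", "n", "n", "y", "n"]
random_feat = ["y", "n", "y", "n", "y", "n"]
ig_function = information_gain(function_high, labels)
ig_random = information_gain(random_feat, labels)
```

A feature like "function" that separates the classes on its own scores near the full label entropy, which is consistent with it alone achieving 81% accuracy in the study.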
Procedia PDF Downloads 383
2009 Improving Fake News Detection Using K-means and Support Vector Machine Approaches
Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy
Abstract:
Fake news and false information are major challenges for all types of media, especially social media. There is a great deal of false information, fake likes, fake views and duplicated accounts, as big social networks such as Facebook and Twitter have admitted. Much of the information appearing on social media is doubtful and in some cases misleading; it needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to detect false information with less computation time and complexity, the dimensions need to be reduced. One of the best techniques for reducing data size is feature selection, whose aim is to choose a feature subset from the original set that improves classification performance. In this paper, a feature selection method is proposed that integrates K-means clustering and Support Vector Machine (SVM) approaches and works in four steps. First, the similarities between all features are calculated. Then, the features are divided into several clusters. Next, the final feature set is selected from all clusters. Finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several benchmark datasets, and the outcome showed better classification of false information for our work. The detection performance was improved in two respects: the detection runtime decreased, and the classification accuracy increased, because of the elimination of redundant features and the reduction of the datasets' dimensions.
Keywords: clustering, fake news detection, feature selection, machine learning, social media, support vector machine
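The first three of the four steps can be sketched as follows. This is a minimal illustration under assumptions: a tiny k-means is written inline, correlation stands in for the paper's feature-similarity measure, the representative of each cluster is taken as the feature closest to the cluster centre, and the final SVM classification step is omitted.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means, used here to cluster features (rows of X)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    return assign

def select_features(data, k=2):
    """Steps 1-3: feature-feature similarity, cluster the features,
    keep the feature closest to each cluster centre."""
    corr = np.corrcoef(data.T)                 # step 1: similarity matrix
    assign = kmeans(corr, k)                   # step 2: cluster the features
    selected = []
    for j in range(k):                         # step 3: one representative each
        members = np.where(assign == j)[0]
        if len(members) == 0:
            continue
        centre = corr[members].mean(axis=0)
        dist = np.linalg.norm(corr[members] - centre, axis=1)
        selected.append(int(members[np.argmin(dist)]))
    return sorted(selected)

rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
# Features 0-2 are near-duplicates of one signal; feature 3 is independent noise
data = np.hstack([base + 0.01 * rng.normal(size=(200, 3)),
                  rng.normal(size=(200, 1))])
chosen = select_features(data, k=2)
```

Redundant features land in the same cluster and only one representative survives, which is how the method cuts the dataset's dimensions before the SVM (step 4) runs.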
Procedia PDF Downloads 177
2008 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation
Authors: Jonathan Gong
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test for diagnosing diseases. In recent years, however, X-rays have not been widely used to detect and diagnose COVID-19, mainly because of their low diagnostic accuracy and confounding with pneumonia, another respiratory disease. Research in this field has nonetheless suggested that artificial neural networks can successfully diagnose COVID-19 with high accuracy.
Models and Data: The dataset used is the COVID-19 Radiography Database, which includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4,035 images and validated on 807 separate images. An important feature of the training images is that they are cropped beforehand to eliminate distractions during training. The image segmentation model uses an improved U-Net architecture to extract the lung mask from the chest X-ray image; it is trained on 8,577 images with a validation split of 20%. The models are evaluated on an external dataset, and their accuracy, precision, recall, f1-score, IOU, and loss are calculated.
Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IOU of 0.928.
Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
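The IOU reported for the segmentation model can be computed as below; the masks are toy examples, not the dataset's lung masks.

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-union between two binary masks, as reported
    for the U-Net lung segmentation."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

pred = np.zeros((8, 8), dtype=int)
target = np.zeros((8, 8), dtype=int)
pred[2:6, 2:6] = 1      # predicted lung region: a 4x4 block
target[3:7, 2:6] = 1    # ground-truth region shifted down by one row
score = iou(pred, target)
```

An IOU of 0.928, as the model achieves, means predicted and ground-truth lung masks overlap almost completely.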
Procedia PDF Downloads 131
2007 Classification of Health Risk Factors to Predict the Risk of Falling in Older Adults
Authors: L. Lindsay, S. A. Coleman, D. Kerr, B. J. Taylor, A. Moorhead
Abstract:
Cognitive decline and frailty are apparent in older adults, leading to an increased risk of falling. Currently, health care professionals have to make professional decisions regarding such risks, and hence difficult decisions regarding the future welfare of the ageing population. This study uses health data from The Irish Longitudinal Study on Ageing (TILDA), focusing on adults over the age of 50 years, to analyse health risk factors and predict the likelihood of falls. The prediction is based on machine learning algorithms in which health risk factors are used as inputs to predict the likelihood of falling. Initial results show that health risk factors such as long-term health issues contribute to the number of falls. The identification of such health risk factors has the potential to inform health and social care professionals, older people and their family members in order to mitigate the risks of daily living.
Keywords: classification, falls, health risk factors, machine learning, older adults
Procedia PDF Downloads 150
2006 Weed Classification Using a Two-Dimensional Deep Convolutional Neural Network
Authors: Muhammad Ali Sarwar, Muhammad Farooq, Nayab Hassan, Hammad Hassan
Abstract:
Pakistan is highly recognized for its agriculture and is well known for producing substantial amounts of wheat, cotton, and sugarcane. However, some factors contribute to a decline in crop quality and a reduction in overall output, and one of the main factors is the presence of weeds and their late detection. Detection is currently manual and demands a detailed inspection by the farmer; with timely detection, the farmer can save costs and increase overall production. The focus of this research is to identify and classify the four main types of weed (Small-Flowered Cranesbill, Chickweed, Prickly Acacia, and Black-Grass) that are prevalent in our region's major crops. In this work, we implemented three different deep learning techniques, YOLO-v5, Inception-v3, and a deep CNN, on the same dataset, and concluded that the deep convolutional neural network performed best for this classification, with an accuracy of 97.45%. Relative to the state of the art, our proposed approach yields 2% better results. We devised the architecture efficiently so that it can be used in real time.
Keywords: deep convolution networks, Yolo, machine learning, agriculture
Procedia PDF Downloads 1192005 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features
Authors: Bushra Zafar, Usman Qamar
Abstract:
Large sample sizes and high dimensionality limit the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for extracting knowledge from a variety of databases; they provide supervised learning in the form of classification to design models that describe vital data classes, where the structure of the classifier is based on the class attribute. Classification efficiency and accuracy are often greatly influenced by noisy and undesirable features in real application data sets. The inherent nature of a data set greatly masks its quality analysis and leaves quite few practical approaches to use. To our knowledge, we present for the first time a new approach for investigating the structure and quality of datasets by providing a targeted analysis of the localization of noisy and irrelevant features. Machine learning relies on feature selection as a pre-processing step, which allows us to select a few of the features as a subset by reducing the space according to a certain evaluation criterion. The primary objective of this study is to trim down the scope of the given data sample by searching for a small set of important features that may yield good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is employed, with an external classifier used for discriminative feature selection. Features are selected based on their number of occurrences in the chosen chromosomes. Sample datasets have been used to demonstrate the proposed idea effectively. The proposed method improved the average accuracy across different datasets to about 95%. Experimental results illustrate that the proposed algorithm increases the accuracy of prediction of different diseases.Keywords: data mining, genetic algorithm, KNN algorithms, wrapper based feature selection
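The abstract names the ingredients (genetic algorithm, wrapper fitness from an external classifier, occurrence counts over chosen chromosomes) without giving the exact procedure, so the following is a simplified sketch under those assumptions: chromosomes are feature bit-masks, the wrapper fitness is leave-one-out 1-NN accuracy, and the data are synthetic with two informative features:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic dataset: features 0 and 1 informative, the other 6 pure noise.
n = 120
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 8))
X[:, 0] += 2.0 * y
X[:, 1] -= 2.0 * y

def loo_1nn_accuracy(X, y, mask):
    """Wrapper fitness: leave-one-out 1-NN accuracy on the masked feature subset."""
    if not mask.any():
        return 0.0
    Xs = X[:, mask]
    d = ((Xs[:, None, :] - Xs[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)      # exclude each sample itself
    return float((y[d.argmin(1)] == y).mean())

# Tiny genetic algorithm over bit-mask chromosomes.
pop = rng.random((20, 8)) < 0.5
for _ in range(15):
    fit = np.array([loo_1nn_accuracy(X, y, m) for m in pop])
    parents = pop[np.argsort(fit)[-10:]]                         # keep fittest half
    kids = parents[rng.integers(0, 10, (10, 8)), np.arange(8)]   # uniform crossover
    kids ^= rng.random((10, 8)) < 0.05                           # bit-flip mutation
    pop = np.vstack([parents, kids])

fit = np.array([loo_1nn_accuracy(X, y, m) for m in pop])
chosen = pop[fit >= np.quantile(fit, 0.75)]   # the chosen chromosomes
occurrence = chosen.sum(0)                    # per-feature occurrence count
print(occurrence)
```

Features with high occurrence counts across the chosen chromosomes form the trimmed subset, mirroring the occurrence-based selection rule described above.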
Procedia PDF Downloads 3182004 A Critical Geography of Reforestation Program in Ghana
Authors: John Narh
Abstract:
There is a high rate of deforestation in Ghana due to agricultural expansion, illegal mining, and illegal logging. While attempting to address these illegalities, Ghana has also initiated a reforestation program known as the Modified Taungya System (MTS). Within the MTS framework, farmers are allocated degraded forestland and provided with tree seedlings to practice agroforestry until the trees form a canopy. Yet the political, ecological, and economic models that inform the selection of tree species, the motivations of participating farmers, and the factors that account for differential access to the land and the performance of farmers engaged in the program remain underexplored. Using a sequential explanatory mixed methods approach in five forest-fringe communities in the Eastern Region of Ghana, the study reveals that economic factors and Ghana’s commitment to international conventions on the environment underpin the selection of tree species for the MTS program. Social networks play a critical role in gaining access to the program, while access to remittances enhances poor farmers’ chances in it. Farmers are more motivated by access to degraded forestland on which to cultivate food crops than by having a share in the trees that they plant. As such, in communities where participating farmers are not informed about their benefit in the trees that they plant, the program is largely unsuccessful.Keywords: translocality, deforestation, forest management, social network
Procedia PDF Downloads 972003 Biophysically Motivated Phylogenies
Authors: Catherine Felce, Lior Pachter
Abstract:
Current methods for building phylogenetic trees from gene expression data consider mean expression levels. With single-cell technologies, we can leverage more information about cell dynamics by considering the entire distribution of gene expression across cells. Using biophysical modeling, we propose a method for constructing phylogenetic trees from scRNA-seq data, building on Felsenstein's method of continuous characters. This method can highlight genes whose level of expression may be unchanged between species, but whose rates of transcription/decay may have evolved over time.Keywords: phylogenetics, single-cell, biophysical modeling, transcription
Procedia PDF Downloads 562002 From Restraint to Obligation: The Protection of the Environment in Times of Armed Conflict
Authors: Aaron Walayat
Abstract:
The protection of the environment is one of the most developed areas of international humanitarian law. This paper examines the history of the protection of the environment in times of armed conflict, tracing the movement from the traditional notion of restraint observed in antiquity towards the obligation to protect the environment, and examining the treaties and agreements, both binding and non-binding, which have contributed to environmental protection in war. The paper begins with a discussion of the ancient concept of restraint. This section examines the social norms in favor of protection of the environment as observed in the Bible, Greco-Roman mythology, and more contemporary literature. The study of the traditional rejection of total war establishes the social foundation from which the current legal regime has stemmed. The paper then studies the principle of restraint as codified in international humanitarian law. It mainly examines Additional Protocol I to the Geneva Conventions of 1949 and existing international law concerning civilian objects, together with the principles of international humanitarian law governing the classification of civilian objects and military objectives. The paper then explores the environment’s classification as both a military objective and a civilian object, as well as arguments in favor of classifying the whole environment as a civilian object. It then discusses the current legal regime surrounding the protection of the environment, including the 1868 Declaration of St. Petersburg, the 1907 Hague Convention No. IV, the Geneva Conventions, and the 1976 Environmental Modification Convention. The paper concludes by outlining the movement of the principles of restraint into the various treaties, agreements, and declarations of the current regime of international humanitarian law.
This paper thus provides an analysis of the history and significance of the relationship between international humanitarian law and the growing field of international environmental law.Keywords: armed conflict, environment, legal regime, restraint
Procedia PDF Downloads 2072001 Comparison of Support Vector Machines and Artificial Neural Network Classifiers in Characterizing Threatened Tree Species Using Eight Bands of WorldView-2 Imagery in Dukuduku Landscape, South Africa
Authors: Galal Omer, Onisimo Mutanga, Elfatih M. Abdel-Rahman, Elhadi Adam
Abstract:
Threatened tree species (TTS) play a significant role in ecosystem functioning and services, land use dynamics, and other socio-economic aspects, including ecological, economic, livelihood, security-based, and well-being benefits. The development of techniques for mapping and monitoring TTS is thus critical for understanding the functioning of ecosystems. The advent of advanced imaging systems and supervised learning algorithms has provided an opportunity to classify TTS over a fragmenting landscape. Recently, vegetation maps have been produced using advanced imaging systems such as WorldView-2 (WV-2) and robust classification algorithms such as support vector machines (SVM) and artificial neural networks (ANN). However, delineation of TTS in a fragmenting landscape using high-resolution imagery has widely remained elusive due to the complexity of the species structure and their distribution. Therefore, the objective of the current study was to examine the utility of advanced WV-2 data for mapping TTS in the fragmenting Dukuduku indigenous forest of South Africa using SVM and ANN classification algorithms. The results showed the robustness of the two machine learning algorithms, with an overall accuracy (OA) of 77.00% (total disagreement = 23.00%) for SVM and 75.00% (total disagreement = 25.00%) for ANN using all eight bands of WV-2 (8B). This study concludes that SVM and ANN classification algorithms with WV-2 8B have the potential to classify TTS in the Dukuduku indigenous forest. This study offers relatively accurate information that is important for forest managers making informed decisions regarding management and conservation protocols for TTS.Keywords: artificial neural network, threatened tree species, indigenous forest, support vector machines
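The SVM-versus-ANN comparison on 8-band pixel spectra can be sketched as follows; the band values, cover classes, and model hyperparameters here are synthetic stand-ins (not the study's data or tuned models), so the printed accuracies only illustrate the evaluation protocol of overall accuracy and total disagreement:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(7)

# Synthetic stand-in for WV-2 pixels: 8 band values per pixel and three
# hypothetical cover classes (e.g. two tree species plus background).
n_per_class, n_bands = 100, 8
centres = rng.random((3, n_bands))
X = np.vstack([c + 0.08 * rng.standard_normal((n_per_class, n_bands)) for c in centres])
y = np.repeat(np.arange(3), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7, stratify=y)
results = {}
for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("ANN", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=7))]:
    oa = clf.fit(X_tr, y_tr).score(X_te, y_te)   # overall accuracy
    results[name] = oa
    print(f"{name}: OA = {oa:.2%}, total disagreement = {1 - oa:.2%}")
```

With real imagery the pixel vectors would come from the eight WV-2 bands at ground-truthed locations rather than being simulated.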
Procedia PDF Downloads 5152000 Literary Interpretation and Systematic-Structural Analysis of the Titles of the Works “The Day Lasts More than a Hundred Years”, “Doomsday”
Authors: Bahor Bahriddinovna Turaeva
Abstract:
The article provides a structural analysis of the titles of the famous Kyrgyz writer Chingiz Aitmatov’s creative works “The Day Lasts More Than a Hundred Years” and “Doomsday”. The author’s creative purpose in naming a work of art, and the role of the elements of the plot and the composition of the novels in revealing the essence of the title, are explained. The criteria that are important in naming the author’s works in different genres are classified, and the titles that denote artistic time and artistic space are studied separately. In world literary studies, the chronotope is regarded as a literary-aesthetic category expressing the scope of the interpretation of the universe, the author’s outlook and imagination regarding the foundation of the world, the definition of personages, and the compositional means of expressing the sequence and duration of events. A creative comprehension of the chronotope as a means of arranging the composition and structure of a work and constructing the epic field of the text demands a special approach to understanding the aesthetic character of the work. Since the chronotope includes all the elements of a fictional work, it is impossible to present the plot, composition, conflict, system of characters, and the feelings and moods of the characters without describing the chronotope. In the subsequent development of scientific-theoretical thought worldwide, the chronotope came to be accepted as one of the poetic means of demonstrating reality, as well as a literary process basic to the expression of reality in the compositional construction and illustration of the plot, relying on the writer’s intention and the ideological conception of the literary work. Literary time enables one to comprehend the literary world picture created by the author in terms of the descriptive subject and object of the work.
Therefore, one of the topical tasks of modern Uzbek literary studies is to describe historical evidence, events, the lives of outstanding people, and the chronology of the near past on the basis of literary time; the creative works of a certain period, of several creators, or of an individual writer are analyzed separately or in a comparative-typological aspect.Keywords: novel, title, chronotope, motive, epigraph, analepsis, structural analysis, plot line, composition
Procedia PDF Downloads 761999 The Wear Recognition on Guide Surface Based on the Feature of Radar Graph
Authors: Youhang Zhou, Weimin Zeng, Qi Xie
Abstract:
In order to solve the wear recognition problem for machine tool guide surfaces, a new recognition method based on the radar-graph barycentre feature is presented in this paper. Firstly, the gray mean value, skewness, projection variance, flatness, and kurtosis features of the guide surface image data are defined as primary characteristics. Secondly, data visualization technology based on the radar graph is used: the visual barycentre graphical feature is derived from the radar plot of the multi-dimensional data. Thirdly, a classifier based on support vector machine technology is used; the radar-graph barycentre feature and the original wear features are put into the classifier separately for classification and comparative analysis of the experimental results. The calculation and experimental results show that the method based on the radar-graph barycentre feature can detect guide surface wear effectively.Keywords: guide surface, wear defects, feature extraction, data visualization
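The barycentre construction can be sketched directly: each of the five image features is placed on its own evenly spaced radar-plot axis, and the centroid of the resulting polygon vertices becomes a 2-D feature. The exact normalization used in the paper is not stated, so the feature values below are hypothetical:

```python
import math

def radar_barycentre(features):
    """Place each (normalised) feature on its own radar-plot axis and
    return the centroid of the resulting polygon vertices."""
    n = len(features)
    pts = [(v * math.cos(2 * math.pi * k / n), v * math.sin(2 * math.pi * k / n))
           for k, v in enumerate(features)]
    x = sum(p[0] for p in pts) / n
    y = sum(p[1] for p in pts) / n
    return x, y

# Hypothetical normalised values for: gray mean, skewness, projection
# variance, flatness, kurtosis.
unworn = [0.5, 0.5, 0.5, 0.5, 0.5]
worn = [0.9, 0.7, 0.8, 0.4, 0.6]
print(radar_barycentre(unworn))  # regular pentagon, barycentre at the origin
print(radar_barycentre(worn))    # asymmetric profile shifts the barycentre
```

A symmetric feature profile keeps the barycentre at the origin, while wear-induced asymmetry displaces it, which is what makes the barycentre usable as a compact input to the SVM classifier.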
Procedia PDF Downloads 5191998 Relevance of Copyright and Trademark in the Gaming Industry
Authors: Deeksha Karunakar
Abstract:
The gaming industry is one of the biggest industries in the world. Video games are interactive works of authorship that require the execution of a computer programme on specialized hardware but also incorporate a wide variety of other artistic media, such as music, scripts, stories, video, paintings, and characters, in which the player takes an active role. Therefore, video games are not made as singular, simple works but rather as collections of elements, each of which, if it reaches a certain level of originality and creativity, can be copyrighted on its own. A video game is made up of a wide variety of parts, all of which combine to form the overall sensation that we, the players, have while playing. The entirety of the components is implemented in the form of software code, which is then translated into the game's user interface. While copyright protection is already in place for the software code, the work produced by that code can also be protected by copyright, including the game's storyline or narrative, its characters, and even elements of the code on their own. Every sector requires a legal framework, and the gaming industry is no exception; this underlines the importance of intellectual property laws in each sector. This paper will explore the beginnings of video games, the various aspects of game copyrights, and the approach of the courts, including examples from a few different instances. Although the creative arts have always been known to draw inspiration from and build upon the works of others, it has not always been simple to evaluate whether a game has been cloned. The video game business is experiencing growth as it has never seen before. The majority of today's video games are both pieces of software and works of audio-visual art.
Even though the existing legal framework does not have a clause specifically addressing video games, it is clear that there are many alternative means by which this protection can be granted. This paper will demonstrate the importance of copyright and trademark laws in the gaming industry and their regulation with the help of relevant case laws, utilizing a doctrinal methodology to support its findings. The aim of the paper is to raise awareness of the applicability of intellectual property laws in the gaming industry and to show how the justice system is evolving to adapt to such new industries. Furthermore, it will provide in-depth knowledge of the relationship between the two.Keywords: copyright, DMCA, gaming industry, trademark, WIPO
Procedia PDF Downloads 691997 Combined Analysis of Land use Change and Natural Flow Path in Flood Analysis
Authors: Nowbuth Manta Devi, Rasmally Mohammed Hussein
Abstract:
Flood is one of the most devastating climate impacts that many countries are facing. Many different causes have been associated with the intensity of floods recorded over time: unplanned development, the low carrying capacity of drains, clogged drains, construction in flood plains, or the increasing intensity of rainfall events. While a combination of these causes can certainly aggravate flood conditions, in many cases increasing drainage capacity has not reduced flood risk to the level that was expected. The present study analyzed the extent to which land use is contributing to aggravating the impacts of flooding in a city. Satellite images have been analyzed over a period of 20 years at intervals of 5 years. Both unsupervised and supervised classification methods have been used with the image processing module of ArcGIS. The unsupervised classification was first compared to the basemap available in ArcGIS to get a first overview of the results; these results also guided on-site data collection for the supervised classification. The island of Mauritius is small, and there are large variations in land use over small areas, both within the built areas and in agricultural zones involving food crops. Larger plots of agricultural land under sugar cane plantations are relatively more easily identified; however, the growth stage and health of plants vary, and this had to be verified during ground truthing. The results show that although there have been changes in land use over the span of 20 years, as expected, these were not significant enough to cause a major increase in flood risk levels. A digital elevation model was analyzed for further understanding. It could be noted that, over time, development tampered with natural flow paths in addition to increasing the impermeable areas. This situation results in backwater flows, hence increasing flood risks.Keywords: climate change, flood, natural flow paths, small islands
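The study performs its classification in ArcGIS; purely as a hedged illustration of what the unsupervised step does, the sketch below clusters synthetic multispectral pixels with k-means and compares class proportions between two acquisition dates (the band signatures, class count, and change fraction are all invented for the example):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

def classify(pixels, k=2):
    """Unsupervised classification: cluster pixels by their spectral signature."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)

def scene(built_frac, n=300):
    """Synthetic 4-band spectra: built-up vs. vegetated pixels (signatures hypothetical)."""
    n_built = int(n * built_frac)
    built = rng.normal([0.30, 0.30, 0.30, 0.20], 0.03, (n_built, 4))
    veg = rng.normal([0.05, 0.10, 0.08, 0.50], 0.03, (n - n_built, 4))
    return np.vstack([built, veg])

labels_t0 = classify(scene(0.30))   # first acquisition
labels_t1 = classify(scene(0.38))   # five years later, slightly more built-up
for name, lab in [("t0", labels_t0), ("t1", labels_t1)]:
    print(name, np.bincount(lab, minlength=2) / lab.size)
```

Comparing such per-class proportions across the five-year image intervals is one simple way to quantify how much land use has actually changed.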
Procedia PDF Downloads 161996 Classification of Echo Signals Based on Deep Learning
Authors: Aisulu Tileukulova, Zhexebay Dauren
Abstract:
Radar plays an important role because it is widely used in civil and military fields, and target detection is one of its most important applications. The accuracy of detecting inconspicuous aerial objects in radar facilities is low against a background of noise. Convolutional neural networks can be used to improve the recognition of this type of aerial object. The purpose of this work is to develop an algorithm for recognizing aerial objects using convolutional neural networks, as well as to train such a network. In this paper, the structure of the convolutional neural network (CNN) consists of different types of layers: 8 convolutional layers and 3 fully connected layers. ReLU is used as the activation function in the convolutional layers, while the last layer uses softmax. To detect a target, it is necessary to form a data set for training the neural network. We built a confusion matrix of the CNN model to measure its effectiveness. The results showed that the accuracy when testing the model was 95.7%. Classification of echo signals using a CNN shows high accuracy and significantly speeds up the process of predicting the target.Keywords: radar, neural network, convolutional neural network, echo signals
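The evaluation step described here, building a confusion matrix and reading the test accuracy off it, can be sketched in a few lines; the labels below are a hypothetical toy example, not the paper's test set:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows: true class, columns: predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical labels for two echo classes: 0 = clutter/noise, 1 = aerial object.
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
y_pred = np.array([0, 0, 0, 1, 1, 1, 1, 1, 1, 0])
cm = confusion_matrix(y_true, y_pred, 2)
accuracy = np.trace(cm) / cm.sum()
print(cm)
print(accuracy)   # 0.8 for this toy example
```

The diagonal of the matrix counts correctly classified echoes, so overall accuracy is the trace divided by the total, which is how the 95.7% figure above would be computed from the real test predictions.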
Procedia PDF Downloads 3541995 A Machine Learning Approach for Assessment of Tremor: A Neurological Movement Disorder
Authors: Rajesh Ranjan, Marimuthu Palaniswami, A. A. Hashmi
Abstract:
With the changing lifestyle and environment around us, the prevalence of critical and incurable diseases has proliferated. One such condition is neurological disorder, which is rampant among the old-age population and is increasing at an unstoppable rate. Most neurological disorder patients suffer from some movement disorder affecting the movement of their body parts. Tremor is the most common movement disorder prevalent in such patients, affecting the upper or lower limbs or both extremities. Tremor symptoms are commonly visible in Parkinson’s disease patients, but tremor can also occur on its own (essential tremor). Patients suffering from tremor face enormous trouble in performing daily activities and always need a caretaker for assistance. In clinics, tremor is assessed through a manual clinical rating task such as the Unified Parkinson’s Disease Rating Scale, which is time-consuming and cumbersome. Neurologists have also affirmed the challenge of differentiating a Parkinsonian tremor from a pure tremor, which is essential for providing an accurate diagnosis. Therefore, there is a need to develop a monitoring and assistive tool for tremor patients that continuously checks their health condition, coordinating with clinicians and caretakers for early diagnosis and assistance in performing daily activities. In our research, we focus on developing a system for automatic classification of tremor that can accurately differentiate pure tremor from Parkinsonian tremor using a wearable accelerometer-based device, so that an adequate diagnosis can be provided to the correct patient. In this research, a study was conducted in a neuro-clinic to assess the upper wrist movement of patients suffering from pure (essential) tremor and Parkinsonian tremor using a wearable accelerometer-based device.
Four tasks were designed in accordance with the Unified Parkinson’s Disease motor rating scale, which is used to assess rest, postural, intentional, and action tremor in such patients. Various features, such as time-frequency domain, wavelet-based, and fast Fourier transform based cross-correlation features, were extracted from the tri-axial signal and used as the input feature vector space for different supervised and unsupervised learning tools for quantification of tremor severity. A minimum covariance maximum correlation energy comparison index was also developed and used as an input feature for various classification tools for distinguishing the PT and ET tremor types. An automatic system for efficient classification of tremor was developed using these feature extraction methods, with superior performance achieved by the K-nearest neighbors and support vector machine classifiers, respectively.Keywords: machine learning approach for neurological disorder assessment, automatic classification of tremor types, feature extraction method for tremor classification, neurological movement disorder, parkinsonian tremor, essential tremor
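One of the FFT-based features mentioned above, the dominant frequency of the accelerometer trace, can be sketched as follows. The sampling rate, signal models, and the assumption that a resting Parkinsonian tremor sits near 5 Hz while an essential tremor sits somewhat higher are illustrative simplifications, not the study's actual data or index:

```python
import numpy as np

fs = 100.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 5, 1 / fs)     # 5-second analysis window

def dominant_frequency(signal, fs):
    """FFT-based feature: frequency of the largest non-DC spectral peak."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    return freqs[spectrum.argmax()]

# Synthetic single-axis traces standing in for the tri-axial wrist signal.
rng = np.random.default_rng(0)
parkinsonian = np.sin(2 * np.pi * 5.0 * t) + 0.3 * rng.standard_normal(t.size)
essential = np.sin(2 * np.pi * 8.0 * t) + 0.3 * rng.standard_normal(t.size)

print(dominant_frequency(parkinsonian, fs))  # ~5.0 Hz
print(dominant_frequency(essential, fs))     # ~8.0 Hz
```

Features like this, stacked with the wavelet and cross-correlation features per axis, form the input vector that the KNN and SVM classifiers operate on.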
Procedia PDF Downloads 154