Search results for: human action classification

11176 Post-Earthquake Road Damage Detection by SVM Classification from Quickbird Satellite Images

Authors: Moein Izadi, Ali Mohammadzadeh

Abstract:

Detection of damaged road sections after an earthquake is essential for coordinating rescuers. In this study, an approach is presented for the semi-automatic detection of damaged roads in a city using pre-event vector maps and both pre- and post-earthquake QuickBird satellite images. Damage is defined in this study as the debris of damaged buildings adjacent to the roads. Spectral and texture features are fed to an SVM classification step to detect damage. Finally, the proposed method is tested on QuickBird pan-sharpened images from the Bam city earthquake; the results show that an overall accuracy of 81% and a kappa coefficient of 0.71 are achieved for the damage detection, indicating the efficiency and accuracy of the proposed approach.
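
As a rough illustration of the classification step described above, here is a minimal scikit-learn sketch. The feature matrix, segment counts, and SVM hyperparameters are hypothetical placeholders, not the authors' actual pipeline; only the overall shape (spectral/texture features per road segment, SVM, accuracy and kappa as in the paper) follows the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical per-segment features: e.g. mean spectral bands plus texture
# statistics computed over road buffers adjacent to buildings.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 10))   # 300 road segments, 10 features (placeholder)
y = rng.integers(0, 2, 300)          # 1 = debris/damaged, 0 = intact

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X[:200], y[:200])
pred = clf.predict(X[200:])
# the paper reports overall accuracy and kappa, the two metrics below
print(accuracy_score(y[200:], pred), cohen_kappa_score(y[200:], pred))
```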

Keywords: SVM classifier, disaster management, road damage detection, QuickBird images

Procedia PDF Downloads 619
11175 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment

Authors: Ella Sèdé Maforikan

Abstract:

Accurate land cover mapping is essential for effective environmental monitoring and natural resources management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen due to its high-performance computing capabilities, mitigating the computational burdens associated with traditional land cover classification methods. By eliminating the need to download individual satellite images and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the creation of precise, expeditious land cover maps but also demonstrates the prowess of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy. The study emphasizes the synergy of different input sources to achieve superior accuracy. As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.
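
For readers unfamiliar with the workflow, the following Earth Engine Python sketch shows the general shape of such a Random Forest classification. The area of interest, band list, and training-asset path are hypothetical stand-ins, not the study's actual assets; only the ingredients named in the abstract (Sentinel-2, the 2020-2021 date window, five classes, `smileRandomForest`) are taken from it.

```python
import ee
ee.Initialize()  # may require a project argument depending on your account setup

aoi = ee.Geometry.Rectangle([2.0, 9.0, 2.8, 9.8])   # placeholder for the Beterou catchment
s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
        .filterDate("2020-06-01", "2021-03-31")
        .filterBounds(aoi)
        .median())                                   # cloud-free-ish composite

bands = ["B2", "B3", "B4", "B8", "B11", "B12"]
# 'samples' stands for a labeled FeatureCollection with a 'landcover' property
# spanning the five classes (forest, savanna, cropland, settlement, water).
samples = ee.FeatureCollection("users/your_account/beterou_training")  # hypothetical asset
training = s2.select(bands).sampleRegions(collection=samples,
                                          properties=["landcover"], scale=10)

rf = ee.Classifier.smileRandomForest(100).train(features=training,
                                                classProperty="landcover",
                                                inputProperties=bands)
classified = s2.select(bands).classify(rf)           # the land cover map
```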

Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment

Procedia PDF Downloads 56
11174 A Case-Based Reasoning-Decision Tree Hybrid System for Stock Selection

Authors: Yaojun Wang, Yaoqing Wang

Abstract:

Stock selection is an important decision-making problem. Many machine learning and data mining technologies have been employed to build automatic stock-selection systems. A profitable stock-selection system should consider both the stock’s investment value and the market timing. In this paper, we present a hybrid system that engages both: it uses a case-based reasoning (CBR) model to execute the stock classification and a decision-tree model to help with market timing and stock selection. The experiments show that the performance of this hybrid system is better than that of other techniques with regard to classification accuracy, average return, and the Sharpe ratio.
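
A minimal sketch of how such a hybrid might be wired together, with the CBR step approximated by nearest-neighbour case retrieval; all features, labels, and the agreement rule are hypothetical illustrations, not the authors' actual system.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Hypothetical fundamentals per stock (P/E, ROE, debt ratio, ...) and outcomes
cases = rng.standard_normal((500, 6))        # historical "case base"
case_labels = rng.integers(0, 2, 500)        # 1 = good investment value

# CBR step: retrieve the k most similar past cases, adopt their majority label
knn = NearestNeighbors(n_neighbors=5).fit(cases)
def cbr_classify(stock):
    _, idx = knn.kneighbors(stock.reshape(1, -1))
    return int(case_labels[idx[0]].mean() >= 0.5)

# Decision-tree step: market-timing signal from market-level indicators
market = rng.standard_normal((500, 4))       # e.g. index momentum, volatility
timing_labels = rng.integers(0, 2, 500)      # 1 = favourable timing
timing = DecisionTreeClassifier(max_depth=4).fit(market, timing_labels)

# Select a stock only when both components agree
def select(stock, market_state):
    return (cbr_classify(stock) == 1
            and timing.predict(market_state.reshape(1, -1))[0] == 1)
```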

Keywords: case-based reasoning, decision tree, stock selection, machine learning

Procedia PDF Downloads 412
11173 Multi-Labeled Aromatic Medicinal Plant Image Classification Using Deep Learning

Authors: Tsega Asresa, Getahun Tigistu, Melaku Bayih

Abstract:

Computer vision is a subfield of artificial intelligence that allows computers and systems to extract meaning from digital images and video. It is used in a wide range of fields, including self-driving cars, video surveillance, medical diagnosis, manufacturing, law, agriculture, quality control, health care, facial recognition, and military applications. Aromatic medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, essential oils, decoration, cleaning, and other natural health products for therapeutic and culinary purposes. These plants and their products not only serve as a valuable source of income for farmers and entrepreneurs but are also exported, earning valuable foreign currency. In Ethiopia, there is a lack of technologies for classifying and identifying aromatic medicinal plant parts and the disease types those plants treat. Farmers, industry personnel, academicians, and pharmacists find it difficult to identify plant parts and the disease types they treat before ingredient extraction in the laboratory. Manual plant identification is a time-consuming, labor-intensive, and lengthy process, and only a few studies have been conducted in the area to address these issues. One way to overcome these problems is to develop a deep learning model for the efficient identification of aromatic medicinal plant parts together with their corresponding disease types. The objective of the proposed study is therefore to classify aromatic medicinal plant parts and their disease types using computer vision technology. Morphological characteristics are still the most important tools for the identification of plants, and leaves are the most widely used parts besides roots, flowers, fruits, and latex. For this study, the researchers used RGB leaf images with a size of 128×128×3. Five cutting-edge models were trained: a baseline convolutional neural network, Inception V3, Residual Neural Network (ResNet), MobileNet, and Visual Geometry Group (VGG), chosen after a comprehensive review of the best-performing models. An 80/20 train/test split is used to evaluate the models, and standard classification metrics are used to compare them. The pre-trained Inception V3 model performs best, with training and validation accuracy of 99.8% and 98.7%, respectively.
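
A hedged Keras sketch of the transfer-learning setup the abstract describes: a frozen ImageNet-pretrained Inception V3 backbone on 128×128×3 leaf images. The class count, classifier head, and training call are placeholders, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

n_classes = 10                               # hypothetical number of part/disease labels
base = tf.keras.applications.InceptionV3(weights="imagenet",
                                          include_top=False,
                                          input_shape=(128, 128, 3))
base.trainable = False                       # transfer learning: freeze ImageNet weights

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dropout(0.3)(x)
out = layers.Dense(n_classes, activation="softmax")(x)
model = Model(base.input, out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # 80/20 split as in the paper
```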

Keywords: aromatic medicinal plant, computer vision, convolutional neural network, deep learning, plant classification, residual neural network

Procedia PDF Downloads 176
11172 The Aspect of the Human Bias in Decision Making within Quality Management Systems and LEAN Theory

Authors: Adriana Avila Zuniga Nordfjeld

Abstract:

This paper provides a literature review documenting the state of the art with respect to handling 'human bias' in decision making within established quality management systems (QMS) and LEAN theory, in the context of shipbuilding. Previous research shows that in shipbuilding the actual man-hours used deviate hugely from those planned under project management, owing to errors in planning and to rework caused, among other factors, by human bias in information flows. This reduces efficiency and increases operational costs. Thus, the research question is how QMS and LEAN handle such biases. The findings reveal a gap in research on integrating methods for handling human bias in decision making into QMS and LEAN, not only within shipbuilding but also in general. Theoretical and practical implications are discussed for researchers and practitioners in the areas of decision making, QMS, and LEAN, and directions for future research are suggested.

Keywords: human bias, decision making, LEAN shipbuilding, quality management systems

Procedia PDF Downloads 539
11171 Development of a Computer Aided Diagnosis Tool for Brain Tumor Extraction and Classification

Authors: Fathi Kallel, Abdulelah Alabd Uljabbar, Abdulrahman Aldukhail, Abdulaziz Alomran

Abstract:

The brain is an important organ in our body since it is responsible for most actions, such as vision and memory. However, diseases such as Alzheimer's and tumors can affect the brain and lead to partial or full disorder. Regular diagnosis is necessary as a preventive measure and can help doctors detect possible trouble early and take the appropriate treatment, especially in the case of brain tumors. Different imaging modalities are used for the diagnosis of brain tumors; the most powerful and most widely used is Magnetic Resonance Imaging (MRI). MRI images are analyzed by doctors in order to locate an eventual tumor in the brain and determine the appropriate treatment. Diverse image processing methods have also been proposed to help doctors identify and analyze tumors, and many Computer Aided Diagnosis (CAD) tools built on such image processing algorithms are used by doctors as a second opinion. In this paper, we propose a new advanced CAD for brain tumor identification, classification, and feature extraction. Our proposed CAD includes three main parts. Firstly, we load the brain MRI. Secondly, a robust technique for brain tumor extraction, based on both the Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA), is applied. DWT is characterized by its multiresolution analytic property, which is why it is applied to MRI images at different decomposition levels for feature extraction. Nevertheless, this step suffers from a main drawback: it requires huge storage and is computationally expensive. To decrease the dimensions of the feature vector and the computing time, the PCA technique is considered. In the last stage, according to the extracted features, the brain tumor is classified as either benign or malignant using the Support Vector Machine (SVM) algorithm. A CAD tool for brain tumor detection and classification, including all the above-mentioned stages, is designed and developed using the MATLAB GUIDE user interface.
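
The DWT-then-PCA-then-SVM pipeline can be sketched in a few lines with `pywt` and scikit-learn. This is a minimal illustration under assumed sizes (placeholder 128×128 slices, Haar wavelet, 20 principal components); the paper's actual wavelet, decomposition levels, and feature set may differ.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

def dwt_features(mri_slice, wavelet="haar", level=2):
    """Multi-level 2-D DWT; keep the coarsest approximation as a compact feature map."""
    coeffs = pywt.wavedec2(mri_slice, wavelet, level=level)
    return coeffs[0].ravel()                 # approximation sub-band, flattened

rng = np.random.default_rng(0)
slices = rng.random((60, 128, 128))          # placeholder MRI slices
labels = rng.integers(0, 2, 60)              # 0 = benign, 1 = malignant

X = np.array([dwt_features(s) for s in slices])
# PCA shrinks the large wavelet feature vector, cutting storage and compute,
# which is exactly the drawback the abstract attributes to raw DWT features
clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
clf.fit(X, labels)
```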

Keywords: MRI, brain tumor, CAD, feature extraction, DWT, PCA, classification, SVM

Procedia PDF Downloads 242
11170 Classification of Business Models of Italian Bancassurance by Balance Sheet Indicators

Authors: Andrea Bellucci, Martina Tofi

Abstract:

The aim of this paper is to analyze the business models of bancassurance in Italy for the life business. The life insurance business is very developed in the Italian market, where bank branches hold 80% of the market share. Given its maturity, the life insurance market needs to consolidate its organizational form to allow for the development of the non-life business, which nowadays collects few premiums but represents a great opportunity for bancassurance to enlarge its market share by using its strength in the distribution channel while the market share of independent agents is decreasing. Starting from the main business models of bancassurance for the life business, this paper analyzes the performance of life companies in the Italian market through balance sheet indicators and the main discriminant variables of business models. The study observes trends from 2013 to 2015 in the Italian market by exploiting a database managed by the Associazione Nazionale delle Imprese di Assicurazione (ANIA). The applied approach is a bottom-up analysis, starting from variables and indicators to define the classification of business models. Ward's statistical clustering algorithm is employed to design the business model profiles. The results of the analysis are a representation of the main business models through their profiles on the indicators. The unsupervised analysis so developed is limited by its judgmental dimension, which rests on the researchers' opinion, but it makes it possible to obtain a design of effective business models.
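
Ward's method, the clustering algorithm named above, is available in SciPy. A minimal sketch under assumed data: the indicator list, company count, and number of clusters are hypothetical placeholders, not the ANIA dataset.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.preprocessing import StandardScaler

# Hypothetical balance-sheet indicators per life company, e.g. premium growth,
# expense ratio, ROE, reserves/premiums (2013-2015 averages).
rng = np.random.default_rng(0)
indicators = rng.standard_normal((40, 5))    # 40 companies, 5 indicators

Z = linkage(StandardScaler().fit_transform(indicators), method="ward")
profiles = fcluster(Z, t=4, criterion="maxclust")   # cut tree into 4 business models

for m in np.unique(profiles):                # profile = mean indicators per model
    print(f"model {m}:", indicators[profiles == m].mean(axis=0).round(2))
```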

Keywords: bancassurance, business model, non life bancassurance, insurance business value drivers

Procedia PDF Downloads 292
11169 Musical Composition by Computer with Inspiration from Files of Different Media Types

Authors: Cassandra Pratt Romero, Andres Gomez de Silva Garza

Abstract:

This paper describes a computational system designed to imitate human inspiration during musical composition. The system is called MIS (Musical Inspiration Simulator). MIS takes inspiration from the media to which human beings are exposed daily (visual, textual, or auditory) to create new musical compositions based on the emotions detected in that media. After building the system, we carried out a series of evaluations with volunteer users who used MIS to compose music based on images, texts, and audio files. The volunteers were asked to judge the harmoniousness and innovation of the system's compositions. An analysis of the results points to the difficulty of computationally analyzing the characteristics of the media to which we are exposed daily, as human emotions are subjective in character. This observation will direct future improvements to the system.

Keywords: human inspiration, musical composition, musical composition by computer, theory of sensation and human perception

Procedia PDF Downloads 171
11168 Kant’s Conception of Human Dignity and the Importance of Singularity within Commonality

Authors: Francisco Lobo

Abstract:

Kant's familiar theory of human dignity as a common feature of all rational beings is the starting point of any intellectual endeavor to unravel the implications of this normative notion. Yet it is incomplete, as it neglects the importance of the singularity or uniqueness of the individual. In a first, deconstructive stage, this paper describes the Kantian account of human dignity as one among many conceptions of human dignity. It reads carefully into the original wording used by Kant in German and its English translations, as well as the works of modern commentators, to identify its shortcomings. In a second, constructive stage, it draws on the theories of Aristotle, Alexis de Tocqueville, John Stuart Mill, and Hannah Arendt to enhance the Kantian conception, in the sense that these authors give major importance to the singularity of the individual. The Kantian theory can be perfected by including elements from the works of these authors, while at the same time remaining mindful of the dangers entailed in focusing too much on singularity. The conclusion of this paper is that the Kantian conception of human dignity can be enhanced if it acknowledges that not only morality has dignity, but also the irreplaceable human individual, to the extent that she is a narrative, original creature with the potential to act morally.

Keywords: commonality, dignity, Kant, singularity

Procedia PDF Downloads 276
11167 Comparison of Machine Learning and Deep Learning Algorithms for Automatic Classification of 80 Different Pollen Species

Authors: Endrick Barnacin, Jean-Luc Henry, Jimmy Nagau, Jack Molinie

Abstract:

Palynology is a field of interest in many disciplines due to its multiple applications: chronological dating, climatology, allergy treatment, and honey characterization. Unfortunately, the analysis of a pollen slide is a complicated and time-consuming task that requires the intervention of experts in the field, who are becoming increasingly rare due to economic and social conditions. That is why the need to automate this task is urgent. Many studies have investigated the subject using different standard image processing descriptors, and sometimes hand-crafted ones. In this work, we make a comparative study between classical feature extraction methods (shape, GLCM, LBP, and others) and deep learning (CNNs, autoencoders, transfer learning) on a recognition task over 80 regional pollen species. It has been found that transfer learning is more accurate than the other approaches.
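
As an illustration of the classical side of this comparison, here is a minimal LBP-plus-SVM baseline using scikit-image; the image sizes, LBP parameters, and species labels are placeholders, not the study's actual settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray, P=8, R=1.0):
    """Uniform LBP histogram, a classical texture descriptor for pollen grains."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
images = rng.random((160, 64, 64))           # placeholder grain crops
species = rng.integers(0, 80, 160)           # 80 regional species, as in the paper

X = np.array([lbp_histogram(img) for img in images])
clf = SVC(kernel="rbf").fit(X, species)      # classical baseline to compare with CNNs
```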

Keywords: pollen identification, feature extraction, pollen classification, automated palynology

Procedia PDF Downloads 128
11166 Human Development Strengthening against Terrorism in ASEAN East Asia and Pacific: An Econometric Analysis

Authors: Tismazammi Mustafa, Jaharudin Padli

Abstract:

The frequency of terrorism has been increasing over the years, resulting in loss of life, damage to property, and destruction of the environment. Terrorist incidents are not confined to one particular country but have spread and scattered across countries, causing an increase in the number of terrorism cases. Thus, this paper aims to investigate the effect of human development factors on terrorism in East Asia and Pacific countries. The study uses a panel ARDL model, which makes it possible to capture the long-run and short-run relationships among the variables of interest. A logit model for binary data is also used to represent the binary dependent variable. The study focuses on several human development variables, namely GDP per capita, population, human capital, land area, and technology. The empirical findings reveal that GDP per capita, population, human capital, land area, and technology are positive and statistically significant influences on terrorism. The findings thus provide grounds for preserving human rights and developing public awareness, and offer guidelines to policy makers, emergency managers, first responders, public health workers, physicians, and other researchers.
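
The logit part of the analysis is straightforward to sketch with statsmodels; the data below are random placeholders standing in for the country-year panel, so only the model form, not the results, reflects the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200                                       # placeholder country-year observations
X = rng.standard_normal((n, 5))
# columns stand for: log GDP per capita, log population, human capital,
# log land area, technology proxy
y = rng.integers(0, 2, n)                     # 1 = terrorist incident occurred

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(model.summary())                        # signs/significance of the factors
```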

Keywords: terrorism, East Asia and Pacific, human development, econometric analysis

Procedia PDF Downloads 408
11165 ANFIS Approach for Locating Faults in Underground Cables

Authors: Magdy B. Eteiba, Wael Ismael Wahba, Shimaa Barakat

Abstract:

This paper presents a fault identification, classification, and fault location estimation method based on the Discrete Wavelet Transform and an Adaptive Network Fuzzy Inference System (ANFIS) for medium voltage cables in the distribution system. Different faults and locations are simulated with ATP/EMTP, and selected features of the wavelet-transformed signals are then used as input for training the ANFIS. An accurate fault classifier and locator algorithm was thus designed, trained, and tested using current samples only. The results obtained from the ANFIS output were compared with the real output, and the percentage error between them was found to be less than three percent. Hence, it can be concluded that the proposed technique offers high accuracy in both fault classification and fault location.

Keywords: ANFIS, fault location, underground cable, wavelet transform

Procedia PDF Downloads 503
11164 Identification of Toxic Metal Deposition in Food Cycle and Its Associated Public Health Risk

Authors: Masbubul Ishtiaque Ahmed

Abstract:

Food chain contamination by heavy metals has become a critical issue in recent years because of their potential accumulation in biosystems through contaminated water and soil. Industrial discharge, fertilizers, contaminated irrigation water, fossil fuels, sewage sludge, and municipal wastes are the major sources of heavy metal contamination in soils and of subsequent uptake by crops. The main objectives of this project were to determine the levels of minerals, trace elements, and heavy metals in major foods and beverages consumed by the poor and non-poor households of Dhaka city, to assess the dietary risk exposure to heavy metal and trace metal contamination and its potential health implications, and to make recommendations for action. Heavy metals are naturally occurring elements that have a high atomic weight and a density at least five times greater than that of water. Their multiple industrial, domestic, agricultural, medical, and technological applications have led to their wide distribution in the environment, raising concerns over their potential effects on human health and the environment. Their toxicity depends on several factors, including the dose, route of exposure, and chemical species, as well as the age, gender, genetics, and nutritional status of exposed individuals. Because of their high degree of toxicity, arsenic, cadmium, chromium, lead, and mercury rank among the priority metals of public health significance. These metallic elements are considered systemic toxicants that are known to induce multiple organ damage, even at low levels of exposure. This review provides an analysis of their environmental occurrence, production and use, potential for human exposure, and molecular mechanisms of toxicity and carcinogenicity.

Keywords: food chain, minerals, trace elements, heavy metals, human exposure, toxicity, carcinogenicity

Procedia PDF Downloads 278
11163 Evaluation of the Matching Optimization of Human-Machine Interface Matching in the Cab

Authors: Yanhua Ma, Lu Zhai, Xinchen Wang, Hongyu Liang

Abstract:

In this paper, based on an understanding of the development status of the human-machine interface in today's automobile cabs, a subjective and objective evaluation system for the optimization of human-machine interface matching in the automobile cab is established. The human-machine interface of the cab is divided into a software interface and a hardware interface. An objective evaluation method based on software human factor analysis is used to evaluate hardware interface matching; the analytic hierarchy process is used to establish the evaluation index system for software interface matching optimization, and the multi-level fuzzy comprehensive evaluation method is used to evaluate software interface matching. This article takes the Dongfeng Sokon (DFSK) C37 automobile as an example: the evaluation method given in the paper is used to carry out the relevant analysis and evaluation, and corresponding optimization suggestions are given, which have reference value for designers.
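
The analytic hierarchy process step can be illustrated numerically: criterion weights are taken from the principal eigenvector of a pairwise comparison matrix, and a consistency ratio checks the judgments. The matrix below is a hypothetical example, not the paper's actual index system.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for four interface criteria
# (Saaty 1-9 scale); A[i, j] = importance of criterion i relative to j.
A = np.array([[1,   3,   5,   2],
              [1/3, 1,   2,   1/2],
              [1/5, 1/2, 1,   1/3],
              [1/2, 2,   3,   1]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                  # AHP priority weights

CI = (eigvals.real[k] - len(A)) / (len(A) - 1)
CR = CI / 0.90                                # Saaty random index RI = 0.90 for n = 4
print("weights:", w.round(3), "consistency ratio:", round(CR, 3))  # CR < 0.1 is acceptable
```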

Keywords: analytic hierarchy process, fuzzy comprehension evaluation method, human-machine interface, matching optimization, software human factor analysis

Procedia PDF Downloads 137
11162 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification

Authors: Hung-Sheng Lin, Cheng-Hsuan Li

Abstract:

Over the past few years, kernel-based algorithms have been widely used to extend some linear feature extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE) to their nonlinear versions, kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function. Moreover, they have been applied to improve the target detection or image classification of hyperspectral images. Double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and has good performance in hyperspectral image classification. The DNP structure is an extension of the k-nearest neighbor technique. For each sample, there are two corresponding nearest proportions of samples: the self-class nearest proportion and the other-class nearest proportion. The term "nearest proportion" used here considers both local information and more global information. With these settings, the effect of the overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high-dimensional inference problems, particularly in small-sample situations. Hence, an improved estimator based on shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to retaining the advantages of DNP, KDNP surpasses DNP in the experimental results. According to the experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, and NWFE, and their kernel versions, KPCA, GDA, and KNWFE.
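
To make the kernel-extension idea concrete, here is the simplest member of the family named above, KPCA, applied to placeholder hyperspectral pixels with scikit-learn. This illustrates the generic kernel feature extraction workflow, not the paper's KDNP algorithm itself.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
pixels = rng.random((500, 200))              # 500 pixels, 200 spectral bands (placeholder)
labels = rng.integers(0, 5, 500)             # 5 land-cover classes

# nonlinear directions of largest variance in the kernel-induced feature space
kpca = KernelPCA(n_components=15, kernel="rbf", gamma=1.0 / 200)
X_low = kpca.fit_transform(pixels)
clf = SVC().fit(X_low, labels)               # classify in the reduced space
```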

Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest proportion feature extraction

Procedia PDF Downloads 335
11161 Reliability Assessment and Failure Detection in a Complex Human-Machine System Using Agent-Based and Human Decision-Making Modeling

Authors: Sanjal Gavande, Thomas Mazzuchi, Shahram Sarkani

Abstract:

In a complex aerospace operational environment, identifying failures in a procedure involving multiple human-machine interactions is difficult. These failures could lead to accidents causing loss of hardware or human life. The likelihood of failure further increases if operational procedures are tested for a novel system with multiple human-machine interfaces and no prior performance data. The existing approach in the literature of reviewing complex operational tasks in flowchart or tabular form does not provide any insight into potential system failures due to human decision-making ability. To address these challenges, this research explores an agent-based simulation approach for reliability assessment and fault detection in complex human-machine systems while utilizing a human decision-making model. The simulation predicts the emergent behavior of the system arising from the interaction between humans, with their decision-making capability, and the varying states of the machine, and vice versa. Overall system reliability is evaluated based on a defined set of success-criteria conditions and the number of recorded failures over an assigned limit of Monte Carlo runs. The study also aims at identifying high-likelihood failure locations for the system. The research concludes that system reliability and failures can be effectively calculated when individual human and machine agent states are clearly defined. This research is limited to the operations phase of a system lifecycle process in an aerospace environment only. Further exploration of the proposed agent-based and human decision-making model will be required to allow for a greater understanding of this topic for application outside of the operations domain.

Keywords: agent-based model, complex human-machine system, human decision-making model, system reliability assessment

Procedia PDF Downloads 160
11160 An Analysis of Classification of Imbalanced Datasets by Using Synthetic Minority Over-Sampling Technique

Authors: Ghada A. Alfattni

Abstract:

Analysing unbalanced datasets is one of the challenges that practitioners in the machine learning field face. Many studies have been carried out to determine the effectiveness of the synthetic minority over-sampling technique (SMOTE) in addressing this issue. The aim of this study was therefore to compare the effectiveness of SMOTE across different models on unbalanced datasets. Three classification models (Logistic Regression, Support Vector Machine, and Nearest Neighbour) were tested on multiple datasets; the same datasets were then oversampled using SMOTE and fed again to the three models to compare the differences in performance. The results of the experiments show that the highest number of nearest neighbours gives the lowest error rates.
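
SMOTE is available off the shelf in imbalanced-learn. A minimal sketch on a synthetic 9:1 dataset; the dataset and model choice are placeholders, and `k_neighbors` is the parameter whose effect the study examines.

```python
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
print("before:", Counter(y))                 # roughly 9:1 imbalance

# k_neighbors controls how many minority neighbours seed each synthetic point;
# the study's observation is that larger values tended to lower error rates.
X_res, y_res = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
print("after: ", Counter(y_res))             # balanced classes

clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
```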

Keywords: imbalanced datasets, SMOTE, machine learning, logistic regression, support vector machine, nearest neighbour

Procedia PDF Downloads 341
11159 Location and Group Specific Differences in Human-Macaque Interactions in Singapore: Implications for Conflict Management

Authors: Srikantan L. Jayasri, James Gan

Abstract:

The changes in Singapore's land use, the natural preference of long-tailed macaques (Macaca fascicularis) for living on forest edges, and their adaptability have led to an interface between humans and macaques. Studies have shown that two-thirds of human-macaque interactions in Singapore are related to human food. We aimed to assess differences among macaque groups in their dependence on human food and their interaction with humans as indicators of the level of interface. Field observations using instantaneous scan sampling and all-occurrence ad libitum sampling were carried out for 23 macaque groups over 28 days, recording 71.5 hours of observations. Data on macaque behaviour, demography, and the frequency and nature of human-macaque interactions were collected. None of the groups was found to rely completely on human food sources. Of the 23 groups, 40% were directly or indirectly provisioned by humans, and one-third engaged in some form of interaction with humans. Three groups that were directly fed by humans contributed 83% of the total human-macaque interactions observed during the study. Our study indicates that interactions between humans and macaques are concentrated in specific groups and in those fed by humans regularly. Although feeding monkeys is illegal in Singapore, such incidents seem to persist in specific locations. We emphasize the importance of group- and location-specific assessment of existing human-wildlife interactions. Conflict management strategies should be location-specific in order to address the cause of the interactions.

Keywords: primates, Southeast Asia, wildlife management, Singapore

Procedia PDF Downloads 473
11158 Rank-Based Chain-Mode Ensemble for Binary Classification

Authors: Chongya Song, Kang Yen, Alexander Pons, Jin Liu

Abstract:

In the field of machine learning, ensembles are commonly employed to improve performance over multiple base classifiers. However, true predictions are often canceled out by false ones during consensus due to a phenomenon called the "curse of correlation", which manifests as strong interference among the predictions produced by the base classifiers. In addition, existing practices are still unable to effectively mitigate the problem of imbalanced classification. Based on an analysis of our experimental results, we conclude that both problems are caused by inherent deficiencies in the consensus approach. Therefore, we create an enhanced ensemble algorithm that adopts a purpose-designed rank-based chain-mode consensus to overcome the two problems. To evaluate the proposed ensemble algorithm, we employ the well-known benchmark dataset NSL-KDD (the improved version of the KDDCup99 dataset produced by the University of New Brunswick) to compare the proposed algorithm with 8 common ensemble algorithms. In particular, each compared ensemble classifier uses the same 22 base classifiers, so that the differences in the improvements in accuracy and reliability over the base classifiers can be truly revealed. As a result, the proposed rank-based chain-mode consensus proves to be a more effective ensemble solution than the traditional consensus approach, outperforming the 8 ensemble algorithms by 20% on almost all compared metrics, which include accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve.
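
To show the flavour of rank-based consensus (as opposed to the paper's specific chain-mode algorithm, which is not reproduced here), the sketch below averages each base classifier's scores as ranks rather than raw probabilities, so that one over-confident classifier cannot drown out the others; all data are placeholders.

```python
import numpy as np
from scipy.stats import rankdata

def rank_consensus(score_matrix, threshold=0.5):
    """Combine base classifiers by averaging per-sample ranks of their scores."""
    ranks = np.vstack([rankdata(s) for s in score_matrix])   # one row per classifier
    ranks /= ranks.shape[1]                                  # normalise ranks to (0, 1]
    return (ranks.mean(axis=0) > threshold).astype(int)

# scores[i, j]: classifier i's positive-class probability for sample j
rng = np.random.default_rng(0)
scores = rng.random((22, 100))               # 22 base classifiers, as in the paper
y_pred = rank_consensus(scores)
```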

Keywords: consensus, curse of correlation, imbalance classification, rank-based chain-mode ensemble

Procedia PDF Downloads 130
11157 Attention Multiple Instance Learning for Cancer Tissue Classification in Digital Histopathology Images

Authors: Afaf Alharbi, Qianni Zhang

Abstract:

The identification of malignant tissue in histopathological slides holds significant importance in both clinical settings and pathology research. This paper introduces a methodology for automatically categorizing cancerous tissue through the utilization of a multiple-instance learning (MIL) framework. The framework is developed to learn the Bernoulli distribution of the bag-label probability using neural networks. Furthermore, we put forward a neural-network-based permutation-invariant aggregation operator, equivalent to an attention mechanism, which is applied to the multi-instance learning network. Through empirical evaluation on an openly available colon cancer histopathology dataset, we provide evidence that our approach surpasses various conventional deep learning methods.
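
A minimal PyTorch sketch of such an attention-based, permutation-invariant MIL pooling over patch embeddings; the embedding size, hidden width, and classifier head are assumed placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Permutation-invariant attention pooling over instance embeddings."""
    def __init__(self, dim=512, hidden=128):
        super().__init__()
        self.V = nn.Linear(dim, hidden)   # projects each instance
        self.w = nn.Linear(hidden, 1)     # scores each instance

    def forward(self, h):                 # h: (num_instances, dim)
        a = torch.softmax(self.w(torch.tanh(self.V(h))), dim=0)  # (n, 1) weights
        return (a * h).sum(dim=0), a      # bag embedding and attention weights

# a bag of 20 patch embeddings -> one bag-level Bernoulli probability
pool = AttentionMILPooling()
head = nn.Sequential(nn.Linear(512, 1), nn.Sigmoid())  # bag-label probability
bag, weights = pool(torch.randn(20, 512))
p_malignant = head(bag)
```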

Keywords: attention multiple instance learning, MIL and transfer learning, histopathological slides, cancer tissue classification

Procedia PDF Downloads 99
11156 The Communist Party of China’s Approach to Human Rights and the Death Penalty in China since 1979

Authors: Huang Gui

Abstract:

The issues of human rights and the death penalty constantly draw attention from international scholars, critics, observers, and activists, as well as Chinese scholars, and most of those looking at these problems approach them from a single legal or political perspective, while the real relationship between the Chinese political regime and legislation is often ignored. In accordance with the Constitution of the P.R.C., the Communist Party of China (CPC) plays a key role not merely in the political field but in legislation and law enforcement as well. The legislation therefore has to implement the party's theory and outlook and realize the party's policies, and the death penalty system is no exception, even though it is only a concrete punishment system. Considering this point, and building on an introduction to the relationship between the CPC and legislation, this paper explores the shifts in the CPC's outlook on human rights and the corresponding changes in the death penalty system across different eras. In the Maoist era, the issue of human rights was rejected and treated as an exclusion zone, and the death penalty was unjustifiably imposed. Human rights were politically recognized and accepted in the Deng era, but the CPC had its own viewpoint on them: it emphasized national security and stability, and individual human rights were not correspondingly and reasonably taken into account. The death penalty was overused and treated as an important measure to control crime. In the post-Deng era, human rights have gradually developed and gained recognition. The phrase "the state respects and protects human rights" is contained in the Constitution of the P.R.C., and individual human rights are gradually being valued, but the CPC still focuses on state security, development, and stability, and the individual right to life has not been valued as much as the right to subsistence. Although steps toward reforming the death penalty are being taken, there are still 46 crimes punishable by death. The CPC should change its outlook, pay more attention to the right to life, and try to abolish the death penalty both de facto and de jure.

Keywords: criminal law, communist party of China, death penalty, human rights, China

Procedia PDF Downloads 411
11155 On Early Verb Acquisition in Chinese-Speaking Children

Authors: Yating Mu

Abstract:

Young children acquire their native language with amazing rapidity. Having noticed this interesting phenomenon, many linguists, as well as psychologists, have devoted themselves to exploring the best explanations, and research on first language acquisition has emerged. Early lexical development is an important branch of children's FLA (first language acquisition). The verb, the most significant lexical class and the most grammatically complex syntactic category or word type, is not only the core of exploring the syntactic structures of language but also plays a key role in analyzing semantic features. Early verb development therefore has a great impact on children's early lexical acquisition. Most scholars conclude that verbs are, in general, very difficult to learn, because the problem in verb learning may be more about mapping a specific verb onto an action or event than about learning the underlying relational concepts that the verb or relational term encodes. However, previous research on early verb development has mainly focused on the debate over whether there is a noun bias or a verb bias in children's early productive vocabulary. There is little research on the general characteristics of children's early verbs covering both semantic and syntactic aspects, let alone a general survey of Chinese-speaking children's verb acquisition. Therefore, the author attempts to examine the general conditions and characteristics of Chinese-speaking children's early productive verbs, based on data from a longitudinal study of three Chinese-speaking children. In order to present an overall picture of Chinese verb development, both semantic and syntactic aspects are addressed in the present study. For the semantic analysis, a classification method is adopted first. The verb category is a sophisticated class in Mandarin, so it is necessary to divide it into small sub-types, making the research much easier. By making a reasonable classification into eight verb classes on the basis of semantic features, the research aims to find out whether there exist any universal rules in Chinese-speaking children's verb development. As for the syntactic aspect of the verb category, a debate between the nativist account and the usage-based approach has lasted for quite a long time. By analyzing the longitudinal Mandarin data, the author attempts to find out whether usage-based theory can fully explain the characteristics of Chinese verb development. To sum up, this thesis applies a descriptive research method to investigate the acquisition and usage of Chinese-speaking children's early verbs, with the purpose of providing a new perspective on the semantic and syntactic features of early verb acquisition.

Keywords: Chinese-speaking children, early verb acquisition, verb classes, verb grammatical structures

Procedia PDF Downloads 359
11154 Metabolic Pathway Analysis of Microbes using the Artificial Bee Colony Algorithm

Authors: Serena Gomez, Raeesa Tanseen, Netra Shaligram, Nithin Francis, Sandesh B. J.

Abstract:

The human gut hosts a community of microbes that has substantial effects on human health and disease. Metabolic modeling can help to predict the relative populations of stable microbes and their effect on health and disease. In order to study and visualize microbes in the human gut, we developed a tool that offers the following modules: a module that performs Flux Balance Analysis for microbes in the human gut using the Artificial Bee Colony optimization algorithm, and a module that runs simulations for an individual microbe under different conditions, such as aerobic and anaerobic, and visualizes the results of these simulations.
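
For context, the Artificial Bee Colony algorithm itself is a population-based optimizer with employed bees, onlookers, and scouts. A compact, self-contained sketch follows; the objective below is a toy sphere function standing in for a flux objective, and all parameters are illustrative defaults, not the authors' configuration.

```python
import numpy as np

def abc_minimize(f, bounds, n_food=20, limit=30, iters=200, seed=0):
    """Minimize f over box `bounds` (shape (dim, 2)) with a basic Artificial Bee Colony."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    X = rng.uniform(lo, hi, (n_food, dim))        # food sources (candidate solutions)
    fit = np.array([f(x) for x in X])
    trials = np.zeros(n_food, dtype=int)

    def try_improve(i):
        k = rng.integers(n_food)
        while k == i:                             # partner source must differ
            k = rng.integers(n_food)
        j = rng.integers(dim)                     # perturb one coordinate
        cand = X[i].copy()
        cand[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
        cand[j] = np.clip(cand[j], lo[j], hi[j])
        fc = f(cand)
        if fc < fit[i]:
            X[i], fit[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                   # employed bees refine each source
            try_improve(i)
        p = fit.max() - fit + 1e-12               # onlookers favour better sources
        for i in rng.choice(n_food, size=n_food, p=p / p.sum()):
            try_improve(i)
        for i in np.where(trials > limit)[0]:     # scouts abandon exhausted sources
            X[i] = rng.uniform(lo, hi, dim)
            fit[i] = f(X[i])
            trials[i] = 0
    b = fit.argmin()
    return X[b], fit[b]

# toy check: a flux objective would replace this sphere function
best_x, best_f = abc_minimize(lambda v: float(np.sum((v - 0.5) ** 2)),
                              np.array([[0.0, 1.0]] * 5))
```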

Keywords: microbes, metabolic modeling, flux balance analysis, artificial bee colony

Procedia PDF Downloads 93
11153 Research Action Fields at the Nexus of Digital Transformation and Supply Chain Management: Findings from Practitioner Focus Group Workshops

Authors: Brandtner Patrick, Staberhofer Franz

Abstract:

Logistics and Supply Chain Management are of crucial importance for organisational success. In the era of Digitalization, several implications and improvement potentials for these domains arise, which at the same time could lead to decreased competitiveness and endanger long-term company success if ignored or neglected. However, empirical research on Digitalization and the benefits practitioners attribute to it is scarce and mainly focused on single technologies or on separate, isolated Supply Chain blocks such as distribution logistics or procurement. The current paper applies a holistic focus group approach to elaborate practitioner use cases at the nexus of the concepts of Supply Chain Management (SCM) and Digitalization. In the course of three focus group workshops with over 45 participants from more than 20 organisations, a comprehensive set of benefit entitlements and areas for improvement in applying digitalization to SCM is developed. The main results of the paper indicate the relevance practitioners attach to realizing Digitalization in practice. The benefit entitlements are aggregated and transformed into seventeen concrete research action fields that can serve as starting points for future research projects in this area. The main contribution of this paper is an empirically grounded basis for future research projects and an overview of actual research action fields from the practitioners' point of view.

Keywords: digital supply chain, digital transformation, supply chain management, value networks

Procedia PDF Downloads 165
11152 Classification Based on Deep Neural Cellular Automata Model

Authors: Yasser F. Hassan

Abstract:

Deep learning is a branch of machine learning that has seen great achievements in research and applications. Cellular neural networks can be regarded as arrays of nonlinear analog processors, called cells, connected in a way that allows parallel computation. This paper discusses how to use a deep learning structure to represent a neural cellular automata model. The proposed learning technique for the cellular automata model is examined from the perspective of deep learning structure. A deep neural cellular automata system modifies each neuron based on the behavior of the individual cell and its decisions, as a result of multi-level deep structure learning. The paper presents the architecture of the model, and the results of simulating the approach are given. The results of the implementation enrich the deep neural cellular automata system and shed light on the formulation of the model and the learning within it.

Keywords: cellular automata, neural cellular automata, deep learning, classification

Procedia PDF Downloads 186
11151 Design of Demand Pacemaker Using an Embedded Controller

Authors: C. Bala Prashanth Reddy, B. Abhinay, C. Sreekar, D. V. Shobhana Priscilla

Abstract:

The project aims at designing an emergency pacemaker capable of delivering shocks to a human heart that has suddenly stopped working. A pacemaker is a machine commonly used by cardiologists to shock a human heart back into operation. The heart works through small cells, called pacemaker cells, that send electrical pulses telling the cardiac muscles when to pump blood; when these electrical pulses stop, the heart stops beating, and a pacemaker is then used to shock the heart muscles and the pacemaker cells back into action. This is achieved by rubbing the two panels of the pacemaker together to create an adequate electrical current, after which the heart returns to its normal state. The proposed system continuously displays a person's heart rate and blood pressure on an LCD, and the attending doctor receives the heart rate and blood pressure readings continuously through a GSM modem in the form of SMS alerts. In the case of an abnormal condition, the doctor sends back a formatted message specifying the amount of electric shock needed, and the microcontroller automatically feeds this input to the pacemaker, which in turn delivers the shock to the patient. The heart beat monitoring and display system is portable and a good replacement for the older, less efficient stethoscope. With a stethoscope, the heart rate is calculated manually, and the probability of error is high because the rate lies in the range of 70 to 90 beats per minute, with each beat lasting less than one second; this device can therefore be considered a very good alternative to a stethoscope.

Keywords: missing R wave, PWM, demand pacemaker, heart

Procedia PDF Downloads 467
11150 Torture and Turkey: Legal Situation Related to Torture in Turkey and the Issue of Impunity of Torture

Authors: Zeynep Üskül Engin

Abstract:

Looking at world history, one can easily see that the most drastic evil comes to humans from their own kind. Humanity, proving that Hobbes was actually right, finally agreed on taking some necessary measures after the destructive effects of the great World Wars. After this, human rights came to be more commonly set down in written form, and the priority among the values and goals of a democratic society is now to protect its individuals. Accordingly, the right to life is held to be valuable, and all existing forms of torture and inhuman and humiliating treatment have been banned. Turkey, having signed the international human rights instruments, has aimed at eliminating torture by changing its laws and regulations to a certain extent. Reviewing Turkey's experience, it can be said that during certain periods systematic torture was applied. The urge to enter the European Union and the verdicts against Turkey have led to considerable progress in human rights. Moreover, changes in the law and comprehensive training for police, judges, and medical and prison staff have resulted in positive improvements on this issue. Certainly, these legal updates do not mean the complete elimination of the practice of torture; however, those who commit this crime now stand trial and face severe punishments. In this article, Turkey, which has a notorious reputation in the international arena, is examined through its policy towards torture and its defects in practice.

Keywords: torture, human rights, impunity of torture, sociology

Procedia PDF Downloads 457
11149 A Combination of Independent Component Analysis, Relative Wavelet Energy and Support Vector Machine for Mental State Classification

Authors: Nguyen The Hoang Anh, Tran Huy Hoang, Vu Tat Thang, T. T. Quyen Bui

Abstract:

Mental state classification is an important step toward realizing a control system based on electroencephalography (EEG) signals, which could benefit many paralyzed people, including those with locked-in syndrome or Amyotrophic Lateral Sclerosis. Considering that EEG signals are nonstationary and often contaminated by various types of artifacts, classifying thoughts into the correct mental states is not a trivial problem. Our contribution in this work is to present and realize a novel model that integrates different techniques: independent component analysis (ICA), relative wavelet energy, and a support vector machine (SVM). We applied our model to classify thoughts in two types of experiments, with either two or three mental states. The experimental results show that the presented model outperforms other models using Artificial Neural Networks, K-Nearest Neighbors, etc.
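
A minimal sketch of the three-stage pipeline under stated assumptions: per-epoch FastICA (a shared unmixing matrix is equally common), relative wavelet energy as the sub-band energy fraction, and an RBF SVM. The channel counts, wavelet choice, and placeholder EEG data are hypothetical, not the paper's setup.

```python
import numpy as np
import pywt
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

def relative_wavelet_energy(signal, wavelet="db4", level=4):
    """Energy of each wavelet sub-band divided by total energy."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

def extract_features(X_raw, n_components=8):
    feats = []
    for epoch in X_raw:                      # epoch: (n_channels, n_samples)
        # ICA separates sources / suppresses artifacts (fit per epoch for simplicity)
        sources = FastICA(n_components=n_components,
                          random_state=0).fit_transform(epoch.T).T
        feats.append(np.concatenate([relative_wavelet_energy(s) for s in sources]))
    return np.array(feats)

rng = np.random.default_rng(0)
X_raw = rng.standard_normal((40, 16, 512))   # placeholder EEG epochs
y = rng.integers(0, 2, 40)                   # two mental states
clf = SVC(kernel="rbf").fit(extract_features(X_raw), y)
```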

Keywords: EEG, ICA, SVM, wavelet

Procedia PDF Downloads 378
11148 Foot Recognition Using Deep Learning for Knee Rehabilitation

Authors: Rakkrit Duangsoithong, Jermphiphut Jaruenpunyasak, Alba Garcia

Abstract:

Foot recognition can be applied in many medical fields, such as gait pattern analysis and the knee exercises of patients in rehabilitation. Generally, a camera-based foot recognition system captures a patient image in a controlled room against a controlled background and recognizes the foot from limited views. However, such a system can be inconvenient for monitoring knee exercises at home. To overcome these problems, this paper proposes a deep learning method using Convolutional Neural Networks (CNNs) for foot recognition. The results are compared with a traditional classification method using LBP and HOG features with kNN and SVM classifiers. According to the results, the deep learning method provides better accuracy, but with higher complexity, in recognizing foot images from online databases than the traditional classification method.

Keywords: foot recognition, deep learning, knee rehabilitation, convolutional neural network

Procedia PDF Downloads 154
11147 A Study on the Application of Machine Learning and Deep Learning Techniques for Skin Cancer Detection

Authors: Hritwik Ghosh, Irfan Sadiq Rahat, Sachi Nandan Mohanty, J. V. R. Ravindra

Abstract:

In the rapidly evolving landscape of medical diagnostics, the early detection and accurate classification of skin cancer remain paramount for effective treatment outcomes. This research delves into the transformative potential of Artificial Intelligence (AI), specifically Deep Learning (DL), as a tool for discerning and categorizing various skin conditions. Utilizing a diverse dataset of 3,000 images representing nine distinct skin conditions, we confront the inherent challenge of class imbalance. This imbalance, where conditions like melanomas are over-represented, is addressed by incorporating class weights during the model training phase, ensuring an equitable representation of all conditions in the learning process. Our pioneering approach introduces a hybrid model, amalgamating the strengths of two renowned Convolutional Neural Networks (CNNs), VGG16 and ResNet50. These networks, pre-trained on the ImageNet dataset, are adept at extracting intricate features from images. By synergizing these models, our research aims to capture a holistic set of features, thereby bolstering classification performance. Preliminary findings underscore the hybrid model's superiority over individual models, showcasing its prowess in feature extraction and classification. Moreover, the research emphasizes the significance of rigorous data pre-processing, including image resizing, color normalization, and segmentation, in ensuring data quality and model reliability. In essence, this study illuminates the promising role of AI and DL in revolutionizing skin cancer diagnostics, offering insights into its potential applications in broader medical domains.
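
A hedged Keras sketch of the hybrid backbone described above: frozen ImageNet-pretrained VGG16 and ResNet50 features concatenated ahead of a softmax head, trained with class weights. The input size, dropout rate, and head are assumed placeholders, not the authors' exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, ResNet50
from tensorflow.keras.applications.vgg16 import preprocess_input as vgg_prep
from tensorflow.keras.applications.resnet50 import preprocess_input as res_prep

n_classes = 9                                  # nine skin conditions, per the study
inp = layers.Input(shape=(224, 224, 3))        # hypothetical input size
vgg = VGG16(weights="imagenet", include_top=False, pooling="avg")
res = ResNet50(weights="imagenet", include_top=False, pooling="avg")
vgg.trainable = res.trainable = False          # freeze the ImageNet features

f1 = vgg(layers.Lambda(vgg_prep)(inp))         # each backbone gets its own preprocessing
f2 = res(layers.Lambda(res_prep)(inp))
merged = layers.Concatenate()([f1, f2])        # fused VGG16 + ResNet50 features
out = layers.Dense(n_classes, activation="softmax")(layers.Dropout(0.3)(merged))
model = Model(inp, out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# class weights counter the melanoma over-representation noted above, e.g.
# weights inversely proportional to class frequency:
# model.fit(train_ds, epochs=20, class_weight={i: w for i, w in enumerate(weights)})
```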

Keywords: artificial intelligence, machine learning, deep learning, skin cancer, dermatology, convolutional neural networks, image classification, computer vision, healthcare technology, cancer detection, medical imaging

Procedia PDF Downloads 76