Search results for: lean literature classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8891

8201 Hybrid Approach for Software Defect Prediction Using Machine Learning with Optimization Technique

Authors: C. Manjula, Lilly Florence

Abstract:

Software technology is developing rapidly, which drives the growth of many industries. Nowadays, software-based applications are widely adopted for business purposes. For any software company, developing reliable software is a challenging task, because a faulty software module may harm the growth of the industry and the business. Hence there is a need for techniques that can be used for early prediction of software defects. Because manual prediction is complex, automated software defect prediction techniques have been introduced. These techniques learn patterns from previous software versions and find the defects in the current version. They have attracted researchers because of their significant impact on industrial growth through identifying bugs in software. Several studies have been carried out on this basis, but achieving the desired defect prediction performance is still a challenging task. To address this issue, we present a machine learning based hybrid technique for software defect prediction. First, a Genetic Algorithm (GA) with an improved fitness function is used to optimize the selection of features in the data sets. These features are then processed through a Decision Tree (DT) classification model. Finally, an experimental study is presented in which results from the proposed GA-DT hybrid approach are compared with those from the DT classification technique. The results show that the proposed hybrid approach achieves better classification accuracy.
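
As a rough illustration of the pipeline described above, the sketch below selects features with a simple genetic algorithm whose fitness is the cross-validated accuracy of a decision tree, then compares the result against a plain decision tree. It is a minimal sketch only: the synthetic defect data set, the GA parameters, and the plain cross-validation fitness are assumptions for demonstration, not the authors' improved fitness function or data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)

# toy stand-in for a software-defect data set (rows = modules, columns = code metrics)
X, y = make_classification(n_samples=500, n_features=30, n_informative=8,
                           weights=[0.8, 0.2], random_state=42)

def fitness(mask):
    """Cross-validated accuracy of a decision tree trained on the selected features."""
    if mask.sum() == 0:
        return 0.0
    tree = DecisionTreeClassifier(random_state=0)
    return cross_val_score(tree, X[:, mask.astype(bool)], y, cv=3).mean()

def genetic_feature_selection(n_features, pop_size=20, generations=15,
                              crossover_rate=0.8, mutation_rate=0.02):
    pop = rng.integers(0, 2, size=(pop_size, n_features))   # binary feature masks
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # tournament selection of parents
        parents = pop[[max(rng.choice(pop_size, 3), key=lambda i: scores[i])
                       for _ in range(pop_size)]]
        # single-point crossover
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < crossover_rate:
                cut = rng.integers(1, n_features)
                children[i, cut:] = parents[i + 1, cut:]
                children[i + 1, cut:] = parents[i, cut:]
        # bit-flip mutation
        flip = rng.random(children.shape) < mutation_rate
        children[flip] = 1 - children[flip]
        pop = children
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()

best_mask, ga_dt_acc = genetic_feature_selection(X.shape[1])
dt_acc = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=3).mean()
print(f"DT on all features: {dt_acc:.3f}; GA-DT on {best_mask.sum()} features: {ga_dt_acc:.3f}")
```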

Keywords: decision tree, genetic algorithm, machine learning, software defect prediction

Procedia PDF Downloads 314
8200 American Slavery and the Consciousness of Play

Authors: Janaka B. Lewis

Abstract:

“Narratives of Slavery and the Culture of Play” examines how play is discussed in early African American literature by both men and women to illustrate ways that they negotiated the hierarchy and oppression of enslavement. Reading narratives categorized as “slave narratives,” including those written by Frederick Douglass, Harriet Jacobs, and Olaudah Equiano, through the lens of play theory offers an illuminated analysis of the significance of play culture in these texts. It then reads late nineteenth-century play culture (or absence thereof) portrayed in literature as a lens for more contemporary African American oral and literary culture. These discussions of social constructions through literature bridge analyses of African American-authored texts and create a larger conversation about print media as a tool of activism and resistance. This essay also contributes to a larger body of analysis of nineteenth-century African American culture through literature.

Keywords: childhood, slavery, consciousness of play, 19th century African American culture

Procedia PDF Downloads 483
8199 Black-Box-Based Generic Perturbation Generation Method under Salient Graphs

Authors: Dingyang Hu, Dan Liu

Abstract:

DNN (Deep Neural Network) deep learning models are widely used in classification, prediction, and other task scenarios. To address the difficulty of generating generic adversarial perturbations for deep learning models under black-box conditions, a generic adversarial perturbation generation method based on a saliency map (CJsp) is proposed. It obtains salient image regions by measuring how the input features of an image influence the output results. The method can be understood as a saliency map attack algorithm that obtains false classification results by reducing the weights of salient feature points. Experiments also demonstrate that the method achieves a high success rate in transfer attacks and works as a batch adversarial sample generation method.

Keywords: adversarial sample, gradient, probability, black box

Procedia PDF Downloads 74
8198 Development and Validation of Integrated Continuous Improvement Framework for Competitiveness: Mixed Research of Ethiopian Manufacturing Industries

Authors: Haftu Hailu Berhe, Hailekiros Sibhato Gebremichael, Kinfe Tsegay Beyene, Haileselassie Mehari

Abstract:

The purpose of the study is to develop and validate an integrated, literature-based JIT, TQM, TPM, SCM and LSS framework through a combination of the PDCA cycle and the DMAIC methodology. The study adopted a mixed research approach. The qualitative study employed to develop the framework is based on identifying the unique and common practices of the JIT, TQM, TPM, SCM and LSS initiatives, the existing practice of their integration, and the existing gaps in frameworks and practices, and on developing a new integrated JIT, TQM, TPM, SCM and LSS practice framework. Very few previous studies of the unique and common practices of the five initiatives exist. The quantitative study used to validate the framework is based on empirical analysis of a self-administered questionnaire using a statistical package for social science. An integrated CI framework combining the PDCA cycle and the DMAIC methodology is developed. The proposed framework is constructed as a project-based framework with five detailed implementation phases. Besides, the empirical analysis demonstrated that the proposed framework is valuable if adopted and implemented correctly. So far, no study has proposed and validated an integrated CI framework within the scope of this study. Therefore, this is the earliest study to propose and validate such a framework for manufacturing industries. The proposed framework is applicable to manufacturing industries and can assist in achieving competitive advantages when the manufacturing industries, institutions and government offer unconditional efforts in implementing the full contents of the framework.

Keywords: integrated continuous improvement framework, just in time, total quality management, total productive maintenance, supply chain management, lean six sigma

Procedia PDF Downloads 112
8197 Identity Verification Using k-NN Classifiers and Autistic Genetic Data

Authors: Fuad M. Alkoot

Abstract:

DNA data have been used in forensics for decades. However, current research looks at using DNA as a biometric identity verification modality. The goal is to improve the speed of identification. We aim at using gene data that was initially used for autism detection to find whether, and how accurately, these data can be used for identification applications. Mainly, our goal is to find whether our data preprocessing technique yields data useful as a biometric identification tool. We experiment with using the nearest neighbor classifier to identify subjects. Results show that the optimal classification rate is achieved when the test set is corrupted by normally distributed noise with zero mean and a standard deviation of 1. The classification rate remains close to optimal as the noise standard deviation increases to 3. This shows that the data can be used for identity verification with high accuracy using a simple classifier such as the k-nearest neighbor (k-NN).
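
A minimal sketch of this kind of experiment, assuming scikit-learn, is shown below; the synthetic "gene" vectors merely stand in for the preprocessed autism-study data, so the exact noise level at which accuracy peaks will differ.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(7)

# toy stand-in for preprocessed gene data: 50 subjects, 20 samples per subject
n_subjects, samples_per_subject, n_genes = 50, 20, 100
centres = rng.normal(size=(n_subjects, n_genes))
X = np.repeat(centres, samples_per_subject, axis=0) + 0.5 * rng.normal(
    size=(n_subjects * samples_per_subject, n_genes))
y = np.repeat(np.arange(n_subjects), samples_per_subject)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=7)
knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

# corrupt the test set with zero-mean Gaussian noise of increasing standard deviation
for sigma in [0.0, 1.0, 2.0, 3.0]:
    noisy = X_test + rng.normal(scale=sigma, size=X_test.shape)
    print(f"sigma = {sigma:.1f}  identification rate = {knn.score(noisy, y_test):.3f}")
```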

Keywords: biometrics, genetic data, identity verification, k nearest neighbor

Procedia PDF Downloads 234
8196 Post-Earthquake Road Damage Detection by SVM Classification from Quickbird Satellite Images

Authors: Moein Izadi, Ali Mohammadzadeh

Abstract:

Detection of damaged parts of roads after an earthquake is essential for coordinating rescuers. In this study, an approach is presented for the semi-automatic detection of damaged roads in a city using pre-event vector maps and both pre- and post-earthquake QuickBird satellite images. Damage is defined in this study as the debris of damaged buildings adjacent to the roads. Some spectral and texture features are considered in the SVM classification step to detect damage. Finally, the proposed method is tested on QuickBird pan-sharpened images from the Bam City earthquake, and the results show that an overall accuracy of 81% and a kappa coefficient of 0.71 are achieved for the damage detection. The obtained results indicate the efficiency and accuracy of the proposed approach.

Keywords: SVM classifier, disaster management, road damage detection, quickBird images

Procedia PDF Downloads 606
8195 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment

Authors: Ella Sèdé Maforikan

Abstract:

Accurate land cover mapping is essential for effective environmental monitoring and natural resources management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen due to its high-performance computing capabilities, mitigating computational burdens associated with traditional land cover classification methods. By eliminating the need for individual satellite image downloads and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training on remote sensing data. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the creation of precise, expeditious land cover maps but also demonstrates the prowess of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy. The study emphasizes the synergy of different input sources to achieve superior accuracy. As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.
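
A compressed sketch of this kind of GEE workflow, assuming the earthengine-api Python client, is shown below. The catchment rectangle, the training-asset path, the class property name, and the band and cloud-threshold choices are placeholders, not the study's actual inputs.

```python
import ee
ee.Initialize()

beterou = ee.Geometry.Rectangle([2.0, 9.0, 2.8, 10.0])        # placeholder bounds only

# Sentinel-2 surface-reflectance median composite for the study period
s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
        .filterBounds(beterou)
        .filterDate('2020-06-01', '2021-03-31')
        .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
        .median())

# terrain features (elevation and slope) from the SRTM DEM
dem = ee.Image('USGS/SRTMGL1_003')
terrain = ee.Image.cat([dem.rename('elevation'), ee.Terrain.slope(dem)])

bands = ['B2', 'B3', 'B4', 'B8', 'B11', 'elevation', 'slope']
image = s2.addBands(terrain).select(bands).clip(beterou)

# labelled reference points with an integer 'landcover' property (0=forest ... 4=water)
training_points = ee.FeatureCollection('users/your_account/beterou_training')  # placeholder asset
samples = image.sampleRegions(collection=training_points, properties=['landcover'], scale=10)

classifier = ee.Classifier.smileRandomForest(numberOfTrees=200).train(
    features=samples, classProperty='landcover', inputProperties=bands)
classified = image.classify(classifier)
```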

Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment

Procedia PDF Downloads 41
8194 A Case-Based Reasoning-Decision Tree Hybrid System for Stock Selection

Authors: Yaojun Wang, Yaoqing Wang

Abstract:

Stock selection is an important decision-making problem. Many machine learning and data mining technologies are employed to build automatic stock-selection systems. A profitable stock-selection system should consider both the stock’s investment value and the market timing. In this paper, we present a hybrid system that takes both into account for stock selection. The system uses a case-based reasoning (CBR) model to perform the stock classification and a decision-tree model to help with market timing and stock selection. The experiments show that the performance of this hybrid system is better than that of other techniques with regard to classification accuracy, average return and the Sharpe ratio.

Keywords: case-based reasoning, decision tree, stock selection, machine learning

Procedia PDF Downloads 392
8193 Multi-Labeled Aromatic Medicinal Plant Image Classification Using Deep Learning

Authors: Tsega Asresa, Getahun Tigistu, Melaku Bayih

Abstract:

Computer vision is a subfield of artificial intelligence that allows computers and systems to extract meaning from digital images and video. It is used in a wide range of fields, including self-driving cars, video surveillance, medical diagnosis, manufacturing, law, agriculture, quality control, health care, facial recognition, and military applications. Aromatic medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, essential oils, decoration, cleaning, and other natural health products for therapeutic and culinary purposes. These plants and their products not only serve as a valuable source of income for farmers and entrepreneurs but are also exported, earning valuable foreign currency. In Ethiopia, there is a lack of technologies for the classification and identification of aromatic medicinal plant parts and of the disease types cured by these plants. Farmers, industry personnel, academicians, and pharmacists find it difficult to identify plant parts and the disease types they cure before ingredient extraction in the laboratory. Manual plant identification is a time-consuming, labor-intensive, and lengthy process. Only a few studies have been conducted in the area to address these issues. One way to overcome these problems is to develop a deep learning model for efficient identification of aromatic medicinal plant parts together with their corresponding disease type. The objective of the proposed study is to identify aromatic medicinal plant parts and classify their disease type using computer vision technology. Therefore, this research initiated a model for the classification of aromatic medicinal plant parts and their disease type by exploring computer vision technology. Morphological characteristics are still the most important tools for the identification of plants. Leaves are the most widely used parts of plants, besides roots, flowers, fruits, and latex. For this study, the researchers used RGB leaf images with a size of 128x128x3. The researchers trained five cutting-edge models: a convolutional neural network, Inception V3, Residual Neural Network, Mobile Network (MobileNet), and Visual Geometry Group (VGG). These models were chosen after a comprehensive review of the best-performing models. An 80/20 percentage split is used to evaluate the models, and classification metrics are used to compare them. The pre-trained Inception V3 model performs best, with training and validation accuracy of 99.8% and 98.7%, respectively.
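
A minimal transfer-learning sketch in the spirit of the best-performing configuration, assuming TensorFlow/Keras, is given below; the directory paths, class count, and training hyperparameters are placeholders rather than the study's setup.

```python
import tensorflow as tf

IMG_SHAPE = (128, 128, 3)   # RGB leaf images, as in the study
NUM_CLASSES = 10            # placeholder number of plant-part / disease-type classes

# placeholder directories of labelled leaf images
train_ds = tf.keras.utils.image_dataset_from_directory(
    'data/train', image_size=IMG_SHAPE[:2], batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    'data/val', image_size=IMG_SHAPE[:2], batch_size=32)

base = tf.keras.applications.InceptionV3(
    include_top=False, weights='imagenet', input_shape=IMG_SHAPE)
base.trainable = False      # freeze the pre-trained convolutional base

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # InceptionV3 expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation='softmax'),
])

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```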

Keywords: aromatic medicinal plant, computer vision, convolutional neural network, deep learning, plant classification, residual neural network

Procedia PDF Downloads 159
8192 Management of Interdependence in Manufacturing Networks

Authors: Atour Taghipour

Abstract:

In the real world, each manufacturing company is an independent business unit. These business units are linked to each other through upstream and downstream linkages. The management of these linkages is called coordination, which can be considered a difficult engineering task. The degree of difficulty of coordination depends on the type and nature of the information exchanged between partners as well as on the structure of the relationship, from mutual to network structures. The literature on manufacturing systems comprises a wide variety of coordination methods and approaches. In fact, two main streams of research can be distinguished: centralized versus decentralized coordination. In centralized systems, a high degree of information exchange is required, which sometimes leads to difficulties when independent members do not want to share information. In order to address these difficulties, decentralized approaches to coordinating operations planning decisions, based on minimal information sharing, have been proposed in many academic disciplines. This paper first proposes a framework of analysis for the approaches proposed in the literature; based on this framework, which captures the similarities between approaches, we categorize the existing approaches. This classification can be used as a research map for future research. The results of our paper highlight several opportunities for future research. First, we propose to develop more dynamic and stochastic mechanisms for coordinating the planning of manufacturing units. Second, in order to exploit the complementarities of approaches proposed by diverse scientific disciplines, we propose to integrate the coordination techniques. Finally, based on our approach, we propose to develop coordination standards that guarantee both the complementarity of these approaches and the freedom of companies to adopt any planning tools.

Keywords: network coordination, manufacturing, operations planning, supply chain

Procedia PDF Downloads 262
8191 Development of a Computer Aided Diagnosis Tool for Brain Tumor Extraction and Classification

Authors: Fathi Kallel, Abdulelah Alabd Uljabbar, Abdulrahman Aldukhail, Abdulaziz Alomran

Abstract:

The brain is an important organ in our body since it is responsible for most of our actions, such as vision and memory. However, different diseases, such as Alzheimer's disease and tumors, can affect the brain and lead to a partial or full disorder. Regular examinations are necessary as a preventive measure and can help doctors detect a possible problem early and therefore choose the appropriate treatment, especially in the case of brain tumors. Different imaging modalities are proposed for the diagnosis of brain tumors. The most powerful and most widely used modality is Magnetic Resonance Imaging (MRI). MRI images are analyzed by doctors in order to locate a possible tumor in the brain and decide on the appropriate treatment. Diverse image processing methods have also been proposed to help doctors identify and analyze the tumor. In fact, a large number of Computer Aided Diagnosis (CAD) tools built on image processing algorithms have been proposed and are exploited by doctors as a second opinion to analyze and identify brain tumors. In this paper, we propose a new advanced CAD for brain tumor identification, classification and feature extraction. Our proposed CAD includes three main parts. First, the brain MRI is loaded. Second, a robust technique for brain tumor extraction is proposed. This technique is based on both the Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA). DWT is characterized by its multiresolution analytic property, which is why it was applied to MRI images with different decomposition levels for feature extraction. Nevertheless, this technique suffers from a main drawback since it requires huge storage and is computationally expensive. To decrease the dimensions of the feature vector and the computing time, the PCA technique is considered. In the last stage, according to the different extracted features, the brain tumor is classified as either benign or malignant using the Support Vector Machine (SVM) algorithm. A CAD tool for brain tumor detection and classification, including all the above-mentioned stages, is designed and developed using the MATLAB GUIDE user interface.
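
The DWT-PCA-SVM pipeline outlined above can be sketched as follows. This is a Python approximation with synthetic images rather than the authors' MATLAB tool, and the wavelet, decomposition level, and PCA size are assumptions.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def dwt_features(image, wavelet="haar", level=3):
    """Flatten the level-N approximation coefficients of a 2-D DWT into a feature vector."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    return coeffs[0].ravel()          # coeffs[0] is the coarse approximation sub-band

# toy stand-in for MRI slices: 100 images of 128x128 pixels, two classes
rng = np.random.default_rng(1)
images = rng.standard_normal((100, 128, 128))
labels = rng.integers(0, 2, size=100)          # 0 = benign, 1 = malignant (synthetic)

X = np.array([dwt_features(img) for img in images])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=1)

# PCA shrinks the high-dimensional wavelet features before the SVM
pipeline = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
pipeline.fit(X_train, y_train)
print("test accuracy:", pipeline.score(X_test, y_test))
```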

Keywords: MRI, brain tumor, CAD, feature extraction, DWT, PCA, classification, SVM

Procedia PDF Downloads 229
8190 Classification of Business Models of Italian Bancassurance by Balance Sheet Indicators

Authors: Andrea Bellucci, Martina Tofi

Abstract:

The aim of this paper is to analyze the business models of bancassurance in Italy for the life business. The life insurance business is very developed in the Italian market, and bank branches hold 80% of the market share. Given its maturity, the life insurance market needs to consolidate its organizational form to allow for the development of the non-life business, which nowadays collects few premiums but represents a great opportunity to enlarge the market share of bancassurance, using its strength in the distribution channel while the market share of independent agents is decreasing. Starting with the main business models of bancassurance for the life business, this paper analyzes the performance of life companies in the Italian market through balance sheet indicators and the main discriminant variables of business models. The study observes trends from 2013 to 2015 for the Italian market by exploiting a database managed by the Associazione Nazionale delle Imprese di Assicurazione (ANIA). The applied approach is based on a bottom-up analysis, starting with variables and indicators to define the classification of business models. The statistical classification algorithm proposed by Ward is employed to design business model profiles. The results of the analysis are a representation of the main business models, built from their profiles of indicators. In that way, an unsupervised analysis is developed; its limitation is its judgmental dimension, based on the researchers' opinion, but it makes it possible to obtain a design of effective business models.
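
A minimal sketch of the Ward-based profiling step, assuming SciPy and scikit-learn, is shown below; the balance-sheet indicators and company data are invented placeholders, not ANIA data.

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.preprocessing import StandardScaler

# toy balance-sheet indicators for 12 hypothetical bancassurance companies
rng = np.random.default_rng(3)
indicators = pd.DataFrame({
    "premium_growth":    rng.normal(0.05, 0.03, 12),
    "expense_ratio":     rng.normal(0.25, 0.05, 12),
    "investment_return": rng.normal(0.03, 0.01, 12),
    "solvency_ratio":    rng.normal(1.8, 0.4, 12),
}, index=[f"company_{i}" for i in range(12)])

# Ward's method groups companies so that within-cluster variance grows as little as possible
Z = linkage(StandardScaler().fit_transform(indicators), method="ward")
indicators["business_model"] = fcluster(Z, t=3, criterion="maxclust")

# profile each business model by the mean of its indicators
print(indicators.groupby("business_model").mean())
```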

Keywords: bancassurance, business model, non life bancassurance, insurance business value drivers

Procedia PDF Downloads 280
8189 Tick Induced Facial Nerve Paresis: A Narrative Review

Authors: Jemma Porrett

Abstract:

Background: We present a literature review examining the research surrounding tick paralysis resulting in facial nerve palsy. A case of an intra-aural paralysis tick bite resulting in unilateral facial nerve palsy is also discussed. Methods: A novel case of otoacariasis with associated ipsilateral facial nerve involvement is presented. Additionally, we conducted a review of the literature, searching the MEDLINE and EMBASE databases for relevant literature published between 1915 and 2020. Utilising the keywords 'Ixodes', 'Facial paralysis', 'Tick bite', and 'Australia', 18 articles were deemed relevant to this study. Results: The eighteen articles included in the review comprised a total of 48 patients. Patients' ages ranged from one year to 84 years. Ten studies estimated the possible duration between the tick bite and facial nerve palsy, averaging 8.9 days. Forty-one patients presented with a single tick within the external auditory canal, three had a single tick located on the temple or forehead region, three had post-auricular ticks, and one patient had a remarkable 44 ticks removed from the face, scalp, neck, back, and limbs. A complete ipsilateral facial nerve palsy was present in 45 patients; notably, in 16 patients, this occurred following tick removal. The House-Brackmann classification was utilised in 7 patients: four patients with grade 4, one patient with grade 3, and two patients with grade 2 facial nerve palsy. Thirty-eight patients had complete recovery of facial palsy. Thirteen studies were analysed for time to recovery, with an average time of 19 days. Six patients had partial recovery at the time of follow-up. One article reported improvement in facial nerve palsy at 24 hours, but no further follow-up was reported. One patient was lost to follow-up, and one article failed to mention any resolution of facial nerve palsy. One patient died from respiratory arrest following generalized paralysis. Conclusions: Tick paralysis is a severe but preventable disease. Careful examination of the face, scalp, and external auditory canal should be conducted in patients presenting with otalgia and facial nerve palsy, particularly in tropical areas, to exclude the possibility of tick infestation.

Keywords: facial nerve palsy, tick bite, intra-aural, Australia

Procedia PDF Downloads 86
8188 Comparative Literature, Postcolonialism and the “African World” in Wole Soyinka’s Myth, Literature and the African World

Authors: Karen de Andrade

Abstract:

Literature is generally understood as an aesthetic creation, an artistic object that relates to the history and sociocultural paradigms of a given era. Moreover, through it, we can dwell on the deepest reflections on the human condition. It can also be used to propagate projects of domination, as Edward Said points out in his book Culture and Imperialism, connecting narrative, history and land conquest. Having said that, the aim of this paper is to analyse how Wole Soyinka elaborated his main theoretical work, Myth, Literature and African World, a collection of essays published in 1976, by comparing the philosophical, ideological and aesthetic practices of African, diasporic and European writers from the point of view of the Yoruba tradition, to which he belongs. Moreover, Soyinka believes that (literary) art has an important function in the formation of a people, in the construction of its political identity and in cultural regeneration, especially after the independence. The author's critical endeavour is that of attempting to construct a past. For him, the "African World" is not a mere allegory of the continent, and to understand it in this way would be to perpetuate a colonialist vision that would deny the subjectivities that cross black cultures, history and bodies. For him, comparative literature can be used not to "equate" local African texts with the European canon; but rather to recognise that they have aesthetic value and socio-cultural importance. Looking at the local, the particular and specific to each culture is, according to Soyinka, appropriate for dealing with African cultures, as opposed to abstractions of dialectical materialism or capitalist nationalism. In view of this, in his essays, the author creates a possibility for artistic and social reflection beyond the logic of Western politics.

Keywords: comparative literature, African Literature, Literary Theory, Yoruba Mythology, Wole Soyinka, Afrodiaspora

Procedia PDF Downloads 46
8187 Comparison of Machine Learning and Deep Learning Algorithms for Automatic Classification of 80 Different Pollen Species

Authors: Endrick Barnacin, Jean-Luc Henry, Jimmy Nagau, Jack Molinie

Abstract:

Palynology is a field of interest in many disciplines due to its multiple applications: chronological dating, climatology, allergy treatment, and honey characterization. Unfortunately, the analysis of a pollen slide is a complicated and time-consuming task that requires the intervention of experts in the field, who are becoming increasingly rare due to economic and social conditions. That is why the need to automate this task is urgent. Many studies have investigated the subject using different standard image processing descriptors and sometimes hand-crafted ones. In this work, we make a comparative study between classical feature extraction methods (Shape, GLCM, LBP, and others) and Deep Learning (CNN, Autoencoders, Transfer Learning) to perform a recognition task over 80 regional pollen species. It has been found that the use of Transfer Learning seems to be more precise than the other approaches.
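
As a sketch of the classical, hand-crafted side of this comparison, the snippet below turns each image into a uniform LBP texture histogram and scores an SVM on it (the deep-learning side would follow a standard transfer-learning pipeline with a pre-trained CNN). The synthetic images and class count are placeholders, assuming scikit-image and scikit-learn.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def lbp_histogram(gray_image, points=8, radius=1):
    """Uniform LBP histogram: one hand-crafted texture descriptor per pollen image."""
    lbp = local_binary_pattern(gray_image, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

# toy grayscale pollen images standing in for the 80-species data set (only 4 classes here)
rng = np.random.default_rng(5)
images = (rng.random((120, 64, 64)) * 255).astype(np.uint8)
labels = rng.integers(0, 4, size=120)

X = np.array([lbp_histogram(img) for img in images])
print("LBP + SVM cross-validated accuracy:", cross_val_score(SVC(), X, labels, cv=5).mean())
```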

Keywords: pollens identification, features extraction, pollens classification, automated palynology

Procedia PDF Downloads 119
8186 ANFIS Approach for Locating Faults in Underground Cables

Authors: Magdy B. Eteiba, Wael Ismael Wahba, Shimaa Barakat

Abstract:

This paper presents a fault identification, classification and fault location estimation method based on Discrete Wavelet Transform and Adaptive Network Fuzzy Inference System (ANFIS) for medium voltage cable in the distribution system. Different faults and locations are simulated by ATP/EMTP, and then certain selected features of the wavelet transformed signals are used as an input for a training process on the ANFIS. Then an accurate fault classifier and locator algorithm was designed, trained and tested using current samples only. The results obtained from ANFIS output were compared with the real output. From the results, it was found that the percentage error between ANFIS output and real output is less than three percent. Hence, it can be concluded that the proposed technique is able to offer high accuracy in both of the fault classification and fault location.

Keywords: ANFIS, fault location, underground cable, wavelet transform

Procedia PDF Downloads 488
8185 Quantifying Firm-Level Environmental Innovation Performance: Determining the Sustainability Value of Patent Portfolios

Authors: Maximilian Elsen, Frank Tietze

Abstract:

The development and diffusion of green technologies are crucial for achieving our ambitious climate targets. The Paris Agreement commits its members to develop strategies for achieving net zero greenhouse gas emissions by the second half of the century. Governments, executives, and academics are working on net-zero strategies and the business of rating organisations on their environmental, social and governance (ESG) performance has grown tremendously in its public interest. ESG data is now commonly integrated into traditional investment analysis and an important factor in investment decisions. Creating these metrics, however, is inherently challenging as environmental and social impacts are hard to measure and uniform requirements on ESG reporting are lacking. ESG metrics are often incomplete and inconsistent as they lack fully accepted reporting standards and are often of qualitative nature. This study explores the use of patent data for assessing the environmental performance of companies by focusing on their patented inventions in the space of climate change mitigation and adaptation technologies (CCMAT). The present study builds on the successful identification of CCMAT patents. In this context, the study adopts the Y02 patent classification, a fully cross-sectional tagging scheme that is fully incorporated in the Cooperative Patent Classification (CPC), to identify Climate Change Adaptation Technologies. The Y02 classification was jointly developed by the European Patent Office (EPO) and the United States Patent and Trademark Office (USPTO) and provides means to examine technologies in the field of mitigation and adaptation to climate change across relevant technologies. This paper develops sustainability-related metrics for firm-level patent portfolios. We do so by adopting a three-step approach. First, we identify relevant CCMAT patents based on their classification as Y02 CPC patents. Second, we examine the technological strength of the identified CCMAT patents by including more traditional metrics from the field of patent analytics while considering their relevance in the space of CCMAT. Such metrics include, among others, the number of forward citations a patent receives, as well as the backward citations and the size of the focal patent family. Third, we conduct our analysis on a firm level by sector for a sample of companies from different industries and compare the derived sustainability performance metrics with the firms’ environmental and financial performance based on carbon emissions and revenue data. The main outcome of this research is the development of sustainability-related metrics for firm-level environmental performance based on patent data. This research has the potential to complement existing ESG metrics from an innovation perspective by focusing on the environmental performance of companies and putting them into perspective to conventional financial performance metrics. We further provide insights into the environmental performance of companies on a sector level. This study has implications of both academic and practical nature. Academically, it contributes to the research on eco-innovation and the literature on innovation and intellectual property (IP). Practically, the study has implications for policymakers by deriving meaningful insights into the environmental performance from an innovation and IP perspective. Such metrics are further relevant for investors and potentially complement existing ESG data.
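
The three-step approach described above lends itself to a simple tabular computation. The sketch below, with invented column names and toy values rather than a real patent data source, flags Y02-tagged patents and aggregates citation- and family-based strength metrics per firm.

```python
import pandas as pd

# hypothetical patent-level table; column names and values are assumptions for illustration
patents = pd.DataFrame({
    "firm":              ["A", "A", "A", "B", "B", "C"],
    "cpc_codes":         [["Y02E 10/50", "H01L"], ["H04W"], ["Y02T 10/70"],
                          ["Y02C 20/40"], ["G06F"], ["Y02E 60/10"]],
    "forward_citations": [12, 3, 7, 25, 1, 4],
    "family_size":       [5, 2, 8, 11, 1, 3],
})

# step 1: flag CCMAT patents via the Y02 tagging scheme
patents["is_ccmat"] = patents["cpc_codes"].apply(
    lambda codes: any(code.startswith("Y02") for code in codes))

# steps 2-3: aggregate technological-strength metrics of the Y02 patents per firm
ccmat = patents[patents["is_ccmat"]]
portfolio = ccmat.groupby("firm").agg(
    ccmat_patents=("is_ccmat", "size"),
    mean_forward_citations=("forward_citations", "mean"),
    mean_family_size=("family_size", "mean"),
)
portfolio["ccmat_share"] = portfolio["ccmat_patents"] / patents.groupby("firm").size()
print(portfolio)
```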

Keywords: climate change mitigation, innovation, patent portfolios, sustainability

Procedia PDF Downloads 64
8184 Does Creatine Supplementation Improve Swimming Performance?

Authors: Catrin Morgan, Atholl Johnston

Abstract:

Creatine supplementation should theoretically increase total muscle creatine and so enhance the generation of intramuscular phosphocreatine and subsequent ATP formation. The use of creatine as a potential ergogenic aid in sport has been an area of significant scientific research for a number of years. However, the effect of creatine supplementation on swimming performance is a relatively new area of research and is the subject of this review. In swimming, creatine supplementation could help maintain maximal power output, aid recovery and increase lean body mass. After investigating the underlying theory and science behind creatine supplementation, a literature review was conducted to identify the best evidence on the effect of creatine supplementation on swimming performance. The search identified 27 potential studies, and of these 17 were selected for review. The studies were then categorised into single sprint performance, which involves swimming a short distance race, or repeated interval performance, which involves swimming a series of sprints with intervals of rest between them. None of the studies on the effect of creatine controlled for the multiple confounding factors associated with the measurement of swimming performance. The sample sizes in the studies were limited, which reduced the reliability of the studies and introduced the possibility of bias. The studies reviewed provided insufficient evidence to determine whether creatine supplementation is beneficial to swimming performance. However, what data there was supported the use of creatine supplementation in repeated interval swimming rather than in single sprint swimming. From a review of the studies, it was calculated that, on average, there was a 1.37% increase in swimming performance with the use of creatine for repeated intervals and a 0.86% increase in performance for single sprints. While this may seem minor, it should be remembered that swimming races are often won by much smaller margins. In the 2012 London Olympics, the Men's 100 metres freestyle race was won by a margin of only 0.01 of a second. Therefore any potential benefit could make a dramatic difference to the final outcome of the race. Overall, more research is warranted before the benefits of creatine supplementation on swimming performance can be further clarified.

Keywords: creatine supplementation, repeated interval, single sprint, swimming performance

Procedia PDF Downloads 406
8183 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification

Authors: Hung-Sheng Lin, Cheng-Hsuan Li

Abstract:

Over the past few years, kernel-based algorithms have been widely used to extend linear feature extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE) to their nonlinear versions: kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function. Moreover, they have been applied to improve target detection and image classification for hyperspectral images. Double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and performs well in hyperspectral image classification. The DNP structure is an extension of the k-nearest neighbor technique. For each sample, there are two corresponding nearest proportions of samples: the self-class nearest proportion and the other-class nearest proportion. The term “nearest proportion” used here considers both local information and more global information. With these settings, the effect of overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high-dimensional inference problems, particularly in small-sample situations. Hence, an improved estimator based on shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to having the advantages of DNP, KDNP surpasses DNP in the experimental results. According to the experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, NWFE, and their kernel versions, KPCA, GDA, and KNWFE.

Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest feature extraction

Procedia PDF Downloads 326
8182 Journals' Productivity in the Literature on Malaria in Africa

Authors: Yahya Ibrahim Harande

Abstract:

The purpose of this study was to identify the journals that published articles on malaria in Africa and to determine the core of productive journals among them. The data for the study were culled from the African Index Medicus (AIM) database. A total of 529 articles was gathered from 115 journal titles from 1979 to 2011. In order to obtain the core of productive journals, Bradford's law was applied to the collected data. Five journal titles were identified and determined to be the core journals. The analysis showed that the literature on the subject, malaria, conforms to Bradford's law. Regarding the language dispersion of the literature, English was found to be the dominant language of the journals (80.9%), followed by French (16.5%), Portuguese (1.7%), and German (0.9%). It is recommended that medical libraries acquire the five journals that constitute the core of the malaria literature for the use of their clients; this could also help streamline their acquisition and selection exercises. More research in the subject area using bibliometric approaches is recommended.
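
The Bradford's law analysis behind this study can be illustrated with a short sketch: rank journals by article output, split the articles into three roughly equal thirds, and compare the number of journals each third requires, which the law predicts grows roughly as 1 : n : n². The counts below are hypothetical, not the study's data.

```python
import numpy as np

# hypothetical article counts per journal, ranked from most to least productive
counts = np.array(sorted([60, 45, 40, 38, 30, 25, 20, 15, 12, 10, 8, 7, 6, 5, 5,
                          4, 4, 3, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1], reverse=True))
cum = np.cumsum(counts)

# assign each ranked journal to the zone (0, 1, 2) in which its articles fall
zone = np.searchsorted([cum[-1] / 3, 2 * cum[-1] / 3], cum)
journals_per_zone = np.bincount(zone, minlength=3)

# Bradford's law predicts the journal counts per zone grow roughly as 1 : n : n^2
print("journals per zone:", journals_per_zone)
```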

Keywords: productive journals, malaria disease literature, Bradford's law, core journals, African scholars

Procedia PDF Downloads 327
8181 The Role of Cyfra 21-1 in Diagnosing Non Small Cell Lung Cancer (NSCLC)

Authors: H. J. T. Kevin Mozes, Dyah Purnamasari

Abstract:

Background: Lung cancer is the fourth most common cancer in Indonesia. Non-Small Cell Lung Cancer (NSCLC) accounts for 85% of all lung cancer cases. The indistinct signs and symptoms of NSCLC sometimes lead to misdiagnosis. The gold standard for the diagnosis of NSCLC is the histopathological biopsy, which is invasive. Cyfra 21-1 is a tumor marker found in the intermediate filament proteins of epithelial cells. The accuracy of Cyfra 21-1 in diagnosing NSCLC is not yet established, so this report was written to answer that question. Methods: A literature search was conducted using the online databases ProQuest and PubMed. Studies were then selected according to inclusion and exclusion criteria, and the selected literature was appraised using the criteria of validity, importance, and applicability. Results: Of the six journals appraised, five are valid. The sensitivity reported across the five studies ranges from 50% to 84.5%, while the specificity ranges from 87.8% to 94.4%. The positive likelihood ratios across the appraised studies range from 5.09 to 10.54, which is categorized as intermediate to high. Conclusion: Serum Cyfra 21-1 is a sensitive and very specific tumor marker for the diagnosis of non-small cell lung cancer (NSCLC).
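
The reported likelihood ratios follow directly from sensitivity and specificity. As a purely illustrative check using values near the top of the reported ranges (not pooled estimates from the review):

```python
# illustrative values near the top of the reported ranges, not pooled estimates
sensitivity = 0.845
specificity = 0.92

positive_lr = sensitivity / (1 - specificity)     # how strongly a positive test raises the odds of NSCLC
negative_lr = (1 - sensitivity) / specificity     # how strongly a negative test lowers them

print(f"LR+ = {positive_lr:.1f}, LR- = {negative_lr:.2f}")   # LR+ ~ 10.6, LR- ~ 0.17
```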

Keywords: cyfra 21-1, diagnosis, nonsmall cell lung cancer, NSCLC, tumor marker

Procedia PDF Downloads 218
8180 An Analysis of Classification of Imbalanced Datasets by Using Synthetic Minority Over-Sampling Technique

Authors: Ghada A. Alfattni

Abstract:

Analysing unbalanced datasets is one of the challenges that practitioners in the machine learning field face. However, many studies have been carried out to determine the effectiveness of using the synthetic minority over-sampling technique (SMOTE) to address this issue. The aim of this study was therefore to compare the effectiveness of SMOTE across different models on unbalanced datasets. Three classification models (Logistic Regression, Support Vector Machine and Nearest Neighbour) were tested with multiple datasets; the same datasets were then oversampled using SMOTE and applied again to the three models to compare the differences in performance. The results of the experiments show that the highest number of nearest neighbours gives lower error rates.
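
A minimal sketch of this comparison, assuming imbalanced-learn and scikit-learn, is shown below; the synthetic data set and model settings are placeholders for the study's datasets.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# synthetic imbalanced data set (roughly 1:9 minority-to-majority ratio)
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "nearest neighbour": KNeighborsClassifier(n_neighbors=5),
}

# oversample only the training set so the test set keeps its natural imbalance
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

for name, model in models.items():
    plain = model.fit(X_train, y_train).predict(X_test)
    smoted = model.fit(X_res, y_res).predict(X_test)
    print(f"{name:20s} balanced accuracy: "
          f"{balanced_accuracy_score(y_test, plain):.3f} -> "
          f"{balanced_accuracy_score(y_test, smoted):.3f}")
```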

Keywords: imbalanced datasets, SMOTE, machine learning, logistic regression, support vector machine, nearest neighbour

Procedia PDF Downloads 324
8179 Rank-Based Chain-Mode Ensemble for Binary Classification

Authors: Chongya Song, Kang Yen, Alexander Pons, Jin Liu

Abstract:

In the field of machine learning, ensembles have been employed as a common methodology to improve performance over multiple base classifiers. However, the true predictions are often canceled out by the false ones during consensus due to a phenomenon called the “curse of correlation”, which manifests as strong interference among the predictions produced by the base classifiers. In addition, the existing practices are still not able to effectively mitigate the problem of imbalanced classification. Based on the analysis of our experimental results, we conclude that the two problems are caused by some inherent deficiencies in the consensus approach. Therefore, we create an enhanced ensemble algorithm which adopts a designed rank-based chain-mode consensus to overcome the two problems. In order to evaluate the proposed ensemble algorithm, we employ the well-known benchmark data set NSL-KDD (the improved version of the KDDCup99 dataset produced by the University of New Brunswick) to make comparisons between the proposed algorithm and 8 common ensemble algorithms. In particular, each compared ensemble classifier uses the same 22 base classifiers, so that the differences in the improvements to accuracy and reliability over the base classifiers can be truly revealed. As a result, the proposed rank-based chain-mode consensus is shown to be a more effective ensemble solution than the traditional consensus approach, outperforming the 8 ensemble algorithms by 20% on almost all compared metrics, which include accuracy, precision, recall, F1-score and the area under the receiver operating characteristic curve.

Keywords: consensus, curse of correlation, imbalance classification, rank-based chain-mode ensemble

Procedia PDF Downloads 117
8178 Data Science in Military Decision-Making: A Semi-Systematic Literature Review

Authors: H. W. Meerveld, R. H. A. Lindelauf

Abstract:

In contemporary warfare, data science is crucial for the military in achieving information superiority. Yet, to the authors’ knowledge, no extensive literature survey on data science in military decision-making has been conducted so far. In this study, 156 peer-reviewed articles were analysed through an integrative, semi-systematic literature review to gain an overview of the topic. The study examined to what extent literature is focussed on the opportunities or risks of data science in military decision-making, differentiated per level of war (i.e. strategic, operational, and tactical level). A relatively large focus on the risks of data science was observed in social science literature, implying that political and military policymakers are disproportionally influenced by a pessimistic view on the application of data science in the military domain. The perceived risks of data science are, however, hardly addressed in formal science literature. This means that the concerns on the military application of data science are not addressed to the audience that can actually develop and enhance data science models and algorithms. Cross-disciplinary research on both the opportunities and risks of military data science can address the observed research gaps. Considering the levels of war, relatively low attention for the operational level compared to the other two levels was observed, suggesting a research gap with reference to military operational data science. Opportunities for military data science mostly arise at the tactical level. On the contrary, studies examining strategic issues mostly emphasise the risks of military data science. Consequently, domain-specific requirements for military strategic data science applications are hardly expressed. Lacking such applications may ultimately lead to a suboptimal strategic decision in today’s warfare.

Keywords: data science, decision-making, information superiority, literature review, military

Procedia PDF Downloads 143
8177 Attention Multiple Instance Learning for Cancer Tissue Classification in Digital Histopathology Images

Authors: Afaf Alharbi, Qianni Zhang

Abstract:

The identification of malignant tissue in histopathological slides holds significant importance in both clinical settings and pathology research. This paper introduces a methodology aimed at automatically categorizing cancerous tissue through the utilization of a multiple-instance learning framework. This framework is specifically developed to acquire knowledge of the Bernoulli distribution of the bag label probability by employing neural networks. Furthermore, we put forward a neural network based permutation-invariant aggregation operator, equivalent to attention mechanisms, which is applied to the multi-instance learning network. Through empirical evaluation of an openly available colon cancer histopathology dataset, we provide evidence that our approach surpasses various conventional deep learning methods.
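
A minimal sketch of attention-based MIL pooling of the kind described, written in PyTorch, is given below; the feature dimensions, instance encoder, and toy bag are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Permutation-invariant attention pooling over a bag of instance embeddings."""
    def __init__(self, in_dim: int = 512, hidden_dim: int = 128):
        super().__init__()
        self.V = nn.Linear(in_dim, hidden_dim)   # projects instances into an attention space
        self.w = nn.Linear(hidden_dim, 1)        # scores each instance

    def forward(self, h):                        # h: (num_instances, in_dim)
        scores = self.w(torch.tanh(self.V(h)))   # (num_instances, 1)
        alpha = torch.softmax(scores, dim=0)     # attention weights sum to 1 over the bag
        z = (alpha * h).sum(dim=0)               # (in_dim,) bag-level embedding
        return z, alpha.squeeze(-1)

class MILClassifier(nn.Module):
    """Bag-level binary classifier: instance encoder + attention pooling + Bernoulli head."""
    def __init__(self, in_features: int = 2048, embed_dim: int = 512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_features, embed_dim), nn.ReLU())
        self.pool = AttentionMILPooling(embed_dim)
        self.head = nn.Linear(embed_dim, 1)

    def forward(self, bag):                      # bag: (num_instances, in_features)
        h = self.encoder(bag)
        z, alpha = self.pool(h)
        prob = torch.sigmoid(self.head(z))       # P(bag label = 1), e.g. cancerous slide
        return prob, alpha

# toy bag of 50 patch feature vectors standing in for one histopathology slide
bag = torch.randn(50, 2048)
prob, alpha = MILClassifier()(bag)
print(prob.item(), alpha.shape)
```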

Keywords: attention multiple instance learning, MIL and transfer learning, histopathological slides, cancer tissue classification

Procedia PDF Downloads 86
8176 Classification Based on Deep Neural Cellular Automata Model

Authors: Yasser F. Hassan

Abstract:

Deep learning is a branch of machine learning that has achieved great success in research and applications. Cellular neural networks are regarded as arrays of nonlinear analog processors, called cells, connected in a way that allows parallel computation. This paper discusses how to use a deep learning structure to represent a neural cellular automata model. The proposed learning technique in the cellular automata model is examined from the perspective of deep learning structure. A deep neural cellular automata system modifies each neuron based on the behavior of the individual cell and its decision, as a result of multi-level deep structure learning. The paper presents the architecture of the model, and the results of simulating the approach are given. Results from the implementation enrich the deep neural cellular automata system and shed light on the formulation of the model and the learning within it.

Keywords: cellular automata, neural cellular automata, deep learning, classification

Procedia PDF Downloads 169
8175 Between Fiction and Reality: Reading the Silences in Partition History

Authors: Shazia Salam

Abstract:

This paper focuses on studying the literary reactions of selected Muslim women writers to the Partition of India in the north-western region. It aims to explore how Muslim women experienced the Partition and how that experience was articulated through their writing. There is a serious dearth of research on the experience of Muslim women who witnessed this momentous event in the subcontinent. Since scholars have often questioned the silence around the historiography of Muslim women's experiences, this paper explores whether literature can provide insights that may be less readily available in other modes of narration. Using literature as an archival source, it delves into arenas of history that have been cloistered and closed. Muslim women have been silent about their experiences of Partition, which, at the risk of essentializing, could be attributed to patriarchal constraints and taboos on speaking of intimate matters. These silences have consigned the question of their experience to a realm of anonymity. The lack of ethnographic research has in a way been compensated for in the realm of literature, mainly poetry and fiction. Besides reportage, literature remains an important source of social history about Partition and how Muslim women lived through it. Where traditional history fails to record moments of rupture and dislocation, literature serves this crucial purpose. The central premise of this paper is that there is a need to revise the history of Partition owing to the gaps in historiography. It examines whether literature can serve as a ground for developing new approaches to history, since the question of representation always confronts us: between what a text represents and how it represents it, the imagination of the writer plays a great role in the construction of any text. With this approach as an entry point, this paper aims to unpack the questions of representation, the coalescing of history and literature, and the gendered nature of Partition history. It concludes that the gaps in the narratives of Partition and the memory of Partition can be addressed by using the literary as a source to fill in the cracks and fissures.

Keywords: gender, history, literature, partition

Procedia PDF Downloads 189
8174 Digital Humanities in The US/Mexico Borderlands: Activism, Literature, and Border Crossers

Authors: Martin Camps

Abstract:

The two-thousand-mile border that divides the United States and Mexico is a “contact zone” of cultural friction and unbalanced power relations, as defined by Mary Louise Pratt. The interest of this paper is to analyze digital platforms created to support the study and comprehension of the borderlands for pedagogical and research purposes. The paper explores ways to engage students in archival and analytical practices to build a repository of resources, links, and digital tools, and considers how to adapt them to the study of the borderlands. Sites such as “Torn Apart / Separados,” “Digital Borderlands,” “Borderlands Archives Cartography,” and “Juaritos Literario” show visualizations, mapping, and access to materials and marginal literature on the border phenomenon. Analyzing these projects helps to highlight digital projects and the study of the border, to show how to engage in activism via the study of literature and the representation of a human tragedy that underscores the divisions and biopolitics imposed on the Global South, and to imagine digital border futures.

Keywords: borderlands, digital humanities, activism, border literature

Procedia PDF Downloads 62
8173 Rendering of Indian History: A Study Based on Select Graphic Novels

Authors: Akhila Sara Varughese

Abstract:

In postmodern society, visual narratives have become an emerging genre in the field of literature. Graphic literature focuses on both the literal and the symbolic layers of interpretation. The most salient feature of graphic literature is its exploration of the public history of events and of life narratives. Indian graphic literature re-interprets the canon, style and form of texts in Indian Writing in English, and it demands a new literacy and structure from English literature. With the help of visual-verbal language, graphic narratives discuss various facets of contemporary India. Graphic novels have firmly identified themselves with the art of storytelling because of their capability of expressing human experience to the fullest. In textual novels, the author usually leaves much to the imagination of the readers, but in the case of graphic narratives, the presence of visual elements makes interpretation simpler. India is the second most populous country in the world, with a long tradition of history and culture. Indian literature always tries to reconstruct Indian history in various modes of representation. The present paper focuses on the fictional articulation of Indian history through graphic narratives and analyses how some historical events in India are portrayed. The paper also traces the differences between rendering history in graphic novels and in textual novels. The paper discusses how much the blending of words and images helps in representing Indian history by analyzing graphic novels such as Kashmir Pending by Naseer Ahmed, Delhi Calm by Vishwajyoti Ghosh and Munnu by Malik Sajad.

Keywords: graphic novels, Indian history, representation, visual-verbal literacy

Procedia PDF Downloads 318
8172 A Combination of Independent Component Analysis, Relative Wavelet Energy and Support Vector Machine for Mental State Classification

Authors: Nguyen The Hoang Anh, Tran Huy Hoang, Vu Tat Thang, T. T. Quyen Bui

Abstract:

Mental state classification is an important step in realizing a control system based on electroencephalography (EEG) signals, which could benefit many paralyzed people, including those who are locked-in or have Amyotrophic Lateral Sclerosis. Considering that EEG signals are nonstationary and often contaminated by various types of artifacts, classifying thoughts into the correct mental states is not a trivial problem. In this work, our contribution is that we present and realize a novel model which integrates different techniques for this task: Independent Component Analysis (ICA), relative wavelet energy, and the Support Vector Machine (SVM). We applied our model to classify thoughts in two types of experiments, with either two or three mental states. The experimental results show that the presented model outperforms other models using Artificial Neural Networks, K-Nearest Neighbors, etc.
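
A minimal sketch of the ICA + relative wavelet energy + SVM pipeline, assuming scikit-learn and PyWavelets, is shown below; the toy epochs, the per-epoch use of ICA, and the wavelet settings are simplifying assumptions, not the authors' configuration.

```python
import numpy as np
import pywt
from sklearn.decomposition import FastICA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def relative_wavelet_energy(signal, wavelet="db4", level=4):
    """Energy of each wavelet sub-band divided by the total energy of the signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

def extract_features(epoch, n_components=8):
    """epoch: (n_channels, n_samples) EEG segment -> relative wavelet energies of its ICA sources."""
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(epoch.T).T          # (n_components, n_samples)
    return np.concatenate([relative_wavelet_energy(s) for s in sources])

# toy data standing in for labelled EEG epochs (two mental states)
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 16, 512))          # 40 epochs, 16 channels, 512 samples
labels = rng.integers(0, 2, size=40)

X = np.array([extract_features(e) for e in epochs])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```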

Keywords: EEG, ICA, SVM, wavelet

Procedia PDF Downloads 360