Search results for: Naive Bayes classifier
67 DNA Hypomethylating Agents Induced Histone Acetylation Changes in Leukemia
Authors: Sridhar A. Malkaram, Tamer E. Fandy
Abstract:
Purpose: 5-Azacytidine (5AC) and decitabine (DC) are DNA hypomethylating agents. We recently demonstrated that both drugs increase the enzymatic activity of the histone deacetylase enzyme SIRT6. Accordingly, we compared the genome-wide changes in H3K9 acetylation induced by both drugs in leukemia cells. Description of Methods & Materials: Mononuclear cells from the bone marrow of six de-identified naive acute myeloid leukemia (AML) patients were cultured with 500 nM of either DC or 5AC for 72 h, followed by ChIP-Seq analysis using a ChIP-validated acetylated-H3K9 (H3K9ac) antibody. ChIP-Seq libraries were prepared from treated and untreated cells using the SMARTer ThruPLEX DNA-seq kit (Takara Bio, USA) according to the manufacturer's instructions. Libraries were purified and size-selected with AMPure XP beads at a 1:1 (v/v) ratio. All libraries were pooled prior to sequencing on an Illumina HiSeq 1500. The dual-indexed single-read Rapid Run was performed with 1x120 cycles at a 5 pM final concentration of the library pool. Sequence reads with average Phred quality < 20 or length < 35 bp, PCR duplicates, and reads aligning to blacklisted regions of the genome were filtered out using Trim Galore v0.4.4 and cutadapt v1.18. Reads were aligned to the reference human genome (hg38) using Bowtie v2.3.4.1 in end-to-end alignment mode. H3K9ac-enriched (peak) regions were identified with diffReps v1.55.4, using input samples for background correction. The statistical significance of differential peak counts was assessed with a negative binomial test, treating all individuals as replicates. Data & Results: The data from the six patients showed significant (Padj < 0.05) acetylation changes at 925 loci after 5AC treatment versus 182 loci after DC treatment. Both drugs induced H3K9 acetylation changes at different chromosomal regions, including promoters, coding exons, introns, and distal intergenic regions. Ten common genes showed H3K9 acetylation changes with both drugs. Approximately 84% of the genes showed an H3K9 acetylation decrease after 5AC versus only 54% after DC. Figures 1 and 2 show the heatmaps for the top 100 genes and the 99 genes showing an H3K9 acetylation decrease after 5AC treatment and DC treatment, respectively. Conclusion: Despite the similarity in hypomethylating activity and chemical structure, the effects of the two drugs on H3K9 acetylation were significantly different. More changes in H3K9 acetylation were observed after 5AC treatment compared to DC. The impact of these changes on gene expression and the clinical efficacy of these drugs requires further investigation.
Keywords: DNA methylation, leukemia, decitabine, 5-Azacytidine, epigenetics
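As a hypothetical illustration of the final statistical step (diffReps implements this internally; the counts, design, and use of statsmodels here are assumptions, not the authors' code), a negative binomial test of differential peak counts might look like this:

```python
# Hypothetical sketch: negative binomial GLM testing whether peak read counts
# differ between untreated and 5AC-treated samples, patients as replicates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
counts = np.concatenate([rng.poisson(40, 6), rng.poisson(90, 6)])  # placeholder counts for one locus
treated = np.repeat([0, 1], 6)                                     # 0 = untreated, 1 = treated
design = sm.add_constant(treated)

fit = sm.GLM(counts, design, family=sm.families.NegativeBinomial()).fit()
print(fit.pvalues[1])   # p-value for the treatment effect at this locus
# Per-locus p-values would then be multiplicity-adjusted, and loci with
# Padj < 0.05 reported as differentially acetylated.
```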
Procedia PDF Downloads 146
66 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow
Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat
Abstract:
Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detect student engagement involve periodic human observations that are subject to inter-rater reliability. Our solution uses real-time multimodal multisensor data labeled by objective performance outcomes to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. To achieve this, a type of continuous performance test is introduced, the Seek-X type. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best results: 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement. We compared these results to outcomes from different models: AdaBoost, decision tree, k-nearest neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors, and that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for the classification of engagement and distraction was shown to be eye gaze. We can therefore accurately predict the level of engagement of students with learning disabilities in real time, without relying on human observation, inter-rater agreement, or a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement
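As an illustration of the evaluation protocol described above, the sketch below runs leave-one-out cross-validation with a random forest; the feature matrix and labels are placeholders standing in for the multimodal sensor features and CPT-derived engagement labels.

```python
# Minimal sketch: leave-one-out CV with a random forest over per-session
# features aggregating eye gaze, EEG, body pose and interaction data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

X = np.random.rand(59, 9)          # placeholder: 59 sessions x 9 features
y = np.random.randint(0, 2, 59)    # placeholder engagement labels from CPT outcomes

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.3f}")

# Feature importances indicate which sensor mode drives the classification
# (the paper reports eye gaze as the single most important feature).
clf.fit(X, y)
print(clf.feature_importances_)
```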
Procedia PDF Downloads 94
65 Using Machine Learning to Classify Different Body Parts and Determine Healthiness
Authors: Zachary Pan
Abstract:
Our general mission is to solve the problem of classifying images into different body part types and deciding if each of them is healthy or not. However, for now, we will determine healthiness for only one-sixth of the body parts, specifically the chest, by detecting pneumonia in X-ray scans of chest images. With this type of AI, doctors can use it as a second opinion when they are taking CT or X-ray scans of their patients. Another advantage of using this machine learning classifier is that it has no human weaknesses like fatigue. The overall approach is to split the problem into two parts: first, classify the image, then determine if it is healthy. In order to classify the image into a specific body part class, the body parts dataset must be split into test and training sets. We can then use many models, like neural networks or logistic regression models, and fit them using the training set. Using the test set, we can obtain a realistic estimate of the accuracy the models will have on images in the real world, since these testing images have never been seen by the models before. To increase this testing accuracy, we can also apply complex algorithms to the models, like multiplicative weight update. For the second part of the problem, determining if the body part is healthy, we use another dataset consisting of healthy and non-healthy images of the specific body part, once again split into test and training sets. We then train another neural network on those training images and use the testing set to measure its accuracy. We do this process only for the chest images. A major conclusion reached is that convolutional neural networks are the most reliable and accurate at image classification. In classifying the images, the logistic regression model, the neural network, neural networks with multiplicative weight update, neural networks with the black box algorithm, and the convolutional neural network achieved 96.83 percent, 97.33 percent, 97.83 percent, 96.67 percent, and 98.83 percent accuracy, respectively. On the other hand, the overall accuracy of the model that determines if the images are healthy or not is around 78.37 percent.
Keywords: body part, healthcare, machine learning, neural networks
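A minimal sketch of the two-stage pipeline follows; the CNN architecture, input size, and class counts are assumptions for illustration, not the authors' exact models.

```python
# Sketch: stage 1 classifies the body part; stage 2 checks chest images
# for pneumonia. Images routed to the "chest" class go to the second model.
import tensorflow as tf
from tensorflow.keras import layers, models

def make_cnn(num_classes):
    return models.Sequential([
        layers.Input(shape=(128, 128, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

body_part_model = make_cnn(num_classes=6)   # six body-part classes
pneumonia_model = make_cnn(num_classes=2)   # healthy vs. pneumonia, chest only
for m in (body_part_model, pneumonia_model):
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Each model is fit on its own train split and evaluated on a held-out test set.
```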
Procedia PDF Downloads 103
64 A Dimensional Approach to Family Involvement in Forensic Mental Health Settings - Prevention of the Systemic Replication of Abuse, Need for Accepted Falsehoods and Family Guilt and Shame
Authors: Katie E. Jennings
Abstract:
The interactions between family dynamics and environmental factors with mental health vulnerability in individuals are well known and are a theme of ongoing research and debate. The impact of mental health and forensic issues on family dynamics, experience, and emotional wellbeing cannot be overemphasised. For forensic patients with diagnosed mental disorders, these relationships and environments may have also been functionally linked to the development and maintenance of those disorders, with significant adverse childhood experiences being a common feature of many patients' histories. Mental health hospitals remove the patient from their home environment and provide treatment outside of these relationships, and often outside of the home area. There is, therefore, a major focus on services ensuring that patients are able to build and maintain relationships with family and friends, requiring services to involve families in patients' care and treatment wherever possible. Standards set by government and clinical bodies require clear demonstration of the inclusion of family and friends in all aspects of the care and treatment of forensic patients. For some patients and family members, this push to take on a 'role' in care can be unhelpful and extremely stressful, and has constant implications for the potentially delicate reparation of relationships. Based on work undertaken over 20 years in forensic mental health settings, this paper explores a positive psychology approach to a dimensional model of family inclusion in mental health care that learns from family court work and allows relationships to be maintained at both proximal and distal levels, to prevent the replication of abuse, decrease the need for falsehoods, and assist the recovery of all. The model allows families to choose not to be involved, or to be involved in different ways if this is seen to be more helpful. It also allows patients to choose the level of involvement that they would find helpful, and for this to be reviewed at a timeframe agreed by all parties, rather than when the next survey is due or the patient has a significant care meeting. This paper is significant as there is a lack of research supporting services in using a positive psychology approach in this area; the assumption that being asked to be involved must be positive for all seems naïve at best for this patient group. Work relating to the psychology of family can significantly contribute to the development of knowledge in this area. The development of a dimensional model will support choice within families and assist in the development of more honest and open relationships.
Keywords: family dynamics, forensic, mental disorder, positive psychology
Procedia PDF Downloads 148
63 Speech Emotion Recognition: A DNN and LSTM Comparison in Single and Multiple Feature Application
Authors: Thiago Spilborghs Bueno Meyer, Plinio Thomaz Aquino Junior
Abstract:
Through speech, which privileges the functional and interactive nature of the text, it is possible to ascertain the spatiotemporal circumstances, the conditions of production and reception of the discourse, and explicit purposes such as informing, explaining, convincing, etc. These conditions allow bringing human-robot interaction closer to the interaction between humans, making it natural and sensitive to information. However, it is not enough to understand what is said; it is necessary to recognize emotions for the desired interaction. The validity of the use of neural networks for feature selection and emotion recognition was verified. For this purpose, we propose the use of neural networks and the comparison of models, such as recurrent neural networks and deep neural networks, to classify emotions from speech signals and verify the quality of recognition. The goal is to enable the deployment of robots in a domestic environment, such as the HERA robot from the RoboFEI@Home team, which focuses on autonomous service robots for the domestic environment. Tests were performed using only the Mel-Frequency Cepstral Coefficients, as well as tests with several additional features: Delta-MFCC, spectral contrast, and the Mel spectrogram. To carry out the training, validation, and testing of the neural networks, the eNTERFACE'05 database was used, which has 42 speakers from 14 different nationalities speaking English. The data in the chosen database are videos that were converted into audio for use with the neural networks. As a result, a classification accuracy of 51.969% was found when using the deep neural network, while the recurrent neural network achieved an accuracy of 44.09%. The results are more accurate when only the Mel-Frequency Cepstral Coefficients are used for classification with the deep neural network; in only one case is greater accuracy observed with the recurrent neural network, which occurs when using the various features with a batch size of 73 and 100 training epochs.
Keywords: emotion recognition, speech, deep learning, human-robot interaction, neural networks
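A minimal sketch of the feature extraction and the two compared architectures, assuming audio tracks extracted from the eNTERFACE'05 videos; layer sizes and frame counts are illustrative assumptions, not the authors' configuration.

```python
# Sketch: MFCC features feed either a deep (dense) network or an LSTM so that
# their accuracies can be compared on the same inputs.
import librosa
import tensorflow as tf
from tensorflow.keras import layers, models

def mfcc_features(path, n_mfcc=13, max_frames=200):
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)       # (n_mfcc, frames)
    mfcc = librosa.util.fix_length(mfcc, size=max_frames, axis=1)
    return mfcc.T                                                # (frames, n_mfcc)

n_emotions = 6  # eNTERFACE'05 covers six basic emotions

dnn = models.Sequential([
    layers.Input(shape=(200, 13)),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(n_emotions, activation="softmax"),
])

lstm = models.Sequential([
    layers.Input(shape=(200, 13)),
    layers.LSTM(128),
    layers.Dense(n_emotions, activation="softmax"),
])
# Both models are compiled and trained on the same MFCC tensors, mirroring
# the paper's DNN-vs-LSTM comparison.
```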
Procedia PDF Downloads 170
62 Heart Rate Variability Analysis for Early Stage Prediction of Sudden Cardiac Death
Authors: Reeta Devi, Hitender Kumar Tyagi, Dinesh Kumar
Abstract:
In the present scenario, cardiovascular problems are a growing challenge for researchers and physiologists. As heart disease has no geographic, gender, or socioeconomic boundaries, detecting cardiac irregularities at an early stage, followed by quick and correct treatment, is very important. The electrocardiogram is the finest tool for continuous monitoring of heart activity. Heart rate variability (HRV) is used to measure the naturally occurring oscillations between consecutive cardiac cycles. Analysis of this variability is carried out using time-domain, frequency-domain, and non-linear parameters. This paper presents an HRV analysis of online datasets for normal sinus rhythm (taken as healthy subjects) and sudden cardiac death (SCD subjects) using all three methods, computing values for parameters such as the standard deviation of normal-to-normal intervals (SDNN), the square root of the mean of the squared differences between adjacent RR intervals (RMSSD), and the mean of R-to-R intervals (mean RR) in the time domain; very low frequency (VLF), low frequency (LF), high frequency (HF), and the ratio of low to high frequency (LF/HF ratio) in the frequency domain; and the Poincare plot for non-linear analysis. To differentiate the HRV of healthy subjects from subjects who died of SCD, a k-nearest neighbor (k-NN) classifier has been used because of its high accuracy. Results show highly reduced values for all stated parameters for SCD subjects as compared to healthy ones. As the dataset used for SCD patients is a recording of their ECG signal one hour prior to death, it was verified, with an accuracy of 95%, that the proposed algorithm can identify a patient's mortality risk one hour before death. The identification of a patient's mortality risk at such an early stage may prevent sudden death if timely and appropriate treatment is given by the doctor.
Keywords: early stage prediction, heart rate variability, linear and non-linear analysis, sudden cardiac death
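A minimal sketch of the time-domain feature computation and the k-NN classifier; the RR series and labels are placeholders.

```python
# Sketch: compute mean RR, SDNN and RMSSD per subject, then fit k-NN.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def time_domain_features(rr):
    rr = np.asarray(rr, dtype=float)
    diff = np.diff(rr)
    return [
        np.mean(rr),                # mean RR
        np.std(rr, ddof=1),         # SDNN
        np.sqrt(np.mean(diff**2)),  # RMSSD: root mean square of successive differences
    ]

# Placeholder data: one RR-interval series (ms) per subject
rr_series = [np.random.normal(800, 50, 300) for _ in range(20)]
X = np.array([time_domain_features(rr) for rr in rr_series])
y = np.random.randint(0, 2, 20)     # placeholder: 0 = healthy, 1 = SCD

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)
# Frequency-domain (VLF/LF/HF, LF/HF ratio) and Poincare features would be
# appended to each row before training in the full pipeline.
```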
Procedia PDF Downloads 339
61 Sound Analysis of Young Broilers Reared under Different Stocking Densities in Intensive Poultry Farming
Authors: Xiaoyang Zhao, Kaiying Wang
Abstract:
The choice of stocking density in poultry farming is a potential determinant of poultry welfare. However, it is difficult to compare stocking densities across poultry farms because of many variables, such as species, age and weight, feeding method, house structure, and the geographical location of different broiler houses. A method is proposed in this paper to measure the differences between young broilers reared under different stocking densities by sound analysis. Vocalisations of broilers were recorded and analysed under different stocking densities to identify the relationship between sounds and stocking densities. Recordings were made continuously for three-week-old chickens in order to evaluate the variation in the sounds emitted by the animals from the beginning. The experimental trial was carried out on an indoor broiler farm; the audio recording procedure lasted 5 days. Broilers were divided into 5 groups with stocking density treatments of 8/m², 10/m², 12/m² (96 birds/pen), 14/m², and 16/m²; all conditions, including ventilation and feed, were kept the same across groups except for stocking density. The recording and analysis of chicken sounds were made noninvasively. Sound recordings were manually analysed and labelled using sound analysis software (GoldWave Digital Audio Editor). After the sound acquisition process, Mel Frequency Cepstrum Coefficients (MFCC) were extracted from the sound data, and a Support Vector Machine (SVM) was used as an early detector and classifier. This preliminary study, conducted on an indoor broiler farm, shows that this method can classify the sounds of chickens under different densities economically (only a cheap microphone and recorder are needed), with a classification accuracy of 85.7%. This method can predict the optimum stocking density of broilers when complemented with animal welfare indicators, animal productivity indicators, and so on.
Keywords: broiler, stocking density, poultry farming, sound monitoring, Mel Frequency Cepstrum Coefficients (MFCC), Support Vector Machine (SVM)
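A minimal sketch of the MFCC-plus-SVM pipeline on labeled clips; the feature dimensions and data are placeholders, not the study's recordings.

```python
# Sketch: average MFCC vector per clip, classified with an SVM,
# one class per stocking density.
import numpy as np
import librosa
from sklearn.svm import SVC

def clip_features(path):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)          # average over frames -> 13-dim vector

# Placeholder training data: 5 density classes (8, 10, 12, 14, 16 birds/m^2)
X = np.random.rand(100, 13)
y = np.random.randint(0, 5, 100)

svm = SVC(kernel="rbf")
svm.fit(X, y)
print(svm.score(X, y))   # the paper reports 85.7% accuracy on its own data
```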
Procedia PDF Downloads 161
60 Lithuanian Sign Language Literature: Metaphors at the Phonological Level
Authors: Anželika Teresė
Abstract:
In order to address issues in sign language linguistics, help maintain a high quality of sign language (SL) translation, contribute to dispelling misconceptions about SL and deaf people, and raise awareness and understanding of the deaf community heritage, this presentation discusses literature in Lithuanian Sign Language (LSL) and its inherent metaphors, which are created by using the phonological parameters: handshape, location, movement, palm orientation, and nonmanual features. The study covered in this presentation is twofold, involving both a micro-level analysis of metaphors in terms of phonological parameters as a sub-lexical feature and a macro-level analysis of the poetic context. Cognitive theories underlie research on metaphors in sign language literature across a range of SLs, and this study follows that practice. The presentation covers the qualitative analysis of 34 pieces of LSL literature. The analysis employs ELAN software, widely used in SL research. The aim is to examine how specific types of each phonological parameter are used for the creation of metaphors in LSL literature and what metaphors are created. The results of the study show that LSL literature employs a range of metaphors created by using classifier signs and by modifying established signs. The study also reveals that LSL literature tends to create reference metaphors indicating status and power. As the study shows, LSL poets metaphorically encode status by encoding another meaning in the same sign, which results in double metaphors. A metaphor of identity has also been determined; notably, the poetic context reveals that the latter metaphor can also be identified as a metaphor for life. The study goes on to note that deaf poets create metaphors related to the significance of various phenomena for the lyrical subject. Notably, the study detected locations, nonmanual features, and other elements never mentioned in previous SL research as being used for the creation of metaphors.
Keywords: Lithuanian sign language, sign language literature, sign language metaphor, metaphor at the phonological level, cognitive linguistics
Procedia PDF Downloads 136
59 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution
Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone
Abstract:
The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach involves the utilization of autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and reconstruct inputs from these representations, typically minimizing reconstruction errors such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation; we considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we propose a method to replace image-specific dimensions with a structure independent of both dimensions and neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder extracted features used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder
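A minimal sketch of the detection rule, with assumed layer sizes rather than the authors' multimodal spectral design; the 0.014 threshold is borrowed from the reported MSE purely for illustration.

```python
# Sketch: an autoencoder trained on benign images flags inputs whose
# reconstruction MSE exceeds a threshold as likely adversarial.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def is_adversarial(model, x, threshold=0.014):
    # threshold is an assumption inspired by the reported MSE (RGB) of 0.014
    with torch.no_grad():
        err = nn.functional.mse_loss(model(x), x).item()
    return err > threshold

model = ConvAutoencoder()   # would be trained on benign examples only
print(is_adversarial(model, torch.rand(1, 3, 256, 256)))
```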
Procedia PDF Downloads 112
58 Electroencephalography Correlates of Memorability While Viewing Advertising Content
Authors: Victor N. Anisimov, Igor E. Serov, Ksenia M. Kolkova, Natalia V. Galkina
Abstract:
The problem of the memorability of advertising content is closely connected with key issues in neuromarketing: memorable advertising content contributes to the marketing effectiveness of the promoted product. Significant directions for studying the phenomenon of memorability are the memorability of the brand (detected through the memorability of the logo) and the memorability of the product offer (detected through the memorization of dynamic audiovisual advertising content - the commercial). The aim of this work is to reveal the predictors of memorization of static and dynamic audiovisual stimuli (logos and commercials). An important direction of the research was revealing differences in the psychophysiological correlates of memorability between static and dynamic audiovisual stimuli; we assumed that static and dynamic images are perceived in different ways and may differ in the memorization process. Objective methods of recording psychophysiological parameters while watching static and dynamic audiovisual materials are well suited to this aim. Electroencephalography (EEG) was performed to identify correlates of the memorability of various stimuli in the electrical activity of the cerebral cortex. All stimuli (in the static and dynamic groups separately) were divided into two groups - remembered and not remembered - based on the results of a questionnaire. Participants filled out the questionnaires not immediately after viewing the stimuli but after a time interval (to detect stimuli retained through long-term memorization). Using statistical methods, we developed a classifier (statistical model) that predicts which group (remembered or not remembered) a stimulus falls into, based on its psychophysiological perception. The output of the statistical model was compared with the results of the questionnaire. Conclusions: Predictors of the memorability of static and dynamic stimuli have been identified, which allows predicting which stimuli will have a higher probability of being remembered. A further development of this study will be the creation of a stimulus memory model with the possibility of recognizing a stimulus as previously seen or new. Thus, in the process of remembering the stimulus, it is planned to take into account the stimulus recognition factor, which is one of the most important tasks for neuromarketing.
Keywords: memory, commercials, neuromarketing, EEG, branding
Procedia PDF Downloads 251
57 Identification and Classification of Medicinal Plants of Indian Himalayan Region Using Hyperspectral Remote Sensing and Machine Learning Techniques
Authors: Kishor Chandra Kandpal, Amit Kumar
Abstract:
The Indian Himalaya region harbours approximately 1748 plants of medicinal importance, and as per the International Union for Conservation of Nature (IUCN), 112 of these plant species are threatened or endangered. To ease the pressure on these plants, the government of India is encouraging their in-situ cultivation. Saussurea costus, Valeriana jatamansi, and Picrorhiza kurroa have been prioritized for large-scale cultivation owing to their market demand, conservation value, and medicinal properties. These species are found at elevations from 1000 m to 4000 m in the Indian Himalaya. Identification of these plants in the field requires taxonomic skills, which is one of the major bottlenecks in their conservation and management. In recent years, hyperspectral remote sensing techniques have been used to precisely discriminate plant species with the help of their unique spectral signatures. Against this background, a spectral library of the above three medicinal plants was prepared by collecting spectral data with a handheld spectroradiometer (325 to 1075 nm) from farmers' fields in the Himachal Pradesh and Uttarakhand states of the Indian Himalaya. A random forest (RF) model was applied to the spectral data for the classification of the medicinal plants. The standard 80:20 split ratio was followed for training and validation of the RF model, which resulted in a training accuracy of 84.39% (kappa coefficient = 0.72) and a testing accuracy of 85.29% (kappa coefficient = 0.77). This RF classifier identified the green (555 to 598 nm), red (605 nm), and near-infrared (725 to 840 nm) wavelength regions as suitable for the discrimination of these species. The findings of this study provide a technique for the rapid, onsite identification of the above medicinal plants in the field. This will also be a key input for the classification of hyperspectral remote sensing images for mapping these species in farmers' fields on a regional scale. This is a pioneering study of medicinal plants in the Indian Himalaya region in which the applicability of hyperspectral remote sensing has been explored.
Keywords: Himalaya, hyperspectral remote sensing, machine learning, medicinal plants, random forests
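A minimal sketch of the RF classification with the 80:20 split and kappa evaluation; the spectra below are placeholders, not the collected library.

```python
# Sketch: random forest over reflectance spectra (325-1075 nm, ~751 bands),
# evaluated with accuracy and Cohen's kappa on a 20% hold-out.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

X = np.random.rand(150, 751)        # placeholder: one spectrum per field sample
y = np.random.randint(0, 3, 150)    # 0 = S. costus, 1 = V. jatamansi, 2 = P. kurroa

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

pred = rf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("kappa:", cohen_kappa_score(y_te, pred))
# rf.feature_importances_ highlights discriminative wavelength regions,
# analogous to the green, red and near-infrared bands reported in the paper.
```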
Procedia PDF Downloads 203
56 Geospatial Analysis for Predicting Sinkhole Susceptibility in Greene County, Missouri
Authors: Shishay Kidanu, Abdullah Alhaj
Abstract:
Sinkholes in the karst terrain of Greene County, Missouri, pose significant geohazards, imposing challenges on construction and infrastructure development, with potential threats to lives and property. To address these issues, understanding the influencing factors and modeling sinkhole susceptibility is crucial for effective mitigation through strategic changes in land use planning and practices. This study utilizes geographic information system (GIS) software to collect and process diverse data, including topographic, geologic, hydrogeologic, and anthropogenic information. Nine key sinkhole-influencing factors, ranging from slope characteristics to proximity to geological structures, were analyzed. The Frequency Ratio method establishes relationships between the attribute classes of these factors and sinkhole events, deriving class weights that indicate their relative importance. Weighted integration of these factors is accomplished using the Analytic Hierarchy Process (AHP) and the Weighted Linear Combination (WLC) method in a GIS environment, resulting in a comprehensive sinkhole susceptibility index (SSI) model for the study area. Employing the Jenks natural breaks classification method, the SSI values are categorized into five distinct sinkhole susceptibility zones: very low, low, moderate, high, and very high. Validation of the model, conducted through the Area Under Curve (AUC) and Sinkhole Density Index (SDI) methods, demonstrates a robust correlation with sinkhole inventory data: the prediction rate curve yields an AUC value of 74%, indicating 74% validation accuracy, and the SDI result further supports the success of the susceptibility model. This model offers reliable predictions for the future distribution of sinkholes, providing valuable insights for planners and engineers in the formulation of development plans and land-use strategies. Its application extends to enhancing preparedness and minimizing the impact of sinkhole-related geohazards on both infrastructure and the community.
Keywords: sinkhole, GIS, analytical hierarchy process, frequency ratio, susceptibility, Missouri
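A minimal sketch of the Frequency Ratio computation on placeholder rasters; the AHP/WLC weighting is indicated in the closing comment, and all values are illustrative assumptions.

```python
# Sketch: per-class frequency ratio for one influencing factor.
import numpy as np

def frequency_ratio(factor_classes, sinkhole_mask):
    """FR per class = (% of sinkholes in class) / (% of study area in class)."""
    fr = {}
    total_pixels = factor_classes.size
    total_sinks = sinkhole_mask.sum()
    for c in np.unique(factor_classes):
        in_class = factor_classes == c
        pct_area = in_class.sum() / total_pixels
        pct_sinks = sinkhole_mask[in_class].sum() / total_sinks
        fr[c] = pct_sinks / pct_area
    return fr

# Placeholder rasters: 3 slope classes over a 100x100 grid
classes = np.random.randint(0, 3, (100, 100))
sinks = np.random.rand(100, 100) < 0.02
print(frequency_ratio(classes, sinks))
# The per-factor FR layers are then combined with AHP-derived weights in a
# weighted linear combination: SSI = sum_i w_i * FR_i.
```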
Procedia PDF Downloads 74
55 Pattern Recognition Approach Based on Metabolite Profiling Using In vitro Cancer Cell Line
Authors: Amanina Iymia Jeffree, Reena Thriumani, Mohammad Iqbal Omar, Ammar Zakaria, Yumi Zuhanis Has-Yun Hashim, Ali Yeon Md Shakaff
Abstract:
Metabolite profiling is the strategy approached in this pattern recognition study, focused on the three types of cancer cell line that drive the most deaths, specifically lung, breast, and colon cancer. The purpose of this study was to discriminate the VOC patterns of the cancerous and control groups based on metabolite profiling. Sampling was executed using the cell culture technique. All culture flasks were incubated for up to 72 hours, with data collection starting after 24 hours; each sample run took 24 minutes to complete. The comparative metabolite patterns were identified by headspace solid-phase micro-extraction (HS-SPME) sampling coupled with gas chromatography-mass spectrometry (GC-MS). The main experimental variables, such as oven temperature and time, were optimized by response surface methodology (RSM) to obtain the optimal conditions. Volatiles were identified through the National Institute of Standards and Technology (NIST) mass spectral database and retention time libraries. To improve the reliability of significance testing, it is of crucial importance to eliminate background noise; hence, only data from the 3rd to the 17th minute were selected for statistical analysis. Targeted metabolites that were annotated as known compounds with a peak area greater than 0.5 percent were highlighted and subsequently treated statistically. The volatiles produced contain hundreds to thousands of compounds; therefore, the data are reduced by chemometric analysis, such as principal component analysis (PCA), as a preliminary step before being subjected to a pattern classifier for identification of VOC samples. The volatile organic compound profiles were shown to be significantly distinguishable between the cancerous and control groups based on metabolite profiling.
Keywords: in vitro cancer cell line, metabolite profiling, pattern recognition, volatile organic compounds
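A minimal sketch of the PCA step on a placeholder peak-area matrix; the sample counts, compound counts, and group labels are assumptions for illustration.

```python
# Sketch: PCA on a samples-by-compounds matrix of GC-MS peak areas
# (only peaks > 0.5% of the total, from minutes 3-17 of each run).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X = np.random.rand(24, 60)     # placeholder: 24 runs x 60 retained volatiles
labels = ["lung", "breast", "colon", "control"] * 6

X_scaled = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)
print(pca.explained_variance_ratio_)
# Plotting the first two PC scores, colored by group, gives the preliminary
# view of separation between cancerous and control profiles before a
# supervised pattern classifier is applied.
```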
Procedia PDF Downloads 365
54 On the Question of Ideology: Criticism of the Enlightenment Approach and Theory of Ideology as Objective Force in Gramsci and Althusser
Authors: Edoardo Schinco
Abstract:
Studying the Marxist intellectual tradition, it is possible to identify numerous cases of philosophical regression, in which the important achievements of detailed studies have been replaced by naïve ideas and earlier misunderstandings; one of the most important examples of this tendency relates to the question of ideology. According to a common Enlightenment approach, ideology is essentially not a reality, i.e., not a factor capable of having an effect on reality itself; in other words, ideology is a mere error without specific historical meaning, due only to the ignorance or inability of subjects to understand the truth. From this point of view, the consequent and immediate practice against every form of ideology is rational dialogue, reasoning based on common sense, in order to dispel the obscurity of ignorance through the light of pure reason. The limits of this philosophical orientation are, however, both theoretical and practical: on the one hand, the Enlightenment criticism of ideology is not a historicist thought, since it cannot grasp the inner connection that ties a historical context and its peculiar ideology together; on the other hand, when the Enlightenment approach fails to release people from their illusions (e.g., when the ideology persists despite the explanation of its illusoriness), it usually becomes a racist or elitist thought. Unlike this first conception of ideology, Gramsci attempts to recover Marx's original thought and to valorize its dialectical methodology with respect to the reality of ideology. As Marx suggests, ideology - in the negative sense - is surely an error, a misleading knowledge, which aims to defend the current state of things and to conceal social, political, or moral contradictions; but that is precisely why the ideological error is not casual: every ideology is mediately rooted in a particular material context, from which it takes its reason for being. Gramsci avoids, however, any mechanistic interpretation of Marx, and for this reason he underlines the dialectical relation that exists between the material base and the ideological superstructure; in this way, a specific ideology is not only a passive product of the base but also an active factor that reacts upon the base itself and modifies it. There is, therefore, a considerable revaluation of ideology's role in the maintenance of the status quo and a consequent thematization of both ideology as an objective force, active in history, and ideology as the cultural hegemony of the ruling class over subordinate groups. Among the Marxists, the French philosopher Louis Althusser also contributed to this crucial question; as a follower of Gramsci's thought, he develops the idea of ideology as an objective force through the notions of the Repressive State Apparatus (RSA) and the Ideological State Apparatuses (ISAs). In addition, his philosophy is characterized by the presence of structuralist elements, which must be studied, since they deeply change the theoretical foundation of his Marxist thought.
Keywords: Althusser, enlightenment, Gramsci, ideology
Procedia PDF Downloads 199
53 The Incorporation of Themes Related to Islandness in Tourism Branding among Cold-Water, Warm-Water, and Temperate-Water Islands
Authors: Susan C. Graham
Abstract:
Islands have a long-established allure for travellers the world over. From the earliest accounts of human history, travellers were drawn by the sense of islandness embodied by these destinations. The concept of islandness describes the essence of what makes islands unique relative to non-islands, and it extends beyond geographic interpretations by attempting to capture the specific sense of self exhibited by islanders in relation to their connection to place. The themes most strongly associated with islandness include a) a strong connection to water as both lifeblood and a physical barrier, b) a unique culture and robust arts community deeply linked to both the island and islanders, c) an appreciation of and for nature, d) a rich sense of history and tradition connected to the place, e) a sense of community and belonging that arose through shared triumphs and struggles, and f) a profound awareness of independence, separateness, and uniqueness derived from both physical and social experience. The island brand, like all brands, is a marketing tactic designed to succinctly express a specific value proposition in a simple way, which might include a brand symbol, logo, slogan, or representation meant to distinguish one brand from another. If a value proposition is the identification of attributes that separate one brand from another by highlighting the brand's uniqueness, then island brands may presumably, at least in part, emphasize islandness as part of the destination brand. Yet it may be naïve to expect all islands to brand themselves using similar themes when islands can differ so substantially in terms of population, geography, political climate, economy, culture, and history. Of particular interest is the increased focus on tourism among 'cold-water' islands. This paper examines the incorporation of themes related to islandness in tourism branding among cold-water, warm-water, and temperate-water islands. The tourism logos of 83 islands were collected and assessed for the use of themes related to islandness, namely water, arts and culture, nature, history and tradition, community and belongingness, and independence, separateness, and uniqueness. The ratings for each theme for each of the 83 island destinations were then analyzed to identify whether differences exist between cold-water, warm-water, and temperate-water islands. A general consensus on what constitutes a 'cold-water' destination is lacking, so a water temperature threshold of 15°C was adopted, following guidelines from the National Center for Cold Water Safety. Among these 83 islands, the average high and average low water temperatures at 196 specific locations, including the capital and the northernmost and southernmost points of each island, were recorded to determine whether a location was cold-water (average high and low below 15°C), warm-water (average high and low above 15°C), or temperate-water (average high above 15°C and low below 15°C).
Keywords: branding, cold-water, islands, tourism
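The 15°C rule described above reduces to a tiny classification function; a sketch:

```python
# Sketch of the water-temperature rule (15 degrees C threshold from the
# National Center for Cold Water Safety).
def classify_island(avg_high_c: float, avg_low_c: float) -> str:
    if avg_high_c < 15 and avg_low_c < 15:
        return "cold-water"
    if avg_high_c > 15 and avg_low_c > 15:
        return "warm-water"
    return "temperate-water"   # high above 15, low below 15

print(classify_island(18.0, 9.0))   # -> temperate-water
```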
Procedia PDF Downloads 224
52 An ANOVA-based Sequential Forward Channel Selection Framework for Brain-Computer Interface Application based on EEG Signals Driven by Motor Imagery
Authors: Forouzan Salehi Fergeni
Abstract:
Converting the movement intents of a person into commands for action using brain signals such as electroencephalogram (EEG) signals is the task of a brain-computer interface (BCI) system. When left- or right-hand motions are imagined, different patterns of brain activity appear, which can be employed as BCI signals for control. To improve BCI systems, effective and accurate techniques for increasing the classification precision of motor imagery (MI) based on EEG are greatly needed. Subject dependency and non-stationarity are two characteristics of EEG signals, so EEG signals must be effectively processed before being used in BCI applications. In the present study, after applying an 8-30 Hz band-pass filter, a common average reference (CAR) spatial filter is applied for the purpose of denoising; then, an analysis of variance method is used to select the more appropriate and informative channels from among a large number of different channels. After ordering the channels by their efficiency, sequential forward channel selection is employed to choose just a few reliable ones. Features from the time and wavelet domains are extracted and shortlisted with the help of a statistical technique, namely the t-test. Finally, the selected features are classified with different machine learning and neural network classifiers - k-nearest neighbor, probabilistic neural network, support vector machine, extreme learning machine, decision tree, multi-layer perceptron, and linear discriminant analysis - with the purpose of comparing their performance in this application. Utilizing a ten-fold cross-validation approach, tests are performed on a motor imagery dataset from BCI Competition III. The outcomes demonstrate that the SVM classifier achieved the greatest classification accuracy of 97% compared to the other approaches. The overall findings confirm that the suggested framework is reliable and computationally effective for the construction of BCI systems and surpasses existing methods.
Keywords: brain-computer interface, channel selection, motor imagery, support-vector-machine
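A minimal sketch of the ANOVA ranking followed by sequential forward channel selection; the trial counts, channel counts, and SVM settings are placeholder assumptions, not the study's configuration.

```python
# Sketch: rank channels by ANOVA F-score, then add them one at a time
# while cross-validated accuracy keeps improving.
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X = np.random.rand(200, 118)        # placeholder: 200 MI trials x 118 channel features
y = np.random.randint(0, 2, 200)    # left- vs. right-hand imagery

f_scores, _ = f_classif(X, y)
ranked = np.argsort(f_scores)[::-1]  # most informative channels first

selected, best = [], 0.0
for ch in ranked:
    trial = selected + [ch]
    acc = cross_val_score(SVC(), X[:, trial], y, cv=10).mean()
    if acc > best:                   # keep the channel only if it helps
        selected, best = trial, acc
print("channels:", selected, "accuracy:", round(best, 3))
```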
Procedia PDF Downloads 50
51 Detection of Phoneme [S] Mispronunciation for Sigmatism Diagnosis in Adults
Authors: Michal Krecichwost, Zauzanna Miodonska, Pawel Badura
Abstract:
The diagnosis of sigmatism is mostly based on the observation of the articulatory organs. It is, however, not always possible to precisely observe the vocal apparatus, in particular in the oral cavity of the patient. Speech processing can help objectify the therapy and simplify the verification of its progress. In the described study, a methodology for the classification of the incorrectly pronounced phoneme [s] is proposed. The recordings come from adults and were registered with a speech recorder at a sampling rate of 44.1 kHz and a resolution of 16 bits. A database of pathological and normative speech was collected for the study, including reference assessments provided by speech therapy experts. Ten adult subjects were asked to simulate a certain type of sigmatism under the supervision of a speech therapy expert. In the recordings, the analyzed phone [s] was surrounded by vowels, viz.: ASA, ESE, ISI, OSO, USU, YSY. Thirteen MFCCs (mel-frequency cepstral coefficients) and the RMS (root mean square) value are calculated within each frame belonging to the analyzed phoneme. Additionally, 3 fricative formants along with their corresponding amplitudes are determined for the entire segment. In order to aggregate the information within a segment, the average value of each MFCC coefficient is calculated, and all features of other types are aggregated by means of their 75th percentile. The proposed method of feature aggregation reduces the size of the feature vector used in classification. A binary SVM (support vector machine) classifier is employed at the phoneme recognition stage: the first class consists of pathological phones, the other of normative ones. The proposed feature vector yields classification sensitivity and specificity above the 90% level for individual logatomes. Employing the fricative-formant information improves the MFCC-only classification results by an average of 5 percentage points. The study shows that employing parameters specific to the selected phones improves the efficiency of pathology detection compared to traditional methods of speech signal parameterization.
Keywords: computer-aided pronunciation evaluation, sibilants, sigmatism diagnosis, speech processing
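A minimal sketch of the feature aggregation and binary SVM described above; the frame counts and values are placeholders, not the study's recordings.

```python
# Sketch: per-frame MFCCs are averaged over the [s] segment, frame-level RMS
# is summarized by its 75th percentile, and segment-level fricative-formant
# features are appended; the resulting vector feeds a binary SVM.
import numpy as np
from sklearn.svm import SVC

def aggregate_segment(mfcc_frames, rms_frames, formant_feats):
    """mfcc_frames: (n_frames, 13); rms_frames: (n_frames,); formant_feats: (6,)"""
    return np.concatenate([
        mfcc_frames.mean(axis=0),         # average of each MFCC coefficient
        [np.percentile(rms_frames, 75)],  # 75th percentile of frame-level RMS
        formant_feats,                    # 3 fricative formants + amplitudes
    ])

# Placeholder segments: 40 recordings, already framed and parameterized
X = np.array([aggregate_segment(np.random.rand(30, 13),
                                np.random.rand(30),
                                np.random.rand(6)) for _ in range(40)])
y = np.random.randint(0, 2, 40)           # 0 = normative, 1 = pathological

svm = SVC(kernel="rbf").fit(X, y)
```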
Procedia PDF Downloads 283
50 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization
Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon
Abstract:
The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, novel front-end electronics allowing for sampling in the voltage domain at four thresholds were developed. To take full advantage of these fast signals, a novel scheme for recovery of the signal waveform, based on ideas from Tikhonov regularization (TR) and compressive sensing methods, is presented. The prior distribution of the sparse representation is evaluated based on a linear transformation of a training set of signal waveforms using Principal Component Analysis (PCA) decomposition. Besides the advantage of including additional information from the training signals, a further benefit of the TR approach is that the signal recovery problem has an optimal solution which can be determined explicitly. Moreover, from Bayesian theory, the properties of the regularized solution, especially its covariance matrix, may easily be derived. This step is crucial for introducing and proving the formula for calculating the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method is tested using signals registered by means of a single detection module of the J-PET detector built from a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed. It is shown that the PCA basis offers a high level of information compression and an accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered signal waveform, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction. The experiment shows that the spatial resolution evaluated based on information from four voltage levels, without recovery of the signal waveform, is equal to 1.05 cm. After applying the information from the four voltage levels to the recovery of the signal waveform, the spatial resolution improves to 0.94 cm, only slightly worse than the 0.93 cm obtained using the original raw signal. This is very important since limiting the number of threshold levels in the electronic devices to four leads to a significant reduction of the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest can be utilized.
Keywords: plastic scintillators, positron emission tomography, statistical analysis, Tikhonov regularization
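A minimal sketch of regularized recovery in a PCA basis; the dimensions and the regularization constant are placeholders, and the authors' actual scheme (with its covariance-based error formula) is richer than this.

```python
# Sketch: with measurements y = A x + noise, where A selects the 8 sampled
# points, the Tikhonov solution has the closed form
#   x_hat = argmin ||A x - y||^2 + lam * ||x||^2 = (A^T A + lam I)^-1 A^T y,
# computed here in a PCA basis learned from training waveforms.
import numpy as np

rng = np.random.default_rng(0)
train = rng.standard_normal((500, 120))      # placeholder training waveforms

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
B = Vt[:10].T                                # (120, 10): waveform = mean + B @ coeffs

M = rng.choice(120, size=8, replace=False)   # indices of the 8 sampled points
A = B[M]                                     # measurement matrix in PCA coordinates

def recover(y, lam=1e-2):
    coeffs = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]),
                             A.T @ (y - mean[M]))
    return mean + B @ coeffs                 # full recovered waveform

true = mean + B @ rng.standard_normal(10)
print(np.mean((recover(true[M]) - true) ** 2))   # recovery error
```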
Procedia PDF Downloads 445
49 Exploring Pre-Trained Automatic Speech Recognition Model HuBERT for Early Alzheimer’s Disease and Mild Cognitive Impairment Detection in Speech
Authors: Monica Gonzalez Machorro
Abstract:
Dementia is hard to diagnose because of the lack of early physical symptoms. Early dementia recognition is key to improving the living conditions of patients. Speech technology is considered a valuable biomarker for this challenge. Recent works have utilized conventional acoustic features and machine learning methods to detect dementia in speech, with BERT-like classifiers reporting the most promising performance. One constraint, nonetheless, is that these studies are based either on human transcripts or on transcripts produced by automatic speech recognition (ASR) systems. The contribution of this research is to explore a method that does not require transcriptions to detect early Alzheimer’s disease (AD) and mild cognitive impairment (MCI). This is achieved by fine-tuning a pre-trained ASR model for the downstream early AD and MCI tasks. To do so, a subset of the thoroughly studied Pitt Corpus is customized; the subset is balanced for class, age, and gender, and data processing involves cropping the samples into 10-second segments. For comparison purposes, a baseline model is defined by training and testing a random forest with 20 acoustic features extracted using the librosa library implemented in Python: zero-crossing rate, MFCCs, spectral bandwidth, spectral centroid, root mean square, and short-time Fourier transform. The baseline model achieved 58% accuracy. To fine-tune HuBERT as a classifier, an average pooling strategy is employed to merge the 3D representations of the audio into 2D representations, and a linear layer is added. The pre-trained model used is ‘hubert-large-ls960-ft’. Empirically, the number of epochs selected is 5, and the batch size is 1. Experiments show that the proposed method reaches 69% balanced accuracy. This suggests that the linguistic and speech information encoded in the self-supervised ASR-based model is able to learn acoustic cues of AD and MCI.
Keywords: automatic speech recognition, early Alzheimer’s recognition, mild cognitive impairment, speech impairment
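A minimal sketch of the fine-tuning head (average pooling plus a linear layer) using the Hugging Face transformers API; the head and any details beyond those stated in the abstract are assumptions, not the author's code.

```python
# Sketch: pre-trained HuBERT encoder, mean-pooled over time, with a linear
# classification layer for the 2-class AD/MCI task on 10 s, 16 kHz segments.
import torch
import torch.nn as nn
from transformers import HubertModel

class HubertClassifier(nn.Module):
    def __init__(self, checkpoint="facebook/hubert-large-ls960-ft", n_classes=2):
        super().__init__()
        self.hubert = HubertModel.from_pretrained(checkpoint)
        self.head = nn.Linear(self.hubert.config.hidden_size, n_classes)

    def forward(self, input_values):
        hidden = self.hubert(input_values).last_hidden_state  # (batch, frames, dim)
        pooled = hidden.mean(dim=1)                           # average pooling over time
        return self.head(pooled)

model = HubertClassifier()
waveform = torch.rand(1, 160000)    # one 10 s segment at 16 kHz
logits = model(waveform)
# Training then proceeds with cross-entropy loss, batch size 1, for 5 epochs,
# as stated in the abstract.
```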
Procedia PDF Downloads 127
48 Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures
Authors: Francesca Marsili
Abstract:
The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods are based on exhaustive knowledge of the stochastic modeling of the variables involved in the assessment; at the moment, standards for the modeling of variables are absent, representing an obstacle to the dissemination of probabilistic methods. The framework according to which probability distribution functions (PDFs) are established is Bayesian statistics, which uses Bayes' theorem: a prior PDF for the considered parameter is established based on information derived from the design stage and on qualitative judgments based on the engineer's past experience; then, the prior model is updated with the results of investigations carried out on the considered structure, such as material testing and the determination of action and structural properties. The application of Bayesian statistics raises two different kinds of problems: 1. the results of the updating depend on the engineer's previous experience; 2. the updating of the prior PDF can be performed only if the structure has been tested and quantitative data that can be statistically manipulated have been collected; performing tests is always an expensive and time-consuming operation, and if the considered structure is an ancient building, destructive tests could compromise its cultural value and should therefore be avoided. In order to solve these problems, an interesting research path is the investigation of Artificial Intelligence (AI) techniques that can be useful for automating the modeling of variables and for updating material parameters without performing destructive tests. Among these, one that is particularly relevant to the object of this study is Case-Based Reasoning (CBR). In this application, cases will be represented by existing buildings where material tests have already been carried out and updated PDFs for the material mechanical parameters have been computed through a Bayesian analysis. Each case will then be composed of a qualitative description of the material under assessment and the posterior PDFs that describe its material properties. The problem to be solved is the definition of PDFs for the material parameters involved in the reliability assessment of the considered structure. A CBR system represents a good candidate for automating the modeling of variables because: 1. engineers already draw an estimate of material properties based on experience collected during the assessment of similar structures, or based on similar cases collected in the literature or in databases; 2. material tests carried out on structures can easily be collected from laboratory databases or from the literature; 3. the system will provide the user with a reliable probabilistic description of the variables involved in the assessment that will also serve as a tool in support of the engineer's qualitative judgments. Automated modeling of variables can help spread the probabilistic reliability assessment of existing buildings in common engineering practice and target the best interventions and further tests on the structure; CBR is a technique which may help to achieve this.
Keywords: reliability assessment of existing buildings, Bayesian analysis, case-based reasoning, historical structures
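As a minimal illustration of the Bayesian updating step that the CBR system is meant to complement, here is a conjugate normal update sketch with hypothetical numbers.

```python
# Sketch: a normal prior on a material strength (from design documents or
# similar cases) is updated with test results of known variance.
import numpy as np

def update_normal(prior_mean, prior_sd, tests, test_sd):
    """Conjugate normal-normal update for the mean of a material parameter."""
    n = len(tests)
    prec = 1 / prior_sd**2 + n / test_sd**2                 # posterior precision
    post_var = 1 / prec
    post_mean = post_var * (prior_mean / prior_sd**2 + np.sum(tests) / test_sd**2)
    return post_mean, np.sqrt(post_var)

# Hypothetical: prior concrete strength 30 +/- 5 MPa, three core tests
mean, sd = update_normal(30.0, 5.0, np.array([26.0, 27.5, 25.0]), test_sd=3.0)
print(f"posterior: {mean:.1f} +/- {sd:.1f} MPa")
# A CBR system would supply the prior (and, for untested structures, reuse the
# posteriors of similar past cases) instead of requiring destructive tests.
```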
Procedia PDF Downloads 337
47 The Classification Performance in Parametric and Nonparametric Discriminant Analysis for Class-Unbalanced Data of Diabetes Risk Groups
Authors: Lily Ingsrisawang, Tasanee Nacharoen
Abstract:
Introduction: The problem of unbalanced data sets generally appears in real-world applications. Due to unequal class distributions, many research papers have found that the performance of existing classifiers tends to be biased towards the majority class. The k-nearest neighbors nonparametric discriminant analysis is one method that has been proposed for classifying unbalanced classes with good performance. Hence, the methods of discriminant analysis are of interest to us in investigating misclassification error rates for class-imbalanced data of three diabetes risk groups. Objective: The purpose of this study was to compare the classification performance of parametric discriminant analysis and nonparametric discriminant analysis in a three-class classification application on class-imbalanced data of diabetes risk groups. Methods: Data from a health project covering 599 staff members of a government hospital in Bangkok were obtained for the classification problem. The staff were diagnosed into one of three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, with the variables diabetes risk group, age, gender, cholesterol, and BMI, were analyzed and bootstrapped to 50 and 100 samples, 599 observations per sample, for additional estimation of the misclassification error rate. Each data set was explored for departure from multivariate normality and for equality of the covariance matrices of the three risk groups; both the original data and the bootstrap samples show non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors discriminant function were performed over the 50 and 100 bootstrap samples and applied to the original data. In finding the optimal classification rule, the prior probabilities were set up both for equal proportions (0.33:0.33:0.33) and for unequal proportions with three choices: (0.90:0.05:0.05), (0.80:0.10:0.10), or (0.70:0.15:0.15). Results: The results from the 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k = 3 or k = 4 and prior probabilities of {non-risk:risk:diabetic} set to {0.90:0.05:0.05} or {0.80:0.10:0.10} gave the smallest misclassification error rate. Conclusion: The k-nearest neighbors approach is suggested for classifying three-class-imbalanced data of diabetes risk groups.
Keywords: error rate, bootstrap, diabetes risk groups, k-nearest neighbors
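A minimal sketch comparing the three classification rules with explicit priors; the data below are placeholders mimicking the 90:5:5 class distribution, not the hospital data.

```python
# Sketch: LDA and QDA with explicit prior probabilities vs. k-NN
# on class-imbalanced three-group data.
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(599, 4)                      # age, gender, cholesterol, BMI
y = np.random.choice([0, 1, 2], 599, p=[0.90, 0.05, 0.05])

priors = [0.90, 0.05, 0.05]
models = {
    "LDA": LinearDiscriminantAnalysis(priors=priors),
    "QDA": QuadraticDiscriminantAnalysis(priors=priors),
    "3-NN": KNeighborsClassifier(n_neighbors=3),
}
for name, m in models.items():
    acc = cross_val_score(m, X, y, cv=5).mean()
    print(f"{name}: error rate = {1 - acc:.3f}")
```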
Procedia PDF Downloads 434
46 Cognition in Context: Investigating the Impact of Persuasive Outcomes across Face-to-Face, Social Media and Virtual Reality Environments
Authors: Claire Tranter, Coral Dando
Abstract:
Gathering information from others is a fundamental goal for those concerned with investigating crime and protecting national and international security. Persuading an individual to move from an opposing to a converging viewpoint, and understanding the cognitive style behind this change, can serve to increase understanding of traditional face-to-face interactions, as well as of synthetic environments (SEs) often used for communication across geographical locations. SEs are growing in usage, and with this increase comes an increase in crime being undertaken online. Communication technologies can allow people to mask their real identities, supporting anonymous communication, which raises significant challenges for investigators when monitoring and managing these conversations inside SEs. To date, the psychological literature concerning how to maximise information gain in SEs for real-world interviewing purposes is sparse, and as such this aspect of social cognition is not well understood. Here, we introduce an overview of a novel programme of PhD research which seeks to enhance understanding of cross-cultural and cross-gender communication in SEs for maximising information gain. Utilising a dyadic jury paradigm, participants interacted with a confederate who attempted to persuade them to the opposing verdict across three distinct environments: face-to-face, instant messaging, and a novel virtual reality environment utilising avatars. Participants discussed a criminal scenario, acting as a two-person (male; female) jury. Persuasion was manipulated by the confederate claiming a viewpoint (guilty v. not guilty) opposing that of the naïve participants from the outset. Pre- and post-discussion data, and observational digital recordings (voice and video) of participants' discussion performance, were collected. Information regarding cognitive style was also collected to ascertain participants' need for cognitive closure and biases towards jumping to conclusions. Findings revealed that individuals communicating via an avatar in a virtual reality environment reacted in a similar way, and were thus equally persuadable, compared to individuals communicating face-to-face. Anonymous instant messaging, however, created resistance to persuasion in participants, with males showing a significant decline in persuasive outcomes compared to face-to-face. The findings reveal new insights, particularly regarding the interplay of persuasion, gender, and modality, with anonymous instant messaging enhancing resistance to persuasion attempts. This study illuminates how varying SEs can support new theoretical and applied understandings of how judgments are formed and modified in response to advocacy.
Keywords: applied cognition, persuasion, social media, virtual reality
Procedia PDF Downloads 14445 Automated Feature Extraction and Object-Based Detection from High-Resolution Aerial Photos Based on Machine Learning and Artificial Intelligence
Authors: Mohammed Al Sulaimani, Hamad Al Manhi
Abstract:
With the development of remote sensing technology, the resolution of optical remote sensing images has greatly improved, and such images have become widely available. Numerous detectors have been developed for detecting different types of objects. In the past few years, remote sensing has benefited greatly from deep learning, particularly Deep Convolutional Neural Networks (CNNs). Deep learning holds great promise for fulfilling the challenging needs of remote sensing and solving various problems within different fields and applications. Unmanned Aerial Systems (UAS) have become the preferred means of acquiring aerial photos for many organizations because of their high resolution and accuracy, which make identifying and detecting very small features much easier than with satellite images. This has opened a new era for deep learning, not only in feature extraction and prediction but also in analysis. This work addresses the capacity of machine learning and deep learning to detect and extract oil leaks from onshore flowlines using high-resolution aerial photos acquired by a UAS fitted with an RGB sensor, supporting early detection of these leaks and preventing both the company's financial losses and, most importantly, environmental damage. Two different approaches using different deep learning (DL) methods are demonstrated. The first approach focuses on detecting oil leaks from raw (unprocessed) aerial photos using a deep learning model called Single Shot Detector (SSD). The model draws bounding boxes around the leaks, and the results were extremely good. The second approach focuses on detecting oil leaks from orthomosaicked (georeferenced) images by developing three deep learning models (Mask R-CNN, U-Net, and PSPNet). Post-processing is then performed to combine the results of these three models to achieve better detection and improved accuracy. Although relatively few datasets were available for training, the trained DL models showed good results in extracting the extent of the oil leaks, with excellent and accurate detection. Keywords: GIS, remote sensing, oil leak detection, machine learning, aerial photos, unmanned aerial systems
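The abstract does not specify how the three models' outputs are combined; a common choice is per-pixel majority voting over the predicted masks. The sketch below shows that idea; the models, masks, and vote threshold are assumptions for illustration.

# Hypothetical sketch: fusing binary leak masks from several segmentation
# models (e.g. Mask R-CNN, U-Net, PSPNet) by per-pixel majority voting.
import numpy as np

def fuse_masks(masks, min_votes=2):
    # A pixel is flagged as 'leak' only if at least min_votes models agree.
    stacked = np.stack(masks, axis=0)        # shape: (n_models, H, W)
    votes = stacked.sum(axis=0)
    return (votes >= min_votes).astype(np.uint8)

# Illustrative binary masks from three hypothetical models on one image tile
h, w = 256, 256
rng = np.random.default_rng(1)
mask_rcnn = (rng.random((h, w)) > 0.95).astype(np.uint8)
unet      = (rng.random((h, w)) > 0.95).astype(np.uint8)
pspnet    = (rng.random((h, w)) > 0.95).astype(np.uint8)

fused = fuse_masks([mask_rcnn, unet, pspnet], min_votes=2)
print("pixels flagged as leak:", int(fused.sum()))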
Procedia PDF Downloads 3244 Using the Smith-Waterman Algorithm to Extract Features in the Classification of Obesity Status
Authors: Rosa Figueroa, Christopher Flores
Abstract:
Text categorization is the problem of assigning a new document to a set of predetermined categories on the basis of a training set of free-text data containing documents whose category membership is known. To train a classification model, it is necessary to extract characteristics in the form of tokens that facilitate the learning and classification process. In text categorization, the feature extraction process involves the use of word sequences, also known as N-grams. In general, documents belonging to the same category are expected to share similar features. The Smith-Waterman (SW) algorithm is a dynamic programming algorithm that performs a local sequence alignment in order to determine similar regions between two strings or protein sequences. This work explores the use of the SW algorithm as an alternative to feature extraction in text categorization. The dataset used for this purpose contains 2,610 annotated documents with the classes Obese/Non-Obese. This dataset was represented in matrix form using the Bag of Words approach. The score selected to represent the occurrence of the tokens in each document was the term frequency-inverse document frequency (TF-IDF). In order to extract features for classification, four experiments were conducted: the first experiment used SW to extract features, the second used unigrams (single words), the third used bigrams (two-word sequences), and the last used a combination of unigrams and bigrams. To test the effectiveness of the extracted feature set for the four experiments, a Support Vector Machine (SVM) classifier was tuned using 20% of the dataset. The remaining 80% of the dataset, together with 5-fold cross-validation, was used to evaluate and compare the performance of the four feature extraction experiments. Results from the tuning process suggest that SW performs better than N-gram-based feature extraction. These results were confirmed on the remaining 80% of the dataset, where SW performed the best (accuracy = 97.10%, weighted average F-measure = 97.07%). The second best was obtained by the combination of unigrams and bigrams (accuracy = 96.04%, weighted average F-measure = 95.97%), closely followed by bigrams (accuracy = 94.56%, weighted average F-measure = 94.46%) and finally unigrams (accuracy = 92.96%, weighted average F-measure = 92.90%). Keywords: comorbidities, machine learning, obesity, Smith-Waterman algorithm
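For reference, a compact version of the Smith-Waterman local alignment score the study adapts to token sequences is shown below; the scoring values are conventional defaults and the example documents are invented, not the paper's parameters or data.

# Smith-Waterman local alignment score (standard dynamic programming form),
# applied here to word tokens rather than protein residues.
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    # Return the best local alignment score between sequences a and b.
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

# Works on any sequences, e.g. token lists from two clinical notes
doc1 = "patient presents with morbid obesity and hypertension".split()
doc2 = "history of morbid obesity and type two diabetes".split()
print(smith_waterman(doc1, doc2))  # higher score = longer shared local region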
Procedia PDF Downloads 29743 A Corpus-based Study of Adjuncts in Colombian English as a Second Language (ESL) Argumentative Essays
Authors: E. Velasco
Abstract:
Meeting high standards of writing in a Second Language (L2) is extremely important for many students who wish to undertake studies at universities in both English and non-English speaking countries. University lecturers in English-speaking countries continue to express dissatisfaction with the apparent poor quality of essay writing skills displayed by English as a Second Language (ESL) students, whose essays are often criticised for their lack of cohesion and coherence. These critiques have extended to contexts such as Colombia, where many ESL students are criticised for their inability to write high-quality academic texts in L2-English, particularly at the tertiary level. If Colombian ESL students are expected to meet high standards of writing when studying locally and abroad, it makes sense to carry out specific research that can perhaps lead to recommendations to support their quest for improving argumentative strategies. Employing Corpus Linguistics methods within a Learner Corpus Research framework, and a combination of Log-Likelihood and Bayes Factor measures, this paper investigated argumentative essays written by Colombian ESL students. The study specifically aimed to analyse conjunctive adjuncts in argumentative essays to find out how Colombian ESL students connect their ideas in discourse. Results suggest that a) Colombian ESL learners need explicit instruction on specific areas of conjunctive adjuncts to counteract overuse, underuse and misuse; b) underuse of endophoric and evidential adjuncts highlights gaps between IELTS-like essays and good-quality tertiary-level essays and published papers, and these gaps are linked to prior knowledge brought into the writing task, rhetorical functions in writing, and research processes before writing takes place; c) both Colombian ESL learners and L1-English writers (in a reference corpus) overuse some adjuncts and underuse endophoric and evidential adjuncts when compared to skilled L1-English and L2-English writers, so differences in frequencies of adjuncts have little to do with the writers' L1, and are rather linked to the types of essays writers produce (e.g. ESL vs. university essays). The pedagogical recommendations deriving from the study are that: a) Colombian ESL learners need to be shown that overuse is not the only way of giving cohesion to argumentative essays and there are other alternatives to cohesion (e.g., implicit adjuncts, lexical chains and collocations); b) syllabi and classroom input need to raise awareness of gaps in writing skills between IELTS-like and tertiary-level argumentative essays, and of how endophoric and evidential adjuncts are used to refer to anaphoric and cataphoric sections of essays, and to other people's work or ideas; c) syllabi and classroom input need to include essay-writing tasks based on previous research/reading which learners need to incorporate into their arguments, and tasks that raise awareness of referencing systems (e.g., APA); d) classroom input needs to include explicit instruction on use of punctuation, functions and/or syntax with specific conjunctive adjuncts such as for example, for that reason, although, despite and nevertheless. Keywords: argumentative essays, Colombian English as a Second Language (ESL) learners, conjunctive adjuncts, corpus linguistics
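As a pointer for readers unfamiliar with the keyness statistic used here, below is a minimal sketch of the log-likelihood (G2) measure in the form commonly used in corpus linguistics (following Rayson and Garside's formulation); the token counts in the example are invented.

# Log-likelihood (G2) keyness for one item observed across two corpora.
import math

def log_likelihood(freq1, size1, freq2, size2):
    # freq1/size1: count and corpus size in corpus 1; likewise for corpus 2.
    expected1 = size1 * (freq1 + freq2) / (size1 + size2)
    expected2 = size2 * (freq1 + freq2) / (size1 + size2)
    g2 = 0.0
    if freq1 > 0:
        g2 += freq1 * math.log(freq1 / expected1)
    if freq2 > 0:
        g2 += freq2 * math.log(freq2 / expected2)
    return 2 * g2

# e.g. an adjunct appearing 120 times in a 50,000-token learner corpus versus
# 60 times in a 60,000-token reference corpus
print(round(log_likelihood(120, 50_000, 60, 60_000), 2))  # > 3.84 => p < 0.05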
Procedia PDF Downloads 8442 Improve Divers Tracking and Classification in Sonar Images Using Robust Diver Wake Detection Algorithm
Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy
Abstract:
Harbor protection systems are increasingly important, and the need for automatic protection systems has grown over recent years. Active diver-detection sonar is of great significance: it is used to detect underwater threats such as divers and autonomous underwater vehicles. To detect such threats automatically, the sonar image is processed by algorithms that detect, track, and classify underwater objects. In this work, a diver tracking and classification algorithm is improved by proposing a robust wake detection method. To detect objects, the sonar image is normalized and then segmented with a fixed threshold. Next, the centroids of the segments are found and clustered based on a distance metric. A linear Kalman filter is then applied to track the objects. To reduce the effect of noise and the creation of false tracks, the Kalman tracker is fine-tuned based on our active sonar specifications. After the tracks are initialized and updated, they pass through a filtering stage that eliminates noisy and unstable tracks, as well as objects whose speed falls outside the diver speed range, such as buoys and fast boats. The resulting tracks are then subjected to a classification stage that decides the type of object being tracked, specifically whether the tracked object is an open-circuit or closed-circuit diver. At the classification stage, a small area around the object is extracted and a novel wake detection method is applied, from which morphological features of the object together with its wake are extracted. A support vector machine was used to find the best classifier. The sonar training and test images were collected by ARMELSAN Defense Technologies Company using the portable diver detection sonar ARAS-2023. After applying the algorithm to the test sonar data, we obtain fine, stable diver tracks. The total classification accuracy for diver type is 97%. Keywords: harbor protection, diver detection, active sonar, wake detection, diver classification
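To illustrate the tracking step, here is a minimal constant-velocity linear Kalman filter of the kind applied to sonar detections; the time step and noise matrices are illustrative tuning values, not the ARAS-2023 settings described in the abstract.

# Minimal constant-velocity Kalman filter over 2-D segment centroids.
import numpy as np

dt = 1.0                                   # time between sonar pings (assumed)
F = np.array([[1, 0, dt, 0],               # state transition for [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])
H = np.array([[1, 0, 0, 0],                # only position (x, y) is measured
              [0, 1, 0, 0]])
Q = 0.01 * np.eye(4)                       # process noise (tuning parameter)
R = 0.50 * np.eye(2)                       # measurement noise (tuning parameter)

x = np.zeros(4)                            # initial state
P = np.eye(4)                              # initial covariance

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measured segment centroid z = (x, y)
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [np.array([0.9, 1.1]), np.array([2.1, 2.0]), np.array([3.0, 2.9])]:
    x, P = kalman_step(x, P, z)
print("estimated speed:", np.hypot(x[2], x[3]))  # basis for speed-range gating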
Procedia PDF Downloads 23841 Clinical Audit on the Introduction of Apremilast into Ireland
Authors: F. O’Dowd, G. Murphy, M. Roche, E. Shudell, F. Keane, M. O’Kane
Abstract:
Introduction: Apremilast (Otezla®) is an oral phosphodiesterase-4 (PDE4) inhibitor indicated for the treatment of adult patients with moderate to severe plaque psoriasis who have contraindications to, have failed, or are intolerant of standard systemic therapy and/or phototherapy, and of adult patients with active psoriatic arthritis. Apremilast influences intracellular regulation of inflammatory mediators. Two randomized, placebo-controlled trials evaluating apremilast in 1426 patients with moderate to severe plaque psoriasis (ESTEEM 1 and 2) demonstrated that the commonest adverse reactions (AEs) leading to discontinuation were nausea (1.6%), diarrhoea (1.0%), and headaches (0.8%). The overall proportion of subjects discontinuing due to adverse reactions was 6.1%. At week 16, these trials demonstrated that significantly more apremilast-treated patients (33.1%) achieved the primary endpoint, PASI-75, than placebo-treated patients (5.3%). We began prescribing apremilast in July 2015. Aim: To evaluate the efficacy and tolerability of apremilast in an Irish teaching hospital psoriasis population. Methods: A proforma documenting clinical evaluation parameters, prior treatment experience, and AEs was completed prospectively for all patients commenced on apremilast from July 2015 to July 2017. Data were collected at weeks 0, 6, 12, 24, 36, and 52, with 20/71 patients having passed week 52. Efficacy was assessed using the Psoriasis Area and Severity Index (PASI) and Dermatology Life Quality Index (DLQI). AEs documented included GI effects, infections, and changes in weight and mood. Retrospective chart review and telephone review were utilised for missing data. Results: A total of 71 adult subjects (38 male, 33 female; age range 23-57), with moderate to severe psoriasis, were evaluated. Prior treatment: 37/71 (52%) were systemic/biologic/phototherapy naïve; 14/71 (20%) had prior phototherapy alone; 20/71 (28%) had previous systemic/biologic exposure; 12/71 (17%) had both psoriasis and psoriatic arthritis. PASI responses: mean baseline PASI was 10.1 and DLQI was 15. Week 6: N=71, n=15 (21%) achieved PASI 75. Week 12: N=48, n=6 (13%) achieved PASI 100; n=16 (34.5%) achieved PASI 75. Week 24: N=40, n=10 (25%) achieved PASI 100; n=15 (37.5%) achieved PASI 75. Week 52: N=20, n=4 (20%) achieved PASI 100; n=16 (80%) achieved PASI 75. (N = number of patients having passed the time point indicated; n = number of patients, out of N, achieving PASI or DLQI responses at that time.) DLQI responses: week 24: N=40, n=30 (75%) achieved a DLQI score of 0; n=5 (12.5%) achieved a DLQI score of 1; n=1 (2.5%) had a DLQI score of 10 (due to lack of efficacy). Adverse events: the proportion of patients who discontinued treatment due to AEs was n=7 (9.8%). One patient experienced nausea alleviated by dose reduction; another developed significant dysgeusia for certain foods; both continued therapy. Two patients lost 2-3 kg. Conclusion: Initial Irish patient experience of apremilast appears comparable to that observed in trials, with good efficacy and tolerability. Keywords: apremilast, introduction, Ireland, clinical audit
Procedia PDF Downloads 14940 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection using Machine Learning
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Manufacturing companies are facing global competition and enormous cost pressure. The use of machine learning applications can help reduce production costs and create added value. Predictive quality enables product quality to be secured through data-supported predictions, using machine learning models as a basis for decisions on test results. Furthermore, machine learning methods are able to process large amounts of data, deal with unfavourable row-column ratios, detect dependencies between the covariates and the given target, and assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced data sets. Changes in production data manifest themselves in trends, systematic shifts, and seasonal effects. Thus, machine learning applications require intensive pre-processing and feature selection. Pre-processing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the real data set of Bosch hydraulic valves used here, time periods with comparable production conditions in the manufacture of hydraulic valves can be identified by applying concept drift detection. Furthermore, a classification model is developed to evaluate the feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. In this research, the AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from the assembly, and hydraulic measurement data from end-of-line testing. In addition, the most suitable methods are selected and accurate quality predictions are achieved. Keywords: classification, machine learning, predictive quality, feature selection
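A hedged sketch of this kind of workflow follows: an AdaBoost classifier trained on fused process features, with impurity-based feature importances used to prune the feature set. The data, the 12-feature layout, and the importance cut-off are invented placeholders, not Bosch's data or the paper's settings.

# Hypothetical sketch: AdaBoost leakage classifier plus importance-based
# feature selection on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
# stand-ins for gauge-block, assembly, and end-of-line measurements
X = rng.normal(size=(n, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# keep only features whose importance exceeds a chosen cut-off
keep = np.where(clf.feature_importances_ > 0.05)[0]
clf_small = AdaBoostClassifier(n_estimators=200, random_state=0)
clf_small.fit(X_tr[:, keep], y_tr)
print("all features:", clf.score(X_te, y_te),
      "| reduced set:", clf_small.score(X_te[:, keep], y_te))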
Procedia PDF Downloads 16239 Regeneration of Geological Models Using Support Vector Machine Assisted by Principal Component Analysis
Authors: H. Jung, N. Kim, B. Kang, J. Choe
Abstract:
History matching is a crucial procedure for predicting reservoir performance and making future decisions. However, it is difficult due to the uncertainties of initial reservoir models. It is therefore important to have reliable initial models for successful history matching of highly heterogeneous reservoirs such as channel reservoirs. In this paper, we propose a novel scheme for regenerating geological models using support vector machine (SVM) and principal component analysis (PCA). First, we perform PCA to identify the main geological characteristics of the models. Through this procedure, the permeability values of each model are transformed to new parameters by principal components, which have eigenvalues of large magnitude. Secondly, the parameters are projected onto a two-dimensional plane by multi-dimensional scaling (MDS) based on Euclidean distances. Finally, we train an SVM classifier using the 20% of models whose well oil production rates (WOPR) are most similar or dissimilar to the true values (10% each). The other 80% of models are then classified by the trained SVM, and we select the models on the side of low WOPR errors. One hundred channel reservoir models are initially generated by single normal equation simulation. By repeating the classification process, we can select models which have a similar geological trend to the true reservoir model. The average field of the selected models is utilized as a probability map for regeneration. Newly generated models preserve correct channel features and exclude wrong geological properties while maintaining suitable uncertainty ranges. History matching with the initial models cannot provide trustworthy results: it fails to identify the correct geological features of the true model. However, history matching with the regenerated ensemble offers reliable characterization results by identifying the proper channel trend. Furthermore, it gives dependable prediction of future performance with reduced uncertainties. We propose a novel classification scheme which integrates PCA, MDS, and SVM for regenerating reservoir models. The scheme can easily sort out reliable models which have a channel trend similar to the reference in a lowered dimension space. Keywords: history matching, principal component analysis, reservoir modelling, support vector machine
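The following is a schematic version of the proposed PCA + MDS + SVM screening workflow; the random "permeability fields" and WOPR errors are placeholders standing in for the paper's simulated ensembles, and the 10-component choice is an assumption.

# Hypothetical sketch of the PCA -> MDS -> SVM model-screening pipeline.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS
from sklearn.svm import SVC

rng = np.random.default_rng(7)
ensemble = rng.normal(size=(100, 2500))      # 100 models, flattened perm grids

# 1) PCA compresses each model to its leading principal components
pcs = PCA(n_components=10).fit_transform(ensemble)
# 2) MDS projects the PC coordinates onto a 2-D plane (Euclidean distances)
plane = MDS(n_components=2, random_state=0).fit_transform(pcs)

# 3) label the 10 most similar (1) and 10 most dissimilar (0) models by WOPR error
wopr_error = rng.random(100)                 # placeholder mismatch measure
order = np.argsort(wopr_error)
train_idx = np.r_[order[:10], order[-10:]]
labels = np.r_[np.ones(10), np.zeros(10)]

svm = SVC(kernel="rbf").fit(plane[train_idx], labels)
selected = np.where(svm.predict(plane) == 1)[0]   # models kept for regeneration
print("models selected:", len(selected))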
Procedia PDF Downloads 16038 A Smartphone-Based Real-Time Activity Recognition and Fall Detection System
Authors: Manutchanok Jongprasithporn, Rawiphorn Srivilai, Paweena Pongsopha
Abstract:
Falls are among the most serious accidents, leading to increased unintentional injuries and mortality. Falls not only cause suffering and functional impairment for individuals but also increase medical costs and days away from work. Early detection of falls could help reduce fall-related injuries and their consequences. Smartphones, with embedded accelerometers, have become common everyday devices due to decreasing technology costs. This paper explores a smartphone application for physical activity monitoring and fall detection, a non-invasive biomedical tool for determining physical activities and fall events. The combination of the application and the sensors can act as a biomedical sensor to monitor physical activity and recognize a fall. We chose an Android-based smartphone for this study because the Android operating system is open-source and free of charge; moreover, Android phones account for the majority of smartphone users in Thailand. We developed Thai 3 Axis (TH3AX), a physical activity and fall detection application with commands, a manual, and results in the Thai language. The smartphone was attached to the right hip of 10 young, healthy adult subjects (5 males, 5 females; aged < 35 y) to collect accelerometer and gyroscope data while they performed physical activities (e.g., walking, running, sitting, and lying down) and falls, in order to determine a threshold for each activity. Dependent variables include accelerometer data (acceleration, peak acceleration, average resultant acceleration, and time between peak accelerations). A repeated measures ANOVA was performed to test whether there were any differences between the dependent variables' means. Statistical analyses were considered significant at p<0.05. After the thresholds were found, the results were used as training data for a predictive activity-recognition model. In future work, activity-recognition accuracy will be evaluated to assess the overall performance of the classifier. Moreover, to help improve quality of life, our system will be implemented with patients and elderly people who need intensive care in hospitals and nursing homes in Thailand. Keywords: activity recognition, accelerometer, fall, gyroscope, smartphone
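To make the threshold idea concrete, here is a minimal sketch of threshold-based fall detection on the resultant acceleration, the kind of rule the TH3AX thresholds would feed into; the 2.5 g threshold and the sample window are assumed illustrative values, not the study's results.

# Minimal threshold rule on the resultant acceleration magnitude.
import math

def resultant(ax, ay, az):
    # Resultant acceleration magnitude from the three accelerometer axes.
    return math.sqrt(ax**2 + ay**2 + az**2)

def detect_fall(samples, threshold_g=2.5):
    # Flag a fall when the peak resultant acceleration exceeds the threshold.
    peaks = [resultant(*s) for s in samples]
    return max(peaks) >= threshold_g, max(peaks)

# Illustrative window of (ax, ay, az) readings in g: a fall-like spike at the end
window = [(0.0, 0.0, 1.0), (0.1, -0.2, 1.1), (0.9, 1.4, 2.3)]
fell, peak = detect_fall(window)
print("fall detected:", fell, "| peak resultant:", round(peak, 2), "g")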
Procedia PDF Downloads 692