Search results for: reading method classification
20687 Neuro-Fuzzy Based Model for Phrase Level Emotion Understanding
Authors: Vadivel Ayyasamy
Abstract:
The present approach deals with the identification of emotions and the classification of emotional patterns at the phrase level with respect to positive and negative orientation. The proposed approach considers emotion-triggered terms, their co-occurrence terms, and associated sentences for recognizing emotions. It uses part-of-speech tagging and Emotion Actifiers for classification. Sentence patterns are broken into phrases, and a neuro-fuzzy model is used for classification, which results in 16 patterns of emotional phrases. Suitable intensities are assigned to capture the degree of emotional content present in the semantics of the patterns. These emotional phrases are assigned weights that support deciding the positive or negative orientation of emotions. The approach is evaluated on web documents, and the proposed classification approach performs well, achieving good F-scores.
Keywords: emotions, sentences, phrases, classification, patterns, fuzzy, positive orientation, negative orientation
Procedia PDF Downloads 377
20686 An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model
Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier
Abstract:
Interest in human motion recognition has increased extensively in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, content-based video compression and retrieval, etc. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem which requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) for training and classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward. This modification helps avoid the misclassification that can occur when recognizing similar motions. Two experiments are conducted. In the first one, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second experiment, we build a dataset composed of 10 gestures (introduce yourself, wave, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most existing methods that used the MSRC-12 dataset and achieves a near-perfect classification rate on our dataset.
Keywords: human motion recognition, motion representation, Laban Movement Analysis, Discrete Hidden Markov Model
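A minimal sketch of the dual-direction scoring idea described above: each class keeps one discrete HMM for forward sequences and one for reversed sequences, and a test sequence is assigned to the class with the highest combined likelihood. The parameters below are hypothetical placeholders, not the authors' trained models.

```python
# Illustrative sketch (not the authors' code): classifying a discrete observation
# sequence with two per-class HMMs, one scored on the forward sequence and one on
# the reversed sequence, as the abstract describes.
import numpy as np

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Log-likelihood of a discrete observation sequence under an HMM (forward algorithm)."""
    alpha = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for o in obs[1:]:
        alpha = np.logaddexp.reduce(alpha[:, None] + np.log(trans_p), axis=0) + np.log(emit_p[:, o])
    return np.logaddexp.reduce(alpha)

def classify(obs, models):
    """models: {label: (forward_hmm, backward_hmm)}, each HMM a (start, trans, emit) tuple.
    A class is scored by summing the forward-direction and reversed-direction likelihoods."""
    scores = {label: forward_log_likelihood(obs, *fwd) + forward_log_likelihood(obs[::-1], *bwd)
              for label, (fwd, bwd) in models.items()}
    return max(scores, key=scores.get)

# Toy example with 2 hidden states and 3 discrete symbols (hypothetical parameters).
fwd = (np.array([0.6, 0.4]),
       np.array([[0.7, 0.3], [0.4, 0.6]]),
       np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]]))
bwd = (np.array([0.5, 0.5]),
       np.array([[0.6, 0.4], [0.3, 0.7]]),
       np.array([[0.2, 0.5, 0.3], [0.6, 0.2, 0.2]]))
models = {"waving": (fwd, bwd), "stop": (bwd, fwd)}
print(classify([0, 1, 2, 2, 1], models))
```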
Procedia PDF Downloads 206
20685 Computer Aided Diagnostic System for Detection and Classification of a Brain Tumor through MRI Using Level Set Based Segmentation Technique and ANN Classifier
Authors: Atanu K Samanta, Asim Ali Khan
Abstract:
Due to the acquisition of huge amounts of brain tumor magnetic resonance images (MRI) in clinics, it is very difficult for radiologists to manually interpret and segment these images within a reasonable span of time. Computer-aided diagnosis (CAD) systems can enhance the diagnostic capabilities of radiologists and reduce the time required for accurate diagnosis. An intelligent computer-aided technique for automatic detection of a brain tumor through MRI is presented in this paper. The technique uses the following computational methods: the level set method for segmentation of the brain tumor from other brain parts, extraction of features from this segmented tumor portion using the gray-level co-occurrence matrix (GLCM), and an Artificial Neural Network (ANN) to classify brain tumor images according to their respective types. The entire work is carried out on 50 images covering five types of brain tumor. The overall classification accuracy using this method is found to be 98%, which is notably high.
Keywords: brain tumor, computer-aided diagnostic (CAD) system, gray-level co-occurrence matrix (GLCM), tumor segmentation, level set method
Procedia PDF Downloads 509
20684 A Case Study Using Sounds Write and The Writing Revolution to Support Students with Literacy Difficulties
Authors: Emilie Zimet
Abstract:
During our department meetings for teachers of children with learning disabilities and difficulties, we often discuss the best practices for supporting students who come to school with literacy difficulties. After completing the Sounds Write and The Writing Revolution courses, it seems possible to link the approaches while still maintaining fidelity to a program and providing individualised instruction to support students with such difficulties and disabilities. In this case study, the researcher has been focusing on how best to use the knowledge acquired to provide quality intervention that targets the varied areas of challenge in which students require support. Students present to school with a variety of co-occurring reading and writing deficits, and with complementary approaches such as The Writing Revolution and Sounds Write, it is possible to support students to improve their fundamental skills in these key areas. Over the next twelve weeks, the researcher will collect data on current students with whom this approach will be trialled and then compare growth with students from last year who received support using Sounds Write only. Maintaining fidelity may be a potential challenge, as each approach has been tested in a specific format for best results. The aim of this study is to determine whether the approaches can be combined, so the implementation will need to incorporate elements of both reading (from Sounds Write) and writing (from The Writing Revolution). A further challenge is the length of each session (25 minutes), so the researcher will need to be creative in the use of time to ensure both writing and reading are targeted while ensuring the programs are implemented. The implementation will be documented using student work samples and planning documents. This work will include a display of findings using student learning samples to demonstrate the importance of co-targeting the reading and writing challenges students come to school with.
Keywords: literacy difficulties, intervention, individual differences, methods of provision
Procedia PDF Downloads 52
20683 Efficient Schemes of Classifiers for Remote Sensing Satellite Imageries of Land Use Pattern Classifications
Authors: S. S. Patil, Sachidanand Kini
Abstract:
Classification of land use patterns is challenging because of the complexity and variability of remote sensing imagery data. This research exploits remote sensing to mine significant, spatially variable factors, such as land cover and land use, from satellite images of remote arid areas in Karnataka State, India. Diverse classification techniques, unsupervised and supervised (maximum likelihood, Mahalanobis distance, and minimum distance), are applied in Bellary District, Karnataka State, India, for the classification of the raw satellite images. The accuracy of the results is evaluated by visual comparison with standard maps with ground truths. The maximum likelihood technique gave the finest results, while both the minimum distance and Mahalanobis distance methods overvalued agricultural land areas. Despite missing a few irrelevant features due to the low resolution of the satellite images, high-quality agreement was found between parameters extracted automatically from the developed maps and field observations.
Keywords: Mahalanobis distance, minimum distance, supervised, unsupervised, user classification accuracy, producer's classification accuracy, maximum likelihood, kappa coefficient
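A minimal sketch of two of the supervised techniques named above, minimum-distance and Mahalanobis-distance classification of pixel feature vectors; the training data here is synthetic and purely illustrative, not the study's imagery.

```python
# Illustrative sketch (not the authors' pipeline): minimum-distance and
# Mahalanobis-distance classification of pixel feature vectors.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic spectral features for three hypothetical classes (e.g. water, soil, crop).
X_train = np.vstack([rng.normal(m, 0.3, size=(50, 4)) for m in (0.0, 1.0, 2.0)])
y_train = np.repeat([0, 1, 2], 50)

means = np.array([X_train[y_train == c].mean(axis=0) for c in range(3)])
inv_covs = [np.linalg.inv(np.cov(X_train[y_train == c], rowvar=False)) for c in range(3)]

def minimum_distance(x):
    # Assign to the class whose mean is closest in Euclidean distance.
    return int(np.argmin(np.linalg.norm(means - x, axis=1)))

def mahalanobis(x):
    # Assign to the class with the smallest Mahalanobis distance.
    d = [float((x - m) @ ic @ (x - m)) for m, ic in zip(means, inv_covs)]
    return int(np.argmin(d))

pixel = rng.normal(1.0, 0.3, size=4)
print(minimum_distance(pixel), mahalanobis(pixel))
```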
Procedia PDF Downloads 182
20682 Autism Spectrum Disorder Classification Algorithm Using Multimodal Data Based on Graph Convolutional Network
Authors: Yuntao Liu, Lei Wang, Haoran Xia
Abstract:
Machine learning has shown extensive applications in the development of classification models for autism spectrum disorder (ASD) using neuroimaging data. This paper proposes a fusion multimodal classification network based on a graph neural network. First, the brain is segmented into 116 regions of interest using a medical segmentation template (AAL, Anatomical Automatic Labeling). The image features of sMRI and the signal features of fMRI are extracted, which build the node and edge embedding representations of the brain map. Then, we construct a dynamically updated brain map neural network and propose a method based on a dynamic brain map adjacency matrix update mechanism and a learnable graph to further improve the accuracy of autism diagnosis and recognition. Based on the Autism Brain Imaging Data Exchange I dataset (ABIDE I), we reached a prediction accuracy of 74% between ASD and TD subjects. Besides, to study biomarkers that can help doctors analyze diseases and to support interpretability, we extracted the features with the top five maximum and minimum ROI weights. This work provides a meaningful way for brain disorder identification.
Keywords: autism spectrum disorder, brain map, supervised machine learning, graph network, multimodal data, model interpretability
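A minimal sketch of the core graph-convolution step such a brain-graph network relies on, with 116 nodes standing in for the AAL regions; the adjacency matrix, node features, and weights below are random placeholders, not the authors' model.

```python
# Illustrative sketch: one graph-convolution propagation step
# H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W) over a 116-node brain graph.
import numpy as np

rng = np.random.default_rng(1)
n_rois, n_feat, n_hidden = 116, 8, 16

A = (rng.random((n_rois, n_rois)) > 0.9).astype(float)   # placeholder fMRI-derived connectivity
A = np.maximum(A, A.T)                                    # make it symmetric
H = rng.normal(size=(n_rois, n_feat))                     # placeholder sMRI/fMRI node features
W = rng.normal(scale=0.1, size=(n_feat, n_hidden))        # learnable weights (random here)

A_hat = A + np.eye(n_rois)                                # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H_next = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)  # ReLU

# Graph-level readout that could feed an ASD-vs-TD classifier.
graph_embedding = H_next.mean(axis=0)
print(graph_embedding.shape)  # (16,)
```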
Procedia PDF Downloads 64
20681 Job Shop Scheduling: Classification, Constraints and Objective Functions
Authors: Majid Abdolrazzagh-Nezhad, Salwani Abdullah
Abstract:
The job-shop scheduling problem (JSSP) is an important decision facing those involved in the fields of industry, economics and management. It is a combinatorial optimization problem known to be NP-hard. JSSPs deal with a set of machines and a set of jobs with various predetermined routes through the machines, where the objective is to assemble a schedule of jobs that minimizes certain criteria such as makespan, maximum lateness, and total weighted tardiness. Over the past several decades, interest in meta-heuristic approaches to address JSSPs has increased due to the ability of these approaches to generate solutions which are better than those generated from heuristics alone. This article provides the classification, constraints and objective functions imposed on JSSPs that are available in the literature.
Keywords: job-shop scheduling, classification, constraints, objective functions
Procedia PDF Downloads 442
20680 A General Framework for Knowledge Discovery Using High Performance Machine Learning Algorithms
Authors: S. Nandagopalan, N. Pradeep
Abstract:
The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high-performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as the active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, the Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks like clustering and classification with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than previously reported results.
Keywords: active contour, Bayesian, echocardiographic image, feature vector
Procedia PDF Downloads 417
20679 Written Argumentative Texts in Elementary School: The Development of Text Structure and Its Relation to Reading Comprehension
Authors: Sara Zadunaisky Ehrlich, Batia Seroussi, Anat Stavans
Abstract:
Text structure is a parameter of text quality. This study investigated the structure of written argumentative texts produced by elementary-school-age children. We set two objectives: to identify and trace the structural components of the argumentative texts, and to investigate whether reading comprehension skills were correlated with text structure. 293 school children from 2nd to 5th grade were asked to write two argumentative texts about informal or everyday-life controversial topics and completed two reading tasks that targeted different levels of text comprehension. The findings indicated, on the one hand, significant developmental differences between mature and more novice writers in terms of text length and the mean proportion of clauses produced for a better elaboration of the different text components. On the other hand, with certain fluctuations, no meaningful differences were found in terms of the presence of text structure: at all grade levels, elementary school children produced the basic and minimal structure that included the writer's argument and the reasons or supports for that argument. Counter-arguments were scarce even in the upper grades. While the children grasped that an argument must essentially be justified, the more supports the children produced, the fewer clauses they produced. Last, weak to mild relations were found between reading comprehension and argumentative text structure. Nevertheless, children who scored higher on sophisticated questions that require inferential or world knowledge displayed more elaborated structures in terms of text length and the size of supports for the writer's argument. These findings indicate how school-age children perceive the basic template of an argument, with future implications regarding how to elaborate written arguments.
Keywords: argumentative text, text structure, elementary school children, written argumentations
Procedia PDF Downloads 164
20678 Development and Acceptance of a Proposed Module for Enhancing the Reading and Writing Skills in Baybayin: The Traditional Writing System in the Philippines
Authors: Maria Venus G. Solares
Abstract:
The ancient Filipinos had their own script that differed from the modern Roman alphabet brought by the Spaniards. It consists of seventeen letters, three vowels and fourteen consonants, and is called Baybayin. The word Baybayin is a Tagalog word that refers to all the letters used in writing a language, an alphabet; however, it is also a syllabic script. House Bill 4395, first proposed by Rep. Leopoldo Bataoil of the second district of Pangasinan in 2011, which later became House Bill 1022, the Declaration of the Baybayin as the National Writing System of the Philippines, prompted the researcher to conduct a study on the topic. The main objective of this study was to develop and assess the proposed module for enhancing the students' reading and writing skills in Baybayin. The researchers wanted to ensure the acceptability of Baybayin using the proposed module and to meet the needs of students in developing their ability to read and write Baybayin through the module. The researchers used a quasi-experimental design in this study. The data were collected through the initial and final assessment of Adamson University's ABM 1102 students using convenience sampling techniques. Based on statistical analysis of the data using the weighted mean, standard deviation, and paired t-tests, the proposed module helped improve the students' literacy skills, and the exercises in the proposed module changed the acceptability of Baybayin in their minds. The study showed that there was a significant difference in the scores of students before and after the use of the module. The students' response to the assessment of their reading and writing skills in Baybayin was highly acceptable. This study will help develop the students' reading and writing skills in Baybayin and support teaching Baybayin in response to the revival of a part of Philippine culture that has long been forgotten.
Keywords: Baybayin, proposed module, skill, acceptability
Procedia PDF Downloads 145
20677 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis
Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya
Abstract:
In this study, our goal was to perform tumor staging and subtype determination automatically using different texture analysis approaches for a very common cancer type, i.e., non-small cell lung carcinoma (NSCLC). In particular, we introduce a texture analysis approach, Laws' texture filters, used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated. The number of patients for each tumor stage, i.e., I-II, III or IV, was 14. Approximately 45% of the patients had adenocarcinoma (ADC) and ~55% had squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed in the extraction of 51 features using first-order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws' texture filters. The feature selection method employed was sequential forward selection (SFS). Selected textural features were used in automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with SVM (one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with the one-versus-one SVM. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and in the automatic classification of tumor stage and subtype.
Keywords: cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis
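A minimal sketch of the GLCM-feature, sequential-forward-selection, and k-NN portion of such a pipeline, written in Python rather than the MATLAB used in the study; the image patches, labels, and chosen GLCM properties are synthetic, illustrative stand-ins.

```python
# Illustrative sketch (not the authors' MATLAB pipeline): GLCM texture features,
# sequential forward selection, and k-NN classification on synthetic patches.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def glcm_features(patch):
    """A few GLCM texture descriptors for an 8-bit grayscale patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation", "dissimilarity")
    return np.array([graycoprops(glcm, p).mean() for p in props])

# Synthetic "tumor" patches: two classes with different intensity ranges.
patches = [rng.integers(0, 256, (32, 32), dtype=np.uint8) // (1 + c)
           for c in (0, 1) for _ in range(20)]
labels = np.repeat([0, 1], 20)
X = np.array([glcm_features(p) for p in patches])

knn = KNeighborsClassifier(n_neighbors=3)
sfs = SequentialFeatureSelector(knn, n_features_to_select=3, direction="forward")
X_sel = sfs.fit_transform(X, labels)
print(knn.fit(X_sel, labels).score(X_sel, labels))
```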
Procedia PDF Downloads 325
20676 Myanmar Character Recognition Using Eight Direction Chain Code Frequency Features
Authors: Kyi Pyar Zaw, Zin Mar Kyu
Abstract:
Character recognition is the process of converting a text image file into an editable and searchable text file. Feature extraction is the heart of any character recognition system; the recognition rate may be low or high depending on the extracted features. In this paper, 25 features per character are used for recognition. There are three basic steps in character recognition: character segmentation, feature extraction, and classification. In the segmentation step, a horizontal cropping method is used for line segmentation and a vertical cropping method for character segmentation. In the feature extraction step, features are extracted in two ways. First, 8 features are extracted from the entire input character using eight-direction chain code frequency extraction. Second, the input character is divided into 16 blocks; for each block, although 8 feature values are obtained through the eight-direction chain code frequency extraction method, the sum of these 8 values is defined as one feature for the block, so 16 features are extracted from the 16 blocks. We use the number-of-holes feature to cluster similar characters. With these features, we can recognize almost all common Myanmar characters in various font sizes. All 25 features are used in both the training and testing stages. In the classification step, characters are classified by matching all the features of the input character against the trained character features.
Keywords: chain code frequency, character recognition, feature extraction, features matching, segmentation
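A minimal sketch of the eight-direction (Freeman) chain code frequency descriptor described above, computed here for a toy contour; this illustrates the idea rather than the authors' implementation, and the per-block split is omitted.

```python
# Illustrative sketch: 8-direction chain code frequency histogram for an ordered contour.
import numpy as np

# Freeman chain code: map a step (dx, dy) between consecutive contour pixels to a direction 0-7.
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code_histogram(contour):
    """Normalized frequency of each of the 8 chain-code directions along a contour."""
    hist = np.zeros(8)
    for (x0, y0), (x1, y1) in zip(contour, contour[1:]):
        step = (int(np.sign(x1 - x0)), int(np.sign(y1 - y0)))
        if step in DIRECTIONS:
            hist[DIRECTIONS[step]] += 1
    return hist / max(hist.sum(), 1)

# Toy contour: the boundary of a small square, traversed counter-clockwise.
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1), (0, 0)]
print(chain_code_histogram(square))  # equal mass on directions 0, 2, 4, 6
```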
Procedia PDF Downloads 318
20675 The Effects of the Inference Process in Reading Texts in Arabic
Authors: May George
Abstract:
Inference plays an important role in the learning process and can lead to rapid acquisition of a second language. When learning a non-native language, i.e., a critical language like Arabic, students depend on the teacher's support most of the time to learn new concepts. The students focus on memorizing new vocabulary and stress learning all the grammatical rules. Hence, the students become mechanical and cannot produce the language easily. As a result, relying heavily on the teacher, they are unable to predict the meaning of words in context; they cannot link their prior knowledge or even identify the meaning of words without the teacher's support. This study explores how the teacher guides students' learning during the inference process and which learning processes can direct students' inference.
Keywords: inference, reading, Arabic, language acquisition
Procedia PDF Downloads 530
20674 Comparison of Linear Discriminant Analysis and Support Vector Machine Classifications for Electromyography Signals Acquired at Five Positions of Elbow Joint
Authors: Amna Khan, Zareena Kausar, Saad Malik
Abstract:
Biomechatronics has extended applications in the field of rehabilitation, contributing since World War II to improving the applicability of prostheses and assistive devices in real-life scenarios. In this paper, classification accuracies are compared for two classifiers across five positions of the elbow. Electromyography (EMG) signals were acquired directly from the skeletal muscles of the human forearm for each of the three defined positions and at modified extreme positions of elbow flexion and extension, using the 8-electrode Myo armband sensor. Features were extracted from the filtered EMG signals for each position. The performance of two classifiers, support vector machine (SVM) and linear discriminant analysis (LDA), was compared by analyzing the classification accuracies. SVM yielded classification accuracies between 90% and 96%, in contrast to 84-87% for LDA, over the five defined elbow positions, keeping the number of samples and the selected features the same for both SVM and LDA.
Keywords: classification accuracies, electromyography, linear discriminant analysis (LDA), Myo armband sensor, support vector machine (SVM)
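A minimal sketch of this kind of SVM-versus-LDA comparison on windowed, multi-channel features; the synthetic data, window counts, and feature choice below are illustrative assumptions, not the study's recordings.

```python
# Illustrative sketch: comparing SVM and LDA accuracy on per-window features from
# multi-channel "EMG" signals (synthetic stand-ins).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_positions, windows_per_position, n_channels = 5, 60, 8

# Hypothetical features: e.g. mean absolute value per channel for each analysis window.
X = np.vstack([rng.normal(loc=pos, scale=1.5, size=(windows_per_position, n_channels))
               for pos in range(n_positions)])
y = np.repeat(np.arange(n_positions), windows_per_position)

for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)), ("LDA", LinearDiscriminantAnalysis())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```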
Procedia PDF Downloads 366
20673 Neural Network Based Decision Trees Using Machine Learning for Alzheimer's Diagnosis
Authors: P. S. Jagadeesh Kumar, Tracy Lin Huan, S. Meenakshi Sundaram
Abstract:
Alzheimer's disease is one of the most prevalent ailments for which no imminent cure or effective therapy has yet been established. A probable surge in the number of patients in the upcoming years has consequently generated an enormous deal of interest in early detection of the disorder, which could conceivably lead to enhanced treatment outcomes. Complex atrophy of the brain is an observable symptom of the disease, as is the unique recognition of its genetic signs. Machine learning, alongside deep learning and decision trees, reinforces the ability to learn characteristics from multi-dimensional data and thus simplifies automatic classification of Alzheimer's disease. Extensive testing was designed and carried out to train and assess Alzheimer's disease classification built on machine learning advances. It was observed that decision trees trained with a deep neural network produced excellent results compared with related pattern classification approaches.
Keywords: Alzheimer's diagnosis, decision trees, deep neural network, machine learning, pattern classification
Procedia PDF Downloads 295
20672 Outbound Tourism in Developed Countries: Analysis of the Trends, Behavior and the Transformation of the Moroccan Demand for International Travels
Authors: M. Boukhrouk, R. Ed-Dali
Abstract:
Outbound tourism in Morocco, as in the majority of developing countries, reveals some of the aspects of inequality between the north and the south. Considered by some researchers as one of the facets of the development crisis, access to tourism, and especially international tourism, is a chance for a small minority with financial means, while vast portions of the population dream instead of emigrating to a developed country for the sake of improving their standard of living. The right to travel is also limited by visa requirements, procedures in host countries, and security and technical measures, and this creates discrimination in the practice of tourism. These conditions do not seem favorable to the democratization of international tourism for the populations of the southern countries. This paper is a contribution to the reading of the trends of outbound tourism in developing countries through the example of Morocco. It highlights the different aspects of Moroccan outbound tourism, destinations, and the behavior of tourists through an analysis of the offerings of a sample of 50 travel agencies. In the same vein, it offers a reading grid of the possibilities for the development of outbound tourism and the various existing obstacles to the democratization of international outbound tourism in the southern countries. This reading reveals the transformation in the behavior of Moroccan international tourists as well as the profound changes in Moroccan society, through a model of statistical analysis.
Keywords: demand, Hajj, Morocco, outbound tourism, tendency, Umrah
Procedia PDF Downloads 173
20671 Curvelet Features with Mouth and Face Edge Ratios for Facial Expression Identification
Authors: S. Kherchaoui, A. Houacine
Abstract:
This paper presents a facial expression recognition system. It performs identification and classification of the seven basic expressions: happiness, surprise, fear, disgust, sadness, anger, and the neutral state. It consists of three main parts. The first is the detection of the face and the corresponding facial features to extract the most expressive portion of the face, followed by normalization of the region of interest. Then curvelet coefficients are computed, with dimensionality reduction through principal component analysis. The resulting coefficients are combined with two ratios, the mouth ratio and the face edge ratio, to constitute the whole feature vector. The third step is the classification of the emotional state using the SVM method in the feature space.
Keywords: facial expression identification, curvelet coefficient, support vector machine (SVM), recognition system
Procedia PDF Downloads 231
20670 Effect of Cement Amount on California Bearing Ratio Values of Different Soil
Authors: Ayse Pekrioglu Balkis, Sawash Mecid
Abstract:
Due to the continued growth and rapid development of road construction worldwide, and because road sub-layers consist of soil layers, identification of soil type and soil behavior under different conditions helps us select soil according to specifications and engineering characteristics and, if necessary, stabilize the soil and treat its undesirable properties by adding materials such as bitumen, lime, or cement. If the soil beneath the road is not prepared according to the standards, construction will need more time. In that case, a large part of the soil should be removed, transported, and sometimes deposited; then purchased sand and gravel are transported to the site, and the full depth is filled and compacted. Stabilization with cement or other treatments gives an opportunity to use the existing soil as a base material instead of removing it and purchasing and transporting better fill materials. Classification of soil according to the AASHTO system and the USCS helps engineers anticipate soil behavior and select the best treatment method. In this study, soil classification and the relation between soil classification and stabilization method are discussed, and cement stabilization with different percentages has been selected for soil treatment based on NCHRP. There are different parameters to define the strength of soil; in this study, the CBR is used. Cement at 0%, 3%, 7%, and 10% was added to the soil to evaluate the effect of added cement on the CBR of the treated soil. Implementing the stabilization process with different cement contents helps engineers select an economic cement amount for stabilization according to project specifications and characteristics. Stabilization at optimum moisture content (OMC) and different mixing rates was carried out in the laboratory and in field construction operations to observe the improvement in strength and plasticity. Cement stabilization is quicker than a universal method such as removing and replacing field soils. Cement addition increases the CBR values of different soil types by 22-69%.
Keywords: California Bearing Ratio, cement stabilization, clayey soil, mechanical properties
Procedia PDF Downloads 394
20669 Optimizing the Readability of Orthopaedic Trauma Patient Education Materials Using ChatGPT-4
Authors: Oscar Covarrubias, Diane Ghanem, Christopher Murdock, Babar Shafiq
Abstract:
Introduction: ChatGPT is an advanced language AI tool designed to understand and generate human-like text. The aim of this study is to assess the ability of ChatGPT-4 to rewrite orthopaedic trauma patient education materials at the recommended 6th-grade level. Methods: Two independent reviewers accessed ChatGPT-4 (chat.openai.com) and gave identical instructions to simplify the readability of provided text to a 6th-grade level. All trauma-related articles by the Orthopaedic Trauma Association (OTA) and American Academy of Orthopaedic Surgeons (AAOS) were sequentially provided. The academic grade level was determined using the Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease (FRE). Paired t-tests and Wilcoxon rank-sum tests were used to compare the FKGL and FRE between the ChatGPT-4-revised and original text. The intraclass correlation coefficient (ICC) was used to assess variability in ChatGPT-4-generated text between the two reviewers. Results: ChatGPT-4 significantly reduced FKGL and increased FRE scores in the OTA (FKGL: 5.7±0.5 compared to the original 8.2±1.1, FRE: 76.4±5.7 compared to the original 65.5±6.6, p < 0.001) and AAOS articles (FKGL: 5.8±0.8 compared to the original 8.9±0.8, FRE: 76±5.5 compared to the original 56.7±5.9, p < 0.001). On average, 14.6% of OTA and 28.6% of AAOS articles required at least two revisions by ChatGPT-4 to achieve a 6th-grade reading level. The ICC demonstrated poor reliability for FKGL (OTA 0.24, AAOS 0.45) and moderate reliability for FRE (OTA 0.61, AAOS 0.73). Conclusion: This study provides a novel, simple and efficient method using language AI to optimize the readability of patient education content, which may only require the surgeon's final proofreading. This method would likely be as effective for other medical specialties.
Keywords: artificial intelligence, AI, chatGPT, patient education, readability, trauma education
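A minimal sketch of how the FRE and FKGL scores used above are computed (the standard Flesch formulas with a naive syllable counter); the example sentences are invented, and this is not the software the study used.

```python
# Illustrative sketch: Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKGL).
import re

def count_syllables(word):
    """Rough vowel-group syllable count; adequate for illustrating the formulas."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(len(groups), 1)

def readability(text):
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    w, s = len(words), sentences
    fre = 206.835 - 1.015 * (w / s) - 84.6 * (syllables / w)
    fkgl = 0.39 * (w / s) + 11.8 * (syllables / w) - 15.59
    return fre, fkgl

original = "The distal radius fracture requires immobilization and subsequent radiographic evaluation."
simplified = "Your wrist is broken. We will put it in a cast and check it again with an X-ray."
for label, t in [("original", original), ("simplified", simplified)]:
    fre, fkgl = readability(t)
    print(f"{label}: FRE={fre:.1f}, FKGL={fkgl:.1f}")
```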
Procedia PDF Downloads 71
20668 Classification System for Soft Tissue Injuries of Face: Bringing Objectiveness to Injury Severity
Authors: Garg Ramneesh, Uppal Sanjeev, Mittal Rajinder, Shah Sheerin, Jain Vikas, Singla Bhupinder
Abstract:
Introduction: Despite advances in trauma care, a classification system for soft tissue injuries of the face still needs to be objectively defined. Aim: To develop a classification system for soft tissue injuries of the face that is objective, easy to remember, reproducible, universally applicable, aids surgical management, and helps to develop structured data that can be used in the future. Material and Methods: This classification system includes those patients who need surgical management of facial injuries; associated underlying bony fractures have been intentionally excluded. Depending upon the severity of the soft tissue injury, injuries are graded from 0 to IV (0: abrasions, I: lacerations, II: avulsion injuries with no skin loss, III: avulsion injuries with skin loss that would need graft or flap cover, and IV: complex injuries). Anatomically, the face has been divided into three zones (Zones 1/2/3), as per aesthetic subunits. Zone 1e stands for injury of the eyebrows; Zones 2a/b/c stand for the nose, upper eyelid, and lower eyelid, respectively; Zones 3a/b/c stand for the upper lip, lower lip, and cheek, respectively. Suffixes R and L stand for the right or left involved side, B for the presence of a foreign body like glass or pellets, C for extensive contamination, and D for depth, which can be graded as D1/2/3 if the depth reaches fat, muscle, or bone, respectively. I stands for damage to the facial nerve or parotid duct. Results and Conclusions: This classification system is easy to remember, clinically applicable, and would help in the standardization of surgical management of soft tissue injuries of the face. Certain inherent limitations of this classification system are its inability to classify sutured wounds, hematomas, and injuries along or against Langer's lines.
Keywords: soft tissue injuries, face, avulsion, classification
Procedia PDF Downloads 382
20667 Exploring Reading into Writing: A Corpus-Based Analysis of Postgraduate Students' Literature Review Essays
Authors: Tanzeela Anbreen, Ammara Maqsood
Abstract:
Reading into writing is one of university students' most required academic skills. The current study explored postgraduate university students' writing quality using a corpus-based approach. Twelve postgraduate students' literature review essays were chosen for the corpus-based analysis. These essays were chosen because students had to incorporate multiple reading sources, which was a new writing exercise for them. The students were provided feedback at least two times, comprising written comments by the tutor highlighting the areas for improvement and the use of the 'track changes' function. This exercise was repeated two times, and students submitted two drafts; this investigation included only the final submitted work. A corpus-based approach was adopted to analyse the essays because it promotes autonomous discovery and personalised learning. The aim of this analysis was to understand the existing level of students' writing before the start of their postgraduate thesis. Text Inspector was used to analyse the quality of the essays. With the help of the Text Inspector tool, the vocabulary used in the essays was compared to the English Vocabulary Profile (EVP), which describes what learners know and can do at each Common European Framework of Reference (CEFR) level. Writing quality was also measured with the Flesch reading ease score, a standard measure of how easy written content is to understand. The results reflected that students found writing essays using multiple sources challenging. In most essays, the vocabulary level achieved was between the B1 and B2 CEFR levels. The study recommends that students need extensive training in developing academic writing skills, particularly in writing literature-review-type assignments, which require citing multiple sources.
Keywords: literature review essays, postgraduate students, corpus-based analysis, vocabulary proficiency
Procedia PDF Downloads 71
20666 A Research Analysis on the Source Technology and Convergence Types
Authors: Kwounghee Choi
Abstract:
Technological convergence between various sectors is expected to have a very large impact on future industry and the economy. This study attempts an empirical approach to the classification of specific technologies. For technological convergence classification, it is necessary to set the target technologies to be analyzed; this study selected target technologies from a national research and development plan. First, we identified a source technology for analysis. Depending on the weight of the source technology, NT-based, BT-based, IT-based, ET-based, and CS-based convergence types were classified. This study aims to empirically illustrate the concept of convergence technology and convergence types. Using the source technology to classify convergence types will be useful for making practical strategies for convergence technology.
Keywords: technology convergence, source technology, convergence type, R&D strategy, technology classification
Procedia PDF Downloads 483
20665 GPRS Based Automatic Metering System
Authors: Constant Akama, Frank Kulor, Frederick Agyemang
Abstract:
All over the world, due to increasing population, electric power distribution companies are looking for more efficient ways of reading electricity meters. In Ghana, the prepaid metering system was introduced in 2007 to replace the manual system of reading, which was fraught with inefficiencies. However, the prepaid system in Ghana is not capable of integration with online systems such as e-commerce platforms and remote monitoring systems. In this paper, we present a design framework for an automatic metering system that can be integrated with e-commerce platforms and remote monitoring systems. The meter was designed using the ADE7755, which reads the energy consumption; the reading is processed by a microcontroller connected to a SIM900 General Packet Radio Service module containing a GSM chip provisioned with an Access Point Name. The system also has a billing server and a management server located at the premises of the utility company, which communicate with the meter over a Virtual Private Network and GPRS. With this system, customers can buy credit online, and the credit is transferred securely to the meter. Also, when a fault is reported, the utility company can log into the meter remotely through the management server to troubleshoot the problem.
Keywords: access point name, general packet radio service, GSM, virtual private network
Procedia PDF Downloads 298
20664 Machine Learning for Feature Selection and Classification of Systemic Lupus Erythematosus
Authors: H. Zidoum, A. AlShareedah, S. Al Sawafi, A. Al-Ansari, B. Al Lawati
Abstract:
Systemic lupus erythematosus (SLE) is an autoimmune disease with genetic and environmental components. SLE is characterized by a wide variability of clinical manifestations and a course frequently subject to unpredictable flares. Despite recent progress in classification tools, the early diagnosis of SLE is still an unmet need for many patients. This study proposes an interpretable disease classification model that combines the high and efficient predictive performance of CatBoost with the model-agnostic interpretation tools of Shapley Additive exPlanations (SHAP). The CatBoost model was trained on a local cohort of 219 Omani patients with SLE as well as other control diseases. Furthermore, the SHAP library was used to generate individual explanations of the model's decisions as well as to rank clinical features by contribution. Overall, we achieved an AUC score of 0.945 and an F1-score of 0.92, and identified four clinical features (alopecia, renal disorders, cutaneous lupus, and hemolytic anemia), along with the patient's age, that were shown to have the greatest contribution to the prediction.
Keywords: feature selection, classification, systemic lupus erythematosus, model interpretation, SHAP, CatBoost
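A minimal sketch of the CatBoost-plus-SHAP pattern the abstract describes; the cohort, labels, and feature names below are synthetic, hypothetical stand-ins rather than the study's data.

```python
# Illustrative sketch: train a CatBoost classifier on tabular clinical features and
# rank feature contributions with SHAP.
import numpy as np
import shap
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
feature_names = ["alopecia", "renal_disorder", "cutaneous_lupus", "hemolytic_anemia", "age"]

# Synthetic cohort: binary clinical findings plus age, with a label loosely tied to them.
X = np.column_stack([rng.integers(0, 2, 300) for _ in range(4)] + [rng.integers(18, 70, 300)])
y = (X[:, :4].sum(axis=1) + rng.normal(0, 0.8, 300) > 2).astype(int)

model = CatBoostClassifier(iterations=200, depth=4, verbose=False)
model.fit(X, y)

# SHAP values explain each prediction; mean |SHAP| ranks features by overall contribution.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
ranking = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, ranking), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```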
Procedia PDF Downloads 81
20663 Design and Implementation of a Counting and Differentiation System for Vehicles through Video Processing
Authors: Derlis Gregor, Kevin Cikel, Mario Arzamendia, Raúl Gregor
Abstract:
This paper presents a self-sustaining mobile system for counting and classifying vehicles through video processing. It proposes a counting and classification algorithm divided into four steps that can be executed multiple times in parallel on an SBC (Single Board Computer), such as the Raspberry Pi 2, so that it can run in real time. The first step of the proposed algorithm limits the zone of the image that will be processed. The second step detects the moving objects using a BGS (Background Subtraction) algorithm based on the GMM (Gaussian Mixture Model), as well as a shadow removal algorithm using physically based features, followed by morphological operations. In the third step, vehicle detection is performed using edge detection algorithms, and vehicles are tracked with Kalman filters. The last step of the proposed algorithm registers each passing vehicle and classifies it according to its area. A self-sustaining system is proposed, powered by batteries and photovoltaic solar panels, with data transmission over GPRS (General Packet Radio Service), eliminating the need for external cabling, which will facilitate its deployment and relocation to any site where it could operate. The self-sustaining trailer will allow the counting and classification of vehicles in specific zones with difficult access.
Keywords: intelligent transportation system, object detection, vehicle counting, vehicle classification, video processing
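A minimal sketch of the background-subtraction and area-based classification steps (steps two and four above) using OpenCV; the video path and area thresholds are placeholders, and per-frame blob counting here is a simplification of the Kalman-filter tracking the paper uses.

```python
# Illustrative sketch: GMM background subtraction, morphological cleanup, and
# area-based vehicle classification with OpenCV.
import cv2

cap = cv2.VideoCapture("traffic.mp4")           # placeholder video file
bgs = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

counts = {"small": 0, "large": 0}
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bgs.apply(frame)
    mask[mask == 127] = 0                        # drop pixels MOG2 marks as shadow
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        area = cv2.contourArea(c)
        if area < 500:                           # hypothetical area thresholds
            continue
        counts["small" if area < 5000 else "large"] += 1
    # (A real counter would track each vehicle across frames, e.g. with a Kalman
    # filter, so that the same vehicle is not counted once per frame.)

cap.release()
print(counts)
```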
Procedia PDF Downloads 319
20662 Relationship Between Reading Comprehension and Achievement in Science Among Grade Eleven Bilingual Students in a Secondary School, Thailand
Authors: Simon Mauma Efange
Abstract:
The main aims of this research were, first, to describe, in correlational terms, the relationship, if any, between reading comprehension and academic achievement in science studied at the secondary level, and second, to find out possible trends in gender differences, such as whether boys would perform better than girls or vice versa. This research employed a quantitative design. Two kinds of instruments were employed: the Oxford Online Placement Test and the Local Assessment System Test. The Oxford Online Placement Test assesses students' English level quickly and easily. The results of these tests were subjected to statistical analysis using the statistical software SPSS. Statistical tools such as the mean, standard deviation, percentages, frequencies, t-tests, and Pearson's coefficient of correlation were used for the analysis of the results. Results of the t-test showed that the means are significantly different; the p-value indicated that the results were statistically significant at p < .05. The value of r (Pearson correlation coefficient) was 0.2868. Although technically there is a positive correlation, the relationship between the variables is only weak (the closer the value is to zero, the weaker the relationship). In conclusion, the t-test calculations in SPSS revealed results that were statistically significant at p < .05, confirming a relationship between the two variables, with high scores in reading corresponding to slightly higher scores in science. The research also revealed that having a high score in reading comprehension does not necessarily mean having a high score in science or vice versa. Female subjects performed much better than male subjects in both tests, which is in line with the literature reviewed for this research.
Keywords: achievement in science, achievement in English, bilingual students, relationship
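A minimal sketch of the paired t-test and Pearson correlation reported above, using SciPy instead of SPSS; the score vectors are synthetic and only illustrate the procedure.

```python
# Illustrative sketch: paired t-test and Pearson correlation between two score sets.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
reading = rng.normal(60, 10, 80)
science = 0.3 * reading + rng.normal(40, 9, 80)   # weak positive dependence, like r ~ 0.29

t_stat, t_p = stats.ttest_rel(reading, science)   # paired t-test between the two score sets
r, r_p = stats.pearsonr(reading, science)

print(f"paired t-test: t = {t_stat:.2f}, p = {t_p:.4f}")
print(f"Pearson r = {r:.3f} (p = {r_p:.4f})")
```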
Procedia PDF Downloads 46
20661 Microarray Gene Expression Data Dimensionality Reduction Using PCA
Authors: Fuad M. Alkoot
Abstract:
Different experimental technologies such as microarray sequencing have been proposed to generate high-resolution genetic data, in order to understand the complex dynamic interactions between complex diseases and the biological system components of genes and gene products. However, the generated samples have a very large dimension, reaching thousands of features, hindering all attempts to design a classifier system that can identify diseases based on such data. Additionally, the high overlap in the class distributions makes the task more difficult. The data we experiment with were generated for the identification of autism and include 142 samples, which is small compared to the large dimension of the data. The classifier systems trained on these data yield very low classification rates that are almost equivalent to a guess. We aim to reduce the data dimension and improve it for classification. Here, we experiment with applying a multistage PCA to the genetic data to reduce its dimensionality. Results show a significant improvement in the classification rates, which increases the possibility of building an automated system for autism detection.
Keywords: PCA, gene expression, dimensionality reduction, classification, autism
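A minimal sketch of the general PCA-before-classification idea on a few-samples, many-features matrix; the data, labels, and component count below are synthetic assumptions, and the study's multistage PCA is not reproduced.

```python
# Illustrative sketch: reduce a small, high-dimensional expression matrix with PCA
# before classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_genes = 142, 5000                     # few samples, thousands of features
X = rng.normal(size=(n_samples, n_genes))
y = rng.integers(0, 2, n_samples)                  # placeholder autism / control labels
X[y == 1, :20] += 0.8                              # inject a weak class signal in 20 genes

# With n_samples << n_genes, PCA can retain at most n_samples - 1 components.
clf = make_pipeline(StandardScaler(), PCA(n_components=30), SVC(kernel="linear"))
print(cross_val_score(clf, X, y, cv=5).mean())
```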
Procedia PDF Downloads 559
20660 Automatic Classification of Lung Diseases from CT Images
Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari
Abstract:
Pneumonia is a kind of lung disease that creates congestion in the chest. Such pneumonic conditions can lead to loss of life when the congestion is severe. Pneumonic lung disease can be caused by viral pneumonia, bacterial pneumonia, or Covid-19-induced pneumonia. Early prediction and classification of such lung diseases help to reduce the mortality rate. In this paper, we propose an automatic Computer-Aided Diagnosis (CAD) system using a deep learning approach. The proposed CAD system takes as input raw computerized tomography (CT) scans of the patient's chest and automatically predicts the disease class. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are first pre-processed to enhance their quality for further analysis. We then apply a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract features automatically from the pre-processed CT image. This CNN model provides effective feature learning, yielding a 1D feature vector for each input CT image. The output of the 2D CNN model is then normalized using the Min-Max technique. The second step of the proposed hybrid model covers training and classification using different classifiers. Simulation outcomes on a publicly available dataset demonstrate the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
Keywords: CT scan, Covid-19, deep learning, image processing, lung disease classification
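A minimal sketch of the two-step hybrid idea (CNN feature extraction, Min-Max normalization, then a separate classifier); the network architecture, placeholder data, and class names below are assumptions for illustration, not the paper's HDLA.

```python
# Illustrative sketch: a small 2D CNN used as a feature extractor for CT slices,
# Min-Max normalization of the features, and a separate classifier.
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
slices = rng.random((60, 128, 128, 1)).astype("float32")   # placeholder CT slices
labels = rng.integers(0, 3, 60)                            # placeholder viral / bacterial / Covid-19 labels

# Step 1: CNN used purely as a feature extractor (untrained here; in practice it would be trained).
cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
])
features = cnn.predict(slices, verbose=0)

# Min-Max normalization of the extracted feature vectors.
f_min, f_max = features.min(axis=0), features.max(axis=0)
features = (features - f_min) / (f_max - f_min + 1e-8)

# Step 2: a separate classifier trained on the normalized features.
clf = RandomForestClassifier(n_estimators=100).fit(features, labels)
print(clf.score(features, labels))
```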
Procedia PDF Downloads 153
20659 Competing Risks Modeling Using within Node Homogeneity Classification Tree
Authors: Kazeem Adesina Dauda, Waheed Babatunde Yahya
Abstract:
To design a tree that maximizes within-node homogeneity, there is a need for a homogeneity measure that is appropriate for event history data with multiple risks. We consider the use of deviance and modified Cox-Snell residuals as measures of impurity in Classification and Regression Trees (CART) and compare our results with those of Fiona (2008), in which homogeneity measures were based on the martingale residual. A data structure approach was used to validate the performance of our proposed techniques via simulation and real-life data. The results for univariate competing risks revealed that using deviance and Cox-Snell residuals as the response in a within-node homogeneity classification tree performs better than using other residuals, irrespective of the performance criterion. Bone marrow transplant data and a double-blinded randomized clinical trial conducted to compare two treatments for patients with prostate cancer were used to demonstrate the efficiency of our proposed method relative to existing ones. Results from empirical studies of the bone marrow transplant data showed that the proposed model with the Cox-Snell residual (deviance = 16.6498) performs better than both the martingale residual (deviance = 160.3592) and the deviance residual (deviance = 556.8822) for both the event of interest and competing risks. Additionally, results from the prostate cancer data also reveal the superiority of the proposed model over the existing one for both causes; interestingly, the Cox-Snell residual (MSE = 0.01783563) outperforms both the martingale residual (MSE = 0.1853148) and the deviance residual (MSE = 0.8043366). Moreover, these results validate those obtained from the Monte Carlo studies.
Keywords: within-node homogeneity, martingale residual, modified Cox-Snell residual, classification and regression tree
Procedia PDF Downloads 270
20658 Performance Comparison of Outlier Detection Techniques Based Classification in Wireless Sensor Networks
Authors: Ayadi Aya, Ghorbel Oussama, M. Obeid Abdulfattah, Abid Mohamed
Abstract:
Nowadays, many wireless sensor networks have been deployed in the real world to collect valuable raw sensed data, and the challenge is to extract high-level knowledge from this huge amount of data. The identification of outliers can lead to the discovery of useful and meaningful knowledge. In the field of wireless sensor networks, an outlier is defined as a measurement that deviates from the normal behavior of the sensed data. Many outlier detection techniques for WSNs have been extensively studied in the past decade, focusing on classification-based algorithms that identify outliers in real transaction datasets. This survey aims at providing a structured and comprehensive overview of the existing research on classification-based outlier detection techniques as applicable to WSNs. We have identified the key hypotheses used by these approaches to differentiate between normal and outlier behavior. In addition, this paper tries to provide an easier and more succinct understanding of classification-based techniques. Furthermore, we identify the advantages and disadvantages of different classification-based techniques and present a comparative guide with useful paradigms for promoting outlier detection research in various WSN applications, and we suggest further opportunities for future research.
Keywords: Bayesian networks, classification-based approaches, KPCA, neural networks, one-class SVM, outlier detection, wireless sensor networks
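A minimal sketch of one of the classification-based approaches the survey covers, a one-class SVM flagging anomalous sensor readings; the simulated temperature and humidity values below are invented for illustration.

```python
# Illustrative sketch: one-class SVM outlier detection on simulated sensor readings.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Normal behavior: temperature/humidity pairs around (25 C, 40 %); outliers are faulty spikes.
normal = rng.normal([25.0, 40.0], [1.0, 3.0], size=(200, 2))
faulty = rng.normal([60.0, 5.0], [2.0, 2.0], size=(5, 2))

model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal)
readings = np.vstack([normal[:3], faulty])
print(model.predict(readings))   # +1 = normal reading, -1 = outlier
```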
Procedia PDF Downloads 496