Search results for: text processing
4038 A Comparative Study on the Use of Learning Resources in Learning Biochemistry by MBBS Students at Ras Al Khaimah Medical and Health Sciences University, UAE
Authors: B. K. Manjunatha Goud, Aruna Chanu Oinam
Abstract:
The undergraduate medical curriculum is oriented towards training students to undertake the responsibilities of a physician. During the training period, adequate emphasis is placed on inculcating logical and scientific habits of thought; clarity of expression and independence of judgment; and the ability to collect, analyze and correlate information. At Ras Al Khaimah Medical and Health Sciences University (RAKMHSU), Biochemistry, a basic medical science subject, is taught in the 1st year of the 5-year medical course with vertical interdisciplinary interaction with all subjects. It needs to be taught and learned adequately so that students can relate it to clinical cases or clinical problems in medicine and to future diagnostics, and thus practice confidently and skillfully in the community. Based on these facts, a study was conducted to determine the extent of the students' use of library resources and the impact of study materials on their preparation for examinations. It was a comparative cross-sectional study that included 100 1st-year and 80 2nd-year students who had successfully completed the Biochemistry course. The purpose of the study was explained to all participants. Information was collected on a pre-designed, pre-tested and self-administered questionnaire. The questionnaire was validated by senior faculty and pre-tested on students who were not involved in the study. The study results showed that 80.30% of 1st-year and 93.15% of 2nd-year students had a clear idea of the course outline given in the course handout or study guide. We also found that a statistically significant number of students agreed that they benefited from the practical sessions and from writing notes during class hours. A high percentage of students [50% and 62.02%], compared to the rest, disagreed that reading only the handouts is enough for their examination.
The study also showed that only 35% and 41% of students visited the library on a daily basis for learning; around 65% of students used lecture notes and textbooks as tools for learning and understanding the subject; and 45% and 53% of students used library resources (recommended textbooks) rather than online sources before examinations. The results presented here show that students perceived that e-learning resources such as PowerPoint presentations, along with textbook reading using the SQ4R technique, had a positive impact on various aspects of their learning in Biochemistry. Use of the library by students has an overall positive impact on the learning process; especially in the medical field, it enhances outcomes, and medical students are better equipped to treat patients. It is also true, however, that library use has been in decline, which will affect knowledge and outcomes. In conclusion, students have to be taught how to use the library as a learning tool apart from lecture handouts.
Keywords: medical education, learning resources, study guide, biochemistry
Procedia PDF Downloads 178
4037 Control the Flow of Big Data
Authors: Shizra Waris, Saleem Akhtar
Abstract:
Big data is a research area receiving attention from academia and the IT community. In the digital world, the amounts of data produced and stored have grown sharply within a short period of time. Consequently, this fast-increasing rate of data has created many challenges. In this paper, we use functionalism and structuralism paradigms to analyze the genesis of big data applications and their current trends. This paper presents a complete discussion of state-of-the-art big data technologies based on group and stream data processing. Moreover, the strengths and weaknesses of these technologies are analyzed. This study also covers big data analytics techniques, processing methods, some reported case studies from different vendors, several open research challenges, and the opportunities brought about by big data. The similarities and differences of these techniques and technologies based on important limitations are also investigated. Emerging technologies are suggested as a solution to big data problems.
Keywords: computer, IT community, industry, big data
Procedia PDF Downloads 194
4036 Design and Implementation of a Counting and Differentiation System for Vehicles through Video Processing
Authors: Derlis Gregor, Kevin Cikel, Mario Arzamendia, Raúl Gregor
Abstract:
This paper presents a self-sustaining mobile system for counting and classifying vehicles through video processing. It proposes a counting and classification algorithm divided into four steps that can be executed multiple times in parallel on an SBC (Single Board Computer), such as the Raspberry Pi 2, so that it can run in real time. The first step of the proposed algorithm delimits the zone of the image that will be processed. The second step detects the moving objects using a BGS (Background Subtraction) algorithm based on the GMM (Gaussian Mixture Model), as well as a shadow removal algorithm using physics-based features, followed by morphological operations. In the third step, vehicle detection is performed using edge detection algorithms and vehicle tracking through Kalman filters. The last step of the proposed algorithm registers each vehicle passing and classifies it according to its area. A self-sustaining system is proposed, powered by batteries and photovoltaic solar panels, with data transmission over GPRS (General Packet Radio Service), eliminating the need for external cables, which facilitates its deployment and relocation to any location where it could operate. The self-sustaining trailer will allow the counting and classification of vehicles in specific zones with difficult access.
Keywords: intelligent transportation system, object detection, vehicle counting, vehicle classification, video processing
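The background-subtraction step described above can be sketched in miniature. This is an illustration of the general idea only, not the paper's GMM-based implementation (which would use something like OpenCV's MOG2); a running-average background over toy one-dimensional "frames" stands in for it, and the frame values, learning rate and threshold are all invented.

```python
# Minimal illustration of background subtraction: maintain a running-average
# background model and flag pixels that deviate from it beyond a threshold.
# ALPHA and THRESH are hypothetical values chosen for this toy example.

ALPHA = 0.1    # background learning rate
THRESH = 30    # foreground intensity threshold

def update_background(bg, frame, alpha=ALPHA):
    """Blend the new frame into the running-average background."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=THRESH):
    """Mark pixels whose deviation from the background exceeds the threshold."""
    return [1 if abs(f - b) > thresh else 0 for b, f in zip(bg, frame)]

# Toy sequence: a static scene of intensity 50, then a bright object
# (intensity 200) crossing pixels 2 and 3.
background = [50.0] * 6
static = [50] * 6
with_vehicle = [50, 50, 200, 200, 50, 50]

background = update_background(background, static)
mask = foreground_mask(background, with_vehicle)
print(mask)  # pixels 2 and 3 are flagged as moving foreground
```

In the real pipeline a per-pixel Gaussian mixture replaces the single running average, shadow removal cleans the mask, and morphological operations fill holes before the blobs are tracked.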
Procedia PDF Downloads 323
4035 Linguistic Analysis of Borderline Personality Disorder: Using Language to Predict Maladaptive Thoughts and Behaviours
Authors: Charlotte Entwistle, Ryan Boyd
Abstract:
Recent developments in information retrieval techniques and natural language processing have allowed for greater exploration of psychological and social processes. Linguistic analysis methods for understanding behaviour have provided useful insights within the field of mental health. One area within mental health that has received little attention, though, is borderline personality disorder (BPD). BPD is a common mental health disorder characterised by instability of interpersonal relationships, self-image and affect. It also manifests through maladaptive behaviours, such as impulsivity and self-harm. Examination of language patterns associated with BPD could allow for a greater understanding of the disorder and its links to maladaptive thoughts and behaviours. Language analysis methods could also be used in a predictive way, such as by identifying indicators of BPD or predicting maladaptive thoughts, emotions and behaviours. Additionally, associations uncovered between language and maladaptive thoughts and behaviours could then be applied at a more general level. This study explores linguistic characteristics of BPD, and their links to maladaptive thoughts and behaviours, through the analysis of social media data. Data were collected from a large corpus of posts from the publicly available social media platform Reddit, namely, from the 'r/BPD' subreddit, whereby people identify as having BPD. Data were collected using the Python Reddit API Wrapper and included all users who had posted within the BPD subreddit. All posts were manually inspected to ensure that they were not posted by someone who clearly did not have BPD, such as people posting about a loved one with BPD. These users were then tracked across all other subreddits in which they had posted, and data from these subreddits were also collected. Additionally, data were collected from a random control group of Reddit users.
Disorder-relevant behaviours, such as self-harming or aggression-related behaviours, outlined within Reddit posts were coded by expert raters. All posts and comments were aggregated by user and split by subreddit. Language data were then analysed using the Linguistic Inquiry and Word Count (LIWC) 2015 software. LIWC is a text analysis program that identifies and categorises words based on linguistic and paralinguistic dimensions, psychological constructs and personal concern categories. Statistical analyses of linguistic features could then be conducted. Findings revealed distinct linguistic features associated with BPD, based on Reddit posts, which differentiated these users from a control group. Language patterns were also found to be associated with the occurrence of maladaptive thoughts and behaviours. Thus, this study demonstrates that there are indeed linguistic markers of BPD present on social media. It also implies that language could be predictive of maladaptive thoughts and behaviours associated with BPD. These findings are of importance as they suggest potential for clinical interventions to be provided based on the language of people with BPD, to try to reduce the likelihood of maladaptive thoughts and behaviours occurring, for example, by social media tracking or by engaging people with BPD in expressive writing therapy. Overall, this study has provided a greater understanding of the disorder and how it manifests through language and behaviour.
Keywords: behaviour analysis, borderline personality disorder, natural language processing, social media data
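The dictionary-based counting that LIWC performs can be sketched as follows. The two tiny word categories and the sample post are invented for illustration; the real LIWC 2015 dictionary contains dozens of categories and thousands of (often stemmed) word patterns.

```python
# Hypothetical LIWC-style analysis: match each word in a post against
# category word lists and report the percentage of words per category.

CATEGORIES = {
    "negemo": {"sad", "hate", "hurt", "angry"},   # negative emotion words
    "i": {"i", "me", "my", "myself"},             # first-person singular
}

def liwc_style_counts(text):
    """Percentage of words in the text falling into each category."""
    words = text.lower().split()
    total = len(words)
    return {cat: 100.0 * sum(w in vocab for w in words) / total
            for cat, vocab in CATEGORIES.items()}

post = "I hate how sad I feel about myself"
print(liwc_style_counts(post))  # category percentages for the toy post
```

Per-user, per-subreddit aggregates of such percentages are the kind of linguistic features that the study's statistical analyses would then compare between the BPD and control groups.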
Procedia PDF Downloads 353
4034 Tool Wear Monitoring of High Speed Milling Based on Vibratory Signal Processing
Authors: Hadjadj Abdechafik, Kious Mecheri, Ameur Aissa
Abstract:
The objective of this study is to develop a process for treating the vibratory signals generated during horizontal high-speed milling without applying any coolant, in order to establish a monitoring system able to improve machining performance. Thus, many tests were carried out on a horizontal high-speed centre (PCI Météor 10), under given cutting conditions, using a milling cutter with only one insert and measuring its frontal wear from its new state, taken as the reference state, until a worn state considered unsuitable for further use of the tool. The results obtained show that the first harmonic follows the evolution of frontal wear well; on the other hand, a wavelet transform is used for signal processing and is found to be useful for observing the evolution of the wavelet approximations through the cutting tool's life. The power and Root Mean Square (RMS) values of the wavelet-transformed signal gave the best results and can be used for tool wear estimation. All these features can constitute suitable indicators for effective detection of tool wear and can then be used as input parameters for an online monitoring system. We also noted the remarkable influence of the machining cycle on the quality of measurements, through the introduction of a bias in the signal; this phenomenon appears in particular in horizontal milling and is ignored in the majority of studies.
Keywords: flank wear, vibration, milling, signal processing, monitoring
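The RMS and power features mentioned above can be computed as follows. The signals below are synthetic sinusoids standing in for the wavelet approximations of the measured vibratory signal; the idea illustrated is simply that both indicators grow with vibration amplitude as the tool wears.

```python
# Minimal sketch of the RMS and power wear indicators extracted from a
# sampled (wavelet-transformed) vibration signal. The signals are synthetic.
import math

def rms(signal):
    """Root Mean Square of a sampled signal."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def power(signal):
    """Mean power of a sampled signal."""
    return sum(x * x for x in signal) / len(signal)

# A 'fresh tool' signal and a 'worn tool' signal with three times the
# vibration amplitude: both wear indicators rise with the amplitude.
fresh = [math.sin(2 * math.pi * k / 32) for k in range(256)]
worn = [3 * x for x in fresh]

print(rms(fresh) < rms(worn))    # True: RMS rises as wear grows
print(power(fresh) < power(worn))  # True: power rises as wear grows
```

In the study these values would be tracked over the tool's life and fed, together with the first-harmonic amplitude, into the online monitoring system.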
Procedia PDF Downloads 598
4033 Big Data Analysis with RHadoop
Authors: Ji Eun Shin, Byung Ho Jung, Dong Hoon Lim
Abstract:
It is almost impossible to store or analyze big data, which increases exponentially, with traditional technologies. Hadoop is a new technology that makes this possible. The R programming language is by far the most popular statistical tool for big data analysis based on distributed processing with Hadoop technology. With RHadoop, which integrates the R and Hadoop environments, we implemented parallel multiple regression analysis with different sizes of actual data. Experimental results showed that our RHadoop system became much faster as the number of data nodes increased. We also compared the performance of our RHadoop with the lm function and the biglm package available on bigmemory. The results showed that our RHadoop was faster than the other packages owing to parallel processing, with the number of map tasks increasing as the size of the data increases.
Keywords: big data, Hadoop, parallel regression analysis, R, RHadoop
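The core idea behind parallel regression in a map-reduce setting can be sketched without Hadoop at all: each "map task" computes partial sufficient statistics for the normal equations on its chunk of data, and the "reduce" step sums them and solves for the coefficients. The sketch below is a plain-Python illustration of that idea for simple linear regression, not the authors' RHadoop implementation; the data and chunking are invented.

```python
# Map-reduce-style simple linear regression y = a + b*x: partial sums per
# chunk (map), then a global solve of the normal equations (reduce).

def map_task(chunk):
    """Partial sums (n, Σx, Σy, Σx², Σxy) for one data chunk."""
    n = len(chunk)
    sx = sum(x for x, _ in chunk)
    sy = sum(y for _, y in chunk)
    sxx = sum(x * x for x, _ in chunk)
    sxy = sum(x * y for x, y in chunk)
    return (n, sx, sy, sxx, sxy)

def reduce_tasks(parts):
    """Sum the partial statistics and solve for the coefficients."""
    n, sx, sy, sxx, sxy = (sum(t) for t in zip(*parts))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Data generated from y = 2 + 3x, split into two chunks as if distributed
# across two data nodes; the chunks can be mapped independently in parallel.
data = [(x, 2 + 3 * x) for x in range(10)]
chunks = [data[:5], data[5:]]
a, b = reduce_tasks([map_task(c) for c in chunks])
print(a, b)  # recovers intercept 2 and slope 3
```

Multiple regression works the same way with X'X and X'y as the partial statistics, which is why the approach scales with the number of map tasks as the data grows.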
Procedia PDF Downloads 437
4032 A Pilot Study to Investigate the Use of Machine Translation Post-Editing Training for Foreign Language Learning
Authors: Hong Zhang
Abstract:
The main purpose of this study is to show that machine translation (MT) post-editing (PE) training can help Chinese students learn Spanish as a second language. Our hypothesis is that they might make better use of MT by learning PE skills specific to foreign language learning. We developed PE training materials based on data collected in a previous study. The training material included the special error types of MT output and the error types that our Chinese students of Spanish could not detect in last year's experiment. This year we performed a pilot study in order to evaluate the PE training materials' effectiveness and the extent to which PE training helps Chinese students who study Spanish. We used screen recording to record these moments and noted every action taken by the students. Participants were speakers of Chinese with intermediate knowledge of Spanish. They were divided into two groups: Group A received PE training and Group B did not. We prepared a Chinese text for both groups; participants first translated it by themselves (human translation), then used Google Translate to translate the text and were asked to post-edit the raw MT output. Comparing the results of the PE test, Group A identified and corrected the errors faster than Group B, and did especially better on omission, word order, part of speech, terminology, mistranslation, official names, and formal register. From the results of this study, we can see that PE training can help Chinese students learn Spanish as a second language. In the future, we intend to focus on the students' struggles during their Spanish studies and to complete the PE training materials for teaching Chinese students of Spanish with machine translation.
Keywords: machine translation, post-editing, post-editing training, Chinese, Spanish, foreign language learning
Procedia PDF Downloads 144
4031 Nurses' Interventions in Patients with Dementia during Clinical Practice: A Literature Review
Authors: Helga Martins, Idália Matias
Abstract:
Background: Dementia is an important research topic since life expectancy worldwide is increasing and people are getting older. The aging of populations has a major impact on the increase in dementia, and nurses play a major role in taking care of these patients. Therefore, the implementation of evidence-based nursing interventions is vital so that we are aware of what we can do in clinical practice in order to provide patient-centred care to patients with dementia. Aim: To identify nurses' interventions in patients with dementia during clinical practice. Method: Literature review grounded in an electronic search of the EBSCOhost platform (CINAHL Plus with Full Text, MEDLINE with Full Text, and Nursing & Allied Health Collection), using the search terms "dementia" AND "nurs*" AND "interventions" in the abstracts. The inclusion criterion was original papers published up to June 2021. The search returned a total of 153 results; after duplicate removal we kept 104. After applying the inclusion criteria, we included 15 studies. This literature review was performed by two independent researchers. Results: A total of 15 studies about nurses' interventions in patients with dementia were included. The major interventions are therapeutic communication strategies; environmental management of stressors involving family/caregivers; strategies to promote patient safety; and assistance with activities of daily living in patients who are clinically deteriorated. Conclusion: Taking care of people with dementia is a complex and demanding task. Nurses are required to have a set of skills and competences in order to provide these nursing interventions. We highlight the need for awareness in nursing education regarding the provision of nursing care to patients with dementia.
Keywords: dementia, interventions, nursing, review
Procedia PDF Downloads 157
4030 The Development of Congeneric Elicited Writing Tasks to Capture Language Decline in Alzheimer Patients
Authors: Lise Paesen, Marielle Leijten
Abstract:
People diagnosed with probable Alzheimer's disease suffer from an impairment of their language capacities; a gradual impairment which affects both their spoken and written communication. Our study aims at characterising the language decline in DAT patients with the use of congeneric elicited writing tasks. Within these tasks, a descriptive text has to be written based upon images with which the participants are confronted. A randomised set of images allows us to present the participants with a different task on every encounter, thus avoiding a recognition effect in this iterative study. This method is a revision of previous studies, in which participants were presented with a larger picture depicting an entire scene. In order to create the randomised set of images, existing pictures were adapted following strict criteria (e.g. frequency, AoA, colour, ...). The resulting data set contained 50 images belonging to several categories (vehicles, animals, humans, and objects). A pre-test was constructed to validate the created picture set; most images had been used before in spoken picture naming tasks, hence the same reaction times ought to be triggered in the typed picture naming task. Once validated, the effectiveness of the descriptive tasks was assessed. First, the participants (n=60 students, n=40 healthy elderly) performed a typing task, which provided information about the typing speed of each individual. Secondly, two descriptive writing tasks were carried out, one simple and one complex. The simple task contains 4 images (1 animal, 2 objects, 1 vehicle) and only contains elements with high frequency, a young AoA (<6 years), and fast reaction times. Slow reaction times, a later AoA (≥6 years) and low frequency were criteria for the complex task, which uses 6 images (2 animals, 1 human, 2 objects and 1 vehicle). The data were collected with the keystroke logging programme Inputlog.
Keystroke logging tools log and time-stamp keystroke activity to reconstruct and describe text production processes. The data were analysed using a selection of writing process and product variables, such as general writing process measures, detailed pause analysis, linguistic analysis, and text length. As a covariate, the intrapersonal interkey transition times from the typing task were taken into account. The pre-test indicated that the new images led to similar or even faster reaction times compared to the original images, so all the images were used in the main study. The texts produced in the description tasks were significantly longer than in previous studies, providing sufficient text and process data for analysis. Preliminary analysis shows that the number of words produced differed significantly between the healthy elderly and the students, as did the mean length of production bursts, even though both groups needed the same time to produce their texts. However, the elderly took significantly more time to produce the complex task than the simple task. Nevertheless, the number of words per minute remained comparable between simple and complex. The pauses within and before words varied, even when taking personal typing abilities (obtained from the typing task) into account.
Keywords: Alzheimer's disease, experimental design, language decline, writing process
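The interkey transition times and pause measures described above can be derived from time-stamped keystrokes as follows. The timestamps and the pause criterion are invented for illustration; a logger such as Inputlog records the raw time-stamped keystroke stream from which such measures are computed.

```python
# Hypothetical sketch of pause analysis over time-stamped keystrokes.

PAUSE_MS = 2000  # transitions at or above this are counted as pauses

def interkey_intervals(timestamps):
    """Transition time between each pair of consecutive keystrokes."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def pause_count(timestamps, threshold=PAUSE_MS):
    """Number of interkey transitions long enough to count as pauses."""
    return sum(1 for t in interkey_intervals(timestamps) if t >= threshold)

# Keystroke timestamps (ms) for a short burst, a long pause, then another
# burst: the pause splits the writing into two production bursts.
stamps = [0, 150, 320, 470, 3500, 3650, 3810]
print(interkey_intervals(stamps))
print(pause_count(stamps))  # one pause (the 3030 ms gap)
```

Production-burst length and words per minute then follow from segmenting the keystroke stream at these pauses, with each writer's own typing-task interkey times serving as the covariate.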
Procedia PDF Downloads 274
4029 A Convolutional Deep Neural Network Approach for Skin Cancer Detection Using Skin Lesion Images
Authors: Firas Gerges, Frank Y. Shih
Abstract:
Malignant melanoma, known simply as melanoma, is a type of skin cancer that appears as a mole on the skin. It is critical to detect this cancer at an early stage because it can spread across the body and may lead to the patient's death. When detected early, melanoma is curable. In this paper, we propose a deep learning model (convolutional neural networks) in order to automatically classify skin lesion images as malignant or benign. Images underwent certain pre-processing steps to diminish the effect of the normal skin region on the model. The result of the proposed model showed a significant improvement over previous work, achieving an accuracy of 97%.
Keywords: deep learning, skin cancer, image processing, melanoma
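The basic operation a convolutional layer applies to a lesion image can be shown in miniature. The 3×3 edge-detecting kernel and the toy 4×4 "image" below are hypothetical; in the paper's model, many such filters are learned from data and stacked into deep layers rather than written by hand.

```python
# Minimal 'valid' 2-D convolution (no padding, stride 1), the core
# operation of a CNN layer, applied to a toy binary image.

def conv2d_valid(image, kernel):
    """Convolve image with kernel; output shrinks by kernel size - 1."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = sum(image[i + di][j + dj] * kernel[di][dj]
                      for di in range(kh) for dj in range(kw))
            row.append(acc)
        out.append(row)
    return out

# Toy image with a vertical edge between columns 1 and 2.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
# Vertical-edge kernel: responds where intensity changes left to right.
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]
print(conv2d_valid(image, kernel))  # strong responses along the edge
```

A full CNN follows such convolutions with nonlinearities, pooling, and dense layers ending in a malignant/benign output; the pre-processing mentioned above would crop or mask the normal skin region before the image reaches the first convolution.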
Procedia PDF Downloads 150
4028 Investigating Malaysian Prereaders' Cognitive Processes When Reading English Picture Storybooks: A Comparative Eye-Tracking Experiment
Authors: Siew Ming Thang, Wong Hoo Keat, Chee Hao Sue, Fung Lan Loo, Ahju Rosalind
Abstract:
There are numerous studies that have explored young learners' literacy skills in Malaysia, but none that use eye-tracking to trace their cognitive processes when reading picture storybooks. This study used this method to investigate two groups of prereaders' cognitive processes under four conditions: (1) a congruent picture was presented, and a matching narration was read aloud by a recorder; (2) children heard a narration about the same characters as in the picture but involving a different scene; (3) only a picture with matching text was presented; (4) students only heard the text on the screen being read aloud. The two main objectives of this project are to test which picture content helps prereaders (i.e., young children who have not received any formal reading instruction) understand the narration, and whether children try to create a coherent mental representation from the oral narration and the pictures. The study compares two groups of children from two different kindergartens. Group 1: 15 Chinese children; Group 2: 17 Malay children. The medium of instruction was English. An eye-tracker was used to identify Areas of Interest (AOIs) in each picture and the five target elements, and to calculate the number of fixations and total fixation time on pictures and written texts. Two mixed factorial ANOVAs, with storytelling performance (good, average, or weak) and vocabulary level (low, medium, high) as between-subject variables and the AOIs and display conditions as within-subject variables, were performed on the variables.
Keywords: eye-tracking, cognitive processes, literacy skills, prereaders, visual attention
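The AOI analysis described above reduces to counting fixations and summing fixation time inside rectangular regions. The sketch below illustrates this with invented coordinates and durations; a real eye-tracker export would supply the fixation list and the AOI boundaries would be drawn around each picture element.

```python
# Hypothetical AOI analysis: each fixation is (x, y, duration_ms); count
# fixations and total dwell time inside a rectangular Area of Interest.

def aoi_stats(fixations, aoi):
    """Fixation count and total duration (ms) within a rectangular AOI."""
    x0, y0, x1, y1 = aoi
    inside = [(x, y, d) for x, y, d in fixations
              if x0 <= x <= x1 and y0 <= y <= y1]
    return len(inside), sum(d for _, _, d in inside)

# Fixations on a 1024x768 display; the AOI covers the picture of a target
# element in the top-left quadrant.
fixations = [(100, 120, 250), (480, 300, 180), (130, 90, 300), (900, 700, 220)]
picture_aoi = (0, 0, 512, 384)

count, dwell = aoi_stats(fixations, picture_aoi)
print(count, dwell)  # 3 fixations, 730 ms total inside the AOI
```

Per-condition, per-AOI counts and dwell times computed this way are the dependent variables that would enter the mixed factorial ANOVAs.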
Procedia PDF Downloads 95
4027 Developing the Skills of Reading Comprehension of Learners of English as a Second Language
Authors: Indu Gamage
Abstract:
Though commonly utilized as a language improvement technique, reading has not been fully employed by either language teachers or learners to develop reading comprehension skills in English as a second language. In the Sri Lankan context, this area has to be delved into deeply, as the learners show more propensity to analyze. Reading comprehension is an area that most language teachers and learners struggle with, though it appears easy. Most ESL learners engage in reading tasks without being properly aware of the objective of doing reading comprehension. It is observed that when doing reading tasks, the language learners' concern is more with the meanings of individual words than with the overall comprehension of the given text. The passiveness with which ESL learners engage in reading comprehension makes reading a tedious task, thereby giving the learner a sense of disappointment at the end. Certain reading tasks take the form of translations. The learners' active cognitive participation, in the form of productive strategies such as predicting, employing schemata and using contextual clues, seems quite limited. It was hypothesized that the learners' lack of knowledge of the productive strategies of reading was the major obstacle that makes reading comprehension a tedious task for them. This study is based on a group of 30 tertiary students who read English only as a fundamental requirement for their degree. They belonged to the Faculty of Humanities and Social Sciences of the University of Ruhuna, Sri Lanka. Almost all learners hailed from areas where English was hardly used in day-to-day conversation. The study was carried out using a questionnaire to elicit their opinions on reading and a test to check whether the learners use productive strategies of reading when doing reading comprehension tasks. The test comprised reading questions covering the major productive strategies of reading.
The results were then analyzed to see the degree of the learners' active engagement in comprehending the text. The findings confirmed the validity of the hypothesis regarding the grounds behind the difficulties related to reading comprehension.
Keywords: reading, comprehension, skills, reading strategies
Procedia PDF Downloads 175
4026 Enhancing Word Meaning Retrieval Using FastText and Natural Language Processing Techniques
Authors: Sankalp Devanand, Prateek Agasimani, Shamith V. S., Rohith Neeraje
Abstract:
Machine translation has witnessed significant advancements in recent years, but the translation of languages with distinct linguistic characteristics, such as English and Sanskrit, remains a challenging task. This research presents the development of a dedicated English-to-Sanskrit machine translation model, aiming to bridge the linguistic and cultural gap between these two languages. Using a variety of natural language processing (NLP) approaches, including FastText embeddings, this research proposes a thorough method to improve word meaning retrieval. Data preparation, part-of-speech tagging, dictionary searches, and transliteration are all included in the methodology. The study also addresses the implementation of an interpreter pattern and uses a word similarity task to assess the quality of word embeddings. The experimental outcomes show how the suggested approach may be used to enhance word meaning retrieval tasks with greater efficacy, accuracy, and adaptability. Evaluation of the model's performance is conducted through rigorous testing, comparing its output against existing machine translation systems. The assessment includes quantitative metrics such as BLEU scores, METEOR scores, Jaccard Similarity, etc.
Keywords: machine translation, English to Sanskrit, natural language processing, word meaning retrieval, fastText embeddings
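The word similarity task used to assess embedding quality typically rests on cosine similarity between word vectors. The sketch below illustrates this; the tiny 3-dimensional vectors are made up, whereas actual FastText embeddings are typically 300-dimensional and built from character n-grams.

```python
# Hypothetical word-similarity check: related words should have a higher
# cosine similarity between their embedding vectors than unrelated words.
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Invented toy embeddings for three words.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.85, 0.82, 0.12],
    "apple": [0.1, 0.05, 0.95],
}

sim_related = cosine_similarity(embeddings["king"], embeddings["queen"])
sim_unrelated = cosine_similarity(embeddings["king"], embeddings["apple"])
print(sim_related > sim_unrelated)  # True: related words score higher
```

A benchmark word-similarity dataset scores an embedding by correlating such cosine values with human similarity judgments, which is how the quality of the English and Sanskrit embeddings could be compared.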
Procedia PDF Downloads 45
4025 Pedestrian Safe Bumper Design from Commingled Glass Fiber/Polypropylene Reinforced Sandwich Composites
Authors: L. Onal
Abstract:
The aim of this study is to optimize the manufacturing process for thermoplastic sandwich composite structures intended for the pedestrian safety of automobiles subjected to collision. In particular, cost-effective manufacturing techniques for sandwich structures with commingled GF/PP skins and low-density foam cores are being investigated, and the performance of these structures under bending load is being studied. Samples are manufactured using the compression moulding technique. The relationship of this performance to processing parameters such as mould temperature, moulding time, moulding pressure and the sequence of the layers during moulding is being investigated. The results of the bending tests are discussed in the light of the moulding conditions, and conclusions are given regarding the optimum set of processing conditions for the compression moulding route.
Keywords: twintex, flexural properties, automobile composites, sandwich structures
Procedia PDF Downloads 432
4024 Signal Processing of Barkhausen Noise Signal for Assessment of Increasing Down Feed in Surface Ground Components with Poor Micro-Magnetic Response
Authors: Tanmaya Kumar Dash, Tarun Karamshetty, Soumitra Paul
Abstract:
The Barkhausen Noise Analysis (BNA) technique has been utilized to assess the surface integrity of steels, but it is not very successful in evaluating the surface integrity of ground steels that exhibit a poor micro-magnetic response. A new approach has been proposed for processing the BN signal with Fast Fourier Transforms, while wavelet transforms have been used to remove noise from the BN signal, with a judicious choice of the 'threshold' value, when the micro-magnetic response of the work material is poor. In the present study, the effect of down feed induced by conventional plunge surface grinding of hardened bearing steel has been investigated, along with an ultrasonically cleaned, wet-polished sample and a sample ground with the spark-out technique for benchmarking. Moreover, FFT analysis has been carried out at different sets of applied voltages and applied frequencies, and the pattern of the BN signal in the frequency domain is analyzed. The study also applies the wavelet transform technique with different levels of decomposition and different mother wavelets to reduce the noise in the BN signal of materials with poor micro-magnetic response, in order to standardize the procedure for all BN signals depending on the frequency of the applied voltage.
Keywords: barkhausen noise analysis, grinding, magnetic properties, signal processing, micro-magnetic response
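The thresholding step at the heart of wavelet denoising can be shown in isolation: detail coefficients below a chosen threshold are treated as noise and shrunk to zero (soft thresholding). The coefficient values and threshold below are invented; the study applies this to the wavelet coefficients of the measured BN signal, with the threshold chosen judiciously.

```python
# Minimal sketch of soft thresholding, the denoising rule applied to
# wavelet detail coefficients: kill small coefficients, shrink large ones.

def soft_threshold(coeffs, thresh):
    """Zero coefficients within [-thresh, thresh]; shrink the rest toward 0."""
    out = []
    for c in coeffs:
        if abs(c) <= thresh:
            out.append(0.0)
        elif c > 0:
            out.append(c - thresh)
        else:
            out.append(c + thresh)
    return out

# Detail coefficients: a few large (signal) values among small (noise) ones.
details = [0.1, -0.2, 5.0, 0.15, -4.0, 0.05]
print(soft_threshold(details, 0.5))  # noise zeroed, signal kept (shrunk)
```

In a full denoising pass this rule is applied at each decomposition level, and the signal is reconstructed from the thresholded coefficients; the choice of threshold and mother wavelet is exactly what the study varies for materials with poor micro-magnetic response.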
Procedia PDF Downloads 667
4023 Corpus Linguistics as a Tool for Translation Studies Analysis: A Bilingual Parallel Corpus of Students' Translations
Authors: Juan-Pedro Rica-Peromingo
Abstract:
Nowadays, corpus linguistics has become a key research methodology for Translation Studies, which broadens the scope of cross-linguistic studies. In the case of the study presented here, the approach focuses on learners with little or no experience in order to study, at an early stage, general mistakes and errors and the correct or incorrect use of translation strategies, and to improve the translational competence of the students. Led by Sylviane Granger and Marie-Aude Lefer of the Centre for English Corpus Linguistics of the University of Louvain, the MUST corpus (MUltilingual Student Translation Corpus) is an international project which brings together partners from European and worldwide universities and connects Learner Corpus Research (LCR) and Translation Studies (TS). It aims to build a corpus of translations carried out by students, including both direct (L2 > L1) and indirect (L1 > L2) translations, from a great variety of text types, genres, and registers in a wide variety of languages: audiovisual translations (including dubbing and subtitling for the hearing population and for the deaf population), and scientific, humanistic, literary, economic and legal translation texts. This paper focuses on the work carried out by the Spanish team from the Complutense University (UCMA), which is part of the MUST project, and it describes the specific features of the corpus built by its members. All the texts used by UCMA are either direct or indirect translations between English and Spanish. Students' profiles comprise translation trainees, foreign language students with a major in English, engineers studying EFL and MA students, all of them with different English levels (from B1 to C1); for some of the students, this would be their first experience with translation. The MUST corpus is searchable via Hypal4MUST, a web-based interface developed by Adam Obrusnik from Masaryk University (Czech Republic), which includes a translation-oriented annotation system (TAS).
A distinctive feature of the interface is that it allows source texts and target texts to be aligned, so that we can observe and compare in detail both language structures and the translation strategies used by students. The initial data obtained point out the kinds of difficulties encountered by the students and reveal the most frequent strategies implemented by the learners according to their level of English, their translation experience and the text genres. We have also found common errors in the graduate and postgraduate university students' translations: transfer errors, lexical errors, grammatical errors, text-specific translation errors, and culture-related errors have been identified. Analyzing all these parameters will provide more material for better solutions to improve the quality of teaching and of the translations produced by the students.
Keywords: corpus studies, students' corpus, the MUST corpus, translation studies
Procedia PDF Downloads 148

4022 Characterization of Shiga Toxin Escherichia coli Recovered from a Beef Processing Facility within Southern Ontario and Comparative Performance of Molecular Diagnostic Platforms
Authors: Jessica C. Bannon, Cleso M. Jordao Jr., Mohammad Melebari, Carlos Leon-Velarde, Roger Johnson, Keith Warriner
Abstract:
There has been an increased incidence of non-O157 Shiga toxin-producing Escherichia coli (STEC), with six serotypes (the Top 6) implicated in causing haemolytic uremic syndrome (HUS). Beef has been suggested to be a significant vehicle for non-O157 STEC, although conclusive evidence has yet to be obtained. The following study aimed to determine the prevalence of the Top 6 non-O157 STEC in beef processing using three different diagnostic platforms and then to characterize the recovered isolates. Hide, carcass, and environmental swab samples (n = 60) were collected from a beef processing facility over a 12-month period. Enriched samples were screened using the Biocontrol GDS, BAX, or PALLgene molecular diagnostic tests. Presumptive non-O157 STEC-positive samples were confirmed using conventional PCR and serology. STEC was detected by GDS (55% positive), BAX (85% positive), and PALLgene (93%). However, during confirmation testing, only 8 of the 60 samples (13%) were found to harbour STEC. Interestingly, the virulence factors in the recovered isolates were unstable and readily lost during subsequent sub-culturing. There is a low prevalence of Top 6 non-O157 STEC associated with beef, although other serotypes are encountered. Moreover, the instability of the virulence factors in recovered strains calls their clinical relevance into question.
Keywords: beef, food microbiology, Shiga toxin, STEC
Procedia PDF Downloads 463

4021 A Comparative Study on Automatic Feature Classification Methods of Remote Sensing Images
Authors: Lee Jeong Min, Lee Mi Hee, Eo Yang Dam
Abstract:
Geospatial feature extraction is a very important issue in remote sensing research. Image classification has traditionally been based on statistical techniques, but in recent years data mining and machine learning techniques for automated image processing have been applied to remote sensing, with a focus on the possibility of generating improved results. In this study, artificial neural network and decision tree techniques are applied to classify high-resolution satellite images, the results are compared with those of maximum likelihood classification (MLC), a statistical technique, and the pros and cons of each technique are analyzed.
Keywords: remote sensing, artificial neural network, decision tree, maximum likelihood classification
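To illustrate the contrast between the rule-based and statistical approaches compared above, here is a minimal sketch: a one-node decision tree (a stump) next to a minimum-distance-to-mean rule, which is the equal-covariance special case of maximum likelihood classification. The band values, thresholds, and class names are invented for the example, not taken from the study.

```python
def stump_classify(pixel, band=0, threshold=0.5):
    """One-node decision tree: split a single spectral band at a threshold."""
    return "vegetation" if pixel[band] > threshold else "bare_soil"

def min_distance_classify(pixel, class_means):
    """Assign the class whose mean spectrum is closest (squared Euclidean)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(class_means, key=lambda c: sq_dist(pixel, class_means[c]))

# Illustrative class means as (near-infrared, red) reflectance pairs.
class_means = {
    "vegetation": (0.8, 0.3),
    "bare_soil": (0.3, 0.4),
}

pixel = (0.75, 0.25)
print(stump_classify(pixel))                      # vegetation
print(min_distance_classify(pixel, class_means))  # vegetation
```

A full decision tree chains many such splits learned from training pixels, while MLC would additionally weight each class by its covariance; the sketch shows only the decision rules being contrasted.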
Procedia PDF Downloads 347

4020 Using Visualization Techniques to Support Common Clinical Tasks in Clinical Documentation
Authors: Jonah Kenei, Elisha Opiyo
Abstract:
Electronic health records (EHRs), as repositories of patient information, are nowadays the most commonly used technology to record, store, and review patient clinical records and to perform other clinical tasks. However, accurate identification and retrieval of relevant information from clinical records is difficult because clinical documents are unstructured, characterized in particular by a lack of clear structure. Medical practice therefore faces a challenge from the rapid growth of health information in EHRs, mostly in narrative text form, and it is becoming important to effectively manage the growing amount of data for a single patient. There is currently a need to visualize EHRs in a way that aids physicians in clinical tasks and medical decision-making. Applying text visualization techniques to unstructured clinical narrative texts is a new area of research that aims to provide better information extraction and retrieval to support clinical decision-making as the volume of generated data continues to grow. Clinical datasets in EHRs offer great potential for training accurate statistical models to classify facets of information, which can then be used to improve patient care and outcomes. However, the unstructured nature of clinical texts is a common problem in many clinical note datasets. This paper examines the issue of taking raw clinical texts and mapping them into meaningful structures that can support healthcare professionals working with narrative texts. Our work is the result of a collaborative design process aided by empirical data collected through formal usability testing.
Keywords: classification, electronic health records, narrative texts, visualization
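The mapping from raw narrative text to meaningful structure that the abstract describes can be sketched in its simplest form as section detection. This is not the authors' system; the note text, section names, and header pattern are assumptions invented for illustration, showing only the general idea of structuring a narrative note before classification or visualization.

```python
import re

# A toy clinical narrative with upper-case section headers.
note = """HISTORY: 54-year-old with chest pain for two days.
MEDICATIONS: aspirin 81 mg daily.
ASSESSMENT: likely musculoskeletal pain."""

def structure_note(text):
    """Map a raw narrative into {section header: section text}."""
    sections = {}
    for match in re.finditer(r"^([A-Z ]+):\s*(.*)$", text, flags=re.MULTILINE):
        sections[match.group(1).strip()] = match.group(2).strip()
    return sections

structured = structure_note(note)
print(structured["MEDICATIONS"])  # aspirin 81 mg daily.
```

Real clinical notes are far messier (inconsistent headers, free text spanning lines), which is exactly why this structuring step is a research problem rather than a regex.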
Procedia PDF Downloads 118

4019 Prediction, Production, and Comprehension: Exploring the Influence of Salience in Language Processing
Authors: Andy H. Clark
Abstract:
This research looks into the relationship between language comprehension and production, with a specific focus on the role of salience in shaping these processes. Salience, our most immediate perception of what is most probable out of all possible situations and outcomes, strongly affects our perception and action in language production and comprehension. This study investigates the impact of geographic and emotional attachments to the target language on differences in the learners' comprehension and production abilities. Using quantitative research methods (Qualtrics, SPSS), this study examines the preferential choices of two groups of Japanese English language learners: those residing in the United States and those in Japan. By comparing and contrasting these two groups, we hope to gain a better understanding of how the salience of linguistic cues influences language processing.
Keywords: intercultural pragmatics, salience, production, comprehension, pragmatics, action, perception, cognition
Procedia PDF Downloads 75

4018 High Pressure Processing of Jackfruit Bulbs: Effect on Color, Nutrient Profile and Enzyme Inactivation
Authors: Jyoti Kumari, Pavuluri Srinivasa Rao
Abstract:
Jackfruit (Artocarpus heterophyllus L.) is an underutilized yet highly nutritious fruit with a unique flavour, known for its therapeutic and culinary properties. The fresh jackfruit bulb has a very short shelf life due to its high moisture and sugar content, which leads to microbial spoilage and enzymatic browning, hindering consumer acceptability and marketability. An attempt has been made to preserve ripe jackfruit bulbs by applying high pressure (HP) over a range of 200-500 MPa at ambient temperature for dwell times ranging from 5 to 20 min. The physicochemical properties of the jackfruit bulbs, such as pH, TSS, and titratable acidity, were not affected by the pressurization process. The ripening index of the fruit bulb also decreased following HP treatment. While the ascorbic acid content and antioxidant activity of the jackfruit bulb were well retained by high pressure processing (HPP), the total phenols and carotenoids showed a slight increase. HPP significantly affected the colour and textural properties of the jackfruit bulb. High pressure processing was highly effective in reducing the browning index of jackfruit bulbs in comparison to untreated bulbs. The firmness of the bulbs improved upon pressure treatment with longer dwell times. Polyphenol oxidase has been identified as the most prominent oxidative enzyme in the jackfruit bulb. The enzymatic activities of polyphenol oxidase and peroxidase were significantly reduced, by up to 40%, following treatment at 400 MPa for 15 min. HPP of jackfruit bulbs at ambient temperature is shown to be highly beneficial in improving shelf stability and retaining the nutrient profile, colour, and appearance while ensuring maximum inactivation of the spoilage enzymes.
Keywords: antioxidant capacity, ascorbic acid, carotenoids, color, HPP-high pressure processing, jackfruit bulbs, polyphenol oxidase, peroxidase, total phenolic content
Procedia PDF Downloads 175

4017 Describing Cognitive Decline in Alzheimer's Disease via a Picture Description Writing Task
Authors: Marielle Leijten, Catherine Meulemans, Sven De Maeyer, Luuk Van Waes
Abstract:
For the diagnosis of Alzheimer's disease (AD), a large variety of neuropsychological tests are available. In some of these tests, linguistic processing, both oral and written, is an important factor. Language disturbances might serve as a strong indicator of an underlying neurodegenerative disorder like AD. However, current diagnostic instruments for language assessment mainly focus on product measures, such as text length or number of errors, ignoring the importance of the process that leads to written or spoken language production. In this study, our aim is to describe and test differences between cognitively healthy and cognitively impaired elderly on the basis of a selection of writing process variables (inter- and intrapersonal characteristics). These process variables are mainly related to pause times, because the number, length, and location of pauses have proven to be an important indicator of the cognitive complexity of a process. Method: Participants enrolled in our research were chosen on the basis of a number of basic criteria necessary for collecting reliable writing process data. Furthermore, we opted to match the thirteen cognitively impaired patients (8 MCI and 5 AD) with thirteen cognitively healthy elderly. At the start of the experiment, participants were each given a number of tests, such as the Mini-Mental State Examination (MMSE), the Geriatric Depression Scale (GDS), the forward and backward digit span, and the Edinburgh Handedness Inventory (EHI). A questionnaire was also used to collect socio-demographic information (age, gender, education) on the subjects, as well as details on their level of computer literacy. The tests and questionnaire were followed by two typing tasks and two picture description tasks. For the typing tasks, participants had to copy (type) characters, words, and sentences from a screen, whereas the picture description tasks each consisted of an image they had to describe in a few sentences.
Both the typing and the picture description tasks were logged with Inputlog, a keystroke logging tool that allows us to log and time-stamp keystroke activity in order to reconstruct and describe text production processes. The main rationale behind keystroke logging is that writing fluency and flow reveal traces of the underlying cognitive processes. This explains the analytical focus on pause characteristics (length, number, distribution, location, etc.) and revision characteristics (number, type, operation, embeddedness, location, etc.). As in speech, pause times are seen as indexical of cognitive effort. Results: Preliminary analyses already show some promising results concerning pause times before, within, and after words. For all variables, mixed effects models were used that included participants as a random effect and MMSE scores, GDS scores, and word categories (such as determiners and nouns) as fixed effects. For pause times before and after words, cognitively impaired patients paused longer than healthy elderly. These variables did not show an interaction effect between participant group (cognitively impaired or healthy elderly) and word category. However, pause times within words did show an interaction effect, which indicates that pause times within certain word categories differ significantly between patients and healthy elderly.
Keywords: Alzheimer's disease, keystroke logging, matching, writing process
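The pause analysis described above can be sketched as follows: given time-stamped keystrokes (as a keystroke logger such as Inputlog records them), compute inter-keystroke intervals and classify long gaps as pauses before, within, or after a word. The 2-second threshold, the sample keystrokes, and the classification rule are assumptions for illustration only, not values or logic from the study.

```python
keystrokes = [  # (time in ms, key) -- invented sample data
    (0, "t"), (150, "h"), (300, "e"), (2900, " "),
    (3100, "c"), (3250, "a"), (6000, "t"),
]

PAUSE_MS = 2000  # gaps at or above this threshold count as pauses (assumed)

def classify_pauses(log, threshold=PAUSE_MS):
    """Return (gap length, location) for every inter-keystroke gap >= threshold."""
    pauses = []
    for (t0, k0), (t1, k1) in zip(log, log[1:]):
        gap = t1 - t0
        if gap >= threshold:
            if k0 == " ":
                location = "before word"   # pause after a space, before the next word
            elif k1 == " ":
                location = "after word"    # pause just before the word boundary
            else:
                location = "within word"
            pauses.append((gap, location))
    return pauses

print(classify_pauses(keystrokes))  # [(2600, 'after word'), (2750, 'within word')]
```

Counts and lengths of pauses per location and per word category are then the dependent variables that feed the mixed effects models mentioned in the abstract.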
Procedia PDF Downloads 366

4016 The Effect of Speech-Shaped Noise and Speaker’s Voice Quality on First-Grade Children’s Speech Perception and Listening Comprehension
Authors: I. Schiller, D. Morsomme, A. Remacle
Abstract:
Children’s ability to process spoken language develops until the late teenage years. At school, where efficient spoken language processing is key to academic achievement, listening conditions are often unfavorable. High background noise and a poor teacher’s voice represent typical sources of interference. It can be assumed that these factors particularly affect primary school children, because their language and literacy skills are still low. While it is generally accepted that background noise and an impaired voice impede spoken language processing, there is an increasing need to analyze impacts within specific linguistic areas. Against this background, the aim of the study was to investigate the effect of speech-shaped noise and an imitated dysphonic voice on first-grade primary school children’s speech perception and sentence comprehension. Via headphones, 5- to 6-year-old children, recruited within the French-speaking community of Belgium, listened to and performed a minimal-pair discrimination task and a sentence-picture matching task. Stimuli were randomly presented according to four experimental conditions: (1) normal voice / no noise, (2) normal voice / noise, (3) impaired voice / no noise, and (4) impaired voice / noise. The primary outcome measure was task score. How did performance vary with respect to listening condition? Preliminary results will be presented for speech perception and sentence comprehension and carefully interpreted in the light of past findings. This study supports our understanding of children’s language processing skills under adverse conditions. The results shall serve as a starting point for probing new measures to optimize children’s learning environments.
Keywords: impaired voice, sentence comprehension, speech perception, speech-shaped noise, spoken language processing
Procedia PDF Downloads 192

4015 Identifying Confirmed Resemblances in Problem-Solving Engineering, Both in the Past and Present
Authors: Colin Schmidt, Adrien Lecossier, Pascal Crubleau, Philippe Blanchard, Simon Richir
Abstract:
Introduction: The widespread availability of artificial intelligence, exemplified by Generative Pre-trained Transformers (GPT) relying on large language models (LLMs), has caused a seismic shift in the realm of knowledge. Everyone now has the capacity to swiftly learn how these models can serve them well or not. Today, conversational AI like ChatGPT is grounded in neural transformer models, a significant advance in natural language processing facilitated by the emergence of renowned LLMs constructed using neural transformer architecture. Inventiveness of an LLM: OpenAI's GPT-3 stands as a premier LLM, capable of handling a broad spectrum of natural language processing tasks without requiring fine-tuning, reliably producing text that reads as if authored by humans. However, even with an understanding of how LLMs respond to the questions asked, there may be an inventive model lurking behind OpenAI's seemingly endless responses that is yet to be uncovered. Some unforeseen reasoning may emerge from the interconnection of neural networks here. Just as a Soviet researcher in the 1940s questioned the existence of common factors in inventions, enabling an understanding of how and according to what principles humans create them, it is equally legitimate today to explore whether the solutions provided by LLMs to complex problems also share common denominators. Theory of Inventive Problem Solving (TRIZ): We will revisit some fundamentals of TRIZ and how Genrich Altshuller was inspired by the idea that inventions and innovations are essential means to solve societal problems. It is crucial to note that traditional problem-solving methods often fall short in discovering innovative solutions. The design team is frequently hampered by psychological barriers stemming from confinement within a highly specialized knowledge domain that is difficult to question. We presume that ChatGPT utilizes TRIZ 40.
Hence, the objective of this research is to decipher the inventive model of LLMs, particularly that of ChatGPT, through a comparative study. This will enhance the efficiency of sustainable innovation processes and shed light on how the construction of a solution to a complex problem is devised. Description of the Experimental Protocol: To confirm or reject our main hypothesis, that is, to determine whether ChatGPT uses TRIZ, we will follow a stringent protocol, which we detail, drawing on insights from a panel of two TRIZ experts. Conclusion and Future Directions: In this endeavor, we sought to comprehend how an LLM like GPT addresses complex challenges. Our goal was to analyze the inventive model of the responses provided by an LLM, specifically ChatGPT, by comparing it to an existing standard model: TRIZ 40. Of course, problem solving remains the main focus of our endeavours.
Keywords: artificial intelligence, TRIZ, ChatGPT, inventiveness, problem-solving
Procedia PDF Downloads 75

4014 DNA Multiplier: A Design Architecture of a Multiplier Circuit Using DNA Molecules
Authors: Hafiz Md. Hasan Babu, Khandaker Mohammad Mohi Uddin, Nitish Biswas, Sarreha Tasmin Rikta, Nuzmul Hossain Nahid
Abstract:
Nanomedicine and bioengineering use biological systems that can perform computing operations. In a biocomputational circuit, different types of biomolecules and DNA (deoxyribonucleic acid) are used as active components. DNA computing has the capability of performing parallel processing and has a large storage capacity, which makes it distinct from other computing systems. In most processors, the multiplier is treated as a core hardware block, and multiplication is one of the most time-consuming and lengthy tasks. In this paper, cost-effective DNA multipliers are designed using algorithms of molecular DNA operations and compared with conventional ones. The speed and storage capacity of a DNA multiplier are also much higher than those of a traditional silicon-based multiplier.
Keywords: biological systems, DNA multiplier, large storage, parallel processing
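The abstract does not detail the molecular operations, but a DNA multiplier implements the same arithmetic a digital multiplier circuit performs. As a point of reference, here is the conventional shift-and-add multiplication algorithm sketched in ordinary code; the 8-bit width is an arbitrary assumption for the example.

```python
def shift_add_multiply(a, b, width=8):
    """Multiply two unsigned integers with the shift-and-add algorithm."""
    product = 0
    for i in range(width):
        if (b >> i) & 1:          # if bit i of the multiplier is set...
            product += a << i     # ...add the multiplicand shifted left by i
    return product

print(shift_add_multiply(13, 11))  # 143
```

In a hardware multiplier, each shifted addition corresponds to a row of AND gates feeding an adder; a DNA design replaces those gates with molecular operations while keeping this logic.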
Procedia PDF Downloads 217

4013 A Design for Customer Preferences Model by Cluster Analysis of Geometric Features and Customer Preferences
Authors: Yuan-Jye Tseng, Ching-Yen Chen
Abstract:
In the design cycle, a main design task is to determine the external shape of the product. The external shape is one of the key factors that affect customers' preferences and their motivation to buy the product, especially in the case of a consumer electronic product such as a mobile phone. The relationship between the external shape and customer preferences needs to be studied to enhance the customer's purchase desire and action. In this research, a design for customer preferences model is developed for investigating the relationships between the external shape of a product and customer preferences. In the first stage, the names of geometric features are collected and evaluated from the data of specified internet web pages using the developed text miner. The key geometric features can be determined if their number of occurrences on the web pages is relatively high. For each key geometric feature, the numerical values are explored using the text miner to collect the internet data from the web pages. In the second stage, a cluster analysis model is developed to evaluate the numerical values of the key geometric features and divide the external shapes into several groups. Several design suggestion cases can then be proposed, for example, a large model, a mid-size model, and a mini model for designing a mobile phone. A customer preference index is developed by evaluating the numerical data of each key geometric feature of the design suggestion cases. The design suggestion case with the top-ranking customer preference index can be selected as the final design of the product. In this paper, an example product of a notebook computer is illustrated. It shows that the external shape of a product can be used to drive customer preferences.
The presented design for customer preferences model is useful for determining a suitable external shape of the product to increase customer preferences.
Keywords: cluster analysis, customer preferences, design evaluation, design for customer preferences, product design
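The second-stage cluster analysis can be sketched with a tiny hand-rolled k-means over one key geometric feature. The feature (screen diagonal in inches), the values, and the choice of three groups are invented for illustration; the abstract does not specify which clustering algorithm the authors use.

```python
def kmeans_1d(values, k=3, iterations=20):
    """Minimal 1-D k-means: returns final centers and the grouped values."""
    # Seed centers by taking evenly spaced sorted values.
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iterations):
        groups = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[idx].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Illustrative phone screen diagonals scraped from product pages.
diagonals = [4.7, 4.8, 5.0, 5.5, 5.6, 6.1, 6.4, 6.5]
centers, groups = kmeans_1d(diagonals, k=3)
print(sorted(round(c, 2) for c in centers))  # [4.83, 5.55, 6.33]
```

The three resulting groups play the role of the "mini", "mid-size", and "large" design suggestion cases described in the abstract.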
Procedia PDF Downloads 191

4012 A Bottleneck-Aware Power Management Scheme in Heterogeneous Processors for Web Apps
Authors: Inyoung Park, Youngjoo Woo, Euiseong Seo
Abstract:
With the advent of WebGL, Web apps are now able to provide high-quality graphics by utilizing the underlying graphics processing units (GPUs). Although Web apps are becoming common and popular, the current power management schemes, which were devised for conventional native applications, are suboptimal for Web apps because of the additional layer, the Web browser, between the OS and the application. The Web browser running on a CPU issues GL commands, which render the images to be displayed by the currently running Web app, to the GPU, and the GPU processes them. The size and number of issued GL commands determine the processing load of the GPU. While the GPU is processing the GL commands, the CPU simultaneously executes the other compute-intensive threads. The actual user experience will be determined by either CPU processing or GPU processing, depending on which of the two is the more demanded resource. For example, when the GPU work queue is saturated by outstanding commands, lowering the performance level of the CPU does not affect the user experience, because it is already deteriorated by the retarded execution of GPU commands. Consequently, it is desirable to lower the CPU or GPU performance level to save energy when the other resource is saturated and becomes a bottleneck in the execution flow. Based on this observation, we propose a power management scheme that is specialized for the Web app runtime environment. This approach incurs two technical challenges: identification of the bottleneck resource and determination of the appropriate performance level for the unsaturated resource. The proposed power management scheme uses the CPU utilization level of the Window Manager to tell which one, if any, is the bottleneck. The Window Manager draws the final screen using the processed results delivered from the GPU. Thus, the Window Manager is on the critical path that determines the quality of the user experience and is executed purely by the CPU.
The proposed scheme uses a weighted average of the Window Manager utilization to prevent excessive sensitivity and fluctuation. We classified Web apps into three categories using analysis results that measured frames-per-second (FPS) changes under diverse CPU/GPU clock combinations. The results showed that the capability of the CPU decides the user experience when the Window Manager utilization is above 90%, so the proposed scheme decreases the performance level of the CPU by one step. On the contrary, when its utilization is less than 60%, the bottleneck usually lies in the GPU, and it is desirable to decrease the performance of the GPU. Even for the processing unit that is not on the critical path, an excessive performance drop can occur and may adversely affect the user experience. Therefore, our scheme lowers the frequency gradually until it finds an appropriate level, by periodically checking the CPU utilization. The proposed scheme reduced energy consumption by 10.34% on average in comparison to the conventional Linux kernel, and worsened FPS by only 1.07% on average.
Keywords: interactive applications, power management, QoS, Web apps, WebGL
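The decision logic above can be sketched as follows, using the 90% and 60% Window Manager utilization thresholds stated in the abstract. The exponential smoothing weight and the exact step-down policy are assumptions for illustration; the paper describes a weighted average without giving its parameters here.

```python
def smoothed(history, alpha=0.3):
    """Exponentially weighted average to damp fluctuation in utilization samples."""
    value = history[0]
    for sample in history[1:]:
        value = alpha * sample + (1 - alpha) * value
    return value

def choose_action(wm_utilization_history):
    """Pick a frequency action from smoothed Window Manager CPU utilization (%)."""
    u = smoothed(wm_utilization_history)
    if u > 90:
        return "step down CPU frequency"   # CPU-bound: the GPU waits on the CPU
    if u < 60:
        return "step down GPU frequency"   # GPU-bound: the CPU has headroom
    return "hold current frequencies"

print(choose_action([95, 96, 97, 94]))  # step down CPU frequency
print(choose_action([40, 45, 42, 44]))  # step down GPU frequency
```

In the real scheme the step-down is applied gradually and re-evaluated each period, so a wrong guess is corrected before FPS degrades noticeably.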
Procedia PDF Downloads 193

4011 Improvement of Piezoresistive Pressure Sensor Accuracy by Means of Current Loop Circuit Using Optimal Digital Signal Processing
Authors: Peter A. L’vov, Roman S. Konovalov, Alexey A. L’vov
Abstract:
The paper presents an advanced digital modification of the conventional current loop circuit for piezoresistive pressure transducers. Optimal DSP algorithms, which process the current loop responses by the maximum likelihood method, are applied to diminish measurement errors. The loop circuit has additional advantages, such as the ability to operate with any type of resistance or reactance sensor, and a considerable increase in the accuracy and quality of measurements compared with AC bridges. The results obtained are intended to replace high-accuracy and expensive measuring bridges with current loop circuits.
Keywords: current loop, maximum likelihood method, optimal digital signal processing, precise pressure measurement
Procedia PDF Downloads 529

4010 Influence of Thermal Treatments on Ovomucoid as Allergenic Protein
Authors: Nasser A. Al-Shabib
Abstract:
Food allergens are most commonly in a non-native form when exposed to the immune system. Most food proteins undergo various treatments (e.g., thermal or proteolytic processing) during food manufacturing. Such treatments have the potential to alter the chemical structure of food allergens and convert them to more denatured or unfolded forms. These conformational changes may affect the allergenicity of the treated allergens. However, most allergenic proteins possess high resistance to thermal modification and digestive enzymes. In the present study, ovomucoid (a major allergenic protein of egg white) was heated in phosphate-buffered saline (pH 7.4) at different temperatures, in different aqueous solutions, and on different surfaces for various times. The results indicated that different antibody-based methods had different sensitivities in detecting the heated ovomucoid. When using one particular immunoassay, the immunoreactivity of ovomucoid increased rapidly after heating in water, whereas immunoreactivity declined after heating in alkaline buffer (pH 10). Ovomucoid appeared more immunoreactive when dissolved in PBS (pH 7.4) and heated on a stainless steel surface. To the best of our knowledge, this is the first time that antibody-based methods have been applied to the detection of ovomucoid adsorbed onto different surfaces under various conditions. The results obtained suggest that using antibodies to detect ovomucoid after food processing may be problematic. False assurance will be given by the use of inappropriate, non-validated immunoassays such as those available commercially as 'Swab' tests. A greater understanding of antibody-protein interactions after the processing of a protein is required.
Keywords: ovomucoid, thermal treatment, solutions, surfaces
Procedia PDF Downloads 449

4009 Scheduling in Cloud Networks Using Chakoos Algorithm
Authors: Masoumeh Ali Pouri, Hamid Haj Seyyed Javadi
Abstract:
Nowadays, cloud processing is one of the important issues in information technology. Since scheduling of a task graph is an NP-hard problem, approaches based on non-deterministic methods such as evolutionary processing, mostly genetic and cuckoo algorithms, are effective. Therefore, an efficient algorithm has been proposed for task graph scheduling that obtains an appropriate schedule in minimum time. In this algorithm, the new approach is based on shortening the length of the critical path and reducing the communication cost. Finally, the results obtained from the implementation of the presented method show that this algorithm performs the same as other algorithms on graphs without communication cost, and performs quicker and better than algorithms such as DSC and MCP on graphs involving communication cost.
Keywords: cloud computing, scheduling, tasks graph, chakoos algorithm
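One quantity the proposed algorithm optimizes, the critical-path length of a task graph including inter-task communication costs, can be sketched directly. The graph, execution times, and communication costs below are invented for the example; they are not from the paper.

```python
def critical_path(durations, edges):
    """Longest path through a task DAG, where edges carry communication cost."""
    # durations: {task: execution time}; edges: {(u, v): communication cost}
    successors = {t: [] for t in durations}
    for (u, v), cost in edges.items():
        successors[u].append((v, cost))

    memo = {}
    def longest_from(task):
        if task not in memo:
            memo[task] = durations[task] + max(
                (cost + longest_from(nxt) for nxt, cost in successors[task]),
                default=0,
            )
        return memo[task]

    return max(longest_from(t) for t in durations)

durations = {"A": 2, "B": 3, "C": 1, "D": 2}
edges = {("A", "B"): 4, ("A", "C"): 1, ("B", "D"): 2, ("C", "D"): 5}
print(critical_path(durations, edges))  # 13
```

Mapping communicating tasks onto the same machine zeroes the cost on those edges, which is exactly how shortening the critical path and reducing communication cost interact in such schedulers.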
Procedia PDF Downloads 66