Search results for: Kazakh speech dataset
1304 Comparison Of Virtual Non-Contrast To True Non-Contrast Images Using Dual Layer Spectral Computed Tomography
Authors: O’Day Luke
Abstract:
Purpose: To validate virtual non-contrast reconstructions generated from dual-layer spectral computed tomography (DL-CT) data as an alternative to the acquisition of a dedicated true non-contrast dataset during multiphase contrast studies. Material and methods: Thirty-three patients underwent a routine multiphase clinical CT examination, using Dual-Layer Spectral CT, from March to August 2021. True non-contrast (TNC) and virtual non-contrast (VNC) datasets, generated from both portal venous and arterial phase imaging, were evaluated. For every patient in both true and virtual non-contrast datasets, a region-of-interest (ROI) was defined in the aorta, liver, fluid (i.e. gallbladder, urinary bladder), kidney, muscle, fat and spongious bone, resulting in 693 ROIs. Differences in attenuation for VNC and TNC images were compared, both separately and combined. Consistency between VNC reconstructions obtained from the arterial and portal venous phase was evaluated. Results: Comparison of CT density (HU) on the VNC and TNC images showed a high correlation. The mean difference between TNC and VNC images (excluding bone results) was 5.5 ± 9.1 HU, and > 90% of all comparisons showed a difference of less than 15 HU. For all tissues but spongious bone, the mean absolute difference between TNC and VNC images was below 10 HU. VNC images derived from the arterial and the portal venous phase showed a good correlation in most tissue types. The aortic attenuation, however, was somewhat dependent on which dataset was used for reconstruction. Bone evaluation with VNC datasets continues to be a problem, as spectral CT algorithms are currently poor at differentiating bone and iodine. Conclusion: Given the increasing availability of DL-CT and the proven accuracy of virtual non-contrast processing, VNC is a promising tool for generating additional data during routine contrast-enhanced studies.
This study shows the utility of virtual non-contrast scans as an alternative for true non-contrast studies during multiphase CT, with potential for dose reduction, without loss of diagnostic information.
Keywords: dual-layer spectral computed tomography, virtual non-contrast, true non-contrast, clinical comparison
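The headline statistics of the abstract above (mean TNC−VNC difference and the share of ROI pairs within 15 HU) can be reproduced from paired ROI measurements with a short script. A minimal sketch; the HU values below are invented for illustration, not the study's data.

```python
# Sketch: comparing paired TNC and VNC ROI attenuation values (HU).
# The numbers here are illustrative, not the study's measurements.

def compare_hu(tnc, vnc, tolerance=15.0):
    """Return mean difference, mean absolute difference, and the
    fraction of ROI pairs whose difference is below `tolerance` HU."""
    diffs = [t - v for t, v in zip(tnc, vnc)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    mean_abs = sum(abs(d) for d in diffs) / n
    within = sum(1 for d in diffs if abs(d) < tolerance) / n
    return mean_diff, mean_abs, within

tnc_hu = [45.0, 52.0, 8.0, 30.0, 55.0, -90.0]   # e.g. liver, kidney, fluid, muscle, aorta, fat
vnc_hu = [41.0, 49.0, 10.0, 28.0, 50.0, -95.0]
mean_diff, mean_abs, within = compare_hu(tnc_hu, vnc_hu)
print(f"mean diff {mean_diff:+.1f} HU, mean |diff| {mean_abs:.1f} HU, "
      f"{within:.0%} within 15 HU")
```

The same per-tissue loop, run over 693 ROIs, would yield the kind of summary reported in the Results section.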
Procedia PDF Downloads 141
1303 Hand Gesture Detection via EmguCV Canny Pruning
Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae
Abstract:
Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI). AI concepts are applicable in Human Computer Interaction (HCI), Expert Systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool. This tool is used mostly by deaf communities and people with speech disorders. Communication barriers exist when these communities interact with others. This research aims to build a hand recognition system for Lesotho’s Sesotho and English language interpretation. The system will help to bridge the communication problems encountered by the mentioned communities. The system has various processing modules. The modules consist of a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is a process of identifying an object. The proposed system uses Haar cascade detection with Canny pruning. Canny pruning implements Canny edge detection, an optimal image processing algorithm used to detect the edges of an object. The system employs a skin detection algorithm. The skin detection performs background subtraction and computes the convex hull and centroid to assist in the detection process. Recognition is a process of gesture classification. Template matching classifies each hand gesture in real-time. The system was tested using various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and ultimately recognition. Detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were considered. The more the light intensity, the faster the detection rate.
Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system which can be used for sign language interpretation.
Keywords: canny pruning, hand recognition, machine learning, skin tracking
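The geometric part of the skin-detection step described above (convex hull and centroid of candidate skin pixels) can be sketched without any vision library. In the real pipeline the pixel coordinates would come from a skin-color mask produced by background subtraction; here that mask is assumed as a toy list of points.

```python
# Sketch of the skin-detection geometry: given (x, y) coordinates of
# candidate skin pixels, compute their convex hull (Andrew's monotone
# chain) and centroid. The input points are a hypothetical toy mask.

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # hull vertices, counter-clockwise

def centroid(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

skin_pixels = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1), (1, 2)]  # toy mask
hull = convex_hull(skin_pixels)
cx, cy = centroid(skin_pixels)
print(hull, (cx, cy))
```

The hull bounds the hand region and the centroid gives a stable reference point for tracking it between frames.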
Procedia PDF Downloads 185
1302 Maternal Mind-Mindedness and Its Association with Attachment: The Case of Arab Infants and Mothers in Israel
Authors: Gubair Tarabeh, Ghadir Zriek, David Oppenheim, Avi Sagi-Schwartz, Nina Koren-Karie
Abstract:
Introduction: Mind-Mindedness (MM) focuses on mothers' attunement to their infant's mental states as reflected in their speech to the infant. Appropriate MM comments are associated with attachment security in individualistic Western societies where parents value their children’s autonomy and independence, and may therefore be more likely to engage in mind-related discourse with their children that highlights individual thoughts, preferences, emotions, and motivations. Such discourse may begin in early infancy, even before infants are likely to understand the semantic meaning of parental speech. Parents in collectivistic societies, by contrast, are thought to emphasize conforming to social norms more than individual goals, and this may lead to parent-child discourse that emphasizes appropriate behavior and compliance with social norms rather than internal mental states of the self and the other. Therefore, the examination of maternal MM and its relationship with attachment in the Arab collectivistic culture in Israel was of particular interest. Aims of the study: The goal of the study was to examine whether the associations between MM and attachment in the Arab culture in Israel are the same as in Western samples. An additional goal was to examine whether appropriate and non-attuned MM comments could, together, distinguish among mothers of children in the different attachment classifications. Material and Methods: 76 Arab mothers and their infants between the ages of 12 and 18 months were observed in the Strange Situation Procedure (49 secure (B), 11 ambivalent (C), 14 disorganized (D), and 2 avoidant (A) infants). MM was coded from an 8-minute free-play sequence. Results: Mothers of B infants used more appropriate and fewer non-attuned MM comments than mothers of D infants, with no significant difference from mothers of C infants. Also, mothers of B infants used fewer non-attuned MM comments than both mothers of D infants and mothers of C infants.
In addition, mothers of B infants were most likely to show the combination of high appropriate and low non-attuned MM comments; mothers of D infants were most likely to show the combination of high non-attuned and low appropriate MM comments; and a non-significant trend indicated that mothers of C infants were most likely to show a combination of high appropriate and high non-attuned MM comments. Conclusion: Maternal MM was associated with attachment in the Arab culture in Israel, with combinations of appropriate and non-attuned MM comments distinguishing between different attachment classifications.
Keywords: attachment, maternal mind-mindedness, Arab culture, collectivistic culture
Procedia PDF Downloads 154
1301 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks
Authors: Ahmed Abdullah Ahmed
Abstract:
The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writings. Although a great effort has been made by previous studies to come up with various methods, their performance, especially in terms of accuracy, has fallen short, and room for improvement remains wide open. The proposed technique employs optimal-codebook-based writer characterization, where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most of the classical codebook-based approaches, which segment the writing into graphemes, this study fragments particular areas of the writing, namely the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to split the beginning and ending zones of the handwriting into small fragments. The similar fragments of beginning strokes are grouped together to create the beginning cluster, and similarly, the ending strokes are grouped to create the ending cluster. These two clusters lead to the development of two codebooks (beginning and ending) by choosing the center of every group of similar fragments. Writings under study are then represented by computing the probability of occurrence of codebook patterns. The probability distribution is used to characterize each writer. Two writings are then compared by computing distances between their respective probability distributions. The evaluations were carried out on the standard ICFHR dataset of 206 writers, using the beginning and ending codebooks separately.
Finally, the ending codebook achieved the highest identification rate of 98.23%, which is the best result so far on the ICFHR dataset.
Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments
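The characterization step described above, representing a writing by the probability of occurrence of codebook patterns and comparing two writings by a distance between those distributions, can be sketched briefly. The codebook size, the pattern assignments, and the choice of chi-square distance are illustrative assumptions, not details from the paper.

```python
from collections import Counter

# Sketch of codebook-based writer characterization: each fragment of a
# writing sample is assigned to its nearest codebook pattern (assumed
# done already), and the sample becomes a normalized histogram.

def pattern_distribution(assignments, codebook_size):
    """Probability of occurrence of each codebook pattern."""
    counts = Counter(assignments)
    total = len(assignments)
    return [counts.get(i, 0) / total for i in range(codebook_size)]

def chi2_distance(p, q, eps=1e-12):
    """Chi-square distance, a common choice for comparing histograms."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(p, q))

K = 4  # toy codebook of 4 beginning-stroke patterns
writer_a = pattern_distribution([0, 0, 1, 2, 0, 3], K)
writer_b = pattern_distribution([1, 1, 2, 2, 3, 3], K)
print(writer_a, writer_b, chi2_distance(writer_a, writer_b))
```

Identification then reduces to assigning a query writing to the known writer whose distribution is nearest under this distance.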
Procedia PDF Downloads 512
1300 Census and Mapping of Oil Palms Over Satellite Dataset Using Deep Learning Model
Authors: Gholba Niranjan Dilip, Anil Kumar
Abstract:
Accurate, reliable mapping of oil palm plantations and a census of individual palm trees are a huge challenge. This study addresses this challenge and developed an optimized solution by implementing deep learning techniques on remote sensing data. The oil palm is a very important tropical crop. To improve its productivity and land management, it is imperative to have an accurate census over large areas. Since manual census is costly and prone to approximations, a methodology for automated census using panchromatic images from the Cartosat-2, SkySat and WorldView-3 satellites is demonstrated. Two different study sites in Indonesia were selected. A customized set of training data and ground-truth data was created for this study from Cartosat-2 images. The pre-trained Single Shot MultiBox Detector (SSD) Lite MobileNet V2 Convolutional Neural Network (CNN) from the TensorFlow Object Detection API was subjected to transfer learning on this customized dataset. The SSD model is able to generate a bounding box for each oil palm and also counts the palms with good accuracy on the panchromatic images. The detection yielded an F-score of 83.16% on seven different images. The detections are buffered and dissolved to generate polygons demarcating the boundaries of the oil palm plantations. This provided the area under the plantations and also gave maps of their location, thereby completing the automated census with a fairly high accuracy (≈100%). The trained CNN was found competent enough to detect oil palm crowns in images obtained from multiple satellite sensors and of varying temporal vintage. It helped to estimate the increase in oil palm plantations from 2014 to 2021 in the study area. The study proved that high-resolution panchromatic satellite images can successfully be used to undertake a census of oil palm plantations using CNNs.
Keywords: object detection, oil palm tree census, panchromatic images, single shot multibox detector
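The buffer-and-dissolve post-processing mentioned above can be sketched without GIS libraries by rasterizing the detections onto a grid and taking their union; the grid resolution, buffer size, and toy detections are assumptions for illustration, not values from the study.

```python
# Sketch of the post-processing step: detected palm bounding boxes are
# buffered (grown), rasterized onto a pixel grid, and dissolved (their
# union taken) so that both the palm count and the total plantation
# area can be reported. All numbers here are illustrative.

def dissolve_area(boxes, buffer_px, width, height):
    """Union area (in pixels) of boxes grown by `buffer_px` on each side."""
    mask = [[False] * width for _ in range(height)]
    for x0, y0, x1, y1 in boxes:
        for y in range(max(0, y0 - buffer_px), min(height, y1 + buffer_px)):
            for x in range(max(0, x0 - buffer_px), min(width, x1 + buffer_px)):
                mask[y][x] = True
    return sum(row.count(True) for row in mask)

detections = [(2, 2, 5, 5), (4, 4, 8, 8)]   # two overlapping palm crowns
area = dissolve_area(detections, 1, 12, 12)
print("palms:", len(detections), "dissolved area:", area)
```

In production this would be done in georeferenced coordinates with a GIS toolkit, so the dissolved polygons directly give plantation boundaries and hectares.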
Procedia PDF Downloads 160
1299 Integrating Radar Sensors with an Autonomous Vehicle Simulator for an Enhanced Smart Parking Management System
Authors: Mohamed Gazzeh, Bradley Null, Fethi Tlili, Hichem Besbes
Abstract:
The burgeoning global ownership of personal vehicles has posed a significant strain on urban infrastructure, notably parking facilities, leading to traffic congestion and environmental concerns. Effective parking management systems (PMS) are indispensable for optimizing urban traffic flow and reducing emissions. The most commonly deployed systems nowadays rely on computer vision technology. This paper explores the integration of radar sensors and simulation in the context of smart parking management. We concentrate on radar sensors due to their versatility and utility in automotive applications, which extends to PMS. Additionally, radar sensors play a crucial role in driver assistance systems and autonomous vehicle development. However, the resource-intensive nature of radar data collection for algorithm development and testing necessitates innovative solutions. Simulation, particularly the monoDrive simulator, an internal development tool used by NI, the Test and Measurement division of Emerson, offers a practical means to overcome this challenge. The primary objectives of this study encompass simulating radar sensors to generate a substantial dataset for algorithm development, testing, and, critically, assessing the transferability of models between simulated and real radar data. We focus on occupancy detection in parking as a practical use case, categorizing each parking space as vacant or occupied. The simulation approach using monoDrive enables algorithm validation and reliability assessment for virtual radar sensors. We meticulously designed various parking scenarios, involving manual measurement of parking spot coordinates and orientations and the use of a TI AWR1843 radar. To create a diverse dataset, we generated 4950 scenarios, comprising a total of 455,400 parking spots.
This extensive dataset encompasses radar configuration details, ground truth occupancy information, radar detections, and associated object attributes such as range, azimuth, elevation, radar cross-section, and velocity data. The paper also addresses the intricacies and challenges of real-world radar data collection, highlighting the advantages of simulation in producing radar data for parking lot applications. We developed classification models based on Support Vector Machines (SVM) and Density-Based Spatial Clustering of Applications with Noise (DBSCAN), exclusively trained and evaluated on simulated data. Subsequently, we applied these models to real-world data, comparing their performance against the monoDrive dataset. The study demonstrates the feasibility of transferring models from a simulated environment to real-world applications, achieving an impressive accuracy score of 92% using only one radar sensor. This finding underscores the potential of radar sensors and simulation in the development of smart parking management systems, offering significant benefits for improving urban mobility and reducing environmental impact. The integration of radar sensors and simulation represents a promising avenue for enhancing smart parking management systems, addressing the challenges posed by the exponential growth in personal vehicle ownership. This research contributes valuable insights into the practicality of using simulated radar data in real-world applications and underscores the role of radar technology in advancing urban sustainability.
Keywords: autonomous vehicle simulator, FMCW radar sensors, occupancy detection, smart parking management, transferability of models
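The occupancy-detection task above can be illustrated with a deliberately minimal stand-in: each parking spot as an axis-aligned rectangle, called "occupied" when enough radar detections fall inside it. The actual study trains SVM and DBSCAN models on much richer features (range, azimuth, RCS, velocity); the thresholding rule and all coordinates below are assumptions for illustration only.

```python
# Minimal stand-in for radar-based parking occupancy detection: a spot
# is flagged occupied when at least `min_points` radar detections land
# inside its rectangle. The real system uses SVM/DBSCAN classifiers.

def occupancy(spots, detections, min_points=3):
    """Map spot id -> True (occupied) / False (vacant) from (x, y) hits."""
    status = {}
    for spot_id, (x0, y0, x1, y1) in spots.items():
        hits = sum(1 for x, y in detections if x0 <= x <= x1 and y0 <= y <= y1)
        status[spot_id] = hits >= min_points
    return status

spots = {"A1": (0, 0, 2, 5), "A2": (2.1, 0, 4, 5)}     # toy lot geometry
radar_hits = [(0.5, 1.0), (1.0, 2.0), (1.5, 4.0), (3.0, 1.0)]
result = occupancy(spots, radar_hits)
print(result)
```

A density threshold like this is also the intuition behind DBSCAN's core-point criterion, which is one reason clustering works well on radar point clouds.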
Procedia PDF Downloads 81
1298 A Genre-Based Approach to the Teaching of Pronunciation
Authors: Marden Silva, Danielle Guerra
Abstract:
Some studies have indicated that pronunciation teaching has not received enough attention from teachers in EFL contexts. In particular, addressing segmental and suprasegmental features through a genre-based approach may be an opportunity to integrate pronunciation into more meaningful learning practice. Therefore, the aim of this project was to carry out a survey on the aspects of English pronunciation that Brazilian students consider more difficult to learn, thus enabling the discussion of strategies that can facilitate the development of oral skills in English classes by integrating the teaching of phonetic-phonological aspects into the genre-based approach. Notions of intelligibility, fluency and accuracy were proposed by some authors as an ideal didactic sequence. According to their proposals, basic learners should be exposed to activities focused on the notion of intelligibility, intermediate students to the notion of fluency, and finally more advanced ones to accuracy practices. In order to test this hypothesis, data collection was conducted during three high school English classes at the Federal Center for Technological Education of Minas Gerais (CEFET-MG), in Brazil, through questionnaires and didactic activities, which were recorded and transcribed for further analysis. The genre debate was chosen to facilitate the oral expression of the participants in a freer way, having them answer questions and give their opinions about a previously selected topic. The findings indicated that basic students demonstrated more difficulty with aspects of English pronunciation than the others. Many of the intelligibility aspects analyzed had to be listened to more than once for a better understanding.
For intermediate students, the recorded speeches were considerably easier to understand, but they found it more difficult to pronounce the words fluently, often interrupting their speech to think about what they were going to say and how they would say it. Lastly, more advanced learners seemed to express their ideas more fluently, but subtle errors related to accuracy were still perceptible in their speech, thereby confirming the proposed hypothesis. It was also seen that using a genre-based approach to promote oral communication in English classes might be a relevant method, considering the socio-communicative function inherent in the suggested approach.
Keywords: EFL, genre-based approach, oral skills, pronunciation
Procedia PDF Downloads 130
1297 Personalizing Human Physical Life Routines Recognition over Cloud-based Sensor Data via AI and Machine Learning
Authors: Kaushik Sathupadi, Sandesh Achar
Abstract:
Pervasive computing is a growing research field that aims to recognize human physical life routines (HPLR) based on body-worn sensors such as MEMS-based technologies. The use of these technologies for human activity recognition is progressively increasing. On the other hand, personalizing human life routines using numerous machine-learning techniques has always been an intriguing topic. Various methods have demonstrated the ability to recognize basic movement patterns; however, they still need improvement to anticipate the dynamics of human living patterns. This study introduces state-of-the-art techniques for recognizing static and dynamic patterns and forecasting those challenging activities from multi-fused sensors. Furthermore, numerous MEMS signals are extracted from one self-annotated IM-WSHA dataset and two benchmarked datasets. First, the acquired raw data is filtered with z-normalization and denoising methods. Then, we adopted statistical, local binary pattern, auto-regressive model, and intrinsic time-scale decomposition features for feature extraction from different domains. Next, the acquired features are optimized using maximum relevance and minimum redundancy (mRMR). Finally, an artificial neural network is applied to analyze the whole system's performance. As a result, we attained a 90.27% recognition rate for the self-annotated dataset, while the HARTH and KU-HAR datasets achieved 83% on nine living activities and 90.94% on 18 static and dynamic routines. Thus, the proposed HPLR system outperformed other state-of-the-art systems when evaluated against other methods in the literature.
Keywords: artificial intelligence, machine learning, gait analysis, local binary pattern (LBP), statistical features, micro-electro-mechanical systems (MEMS), maximum relevance and minimum redundancy (mRMR)
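The mRMR selection step mentioned above (maximum relevance, minimum redundancy) can be sketched with a simple correlation-based greedy variant: pick the feature most correlated with the label, penalized by its mean correlation with features already chosen. mRMR is usually defined with mutual information; the correlation proxy and the toy data below are illustrative assumptions.

```python
# Correlation-based sketch of mRMR feature selection: greedily add the
# feature with the best (relevance - redundancy) score. Toy data only.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def mrmr(features, labels, k):
    """features: dict name -> list of values; returns k selected names."""
    selected = []
    while len(selected) < k:
        best, best_score = None, float("-inf")
        for name, values in features.items():
            if name in selected:
                continue
            relevance = abs(pearson(values, labels))
            redundancy = (sum(abs(pearson(values, features[s]))
                              for s in selected) / len(selected)
                          if selected else 0.0)
            if relevance - redundancy > best_score:
                best, best_score = name, relevance - redundancy
        selected.append(best)
    return selected

feats = {"f1": [1, 2, 3, 4],            # nearly a copy of f2: redundant
         "f2": [1.1, 2.0, 3.2, 3.9],
         "f3": [2, 1, 4, 3]}            # complementary information
labels = [0, 0, 1, 1]
print(mrmr(feats, labels, 2))           # picks f2, then skips redundant f1
```

Plain relevance ranking would pick f1 and f2 together; the redundancy penalty is what makes mRMR prefer the complementary f3.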
Procedia PDF Downloads 20
1296 Communicating Meaning through Translanguaging: The Case of Multilingual Interactions of Algerians on Facebook
Authors: F. Abdelhamid
Abstract:
Algeria is a multilingual speech community where individuals constantly mix between codes in spoken discourse. Code is used as a cover term to refer to the existing languages and language varieties, which include, among others, the mother tongue of the majority Algerian Arabic, the official language Modern Standard Arabic, and the foreign languages French and English. The present study explores whether Algerians mix between these codes in online communication as well. Facebook is the platform from which data is collected because it is the preferred social media site for most Algerians and the most used one. Adopting the notion of translanguaging, this study attempts to explain how users of Facebook use multilingual messages to communicate meaning. Accordingly, multilingual interactions are not approached from a pejorative perspective but rather as a creative linguistic behavior that multilinguals utilize to achieve intended meanings. The study is intended as a contribution to the research on multilingualism online because, although an extensive literature has investigated multilingualism in spoken discourse, limited research has investigated it online. Its aim is two-fold. First, it aims at ensuring that the selected platform for analysis, namely Facebook, could be a source of multilingual data to enable the qualitative analysis. This is done by measuring frequency rates of multilingual instances. Second, when enough multilingual instances are encountered, it aims at describing and interpreting some selected ones. 120 posts and 16335 comments were collected from two Facebook pages. Analysis revealed that a third of the collected data consists of multilingual messages. Users of Facebook mixed between the four mentioned codes in writing their messages. The most frequent cases are mixing between Algerian Arabic and French and between Algerian Arabic and Modern Standard Arabic.
A focused qualitative analysis followed, in which some examples are interpreted and explained. It seems that Algerians mix between codes when communicating online even though writing is a conscious type of communication. This suggests that such behavior is not a random and corrupted way of communicating but rather an intentional and natural one.
Keywords: Algerian speech community, computer mediated communication, languages in contact, multilingualism, translanguaging
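The study's first, quantitative step, measuring the frequency rate of multilingual messages, can be sketched with a crude script-based detector: a message is flagged as mixed when it contains both Arabic-script and Latin-script letters. Distinguishing Algerian Arabic from Modern Standard Arabic, or French from English, would require wordlists and is beyond this illustration; the toy corpus is invented.

```python
# Sketch: estimating the share of multilingual (script-mixed) messages.
# A message counts as mixed if it has both Arabic and Latin letters.

def is_mixed_script(message):
    has_arabic = any('\u0600' <= ch <= '\u06FF' for ch in message)
    has_latin = any(ch.isascii() and ch.isalpha() for ch in message)
    return has_arabic and has_latin

corpus = ["صحا خويا merci beaucoup",   # Algerian Arabic + French
          "bonjour tout le monde",     # French only
          "السلام عليكم"]               # Arabic only
rate = sum(map(is_mixed_script, corpus)) / len(corpus)
print(f"{rate:.0%} multilingual messages")
```

Such a detector under-counts mixing written entirely in one script (e.g. French loanwords transliterated into Arabic script), so it gives a conservative lower bound on the frequency rate.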
Procedia PDF Downloads 131
1295 A Study on the Application of Machine Learning and Deep Learning Techniques for Skin Cancer Detection
Authors: Hritwik Ghosh, Irfan Sadiq Rahat, Sachi Nandan Mohanty, J. V. R. Ravindra
Abstract:
In the rapidly evolving landscape of medical diagnostics, the early detection and accurate classification of skin cancer remain paramount for effective treatment outcomes. This research delves into the transformative potential of Artificial Intelligence (AI), specifically Deep Learning (DL), as a tool for discerning and categorizing various skin conditions. Utilizing a diverse dataset of 3,000 images representing nine distinct skin conditions, we confront the inherent challenge of class imbalance. This imbalance, where conditions like melanomas are over-represented, is addressed by incorporating class weights during the model training phase, ensuring an equitable representation of all conditions in the learning process. Our pioneering approach introduces a hybrid model, amalgamating the strengths of two renowned Convolutional Neural Networks (CNNs), VGG16 and ResNet50. These networks, pre-trained on the ImageNet dataset, are adept at extracting intricate features from images. By synergizing these models, our research aims to capture a holistic set of features, thereby bolstering classification performance. Preliminary findings underscore the hybrid model's superiority over individual models, showcasing its prowess in feature extraction and classification. Moreover, the research emphasizes the significance of rigorous data pre-processing, including image resizing, color normalization, and segmentation, in ensuring data quality and model reliability. In essence, this study illuminates the promising role of AI and DL in revolutionizing skin cancer diagnostics, offering insights into its potential applications in broader medical domains.
Keywords: artificial intelligence, machine learning, deep learning, skin cancer, dermatology, convolutional neural networks, image classification, computer vision, healthcare technology, cancer detection, medical imaging
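The class-weighting remedy for imbalance mentioned above is commonly implemented as inverse-frequency weights of the kind passed to a CNN training loop (e.g. the `class_weight` argument of Keras's `Model.fit`). A minimal sketch; the class counts below are invented for illustration, not the study's nine-condition distribution.

```python
# Sketch: inverse-frequency class weights for imbalanced training data.
# weight_c = n_samples / (n_classes * n_c), the "balanced" heuristic
# also used by scikit-learn, so rare classes weigh more in the loss.

def inverse_frequency_weights(counts):
    total = sum(counts.values())
    k = len(counts)
    return {c: total / (k * n) for c, n in counts.items()}

# Hypothetical counts for three of the conditions:
counts = {"melanoma": 1200, "nevus": 600, "basal_cell": 200}
weights = inverse_frequency_weights(counts)
print(weights)   # rare basal_cell gets the largest weight
```

With these weights, a misclassified example from the rarest class contributes roughly six times as much to the loss as one from the most common class, counteracting the imbalance.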
Procedia PDF Downloads 86
1294 PsyVBot: Chatbot for Accurate Depression Diagnosis using Long Short-Term Memory and NLP
Authors: Thaveesha Dheerasekera, Dileeka Sandamali Alwis
Abstract:
The escalating prevalence of mental health issues, such as depression and suicidal ideation, is a matter of significant global concern. It is plausible that a variety of factors, such as life events, social isolation, and preexisting physiological or psychological health conditions, could instigate or exacerbate these conditions. Traditional approaches to diagnosing depression entail a considerable amount of time and necessitate the involvement of adept practitioners. This underscores the necessity for automated systems capable of promptly detecting and diagnosing symptoms of depression. The PsyVBot system employs sophisticated natural language processing and machine learning methodologies, including the use of the NLTK toolkit for dataset preprocessing and the utilization of a Long Short-Term Memory (LSTM) model. The PsyVBot exhibits a remarkable ability to diagnose depression with a 94% accuracy rate through the analysis of user input. Consequently, this resource proves to be efficacious for individuals, particularly those enrolled in academic institutions, who may encounter challenges pertaining to their psychological well-being. The PsyVBot employs a Long Short-Term Memory (LSTM) model that comprises a total of three layers, namely an embedding layer, an LSTM layer, and a dense layer. The stratification of these layers facilitates a precise examination of linguistic patterns that are associated with the condition of depression. The PsyVBot has the capability to accurately assess an individual's level of depression through the identification of linguistic and contextual cues. The task is achieved via a rigorous training regimen, which is executed by utilizing a dataset comprising information sourced from the subreddit r/SuicideWatch. The diverse data present in the dataset ensures precise and delicate identification of symptoms linked with depression, thereby guaranteeing accuracy. 
PsyVBot not only possesses diagnostic capabilities but also enhances the user experience through the utilization of audio outputs. This feature enables users to engage in more captivating and interactive interactions. The PsyVBot platform offers individuals the opportunity to conveniently diagnose mental health challenges through a confidential and user-friendly interface. Regarding the advancement of PsyVBot, maintaining user confidentiality and upholding ethical principles are of paramount significance. It is imperative to note that diligent efforts are undertaken to adhere to ethical standards, thereby safeguarding the confidentiality of user information and ensuring its security. Moreover, the chatbot fosters a conducive atmosphere that is supportive and compassionate, thereby promoting psychological welfare. In brief, PsyVBot is an automated conversational agent that utilizes an LSTM model to assess the level of depression in accordance with the input provided by the user. The demonstrated accuracy rate of 94% serves as a promising indication of the potential efficacy of employing natural language processing and machine learning techniques in tackling challenges associated with mental health. The reliability of PsyVBot is further improved by the fact that it makes use of the Reddit dataset and incorporates the Natural Language Toolkit (NLTK) for preprocessing. PsyVBot represents a pioneering and user-centric solution that furnishes an easily accessible and confidential medium for seeking assistance. The present platform is offered as a modality to tackle the pervasive issue of depression and the contemplation of suicide.
Keywords: chatbot, depression diagnosis, LSTM model, natural language processing
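The preprocessing in front of the embedding → LSTM → dense stack described above amounts to tokenizing each message, mapping tokens to integer ids, and padding to a fixed length so sequences can feed an embedding layer. A minimal sketch with no framework dependency; the vocabulary, maximum length, and toy posts are assumptions, and the real system preprocesses with NLTK on r/SuicideWatch data.

```python
# Sketch: turning raw messages into fixed-length integer sequences of
# the kind an embedding layer expects. Toy inputs only.

def build_vocab(texts):
    vocab = {"<pad>": 0, "<unk>": 1}        # reserved ids
    for text in texts:
        for token in text.lower().split():
            vocab.setdefault(token, len(vocab))
    return vocab

def encode(text, vocab, max_len):
    ids = [vocab.get(tok, vocab["<unk>"]) for tok in text.lower().split()]
    ids = ids[:max_len]                      # truncate long messages
    return ids + [vocab["<pad>"]] * (max_len - len(ids))  # pad short ones

posts = ["I feel hopeless lately", "today was a good day"]
vocab = build_vocab(posts)
x = [encode(p, vocab, max_len=6) for p in posts]
print(x)   # two padded length-6 sequences, ready for an embedding layer
```

Each row of `x` would then pass through the embedding layer, the LSTM layer (reading the ids in order, which is how the model captures the linguistic patterns mentioned above), and finally the dense output layer.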
Procedia PDF Downloads 68
1293 The Feminine Disruption of Speech and Refounding of Discourse: Kristeva’s Semiotic Chora and Psychoanalysis
Authors: Kevin Klein-Cardeña
Abstract:
For Julia Kristeva, contra Lacan, the instinctive body refuses to go away within discourse. Neither is the pre-Oedipal stage of maternal fusion vanquished by the emergence of language and, with it, the law of the father. On the contrary, Kristeva argues, the pre-symbolic ambivalently haunts the society of speech, simultaneously animating and threatening the very foundations of signification. Kristeva invents the term “the semiotic” to refer to this continual breaking-through of the material unconscious onto the scene of meaning. This presentation examines Kristeva’s semiotic as a theoretical gesture that itself is a disruption of discourse, re-presenting the ‘return of the repressed’ body in theory: the breaking-through of the unconscious onto the science of meaning. Faced with linguistic theories concerned with abstract sign-systems as well as Lacanian doctrine privileging the linguistic sign unequivocally over the bodily drive, Kristeva’s theoretical corpus issues the message of a psychic remainder that disrupts with a view toward replenishing theoretical accounts of language and sense. Reviewing the semiotic challenge across these two levels (the sense and science of language), the presentation suggests that Kristeva’s offerings constitute a coherent gestalt, providing an account of the feminist nature of her dual intervention. In contrast to other feminist critiques, Kristeva’s gesture hinges on its restoration of the maternal contribution to subjectivity. Against the backdrop of ‘phallogocentric’ and ‘necrophilic’ theories that strip language of a subject and strip the subject of a body, Kristeva recasts linguistic study through a metaphor of life and birthing. Yet the semiotic fragments the subject it produces, dialoguing with an unconscious curtailed by but also exceeding the symbolic order of signification. Linguistics, too, becomes fragmented in the same measure as it is more meaningfully renewed by its confrontation with the semiotic body.
It is Kristeva’s own body that issues this challenge, on both sides of the boundary between the theory and the theorized. The semiotic becomes comprehensible as a project unified by its concern to disrupt and rehabilitate language, the subject, and the scholarly discourses that treat them.
Keywords: Julia Kristeva, the semiotic, French feminism, psychoanalytic theory, linguistics
Procedia PDF Downloads 74
1292 IT-Based Global Healthcare Delivery System: An Alternative Global Healthcare Delivery System
Authors: Arvind Aggarwal
Abstract:
We have developed a comprehensive global healthcare delivery system based on information technology. It has a medical consultation system in which a virtual consultant can give medical consultation to patients and doctors at a digital medical centre after reviewing the patient’s EMR file, consisting of the patient’s history and investigations in voice, image and data formats. The system has a surgical operation system too, where a remote robotic consultant can conduct surgery at a robotic surgical centre. Instant speech and text translation is incorporated in the software, so that the patient’s speech and text (language) can be translated into the consultant’s language and vice versa. A consultant of any specialty (surgeon or physician) based in any country can provide instant healthcare consultation to any patient in any country without loss of time. Robotic surgeons based in a tertiary care hospital in any country can perform remote robotic surgery through patient-friendly telemedicine and tele-surgical centres. The patient EMR, financial data and the data of all the consultants and robotic surgeons are stored in the cloud. It is a complete, comprehensive business model with a healthcare medical and surgical delivery system. The whole system is self-financing and can be implemented in any country. The entire system uses paperless, filmless techniques. This eliminates the use of consumables, thereby substantially reducing the cost they incur. The consultants receive virtual patients, in the form of EMRs, so the consultant saves the time and expense of travelling to the hospital to see patients. The consultant gets an electronic file ready for reporting and diagnosis. Since the time otherwise spent on physical examination of the patient is saved, the consultant can spend quality time studying the EMR/virtual patient and give instant advice.
The time consumed per patient is reduced and therefore can see more number of patients, the cost of the consultation per patients is therefore reduced. The additional productivity of the consultants can be channelized to serve rural patients devoid of doctors.Keywords: e-health, telemedicine, telecare, IT-based healthcare
Procedia PDF Downloads 179
1291 EU Innovative Economic Priorities, Contemporary Problems and Challenges of Its Formation
Authors: Gechbaia Badri
Abstract:
The paper argues that, in today's world of economic globalization, the development of innovative economic integration is one of the pressing issues of the day. The article analyzes trends in the development of the innovation economy in the EU, presents the main problems and results of the formation of the innovation economy, and examines the development of the economy's innovative potential. The author contends that the innovative development of the European economic space in recent years has been shaped by the financial and economic crisis.
Keywords: European Union, innovative system, innovative development, innovations
Procedia PDF Downloads 306
1290 Responsive Integrative Therapeutic Method: Paradigm for Addressing Core Deficits in Autism by Balkibekova
Authors: Balkibekova Venera Serikpaevna
Abstract:
Background: Autism Spectrum Disorder (ASD) poses significant challenges in both diagnosis and treatment. Existing therapeutic interventions often target specific symptoms, necessitating the exploration of alternative approaches. This study investigates RITM (Rhythm Integration Tapping Music), developed by Balkibekova, which aims to foster imitation, social engagement, and a wide range of emotions through brain development. Methods: A randomized controlled trial was conducted with 100 participants diagnosed with ASD, aged 1 to 4 years. Participants were randomly assigned to either the RITM therapy group or a control group receiving standard care. RITM therapy is rooted in tapping rhythms to music: a march on drums, a waltz on bells, a lullaby on a musical triangle, dancing with a tambourine, and a polka on wooden spoons. Therapy sessions were conducted over a 3-year period, with assessments at baseline, midpoint, and post-intervention. Results: Preliminary analyses reveal promising outcomes in the RITM therapy group. Compared with the control group, participants demonstrated significant improvements in social interactions, speech understanding, emergence of speech, and adaptive behaviors. Careful examination of subgroup analyses provides insights into the differential effectiveness of the RITM approach across various ASD profiles. Conclusions: The findings suggest that RITM therapy, as developed by Balkibekova, holds promise as an intervention for ASD. The integrative nature of the approach, addressing multiple domains simultaneously, may contribute to its efficacy. Further research is warranted to validate these preliminary results and explore the long-term impact of RITM therapy on individuals with ASD.
This abstract presents a snapshot of the research, emphasizing the significance, methodology, key findings, and implications of the RITM therapy method for consideration at an autism conference.
Keywords: RITM therapy, tapping rhythm, autism, mirror neurons, bright emotions, social interactions, communications
Procedia PDF Downloads 64
1289 A Systematic Review of the Psychometric Properties of Augmentative and Alternative Communication Assessment Tools in Adolescents with Complex Communication Needs
Authors: Nadwah Onwi, Puspa Maniam, Azmawanie A. Aziz, Fairus Mukhtar, Nor Azrita Mohamed Zin, Nurul Haslina Mohd Zin, Nurul Fatehah Ismail, Mohamad Safwan Yusoff, Susilidianamanalu Abd Rahman, Siti Munirah Harris, Maryam Aizuddin
Abstract:
Objective: Malaysia has a growing number of individuals with complex communication needs (CCN). Initiating augmentative and alternative communication (AAC) intervention may help individuals with CCN understand and express themselves optimally and participate actively in their daily lives. AAC is defined as the multimodal use of communication, allowing individuals to use every mode possible (symbols, aids, techniques, and strategies) to communicate with others. It is consequently critical to evaluate the relevant deficits to inform AAC intervention. However, no measurement tools for evaluating users with CCN are available locally. Design: A systematic review (SR) was designed to analyze the psychometric properties of AAC assessments for adolescents with CCN published in peer-reviewed journals. Tools were rated on the methodological quality of the studies and the psychometric measurement qualities of each tool. Method: A literature search identified AAC assessment tools with psychometrically robust properties and a conceptual framework. Two independent reviewers screened the abstracts and full-text articles and reviewed bibliographies for further references. Data were extracted using standardized forms, and study risk of bias was assessed. Result: The review highlights the psychometric properties of AAC assessment tools that speech-language therapists can apply in the Malaysian context. The work outlines how systematic review methods may be applied to published material that provides valuable data for initiating the development of Malay-language AAC assessment tools.
Conclusion: The synthesis of evidence provides a framework for Malaysian speech-language therapists in making informed decisions about AAC intervention within the standard operating procedures of the Ministry of Health, Malaysia.
Keywords: augmentative and alternative communication, assessment, adolescents, complex communication needs
Procedia PDF Downloads 151
1288 Carl Wernicke and the Origin of Neurolinguistics in Breslau: A Case Study in the Domain of the History of Linguistics
Authors: Aneta Daniel
Abstract:
The subject of the study is the exploration of the origins and dynamics of the development of the language studies that came to be labelled neurolinguistics. The origins of neurolinguistics are to be found in research conducted by German scientists before the Second World War at the Universität Breslau (in present-day Wroclaw). The dominant figure in these studies was Professor Carl Wernicke, whose students continued and creatively developed their master's projects in this area. Carl Wernicke, a German physician, anatomist, psychiatrist, and neuropathologist, is primarily known for his influential research on aphasia. His research, like that of Professor Paul Broca, led to breakthroughs in the localization of brain functions, particularly speech. Years later, the theses of these pioneers of cognitive neurology were developed further by other neurolinguists. The main objective of the investigation is the reconstruction of the group of scientists, the students of Carl Wernicke, who contributed to the development of neurolinguistics. These scholars were mainly neurologists and psychiatrists who worked in a branch of science that had not yet been named neurolinguistics. Their profiles will be analysed and presented as members of the group of researchers who contributed to breakthroughs in psychology and neuroscience. The research material consists of archival records documenting the research of Carl Wernicke and the researchers from Breslau (present-day Wroclaw), today one of the fastest-growing cities in Europe. In 1870, when Carl Wernicke received his medical doctorate, Breslau was full of cultural events: festivals and circus shows were held in the city center. Today we can revisit these events thanks to the 'Breslauer Zeitung' (1870), which precisely describes what took place on particular days.
It is worth noting that these were also the beginnings of antisemitism in Breslau. Many theses and articles that have survived in libraries in Wroclaw and around the world contribute to the development of neuroscience. The history of research on the brain and speech, including the history of psychology and neuroscience, the areas from which neurolinguistics is derived, will be presented.
Keywords: aphasia, brain injury, Carl Wernicke, language, neurolinguistics
Procedia PDF Downloads 393
1287 Evaluation of Video Quality Metrics and Performance Comparison on Contents Taken from Most Commonly Used Devices
Authors: Pratik Dhabal Deo, Manoj P.
Abstract:
With the increasing number of social media users, the amount of available video content has also increased significantly. The number of smartphone users is currently at its peak, and many increasingly use their smartphones as their main photography and recording devices. There have been many developments in the field of Video Quality Assessment (VQA), and metrics such as VMAF and SSIM are said to be among the best performing, but these metrics are predominantly evaluated on professionally produced video content shot with professional tools and lighting. No study has specifically examined the performance of these metrics on content captured by ordinary users on commonly available devices. Datasets containing huge numbers of videos from different high-end devices make it difficult to analyze metric performance on content from the most-used devices, even when they include footage taken in poor lighting with lower-end hardware; such devices suffer many distortions from various factors, since the spectrum of content recorded on them is huge. In this paper, we present an analysis of objective VQA metrics on content taken only from the most-used devices, focusing on full-reference metrics. To carry out this research, we created a custom dataset of 90 videos taken with the three most commonly used device types: an Android smartphone, an iOS smartphone, and a DSLR. To the videos from each device, the six most common types of distortion that users face were applied, in addition to the existing H.264 compression, based on four reference videos. Each of the six distortions has three levels of degradation. The five most popular VQA metrics were evaluated on this dataset, and the highest and lowest values of each metric under each distortion were recorded.
We found that blur is the artifact on which most of the metrics performed worst. To better understand the results, the amount of blur in the dataset was calculated, and an additional evaluation was run using the HEVC codec, the successor to H.264, on the camera that proved sharpest among the devices. The results show that as resolution increases, the metrics tend to become more accurate. The best-performing metric is VQM, with very few inconsistencies and inaccurate results when the compression applied is H.264; when the compression applied is HEVC, SSIM and VMAF performed significantly better.
Keywords: distortion, metrics, performance, resolution, video quality assessment
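As a hedged illustration of the kind of full-reference comparison these metrics perform, here is a minimal numpy sketch of PSNR, a simpler full-reference measure than SSIM or VMAF; the function name, toy frames, and thresholds are ours, not from the paper:

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a reference frame and a distorted one.

    Both arrays must have the same shape; higher values mean less distortion,
    and identical frames give infinity (zero mean squared error).
    """
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10((max_value ** 2) / mse)

# Toy example: a flat gray "frame" and a slightly noisier copy of it.
rng = np.random.default_rng(0)
frame = np.full((32, 32), 128, dtype=np.uint8)
noisy = np.clip(frame.astype(np.int16) + rng.integers(-5, 6, frame.shape), 0, 255).astype(np.uint8)

print(psnr(frame, frame))        # inf: no distortion at all
print(psnr(frame, noisy) > 30.0)
```

A real evaluation like the paper's would run such a metric per frame over each reference/distorted video pair and average the scores.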
Procedia PDF Downloads 203
1286 Sociology of Vis and Ramin
Authors: Farzane Yusef Ghanbari
Abstract:
A sociological analysis of the ancient poem Vis and Ramin reveals important points about the political, cultural, and social conditions of ancient Iranian history. The reciprocal relationship between the work and the structure of society aids its understanding and interpretation. Therefore, informed by Goldmann's genetic structuralism and drawing on social epistemology, this study attempts to explain the role of spells in shaping the social knowledge of ancient people. The results suggest that owing to the lack of a central government, secularism in politics, and freedom of speech and opinion, romantic stories such as Vis and Ramin, with a focal female character, could emerge.
Keywords: Persian literature, Vis and Ramin, sociology, developmental structuralism
Procedia PDF Downloads 430
1285 Second Language Perception of Japanese /Cju/ and /Cjo/ Sequences by Mandarin-Speaking Learners of Japanese
Authors: Yili Liu, Honghao Ren, Mariko Kondo
Abstract:
In the field of second language (L2) speech learning, it is well known that a learner's first language (L1) phonetic and phonological characteristics transfer into L2 production and perception, leading to a foreign accent. For L1 Mandarin learners of Japanese, confusion of /u/ and /o/ in /CjV/ sequences has frequently been observed in their utterances. L1 transfer is considered the cause of this issue; however, other factors influencing the identification of /Cju/ and /Cjo/ sequences are still under investigation. This study investigates the perception of Japanese /Cju/ and /Cjo/ units by L1 Mandarin learners of Japanese. It further examines whether learners' proficiency, syllable position, the phonetic features of the preceding consonants, and background noise affect learners' performance in perception. Fifty-two Mandarin-speaking learners of Japanese and nine native Japanese speakers were recruited for an identification task. Learners were divided into beginner, intermediate, and advanced levels according to their Japanese proficiency. The average correct rate was used to evaluate learners' perceptual performance, and the correct rates of the learner groups were compared with the control group to examine learners' nativelikeness. Results showed that background noise tends to have an adverse effect on distinguishing /u/ and /o/ in /CjV/ sequences. Secondly, Japanese proficiency had no influence on learners' perceptual performance either in quiet or in background noise. No learners reached a native-like level even without the distraction of noise; beginner-level learners performed less native-like, although higher-level learners appeared to achieve nativelikeness in multi-talker babble noise. Finally, syllable position tends to affect the distinction of /Cju/ and /Cjo/ only under the noisy condition.
The phonetic features of the preceding consonants did not affect learners' perception in any listening condition. The findings give insight into Japanese vowel acquisition by L1 Mandarin learners of Japanese. In addition, this study indicates that L1 transfer is not the only explanation for the confusion of /u/ and /o/ in /CjV/ sequences; factors such as listening condition and syllable position also need to be considered in future research. It also suggests the pedagogical importance of perceiving speech in noisy environments, which are closer to actual conversation and deserve more attention in teaching.
Keywords: background noise, Chinese learners of Japanese, /Cju/ and /Cjo/ sequences, second language perception
Procedia PDF Downloads 159
1284 Communicative Strategies in Colombian Political Speech: On the Example of the Speeches of Francia Marquez
Authors: Danila Arbuzov
Abstract:
In this article, the author examines the communicative strategies used in Colombian political discourse, using the example of the speeches of the Vice President of Colombia, Francia Marquez, who took office in 2022 and marked a new vector of development for the Colombian nation. The lexical and syntactic means of achieving communicative objectives are analyzed. The material presented may be useful to those interested in investigating various aspects of discursive linguistics, particularly political discourse, as well as the implementation of communicative strategies in certain types of discourse.
Keywords: political discourse, communication strategies, Colombian political discourse, Colombia, manipulation
Procedia PDF Downloads 113
1283 Classroom Discourse and English Language Teaching: Issues, Importance, and Implications
Authors: Rabi Abdullahi Danjuma, Fatima Binta Attahir
Abstract:
Classroom discourse is important, and it is worth examining what the phenomenon is and how it helps both the teacher and the students in a classroom situation. This paper looks at the classroom as a traditional social setting with its own norms and values. The paper defines discourse as extended communication, in speech or writing, that often deals interactively with particular topics, and discusses classroom discourse as the language that teachers and students use to communicate with each other in the classroom. It also examines some strategies for effective classroom discourse. Finally, implications and recommendations are drawn.
Keywords: classroom, discourse, learning, student, strategies, communication
Procedia PDF Downloads 607
1282 Reading and Teaching Poetry as Communicative Discourse: A Pragma-Linguistic Approach
Authors: Omnia Elkommos
Abstract:
Language is communication on several discourse levels. The target of teaching a language and the literature of a foreign language is to communicate a message. Reading, appreciating, analysing, and interpreting poetry as a sophisticated rhetorical expression of human thoughts, emotions, and philosophical messages is more feasible through the use of pragma-linguistic tools from a communicative discourse perspective. The poet's intention, speech act, illocutionary act, and perlocutionary goal can be better understood when theories of communicative situational context and linguistic discourse structure are employed. The use of linguistic theories in the teaching of poetry is therefore intrinsic to students' comprehension, interpretation, and appreciation of poetry of different ages. The purpose of this study is to show how teachers and students alike can apply these linguistic theories and tools to dramatic poetic texts for an engaging, enlightening, and effective interpretation and appreciation of the language. Theories drawn from pragmatics, discourse analysis, embedded discourse levels, communicative situational context, and other linguistic approaches were applied to poetry texts selected from different centuries. Further, a simple statistical count shows that, across different anthologies, poems with dialogic dramatic discourse embedding two or three levels of discourse outnumber descriptive poems with a single level of discourse between the poet and the reader. Poetry is thus discourse on one, two, or three levels. It is therefore recommended that teachers and students in the area of ESL/EFL use linguistic theories for a better understanding of poetry as communicative discourse. Applying these theories in classrooms and in research will allow them to perceive the language in its linguistic, social, and cultural aspects.
Texts then become live illocutionary acts with a perlocutionary goal rather than mere literary texts in anthologies.
Keywords: coda, commissives, communicative situation, context of culture, context of reference, context of utterance, dialogue, directives, discourse analysis, dramatic discourse interaction, duologue, embedded discourse levels, language for communication, linguistic structures, literary texts, poetry, pragmatic theories, reader response, speech acts (macro/micro), stylistics, teaching literature, TEFL, terms of address, turn-taking
Procedia PDF Downloads 328
1281 Emotion Recognition in Video and Images in the Wild
Authors: Faizan Tariq, Moayid Ali Zaidi
Abstract:
Facial emotion recognition algorithms are expanding rapidly nowadays, and researchers are combining different algorithms to generate the best results. Six basic emotions are commonly studied in this area. The authors attempt to recognize facial expressions using object detection algorithms instead of traditional approaches. Two object detection algorithms were chosen: Faster R-CNN and YOLO. For pre-processing, image rotation and batch normalization were used. The dataset chosen for the experiments is Static Facial Expressions in the Wild (SFEW). The approach worked well, but there is still considerable room for improvement, which will be a future direction.
Keywords: face recognition, emotion recognition, deep learning, CNN
Procedia PDF Downloads 187
1280 Study and Analysis of the Factors Affecting Road Safety Using Decision Tree Algorithms
Authors: Naina Mahajan, Bikram Pal Kaur
Abstract:
The purpose of traffic accident analysis is to find the possible causes of an accident. Road accidents cannot be totally prevented, but suitable traffic engineering and management can reduce the accident rate to a certain extent. This paper discusses the classification techniques C4.5 and ID3 using the WEKA data mining tool, applied to the NH (National Highway) dataset. The C4.5 and ID3 techniques give good results, with high accuracy, low computation time, and a low error rate.
Keywords: C4.5, ID3, NH (National Highway), WEKA data mining tool
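As a hedged sketch of the core computation behind ID3 and C4.5, the following Python fragment computes entropy and information gain for choosing a split attribute. The toy accident records and attribute names are invented for illustration, not taken from the NH dataset:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total) for n in Counter(labels).values())

def information_gain(rows, attribute, target):
    """Entropy reduction obtained by splitting `rows` (list of dicts) on `attribute`."""
    base = entropy([r[target] for r in rows])
    remainder = 0.0
    for value in {r[attribute] for r in rows}:
        subset = [r[target] for r in rows if r[attribute] == value]
        remainder += len(subset) / len(rows) * entropy(subset)
    return base - remainder

# Invented toy records: does road lighting or weather better predict severity?
rows = [
    {"lighting": "dark", "weather": "rain",  "severe": "yes"},
    {"lighting": "dark", "weather": "clear", "severe": "yes"},
    {"lighting": "lit",  "weather": "rain",  "severe": "no"},
    {"lighting": "lit",  "weather": "clear", "severe": "no"},
]
print(information_gain(rows, "lighting", "severe"))  # 1.0: perfectly predictive
print(information_gain(rows, "weather", "severe"))   # 0.0: uninformative
```

ID3 greedily picks the attribute with the highest gain at each node; C4.5 refines this with gain ratio, continuous attributes, and pruning.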
Procedia PDF Downloads 338
1279 Input and Interaction as Training for Cognitive Learning: Variation Sets Influence the Sudden Acquisition of Periphrastic estar 'to be' + verb + -ndo
Authors: Mary Rosa Espinosa-Ochoa
Abstract:
Some constructions appear suddenly in children's speech and are productive from the beginning. These constructions are supported by others, previously acquired, with which they share semantic and pragmatic features. Thus, for example, the acquisition of the passive voice in German is supported by other constructions with which it shares the lexical verb sein ('to be'). The same occurs in Spanish in the acquisition of the progressive aspectual periphrasis estar ('to be') + verb root + -ndo (present participle), which is supported by locative constructions acquired earlier with the same verb. The periphrasis shares with the locative constructions not only the lexical verb estar but also pragmatic relations: both can be used to answer the question ¿Dónde está? ('Where is he/she/it?'), whether with Está aquí ('He/she/it is here') or Se está bañando ('He/she/it is taking a bath'). This study is a corpus-based analysis of two children (1;08-2;08) and the input directed to them. It proposes that the pragmatic and semantic support from previously acquired constructions comes from the input, during interaction with others. This hypothesis is based on the analysis of constructions with estar, whose use to express temporal change (which differentiates it from its counterpart ser, 'to be') occurs in variation sets, similar to those described by Küntay and Slobin (2002), that allow the child to perceive the change of place experienced by the nouns functioning as its grammatical subject. For example, at different points during a bath, the mother says: El jabón está aquí ('The soap is here') at the beginning of the bath; five minutes later, once the soap has moved, el jabón está ahí ('the soap is there'); and when the soap moves again, el jabón está abajo de ti ('the soap is under you'). 'The soap' is the grammatical subject of all of these utterances.
The Spanish verb + -ndo encodes the progressive phase aspect of a dynamic state that generates a token reading, and it combines with the verb estar to encode this aspect. It is proposed here that the phases experienced in interaction with the adult, in events related to the verb estar, allow the child to arrive at this dynamic, token reading of the verb + -ndo. In this way, children begin to produce the periphrasis suddenly and productively, even though neither the periphrasis nor the verb + -ndo itself is frequent in adult speech.
Keywords: child language acquisition, input, variation sets, Spanish language
Procedia PDF Downloads 149
1278 Analysis of Citation Rate and Data Reuse for Openly Accessible Biodiversity Datasets on Global Biodiversity Information Facility
Authors: Nushrat Khan, Mike Thelwall, Kayvan Kousha
Abstract:
Making research data openly accessible has been mandated by most funders over the last five years, as it promotes reproducibility in science and reduces duplicated effort to collect the same data. There is evidence that articles that publicly share research data have higher citation rates in the biological and social sciences. However, how and whether shared data is being reused is not always apparent, as such information is not easily accessible from the majority of research data repositories. This study aims to understand the practice of data citation and how data is reused over the years, focusing on biodiversity, a field in which research data is frequently reused. Metadata for 38,878 datasets, including citation counts, were collected through the Global Biodiversity Information Facility (GBIF) API. GBIF was used as a data source because it provides citation counts for datasets, a feature not commonly available in most repositories. Analysis of dataset types, citation counts, and dataset creation and update times suggests that citation rates vary by dataset type: occurrence datasets, which carry more granular information, have higher citation rates than checklist and metadata-only datasets. Another finding is that biodiversity datasets on GBIF are frequently updated, which is unique to this field. The majority of datasets from the earliest year, 2007, were updated after 11 years, and no dataset remained un-updated since creation. For each year between 2007 and 2017, we compared the correlations between update time and citation rate for four different types of datasets. While recent datasets show no correlation, datasets three to four years old show a weak correlation, with more recently updated datasets receiving more citations. The results suggest that it takes several years for research datasets to accumulate citations.
However, this investigation found that, when the same datasets are searched on Google Scholar or Scopus, the number of citations often differs from GBIF's count. Hence, a future aim is to further explore the citation-counting system adopted by GBIF, to evaluate its reliability and whether it can be applied to other fields of study as well.
Keywords: data citation, data reuse, research data sharing, webometrics
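As a hedged sketch of how metadata for tens of thousands of datasets might be pulled through a paged API like GBIF's, here is a small offset-based pagination helper. The response shape (`results`, `endOfRecords`) follows our reading of the public GBIF registry API, and the fake fetcher stands in for the real HTTP call so the paging logic can be shown self-contained:

```python
def iter_records(fetch_page, page_size=300):
    """Yield records from a paged API such as GBIF's /v1/dataset endpoint.

    `fetch_page(offset, limit)` must return a dict shaped like a GBIF response:
    {"results": [...], "endOfRecords": bool}. This shape is our assumption
    from the public GBIF API, not something described in the paper.
    """
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        yield from page["results"]
        if page.get("endOfRecords", True):
            return
        offset += page_size

# Offline stand-in for the HTTP call, so the paging logic can be exercised:
def fake_fetch(offset, limit):
    data = [{"key": i} for i in range(7)]  # pretend 7 datasets exist
    chunk = data[offset:offset + limit]
    return {"results": chunk, "endOfRecords": offset + limit >= len(data)}

keys = [rec["key"] for rec in iter_records(fake_fetch, page_size=3)]
print(keys)  # [0, 1, 2, 3, 4, 5, 6]
```

In a real crawl, `fetch_page` would issue the HTTP GET and each record would carry fields such as dataset type and citation count for the analysis described above.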
Procedia PDF Downloads 178
1277 Statistical Models and Time Series Forecasting on Crime Data in Nepal
Authors: Dila Ram Bhandari
Abstract:
Throughout the 20th century, new governments were created in which identities such as ethnicity, religion, language, caste, community, and tribe played a part in the development of constitutions and of legal systems of victim and criminal justice. South Asian nations have recently been plagued by acute problems of extremism, poverty, environmental degradation, cybercrime, human rights violations, and crimes against, and victimization of, both individuals and groups. Massive numbers of crimes occur every day, and these frequent crimes have made the lives of ordinary citizens restless. Crime is one of the major threats to society and to civilization, and a point of contention that can create societal disturbance. Old-style crime-solving practices cannot live up to the requirements of the current crime situation. Crime analysis is one of the most important activities of most intelligence and law enforcement organizations all over the world. Unlike Central Asia or the Asia-Pacific region, South Asia lacks a regional coordination mechanism to facilitate criminal intelligence sharing and operational coordination against organized crime, including illicit drug trafficking and money laundering. There have been numerous discussions in recent years about using data mining technology to combat crime and terrorism; for example, the Data Detective program from the software company Sentient uses data mining techniques to support the police (Sentient, 2017). The goals of this work are to test several predictive modelling solutions and choose the most effective and promising one. First, extensive literature reviews on data mining, crime analysis, and crime data mining were conducted. Sentient provided a seven-year archive of crime statistics, which was aggregated daily to produce a univariate dataset; a daily incidence-type aggregation was also performed to produce a multivariate dataset. Each solution's forecast period lasted seven days.
The experiments were split into two main groups: statistical models and neural network models. For the crime data, neural networks fared better than statistical models. This study gives a general review of the applied statistical and neural network models; a comparative analysis of all the models on a comparable dataset provides a detailed picture of each model's performance on the available data and of its generalizability. The experiments demonstrated that, in comparison with the other models, Gated Recurrent Units (GRU) produced the best predictions. The crime records for 2005-2019 were collected from Nepal Police headquarters and analysed in R. In conclusion, a gated recurrent unit implementation could help police predict crime; hence, time series analysis using GRUs could be a prospective additional feature in Data Detective.
Keywords: time series analysis, forecasting, ARIMA, machine learning
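As a hedged illustration of the recurrent unit the study favours, here is a minimal numpy sketch of a single GRU cell step; the weight names, sizes, and the toy "seven days of features" sequence are generic assumptions, not the paper's actual models:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, params):
    """One GRU time step: gates squash to (0, 1), so the state stays in (-1, 1)."""
    Wz, Uz, Wr, Ur, Wh, Uh = (params[k] for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh"))
    z = sigmoid(x @ Wz + h_prev @ Uz)               # update gate
    r = sigmoid(x @ Wr + h_prev @ Ur)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h_prev) @ Uh)   # candidate state
    return (1.0 - z) * h_prev + z * h_tilde         # blend old state and candidate

rng = np.random.default_rng(42)
n_in, n_hidden = 4, 8
params = {k: rng.normal(scale=0.1, size=(n_in if k[0] == "W" else n_hidden, n_hidden))
          for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh")}

# Run a short toy "daily crime counts" sequence through the cell.
h = np.zeros(n_hidden)
for x in rng.normal(size=(7, n_in)):  # seven days, four features per day
    h = gru_step(x, h, params)
print(h.shape)                   # (8,)
print(np.all(np.abs(h) < 1.0))   # state bounded by the gate/tanh structure
```

A forecasting model would stack such cells over the sequence and map the final state to the next seven days of predicted counts; here the weights are random rather than trained.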
Procedia PDF Downloads 164
1276 Dysphagia Tele Assessment Challenges Faced by Speech and Swallow Pathologists in India: Questionnaire Study
Authors: B. S. Premalatha, Mereen Rose Babu, Vaishali Prabhu
Abstract:
Background: Dysphagia must be assessed, either subjectively or objectively, in order to properly address the swallowing difficulty. Providing therapeutic care to patients with dysphagia via tele mode was one approach to delivering clinical services during the COVID-19 epidemic; as a result, tele-assessment of dysphagia has increased in India. Aim: This study aimed to identify the challenges faced by Indian SLPs while providing tele-assessment to individuals with dysphagia during the COVID-19 outbreak of 2020 to 2021. Method: The study was carried out after approval from the institute's institutional review board and ethics committee. It was cross-sectional in nature, lasted from 2020 to 2021, and enrolled participants who met its inclusion and exclusion criteria. Based on sample size calculations, roughly 246 people were to be recruited. The research was done in stages: questionnaire development, content validation, and questionnaire administration. Five speech and hearing professionals checked the questionnaire content for faults and clarity. Participants received the questionnaire, written in Microsoft Word and then converted to Google Forms, via e-mail, WhatsApp, and other social media platforms. SPSS software was used to examine the data. Results: The study's findings were examined in light of the obstacles that Indian SLPs encounter. Only 135 people responded. During the COVID-19 lockdowns, 38% of participants said they did not deal with dysphagia patients. After the lockdown, 70.4% of SLPs kept working with dysphagia patients, while 29.6% did not. The main problems in completing tele-evaluation of dysphagia appear from the oromotor examination onwards.
Around 37.5% of SLPs said they do not undertake the OPME online because of difficulties in doing the evaluation, such as the need for repeated instructions to patients and family members and trouble visualizing structures in various positions. The majority of SLPs found online assessments inefficient and time-consuming, and a larger percentage stated that they would not recommend tele-evaluation of dysphagia to their colleagues. SLPs' use of dysphagia assessment has decreased as a result of the epidemic. As to the amount of food, the majority suggested a small quantity. Apart from positioning the patient for assessment and obtaining limited cooperation from the family, most SLPs found internet speed a concern and a barrier. Hearing impairment and the presence of a tracheostomy proved the most difficult conditions to assess online in patients with dysphagia. For patients on NPO, the majority of SLPs did not advise tele-evaluation. Oral food residue was more visible in the anterior region of the oral cavity, and the majority of SLPs reported more anterior than posterior leakage. While most SLPs could detect aspiration by coughing, many found it difficult to discern a gurgling quality of voice after swallowing. Conclusion: The current study sheds light on the difficulties Indian SLPs experience when assessing dysphagia via tele mode, indicating that tele-assessment of dysphagia has yet to gain acceptance in India.
Keywords: dysphagia, tele-assessment, challenges, Indian SLPs
Procedia PDF Downloads 136
1275 Physics Informed Deep Residual Networks Based Type-A Aortic Dissection Prediction
Abstract:
Purpose: Acute Type A aortic dissection carries an extremely high mortality rate. A highly accurate and cost-effective non-invasive predictor is critically needed so that patients can be treated at an earlier stage. Although various CFD approaches have been tried to establish prediction frameworks, they are sensitive to uncertainty in both image segmentation and boundary conditions. Requirements for tedious pre-processing and demanding calibration procedures further compound the issue, hampering their clinical applicability. Using the latest physics-informed deep learning methods to establish an accurate and cost-effective predictor framework is among the main goals for better Type A aortic dissection treatment. Methods: By training a novel physics-informed deep residual network with non-invasive 4D MRI displacement vectors as inputs, the trained model can cost-effectively calculate the key biomarkers: aortic blood pressure, wall shear stress (WSS), and oscillatory shear index (OSI), which are used to predict potential Type A aortic dissection and avoid high-mortality events down the road. Results: The proposed deep learning method has been successfully trained and tested on both a synthetic 3D aneurysm dataset and a clinical dataset in the aortic dissection context, using the Google Colab environment. In both cases, the model generated aortic blood pressure, WSS, and OSI results matching the patients' expected health status. Conclusion: The proposed novel physics-informed deep residual network shows great potential for creating a cost-effective, non-invasive predictor framework. An additional physics-based de-noising algorithm will be added to make the model more robust to clinical data noise.
Further studies will be conducted in collaboration with large institutions such as Cleveland Clinic, with more clinical samples, to further improve the model's clinical applicability.
Keywords: type-a aortic dissection, deep residual networks, blood flow modeling, data-driven modeling, non-invasive diagnostics, deep learning, artificial intelligence
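The biomarkers named in the abstract, WSS and OSI, are standard hemodynamic quantities. As an illustration of how the oscillatory shear index is defined (not the authors' code, and with toy input values chosen purely for demonstration), a minimal NumPy sketch computing OSI from time-resolved WSS vectors might look like this:

```python
import numpy as np

def oscillatory_shear_index(wss: np.ndarray, dt: float) -> np.ndarray:
    """Compute the oscillatory shear index (OSI) per wall point.

    wss: array of shape (T, N, 3) -- time-resolved wall shear stress
         vectors at N wall points over T time steps.
    dt:  time-step size; it cancels in the ratio but is kept for clarity.

    OSI = 0.5 * (1 - |integral of tau dt| / integral of |tau| dt),
    ranging from 0 (unidirectional shear) to 0.5 (purely oscillatory).
    """
    # Magnitude of the time-averaged WSS vector at each point.
    mag_of_mean = np.linalg.norm(np.sum(wss * dt, axis=0), axis=-1)
    # Time average of the WSS magnitude at each point.
    mean_of_mag = np.sum(np.linalg.norm(wss, axis=-1) * dt, axis=0)
    # Guard against zero shear to avoid division by zero.
    ratio = np.divide(mag_of_mean, mean_of_mag,
                      out=np.zeros_like(mean_of_mag),
                      where=mean_of_mag > 0)
    return 0.5 * (1.0 - ratio)

# Two toy wall points: one with steady unidirectional shear, one whose
# shear fully reverses every time step.
wss = np.zeros((4, 2, 3))
wss[:, 0, 0] = 1.0                       # steady +x shear -> OSI 0
wss[:, 1, 0] = [1.0, -1.0, 1.0, -1.0]    # alternating shear -> OSI 0.5
osi = oscillatory_shear_index(wss, dt=0.01)
print(osi)  # -> [0.  0.5]
```

In a framework like the one described, a trained network would predict the WSS vectors, and a post-processing step of this kind would reduce them to the scalar OSI risk marker.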
Procedia PDF Downloads 89