Search results for: semantic dementia
487 Deep Vision: A Robust Dominant Colour Extraction Framework for T-Shirts Based on Semantic Segmentation
Authors: Kishore Kumar R., Kaustav Sengupta, Shalini Sood Sehgal, Poornima Santhanam
Abstract:
Fashion is a human expression that is constantly changing. One of the prime factors that consistently influences fashion is the change in colour preferences. The role of colour in our everyday lives is very significant. It subconsciously explains a lot about one’s mindset and mood. Analyzing the colours extracted from outfit images is a critical step in examining individual and consumer behaviour. Several research works have been carried out on extracting colours from images, but to the best of our knowledge, no studies extract colours for a specific apparel item and identify colour patterns geographically. This paper proposes a framework for accurately extracting colours from T-shirt images and predicting dominant colours geographically. The proposed method consists of two stages: first, a U-Net deep learning model is adopted to segment the T-shirts from the images. Second, the colours are extracted only from the T-shirt segments. The proposed method employs the iMaterialist (Fashion) 2019 dataset for the semantic segmentation task. The proposed framework also includes a mechanism for gathering data and analyzing India’s general colour preferences. From this research, it was observed that black and grey are the dominant colours in different regions of India. The proposed method can be adapted to study fashion’s evolving colour preferences.
Keywords: colour analysis in t-shirts, convolutional neural network, encoder-decoder, k-means clustering, semantic segmentation, U-Net model
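To make the second stage of the pipeline concrete, the following is a minimal sketch (not the authors' implementation) of extracting dominant colours with k-means from only the masked T-shirt pixels; the binary mask, the synthetic input, and the choice of k = 5 are assumptions for illustration.

```python
# Minimal sketch of the colour-extraction stage: given an RGB image and a binary
# T-shirt mask (e.g., produced by a U-Net), cluster only the masked pixels with
# k-means and report the dominant colours and their shares.
import numpy as np
from sklearn.cluster import KMeans

def dominant_colours(image_rgb: np.ndarray, mask: np.ndarray, k: int = 5):
    """image_rgb: (H, W, 3) uint8 array; mask: (H, W) boolean array (True = T-shirt)."""
    pixels = image_rgb[mask].reshape(-1, 3).astype(np.float32)   # only T-shirt pixels
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=k)
    order = np.argsort(counts)[::-1]                             # most frequent cluster first
    return km.cluster_centers_[order].astype(np.uint8), counts[order] / counts.sum()

# Example with synthetic data (a real pipeline would load an image and a U-Net mask):
img = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
msk = np.zeros((64, 64), dtype=bool)
msk[16:48, 16:48] = True
colours, shares = dominant_colours(img, msk)
print(colours[0], f"{shares[0]:.2%}")   # most dominant colour and its share
```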
Procedia PDF Downloads 111
486 Automatic Multi-Label Image Annotation System Guided by Firefly Algorithm and Bayesian Method
Authors: Saad M. Darwish, Mohamed A. El-Iskandarani, Guitar M. Shawkat
Abstract:
Nowadays, the amount of available multimedia data is continuously on the rise. Finding a required image is a challenging task for an ordinary user. Content based image retrieval (CBIR) computes relevance based on the visual similarity of low-level image features such as color, textures, etc. However, there is a gap between low-level visual features and the semantic meanings required by applications. The typical method of bridging the semantic gap is automatic image annotation (AIA), which extracts semantic features using machine learning techniques. In this paper, a multi-label image annotation system guided by the Firefly algorithm and a Bayesian method is proposed. Firstly, images are segmented using maximum intra-cluster variance and the Firefly algorithm, a swarm-based approach with high convergence speed and low computational cost that searches for the optimal multiple thresholds. Feature extraction techniques based on color features and region properties are applied to obtain the representative features. After that, the images are annotated using a translation model based on the Net Bayes system, which is efficient for multi-label learning with high precision and low complexity. Experiments are performed using the Corel database. The results show that the proposed system is better than traditional ones for automatic image annotation and retrieval.
Keywords: feature extraction, feature selection, image annotation, classification
Procedia PDF Downloads 586
485 The Predictive Utility of Subjective Cognitive Decline Using Item Level Data from the Everyday Cognition (ECog) Scales
Authors: J. Fox, J. Randhawa, M. Chan, L. Campbell, A. Weakely, D. J. Harvey, S. Tomaszewski Farias
Abstract:
Early identification of individuals at risk for conversion to dementia provides an opportunity for preventative treatment. Many older adults (30-60%) report specific subjective cognitive decline (SCD); however, previous research is inconsistent in terms of what types of complaints predict future cognitive decline. The purpose of this study is to identify which specific complaints from the Everyday Cognition (ECog) scales, a measure of self-reported concerns about everyday abilities across six cognitive domains, are associated with: 1) conversion from a clinical diagnosis of normal to either MCI or dementia (categorical variable) and 2) progressive cognitive decline in memory and executive function (continuous variables). 415 cognitively normal older adults were monitored annually for an average of 5 years. Cox proportional hazards models were used to assess associations between self-reported ECog items and progression to impairment (MCI or dementia). A total of 114 individuals progressed to impairment; the mean time to progression was 4.9 years (SD=3.4 years, range=0.8-13.8). Follow-up models were run controlling for depression. A subset of individuals (n=352) underwent repeat cognitive assessments for an average of 5.3 years. For those individuals, mixed effects models with random intercepts and slopes were used to assess associations between ECog items and change in neuropsychological measures of episodic memory or executive function. Prior to controlling for depression, subjective concerns on five of the eight Everyday Memory items, three of the nine Everyday Language items, one of the seven Everyday Visuospatial items, two of the five Everyday Planning items, and one of the six Everyday Organization items were associated with subsequent diagnostic conversion (HR=1.25 to 1.59, p=0.003 to 0.03). However, after controlling for depression, only two specific complaints (remembering appointments, meetings, and engagements, and understanding spoken directions and instructions) were associated with subsequent diagnostic conversion. Episodic memory in individuals reporting no concern on ECog items did not significantly change over time (p>0.4). More complaints on seven of the eight Everyday Memory items, three of the nine Everyday Language items, and three of the seven Everyday Visuospatial items were associated with a decline in episodic memory (Interaction estimate=-0.055 to 0.001, p=0.003 to 0.04). Executive function in those reporting no concern on ECog items declined slightly (p<0.001 to 0.06). More complaints on three of the eight Everyday Memory items and three of the nine Everyday Language items were associated with a decline in executive function (Interaction estimate=-0.021 to -0.012, p=0.002 to 0.04). These findings suggest that specific complaints across several cognitive domains are associated with diagnostic conversion. Specific complaints in the domains of Everyday Memory and Language are associated with a decline in both episodic memory and executive function. Increased monitoring and treatment of individuals with these specific SCD may be warranted.
Keywords: Alzheimer’s disease, dementia, memory complaints, mild cognitive impairment, risk factors, subjective cognitive decline
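As an illustration of the survival analysis described above, the sketch below fits a Cox proportional hazards model relating a single ECog item to time-to-progression while adjusting for depression, using the lifelines library; the column names and the simulated data are hypothetical, not the study data.

```python
# Illustrative Cox proportional hazards sketch: one self-reported ECog item as a
# predictor of time to progression (MCI or dementia), adjusted for depression.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 415
df = pd.DataFrame({
    "years_followed": rng.uniform(0.8, 13.8, n),            # follow-up time in years
    "progressed": rng.integers(0, 2, n),                    # 1 = converted to MCI/dementia
    "ecog_remember_appointments": rng.integers(1, 5, n),    # hypothetical ECog item rating
    "depression_score": rng.normal(5, 2, n),                # covariate controlled for
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followed", event_col="progressed")
cph.print_summary()   # hazard ratios (HR) per covariate, as reported in the abstract
```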
Procedia PDF Downloads 80
484 Graph-Based Semantical Extractive Text Analysis
Authors: Mina Samizadeh
Abstract:
In the past few decades, there has been an explosion in the amount of available data produced from various sources on different topics. The availability of this enormous amount of data necessitates effective computational tools to explore it. This has led to an intense and growing interest in the research community in developing computational methods for processing text data. One line of study focuses on condensing the text so that we can reach a higher level of understanding in a shorter time. The two important tasks for doing this are keyword extraction and text summarization. In keyword extraction, we are interested in finding the key words of a text, which makes us familiar with its general topic. In text summarization, we are interested in producing a short text which includes the important information in the document. The TextRank algorithm, an unsupervised learning method that extends the PageRank algorithm (the basis of the Google search engine for ranking pages), has shown its efficacy in large-scale text mining, especially for text summarization and keyword extraction. This algorithm can automatically extract the important parts of a text (keywords or sentences) and return them as a result. However, it neglects the semantic similarity between the different parts. In this work, we improved the results of the TextRank algorithm by incorporating the semantic similarity between parts of the text. Aside from keyword extraction and text summarization, we develop a topic clustering algorithm based on our framework, which can be used individually or as part of generating the summary to overcome coverage problems.
Keywords: keyword extraction, n-gram extraction, text summarization, topic clustering, semantic analysis
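The graph-based ranking at the core of this approach can be illustrated with a short TextRank-style sketch: sentences become nodes, pairwise similarities become edge weights, and PageRank scores rank the sentences. TF-IDF cosine similarity stands in here for the semantic similarity used in the paper; an embedding-based similarity could be swapped in.

```python
# TextRank-style extractive summarization: PageRank over a sentence-similarity graph.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def textrank_summary(sentences, top_n=2):
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)                 # dense pairwise similarity matrix
    graph = nx.from_numpy_array(sim)               # weighted sentence graph
    scores = nx.pagerank(graph, weight="weight")   # TextRank = PageRank on this graph
    ranked = sorted(scores, key=scores.get, reverse=True)[:top_n]
    return [sentences[i] for i in sorted(ranked)]  # keep the original sentence order

doc = [
    "Keyword extraction finds the most important words in a text.",
    "Text summarization produces a short text covering the key information.",
    "TextRank applies PageRank to a graph built from the text.",
    "Semantic similarity between parts of the text can improve the ranking.",
]
print(textrank_summary(doc))
```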
Procedia PDF Downloads 70
483 Ontological Modeling Approach for Statistical Databases Publication in Linked Open Data
Authors: Bourama Mane, Ibrahima Fall, Mamadou Samba Camara, Alassane Bah
Abstract:
At the level of the National Statistical Institutes, there is a large volume of data which is generally in a format that conditions how the information it contains is published. Each household or business data collection project includes its own dissemination platform. Thus, the dissemination methods previously used do not promote rapid access to information and, in particular, do not offer the option of linking data for in-depth processing. In this paper, we present an approach to modeling these data in order to publish them in a format intended for the Semantic Web. Our objective is to publish all these data on a single platform and offer the option of linking them with other external data sources. The approach will be applied to data from major national surveys, such as those on employment, poverty, and child labor, and to the general census of the population of Senegal.
Keywords: Semantic Web, linked open data, database, statistics
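A small sketch of the kind of ontological modelling involved is shown below: one hypothetical survey observation is expressed as RDF triples with rdflib and serialized as Turtle for publication as Linked Open Data. The namespace, properties, and values are illustrative assumptions, not the actual survey schema.

```python
# Representing one survey observation as RDF triples ready for a Linked Data platform.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

STAT = Namespace("http://example.org/statistics/")   # assumed namespace
g = Graph()
g.bind("stat", STAT)

obs = STAT["observation/employment-2023-dakar"]      # hypothetical observation URI
g.add((obs, RDF.type, STAT.Observation))
g.add((obs, STAT.indicator, Literal("employment rate")))
g.add((obs, STAT.region, Literal("Dakar")))
g.add((obs, STAT.value, Literal(42.5, datatype=XSD.decimal)))

print(g.serialize(format="turtle"))                   # Turtle output for publication
```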
Procedia PDF Downloads 174
482 'Caucasian Mountaineer / Scottish Highlander': Correlation between Semantics and Culture
Authors: Natalia M. Nepomniashchikh
Abstract:
The research focuses on the Russian and English linguoculturemes Caucasian mountaineer and Scottish Highlander, on which a comparative-contrastive analysis was carried out. To reach this aim, the dictionary definitions of the concepts under consideration were analysed, which made it possible to build the lexical-semantic fields of both lexical items in Russian and English. This stage of the research paved the way for the construction of the linguistic-cultural fields. To build these fields, literary passages containing the concepts under consideration and the items directly related to them were taken from the works of M. Yu. Lermontov about the Caucasus mountains and the mountaineers living there, and from those of W. Scott devoted to the Scottish Highlands and their inhabitants. All collected data were systematized in schemes and tables reflecting the differences and the areas of overlap.
Keywords: lexemes, lexical items, lexical-semantic field, linguistic-cultural field, linguoculturemes
Procedia PDF Downloads 231
481 N400 Investigation of Semantic Priming Effect to Symbolic Pictures in Text
Authors: Thomas Ousterhout
Abstract:
The purpose of this study was to investigate whether incorporating meaningful pictures of gestures and facial expressions in short sentences of text could supplement the text with enough semantic information to produce an N400 effect when probe words incongruent to the picture were subsequently presented. Event-related potentials (ERPs) were recorded from a 14-channel commercial-grade EEG headset while subjects performed congruent/incongruent reaction time discrimination tasks. Since pictures of meaningful gestures have been shown to be semantically processed in the brain in a similar manner to words, it is believed that pictures will add supplementary information to text just as the inclusion of their equivalent synonymous word would. The hypothesis is that when subjects read the text/picture mixed sentences, they will process the images and words just as in face-to-face communication, and therefore probe words incongruent to the image will produce an N400.
Keywords: EEG, ERP, N400, semantics, congruency, facilitation, Emotiv
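For readers unfamiliar with the ERP computation implied here, the following minimal sketch averages epochs per condition and compares mean amplitude in an assumed 300-500 ms N400 window; the sampling rate and the synthetic single-channel data are placeholders, not the study's recordings.

```python
# ERP averaging sketch: compare congruent vs. incongruent epochs in an N400 window.
import numpy as np

fs = 128                                    # Hz, typical for consumer EEG headsets (assumed)
t = np.arange(-0.2, 0.8, 1 / fs)            # epoch from -200 ms to 800 ms around the probe
n400_window = (t >= 0.3) & (t <= 0.5)

rng = np.random.default_rng(0)
congruent = rng.normal(0.0, 1.0, (40, t.size))     # 40 epochs x samples, one channel
incongruent = rng.normal(0.0, 1.0, (40, t.size))
incongruent[:, n400_window] -= 2.0                 # simulate a more negative N400 deflection

erp_con = congruent.mean(axis=0)                   # per-condition ERP = epoch average
erp_inc = incongruent.mean(axis=0)
effect = erp_inc[n400_window].mean() - erp_con[n400_window].mean()
print(f"N400 effect (incongruent - congruent): {effect:.2f} uV")   # negative = N400
```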
Procedia PDF Downloads 258
480 A Supervised Face Parts Labeling Framework
Authors: Khalil Khan, Ikram Syed, Muhammad Ehsan Mazhar, Iran Uddin, Nasir Ahmad
Abstract:
Face parts labeling is the process of assigning a class label to each face part. A face parts labeling (FPL) method, which divides a given image into its constituent parts, is proposed in this paper. A database, FaceD, consisting of 564 images is labeled by hand and made publicly available. A supervised learning model is built through the extraction of features from the training data. The testing phase is performed with two semantic segmentation methods, i.e., pixel-based and super-pixel-based segmentation. In pixel-based segmentation, a class label is assigned to each pixel individually. In the super-pixel-based method, a class label is assigned to each super-pixel only; as a result, the same class label is given to all pixels inside a super-pixel. The pixel labeling accuracy reported with the pixel-based and super-pixel-based methods is 97.68% and 93.45%, respectively.
Keywords: face labeling, semantic segmentation, classification, face segmentation
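The two evaluation settings can be illustrated with a short sketch: per-pixel accuracy, and super-pixel labeling in which every pixel inherits its super-pixel's majority predicted class. The label maps and the regular 8x8 super-pixels below are synthetic stand-ins for the FaceD annotations.

```python
# Pixel-based vs. super-pixel-based labeling accuracy on synthetic label maps.
import numpy as np

def pixel_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    return float((pred == gt).mean())

def superpixel_labels(pred: np.ndarray, segments: np.ndarray) -> np.ndarray:
    """Assign each super-pixel its majority predicted class."""
    out = np.empty_like(pred)
    for s in np.unique(segments):
        m = segments == s
        out[m] = np.bincount(pred[m]).argmax()
    return out

rng = np.random.default_rng(0)
gt = rng.integers(0, 6, (64, 64))                  # 6 hypothetical face-part classes
pred = np.where(rng.random((64, 64)) < 0.9, gt, rng.integers(0, 6, (64, 64)))
segments = (np.arange(64)[:, None] // 8) * 8 + (np.arange(64)[None, :] // 8)   # 8x8 blocks

print("pixel-based accuracy:      ", pixel_accuracy(pred, gt))
print("super-pixel-based accuracy:", pixel_accuracy(superpixel_labels(pred, segments), gt))
```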
Procedia PDF Downloads 255
479 Linguistic Insights Improve Semantic Technology in Medical Research and Patient Self-Management Contexts
Authors: William Michael Short
Abstract:
‘Semantic Web’ technologies such as the Unified Medical Language System Metathesaurus, SNOMED-CT, and MeSH have been touted as transformational for the way users access online medical and health information, enabling both the automated analysis of natural-language data and the integration of heterogeneous health-related resources distributed across the Internet through the use of standardized terminologies that capture concepts and relationships between concepts that are expressed differently across datasets. However, the approaches that have so far characterized ‘semantic bioinformatics’ have not yet fulfilled the promise of the Semantic Web for medical and health information retrieval applications. This paper argues within the perspective of cognitive linguistics and cognitive anthropology that four features of human meaning-making must be taken into account before the potential of semantic technologies can be realized for this domain. First, many semantic technologies operate exclusively at the level of the word. However, texts convey meanings in ways beyond lexical semantics. For example, transitivity patterns (distributions of active or passive voice) and modality patterns (configurations of modal constituents like may, might, could, would, should) convey experiential and epistemic meanings that are not captured by single words. Language users also naturally associate stretches of text with discrete meanings, so that whole sentences can be ascribed senses similar to the senses of words (so-called ‘discourse topics’). Second, natural language processing systems tend to operate according to the principle of ‘one token, one tag’. For instance, occurrences of the word sound must be disambiguated for part of speech: in context, is sound a noun or a verb or an adjective? In syntactic analysis, deterministic annotation methods may be acceptable. But because natural language utterances are typically characterized by polyvalency and ambiguities of all kinds (including intentional ambiguities), such methods leave the meanings of texts highly impoverished. Third, ontologies tend to be disconnected from everyday language use and so struggle in cases where single concepts are captured through complex lexicalizations that involve profile shifts or other embodied representations. More problematically, concept graphs tend to capture ‘expert’ technical models rather than ‘folk’ models of knowledge and so may not match users’ common-sense intuitions about the organization of concepts in prototypical structures rather than Aristotelian categories. Fourth, and finally, most ontologies do not recognize the pervasively figurative character of human language. However, since the time of Galen the widespread use of metaphor in the linguistic usage of both medical professionals and lay persons has been recognized. In particular, metaphor is a well-documented linguistic tool for communicating experiences of pain. Because semantic medical knowledge-bases are designed to help capture variations within technical vocabularies – rather than the kinds of conventionalized figurative semantics that practitioners as well as patients actually utilize in clinical description and diagnosis – they fail to capture this dimension of linguistic usage.
The failure of semantic technologies in these respects degrades the efficiency and efficacy not only of medical research, where information retrieval inefficiencies can lead to direct financial costs to organizations, but also of care provision, especially in contexts of patients’ self-management of complex medical conditions.
Keywords: ambiguity, bioinformatics, language, meaning, metaphor, ontology, semantic web, semantics
Procedia PDF Downloads 132
478 Exploring Pre-Trained Automatic Speech Recognition Model HuBERT for Early Alzheimer’s Disease and Mild Cognitive Impairment Detection in Speech
Authors: Monica Gonzalez Machorro
Abstract:
Dementia is hard to diagnose because of the lack of early physical symptoms. Early dementia recognition is key to improving the living conditions of patients. Speech technology is considered a valuable biomarker for this challenge. Recent works have utilized conventional acoustic features and machine learning methods to detect dementia in speech. BERT-like classifiers have reported the most promising performance. One constraint, nonetheless, is that these studies are either based on human transcripts or on transcripts produced by automatic speech recognition (ASR) systems. The contribution of this research is to explore a method that does not require transcriptions to detect early Alzheimer’s disease (AD) and mild cognitive impairment (MCI). This is achieved by fine-tuning a pre-trained ASR model for the downstream early AD and MCI tasks. To do so, a subset of the thoroughly studied Pitt Corpus is customized. The subset is balanced for class, age, and gender. Data processing also involves cropping the samples into 10-second segments. For comparison purposes, a baseline model is defined by training and testing a Random Forest with 20 acoustic features extracted using the librosa library in Python. These are: zero-crossing rate, MFCCs, spectral bandwidth, spectral centroid, root mean square, and short-time Fourier transform. The baseline model achieved a 58% accuracy. To fine-tune HuBERT as a classifier, an average pooling strategy is employed to merge the 3D representations of the audio into 2D representations, and a linear layer is added. The pre-trained model used is ‘hubert-large-ls960-ft’. Empirically, the number of epochs selected is 5, and the batch size is 1. Experiments show that our proposed method reaches a 69% balanced accuracy. This suggests that the linguistic and speech information encoded in the self-supervised ASR-based model makes it possible to learn acoustic cues of AD and MCI.
Keywords: automatic speech recognition, early Alzheimer’s recognition, mild cognitive impairment, speech impairment
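A minimal sketch of this fine-tuning setup is given below: a pre-trained HuBERT encoder, average pooling over the time dimension of its hidden states, and a linear classification layer. The two-class head and the dummy 10-second segment are assumptions for illustration; in practice the matching feature extractor would also normalize the 16 kHz audio.

```python
# HuBERT + mean pooling + linear head: sketch of an audio-only AD/MCI classifier.
import torch
import torch.nn as nn
from transformers import HubertModel

class HubertClassifier(nn.Module):
    def __init__(self, checkpoint="facebook/hubert-large-ls960-ft", num_classes=2):
        super().__init__()
        self.encoder = HubertModel.from_pretrained(checkpoint)
        self.head = nn.Linear(self.encoder.config.hidden_size, num_classes)

    def forward(self, waveform):                              # (batch, samples) at 16 kHz
        hidden = self.encoder(waveform).last_hidden_state     # (batch, frames, hidden): 3D
        pooled = hidden.mean(dim=1)                           # average pooling -> 2D
        return self.head(pooled)                              # class logits

model = HubertClassifier()
audio = torch.randn(1, 16000 * 10)                            # one 10-second segment
print(model(audio).shape)                                     # torch.Size([1, 2])
```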
Procedia PDF Downloads 127
477 Text Similarity in Vector Space Models: A Comparative Study
Authors: Omid Shahmirzadi, Adam Lugowski, Kenneth Younge
Abstract:
Automatic measurement of semantic text similarity is an important task in natural language processing. In this paper, we evaluate the performance of different vector space models to perform this task. We address the real-world problem of modeling patent-to-patent similarity and compare TFIDF (and related extensions), topic models (e.g., latent semantic indexing), and neural models (e.g., paragraph vectors). Contrary to expectations, the added computational cost of text embedding methods is justified only when: 1) the target text is condensed; and 2) the similarity comparison is trivial. Otherwise, TFIDF performs surprisingly well: in particular, for longer and more technical texts or for making finer-grained distinctions between nearest neighbors. Unexpectedly, extensions to the TFIDF method, such as adding noun phrases or calculating term weights incrementally, were not helpful in our context.
Keywords: big data, patent, text embedding, text similarity, vector space model
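The TFIDF baseline evaluated here can be sketched in a few lines: vectorize the documents and rank nearest neighbours by cosine similarity. The toy "patent" texts below are placeholders.

```python
# TFIDF vectors + cosine similarity as a simple patent-to-patent similarity baseline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

patents = [
    "A battery electrode comprising a lithium compound and a carbon coating.",
    "An electrochemical cell with a coated lithium-based electrode.",
    "A method for training a neural network on image data.",
]
tfidf = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")   # unigrams + bigrams
vectors = tfidf.fit_transform(patents)

sims = cosine_similarity(vectors[0], vectors).ravel()   # similarity of patent 0 to all
ranking = sims.argsort()[::-1][1:]                      # nearest neighbours, self excluded
print([(i, round(float(sims[i]), 3)) for i in ranking])
```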
Procedia PDF Downloads 175
476 3D-Vehicle Associated Research Fields for Smart City via Semantic Search Approach
Authors: Haluk Eren, Mucahit Karaduman
Abstract:
This paper presents 15-year trends in scientific studies in a scholarly database for the words 3D and vehicle. The two words are selected to find their associated publications in the IEEE scholar database. Both keywords are entered individually for the years 2002, 2012, and 2016 to identify the subjects preferred by researchers in those years. After searching and listing, we classified the most closely related research fields. Three years (2002, 2012, and 2016) were investigated to track progress over specified time intervals: the first interval, 2002-2012, is taken as the period of initial progress, and the second, 2012-2016, as the period of fast development. We found interesting and useful results for understanding scholars’ research field preferences over the decade. This information will be highly valuable for smart city research involving 3D and vehicle-related issues.
Keywords: vehicle, three-dimensional, smart city, scholarly search, semantic
Procedia PDF Downloads 328
475 Real-Time Episodic Memory Construction for Optimal Action Selection in Cognitive Robotics
Authors: Deon de Jager, Yahya Zweiri, Dimitrios Makris
Abstract:
The three most important components in the cognitive architecture for cognitive robotics are memory representation, memory recall, and action selection performed by the executive. In this paper, action selection, performed by the executive, is defined as a memory quantification and optimization process. The methodology describes the real-time construction of episodic memory through semantic memory optimization. The optimization is performed by set-based particle swarm optimization, using an adaptive entropy memory quantification approach for fitness evaluation. The performance of the approach is experimentally evaluated by simulation, where a UAV is tasked with the collection and delivery of a medical package. The experiments show that the UAV dynamically uses the episodic memory to autonomously control its velocity, while successfully completing its mission.
Keywords: cognitive robotics, semantic memory, episodic memory, maximum entropy principle, particle swarm optimization
Procedia PDF Downloads 156
474 On the Framework of Contemporary Intelligent Mathematics Underpinning Intelligent Science, Autonomous AI, and Cognitive Computers
Authors: Yingxu Wang, Jianhua Lu, Jun Peng, Jiawei Zhang
Abstract:
The fundamental demand in contemporary intelligent science towards Autonomous AI (AI*) is the creation of unprecedented formal means of Intelligent Mathematics (IM). It is discovered that natural intelligence is inductively created rather than exhaustively trained. Therefore, IM is a family of algebraic and denotational mathematics encompassing Inference Algebra, Real-Time Process Algebra, Concept Algebra, Semantic Algebra, Visual Frame Algebra, etc., developed in our labs. IM plays indispensable roles in training-free AI* theories and systems beyond traditional empirical data-driven technologies. A set of applications of IM-driven AI* systems will be demonstrated in contemporary intelligence science, AI*, and cognitive computers.
Keywords: intelligence mathematics, foundations of intelligent science, autonomous AI, cognitive computers, inference algebra, real-time process algebra, concept algebra, semantic algebra, applications
Procedia PDF Downloads 61
473 Experiences of Social Participation among Community Elderly with Mild Cognitive Impairment: A Qualitative Research
Abstract:
Mild cognitive impairment (MCI) is a clinical stage that occurs between normal aging and dementia. Although MCI increases the risk of developing dementia, individuals with MCI may maintain stable cognitive function and even recover to a typical cognitive state. An intervention to prevent or delay the progression to dementia in individuals with MCI may involve promoting social engagement. Social participation is engagement in socially relevant exchanges and meaningful activities. Older adults with MCI may encounter restricted cognitive abilities, mood changes, and behavioral difficulties during social participation, influencing their willingness to engage. Therefore, this study aims to employ qualitative research methods to gain an in-depth comprehension of the authentic social participation experiences of older adults with mild cognitive impairment, which will establish a foundation for designing appropriate intervention programs. A phenomenological study was conducted. The study participants were selected using the purposive sampling method in combination with the maximum differentiation sampling strategy. Face-to-face semi-structured interviews were conducted with 12 elderly individuals suffering from mild cognitive impairment in a community in Zhengzhou City from May to July 2023. Colaizzi's seven-step method was used to analyze the data and extract the themes. The real experience of social participation in older adults with mild cognitive impairment can be summarized into three themes: (1) a single social relationship but a strong desire to participate, (2) a dual experience of social participation with both positive and negative aspects, and (3) multiple barriers to social participation, including impaired memory capacity, heavy family responsibilities, and lack of infrastructure. The study found that elderly individuals with mild cognitive impairment who have only a single social relationship nonetheless display a strong desire to engage in society. To improve social participation levels and reduce cognitive function decline, healthcare providers should work with relevant government agencies and the community to create a comprehensive social participation system. It is important for healthcare providers to note the social participation status of the elderly with mild cognitive impairment.
Keywords: mild cognitive impairment, the elderly, social participation, qualitative research
Procedia PDF Downloads 92
472 Code Embedding for Software Vulnerability Discovery Based on Semantic Information
Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson
Abstract:
Deep learning methods have been seeing increasing application to the long-standing security research goal of automatic vulnerability detection for source code. Attention, however, must still be paid to the task of producing vector representations for source code (code embeddings) as input for these deep learning models. Graphical representations of code, most predominantly Abstract Syntax Trees and Code Property Graphs, have received some use in this task of late; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning the input to only vulnerability-relevant information; however, little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained in the nodes of the graph. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification model. It uses information from the nodes as well as the structure of the code graph in order to select the features which are most indicative of the presence or absence of vulnerabilities. This model is implemented and experimentally tested using the SARD Juliet vulnerability test suite to determine its efficacy. It is able to improve on existing code graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.
Keywords: code representation, deep learning, source code semantics, vulnerability discovery
Procedia PDF Downloads 158
471 Embedded Visual Perception for Autonomous Agricultural Machines Using Lightweight Convolutional Neural Networks
Authors: René A. Sørensen, Søren Skovsen, Peter Christiansen, Henrik Karstoft
Abstract:
Autonomous agricultural machines act in stochastic surroundings and must therefore be able to perceive their surroundings in real time. This perception can be achieved using image sensors combined with advanced machine learning, in particular Deep Learning. Deep convolutional neural networks excel at labeling and perceiving color images, and since the cost of high-quality RGB cameras is low, the hardware cost of good perception depends heavily on memory and computational power. This paper investigates the possibility of designing lightweight convolutional neural networks for semantic segmentation (pixel-wise classification) with reduced hardware requirements, to allow for embedded usage in autonomous agricultural machines. Using compression techniques, a lightweight convolutional neural network is designed to perform real-time semantic segmentation on an embedded platform. The network is trained on two large datasets, ImageNet and Pascal Context, to recognize up to 400 individual classes. The 400 classes are remapped into agricultural superclasses (e.g., human, animal, sky, road, field, shelterbelt, and obstacle) and the ability to provide accurate real-time perception of agricultural surroundings is studied. The network is applied to the case of autonomous grass mowing using the NVIDIA Tegra X1 embedded platform. Feeding case-specific images to the network results in a fully segmented map of the superclasses in the image. As the network is still being designed and optimized, only a qualitative analysis of the method is complete at the abstract submission deadline. Following this deadline, the finalized design will be quantitatively evaluated on 20 annotated grass-mowing images. Lightweight convolutional neural networks for semantic segmentation can be implemented on an embedded platform and show competitive performance with regard to accuracy and speed. It is feasible to provide cost-efficient perceptive capabilities related to semantic segmentation for autonomous agricultural machines.
Keywords: autonomous agricultural machines, deep learning, safety, visual perception
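As an illustration of what such a lightweight network might look like, the sketch below uses depthwise-separable convolutions in a small encoder followed by a simple upsampling decoder; the eight agricultural superclasses and the layer widths are assumptions, not the authors' finalized design.

```python
# Tiny segmentation network sketch with depthwise-separable convolutions.
import torch
import torch.nn as nn

def sep_conv(in_ch, out_ch, stride=1):
    """Depthwise-separable conv block: a cheap alternative to a full 3x3 convolution."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False),   # depthwise
        nn.Conv2d(in_ch, out_ch, 1, bias=False),                           # pointwise
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class TinySegNet(nn.Module):
    def __init__(self, num_classes=8):
        super().__init__()
        self.encoder = nn.Sequential(
            sep_conv(3, 32, stride=2),     # 1/2 resolution
            sep_conv(32, 64, stride=2),    # 1/4
            sep_conv(64, 128, stride=2),   # 1/8
        )
        self.classifier = nn.Conv2d(128, num_classes, 1)
        self.upsample = nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False)

    def forward(self, x):
        return self.upsample(self.classifier(self.encoder(x)))   # per-pixel class scores

net = TinySegNet()
out = net(torch.randn(1, 3, 256, 512))    # e.g. one camera frame from the mower
print(out.shape)                          # torch.Size([1, 8, 256, 512])
```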
Procedia PDF Downloads 396
470 Reconstruction of Visual Stimuli Using Stable Diffusion with Text Conditioning
Authors: ShyamKrishna Kirithivasan, Shreyas Battula, Aditi Soori, Richa Ramesh, Ramamoorthy Srinath
Abstract:
The human brain, among the most complex and mysterious aspects of the body, harbors vast potential for extensive exploration. Unraveling these enigmas, especially within neural perception and cognition, delves into the realm of neural decoding. This work harnesses advancements in generative AI, particularly in visual computing, to elucidate how the brain comprehends visual stimuli observed by humans. The paper endeavors to reconstruct human-perceived visual stimuli using Functional Magnetic Resonance Imaging (fMRI). This fMRI data is then processed through pre-trained deep-learning models to recreate the stimuli. A new architecture named LatentNeuroNet is introduced, with the aim of achieving the utmost semantic fidelity in stimulus reconstruction. The approach employs a Latent Diffusion Model (LDM), Stable Diffusion v1.5, emphasizing semantic accuracy and generating superior quality outputs. This addresses the limitations of prior methods, such as GANs, which are known for poor semantic performance and inherent instability. Text conditioning within the LDM's denoising process is handled by extracting text from the brain's ventral visual cortex region. This extracted text is processed through a Bootstrapping Language-Image Pre-training (BLIP) encoder before being injected into the denoising process. In conclusion, an architecture is developed that successfully reconstructs the perceived visual stimuli, and this research provides evidence for identifying the regions of the brain most influential in cognition and perception.
Keywords: BLIP, fMRI, latent diffusion model, neural perception
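The final, text-conditioned generation step can be illustrated with the diffusers implementation of Stable Diffusion v1.5, as sketched below; the prompt is a hypothetical caption standing in for text decoded from ventral-visual-cortex fMRI activity, and the fMRI-to-text mapping itself is not shown. Running it requires the model weights and a GPU.

```python
# Text-conditioned Stable Diffusion v1.5 generation: the last stage of the pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

decoded_caption = "a person walking a dog in a park"   # placeholder for fMRI-decoded text
image = pipe(decoded_caption, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("reconstructed_stimulus.png")
```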
Procedia PDF Downloads 68
469 Diagnostic Contribution of the MMSE-2:EV in the Detection and Monitoring of the Cognitive Impairment: Case Studies
Authors: Cornelia-Eugenia Munteanu
Abstract:
The goal of this paper is to present the diagnostic contribution that the screening instrument Mini-Mental State Examination-2: Expanded Version (MMSE-2:EV) brings to detecting cognitive impairment and to monitoring the progression of degenerative disorders. The diagnostic significance is underlined by the interpretation of MMSE-2:EV scores resulting from administering the test to patients with mild and major neurocognitive disorders. The original MMSE is one of the most widely used screening tools for detecting cognitive impairment, not only in clinical settings but also in the field of neurocognitive research. Practitioners and researchers are now turning their attention to the MMSE-2. To enhance its clinical utility, the new instrument was enriched and reorganized into three versions (MMSE-2:BV, MMSE-2:SV and MMSE-2:EV), each with two forms: blue and red. The MMSE-2 has been adapted and used successfully in Romania since 2013. The cases were selected from current practice in order to cover a broad and significant range of neurocognitive pathology: mild cognitive impairment, Alzheimer’s disease, vascular dementia, mixed dementia, Parkinson’s disease, and conversion of mild cognitive impairment into Alzheimer’s disease. The MMSE-2:EV version was used: it was applied one month after the initial assessment, three months after the first reevaluation, and then every six months, alternating the blue and red forms. Correlated with age and educational level, the raw scores were converted into T scores and then, using the mean and the standard deviation, the z scores were calculated. The differences in raw scores between evaluations were analyzed for statistical significance in order to establish the progression of the disease over time. The results indicated that the psycho-diagnostic approach to evaluating cognitive impairment with the MMSE-2:EV is safe and that the application interval is optimal. The alternation of the forms prevents the learning phenomenon. Diagnostic accuracy and efficient therapeutic conduct derive from the use of the national test norms. In clinical settings with a large flow of patients, the MMSE-2:EV is a safe and fast psycho-diagnostic solution. Clinicians can make objective decisions, and for patients the test does not take much time or energy, does not bother them, and does not force them to travel frequently.
Keywords: MMSE-2, dementia, cognitive impairment, neuropsychology
Procedia PDF Downloads 514
468 Training for Search and Rescue Teams: Online Training for SAR Teams to Locate Lost Persons with Dementia Using Drones
Authors: Dalia Hanna, Alexander Ferworn
Abstract:
This research proposes detailed training modules for public safety teams and, specifically, SAR teams responsible for search and rescue operations aimed at finding lost persons with dementia. Finding a lost person alive is the goal of this training. Time matters if a lost person is to be found alive. Finding lost people living with dementia is quite challenging, as they are unaware they are lost and will not seek help. Even a small contribution to SAR operations could contribute to saving a life. SAR operations will always require expert professionals and human volunteers. However, we can reduce their time, save lives, and reduce costs by providing practical training that is based on real-life scenarios. The content of the proposed training is based on the research work done by the researcher in this area. This research has demonstrated that, by utilizing drones, the algorithmic approach could support a successful search outcome. Understanding the behavior of the lost person, learning where they may be found, predicting their survivability, and automating the search are all contributions of this work, founded in theory and demonstrated in practice. In crisis management, human behavior constitutes a vital aspect of responding to the crisis; the speed and efficiency of the response are often affected by the difficulty of the operational context. Therefore, training in this area plays a significant role in preparing the crisis manager to handle the emotional aspects that shape decision-making in these critical situations. Since it is crucial to develop high-level strategic judgment and the ability to apply crisis management procedures, simulation exercises become central to training crisis managers to gain the skills needed to respond critically to these events. The training will enhance responders’ ability to make decisions and anticipate the possible consequences of their actions through flexible reasoning, so that they can respond to the crisis efficiently and quickly. As adult learners, search and rescue teams approach training by taking responsibility for the learning process, appreciating flexible learning, and contributing to the teaching and learning that happens during the training. These are all characteristics of adult learning theories: the learner self-reflects, gathers information, collaborates with others, and is self-directed. One of the learning strategies associated with adult learning is effective elaboration. It helps learners to remember information in the long term and use it in situations where it might be appropriate. It is also a strategy that can be taught easily and used with learners of different ages. Designers must create reflective activities to improve the student’s intrapersonal awareness.
Keywords: training, OER, dementia, drones, search and rescue, adult learning, UDL, instructional design
Procedia PDF Downloads 108
467 Semantic Platform for Adaptive and Collaborative e-Learning
Authors: Massra M. Sabeima, Myriam Lamolle, Mohamedade Farouk Nanne
Abstract:
Adapting the learning resources of an e-learning system to the characteristics of the learners is an important aspect to consider when designing an adaptive e-learning system. However, this adaptation is not a simple process; it requires the extraction, analysis, and modeling of user information. This implies a good representation of the user's profile, which is the backbone of the adaptation process. Moreover, during the e-learning process, collaboration with similar users (same geographic province or knowledge context) is important. Productive collaboration motivates users to continue rather than abandon the course and increases the assimilation of learning objects. The contribution of this work is the following: we propose an adaptive e-learning semantic platform that recommends learning resources to learners, using an ontology to model the user profile and the course content, together with an implementation of a multi-agent system able to progressively generate the learning graph for each user during the learning process (taking into account the user's progress and the changes that occur) and to synchronize the users who collaborate on a learning object.
Keywords: adaptive learning, collaboration, multi-agent, ontology
Procedia PDF Downloads 175
466 Syntactic, Semantic, and Pragmatic Rationalization of Modal Auxiliary Verbs in Akan
Authors: Joana Portia Sakyi
Abstract:
The uniqueness of auxiliary verbs and their contribution to grammar as constituents, which act as preverbs to supply additional grammatical or functional meanings to clauses, are well established. Functionally, they relate clauses to tense, aspect, mood, voice, emphasis, and modality, with the main verbs conveying the appropriate lexical content. There has been an issue in Akan grammar vis-à-vis the status of auxiliary verbs, in terms of whether Akan has auxiliaries or not and even which forms are to be regarded as auxiliaries. We investigate the syntactic, semantic, and pragmatic components of expressions and claim that Akan has auxiliary verbs that contribute the functional or grammatical meaning of modality, tense/aspect, etc., to the clauses in which they occur. Essentially, we use self-created corpus data to consider the affix bέ- ‘may’, ‘must’, ‘should’; the form tùmí ‘can’, ‘be able to’; mà ‘to let’, ‘to allow’, ‘to permit’, ‘to make’, or ‘to cause’ someone to do something; and the multi-word forms ὲsὲ sέ ‘must’, ‘should’ or ‘have to’ and ètwà sέ ‘must’, ‘should’ or ‘have to’, and assert that they are legitimate modal auxiliaries conveying epistemic, deontic, and dynamic modalities, as well as other meanings in the language.
Keywords: Akan, modality, modal auxiliaries, semantics
Procedia PDF Downloads 77
465 Vector-Based Analysis in Cognitive Linguistics
Authors: Chuluundorj Begz
Abstract:
This paper presents a dynamic, psycho-cognitive approach to the study of human verbal thinking on the basis of typologically different languages, such as Mongolian, English, and Russian. Topological equivalence in verbal communication serves as a basis for the universality of mental structures and, therefore, of deep structures. The mechanism of verbal thinking consists, at the deep level, of basic concepts, rules for integration and classification, and neural networks of vocabulary. In the neurocognitive study of language, the neural architecture and neuropsychological mechanisms of verbal cognition form the basis of vector-based modeling. Verbal perception and the interpretation of the infinite set of meanings and propositions in the mental continuum can be modeled by applying tensor methods. Euclidean and non-Euclidean spaces are applied to describe the human semantic vocabulary and higher-order structures.
Keywords: Euclidean spaces, isomorphism and homomorphism, mental lexicon, mental mapping, semantic memory, verbal cognition, vector space
Procedia PDF Downloads 519
464 A Chinese Nested Named Entity Recognition Model Based on Lexical Features
Abstract:
In the field of named entity recognition, most research has been conducted on simple entities. However, nested named entities, which contain entities within entities, have been difficult to identify accurately because of their boundary ambiguity. In this paper, a hierarchical recognition model is constructed based on the grammatical structure and semantic features of Chinese text, with boundary calculation based on lexical features. The analysis is carried out at different levels of granularity, semantics, and lexicality, avoiding repetitive work to reduce computational effort and using the semantic features of words to calculate entity boundaries and improve the accuracy of the recognition work. The results of experiments carried out on web-based microblogging data show that the model achieves an accuracy of 86.33% and an F1 value of 89.27% in recognizing nested named entities, making up for the shortcomings of some previous recognition models and improving the efficiency of nested named entity recognition.
Keywords: coarse-grained, nested named entity, Chinese natural language processing, word embedding, T-SNE dimensionality reduction algorithm
Procedia PDF Downloads 128
463 The Cognitive Perspective on Arabic Spatial Preposition ‘Ala
Authors: Zaqiatul Mardiah, Afdol Tharik Wastono, Abdul Muta'ali
Abstract:
In general, the Arabic preposition ‘ala encodes the sense of an UP-DOWN schema. However, the use of the preposition ‘ala can have many extended schemas that still relate to its primary sense. In this paper, we show how the framework of cognitive linguistics (CL) based on image schemas can be applied to analyze the spatial semantics of the preposition ‘ala along the horizontal and vertical axes. The preposition ‘ala is usually used in the locative sense, in which one physical entity is in an UP-DOWN relation to another physical entity. In spite of that, the cognitive analysis of ‘ala justifies the use of this preposition in many situations to seemingly encode non-up-down spatial relations and non-physical relations. This uncovers some of the unsolved issues concerning prepositions in general and Arabic prepositions in particular, using ‘ala as a sample. Using Arabic corpus data, we reveal that in many cases and situations the use of ‘ala is extended to depict relations other than the ones where the Trajector (TR) is actually in an up-down relation to the Landmark (LM). The instances analyzed in this paper show that ‘ala encodes not only the spatial relations in which the TR and the LM are horizontally or vertically related to each other, but also non-spatial relations.
Keywords: image schema, preposition, spatial semantic, up-down relation
Procedia PDF Downloads 148
462 Trigonella foenum-graecum Seeds Extract as Therapeutic Candidate for Treatment of Alzheimer's Disease
Authors: Mai M. Farid, Ximeng Yang, Tomoharu Kuboyama, Yuna Inada, Chihiro Tohda
Abstract:
Intro: Trigonella foenum-graecum (Fenugreek), from the Fabaceae family, is a well-known plant traditionally used as food and medicine. Many pharmacological effects of Trigonella foenum-graecum seed extract (TF extract) have been evaluated, such as anti-diabetic, anti-tumor, and anti-dementia effects, using in vivo models. Regarding the anti-dementia effects of TF extract, diabetic rats, aluminum chloride-induced amnesia rats, and scopolamine-injected mice were previously used for evaluation, but these are not well established as Alzheimer’s disease models. In addition, in those previous studies, the active constituents of TF extract responsible for memory function were not identified. Method: This study aimed to clarify the effect of TF extract on an Alzheimer’s disease model, the 5XFAD mouse, which overexpresses mutated APP and PS1 genes, and to determine the major active constituent in the brain after oral intake of TF extract. Results: Trigonelline was detected in the cerebral cortex of 5XFAD mice 24 hours after oral administration of TF extract by LC-MS/MS. Oral administration of TF extract for 17 days improved object location memory in 5XFAD mice. Conclusion: These results suggest that TF extract and its active constituents could be a promising therapeutic candidate for Alzheimer’s disease.
Keywords: Alzheimer's disease, LC-MS/MS, memory recovery, Trigonella foenum-graecum Seeds, 5XFAD mice
Procedia PDF Downloads 147
461 A Comparative Semantic Network Study between Chinese and Western Festivals
Authors: Jianwei Qian, Rob Law
Abstract:
With the expansion of globalization and the increase in market competition, festivals, especially traditional ones, have demonstrated their vitality in the new context. As a new type of tourist attraction, festivals play a critically important role in promoting the tourism economy, because the organization of a festival can engage more tourists, generate more revenue, and attract wider media attention. However, in China today, traditional festivals, as a way of disseminating national culture, face the challenge of foreign festivals and their related culture. Unlike special events created solely for economic development, traditional festivals have their own culture and connotations. Therefore, it is necessary to conduct a study not only on protecting the tradition but also on promoting its development. This study compares the development of China’s Valentine’s Day and the Western Valentine’s Day in the Chinese context and centers on newspaper reports in China from 2000 to 2016. Based on the literature, two main research focuses can be established: one concerns the festival’s impact, and the other concerns tourists’ motivation to engage in a festival. Newspaper reports serve as the research discourse and can help cover the two focal points. With the assistance of content mining techniques, semantic networks for both Days are constructed separately to help depict the status quo of these two festivals in China. Based on the networks, two models are established to show the key component system of traditional festivals, in the hope of strengthening the positive role festival tourism plays in the promotion of the economy and culture. According to the semantic networks, newspaper reports on both festivals have similarities and differences. The differences are mainly reflected in cultural connotations, because Westerners and Chinese may show their love in different ways. Nevertheless, they share more common points in terms of the economy, tourism, and society. They also have a similar living environment and stakeholders. Thus, they can be promoted together to revitalize some traditions in China. Three strategies are proposed to realize the aforementioned aim. Firstly, localize international festivals to suit the Chinese context so that they function better. Secondly, facilitate the internationalization of traditional Chinese festivals so that they receive more recognition worldwide. Finally, allow traditional festivals to compete with foreign ones so that they learn from each other and shed light on the development of other festivals. It is believed that if all of this can be realized, not only will traditional Chinese festivals have a more promising future, but so will foreign ones. Accordingly, the paper can contribute to the theoretical construction of festival images through the presentation of the semantic networks. Meanwhile, the identified features and issues of festivals from two different cultures can inform the organization and marketing of festivals as a vital tourism activity. In the long run, the study can enhance the festival as a key attraction supporting the sustainable development of both the economy and society.
Keywords: Chinese context, comparative study, festival tourism, semantic network analysis, valentine’s day
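A minimal sketch of how such a semantic network can be built is shown below: word co-occurrences within newspaper sentences become weighted edges, and centrality highlights the dominant themes. The toy sentences are placeholders for the newspaper corpus.

```python
# Co-occurrence semantic network: words as nodes, within-sentence co-occurrence as edges.
from itertools import combinations
from collections import Counter
import networkx as nx

sentences = [
    "valentine festival boosts tourism economy",
    "traditional festival culture attracts tourists",
    "festival marketing drives local economy",
]
edges = Counter()
for s in sentences:
    words = sorted(set(s.split()))
    edges.update(combinations(words, 2))      # count co-occurrence within each sentence

G = nx.Graph()
for (u, v), w in edges.items():
    G.add_edge(u, v, weight=w)

centrality = nx.degree_centrality(G)
print(sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:3])   # key concepts
```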
Procedia PDF Downloads 232
460 Efficient Residual Road Condition Segmentation Network Based on Reconstructed Images
Authors: Xiang Shijie, Zhou Dong, Tian Dan
Abstract:
This paper focuses on the application of real-time semantic segmentation technology to complex road condition recognition, aiming to address the critical issue of how to improve segmentation accuracy while ensuring real-time performance. Semantic segmentation technology has broad application prospects in fields such as autonomous vehicle navigation and remote sensing image recognition. However, current real-time semantic segmentation networks face significant technical challenges and optimization gaps in balancing speed and accuracy. To tackle this problem, this paper conducts an in-depth study and proposes an innovative Guided Image Reconstruction Module. By resampling high-resolution images into a set of low-resolution images, this module effectively reduces computational complexity, allowing the network to extract features more efficiently within limited resources, thereby improving the performance of real-time segmentation tasks. In addition, a dual-branch network structure is designed to fully leverage the advantages of different feature layers. A novel Hybrid Attention Mechanism is also introduced, which can dynamically capture multi-scale contextual information and effectively enhance the focus on important features, thus improving the segmentation accuracy of the network in complex road conditions. Compared with traditional methods, the proposed model achieves a better balance between accuracy and real-time performance and demonstrates competitive results in road condition segmentation tasks, showcasing its superiority. Experimental results show that this method not only significantly improves segmentation accuracy while maintaining real-time performance, but also remains stable across diverse and complex road conditions, making it highly applicable in practical scenarios. By incorporating the Guided Image Reconstruction Module, the dual-branch structure, and the Hybrid Attention Mechanism, this paper presents a novel approach to real-time semantic segmentation tasks, which is expected to further advance the development of this field.
Keywords: hybrid attention mechanism, image reconstruction, real-time, road status recognition
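Since the abstract does not spell out the Hybrid Attention Mechanism, the sketch below shows one common, assumed formulation: channel attention followed by spatial attention, so that both feature maps and locations can be re-weighted. It is not the paper's exact design.

```python
# Assumed hybrid attention block: channel attention then spatial attention.
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_att = nn.Sequential(          # squeeze-and-excitation style branch
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_att = nn.Sequential(          # spatial attention branch
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_att(x)                               # re-weight channels
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)      # avg + max over channels
        return x * self.spatial_att(pooled)                       # re-weight locations

feat = torch.randn(1, 64, 32, 64)        # a feature map from one branch of the network
print(HybridAttention(64)(feat).shape)   # torch.Size([1, 64, 32, 64])
```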
Procedia PDF Downloads 23
459 Multimodal Discourse, Logic of the Analysis of Transmedia Strategies
Authors: Bianca Suárez Puerta
Abstract:
Multimodal discourse refers to a method of studying the media continuum between reality, screens as a device, audience, author, and media as a production from the audience. For this study, we used the semantic differential, a method proposed in the sixties by Osgood, Suci, and Tannenbaum, which starts from the assumption that under each particular way of perceiving the world, in each singular idea, there is a common cultural meaning that organizes experiences. In relation to this shared symbolic dimension, the method has produced significant results, as it focuses on breaking down the meaning of certain significant acts into series of statements that place the subjects in front of certain concepts. In Colombia, in 2016, a tool was designed to measure the meaning of multimodal productions, especially the acts of sense of transmedia productions that received funds from the Ministry of ICT of Colombia, and also to analyze predictable patterns that can be found in calls and funds aimed at the production of culture in Colombia, in the context of the peace agreement, as a request for expressions from a hegemonic position seeking to impose a worldview.
Keywords: semantic differential, semiotics, transmedia, critical analysis of discourse
Procedia PDF Downloads 205
458 Mondoc: Informal Lightweight Ontology for Faceted Semantic Classification of Hypernymy
Authors: M. Regina Carreira-Lopez
Abstract:
Lightweight ontologies seek to make concrete the union relationships between a parent node and a secondary node, also called a "child node". This logic relation (L) can be formally defined as a triple ontological relation (LO) equivalent to LO in ⟨LN, LE, LC⟩, where LN represents a finite set of nodes (N); LE is a set of entities (E), each of which represents a relationship between nodes so as to form a rooted tree ⟨LN, LE⟩; and LC is a finite set of concepts (C), encoded in a formal language (FL). Mondoc enables more refined searches on semantic and classified facets for retrieving specialized knowledge about Atlantic migrations, from the Declaration of Independence of the United States of America (1776) to the end of the Spanish Civil War (1939). The model aims to increase documentary relevance by applying an inverse frequency of co-occurrent hypernymy phenomena to a concrete dataset of textual corpora, with the RMySQL package. Mondoc profiles archival utilities implementing SQL programming code and allows data export to XML schemas, achieving semantic and faceted analysis of speech by analyzing keywords in context (KWIC). The methodology applies random and unrestricted sampling techniques with RMySQL to verify the resonance phenomena of inverse documentary relevance between the number of co-occurrences of the same term (t) in more than two documents of a set of texts (D). Secondly, the research also shows that co-associations between (t) and its corresponding synonyms and antonyms (synsets) are also inverse. The results from grouping facets or polysemic words with synsets in more than two textual corpora within their syntagmatic context (nouns, verbs, adjectives, etc.) indicate how to proceed with the semantic indexing of hypernymy phenomena for subject-heading lists and for authority lists for documentary and archival purposes. Mondoc contributes to the development of web directories and seems to achieve a proper and more selective search of e-documents (classification ontology). It can also foster the production of online catalogs for semantic authorities, or concepts, through XML schemas, because its applications could be used for implementing data models, after a prior adaptation of the base ontology to structured meta-languages such as OWL and RDF (descriptive ontology). Mondoc serves the classification of concepts and applies a faceted semantic indexing approach. It enables information retrieval, as well as quantitative and qualitative data interpretation. The model reproduces a tuple ⟨LN, LE, LT, LCF, L, BKF⟩, where LN is a set of nodes that connect with other nodes to form a rooted tree ⟨LN, LE⟩; LT specifies a set of terms; and LCF acts as a finite set of concepts, encoded in a formal language, L. Mondoc only resolves partial problems of linguistic ambiguity (in the case of synonymy and antonymy); neither the pragmatic dimension of natural language nor the cognitive perspective is addressed. To achieve this goal, forthcoming programming developments should target oriented meta-languages with structured documents in XML.
Keywords: hypernymy, information retrieval, lightweight ontology, resonance
Procedia PDF Downloads 125