Search results for: speech signal

1412 The Formation of Mutual Understanding in Conversation: An Embodied Approach

Authors: Haruo Okabayashi

Abstract:

Mutual understanding in conversation is very important for human relations. This study investigates the mental function underlying the formation of mutual understanding between two people in conversation using an embodied approach. Forty people participated in this study and were divided into pairs randomly. Four conversation situations between the two partners (making/listening to fun or pleasant talk, making/listening to regrettable talk) were set for four minutes each, and the finger plethysmogram (200 Hz) of each participant was measured. As a result, the attractors of the participants who reported “I did not understand my partner” show a collapsed shape, which means the fluctuation of their rhythm is too small to match their partner’s rhythm, and their cross-correlation is low. The autonomic balance of both persons tends to resonate during conversation, and both largest Lyapunov exponents (LLEs) tend to resonate, too. In human history, human beings, as weak mammals, may have survived by being with others; that is, they have developed resonating characteristics, which is called self-organization. However, this resonant feature sometimes collapses, depending on the lifestyle the person has formed since birth. It is difficult for people who do not have a lifestyle of mutual gaze to resonate their biological signal waves with others’. These people show features such as anxiety, fatigue, and a tendency toward confusion. Mutual understanding is thought to be formed as a result of cooperation between the self-organizing features of the persons who are talking and the lifestyle indicated by mutual gaze. Such an entanglement phenomenon is called a nonlinear relation. This research finds that the formation of mutual understanding is expressed by the rhythm of a biological signal showing a nonlinear relationship.
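
As an illustration of the kind of rhythm comparison described above, the sketch below computes a normalized cross-correlation between two finger-plethysmogram rhythms; the synthetic signals, the 200 Hz sampling rate, and the four-minute window are assumptions for the example, not values or code taken from the study.

```python
import numpy as np

def normalized_cross_correlation(x, y):
    """Peak normalized cross-correlation between two equal-length signals."""
    x = (x - x.mean()) / (x.std() * len(x))
    y = (y - y.mean()) / y.std()
    return np.max(np.correlate(x, y, mode="full"))

fs = 200                           # Hz, sampling rate of the finger plethysmogram
t = np.arange(0, 240, 1 / fs)      # a four-minute conversation segment
# Hypothetical pulse-wave rhythms of the two conversation partners
partner_a = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
partner_b = np.sin(2 * np.pi * 1.2 * t + 0.3) + 0.1 * np.random.randn(t.size)

print(f"cross-correlation: {normalized_cross_correlation(partner_a, partner_b):.2f}")
```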

Keywords: embodied approach, finger plethysmogram, mutual understanding, nonlinear phenomenon

Procedia PDF Downloads 258
1411 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor

Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng

Abstract:

Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. The existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound and fetal fibronectin. However, they are subjective, or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), a method based on recording uterine electrical activity with electrodes attached to the maternal abdomen, is a promising way to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the differences in EHG signal features between term labor and preterm labor. A free-access database was used, with 300 signals acquired from two groups of pregnant women who delivered at term (262 cases) and preterm (38 cases). Among them, EHG signals from 38 term-labor and 38 preterm-labor recordings were preprocessed with band-pass Butterworth filters of 0.08–4 Hz. Then, EHG signal features were extracted, comprising classical time-domain descriptors including root mean square and zero-crossing number, spectral parameters including peak frequency, mean frequency and median frequency, wavelet packet coefficients, autoregressive (AR) model coefficients, and nonlinear measures including the maximal Lyapunov exponent, sample entropy and correlation dimension. Their statistical significance for distinguishing the two groups of recordings was assessed. The results showed that the mean frequency of preterm labor was significantly smaller than that of term labor (p < 0.05). Five coefficients of the AR model showed a significant difference between term labor and preterm labor. The maximal Lyapunov exponent of early preterm (time of recording < the 26th week of gestation) was significantly smaller than that of early term. The sample entropy of late preterm (time of recording > the 26th week of gestation) was significantly smaller than that of late term. There was no significant difference in the other features between the term labor and preterm labor groups. Any future work regarding classification should therefore focus on using multiple techniques, with the mean frequency, AR coefficients, maximal Lyapunov exponent and sample entropy being among the prime candidates. Even if these methods are not yet ready for clinical practice, they provide the most promising indicators for preterm labor.
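
A minimal sketch of the preprocessing and a few of the features listed above, assuming a placeholder recording and an illustrative sampling rate and filter order (neither comes from the paper):

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 20.0                                    # Hz, assumed sampling rate
ehg = np.random.randn(int(fs * 1800))        # placeholder 30-minute recording

# Band-pass Butterworth filter, 0.08-4 Hz as in the abstract (4th order is an assumption)
b, a = butter(4, [0.08, 4.0], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, ehg)

# Classical time-domain descriptors
rms = np.sqrt(np.mean(filtered ** 2))
zero_crossings = int(np.sum(filtered[:-1] * filtered[1:] < 0))

# Spectral parameters from the Welch power spectral density
freqs, psd = welch(filtered, fs=fs, nperseg=1024)
peak_frequency = freqs[np.argmax(psd)]
mean_frequency = np.sum(freqs * psd) / np.sum(psd)
median_frequency = freqs[np.searchsorted(np.cumsum(psd), 0.5 * np.sum(psd))]

print(rms, zero_crossings, peak_frequency, mean_frequency, median_frequency)
```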

Keywords: electrohysterogram, feature, preterm labor, term labor

Procedia PDF Downloads 556
1410 Patient-Friendly Hand Gesture Recognition Using AI

Authors: K. Prabhu, K. Dinesh, M. Ranjani, M. Suhitha

Abstract:

During the tough times of COVID, people who were hospitalized found it difficult to always convey what they wanted or needed to the attendee. Sometimes the attendee might also not be there. In that case, patients can use simple hand gestures to control electrical appliances (for example, setting it for a zero-watt bulb) and three other gestures for voice-note intimation. In this AI-based hand recognition project, a NodeMCU is used for the control action of the relay; it is connected to Firebase for storing the value in the cloud and is interfaced with the Python code via a Raspberry Pi. For three hand gestures, a voice clip is added for intimation to the attendee. This is done with the help of Google’s text-to-speech and the built-in audio file option in the Raspberry Pi 4. All five gestures are detected when shown with the hands via the webcam, which is placed for gesture detection. A personal computer is used for displaying the gestures and for running the code in the Raspberry Pi Imager.

Keywords: nodeMCU, AI technology, gesture, patient

Procedia PDF Downloads 151
1409 Connecting MRI Physics to Glioma Microenvironment: Comparing Simulated T2-Weighted MRI Models of Fixed and Expanding Extracellular Space

Authors: Pamela R. Jackson, Andrea Hawkins-Daarud, Cassandra R. Rickertsen, Kamala Clark-Swanson, Scott A. Whitmire, Kristin R. Swanson

Abstract:

Glioblastoma Multiforme (GBM), the most common primary brain tumor, often presents with hyperintensity on T2-weighted or T2-weighted fluid-attenuated inversion recovery (T2/FLAIR) magnetic resonance imaging (MRI). This hyperintensity corresponds to vasogenic edema; however, there are likely many infiltrating tumor cells within the hyperintensity as well. While MRIs do not directly indicate tumor cells, they do reflect the microenvironmental water abnormalities caused by the presence of tumor cells and edema. The inherent heterogeneity of GBMs and the resulting MRI features complicate assessing disease response. To understand how hyperintensity on T2/FLAIR MRI may correlate with edema in the extracellular space (ECS), a multi-compartmental MRI signal equation, which takes into account tissue compartments and their associated volumes, with input coming from a mathematical model of glioma growth that incorporates edema formation, was explored. The reasonableness of two possible extracellular space schemes was evaluated by varying the T2 of the edema compartment and calculating the possible resulting T2s in tumor and peripheral edema. In the mathematical model, gliomas comprised vasculature and three tumor cellular phenotypes: normoxic, hypoxic, and necrotic. Edema was characterized as fluid leaking from abnormal tumor vessels. Spatial maps of tumor cell density and edema for virtual tumors were simulated with different rates of proliferation and invasion and various ECS expansion schemes. These spatial maps were then passed into a multi-compartmental MRI signal model to generate simulated T2/FLAIR MR images. Individual compartments’ T2 values in the signal equation were either taken from the literature or estimated, and the T2 for edema specifically was varied over a wide range (200 ms – 9200 ms). T2 maps were calculated from the simulated images. T2 values based on the simulated images were evaluated for regions of interest (ROIs) in normal-appearing white matter, tumor, and peripheral edema. The ROI T2 values were compared to T2 values reported in the literature. The expanding scheme of extracellular space had T2 values similar to the values calculated from the literature. The static scheme of extracellular space had much lower T2 values, and no matter what T2 was associated with edema, the intensities did not come close to literature values. Expanding the extracellular space is necessary to achieve simulated edema intensities commensurate with acquired MRIs.
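
The multi-compartmental signal equation referred to above is, in its generic form, a volume-weighted sum of exponentials. The specific compartments and weights used in the study are not reproduced in the abstract, so the expression below is only the standard template on which such models are built:

```latex
% Generic multi-compartment T2-weighted signal model (template form only):
% v_i = volume fraction of compartment i, T_{2,i} = its transverse relaxation time.
S(TE) = \sum_{i} v_i \, e^{-TE / T_{2,i}}, \qquad \sum_{i} v_i = 1
```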

Keywords: extracellular space, glioblastoma multiforme, magnetic resonance imaging, mathematical modeling

Procedia PDF Downloads 226
1408 Highly Realistic Facial Expressions of Anthropomorphic Social Agent as a Factor in Solving the 'Uncanny Valley' Problem

Authors: Daniia Nigmatullina, Vlada Kugurakova, Maxim Talanov

Abstract:

We present a methodology and our plans for anthropomorphic social agent visualization. That includes the creation of a three-dimensional model of the virtual companion's head and its facial expressions. Talking Head is a cross-disciplinary project developing a human-machine interface with cognitive functions. During the creation of a realistic humanoid robot or character, the ‘uncanny valley’ problem may arise. We consider this phenomenon and its possible causes. We intend to overcome the ‘uncanny valley’ by increasing realism. This article discusses issues that should be considered when creating highly realistic characters (particularly the head), their facial expressions and speech visualization.

Keywords: anthropomorphic social agent, facial animation, uncanny valley, visualization, 3D modeling

Procedia PDF Downloads 283
1407 Leaf Image Processing: Review

Authors: T. Vijayashree, A. Gopal

Abstract:

The aim of the work is to classify and authenticate medicinal plant materials and herbs widely used in Indian herbal medicinal preparations. The quality and authenticity of these raw materials must be ensured for the preparation of herbal medicines. These raw materials have to be carefully screened, analyzed and documented because they can be mistaken for look-alike materials that do not have medicinal characteristics.

Keywords: authenticity, standardization, principal component analysis, imaging processing, signal processing

Procedia PDF Downloads 233
1406 Telemedicine for Telerehabilitation in Areas Affected by Social Conflicts in Colombia

Authors: Lilia Edit Aparicio Pico, Paulo Cesar Coronado Sánchez, Roberto Ferro Escobar

Abstract:

This paper presents the implementation of telemedicine services for physiotherapy, occupational therapy, and speech therapy rehabilitation, using the telebroadcasting of audiovisual content to enhance comprehensive patient recovery in rural areas of the municipality of San Vicente del Caguán, a region characterized by high levels of social conflict in Colombia. The region faces challenges such as functional disorders, physical rehabilitation needs, and a high prevalence of hearing diseases, leading to neglect and substandard health services. Limited access to healthcare due to communication barriers and transportation difficulties exacerbates these issues. To address these challenges, a research initiative was undertaken to leverage information and communication technologies (ICTs) to improve healthcare quality and accessibility for this vulnerable population. The primary objective was to develop a tele-rehabilitation system providing asynchronous online therapies and teleconsultation services for patient follow-up during the recovery process. The project comprises two components: communication systems and human development. The technological component involves the establishment of a wireless network connecting rural centers and the development of a mobile application for video-based therapy delivery. Communications are provided by a radio link, using internet access provided by the Colombian government in the municipality of San Vicente del Caguán, that connects two rural centers (Pozos and Tres Esquinas), together with a mobile application for managing videos for asynchronous broadcasting in rural settlements and patients' homes. This component constitutes an operational model integrating information and telecommunications technologies. The second component involves pedagogical and human development. The primary focus is on the patient: performance indicators and the efficiency of therapy support were evaluated for the assessment and monitoring of telerehabilitation results in physical, occupational, and speech therapy. The specific objectives were to implement a wireless network to ensure audiovisual content transmission for tele-rehabilitation; to design audiovisual content for tele-rehabilitation based on the physiotherapy, occupational therapy, and speech therapy services provided by the ESE Hospital San Rafael; to develop a software application for fixed and mobile devices enabling access to the tele-rehabilitation audiovisual content for healthcare personnel and patients; and, finally, to evaluate the technological solution's contribution to the ESE Hospital San Rafael community. The research comprised four phases: wireless network implementation, audiovisual content design, software application development, and evaluation of the technological solution's impact. Key findings include the successful implementation of virtual teletherapy, both synchronous and asynchronous, and the assessment of technological performance indicators, patient evolution, timeliness, acceptance, and service quality of the tele-rehabilitation therapies. The study demonstrated improved service coverage, increased care supply, enhanced access to timely therapies for patients, and positive acceptance of the teletherapy modalities. Additionally, the project generated new knowledge for potential replication in other regions and proposed strategies for the short- and medium-term improvement of service quality and care indicators.

Keywords: e-health, medical informatics, telemedicine, telerehabilitation, virtual therapy

Procedia PDF Downloads 38
1405 Comparing Russian and American Students’ Metaphorical Competence

Authors: Svetlana L. Mishlanova, Evgeniia V. Ermakova, Mariia E. Timirkina

Abstract:

The paper is concerned with the study of metaphor production in essays written by Russian and English native speakers within the framework of cognitive metaphor theory. It considers metaphorical competence as an individual’s ability to recognize, understand and use metaphors in speech. The work analyzes the influence of visual metaphor on the production and density of conventional and novel verbal metaphors. The main research methods include an experiment connected with image interpretation, the metaphor identification procedure (MIPVU) and the visual conventional metaphor identification procedure proposed by the VisMet group. The research findings will be used in a project aimed at comparing the metaphorical competence of native and non-native English speakers.

Keywords: metaphor, metaphorical competence, conventional, novel

Procedia PDF Downloads 272
1397 The Implantable MEMS Blood Pressure Sensor Model with Wireless Powering and Data Transmission

Authors: Vitaliy Petrov, Natalia Shusharina, Vitaliy Kasymov, Maksim Patrushev, Evgeny Bogdanov

Abstract:

The leading worldwide causes of death are ischemic heart disease and other cardiovascular illnesses. Generally, the common symptom is high blood pressure. Long-term blood pressure monitoring is very important for prophylaxis, correct diagnosis and timely therapy. Non-invasive methods based on Korotkoff sounds cannot be applied frequently or for a long time. Implantable devices can combine long-term monitoring with high measurement accuracy. The main purpose of this work is to create a real-time monitoring system for decreasing the death rate from cardiovascular diseases. These days, implantable electronic devices have begun to play an important role in medicine. Usually, an implantable device consists of a transmitter, a power supply, which can be wireless with a specially made battery, and a measurement circuit. Common problems in making implantable devices are the short lifetime of the battery, large size and biocompatibility. In this work, blood pressure measurement is the focus because it is one of the main symptoms of cardiovascular diseases. Our device consists of three parts: the implantable pressure sensor, an external transmitter and an automated workstation in a hospital. The implantable pressure sensor could be based on piezoresistive or capacitive technologies. Both sensors have some advantages and some limitations. The developed circuit is based on a small capacitive sensor made with microelectromechanical systems (MEMS) technology. The capacitive sensor can provide high sensitivity, low power consumption and minimal hysteresis compared to the piezoresistive sensor. For this device, an oscillator-based circuit was selected, in which the frequency depends on the capacitance of the sensor; hence, pressure can be calculated from the capacitance. The external device (transmitter) is used for wireless charging and signal transmission. Some implant devices for these applications are passive: the external device sends a radio-wave signal to the internal LC circuit of the implant, receives the signal reflected from the implant, and from the change of frequency it is possible to calculate the change of capacitance and then blood pressure. However, this method has some disadvantages, such as dependence on the patient's position and static use. The developed implantable device does not have these disadvantages and sends blood pressure data to the external part in real time. The external device continuously sends information about blood pressure to a hospital cloud service for analysis by a physician. The doctor's automated workstation at the hospital also acts as a dashboard, which displays the actual medical data of patients (who require attention) and stores it in the cloud service. Usually, critical heart conditions occur a few hours before a heart attack, and the device is able to send an alarm signal to the hospital for early action by the medical service. The system was tested with wireless charging and data transmission. These results can be used for the ASIC design of the MEMS pressure sensor.
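
A toy calculation of the oscillator-based readout idea (frequency to capacitance to pressure). The LC relation is the textbook one, but the inductance value and the linear capacitance-pressure calibration below are invented purely for illustration and do not describe the actual device:

```python
import math

L = 10e-6       # H, assumed inductance of the oscillator tank circuit
C0 = 10e-12     # F, assumed sensor capacitance at zero gauge pressure
SENS = 5e-15    # F/mmHg, assumed linear capacitance sensitivity

def capacitance_from_frequency(f_hz: float) -> float:
    """Invert f = 1 / (2*pi*sqrt(L*C)) for the tank capacitance."""
    return 1.0 / (L * (2.0 * math.pi * f_hz) ** 2)

def pressure_from_capacitance(c_farads: float) -> float:
    """Assumed linear calibration around the zero-pressure capacitance."""
    return (c_farads - C0) / SENS

measured_f = 15.8e6  # Hz, an example oscillator reading
c = capacitance_from_frequency(measured_f)
print(f"pressure ~ {pressure_from_capacitance(c):.1f} mmHg")
```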

Keywords: MEMS sensor, RF power, wireless data, oscillator-based circuit

Procedia PDF Downloads 579
1403 A High-Throughput Enzyme Screening Method Using Broadband Coherent Anti-Stokes Raman Spectroscopy

Authors: Ruolan Zhang, Ryo Imai, Naoko Senda, Tomoyuki Sakai

Abstract:

Enzymes have attracted increasing attention in industrial manufacturing for their applicability in catalyzing complex chemical reactions under mild conditions. Directed evolution has become a powerful approach to optimize enzymes and exploit their full potential under circumstances of insufficient structure-function knowledge. With the incorporation of cell-free synthetic biotechnology, rapid enzyme synthesis can be realized because no cloning procedures such as transfection are needed. Its open environment also enables direct enzyme measurement. These properties of cell-free biotechnology lead to an excellent throughput of enzyme generation. However, the capabilities of current screening methods have limitations. Fluorescence-based assays need an applicable fluorescent label, and the reliability of the acquired enzymatic activity is influenced by the fluorescent label’s binding affinity and photostability. To acquire the natural activity of an enzyme, another approach is to combine a pre-screening step with high-performance liquid chromatography (HPLC) measurement, but its throughput is limited by the necessary time investment. Hundreds of variants are selected from libraries, and their enzymatic activities are then identified one by one by HPLC. The turnaround time is 30 minutes per sample by HPLC, which limits the enzyme improvement achievable within a reasonable time. To achieve truly high-throughput enzyme screening, i.e., to obtain reliable enzyme improvement within a reasonable time, a widely applicable high-throughput measurement of enzymatic reactions is highly demanded. Here, a high-throughput screening method using broadband coherent anti-Stokes Raman spectroscopy (CARS) is proposed. CARS is a form of coherent Raman spectroscopy that can identify label-free chemical components specifically from their inherent molecular vibrations. These characteristic vibrational signals are generated from different vibrational modes of chemical bonds. With broadband CARS, the chemicals in one sample can be identified from their signals in a single broadband CARS spectrum. Moreover, it can magnify the signal levels by several orders of magnitude relative to spontaneous Raman systems and therefore has the potential to evaluate a chemical's concentration rapidly. As a demonstration of screening with CARS, alcohol dehydrogenase, which converts ethanol and the oxidized form of nicotinamide adenine dinucleotide (NAD+) to acetaldehyde and the reduced form (NADH), was used. The signal of NADH at 1660 cm⁻¹, which is generated from the nicotinamide in NADH, was utilized to measure its concentration. The evaluation time for the CARS signal of NADH was determined to be as short as 0.33 seconds, with a system sensitivity of 2.5 mM. The time course of the alcohol dehydrogenase reaction was successfully measured from the increasing signal intensity of NADH. This measurement result of CARS was consistent with the result of a conventional method, UV-Vis. CARS is expected to find application in high-throughput enzyme screening and to enable more reliable enzyme improvement within a reasonable time.
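
The concentration readout described above amounts to a calibration curve between the 1660 cm⁻¹ CARS intensity and known NADH concentrations. The sketch below fits such a curve with a simple linear model; the numbers are made up for illustration and are not measurements from the paper:

```python
import numpy as np

# Hypothetical calibration data: known NADH concentrations (mM) and the
# corresponding background-corrected CARS intensity at 1660 cm^-1 (a.u.)
conc_mM = np.array([0.0, 2.5, 5.0, 10.0, 20.0])
intensity = np.array([0.02, 0.31, 0.58, 1.15, 2.27])

slope, intercept = np.polyfit(conc_mM, intensity, 1)

def nadh_concentration(measured_intensity: float) -> float:
    """Map a measured 1660 cm^-1 intensity back to NADH concentration (mM)."""
    return (measured_intensity - intercept) / slope

print(f"estimated NADH: {nadh_concentration(0.85):.1f} mM")
```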

Keywords: Coherent Anti-Stokes Raman Spectroscopy, CARS, directed evolution, enzyme screening, Raman spectroscopy

Procedia PDF Downloads 133
1402 Developing an AI-Driven Application for Real-Time Emotion Recognition from Human Vocal Patterns

Authors: Sayor Ajfar Aaron, Mushfiqur Rahman, Sajjat Hossain Abir, Ashif Newaz

Abstract:

This study delves into the development of an artificial intelligence application designed for real-time emotion recognition from human vocal patterns. Utilizing advanced machine learning algorithms, including deep learning and neural networks, the paper highlights both the technical challenges and potential opportunities in accurately interpreting emotional cues from speech. Key findings demonstrate the critical role of diverse training datasets and the impact of ambient noise on recognition accuracy, offering insights into future directions for improving robustness and applicability in real-world scenarios.
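
A minimal illustration of the kind of pipeline the abstract describes: spectral features extracted from a speech clip and fed to a small classifier. The file paths, label set, MFCC features, and model choice are assumptions for the sketch, not the authors' configuration (librosa and scikit-learn are assumed to be installed):

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def mfcc_features(path: str) -> np.ndarray:
    """Mean MFCC vector summarizing one utterance."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Hypothetical training set: (wav path, emotion label) pairs
train_items = [("clip_happy.wav", "happy"), ("clip_sad.wav", "sad")]
X = np.vstack([mfcc_features(p) for p, _ in train_items])
y = [label for _, label in train_items]

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
print(clf.predict([mfcc_features("new_clip.wav")]))
```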

Keywords: artificial intelligence, convolutional neural network, emotion recognition, vocal patterns

Procedia PDF Downloads 34
1401 Numerical Simulations of Acoustic Imaging in Hydrodynamic Tunnel with Model Adaptation and Boundary Layer Noise Reduction

Authors: Sylvain Amailland, Jean-Hugh Thomas, Charles Pézerat, Romuald Boucheron, Jean-Claude Pascal

Abstract:

The noise requirements for naval and research vessels have seen an increasing demand for quieter ships in order to fulfil current regulations and to reduce the effects on marine life. Hence, new methods dedicated to the characterization of propeller noise, which is the main source of noise in the far field, are needed. The study of cavitating propellers in a closed section is interesting for analyzing hydrodynamic performance but can involve significant difficulties for hydroacoustic study, especially due to reverberation and boundary layer noise in the tunnel. The aim of this paper is to present a numerical methodology for the identification of hydroacoustic sources on marine propellers using hydrophone arrays in a large hydrodynamic tunnel. The main difficulties are linked to the reverberation of the tunnel and the boundary layer noise, which strongly reduce the signal-to-noise ratio. In this paper, it is proposed to estimate the reflection coefficients using an inverse method and some reference transfer functions measured in the tunnel. This approach makes it possible to reduce the uncertainties of the propagation model used in the inverse problem. In order to reduce the boundary layer noise, a cleaning algorithm taking advantage of the low-rank and sparse structure of the cross-spectral matrices of the acoustic and the boundary layer noise is presented. This approach makes it possible to recover the acoustic signal even well below the boundary layer noise. The improvement brought by this method is visible on acoustic maps resulting from beamforming and DAMAS algorithms.
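
For readers unfamiliar with the final mapping step, a bare-bones conventional frequency-domain beamformer applied to a cross-spectral matrix looks like the sketch below. The array geometry, analysis frequency, and synthetic source are placeholder values, and the low-rank plus sparse denoising step discussed in the abstract is not reproduced here:

```python
import numpy as np

c = 1500.0        # m/s, sound speed in water (placeholder)
f = 5000.0        # Hz, analysis frequency (placeholder)
k = 2 * np.pi * f / c

# Placeholder geometry: hydrophone positions and candidate source grid points
mics = np.random.rand(16, 3)          # 16 hydrophones
grid = np.random.rand(100, 3) + 2.0   # 100 scan points

def steering(point):
    """Monopole steering vector from one grid point to every hydrophone."""
    r = np.linalg.norm(mics - point, axis=1)
    v = np.exp(-1j * k * r) / r
    return v / np.linalg.norm(v)

# Synthetic cross-spectral matrix produced by a single source at grid point 42
p = steering(grid[42])
csm = np.outer(p, p.conj())

beam_map = np.array([np.real(steering(g).conj() @ csm @ steering(g)) for g in grid])
print("strongest grid point:", np.argmax(beam_map))   # expected: 42
```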

Keywords: acoustic imaging, boundary layer noise denoising, inverse problems, model adaptation

Procedia PDF Downloads 323
1400 Design and Development of Automatic Onion Harvester

Authors: P. Revathi, T. Mrunalini, K. Padma Priya, P. Ramya, R. Saranya

Abstract:

Keywords: onion harvesting, automatic plucking, camera, raspberry pi

Procedia PDF Downloads 187
1399 Differential Expression of GABA and Its Signaling Components in Ulcerative Colitis and Irritable Bowel Syndrome Pathogenesis

Authors: Surbhi Aggarwal, Jaishree Paul

Abstract:

Background: The role of GABA has been implicated in autoimmune diseases like multiple sclerosis, type 1 diabetes and rheumatoid arthritis, where it modulates the immune response, but its role in gut inflammation has not been defined. Ulcerative colitis (UC) and diarrhoea-predominant irritable bowel syndrome (IBS-D) both involve inflammation of the gastrointestinal tract. UC is a chronic, relapsing and idiopathic inflammation of the gut. IBS is a common functional gastrointestinal disorder characterised by abdominal pain, discomfort and alternating bowel habits. Mild inflammation is known to occur in IBS-D. Aim: The aim of this study was to investigate the role of GABA in UC as well as in IBS-D. Materials and methods: Blood and biopsy samples from UC, IBS-D and control subjects were collected. ELISA was used to measure the level of GABA in the serum of UC, IBS-D and control subjects. RT-PCR analysis was done to determine the GABAergic signal system in colon biopsies of UC, IBS-D and controls. RT-PCR was done to check the expression of proinflammatory cytokines. CurveExpert 1.4 and GraphPad Prism 6 software were used for data analysis. Statistical analysis was done by unpaired, two-tailed Student's t-test. All sets of data are represented as mean ± SEM. A probability level of p < 0.05 was considered statistically significant. Results and conclusion: A significantly decreased level of GABA and an altered GABAergic signal system were detected in UC and IBS-D as compared to controls. Significantly increased expression of proinflammatory cytokines was also determined in UC and IBS-D as compared to controls. Hence, we conclude that an insufficient level of GABA in UC and IBS-D leads to overproduction of proinflammatory cytokines, which further contributes to inflammation. GABA may be used as a promising therapeutic target for the treatment of gut inflammation or other inflammatory diseases.

Keywords: diarrheal predominant irritable bowel syndrome, γ-aminobutyric acid (GABA), inflammation, ulcerative colitis

Procedia PDF Downloads 217
1398 Inter-Annual Variations of Sea Surface Temperature in the Arabian Sea

Authors: K. S. Sreejith, C. Shaji

Abstract:

Though both the Arabian Sea and its counterpart, the Bay of Bengal, are forced primarily by the semi-annually reversing monsoons, the spatio-temporal variation of surface waters is much stronger in the Arabian Sea than in the Bay of Bengal. This study focuses on the inter-annual variability of sea surface temperature (SST) in the Arabian Sea by analysing the ERSST dataset, which covers 152 years of SST (January 1854 to December 2002) based on the ICOADS in situ observations. To capture the dominant SST oscillations and to understand the inter-annual SST variations in various local regions of the Arabian Sea, wavelet analysis was performed on this long time-series SST dataset. This tool is advantageous over other signal-analysis tools such as Fourier analysis because it unfolds a time series (signal) in both the frequency and time domains. This technique makes it easier to determine the dominant modes of variability and to explain how those modes vary in time. The analysis revealed that pentadal SST oscillations predominate in most of the analysed local regions of the Arabian Sea. From the time information of the wavelet analysis, it was interpreted that cold and warm events of large amplitude occurred during the periods 1870-1890, 1890-1910, 1930-1950, 1980-1990 and 1990-2005. SST oscillations with peaks having a period of ~2-4 years were found to be significant in the central and eastern regions of the Arabian Sea. This indicates that the inter-annual SST variation in the Indian Ocean is affected by El Niño-Southern Oscillation (ENSO) and Indian Ocean Dipole (IOD) events.
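
A compact example of the continuous wavelet transform step on a monthly SST anomaly series, using the PyWavelets package as one possible tool. The synthetic series and scale range here are placeholders, not the ERSST data or the authors' settings:

```python
import numpy as np
import pywt

# Placeholder monthly SST anomaly series (152 years x 12 months)
months = np.arange(152 * 12)
sst_anomaly = np.sin(2 * np.pi * months / 60) + 0.3 * np.random.randn(months.size)

scales = np.arange(2, 256)
coeffs, freqs = pywt.cwt(sst_anomaly, scales, "morl", sampling_period=1 / 12.0)

# Periods (in years) associated with each scale; pentadal power sits near 5 years
periods_years = 1.0 / freqs
power = np.abs(coeffs) ** 2
print("period of maximum mean power: %.1f years"
      % periods_years[np.argmax(power.mean(axis=1))])
```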

Keywords: Arabian Sea, ICOADS, inter-annual variation, pentadal oscillation, SST, wavelet analysis

Procedia PDF Downloads 269
1397 Sexual Orientation, Household Labour Division and the Motherhood Wage Penalty

Authors: Julia Hoefer Martí

Abstract:

While research has consistently found a significant motherhood wage penalty for heterosexual women, for homosexual women the evidence has appeared to suggest no effect, or possibly even a wage bonus. This paper presents a model of the household with a public good that requires both a monetary expense and a labour investment, and where the household budget is shared between partners. Lower-wage partners will do relatively more of the household labour while higher-wage partners will specialise in market labour, and the arrival of a child exacerbates this split, resulting in the lower-wage partner taking on even more of the household labour in relative terms. Employers take this gender-sexuality dyad as a signal of employees’ commitment to the labour market after having a child, and use the information when setting wages after employees become parents. Given that women empirically earn lower wages than men, in a heterosexual couple the female partner will often do more of the household labour. However, as not every female partner has a lower wage, this results in an over-adjustment of wages that manifests as an unexplained motherhood wage penalty. On the other hand, in homosexual couples wage distributions are ex ante identical, and gender is no longer a useful signal to employers as to whether a partner is likely to specialise in household labour or market labour. This model is then tested using longitudinal data from the EU Statistics on Income and Living Conditions (EU-SILC) to investigate the hypothesis that women experience different wage effects of motherhood depending on their sexual orientation. While heterosexual women receive a significant motherhood wage penalty of 8-10%, homosexual mothers do not receive any significant wage bonus or penalty of motherhood, consistent with the hypothesis presented above.

Keywords: discrimination, gender, motherhood, sexual orientation, labor economics

Procedia PDF Downloads 154
1396 Multidisciplinary Approach to Diagnosis of Primary Progressive Aphasia in a Younger Middle-Aged Patient

Authors: Robert Krause

Abstract:

Primary progressive aphasia (PPA) is a neurodegenerative disease similar to frontotemporal and semantic dementia, while having a different clinical picture and anatomical pathology topography. Nonetheless, these conditions are often included under an umbrella term: frontotemporal lobar degeneration (FTLD). In this study, examples of diagnosing PPA are presented through the multidisciplinary lens of specialists from different fields (neurologists, psychiatrists, clinical speech therapists, clinical neuropsychologists and others) using a variety of diagnostic tools such as MRI, PET/CT, genetic screening and neuropsychological and logopedic methods. Thanks to that, specialists can get a better and clearer understanding of the PPA diagnosis. The study summarizes the concrete procedures and results of different specialists while diagnosing PPA in a patient of younger middle age and illustrates the importance of a multidisciplinary approach to the differential diagnosis of PPA.

Keywords: primary progressive aphasia, etiology, diagnosis, younger middle age

Procedia PDF Downloads 178
1395 Event Data Representation Based on Time Stamp for Pedestrian Detection

Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita

Abstract:

In association with the wave of electric vehicles (EVs), low-energy-consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several features, such as a high temporal resolution, which can achieve 1 Mframe/s, and a high dynamic range (120 dB). However, the point that can contribute most to low energy consumption is its sparsity; to be more specific, this sensor only captures the pixels that have an intensity change. In other words, there is no signal in areas without intensity change. That is to say, this sensor is more energy efficient than conventional sensors such as RGB cameras because redundant data can be removed. On the other hand, the data are difficult to handle because the data format is completely different from an RGB image; for example, the acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1) and a timestamp; it does not include intensity such as RGB values. Therefore, as existing algorithms cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. To solve the difficulties caused by the data format differences, most of the prior art builds frame data and feeds it to deep learning such as convolutional neural networks (CNNs) for object detection and recognition purposes. However, even though the data can be fed, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of an RGB pixel value, it is apparent that polarity information is not rich enough. Considering this context, we proposed to use the timestamp information as the data representation that is fed to deep learning. Concretely, we first make frame data divided by a certain time period, then assign an intensity value in response to the timestamp in each frame; for example, a high value is given to a recent signal. We expected that this data representation could capture the features, especially of moving objects, because the timestamp represents the movement direction and speed. Using this proposed method, we built our own dataset with a DVS fixed on a parked car in order to develop an application for a surveillance system that can detect persons around the car. We think the DVS is one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in a non-dynamic situation. For comparison purposes, we reproduced a state-of-the-art method as a benchmark, which makes frames in the same way as ours and feeds polarity information to a CNN. Then, we measured the object detection performance of the benchmark and our method on the same dataset. As a result, our method achieved an F1 score up to 7 points greater than the benchmark.
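
A small sketch of the timestamp-based representation described above: events (x, y, polarity, timestamp) falling in one time window are rasterized into a frame whose pixel intensity grows with event recency. The sensor resolution, window length, and linear recency weighting are illustrative assumptions rather than the authors' exact settings:

```python
import numpy as np

H, W = 260, 346              # assumed sensor resolution
WINDOW_US = 33_000           # assumed 33 ms frame window, in microseconds

def events_to_timestamp_frame(x, y, pol, ts):
    """Rasterize one window of events; more recent events get higher intensity.
    Polarity is ignored in this simplified, timestamp-only representation."""
    frame = np.zeros((H, W), dtype=np.float32)
    t0 = ts.min()
    in_win = ts < t0 + WINDOW_US
    # Linear recency weighting in [0, 1): the newest events approach 1
    weight = ((ts[in_win] - t0) / float(WINDOW_US)).astype(np.float32)
    np.maximum.at(frame, (y[in_win], x[in_win]), weight)  # keep newest per pixel
    return frame

# Placeholder event stream (x, y, polarity, timestamp in microseconds)
n = 10_000
x = np.random.randint(0, W, n)
y = np.random.randint(0, H, n)
pol = np.random.choice([-1, 1], n)
ts = np.sort(np.random.randint(0, WINDOW_US, n))

print(events_to_timestamp_frame(x, y, pol, ts).shape)
```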

Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption

Procedia PDF Downloads 87
1394 The Effect of the Vernacular on Code-Switching Hebrew into Palestinian Arabic

Authors: Ward Makhoul

Abstract:

Code-switching (CS) is known to be a ubiquitous phenomenon in multilingual societies and countries. The vernacular Palestinian Arabic (PA) variety spoken in Israel is one such language, used informally for day-to-day conversations only. Such conversations appear to contain code-switched instances from Hebrew, the formal and dominant language of the country, even in settings where the need for CS seems unnecessary. This study examines the CS practices in PA and investigates the reasons behind these CS instances in controlled settings, as well as the correlation between bilingual dominance and CS. In the production-task interviews and the Bilingual Language Profile (BLP) test, there was a correlation between language dominance and CS; 13 participants were interviewed to elicit and analyze natural speech containing CS instances, along with undergoing a BLP test. An acceptability judgment task observed the limits and boundaries of different code-switched linguistic structures.

Keywords: code-switching, Hebrew, Palestinian-Arabic, vernacular

Procedia PDF Downloads 106
1393 Intelligent Indoor Localization Using WLAN Fingerprinting

Authors: Gideon C. Joseph

Abstract:

The ability to localize mobile devices is quite important, as some applications may require the location information of these devices to operate or to deliver better services to users. Although there are several ways of acquiring location data for mobile devices, the WLAN fingerprinting approach has been considered in this work. This approach uses the Received Signal Strength Indicator (RSSI) measurement as a function of the position of the mobile device. RSSI is a quantitative technique for describing the radio-frequency power carried by a signal. RSSI may be used to determine RF link quality and is very useful in dense traffic scenarios where interference is of major concern, for example, indoor environments. This research aims to design a system that can predict the location of a mobile device when supplied with the mobile’s RSSIs. The developed system takes as input the RSSIs relating to the mobile device and outputs parameters that describe the location of the device, such as the longitude, latitude, floor, and building. The relationship between the received signal strengths (RSSs) of mobile devices and their corresponding locations is to be modelled, so that subsequent locations of mobile devices can be predicted using the developed model. It is obvious that describing mathematical relationships between the RSSI measurements and the localization parameters is one option for modelling the problem, but the complexity of such an approach is a serious drawback. In contrast, we propose an intelligent system that can learn the mapping of such RSSI measurements to the localization parameters to be predicted. The system is capable of upgrading its performance as more experiential knowledge is acquired. The most appealing consideration in using such a system for this task is that complicated mathematical analysis and theoretical frameworks are excluded or not needed; the intelligent system learns on its own the underlying relationship in the supplied data (RSSI levels) that corresponds to the localization parameters. The localization parameters to be predicted belong to two different tasks: the longitude and latitude of mobile devices are real values (a regression problem), while the floor and building of the mobile devices are integer-valued or categorical (a classification problem). This research work presents artificial neural network based intelligent systems to model the relationship between the RSSI predictors and the mobile device localization parameters. The designed systems were trained and validated on the collected WLAN fingerprint database. The trained networks were then tested with another supplied database to obtain the performance of the trained systems in terms of the achieved Mean Absolute Error (MAE) and error rates for the regression and classification tasks involved.
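
A condensed sketch of the two learning tasks described above, using scikit-learn multilayer perceptrons as one possible stand-in for the neural networks. The arrays, the number of access points, and the network sizes are placeholders, not the authors' architecture or dataset:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor, MLPClassifier

n_aps = 520                                              # assumed number of access points
n_samples = 1000
rssi = np.random.uniform(-100, 0, (n_samples, n_aps))    # placeholder fingerprints
lon_lat = np.random.uniform(-1, 1, (n_samples, 2))       # regression targets
floor = np.random.randint(0, 4, n_samples)               # classification target

reg = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=300).fit(rssi, lon_lat)
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300).fit(rssi, floor)

new_scan = np.random.uniform(-100, 0, (1, n_aps))
print("predicted lon/lat:", reg.predict(new_scan))
print("predicted floor:", clf.predict(new_scan))
```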

Keywords: indoor localization, WLAN fingerprinting, neural networks, classification, regression

Procedia PDF Downloads 337
1392 Integration of EEG and Motion Tracking Sensors for Objective Measure of Attention-Deficit Hyperactivity Disorder in Pre-Schoolers

Authors: Neha Bhattacharyya, Soumendra Singh, Amrita Banerjee, Ria Ghosh, Oindrila Sinha, Nairit Das, Rajkumar Gayen, Somya Subhra Pal, Sahely Ganguly, Tanmoy Dasgupta, Tanusree Dasgupta, Pulak Mondal, Aniruddha Adhikari, Sharmila Sarkar, Debasish Bhattacharyya, Asim Kumar Mallick, Om Prakash Singh, Samir Kumar Pal

Abstract:

Background: We aim to develop an integrated device comprising a single-probe EEG and a CCD-based motion sensor for a more objective measure of Attention-Deficit Hyperactivity Disorder (ADHD). While the integrated device (MAHD) relies on the EEG signal (spectral density of the beta wave) for the assessment of attention during a given structured task (painting three segments of a circle using three different colors, namely red, green and blue), the CCD sensor depicts the movement pattern of the subjects engaged in a continuous performance task (CPT). A statistical analysis of the attention and movement patterns was performed, and the accuracy of the completed tasks was analysed using indigenously developed software. The device with the embedded software, called MAHD, is intended to improve certainty with criterion E (i.e., whether symptoms are better explained by another condition). Methods: We used the EEG signal from a single-channel dry sensor placed on the frontal lobe of the head of the subjects (3-5-year-old pre-schoolers). During the painting of three segments of a circle using three distinct colors (red, green, and blue), the absolute power of the delta and beta EEG waves from the subjects was found to be correlated with relaxation and attention/cognitive load conditions. While the relaxation condition of the subject hints at hyperactivity, a more direct CCD-based motion sensor is used to track the physical movement of the subject engaged in a continuous performance task (CPT), i.e., the separation of various colored balls from one table to another. We used our indigenously developed software for the statistical analysis to derive a scale for the objective assessment of ADHD. We also compared our scale with the clinical ADHD evaluation. Results: In a limited clinical trial with preliminary statistical analysis, we found a significant correlation between the objective assessment of the ADHD subjects and the clinician’s conventional evaluation. Conclusion: MAHD, the integrated device, is intended to be an auxiliary tool to improve the accuracy of ADHD diagnosis by supporting greater criterion E certainty.

Keywords: ADHD, CPT, EEG signal, motion sensor, psychometric test

Procedia PDF Downloads 89
1391 Adaptation and Validation of Voice Handicap Index in Telugu Language

Authors: B. S. Premalatha, Kausalya Sahani

Abstract:

Background: Voice is multidimensional, conveying emotion, feelings, and communication. Voice disorders have an adverse effect on the physical, emotional and functional domains of an individual. Self-rating by clients of their voice problem helps clinicians to plan intervention strategies. The Voice Handicap Index (VHI) is one such self-rating scale; it contains 30 questions that quantify the functional, physical and emotional impacts of a voice disorder on a patient’s quality of life. Each subsection has 10 questions. Though adapted and validated versions of the VHI are available in other Indian languages, none exists in Telugu, a Dravidian language native to India that is mainly spoken in Andhra Pradesh and neighbouring states in southern India. Objectives: To adapt and validate the English version of the Voice Handicap Index (VHI) into Telugu, and to evaluate its internal consistency and clinical validity in the Telugu-speaking population. Materials: The study was carried out in three stages. The first stage was a forward translation of the English version of the VHI, which was given to ten experts who were proficient in reading and writing Telugu, and to five speech-language pathologists, to translate into Telugu. The second stage was backward translation, in which the Telugu version was given to a different group of ten experts (proficient in reading and writing Telugu) and five speech-language pathologists who were native Telugu speakers with good proficiency in Telugu and English. The third stage was the administration of the translated Telugu version to the target population. In total, 40 clinical subjects and 40 normal controls served as participants; each group had 26 males and 14 females in the age range of 20 to 60 years. The clinical group comprised individuals with laryngectomy with tracheoesophageal puncture (n=18), laryngitis (n=11), vocal nodules (n=7) and vocal fold palsy (n=4). Participants were asked to mark each of their experiences on a 5-point equal-appearing scale (0=never, 1=almost never, 2=sometimes, 3=almost always, 4=always), with a maximum total score of 120. Results: Statistical analysis was performed using SPSS software (version 22.0.0). Mean, standard deviation and percentage (%) were calculated for all participants in both groups. The internal consistency of the Telugu VHI was found to be excellent, with consistency scores for the physical, emotional and functional domains of 0.742, 0.934 and 0.938, respectively. The validity of the scores showed a significant difference between the clinical population and the control group for the physical, emotional and functional domains and for the total scores, with a p value of less than 0.001 (p < 0.001). A negative correlation was found between age and gender and the self-rated physical, emotional and functional domains and total scores in the dysphonic and control groups. Conclusion: The present study indicates that the Telugu VHI is able to discriminate participants having voice pathology from the normal population, which makes it a valid tool to collect information about participants' voices.
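
The internal-consistency figures quoted above are typically Cronbach's alpha values; the abstract does not name the coefficient, so treat that as an assumption. A small function for computing it from a respondents-by-items score matrix is sketched below with placeholder ratings:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of ratings."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1 - item_variances / total_variance)

# Placeholder: 40 respondents rating the 10 items of one VHI subscale (0-4)
ratings = np.random.randint(0, 5, size=(40, 10))
print(f"alpha = {cronbach_alpha(ratings):.3f}")
```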

Keywords: adaptation, Telugu Version, translation, Voice Handicap Index (VHI)

Procedia PDF Downloads 269
1390 Fluorescing Aptamer-Gold Nanoparticle Complex for the Sensitive Detection of Bisphenol A

Authors: Eunsong Lee, Gae Baik Kim, Young Pil Kim

Abstract:

Bisphenol A (BPA) is one of the endocrine-disrupting chemicals (EDCs), which have been suspected to be associated with reproductive dysfunction and physiological abnormalities in humans. Since BPA has been widely used to make plastics and epoxy resins, the leaching of BPA from the lining of plastic products has been of major concern due to environmental and human exposure issues. The simple detection of BPA based on the self-assembly of aptamer-mediated gold nanoparticles (AuNPs) has been reported elsewhere, yet the detection sensitivity remains challenging. Here we demonstrate an improved AuNP-based sensor for BPA by combining fluorescence with AuNP colorimetry in order to overcome the drawbacks of traditional AuNP sensors. While the anti-BPA aptamer (full-length or truncated ssDNA) triggered the self-assembly of unmodified AuNPs (citrate-stabilized AuNPs) in the presence of BPA at high salt concentrations, no fluorescence signal was observed upon the subsequent addition of SYBR Green, due to the small amount of free anti-BPA aptamer. In contrast, the absence of BPA did not cause the self-assembly of AuNPs (no color change, owing to salt-bridged surface stabilization) and gave a high fluorescence signal with SYBR Green, due to the large amount of free anti-BPA aptamer. As a result, quantitative analysis of BPA was achieved using the combination of AuNP absorption with the fluorescence intensity of SYBR Green as a function of BPA concentration, which showed improved detection sensitivity (as low as 1 ppb) compared with AuNP colorimetric analysis alone. This method also enabled the detection of high BPA levels in water-soluble extracts from thermal papers, with high specificity against BPS and BPF. We suggest that this approach will be an alternative to traditional AuNP colorimetric assays in the field of aptamer-based molecular diagnosis.

Keywords: bisphenol A, colorimetric, fluorescence, gold-aptamer nanobiosensor

Procedia PDF Downloads 177
1389 Design and Creation of a BCI Videogame for Training and Measure of Sustained Attention in Children with ADHD

Authors: John E. Muñoz, Jose F. Lopez, David S. Lopez

Abstract:

Attention Deficit Hyperactivity Disorder (ADHD) is a disorder that affects 1 out of 5 Colombian children, making it a real public health problem in the country. Conventional treatments such as medication and neuropsychological therapy have proved insufficient to decrease the high incidence of ADHD in the principal Colombian cities. This work presents the design and development of a videogame that uses a brain-computer interface not only as an input device but also as a tool to monitor neurophysiological signals. The videogame, named “The Harvest Challenge,” places the player in the cultural setting of a Colombian coffee grower, where the player can use his/her avatar in three mini-games created to reinforce four fundamental abilities: i) the ability to wait, ii) the ability to plan, iii) the ability to follow instructions and iv) the ability to achieve objectives. The details of this collaborative design process of the multimedia tool according to exact clinical necessities and the description of the interaction proposals are presented through the mental stages of attention and relaxation. The final videogame is presented as a tool for sustained attention training in children with ADHD using, as its action mechanism, the neuromodulation of beta and theta waves through an electrode located on the central part of the frontal lobe. The processing of the electroencephalographic signal is performed automatically inside the videogame, allowing the generation of a report of the evolution of the theta/beta ratio, a biological marker that has been demonstrated to be a sufficient measure to discriminate children with attention deficit from those without.
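
As a concrete reference for the biomarker mentioned above, the theta/beta ratio can be computed from an EEG segment by integrating the power spectral density over the two bands. The sampling rate, band edges, and synthetic signal below are illustrative only and are not taken from the game's implementation:

```python
import numpy as np
from scipy.signal import welch

fs = 256                                   # Hz, assumed EEG sampling rate
eeg = np.random.randn(fs * 60)             # placeholder one-minute EEG segment

def band_power(freqs, psd, lo, hi):
    """Approximate band power by summing the PSD over the band."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
theta = band_power(freqs, psd, 4.0, 8.0)   # theta band, 4-8 Hz
beta = band_power(freqs, psd, 13.0, 30.0)  # beta band, 13-30 Hz
print(f"theta/beta ratio = {theta / beta:.2f}")
```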

Keywords: BCI, neuromodulation, ADHD, videogame, neurofeedback, theta/beta ratio

Procedia PDF Downloads 361
1388 Relationship and Comorbidity Between Down Syndrome and Autism Spectrum Disorder

Authors: Javiera Espinosa, Patricia López, Noelia Santos, Nadia Loro, Esther Moraleda

Abstract:

In recent years, there has been a notable increase in the number of investigations establishing that Down syndrome and autism spectrum disorder are diagnoses that can coexist. However, many studies also consider that the two diagnoses present neuropsychological, linguistic and adaptive characteristics with totally different profiles. The objective of this research is to question whether there really can be a profile that encompasses both disorders or whether they are incompatible with each other. To this end, a review of the scientific literature of recent years has been carried out. The results indicate that the two lines reflect opposite approaches. On the one hand, there is research that supports the increase in comorbidity between Down syndrome and autism spectrum disorder; on the other hand, many investigations show totally different general development profiles between the two. The discussion focuses on both lines of work and on proposing future lines of research in this regard.

Keywords: disability, language, speech, down syndrome

Procedia PDF Downloads 65
1387 Structural Damage Detection Using Modal Data Employing Teaching Learning Based Optimization

Authors: Subhajit Das, Nirjhar Dhang

Abstract:

Structural damage detection is challenging work in the field of structural health monitoring (SHM). Damage detection methods mainly focus on determining the location and severity of damage. Model updating is a well-known method to locate and quantify damage. In this method, an error function is defined in terms of the difference between the signal measured from the ‘experiment’ and the signal obtained from the undamaged finite element model. This error function is minimised with a proper algorithm, and the finite element model is updated accordingly to match the measured response. Thus, the damage location and severity can be identified from the updated model. In this paper, an error function is defined in terms of modal data, viz. frequencies and the modal assurance criterion (MAC). The MAC is derived from eigenvectors. This error function is minimized by the teaching-learning-based optimization (TLBO) algorithm, and the finite element model is updated accordingly to locate and quantify the damage. Damage is introduced in the model by reducing the stiffness of a structural member. The ‘experimental’ data are simulated by finite element modelling. The error due to experimental measurement is introduced in the synthetic ‘experimental’ data by adding random noise that follows a Gaussian distribution. The efficiency and robustness of this method are demonstrated through three examples: a truss, a beam and a frame problem. The results show that the TLBO algorithm is efficient in detecting the damage location as well as the severity of damage using modal data.
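
For reference, the modal assurance criterion used in the error function compares an experimental and an analytical mode shape; a direct implementation with placeholder mode-shape vectors is shown below (the real-valued form, which is an assumption about the formulation used):

```python
import numpy as np

def modal_assurance_criterion(phi_exp, phi_fem):
    """MAC between an experimental and a finite-element mode shape."""
    num = np.abs(phi_exp @ phi_fem) ** 2
    return num / ((phi_exp @ phi_exp) * (phi_fem @ phi_fem))

# Placeholder mode shapes (identical up to scaling, so MAC should be close to 1)
phi_a = np.array([0.1, 0.4, 0.8, 1.0, 0.7])
phi_b = 2.5 * phi_a + 0.01 * np.random.randn(5)
print(f"MAC = {modal_assurance_criterion(phi_a, phi_b):.3f}")
```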

Keywords: damage detection, finite element model updating, modal assurance criteria, structural health monitoring, teaching learning based optimization

Procedia PDF Downloads 206
1386 Detection of Alzheimer's Protein on Nano Designed Polymer Surfaces in Water and Artificial Saliva

Authors: Sevde Altuntas, Fatih Buyukserin

Abstract:

Alzheimer’s disease is responsible for irreversible neural damage in parts of the brain. One of the disease markers is the amyloid-β 1-42 protein, which accumulates in the brain in the form of plaques. The basic problem for detection is that the low amount of protein cannot be detected properly in body fluids such as blood, saliva or urine. To solve this problem, tests like ELISA or PCR are proposed, which are expensive, require specialized personnel and can involve complex protocols. Therefore, surface-enhanced Raman spectroscopy (SERS) is a good candidate for the detection of the amyloid-β 1-42 protein, because this spectroscopic technique can potentially allow even single-molecule detection from liquid and solid surfaces. Besides, the SERS signal can be improved by using nanopatterned surfaces and is also molecule-specific. In this context, our study proposes to fabricate diagnostic test models that utilize Au-coated nanopatterned polycarbonate (PC) surfaces modified with thioflavin T to detect low concentrations of amyloid-β 1-42 protein in water and artificial saliva by enhancing the protein's SERS signal. The nanopatterned PC surface used to enhance the SERS signal was fabricated using anodic alumina membranes (AAMs) as a template. It is possible to produce AAMs with different column structures and varying thicknesses depending on the voltage and anodization time. After the fabrication process, the pore diameter of the AAMs can be adjusted by treatment with a dilute acid solution. In this study, two different column structures were prepared. After a surface modification to decrease their surface energy, the AAMs were treated with PC solution. Following solvent evaporation, nanopatterned PC films with tunable pillared structures were peeled off from the membrane surface. The PC film was then modified with Au and thioflavin T for the detection of the amyloid-β 1-42 protein. The protein detection studies were first conducted in water via this biosensor platform. The same measurements were conducted in artificial saliva to detect the presence of the amyloid-β 1-42 protein. SEM, SERS and contact angle measurements were carried out for the characterization of the different surfaces and for further demonstration of protein attachment. SERS enhancement factor calculations were also completed from the experimental results. As a result, our research group fabricated diagnostic test models that utilize Au-coated nanopatterned polycarbonate (PC) surfaces modified with thioflavin T to detect low concentrations of Alzheimer’s amyloid-β protein in water and artificial saliva. This work was supported by The Scientific and Technological Research Council of Turkey (TUBITAK), Grant No: 214Z167.
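
The enhancement factor calculations mentioned above commonly follow the standard SERS definition below; the exact expression used by the authors is not stated in the abstract, so this is given only as the conventional form:

```latex
% Conventional SERS enhancement factor:
% I_SERS, I_RS = SERS and normal Raman intensities of the analyte band
% N_SERS, N_RS = numbers of probed molecules in the two measurements
EF = \frac{I_{\mathrm{SERS}} / N_{\mathrm{SERS}}}{I_{\mathrm{RS}} / N_{\mathrm{RS}}}
```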

Keywords: Alzheimer, anodic aluminum oxide, nanotopography, surface-enhanced Raman spectroscopy

Procedia PDF Downloads 283
1385 MRI Quality Control Using Texture Analysis and Spatial Metrics

Authors: Kumar Kanudkuri, A. Sandhya

Abstract:

Typically, in an MRI clinical setting, several protocols are run, each indicated for a specific anatomy and disease condition. However, these protocols, or parameters within them, can change over time due to changes in the recommendations of physician groups, software updates, or the availability of new technologies. Most of the time, the changes are made by the MRI technologist for time, coverage, physiological, or Specific Absorption Rate (SAR) reasons. It is therefore important to give proper guidelines to MRI technologists so that they do not change parameters in ways that negatively impact image quality. Typically, a standard American College of Radiology (ACR) MRI phantom is used for Quality Control (QC) in order to guarantee that the primary objectives of MRI are met. The visual evaluation of quality depends on the operator/reviewer and might vary among operators as well as for the same operator at different times. Overcoming these constraints is essential for a more impartial evaluation of quality, which makes the quantitative estimation of image quality (IQ) metrics very important for MRI quality control. To address this problem, we propose a robust, open-source, automated MRI image quality control tool. The tool we designed and developed automatically measures MRI image quality metrics such as Signal-to-Noise Ratio (SNR), Signal-to-Noise Ratio Uniformity (SNRU), Visual Information Fidelity (VIF), Feature Similarity (FSIM), Gray-Level Co-occurrence Matrix (GLCM) features, slice thickness accuracy, slice position accuracy, and high-contrast spatial resolution, and it provided good accuracy in the assessment. A standardized quality report is generated that incorporates the metrics that impact diagnostic quality.
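As a minimal sketch of two of the phantom metrics named above, the snippet below computes a conventional ROI-based SNR and an ACR-style percent integral uniformity; the ROI positions, the synthetic test image, and the absence of any noise-distribution correction factor are assumptions for illustration, not the authors' tool or its exact metric definitions.

```python
# Sketch (not the authors' tool): ROI-based SNR and percent integral
# uniformity on a phantom slice.  ROIs are (row_slice, col_slice) tuples.
import numpy as np

def snr(image, signal_roi, noise_roi):
    """Mean of a signal ROI divided by the standard deviation of a background ROI."""
    return image[signal_roi].mean() / image[noise_roi].std()

def percent_integral_uniformity(image, roi):
    """100 * (1 - (max - min) / (max + min)) over a large central ROI."""
    region = image[roi]
    s_max, s_min = region.max(), region.min()
    return 100.0 * (1.0 - (s_max - s_min) / (s_max + s_min))

# Example on a synthetic 256x256 'phantom' slice (illustrative values only):
img = np.full((256, 256), 500.0) + np.random.normal(0.0, 5.0, (256, 256))
sig = (np.s_[100:150], np.s_[100:150])
bkg = (np.s_[5:30], np.s_[5:30])
print(snr(img, sig, bkg), percent_integral_uniformity(img, sig))
```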

Keywords: ACR MRI phantom, MRI image quality metrics, SNRU, VIF, FSIM, GLCM, slice thickness accuracy, slice position accuracy

Procedia PDF Downloads 153
1384 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI

Authors: James Rigor Camacho, Wansu Lim

Abstract:

Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms; they can collect, process, and store data on their own and can also run complicated algorithms such as localization, detection, and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated with the open-source brain-computer interface platform OpenBCI, is used to collect the EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. Machine learning-based classifiers were used to perform graphical spectrogram categorization of the EEG signals and to predict emotional states from the input data properties. The EEG signals were analyzed with the K-Nearest Neighbor (KNN) technique, a supervised learning method, until the emotional state was identified. In the EEG signal processing, each EEG signal is received in real time and translated from the time domain to the frequency domain with the Fast Fourier Transform (FFT), which is used to observe the frequency bands in each signal. To represent the variance of each EEG frequency band appropriately, the power density, standard deviation, and mean are calculated and employed. The next stage is to use the chosen features to predict emotion in the EEG data with the K-Nearest Neighbors (KNN) technique; arousal and valence datasets are used to train the parameters of the KNN classifier. Because classification and recognition of the specific classes, as well as the emotion prediction, are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device such as the NVIDIA Jetson Nano. EEG-based emotion recognition on the edge can be employed in applications that can rapidly expand both research and industrial adoption.
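The pipeline described above (FFT-based band features per epoch followed by a KNN classifier) can be sketched as below; the sampling rate, band limits, epoch length, k = 5, and the random stand-in training data are illustrative assumptions, not the authors' configuration.

```python
# Sketch (not the authors' pipeline): FFT band-power features per EEG epoch
# followed by a K-Nearest Neighbors classifier.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

FS = 250  # assumed sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_features(epoch):
    """epoch: 1-D array of EEG samples; returns mean power per band plus the
    epoch's mean and standard deviation, as the abstract describes."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / FS)
    power = np.abs(np.fft.rfft(epoch)) ** 2
    feats = [power[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]
    return np.array(feats + [epoch.mean(), epoch.std()])

# In practice X_train / y_train would come from arousal-valence labelled epochs;
# random data is used here only so the sketch runs end to end.
X_train = np.array([band_features(np.random.randn(FS * 2)) for _ in range(20)])
y_train = np.random.randint(0, 2, 20)  # e.g. 0 = low arousal, 1 = high arousal
clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(clf.predict([band_features(np.random.randn(FS * 2))]))
```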

Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors

Procedia PDF Downloads 97
1383 Using Augmented Reality to Enhance Doctor Patient Communication

Authors: Rutusha Bhutada, Gaurav Chavan, Sarvesh Kasat, Varsha Mujumdar

Abstract:

This software system is an augmented reality application designed to maximize the doctor’s productivity by providing tools that assist in automating patient recognition and updating the patient’s records using face and voice recognition features, tasks that would otherwise have to be performed manually. By maximizing the doctor’s work efficiency and output, the application meets the doctor’s needs while remaining easy to understand and use. More specifically, the application is designed to allow a doctor to manage the time spent with a patient without losing eye contact, and to communicate with a group of other doctors for consultation and for in-place treatment through video streaming, as a video study. The system also contains a relational database with lists of doctors, patients, and display techniques.
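The patient-recognition step could be realized along the lines of the sketch below, which uses the open-source face_recognition package and a stand-in dictionary in place of the relational database; the package choice, all names, and the dict-based "database" are assumptions for illustration and are not taken from the paper.

```python
# Sketch (not the authors' system): identify a patient from a camera frame and
# fetch their record from a stand-in database.
import face_recognition

known_encodings = {}   # patient_id -> 128-d face encoding (would come from the DB)
patient_records = {}   # patient_id -> patient record

def identify_patient(frame_path):
    """Return the record of the first patient whose stored encoding matches."""
    image = face_recognition.load_image_file(frame_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return None  # no face found in the frame
    for patient_id, known in known_encodings.items():
        if face_recognition.compare_faces([known], encodings[0])[0]:
            return patient_records.get(patient_id)
    return None
```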

Keywords: augmented reality, hand-held devices, head-mounted devices, marker-based systems, speech recognition, face detection

Procedia PDF Downloads 427