Search results for: auditory
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 201

141 Modeling Driving Distraction Considering Psychological-Physical Constraints

Authors: Yixin Zhu, Lishengsa Yue, Jian Sun, Lanyue Tang

Abstract:

Modeling driving distraction in microscopic traffic simulation is crucial for enhancing simulation accuracy. Current driving distraction models are mainly derived from physical motion constraints under distracted states, in which distraction-related error terms are added to existing microscopic driver models. However, model accuracy is not very satisfying, due to a lack of modeling of the cognitive mechanism underlying distraction. This study models driving distraction based on the Queueing Network-Model Human Processor (QN-MHP). It utilizes the queueing structure of the model to perform task invocation and switching for distracted operation and control of the vehicle under driver distraction. Based on the QN-MHP assumption about the cognitive sub-network, server F is a structural bottleneck: later information must wait for earlier information to leave server F before it can be processed there. Therefore, the waiting time for task switching needs to be calculated. Since the QN-MHP model has different information processing paths for auditory and visual information, this study divides driving distraction into two types: auditory distraction and visual distraction. For visual distraction, both the visual distraction task and the driving task need to go through the visual perception sub-network, and their stimuli are asynchronous (stimulus onset asynchrony, SOA), which must be taken into account when calculating the waiting time for task switching. For auditory distraction, the auditory distraction task and the driving task do not need to compete for the server resources of the perceptual sub-network, and their stimuli can be treated as synchronized, so the time difference in receiving the stimuli can be ignored. Following the Theory of Planned Behavior (TPB) for drivers, this study uses risk entropy as the decision criterion for driver task switching. A logistic regression model with risk entropy as the independent variable determines whether the driver performs a distraction task, explaining the relationship between perceived risk and distraction. Furthermore, to model a driver’s perception characteristics, a neurophysiological model of visual distraction tasks is incorporated into the QN-MHP, which then executes the classical Intelligent Driver Model (IDM). The proposed driving distraction model integrates the psychological cognitive process of a driver with the physical motion characteristics, resulting in both high accuracy and interpretability. This paper uses 773 segments of distracted car-following from the Shanghai Naturalistic Driving Study (SH-NDS) to classify the patterns of distracted behavior on different road facilities and obtains three types of distraction patterns: numbness, delay, and aggressiveness. The model was calibrated and verified by simulation. The results indicate that the model can effectively simulate distracted car-following behavior of different patterns on various roadway facilities, and that its performance is better than the traditional IDM with distraction-related error terms. The proposed model overcomes the limitations of physical-constraints-based models in replicating dangerous driving behaviors and the internal characteristics of individual drivers. Moreover, the model is demonstrated to effectively generate more dangerous distracted driving scenarios, which can be used to construct high-value automated driving test scenarios.
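
The abstract builds its physical layer on the classical Intelligent Driver Model (IDM). The following Python sketch is purely illustrative: it implements the standard IDM acceleration equation and adds a hypothetical reaction-delay loop to mimic a distraction-induced waiting time; all parameter values and the delay mechanism are assumptions, not the authors' calibrated model.

import numpy as np

def idm_acceleration(v, dv, s, v0=30.0, T=1.5, a_max=1.0, b=2.0, s0=2.0, delta=4):
    # Classical IDM: v = follower speed (m/s), dv = approach rate v - v_lead (m/s), s = gap (m)
    s_star = s0 + max(0.0, v * T + v * dv / (2 * np.sqrt(a_max * b)))  # desired dynamic gap
    return a_max * (1 - (v / v0) ** delta - (s_star / s) ** 2)

# Hypothetical distraction episode: the follower reacts to a stale (delayed) observation
# of the gap and approach rate, standing in for the distraction-induced waiting time.
dt, delay_steps = 0.1, 8                 # 0.8 s illustrative delay
v, s, v_lead = 25.0, 30.0, 22.0          # follower speed, gap, leader speed
history = [(s, v - v_lead)] * delay_steps
for _ in range(100):
    s_obs, dv_obs = history[0]           # stale observation under distraction
    a = idm_acceleration(v, dv_obs, s_obs)
    v = max(0.0, v + a * dt)
    s = s + (v_lead - v) * dt
    history = history[1:] + [(s, v - v_lead)]
print(f"final gap {s:.1f} m, final speed {v:.1f} m/s")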

Keywords: computational cognitive model, driving distraction, microscopic traffic simulation, psychological-physical constraints

Procedia PDF Downloads 91
140 The Developmental Model of Teaching and Learning Clinical Practicum at Postpartum Ward for Nursing Students by Using VARK Learning Styles

Authors: Wanwadee Neamsakul

Abstract:

The VARK learning style is an effective approach to learning that engages all of the students' modalities: visual (V), auditory (A), read/write (R), and kinesthetic (K). This learning style benefits the students in terms of professional competencies, critical thinking and lifelong learning, which are desirable characteristics of nursing students. This study aimed to develop a model of teaching and learning clinical practicum at a postpartum ward for nursing students by using VARK learning styles, and to evaluate the nursing students’ opinions about the developmental model. The methodology used for this study was research and development (R&D). The model was developed through focus group discussion with five obstetric nursing instructors who have experience teaching the Maternal Newborn and Midwifery I subject. The activities related to practice on the postpartum (PP) ward, covering all VARK skills, were assigned to a matrix table. The researcher asked the experts to review the model and adjusted it following their recommendations. Subsequently, it was tried out with nursing students practicing on the PP ward. Thirty third-year nursing students from one of the northern nursing colleges (academic year 2015) were selected by purposive sampling. Opinions about satisfaction with the model were collected using a questionnaire tested for validity and reliability. Data were analyzed using descriptive statistics. The developed model comprised 27 activities. Seven activities were developed to enhance the nursing students' visual skills (25.93%), five their auditory skills (18.52%), six their read/write skills (22.22%), and nine their kinesthetic skills (33.33%). Overall satisfaction with the model was reported at the highest level (mean=4.63, S.D.=0.45). By aspect, visual skill (mean=4.80, S.D.=0.45) received the highest average satisfaction, followed by auditory skill (mean=4.62, S.D.=0.43), read/write skill (mean=4.57, S.D.=0.46), and kinesthetic skill (mean=4.53, S.D.=0.45), all reported at the highest level of satisfaction. The nursing students reported that the model helped them employ all of their skills while practicing and caring for the postpartum women and newborn babies. They developed self-confidence while providing care and felt proud of themselves thanks to the model. It can be said that using the VARK learning styles to develop the model could enhance both the nursing students’ competencies and their positive attitude towards the nursing profession. Consequently, they could provide quality care for postpartum women and newborn babies effectively in the long run.

Keywords: model, nursing students, postpartum ward, teaching and learning clinical practicum

Procedia PDF Downloads 150
139 The Acquisition of /r/ By Setswana-Learning Children

Authors: Keneilwe Matlhaku

Abstract:

Crosslinguistic studies (theoretical and clinical) have shown delays and significant misarticulation in the acquisition of rhotics. This article provides a detailed analysis of the early development of the rhotic phoneme, an apical trill /r/, by monolingual Setswana (Tswana S30) children aged between 1 and 4 years. The data display the following trends: (1) late acquisition of /r/; (2) a wide range of substitution patterns involving this phoneme (i.e., gliding, coronal stopping, affrication, deletion, lateralization, as well as substitution with dental and uvular fricatives). The primary focus of the article is on the potential origins of these variations of /r/, even within the same language. Our data comprise naturalistic longitudinal audio recordings of 6 children (2 males and 4 females) whose speech was recorded in their homes over a period of 4 months with no or only minimal disruptions to their daily environments. Phon software (Rose et al. 2013; Rose & MacWhinney 2014) was used to carry out the orthographic and phonetic transcriptions of the children’s data. Phon also enabled the generation of the children’s phonological inventories for comparison with adult target IPA forms. We explain the children’s patterns through current models of phonological emergence (MacWhinney 2015; McAllister Byun, Inkelas & Rose 2016; Rose et al. 2022), which highlight the perceptual and articulatory factors influencing the development of sounds and sound classes. We highlight how the substitution patterns observed in the data can be captured through a consideration of the auditory properties of the target speech sounds, combined with an understanding of the types of articulatory gestures involved in the production of these sounds. These considerations, in turn, highlight some of the most central aspects of the challenges faced by the child toward learning these auditory-articulatory mappings. We provide a cross-linguistic survey of the acquisition of rhotic consonants in a sample of related and unrelated languages, in which we show that the variability and volatility in the substitution patterns of /r/ are also brought about by the properties of the children’s ambient languages. Beyond theoretical issues, this article sets an initial foundation for developing speech-language pathology materials and services for Setswana-learning children, an emerging area of public service in Botswana.

Keywords: rhotic, apical trill, Phon, phonological emergence, auditory, articulatory, mapping

Procedia PDF Downloads 38
138 Sound Selection for Gesture Sonification and Manipulation of Virtual Objects

Authors: Benjamin Bressolette, Sébastien Denjean, Vincent Roussarie, Mitsuko Aramaki, Sølvi Ystad, Richard Kronland-Martinet

Abstract:

New sensors and technologies – such as microphones, touchscreens or infrared sensors – are currently making their appearance in the automotive sector, introducing new kinds of Human-Machine Interfaces (HMIs). The interactions with such tools might be cognitively expensive, and thus unsuitable for driving tasks. It could, for instance, be dangerous to use touchscreens with visual feedback while driving, as this distracts the driver’s visual attention away from the road. Furthermore, new technologies in car cockpits modify the interactions of the users with the central system. In particular, touchscreens are preferred to arrays of buttons for space and design purposes. However, the buttons’ tactile feedback is no longer available to the driver, which makes such interfaces more difficult to manipulate while driving. Gestures combined with auditory feedback might therefore constitute an interesting alternative for interacting with the HMI. Indeed, gestures can be performed without vision, which means that the driver’s visual attention can be totally dedicated to the driving task. The auditory feedback can inform the driver both about the task performed on the interface and about the performed gesture, which might constitute a possible solution to the lack of tactile information. As audition is a relatively unused sense in automotive contexts, gesture sonification can contribute to reducing the cognitive load thanks to the proposed multisensory exploitation. Our approach consists of using a virtual object (VO) to sonify the consequences of the gesture rather than the gesture itself. This approach is motivated by an ecological point of view: Gestures do not make sound, but their consequences do. In this experiment, the aim was to identify efficient sound strategies to transmit dynamic information about VOs to users through sound. The swipe gesture was chosen for this purpose, as it is commonly used in current and new interfaces. We chose two VO parameters to sonify, the hand-VO distance and the VO velocity. Two kinds of sound parameters can be chosen to sonify the VO behavior: Spectral or temporal parameters. Pitch and brightness were tested as spectral parameters, and amplitude modulation as a temporal parameter. Performances showed a positive effect of sound compared to a no-sound situation, revealing the usefulness of sounds to accomplish the task.
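
As an illustration of the mappings described above, the short Python sketch below renders one audio frame in which VO velocity drives a spectral parameter (pitch) and hand-VO distance drives a temporal parameter (amplitude-modulation rate). This is not the authors' implementation; the parameter ranges and normalization are assumptions chosen only to make the mapping concrete.

import numpy as np

SR = 44100  # sample rate (Hz)

def sonify_vo(velocity, distance, duration=0.5):
    # velocity: VO speed, assumed normalized to [0, 1] -> pitch 200-800 Hz (spectral parameter)
    # distance: hand-VO distance, assumed normalized to [0, 1] -> AM rate 2-20 Hz (temporal parameter)
    t = np.linspace(0, duration, int(SR * duration), endpoint=False)
    pitch = 200.0 + 600.0 * np.clip(velocity, 0, 1)
    am_rate = 2.0 + 18.0 * np.clip(distance, 0, 1)
    carrier = np.sin(2 * np.pi * pitch * t)
    modulator = 0.5 * (1 + np.sin(2 * np.pi * am_rate * t))  # amplitude modulation
    return 0.8 * carrier * modulator

frame = sonify_vo(velocity=0.3, distance=0.7)  # one 0.5 s frame, ready to stream to an audio output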

Keywords: auditory feedback, gesture sonification, sound perception, virtual object

Procedia PDF Downloads 302
137 Relevance of Brain Stem Evoked Potential in Diagnosis of Central Demyelination in Guillain-Barré Syndrome

Authors: Geetanjali Sharma

Abstract:

Guillain-Barré syndrome (GBS) is an autoimmune-mediated demyelinating polyradiculoneuropathy. Clinical features include progressive symmetrical ascending muscle weakness of more than two limbs and areflexia, with or without sensory, autonomic and brainstem abnormalities. The purpose of this study was to determine subclinical neurological changes of the CNS in GBS and to establish the presence of central demyelination in GBS. The study was prospective and conducted in the Department of Physiology, Pt. B. D. Sharma Post-graduate Institute of Medical Sciences, University of Health Sciences, Rohtak, Haryana, India, to detect early central demyelination in clinically diagnosed patients of GBS. These patients were referred from the Department of Medicine of our institute to our department for electro-diagnostic evaluation. The study group comprised 40 subjects (20 clinically diagnosed GBS patients and 20 healthy individuals as controls) aged between 6 and 65 years. Brainstem auditory evoked potentials (BAEPs) were recorded in both groups using an RMS EMG EP Mark II machine. BAEP parameters included the latencies of waves I to V and the interpeak latencies I-III, III-V and I-V. A statistically significant increase in absolute peak and interpeak latencies was noted in the GBS group as compared with the control group. The evoked potential results reflect impairment of the auditory pathways, probably due to focal demyelination of the Schwann-cell-derived myelin sheaths that cover the extramedullary portion of the auditory nerves. Early detection of such subclinical abnormalities is important, as timely intervention reduces morbidity.

Keywords: brainstem, demyelination, evoked potential, Guillain-Barré syndrome

Procedia PDF Downloads 302
136 The Role of Bone Marrow Stem Cells Transplantation in the Repair of Damaged Inner Ear in Albino Rats

Authors: Ahmed Gaber Abdel Raheem, Nashwa Ahmed Mohamed

Abstract:

Introduction: Sensorineural hearing loss (SNHL) is largely caused by degeneration of the cochlea. Therapeutic options for SNHL are limited to hearing aids and cochlear implants. The cell transplantation approach to the regeneration of hair cells has gained considerable attention because stem cells are believed to accumulate in the damaged sites and have the potential to repair damaged tissues. Aim of the work: To assess the use of bone marrow transplantation in the repair of damaged inner ear hair cells in rats after the damage had been inflicted by Amikacin injection. Material and Methods: Thirty albino rats were used in this study. They were divided into three groups of ten rats each. Group I was used as a control. Group II was given intratympanic Amikacin injections until complete loss of hearing function, as assessed by distortion product otoacoustic emissions (DPOAEs) and/or auditory brainstem response (ABR). Group III was given an intraperitoneal injection of bone marrow stem cells after complete loss of hearing caused by Amikacin. Clinical assessment was done using DPOAEs and/or ABR before and after bone marrow injection. Histological assessment of the inner ear was done by light and electron microscopy, and stem cells in the inner ear were detected by immunohistochemistry. Results: Histological examination of the specimens showed promising improvement in the structure of the cochlea that may be responsible for the improvement of hearing function in rats detected by DPOAEs and/or ABR. Conclusion: Bone marrow stem cells transplantation might be useful for the treatment of SNHL.

Keywords: amikacin, hair cells, sensorineural hearing loss, stem cells

Procedia PDF Downloads 449
135 Different Motor Inhibition Processes in Action Selection Stage: A Study with Spatial Stroop Paradigm

Authors: German Galvez-Garcia, Javier Albayay, Javiera Peña, Marta Lavin, George A. Michael

Abstract:

The aim of this research was to investigate whether the selection of actions requires different inhibition processes during the response selection stage. In Experiment 1, we compared the magnitude of the Spatial Stroop effect, which occurs in the response selection stage, for two motor actions (lifting vs. reaching) when participants performed both actions in the same block or in different blocks (mixed block vs. pure blocks). Within pure blocks, we obtained faster latencies when lifting actions were performed, but no differences in the magnitude of the Spatial Stroop effect were observed. Within the mixed block, we obtained faster latencies as well as a larger-magnitude Spatial Stroop effect when reaching actions were performed. We concluded that when no action selection is required (the pure blocks condition), inhibition works as a unitary system, whereas in the mixed block condition, where action selection is required, different inhibitory processes take place within a common processing stage. In Experiment 2, we investigated this common processing stage in depth by limiting participants’ available resources, requiring them to engage in a concurrent auditory task within a mixed block condition. The Spatial Stroop effect interacted with Movement as it did in Experiment 1, but it did not significantly interact with available resources (Auditory task x Spatial Stroop effect x Movement interaction). Thus, we concluded that available resources are distributed equally to both inhibition processes; this reinforces the likelihood of there being a common processing stage in which the different inhibitory processes take place.

Keywords: inhibition process, motor processes, selective inhibition, dual task

Procedia PDF Downloads 392
134 Promoting Academic and Social-Emotional Growth of Students with Learning Differences Through Differentiated Instruction

Authors: Jolanta Jonak

Abstract:

Traditional classrooms are challenging for many students, but especially for students who learn differently due to cognitive makeup, learning preferences, or disability. These students often require different teaching approaches and learning opportunities in order to benefit from instruction. Teachers frequently default to one teaching approach, the one that matches their own learning style. For instance, teachers who are auditory learners likely default to providing auditory learning opportunities. However, if a student is a visual learner, he or she may not fully benefit from that teaching style. Based on research and on feedback from students and their parents, large numbers of students are not provided the type of education and the types of supports they need in order to be successful in an academic environment. This eventually leads to learning at an inadequate rate and, ultimately, to skill deficiencies and deficits. Providing varied learning approaches promotes high academic and social-emotional growth of all students and prevents inaccurate special education referrals. Varied learning opportunities can be delivered to all students by providing Differentiated Instruction (DI). This type of instruction allows each student to learn in the most optimal way regardless of learning preferences and cognitive learning profiles. Using Differentiated Instruction leads to a high level of student engagement and learning. In addition, experiencing success in the classroom contributes to increased social-emotional wellbeing. By being cognizant of how teaching approaches impact students' learning, school staff can avoid inaccurate perceptions about students’ learning abilities, unnecessary referrals for special education evaluations, and inaccurate decisions about the presence of a disability. This presentation will illustrate learning differences due to various factors, how to recognize them, and how to address them through Differentiated Instruction.

Keywords: special education, disability, differences, differentiated instruction, social emotional wellbeing

Procedia PDF Downloads 49
133 Translation of the Verbal Nouns (Masadars) Originating from Three-Letter Verbs in the Holy Quran: Verbal Noun with More than One Pattern (Wazn) As a Model

Authors: Montasser Mohamed Abdelwahab Mahmoud, Abdelwahab Saber Esawi

Abstract:

The language of the Qur’an has a wide range of understanding, reflection, and meanings. Therefore, translation of the Qur’an is inevitably nothing but a translation of the interpretation of the meanings of the Qur’an. It requires special competencies and skills for translators so that they can get close to the intended meaning of the verse of the Qur’an and convey it with precision. In the Arabic language, the verbal noun “AlMasdar” is a very important derivative that properly expresses the verbal idea in the form of a noun. It sounds the same as the base form of the verb, with minor changes in the vowel pattern. It is one of the important topics in morphology. The morphologists divided verbal nouns into auditory and analogical, and they stated that the verbal nouns (Masadars) originating from three-letter verbs are auditory, although they set controls for some of them in order to preserve them. As for the lexicographers, they mentioned the verbal nouns while discussing the lexical materials, and in some cases their explanation of them exceeded that made by the morphologists, especially in their discussion of structures that the morphologists did not refer to in their books. The verb kafara (disbelief), for example, has three patterns, namely al-kufr, al-kufrān, and al-kufūr, and it was mentioned in the Holy Qur’an with different connotations. The verb ṣāma (fasted), with its two patterns (al-ṣaūm and al-ṣīām), was also mentioned in the Holy Qur’an with different semantic meanings. The problem discussed in this research paper lies in the “linguistic loss” committed by translators when dealing with Islamic religious texts, especially the Qur’an. The study tried to identify the strategy adopted by translators of the Holy Qur’an in translating words classified as verbal nouns by analyzing five translations of the Qur’an into English: those of Yusuf Ali, Pickthall, Mohsin Khan, Muhammad Sarwar, and Shakir. This study was limited to the verbal nouns in the Qur’an that originate from three-letter verbs and have different semantic meanings.

Keywords: pattern, three-letter verbs, translation of the Quran, verbal nouns

Procedia PDF Downloads 161
132 An Event-Related Potential Investigation of Speech-in-Noise Recognition in Native and Nonnative Speakers of English

Authors: Zahra Fotovatnia, Jeffery A. Jones, Alexandra Gottardo

Abstract:

Speech communication often occurs in environments where noise conceals part of a message. Listeners must compensate for the lack of auditory information by picking up distinct acoustic cues and using semantic and sentential context to recreate the speaker’s intended message. This situation seems to be more challenging in a nonnative than in a native language. On the other hand, early bilinguals are expected to show an advantage over late bilingual and monolingual speakers of a language due to their better executive functioning components. In this study, English monolingual speakers were compared with early and late nonnative speakers of English to understand speech-in-noise (SIN) processing and the underlying neurobiological features of this phenomenon. Auditory mismatch negativities (MMNs) were recorded using a double-oddball paradigm in response to a minimal pair that differed in its middle vowel (beat/bit) at Wilfrid Laurier University in Ontario, Canada. The results did not show any significant structural or electroneural differences across groups. However, vocabulary knowledge correlated positively with performance on tests that measured SIN processing in participants who learned English after age 6. Moreover, their performance on the test correlated negatively with the integral area amplitudes in the left superior temporal gyrus (STG). In addition, the STG was engaged before the inferior frontal gyrus (IFG) in noise-free and low-noise test conditions in all groups. We infer that the pre-attentive processing of words engages the temporal lobes earlier than the fronto-central areas and that vocabulary knowledge helps the nonnative perception of degraded speech.

Keywords: degraded speech perception, event-related brain potentials, mismatch negativities, brain regions

Procedia PDF Downloads 107
131 The Phonemic Inventory of Tenyidie Affricates: An Acoustic Study

Authors: NeisaKuonuo Tungoe

Abstract:

Tenyidie, also known as Angami, is spoken by the Angami tribe of Nagaland, North-East India, bordering Myanmar (Burma). It belongs to the Tibeto-Burman language group, falling under the Kuki-Chin-Naga sub-family. Tenyidie studies have so far seen only sporadic attempts at explaining the phonemic inventory of the language. Different scholars have variously emphasized the grammar or the history of Tenyidie. Many of these claims have been stimulating, but they were often based on a small amount of merely suggestive data or on auditory perception only. The principal objective of this paper is to analyse the affricate segments of Tenyidie in an acoustic study. The inventory of Tenyidie comprises the following categories: plosives, nasals, affricates, laterals, rhotics, fricatives, semivowels and vowels; in all, there are sixty phonemes in the inventory. As mentioned above, the only prominent studies of Tenyidie, and of its affricates in particular, rely on auditory perception alone. This study therefore aims to lay out the affricate segments based only on acoustic evidence. There are seven affricates found in Tenyidie: 1) voiceless labiodental affricate /pf/, 2) voiceless aspirated labiodental affricate /pfh/, 3) voiceless alveolar affricate /ts/, 4) voiceless aspirated alveolar affricate /tsh/, 5) voiced alveolar affricate /dz/, 6) voiceless post-alveolar affricate /tʃ/ and 7) voiced post-alveolar affricate /dʒ/. Since the study is based on acoustic features of affricates, five informants were asked to record their voices producing Tenyidie phonemes and English phonemes. Throughout the analysis of the recorded data, PRAAT, a scientific software program that has become indispensable for the analysis of speech in phonetics, was used as the main software. These data were then used for a comparative study between Tenyidie and English affricates. Comparisons have also been drawn between this study and the work of another author who has stated that there are only six affricates in Tenyidie. The study is quite detailed regarding the specifics of the data: detailed accounts of the durations and acoustic cues have been noted. The data will be presented in the form of spectrograms. Since no other acoustic data on Tenyidie exist, this study will be the first in a long line of acoustic research on the language.
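
To make the kind of acoustic measurement described above concrete, the following Python sketch computes two common affricate cues, segment duration and spectral centre of gravity, from a hand-segmented token. It is only an illustration: the file name and segment boundaries are hypothetical and would normally come from a Praat or Phon TextGrid marking the affricate's closure onset and frication offset.

import numpy as np
from scipy.io import wavfile

def affricate_measures(path, start_s, end_s):
    # path, start_s, end_s are hypothetical; boundaries come from manual segmentation
    sr, x = wavfile.read(path)
    x = x.astype(float)
    if x.ndim > 1:
        x = x.mean(axis=1)                      # mix down stereo recordings
    seg = x[int(start_s * sr):int(end_s * sr)]
    duration_ms = 1000.0 * len(seg) / sr
    spectrum = np.abs(np.fft.rfft(seg * np.hanning(len(seg))))
    freqs = np.fft.rfftfreq(len(seg), d=1.0 / sr)
    cog_hz = np.sum(freqs * spectrum) / np.sum(spectrum)   # spectral centre of gravity
    return duration_ms, cog_hz

# Hypothetical usage, e.g. comparing a /ts/ and a /tsh/ token from one informant:
# dur, cog = affricate_measures("informant1_tsa.wav", 0.412, 0.538)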

Keywords: Tenyidie, affricates, Praat, phonemic inventory

Procedia PDF Downloads 416
130 Deficient Multisensory Integration with Concomitant Resting-State Connectivity in Adult Attention Deficit/Hyperactivity Disorder (ADHD)

Authors: Marcel Schulze, Behrem Aslan, Silke Lux, Alexandra Philipsen

Abstract:

Objective: Patients with Attention Deficit/Hyperactivity Disorder (ADHD) often report that they are being flooded by sensory impressions. Studies investigating sensory processing show hypersensitivity to sensory inputs across the senses in children and adults with ADHD. The auditory modality in particular is affected by deficient acoustical inhibition and modulation of signals. While studying unimodal signal processing is relevant and well suited to a controlled laboratory environment, everyday life situations are multimodal. A complex interplay of the senses is necessary to form a unified percept. In order to achieve this, the unimodal sensory modalities are bound together in a process called multisensory integration (MI). In the current study we investigate MI in an adult ADHD sample using the McGurk effect – a well-known illusion in which incongruent speech-like phonemes lead, in the case of successful integration, to a newly perceived phoneme via late top-down attentional allocation. In ADHD, neuronal dysregulation at rest, e.g., aberrant within- or between-network functional connectivity, may also account for difficulties in integrating across the senses. Therefore, the current study includes resting-state functional connectivity to investigate a possible relation between deficient network connectivity and the ability to integrate stimuli. Method: Twenty-five ADHD patients (6 females, age: 30.08 (SD: 9.3) years) and twenty-four healthy controls (9 females; age: 26.88 (SD: 6.3) years) were recruited. MI was examined using the McGurk effect, in which – in the case of successful MI – incongruent speech-like phonemes between the visual and auditory modalities lead to the perception of a new phoneme. The Mann-Whitney U test was applied to assess statistical differences between groups. Echo-planar imaging resting-state functional MRI was acquired on a 3.0 Tesla Siemens Magnetom MR scanner. A seed-to-voxel analysis was realized using the CONN toolbox. Results: Susceptibility to the McGurk effect was significantly lower for ADHD patients (ADHD Mdn: 5.83%, Controls Mdn: 44.2%, U=160.5, p=0.022, r=-0.34). When ADHD patients integrated phonemes, reaction times were significantly longer (ADHD Mdn: 1260 ms, Controls Mdn: 582 ms, U=41.0, p<.001, r=-0.56). In the functional connectivity analysis, the medio-temporal gyrus (seed) was negatively associated with the primary auditory cortex, inferior frontal gyrus, precentral gyrus, and fusiform gyrus. Conclusion: MI seems to be deficient in ADHD patients for stimuli that need top-down attentional allocation. This finding is supported by stronger functional connectivity from unimodal sensory areas to polymodal MI convergence zones for complex stimuli in ADHD patients.
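
For readers unfamiliar with the group comparison reported above, the following Python sketch shows how a Mann-Whitney U test and an approximate effect size r (Z divided by the square root of the total N) can be computed; the score vectors are made-up illustrative numbers, not the study's data.

import numpy as np
from scipy.stats import mannwhitneyu, norm

# Illustrative only: hypothetical McGurk susceptibility scores (% fused percepts)
adhd = np.array([0, 4, 6, 5, 10, 2, 8, 7, 3, 9, 6, 5])
controls = np.array([30, 45, 50, 38, 60, 44, 41, 55, 35, 48, 52, 40])
u, p = mannwhitneyu(adhd, controls, alternative="two-sided")
n = len(adhd) + len(controls)
z = norm.isf(p / 2)                 # recover |Z| from the two-sided p-value
r = z / np.sqrt(n)                  # approximate effect size
print(f"U={u:.1f}, p={p:.4f}, |r|={r:.2f}")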

Keywords: attention-deficit hyperactivity disorder, audiovisual integration, McGurk-effect, resting-state functional connectivity

Procedia PDF Downloads 127
129 Electrophysiological Correlates of Statistical Learning in Children with and without Developmental Language Disorder

Authors: Ana Paula Soares, Alexandrina Lages, Helena Oliveira, Francisco-Javier Gutiérrez-Domínguez, Marisa Lousada

Abstract:

From an early age, exposure to a spoken language allows us to implicitly capture the structure underlying the succession of the speech sounds in that language and to segment it into meaningful units (words). Statistical learning (SL), i.e., the ability to pick up patterns in the sensory environment even without the intention or consciousness of doing so, is thus assumed to play a central role in the acquisition of the rule-governed aspects of language, and possibly to lie behind the language difficulties exhibited by children with developmental language disorder (DLD). The research conducted so far has, however, led to inconsistent results, which might stem from the behavioral tasks used to test SL. In a classic SL experiment, participants are first exposed to a continuous stream (e.g., of syllables) in which, unbeknownst to the participants, stimuli are grouped into triplets that always appear together in the stream (e.g., ‘tokibu’, ‘tipolu’), with no pauses between them (e.g., ‘tokibutipolugopilatokibu’) and without any information regarding the task or the stimuli. Following exposure, SL is assessed by asking participants to discriminate triplets previously presented (‘tokibu’) from new sequences never presented together during exposure (‘kipopi’), i.e., to perform a two-alternative forced-choice (2-AFC) task. Despite the widespread use of the 2-AFC task to test SL, it has come under increasing criticism, as it is an offline post-learning task that only assesses the result of the learning that occurred during the previous exposure phase and that might be affected by other factors beyond the computation of the regularities embedded in the input, typically the likelihood of two syllables occurring together, a statistic known as transitional probability (TP). One solution to overcome these limitations is to assess SL as exposure to the stream unfolds, using online techniques such as event-related potentials (ERPs), which are highly sensitive to the time course of learning in the brain. Here we collected ERPs to examine the neurofunctional correlates of SL in preschool children with DLD and in chronological-age-matched controls with typical language development (TLD), who were exposed to an auditory stream containing eight three-syllable nonsense words, four presenting high TPs and the other four low TPs, to further analyze whether the ability of DLD and TLD children to extract word-like units from the stream was modulated by the words’ predictability. Moreover, to ascertain whether previous knowledge of the to-be-learned regularities affected the neural responses to high- and low-TP words, children performed the auditory SL task first under implicit and subsequently under explicit conditions. Although behavioral evidence of SL was not obtained in either group, the neural responses elicited during the exposure phases of the SL tasks differentiated children with DLD from children with TLD. Specifically, the results indicated that only children from the TLD group showed neural evidence of SL, particularly in the SL task performed under explicit conditions, first for the low-TP and subsequently for the high-TP ‘words’. Taken together, these findings support the view that children with DLD show deficits in the extraction of the regularities embedded in the auditory input, which might underlie their language difficulties.
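
The transitional probability (TP) statistic at the heart of this paradigm is simple to compute. The Python sketch below is a toy illustration, with a hypothetical syllable stream built from made-up triplet 'words' modeled on the abstract's examples; it shows why within-word transitions carry higher TPs than transitions across word boundaries.

import random
from collections import Counter

random.seed(0)
words = [["to", "ki", "bu"], ["ti", "po", "lu"], ["go", "pi", "la"], ["da", "ro", "fe"]]
# Continuous exposure stream: 'words' concatenated in random order, with no pauses
stream = [syll for _ in range(200) for syll in random.choice(words)]
pair_counts = Counter(zip(stream, stream[1:]))
syll_counts = Counter(stream[:-1])

def tp(a, b):
    # Forward transitional probability TP(b|a) = count(a followed by b) / count(a)
    return pair_counts.get((a, b), 0) / syll_counts[a]

print(f"within-word TP(ki|to) = {tp('to', 'ki'):.2f}")   # 1.00: 'to' is always followed by 'ki'
print(f"across-word TP(ti|bu) = {tp('bu', 'ti'):.2f}")   # ~0.25: word boundaries are unpredictable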

Keywords: developmental language disorder, statistical learning, transitional probabilities, word segmentation

Procedia PDF Downloads 188
128 Prevalence and Patterns of Hearing Loss among the Elderly with Hypertension in Southwest, Nigeria

Authors: Ayo Osisanya, Promise Ebuka Okonkwo

Abstract:

Reduced hearing sensitivity among the elderly has been attributed to several risk factors and the influence of age-related degenerative conditions such as diabetes, cardiovascular disease, Alzheimer’s disease, bipolar disorder, and hypertension. Hearing loss, especially the age-related type (presbycusis), has been reported as one of the global burdens affecting the general well-being and quality of life of the elderly with hypertension. Hearing loss has been observed to be associated with hypertension and functional decline in the elderly, as this condition makes them experience poor communication, fatigue, reduced social functioning, mood swings, and withdrawal syndrome. Emerging research outcomes indicate a strong relationship between hypertension and reduced auditory performance among the elderly. Therefore, this study determined the prevalence, types, and patterns of hearing loss associated with hypertension, with a view to suggesting comprehensive management strategies and a model for creating awareness towards promoting healthy living among the elderly in Nigeria. One hundred and seventy-two elderly people with hypertension, aged 65-85, were purposively selected from patients undergoing treatment for hypertension in some tertiary hospitals in southwest Nigeria. Participants were subjected to pure-tone audiometry (PTA) using a Maico 53 diagnostic audiometer to determine the degree, types and patterns of hearing loss among the elderly with hypertension. Results showed that 148 (86.05%) of the elderly with hypertension presented with different degrees, types, and patterns of hearing loss. Of this number, 123 (83.11%) presented with bilateral hearing loss, while 25 (16.89%) had unilateral hearing loss. In terms of degree of hearing loss, 74 showed moderate hearing loss, 118 moderately severe, and 50 severe hearing loss. 36% of the hearing losses appeared as flat audiometric configurations, 24% were sloping, 19% were rising, while 21% were trough-shaped. The findings showed a high prevalence of hearing loss among the elderly with hypertension in Southwest Nigeria. Based on the findings, management of the elderly with hypertension should include regular audiological rehabilitation, total adherence to hearing conservation principles, otological management, regulation of blood pressure and adequate counselling/follow-up services.

Keywords: auditory performance, elderly, hearing loss, hypertension

Procedia PDF Downloads 300
127 Neural Network Mechanisms Underlying the Combination Sensitivity Property in the HVC of Songbirds

Authors: Zeina Merabi, Arij Dao

Abstract:

The temporal order of information processing in the brain is an important code in many acoustic signals, including speech, music, and animal vocalizations. Despite its significance, surprisingly little is known about its underlying cellular mechanisms and network manifestations. In the songbird telencephalic nucleus HVC, a subset of neurons shows temporal combination sensitivity (TCS). These neurons show a high temporal specificity, responding differently to distinct patterns of spectral elements and their combinations. HVC neuron types include basal-ganglia-projecting HVCX, forebrain-projecting HVCRA, and interneurons (HVCINT), each exhibiting distinct cellular, electrophysiological and functional properties. In this work, we develop conductance-based neural network models connecting the different classes of HVC neurons via different wiring scenarios, aiming to explore possible neural mechanisms that orchestrate the combination sensitivity property exhibited by HVCX, as well as to replicate in vivo firing patterns observed when TCS neurons are presented with various auditory stimuli. The ionic and synaptic currents for each class of neurons presented in our networks are based on pharmacological studies, rendering our networks biologically plausible. We present for the first time several realistic scenarios in which the different types of HVC neurons can interact to produce this behavior. The different networks highlight neural mechanisms that could potentially help to explain some aspects of combination sensitivity, including 1) interplay between inhibitory interneurons’ activity and the post-inhibitory firing of the HVCX neurons enabled by T-type Ca2+ and H currents, 2) temporal summation at the TCS site of synaptic inputs from opposing signals that are time- and frequency-dependent, and 3) reciprocal inhibitory and excitatory loops as a potent mechanism to encode information over many milliseconds. The result is a plausible network model characterizing auditory processing in HVC. Our next step is to test the predictions of the model.
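
The post-inhibitory rebound firing invoked in mechanism 1) can be illustrated with a much simpler, phenomenological neuron model than the conductance-based formulation the abstract describes. The Python sketch below uses the Izhikevich model with its standard 'rebound spike' parameter set; the inhibitory pulse timing and amplitude are illustrative assumptions, and the sketch is not the authors' model.

import numpy as np

a, b, c, d = 0.03, 0.25, -60.0, 4.0   # Izhikevich 'rebound spike' parameters
dt, T = 0.2, 300.0                    # integration step and duration (ms)
v, u = -64.0, 0.25 * -64.0            # membrane potential and recovery variable
spike_times = []
for i in range(int(T / dt)):
    t = i * dt
    I = -15.0 if 100.0 <= t < 120.0 else 0.0   # sustained hyperpolarizing (inhibitory) input
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                              # spike: record time and reset
        spike_times.append(round(t, 1))
        v, u = c, u + d
print("spike times (ms):", spike_times)        # expect a single rebound spike after the pulse ends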

Keywords: combination sensitivity, songbirds, neural networks, spatiotemporal integration

Procedia PDF Downloads 65
126 The Effect of the Computer-Based Method on Repetitive Behaviors and Communication Skills

Authors: Hoorieh Darvishi, Rezaei

Abstract:

Introduction: This study investigates the efficacy of computer-based interventions for children with Autism Spectrum Disorder, specifically targeting communication deficits and repetitive behaviors. The research evaluates novel software applications designed to enhance narrative capabilities and sensory integration through structured, progressive intervention protocols. Method: The study evaluated two intervention software programs designed for children with autism, focusing on narrative speech and sensory integration. Twelve children aged 5-11 participated in the two-month intervention, attending three 45-minute weekly sessions, with pre- and post-tests measuring speech, communication, and behavioral outcomes. The narrative speech software incorporated 14 stories using the Cohen model. It progressively reduced software assistance as children improved their storytelling abilities, ultimately enabling independent narration. The process involved story comprehension questions and guided story completion exercises. The sensory integration software featured approximately 100 exercises progressing from basic classification to complex cognitive tasks. The program included attention exercises, auditory memory training (advancing from single to four-syllable words), problem-solving, decision-making, reasoning, working memory, and emotion recognition activities. Each module was accompanied by frequency- and pitch-adjusted music that the child enjoys, to enhance learning through multiple sensory channels (visual, auditory, and tactile). Conclusion: The results indicated that the use of these software programs significantly improved communication and narrative speech scores in children, while also reducing scores related to repetitive behaviors. Findings: These findings highlight the positive impact of computer-based interventions on enhancing communication skills and reducing repetitive behaviors in children with autism.

Keywords: autism, communication skills, repetitive behaviors, sensory integration

Procedia PDF Downloads 9
124 Legal Arrangement on Media Ownership and the Case of Turkey

Authors: Sevil Yildiz

Abstract:

In this study, we will touch upon the legal arrangements issued in Turkey for the prevention of concentration and for ensuring pluralism in the media. We will discuss the legal arrangements concerning the regulatory and supervisory authority for the visual and auditory media, namely the Radio and Television Supreme Council. In this context, the legal arrangements introduced by Law No. 6112 on the Establishment of Radio and Television Enterprises and Their Media Services in relation to media ownership will be reviewed through comparison with Article 29 of the repealed Law No. 3984.

Keywords: media ownership, legal arrangements, the case for Turkey, pluralism

Procedia PDF Downloads 509
123 Development and Clinical Application of a Cochlear Implant Mapping Assistance System

Authors: Hong Mengdi, Li Jianan, Ji Fei, Chen Aiting, Wang Qian

Abstract:

Objective: To overcome the communication barriers that audiologists encounter during cochlear implant mapping, particularly the challenge of eliciting subjective feedback from recipients regarding electrical stimulation, and to enhance the capabilities of existing technologies, we teamed up with software engineers to design an interactive approach for patient-audiologist communication. This approach employs a tablet (PAD) as the interface for a communication and feedback system between patients and audiologists during the mapping process, known as the Cochlear Implant Mapping Assistance System. Methods: Capitalizing on the touchscreen functionality of the PAD, the recipients' subjective feedback during cochlear implant mapping is instantly transmitted to the audiologist's mapping computer. The system acts as a platform for auditory assessment instruments, facilitating immediate evaluation of recipients' post-mapping hearing and speech discrimination capabilities. Furthermore, the system is designed to augment the visual reinforcement audiometry (VRA) process. The system consists of six modules, including three testing projects: loudness testing, hearing threshold testing, and loudness balance testing; two assessment projects: warble tone testing and digit speech testing; and one VRA animation project. It also incorporates speech-to-text and text input display functions tailored to accommodate speech communication difficulties in hearing-impaired individuals, with pre-installed common exchange content between audiologists and recipients. Audiologists can input sentences by selecting options. The system supports switching between Chinese and English versions, suitable for audiologists and recipients who use English, facilitating international application of the system. Results: The Cochlear Implant Mapping Assistance System has been in use for over a year in the Auditory Implant Center of the Department of Otology and Neurotology, Medical Center of Otology and Head & Neck Surgery, Chinese PLA General Hospital, with more than 300 recipients using this mapping system. Currently, the system operates stably, with both audiologists and recipients providing positive feedback, indicating a significant improvement over previous methods. It is particularly well-received by pediatric recipients, significantly enhancing the work efficiency of audiologists and improving the feedback efficiency and accuracy of recipients. The system enhances the comprehensibility for cochlear implant recipients, improves wearing comfort and user experience, facilitates cochlear implant auditory mapping, and increases the collection of previously challenging-to-obtain data during the existing assisted mapping process, such as loudness testing data, electrical stimulation testing data, warble tone testing data, loudness balance testing data, digit speech testing data, and visual reinforcement audiometry testing data. Real-time data recording improves the accuracy of assisted mapping. The interface design is meticulously crafted to accommodate patients of varying ages and cognitive abilities, featuring an intuitive design that allows for effortless, guidance-free use by patients.

Keywords: audiologist, subjective feedback, mapping, cochlear implant

Procedia PDF Downloads 20
122 Effect of Classroom Acoustic Factors on Language and Cognition in Bilinguals and Children with Mild to Moderate Hearing Loss

Authors: Douglas MacCutcheon, Florian Pausch, Robert Ljung, Lorna Halliday, Stuart Rosen

Abstract:

Contemporary classrooms are increasingly inclusive of children with mild to moderate disabilities and children from different language backgrounds (bilinguals, multilinguals), but classroom environments and standards have not yet been adapted adequately to meet these challenges brought about by this inclusivity. Additionally, classrooms are becoming noisier as a learner-centered as opposed to teacher-centered teaching paradigm is adopted, which prioritizes group work and peer-to-peer learning. Challenging listening conditions with distracting sound sources and background noise are known to have potentially negative effects on children, particularly those that are prone to struggle with speech perception in noise. Therefore, this research investigates two groups vulnerable to these environmental effects, namely children with a mild to moderate hearing loss (MMHLs) and sequential bilinguals learning in their second language. In the MMHL study, this group was assessed on speech-in-noise perception, and a number of receptive language and cognitive measures (auditory working memory, auditory attention) and correlations were evaluated. Speech reception thresholds were found to be predictive of language and cognitive ability, and the nature of correlations is discussed. In the bilinguals study, sequential bilingual children’s listening comprehension, speech-in-noise perception, listening effort and release from masking was evaluated under a number of different ecologically valid acoustic scenarios in order to pinpoint the extent of the ‘native language benefit’ for Swedish children learning in English, their second language. Scene manipulations included target-to-distractor ratios and introducing spatially separated noise. This research will contribute to the body of findings from which educational institutions can draw when designing or adapting educational environments in inclusive schools.

Keywords: sequential bilinguals, classroom acoustics, mild to moderate hearing loss, speech-in-noise, release from masking

Procedia PDF Downloads 324
121 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in “weight space” where each populated T-F position contains an amplitude weight. The weight space vector along with the atomic dictionary represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has 10 sentences. Two are used for training and 8 for testing. Atomic index probabilities are created for each training sentence and also for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR). Testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93% averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains at ~93% at 0 dB SNR.
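
The pipeline sketched in the abstract (greedy atomic decomposition followed by classification on the distribution of selected atom indices) can be illustrated compactly. The Python sketch below is an assumption-laden toy version: a fixed Gabor dictionary replaces the learned sparse-autoencoder dictionary, the dictionary sizes and frame counts are made up, and the commented usage names (train_frames_speaker_a, test_frames) are hypothetical placeholders.

import numpy as np

def gabor_dictionary(n=256, n_freqs=16, n_shifts=16, width=32.0):
    # Small illustrative dictionary of unit-norm Gabor atoms (sizes are assumptions)
    t = np.arange(n)
    atoms = []
    for f in np.linspace(0.01, 0.45, n_freqs):            # normalized frequencies
        for c in np.linspace(0, n, n_shifts, endpoint=False):
            g = np.exp(-0.5 * ((t - c) / width) ** 2) * np.cos(2 * np.pi * f * t)
            atoms.append(g / np.linalg.norm(g))
    return np.array(atoms)                                 # shape (n_freqs * n_shifts, n)

def matching_pursuit(x, D, n_atoms=30):
    # Greedy matching pursuit: return indices of the selected atoms (the sparse T-F code)
    residual = x.astype(float).copy()
    indices = []
    for _ in range(n_atoms):
        corr = D @ residual
        k = int(np.argmax(np.abs(corr)))
        residual -= corr[k] * D[k]                         # remove the best-matching atom
        indices.append(k)
    return indices

def index_probabilities(frames, D):
    # Normalized histogram of selected atom indices over a set of signal frames
    hist = np.zeros(len(D))
    for x in frames:
        for k in matching_pursuit(x, D):
            hist[k] += 1
    return hist / hist.sum()

# Hypothetical usage: label a test utterance by the nearest training profile
# D = gabor_dictionary()
# p_a = index_probabilities(train_frames_speaker_a, D)
# p_b = index_probabilities(train_frames_speaker_b, D)
# p_t = index_probabilities(test_frames, D)
# label = "A" if np.linalg.norm(p_t - p_a) < np.linalg.norm(p_t - p_b) else "B"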

Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder

Procedia PDF Downloads 289
120 Tick Induced Facial Nerve Paresis: A Narrative Review

Authors: Jemma Porrett

Abstract:

Background: We present a literature review examining the research surrounding tick paralysis resulting in facial nerve palsy. A case of an intra-aural paralysis tick bite resulting in unilateral facial nerve palsy is also discussed. Methods: A novel case of otoacariasis with associated ipsilateral facial nerve involvement is presented. Additionally, we conducted a review of the literature, searching the MEDLINE and EMBASE databases for relevant articles published between 1915 and 2020. Utilising the keywords 'Ixodes', 'Facial paralysis', 'Tick bite', and 'Australia', 18 articles were deemed relevant to this study. Results: The eighteen articles included in the review comprised a total of 48 patients. Patients' ages ranged from one to 84 years. Ten studies estimated the possible duration between a tick bite and facial nerve palsy, averaging 8.9 days. Forty-one patients presented with a single tick within the external auditory canal, three had a single tick located on the temple or forehead region, three had post-auricular ticks, and one patient had a remarkable 44 ticks removed from the face, scalp, neck, back, and limbs. A complete ipsilateral facial nerve palsy was present in 45 patients; notably, in 16 patients this occurred following tick removal. The House-Brackmann classification was utilised in 7 patients: four with grade 4, one with grade 3, and two with grade 2 facial nerve palsy. Thirty-eight patients had complete recovery of facial palsy. Thirteen studies were analysed for time to recovery, with an average time of 19 days. Six patients had partial recovery at the time of follow-up. One article reported improvement in facial nerve palsy at 24 hours, but no further follow-up was reported. One patient was lost to follow-up, and one article failed to mention any resolution of facial nerve palsy. One patient died from respiratory arrest following generalized paralysis. Conclusions: Tick paralysis is a severe but preventable disease. Careful examination of the face, scalp, and external auditory canal should be conducted in patients presenting with otalgia and facial nerve palsy, particularly in tropical areas, to exclude the possibility of tick infestation.

Keywords: facial nerve palsy, tick bite, intra-aural, Australia

Procedia PDF Downloads 113
119 Gamification as a Tool for Influencing Customers' Behaviour

Authors: Beata Zatwarnicka-Madura

Abstract:

The objective of the article was to identify the impacts of gamification on customers' behaviour. The most important applications of games in marketing and mechanisms of gamification are presented in the article. A detailed analysis of the influence of gamification on customers using two brands, Foursquare and Nike, was also presented. Research studies using auditory survey methods were carried out among 176 young respondents, who are potential targets of gamification. The studies confirmed a huge participation of young people in customer loyalty programs with relatively low participation in other gamification-based marketing activities. The research findings clearly indicate that gamification mechanisms are the most attractive.

Keywords: customer loyalty, games, gamification, social aspects

Procedia PDF Downloads 490
118 An Assessment of the Digital Transformation of Radio

Authors: Fatih Sogut

Abstract:

Developments in information technologies have caused significant changes in radio and television broadcasting. With these changes in production format, transmission techniques and service delivery, the distinction between traditional media and new media has emerged. The viewer/listener, who was previously in a passive position, is now in an active position and has a say in many matters, including content production. Visual and auditory data transfer has diversified and become easier thanks to the convergence phenomenon. These transformations and developments have also affected one of the oldest electronic communication tools, radio. This study attempts to explain the change radio broadcasting has undergone in adapting to the new era brought about by the digital age, and the factors that led to this change.

Keywords: Internet, radio broadcasting, digital transformation, Internet broadcasting

Procedia PDF Downloads 170
117 A Cognitive Training Program in Learning Disability: A Program Evaluation and Follow-Up Study

Authors: Krisztina Bohacs, Klaudia Markus

Abstract:

To the authors' best knowledge, studies evaluating cognitive training programs are scarce, and few programs prove to have large effect sizes with strong retention results. The purpose of our study was to investigate the effectiveness of a comprehensive cognitive training program, namely BrainRx. This cognitive rehabilitation program targets and remediates seven core cognitive skills and related systems of sub-skills through repeated engagement in game-like mental procedures delivered one-on-one by a clinician, supplemented by digital training. A sample of children with learning disabilities was given pre-test and post-test cognitive assessments. The experimental group completed a twenty-week cognitive training program in a BrainRx center. A matched control group received another twenty-week intervention with Feuerstein's Instrumental Enrichment programs. A second matched control group did not receive training. For the pre- and post-tests, we used a general intelligence test to assess IQ and a computer-based test battery for assessing cognition across the lifespan. Multiple regression analyses indicated that the experimental BrainRx treatment group had statistically significantly higher outcomes in attention, working memory, processing speed, logic and reasoning, auditory processing, visual processing, and long-term memory than the non-treatment control group, with very large effect sizes. With the exception of logic and reasoning, the BrainRx treatment group also realized significantly greater gains in the remaining six of these seven cognitive measures compared to the Feuerstein control group. Our one-year retention measures showed that retention of all cognitive training gains was above ninety percent, with the greatest retention in visual processing, auditory processing, and logic and reasoning. The BrainRx program may be an effective tool for establishing long-term cognitive changes in students with learning disabilities. Recommendations are made for treatment centers and special education institutions on the cognitive training of students with special needs. The importance of our study is that a targeted, systematic, progressively loaded, and intensive brain training approach may significantly change learning disabilities.
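
To make the reported group comparison concrete, the sketch below shows one conventional way such an analysis could be set up: a covariate-adjusted regression of post-test scores on group membership, plus a pooled-SD Cohen's d on the gain scores. This is a minimal sketch; the data, group labels, and score scales are illustrative assumptions, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "group": np.repeat(["brainrx", "control"], n // 2),   # hypothetical group labels
    "pre": rng.normal(100, 15, n),                         # hypothetical pre-test scores
})
# Simulated post-test: a larger gain in the treatment group (illustration only).
df["post"] = df["pre"] + rng.normal(10, 5, n) + np.where(df["group"] == "brainrx", 8, 0)

# Covariate-adjusted model: post-test regressed on group, controlling for pre-test.
model = smf.ols("post ~ C(group, Treatment('control')) + pre", data=df).fit()
print(model.params)

# Cohen's d on the gain scores, pooled-SD formulation.
gain = df["post"] - df["pre"]
g1 = gain[df["group"] == "brainrx"]
g2 = gain[df["group"] == "control"]
pooled_var = ((len(g1) - 1) * g1.var(ddof=1) + (len(g2) - 1) * g2.var(ddof=1)) / (len(g1) + len(g2) - 2)
print("Cohen's d on gain scores:", (g1.mean() - g2.mean()) / np.sqrt(pooled_var))
```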

Keywords: cognitive rehabilitation training, cognitive skills, learning disability, permanent structural cognitive changes

Procedia PDF Downloads 202
116 Neurocognitive and Executive Function in Cocaine Addicted Females

Authors: Gwendolyn Royal-Smith

Abstract:

Cocaine ranks as one of the world’s most addictive and commonly abused stimulant drugs. Recent evidence indicates that the abuse of cocaine has risen so quickly among females that this group now accounts for about 40 percent of all users in the United States. Neuropsychological studies have demonstrated that specific neural activation patterns carry higher risks for neurocognitive and executive dysfunction in cocaine-addicted females, thereby increasing their vulnerability to poorer treatment outcomes and more frequent post-treatment relapse compared to males. This study examined secondary data from a convenience sample of 164 cocaine-addicted males and females to assess neurocognitive and executive function. The principal objective of this study was to assess whether individual performance on the Stroop Color-Word Task is predictive of treatment success by gender. A second objective was to evaluate whether individual performance on neurocognitive measures, including the Stroop Color-Word Task, the Rey Auditory Verbal Learning Test (RAVLT), the Iowa Gambling Task, the Wisconsin Card Sorting Test (WCST), the total score from the Barratt Impulsiveness Scale, Version 11 (BIS-11), and the total score from the Frontal Systems Behavior Scale (FrSBe), demonstrated gender differences in neurocognitive and executive function. Logistic regression models with covariate adjustment were employed. Initial analyses of the Stroop Color-Word Task indicated significant differences in the performance of males and females, with females experiencing more difficulty in derived interference reaction time and associate recall ability. In early testing, including the RAVLT, the number of advantageous versus disadvantageous cards from the Iowa Gambling Task, the number of perseverative errors from the WCST, the BIS-11 total score, and the FrSBe total score, results were mixed, with women scoring lower on multiple indicators of both neurocognitive and executive function.
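
As a concrete illustration of the covariate-adjusted logistic regression described above, the sketch below regresses a binary treatment-success outcome on Stroop interference reaction time, gender, their interaction, and age. The variable names, simulated data, and coefficients are hypothetical assumptions, not the study's data or model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 164
df = pd.DataFrame({
    "stroop_interference_ms": rng.normal(200, 40, n),  # hypothetical derived interference RT
    "female": rng.integers(0, 2, n),                    # 1 = female, 0 = male
    "age": rng.normal(35, 8, n),                        # hypothetical covariate
})
# Simulated outcome: higher interference lowers the odds of treatment success (illustration only).
linear_predictor = 2.0 - 0.01 * df["stroop_interference_ms"] - 0.3 * df["female"]
df["success"] = rng.binomial(1, 1 / (1 + np.exp(-linear_predictor)))

# Covariate-adjusted logistic regression with a gender interaction term.
model = smf.logit("success ~ stroop_interference_ms * female + age", data=df).fit(disp=False)
print(model.summary())
print("Odds ratios:")
print(np.exp(model.params))
```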

Keywords: cocaine addiction, gender, neuropsychology, neurocognitive, executive function

Procedia PDF Downloads 402
115 The Influence of Music Education and the Order of Sounds on the Grouping of Sounds into Sequences of Six Tones

Authors: Adam Rosiński

Abstract:

This paper discusses an experiment conducted with two groups of participants, composed of musicians and non-musicians, in order to investigate the impact of the speed of a sound sequence and the order of sounds on the grouping of sounds into sequences of six tones. Significant differences were observed between musicians and non-musicians with respect to the threshold sequence speed at which the sequence was split into two streams. The differences in the results for the two groups suggest that the musical education of the participating listeners may be a vital factor. The criterion of musical education should be taken into account during experiments so that the results obtained are reliable, uniform, and free from interpretive errors.
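
For readers unfamiliar with this paradigm, the sketch below generates the kind of stimulus such an experiment relies on: a repeating six-tone pattern of alternating low and high tones at a chosen presentation rate, which listeners tend to hear as one or two streams depending on tempo and frequency separation. The specific frequencies, tempo, and ramp length are illustrative assumptions, not the study's stimuli.

```python
import numpy as np

SAMPLE_RATE = 44100  # Hz

def tone(freq_hz, dur_s):
    """Pure tone with short raised-cosine onset/offset ramps to avoid clicks."""
    t = np.arange(int(SAMPLE_RATE * dur_s)) / SAMPLE_RATE
    y = np.sin(2 * np.pi * freq_hz * t)
    ramp = int(0.005 * SAMPLE_RATE)                        # 5 ms ramp
    env = np.ones_like(y)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    return y * env

def six_tone_sequence(tones_per_second, repetitions=10):
    """Repeat a six-tone pattern of alternating low and high tones; faster
    presentation rates tend to favour splitting into two perceptual streams."""
    low = [400, 450, 500]                                  # hypothetical low-stream frequencies (Hz)
    high = [1200, 1300, 1400]                              # hypothetical high-stream frequencies (Hz)
    pattern = [low[0], high[0], low[1], high[1], low[2], high[2]]
    dur = 1.0 / tones_per_second
    return np.concatenate([tone(f, dur) for f in pattern * repetitions])

signal = six_tone_sequence(tones_per_second=8)             # one test speed: 8 tones/s
print(signal.shape[0], "samples =", signal.shape[0] / SAMPLE_RATE, "seconds")
```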

Keywords: auditory scene analysis, education, hearing, psychoacoustics

Procedia PDF Downloads 102
114 The Connection Between the Semiotic Theatrical System and the Aesthetic Perception

Authors: Păcurar Diana Istina

Abstract:

The indissoluble link between aesthetics and semiotics, and the harmonization and semiotic understanding of the interactions between the viewer and the object being viewed, are the basis of this practical demonstration of the importance of aesthetic perception within the theater performance. The design of a theater performance includes several structures: some are art forms from the outset (e.g., the text), while others are simple, common objects (e.g., scenographic elements) which, when brought together, can trigger a certain aesthetic perception. The team involved in the performance delivers to the audience a series of auditory and visual signs with which the audience interacts. It is necessary to explain some notions about the physiological support for the transformation of different types of stimuli at the level of the cerebral hemispheres. The cortex, considered the superior integration center of extrinsic and intrinsic stimuli, permanently processes the information received; even if that information is delivered at a constant rate, the generated response is individualized and conditioned by a number of factors. Each changing situation represents a new opportunity for the viewer to cope with, developing feelings of different intensities that influence the generation of meanings and, therefore, the management of interactions. In this sense, aesthetic perception depends on the detection of the “correctness” of signs, the forms of which are associated with an aesthetic property. Correctness and aesthetic properties can have positive or negative values. Evaluating the emotions that generate judgment and, implicitly, aesthetic perception, whether we refer to visual or auditory emotions, involves the integration of three areas of interest: valence, arousal, and context control. In this context, higher human cognitive processes, such as memory, interpretation, learning, and the attribution of meanings, help trigger the mechanism of anticipation and, no less important, the identification of error. This ability to locate a short circuit in a series of successive events is fundamental to the process of forming an aesthetic perception. Our main purpose in this research is to investigate the possible conditions under which aesthetic perception and its minimum content are generated by all these structures and, in particular, by interactions with forms that are not commonly considered aesthetic. To demonstrate the quantitative and qualitative importance of the categories of signs used to construct a code for reading a certain message, and also to emphasize the importance of the order in which these indices are used, we have structured a mathematical analysis centered on the percentage of signs used in a theater performance.
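
The percentage analysis mentioned above can be pictured with a minimal sketch like the one below, which tallies sign categories observed in a performance and reports each category's share. The categories and counts are invented purely for illustration.

```python
from collections import Counter

# Hypothetical tally of signs observed in a performance (categories invented for illustration).
signs = (["spoken word"] * 120 + ["gesture"] * 80 + ["lighting"] * 25
         + ["music"] * 40 + ["set element"] * 35)
counts = Counter(signs)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category:12s} {n:4d} signs  {100 * n / total:5.1f}%")
```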

Keywords: semiology, aesthetics, theatre semiotics, theatre performance, structure, aesthetic perception

Procedia PDF Downloads 89
113 Amniotic Fluid Mesenchymal Stem Cells Selected for Neural Specificity Ameliorates Chemotherapy Induced Hearing Loss and Pain Perception

Authors: Jan F. Talts, Amit Saxena, Kåre Engkilde

Abstract:

Chemotherapy-induced peripheral neuropathy (CIPN) is one of the most frequent side effects caused by anti-neoplastic agents, with a prevalence ranging from 19% to 85%. Clinically, CIPN is a mostly sensory neuropathy leading to pain and to motor and autonomic changes. Due to its high prevalence among cancer patients, CIPN constitutes a major problem for both cancer patients and survivors, especially because there is currently no single effective method of preventing it. Hearing loss is the most common form of sensory impairment in humans and can be caused by ototoxic chemical compounds such as chemotherapy (platinum-based antineoplastic agents). In rodents, single or repeated cisplatin injections induce peripheral neuropathy and hearing impairment mimicking the human disorders, allowing the efficacy of new pharmacological candidates against chemotherapy-induced hearing loss and peripheral neuropathy to be studied. RNA sequencing data from full-term amniotic fluid (TAF) mesenchymal stem cell (MSC) clones were used to identify neural-specific markers present on TAF-MSCs. Several prospective neural markers were tested by flow cytometry on cultured TAF-MSCs. One of these markers was used for cell sorting with the MACSQuant Tyto cell sorter, and the neural-marker-positive cell population was expanded for several passages to the final therapeutic product stage. Peripheral neuropathy and hearing loss were induced in mice by administration of cisplatin in three week-long cycles. The efficacy of neural-specific TAF-MSCs in treating hearing loss and pain perception was evaluated by administering, after each cisplatin cycle, either three injections of 3 million cells/kg by the intravenous route or three injections of 3 million cells/kg by the intra-arterial route. Auditory brainstem responses (ABR) are electric potentials recorded from scalp electrodes, and the first ABR wave represents the summed activity of the auditory nerve fibers contacting the inner hair cells. For ABR studies, mice were anesthetized; earphones were then placed in the left ear of each mouse, an active electrode at the vertex of the skull, a reference electrode under the skin over the mastoid bone, and a ground electrode in the neck skin. The stimuli consisted of tone pips at six frequencies (2, 4, 6, 12, 16, and 24 kHz) and various sound levels (0 to 90 dB), covering the mouse auditory frequency range. The von Frey test was used to assess the onset and maintenance of mechanical allodynia over time. Mice were placed in clear plexiglass cages on an elevated mesh floor and tested after 30 min of habituation. The mechanical paw withdrawal threshold was examined using an electronic von Frey anesthesiometer. The cisplatin groups treated with three injections of 3 million cells/kg by the intravenous route or three injections of 3 million cells/kg by the intra-arterial route after each cisplatin cycle presented a significant increase in hearing acuity, characterized by a decrease in ABR threshold, and a decrease in neuropathic pain, characterized by an increase in the von Frey paw withdrawal threshold, compared with controls receiving cisplatin only. This study shows that treatment with MSCs selected for neural specificity has significant positive efficacy against chemotherapy-induced neuropathic pain and chemotherapy-induced hearing loss.
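
Since the key auditory outcome above is an ABR threshold shift, the sketch below illustrates one simple way a threshold could be read off a set of averaged waveforms: take the lowest stimulus level whose peak-to-peak amplitude exceeds a noise-based criterion. The criterion, data layout, and simulated waveforms are assumptions for illustration, not the study's analysis pipeline.

```python
import numpy as np

def abr_threshold(waveforms_by_level, noise_ptp_uv, criterion=1.5):
    """Return the lowest stimulus level (dB) whose averaged waveform has a
    peak-to-peak amplitude exceeding criterion * the peak-to-peak amplitude
    of a no-stimulus (noise-only) recording, or None if no level qualifies."""
    responsive = [level for level, wave in sorted(waveforms_by_level.items())
                  if np.ptp(wave) > criterion * noise_ptp_uv]
    return responsive[0] if responsive else None

# Toy data: simulated averaged waveforms whose amplitude grows with stimulus level.
rng = np.random.default_rng(2)
noise = rng.normal(0, 0.5, 256)                      # no-stimulus recording (µV)
levels = range(0, 100, 10)                           # 0-90 dB in 10 dB steps
waves = {db: 0.1 * db * np.sin(np.linspace(0, 4 * np.pi, 256))
             + rng.normal(0, 0.5, 256)
         for db in levels}
print("Estimated ABR threshold (dB):", abr_threshold(waves, np.ptp(noise)))
```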

Keywords: mesenchymal stem cell, peripheral neuropathy, amniotic fluid, regenerative medicine

Procedia PDF Downloads 166
112 Status of Sensory Profile Score among Children with Autism in Selected Centers of Dhaka City

Authors: Nupur A. D., Miah M. S., Moniruzzaman S. K.

Abstract:

Autism is a neurobiological disorder that affects a person's physical, social, and language skills. A child with autism has difficulty processing, integrating, and responding to sensory stimuli. Current estimates indicate that 45% to 96% of children with Autism Spectrum Disorder demonstrate sensory difficulties. As autism is a pressing issue worldwide, related services have become a high priority in Bangladesh. Sensory deficits hamper not only a child's normal development but also the learning process and functional independence. The purpose of this study was to determine the prevalence of sensory dysfunction among children with autism and to recognize common patterns of sensory dysfunction. A cross-sectional study design was chosen to carry out this research. The study enrolled eighty children with autism and their parents, selected by systematic sampling. Data were collected with the Short Sensory Profile (SSP), a 38-item questionnaire; qualified graduate occupational therapists interviewed parents and observed the children's responses to sensory-related activities at four selected autism centers in Dhaka, Bangladesh. Item analyses were conducted to identify the items showing the highest reported sensory processing dysfunction, using the SSP and the Statistical Package for the Social Sciences (SPSS) version 21.0 for data analysis. The study revealed that 78.25% of the children with autism had significant sensory processing dysfunction based on their sensory responses to relevant activities. Under-responsiveness/sensation seeking and auditory filtering were the least common problems among them. On the other hand, most of the children (95%) showed definite to probable differences in sensory processing, including under-responsiveness/sensation seeking, auditory filtering, and tactile sensitivity. In addition, 64 children fell within the definite difference range of sensory processing, meaning that these children suffered from sensory difficulties that had a great impact on their Activities of Daily Living (ADLs) as well as their social interaction with others. Almost 95% of the children with autism required intervention to overcome or normalize the problem. The results give insight into the types of sensory processing dysfunction to consider during diagnosis and when determining treatment. Early identification of sensory problems is therefore very important, as it helps clinicians provide appropriate sensory input to minimize maladaptive behavior and move children toward the normal range of adaptive behavior.
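
As a small companion to the SSP-based classification described above, the sketch below assigns an SSP total score to the typical / probable-difference / definite-difference bands. The cutoff values are the commonly cited SSP total-score ranges and are included here as assumptions; clinical use should rely on the published manual.

```python
# Minimal sketch, assuming the commonly cited Short Sensory Profile (SSP)
# total-score bands; consult the published SSP manual before any clinical use.
def classify_ssp_total(total_score):
    """SSP totals run from 38 (lowest) to 190 (highest); higher = more typical."""
    if not 38 <= total_score <= 190:
        raise ValueError("SSP total score must lie between 38 and 190")
    if total_score >= 155:
        return "typical performance"        # assumed band: 155-190
    if total_score >= 142:
        return "probable difference"        # assumed band: 142-154
    return "definite difference"            # assumed band: 38-141

scores = [172, 150, 120]                    # hypothetical children, not study data
print({s: classify_ssp_total(s) for s in scores})
```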

Keywords: autism, sensory processing difficulties, sensory profile, occupational therapy

Procedia PDF Downloads 65