Search results for: voice recognition
2146 Speaker Recognition Using LIRA Neural Networks
Authors: Nestor A. Garcia Fragoso, Tetyana Baydyk, Ernst Kussul
Abstract:
This article presents our investigation in the field of voice recognition. For this purpose, we created a voice database that contains different phrases in two languages, English and Spanish, for men and women. As a classifier, the LIRA (Limited Receptive Area) grayscale neural classifier was selected. The LIRA grayscale neural classifier was developed for image recognition tasks and demonstrated good results. Therefore, we decided to develop a voice recognition system using this classifier. From a specific set of speakers, we can recognize the speaker's voice. For this purpose, the system uses spectrograms of the voice signals as input, extracts their characteristics, and identifies the speaker. The results are described and analyzed in this article. The classifier can be used for speaker identification in security systems or smart buildings for different types of intelligent devices.
Keywords: extreme learning, LIRA neural classifier, speaker identification, voice recognition
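The LIRA classifier itself is not reproducible from the abstract, but the front end it describes, turning a voice signal into a grayscale spectrogram image, can be sketched. A minimal illustration using NumPy and SciPy; the function name `voice_spectrogram`, the 16 kHz sample rate, and the synthetic tone are assumptions for demonstration, not taken from the paper:

```python
import numpy as np
from scipy.signal import spectrogram

def voice_spectrogram(signal, fs=16000, nperseg=256):
    """Compute a log-magnitude spectrogram scaled to a grayscale image,
    suitable as input to an image classifier."""
    f, t, Sxx = spectrogram(signal, fs=fs, nperseg=nperseg)
    img = np.log1p(Sxx)                     # log-compress the power spectrum
    rng = np.ptp(img)                       # peak-to-peak range for scaling
    img = (255 * (img - img.min()) / (rng if rng > 0 else 1.0)).astype(np.uint8)
    return img

# Example: a 1-second 440 Hz tone as a stand-in for a voice sample.
fs = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
img = voice_spectrogram(tone, fs=fs)
```

Each row of `img` is a frequency bin and each column a time frame, so the classifier sees the utterance as an ordinary grayscale picture.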
Procedia PDF Downloads 178
2145 Comparing Emotion Recognition from Voice and Facial Data Using Time Invariant Features
Authors: Vesna Kirandziska, Nevena Ackovska, Ana Madevska Bogdanova
Abstract:
Emotion recognition is a challenging problem, still open from the perspectives of both intelligent systems and psychology. In this paper, both voice features and facial features are used to build an emotion recognition system. Support Vector Machine classifiers are built using raw data from video recordings. The results obtained for emotion recognition are given, and the validity and expressiveness of different emotions are discussed. A comparison is made between classifiers built from facial data only, from voice data only, and from the combination of both. The need for a better combination of the information from facial expressions and voice data is argued.
Keywords: emotion recognition, facial recognition, signal processing, machine learning
Procedia PDF Downloads 317
2144 Advanced Mouse Cursor Control and Speech Recognition Module
Authors: Prasad Kalagura, B. Veeresh kumar
Abstract:
We constructed an interface system that allows a paralyzed user to interact with a computer with almost full functional capability. A real-time tracking algorithm is implemented based on adaptive skin detection and motion analysis. Mouse clicking is activated by the user's eye blink through a sensor. The keyboard function is implemented by a voice recognition kit.
Keywords: embedded ARM7 processor, mouse pointer control, voice recognition
Procedia PDF Downloads 579
2143 Wolof Voice Response Recognition System: A Deep Learning Model for Wolof Audio Classification
Authors: Krishna Mohan Bathula, Fatou Bintou Loucoubar, FNU Kaleemunnisa, Christelle Scharff, Mark Anthony De Castro
Abstract:
Voice recognition algorithms such as automatic speech recognition and text-to-speech systems for African languages can play an important role in bridging the digital divide of Artificial Intelligence in Africa, contributing to the establishment of a fully inclusive information society. This paper proposes a deep learning model that classifies user responses as inputs for an interactive voice response system. A dataset of audio recordings of the Wolof words for 'yes' and 'no' was collected. A two-stage data augmentation approach is adopted to enlarge the dataset to the size required by the deep neural network. Data preprocessing and feature engineering with Mel-Frequency Cepstral Coefficients are implemented. Convolutional Neural Networks (CNNs) have proven very powerful in image classification and are promising for audio processing when sounds are transformed into spectra. For voice response classification, the recordings are transformed into sound frequency feature spectra, to which an image classification methodology is applied using a deep CNN model. The inference model of this trained and reusable Wolof voice response recognition system can be integrated with many applications on both web and mobile platforms.
Keywords: automatic speech recognition, interactive voice response, voice response recognition, Wolof word classification
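The paper's exact MFCC pipeline is not given, but the standard computation it names (power spectrum, triangular mel filterbank, log, DCT) can be sketched for a single frame. A minimal NumPy/SciPy illustration; the frame length, filter counts, and the synthetic test frame are assumptions, not the paper's settings:

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, fs=16000, n_fft=512, n_mels=26, n_coeffs=13):
    """Minimal MFCC of one frame-length signal (illustrative only)."""
    spectrum = np.abs(np.fft.rfft(signal, n_fft)) ** 2
    # Triangular mel filterbank between 0 Hz and Nyquist.
    mel_points = np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / fs).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    energies = np.log(fbank @ spectrum + 1e-10)  # log mel-band energies
    return dct(energies, type=2, norm='ortho')[:n_coeffs]

frame = np.sin(2 * np.pi * 300 * np.arange(512) / 16000)
coeffs = mfcc(frame)
```

In the full system, one such coefficient vector per frame would be stacked into the spectral "image" fed to the CNN.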
Procedia PDF Downloads 118
2142 Integrated Gesture and Voice-Activated Mouse Control System
Authors: Dev Pratap Singh, Harshika Hasija, Ashwini S.
Abstract:
The project aims to provide a touchless, intuitive interface for human-computer interaction, enabling users to control their computers using hand gestures and voice commands. The system leverages advanced computer vision techniques, using the MediaPipe framework and OpenCV, to detect and interpret real-time hand gestures, transforming them into mouse actions such as clicking, dragging, and scrolling. Additionally, the integration of a voice assistant powered by the SpeechRecognition library allows seamless execution of tasks such as web searches, location navigation, and gesture control of the system through voice commands.
Keywords: gesture recognition, hand tracking, machine learning, convolutional neural networks, natural language processing, voice assistant
Procedia PDF Downloads 14
2141 Voice Commands Recognition of Mentor Robot in Noisy Environment Using HTK
Authors: Khenfer-Koummich Fatma, Hendel Fatiha, Mesbahi Larbi
Abstract:
This paper presents an approach based on Hidden Markov Models (HMMs) using the HTK toolkit. The goal is to create a man-machine interface with a voice recognition system that allows the operator to tele-operate a mentor robot to execute specific tasks such as rotate, raise, close, etc. This system should take into account different levels of environmental noise. The approach has been applied to isolated words representing the robot commands spoken in two languages: French and Arabic. The recognition rate obtained is the same for both Arabic and French on the clean words. However, the two languages differ when Gaussian white noise is added at a Signal to Noise Ratio (SNR) of 30 dB: the Arabic speech recognition rate is 69%, and the French speech recognition rate is 80%. This can be explained by the phonetic context of each language when noise is added.
Keywords: voice command, HMM, TIMIT, noise, HTK, Arabic, speech recognition
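The HTK models themselves cannot be reconstructed from the abstract, but the core scoring step of an isolated-word HMM recognizer, the forward algorithm computing the likelihood of an observation sequence under each word's model, can be sketched. A toy NumPy version with discrete observations; the 2-state model and its numbers are illustrative assumptions, not from the paper:

```python
import numpy as np

def log_forward(obs, log_pi, log_A, log_B):
    """Log-domain forward algorithm: total log-likelihood of an
    observation sequence under one word's HMM.
    log_pi: initial state log-probs, log_A: transition log-probs,
    log_B[s, o]: log-prob of emitting symbol o in state s."""
    T, N = len(obs), len(log_pi)
    alpha = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        alpha = log_B[:, obs[t]] + np.array(
            [np.logaddexp.reduce(alpha + log_A[:, j]) for j in range(N)])
    return np.logaddexp.reduce(alpha)

# Toy 2-state, 2-symbol model; a recognizer would evaluate the sequence
# under every command-word HMM and pick the highest-scoring word.
pi = np.log(np.array([0.6, 0.4]))
A = np.log(np.array([[0.7, 0.3], [0.4, 0.6]]))
B = np.log(np.array([[0.9, 0.1], [0.2, 0.8]]))
ll = log_forward([0, 1, 0], pi, A, B)
```

Working in the log domain avoids the numerical underflow that direct probability products suffer on realistic sequence lengths.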
Procedia PDF Downloads 383
2140 Recognition of Voice Commands of Mentor Robot in Noisy Environment Using Hidden Markov Model
Authors: Khenfer Koummich Fatma, Hendel Fatiha, Mesbahi Larbi
Abstract:
This paper presents an approach based on Hidden Markov Models (HMMs) using the HTK toolkit. The goal is to create a human-machine interface with a voice recognition system that allows the operator to teleoperate a mentor robot to execute specific tasks such as rotate, raise, close, etc. This system should take into account different levels of environmental noise. The approach has been applied to isolated words representing the robot commands pronounced in two languages: French and Arabic. The obtained recognition rate is the same for both Arabic and French on the clean words. However, the two languages differ when Gaussian white noise is added at a Signal to Noise Ratio (SNR) of 30 dB: in this case, the Arabic speech recognition rate is 69%, and the French speech recognition rate is 80%. This can be explained by the phonetic context of each language when noise is added.
Keywords: Arabic speech recognition, Hidden Markov Model (HMM), HTK, noise, TIMIT, voice command
Procedia PDF Downloads 390
2139 Acoustic Analysis for Comparison and Identification of Normal and Disguised Speech of Individuals
Authors: Surbhi Mathur, J. M. Vyas
Abstract:
Although forensic speaker recognition technology has developed rapidly, many problems remain to be solved. The biggest problem arises when cases involving disguised voice samples come in for examination and identification. Such voice samples of anonymous callers are frequently encountered in crimes involving kidnapping, blackmailing, hoax extortion, and many more, where the speaker deliberately manipulates their natural voice to conceal their identity for fear of being caught. Voice disguise causes serious damage to the natural vocal parameters of the speaker and thus complicates the process of identification. The objective of this doctoral project is to determine whether definite opinions can be rendered in cases involving disguised speech. The effects of different disguise forms on personal identification, and the speaker recognition rate for various disguise techniques, such as raised pitch, lowered pitch, increased nasality, covering the mouth, constricting the vocal tract, or placing an obstacle in the mouth, are determined experimentally by analyzing and comparing the amount of phonetic and acoustic variation between the artificial (disguised) and natural samples of an individual, using auditory as well as spectrographic analysis.
Keywords: forensic, speaker recognition, voice, speech, disguise, identification
Procedia PDF Downloads 369
2138 Gesture-Controlled Interface Using Computer Vision and Python
Authors: Vedant Vardhan Rathour, Anant Agrawal
Abstract:
The project aims to provide a touchless, intuitive interface for human-computer interaction, enabling users to control their computer using hand gestures and voice commands. The system leverages advanced computer vision techniques, using the MediaPipe framework and OpenCV, to detect and interpret real-time hand gestures, transforming them into mouse actions such as clicking, dragging, and scrolling. Additionally, the integration of a voice assistant powered by the SpeechRecognition library allows seamless execution of tasks such as web searches, location navigation, and gesture control on the system through voice commands.
Keywords: gesture recognition, hand tracking, machine learning, convolutional neural networks
Procedia PDF Downloads 18
2137 Effect of Helium and Sulfur Hexafluoride Gas Inhalation on Voice Resonances
Authors: Pallavi Marathe
Abstract:
Voice is considered a unique biometric property of human beings. Unlike other biometric evidence, for example, fingerprints and retina scans, voice can be easily changed or mimicked. The present paper discusses how the inhalation of helium and sulfur hexafluoride (SF6) gas affects the voice formant frequencies, that is, the resonant frequencies of the vocal tract. Helium is a low-density gas; hence, sound travels through it at a higher speed than through air. In SF6, by contrast, sound travels at a lower speed than in air due to the gas's higher density. This raises the resonant frequencies of the voice in helium and lowers them in SF6. Results are presented with the help of the Praat software, which is used for voice analysis.
Keywords: voice formants, helium, sulfur hexafluoride, gas inhalation
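The direction of the formant shift follows directly from the ideal-gas speed of sound, c = sqrt(γRT/M): for a fixed vocal-tract geometry, resonant frequencies scale in proportion to c. A short numerical check; the gas constants are textbook values and the 500 Hz baseline formant is an assumed typical figure, not data from the paper:

```python
import math

R, T = 8.314, 293.15  # gas constant J/(mol*K), temperature ~20 C

def speed_of_sound(gamma, molar_mass_kg):
    """Ideal-gas speed of sound c = sqrt(gamma * R * T / M)."""
    return math.sqrt(gamma * R * T / molar_mass_kg)

c_air = speed_of_sound(1.40, 0.0290)   # air: diatomic mixture
c_he  = speed_of_sound(1.66, 0.0040)   # helium: light, monatomic
c_sf6 = speed_of_sound(1.10, 0.1460)   # SF6: heavy polyatomic

# Formant frequencies of a fixed tract scale with c.
f1_air = 500.0                  # assumed typical first formant in air, Hz
f1_he  = f1_air * c_he / c_air  # roughly 2.9x higher
f1_sf6 = f1_air * c_sf6 / c_air # roughly 0.4x, i.e. lower
```

The ratios (about 2.9 for helium, about 0.4 for SF6) match the familiar "high" helium voice and "deep" SF6 voice.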
Procedia PDF Downloads 125
2136 Environmentally Adaptive Acoustic Echo Suppression for Barge-in Speech Recognition
Authors: Jong Han Joo, Jung Hoon Lee, Young Sun Kim, Jae Young Kang, Seung Ho Choi
Abstract:
In this study, we propose a novel technique for acoustic echo suppression (AES) during speech recognition under barge-in conditions. Conventional AES methods based on spectral subtraction apply fixed weights to the echo path transfer function (EPTF) estimated at the current signal segment and to the EPTF estimated up to the previous time interval. We propose a new approach that adaptively updates the weight parameters in response to abrupt changes in the acoustic environment caused by background noise or double-talk. Furthermore, we devised a voice activity detector and an initial time-delay estimator for barge-in speech recognition in communication networks. The initial time delay is estimated using a log-spectral distance measure as well as cross-correlation coefficients. The experimental results show that the developed techniques can be successfully applied in barge-in speech recognition systems.
Keywords: acoustic echo suppression, barge-in, speech recognition, echo path transfer function, initial delay estimator, voice activity detector
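The paper's adaptive weighting rule is not given in the abstract, so the following is only a minimal sketch of one spectral-subtraction AES frame with a blended EPTF estimate; the function name, the fixed blend weight, and the toy spectra are assumptions (a full system would adapt the weight, e.g. lowering it during double-talk):

```python
import numpy as np

def aes_frame(mic_spec, ref_spec, eptf_prev, weight=0.9, floor=0.01):
    """One-frame spectral-subtraction echo suppression step.
    mic_spec / ref_spec: magnitude spectra of the microphone and
    far-end (loudspeaker) frames; eptf_prev: previous EPTF estimate."""
    eptf_now = mic_spec / (ref_spec + 1e-10)          # current-frame estimate
    eptf = weight * eptf_prev + (1 - weight) * eptf_now  # blended estimate
    echo_est = eptf * ref_spec                        # estimated echo spectrum
    # Subtract the echo, keeping a small spectral floor to avoid musical noise.
    cleaned = np.maximum(mic_spec - echo_est, floor * mic_spec)
    return cleaned, eptf

mic = np.array([1.0, 0.8, 0.5])
ref = np.array([0.9, 0.7, 0.1])
cleaned, eptf = aes_frame(mic, ref, eptf_prev=np.ones(3))
```

The returned `eptf` is carried into the next frame, which is what makes the estimate track slow changes in the echo path.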
Procedia PDF Downloads 375
2135 Features Dimensionality Reduction and Multi-Dimensional Voice-Processing Program to Parkinson Disease Discrimination
Authors: Djamila Meghraoui, Bachir Boudraa, Thouraya Meksen, M. Boudraa
Abstract:
Parkinson's disease is a pathology that involves characteristic perturbations in patients’ voices. This paper describes a proposed method that aims to diagnose persons with Parkinson's (PWP) by analyzing their voice signals online. First, threshold signal alterations are determined by the Multi-Dimensional Voice Program (MDVP). Principal Component Analysis (PCA) is then exploited to select the main voice principal components that are significantly affected in a patient. The decision phase is realized by a Multinomial Naive Bayes (MNB) classifier that categorizes an analyzed voice into one of two classes: healthy or PWP. The prediction accuracy achieved, reaching 98.8%, is very promising.
Keywords: Parkinson’s disease recognition, PCA, MDVP, multinomial Naive Bayes
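The PCA step described above can be sketched in a few lines of NumPy via the SVD of the centered feature matrix. The random synthetic matrix below is only a stand-in for the MDVP voice measures; the function name and dimensions are assumptions:

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors (rows of X) onto the top-k principal
    components, computed from the SVD of the centered data."""
    Xc = X - X.mean(axis=0)            # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T               # scores in the reduced space

rng = np.random.default_rng(0)
# 20 synthetic 6-dimensional "voice feature" vectors (stand-in for MDVP measures).
X = rng.normal(size=(20, 6))
Z = pca_reduce(X, 2)
```

The reduced scores `Z` (here 6 features down to 2) would then be handed to the Naive Bayes classifier for the healthy/PWP decision.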
Procedia PDF Downloads 279
2134 Identity Verification Based on Multimodal Machine Learning on Red Green Blue (RGB) Red Green Blue-Depth (RGB-D) Voice Data
Authors: LuoJiaoyang, Yu Hongyang
Abstract:
In this paper, we experimented with a new approach to multimodal identification using RGB, RGB-D, and voice data. The multimodal combination of RGB and voice data has been applied in tasks such as emotion recognition with good results and stability, and the same holds for identity recognition tasks. We believe that data from different modalities can enhance the effect of the model through mutual reinforcement. We extend the dual-modality setup to three modalities, aiming to improve the effectiveness of the network by increasing the number of modalities. We also implemented the single-modality identification systems separately, tested the data of these different modalities under clean and noisy conditions, and compared their performance with the multimodal model. In designing the multimodal model, we tried a variety of fusion strategies and finally chose the one with the best performance. The experimental results show that the performance of the multimodal system is better than that of any single modality, especially in dealing with noise, and the multimodal system achieves an average improvement of 5%.
Keywords: multimodal, three modalities, RGB-D, identity verification
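The abstract does not specify which fusion strategy won, so the following is only a sketch of one common baseline, weighted late fusion of per-modality identity scores; the function name, the equal weights, and the toy score vectors are assumptions:

```python
import numpy as np

def fuse_scores(score_lists, weights=None):
    """Weighted late fusion: average per-modality score vectors
    (one entry per candidate identity) into a single fused vector."""
    S = np.vstack(score_lists)                        # modalities x identities
    w = np.ones(len(score_lists)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()                                   # normalize the weights
    return w @ S

rgb   = np.array([0.2, 0.7, 0.1])   # hypothetical per-identity scores
depth = np.array([0.3, 0.6, 0.1])
voice = np.array([0.1, 0.8, 0.1])
fused = fuse_scores([rgb, depth, voice])
winner = int(np.argmax(fused))      # identity with the highest fused score
```

Under noise, a real system could down-weight the degraded modality instead of using equal weights, which is one way "mutual reinforcement" between modalities shows up in practice.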
Procedia PDF Downloads 71
2133 Comparing Sounds of the Singing Voice
Authors: Christel Elisabeth Bonin
Abstract:
This experiment aims to show that classical singing and belting have distinct singing qualities, whereas singing with a speaking voice has no singing quality. For this purpose, a female voice was recorded on four different pitches, singing the vowel 'a' with three different techniques: classically trained voice, belting voice, and speaking voice. The recordings were analyzed in the Praat software. The formants of each recorded tone were then compared to each other and put in relationship to the singer's formant. The visible results are taken as an indicator of the comparable sound qualities of a classically trained female voice and a belting female voice, concerning the concentration of overtones in F1 to F5, and of a lack of sound quality in the speaking voice for singing purposes. The results also show that classical singing and belting are both valuable vocal techniques for singing due to their richness of overtones, and that belting is not comparable to shouting or screaming. Singing with a speaking voice, in contrast, should not be called singing due to the lack of overtones, which means by definition that there is no musical tone.
Keywords: formants, overtone, singer's formant, singing voice, belting, classical singing, singing with the speaking voice
Procedia PDF Downloads 329
2132 Biometric Recognition Techniques: A Survey
Authors: Shabir Ahmad Sofi, Shubham Aggarwal, Sanyam Singhal, Roohie Naaz
Abstract:
Biometric recognition refers to the automatic recognition of individuals based on a feature vector (or vectors) derived from their physiological and/or behavioral characteristics. Biometric recognition systems should provide reliable personal recognition schemes to either confirm or determine the identity of an individual. These features are used to provide authentication for computer-based security systems. Applications of such a system include computer systems security, secure electronic banking, mobile phones, credit cards, secure access to buildings, and health and social services. By using biometrics, a person can be identified based on 'who she/he is' rather than 'what she/he has' (card, token, key) or 'what she/he knows' (password, PIN). In this paper, a brief overview of biometric methods, both unimodal and multimodal, and their advantages and disadvantages will be presented.
Keywords: biometric, DNA, fingerprint, ear, face, retina scan, gait, iris, voice recognition, unimodal biometric, multimodal biometric
Procedia PDF Downloads 756
2131 The Voice Rehabilitation Program Following Ileocolon Flap Transfer for Voice Reconstruction after Laryngectomy
Authors: Chi-Wen Huang, Hung-Chi Chen
Abstract:
Total laryngectomy affects swallowing, speech function, and quality of life in head and neck cancer patients. Voice restoration plays an important role in social activities and communication. Several techniques have been developed for voice restoration and reported to improve quality of life. However, the rehabilitation program for voice reconstruction using the ileocolon flap remains unclear. A retrospective study was done, with patients' data drawn from the medical records of those who underwent voice reconstruction by ileocolon flap after laryngectomy between 2010 and 2016. All of them were trained to swallow first; then, voice rehabilitation was started. The voice outcome was evaluated after 6 months using a 4-point scoring scale. In our results, 9.8% of patients could produce a voice clear enough for everyone to understand their speech, 61% could be understood well by family and friends, 20.2% could only talk with family, and 9% had difficulty being understood. Moreover, 57% of patients did not need a second surgery, while in 43% the voice was made clear by a second surgery. In this study, we demonstrated that the rehabilitation program after voice reconstruction with ileocolon flap for post-laryngectomy patients is important because the anatomical structure is different from that of the normal larynx.
Keywords: post-laryngectomy, ileocolon flap, rehabilitation, voice reconstruction
Procedia PDF Downloads 159
2130 Patient-Friendly Hand Gesture Recognition Using AI
Authors: K. Prabhu, K. Dinesh, M. Ranjani, M. Suhitha
Abstract:
During the difficult times of COVID, hospitalized patients often found it hard to convey what they wanted or needed to the attendee, and sometimes no attendee was present. In such cases, patients can use simple hand gestures to control electrical appliances (for example, a zero-watt bulb) and three other gestures for voice note intimation. In this AI-based hand recognition project, a NodeMCU is used for the control action of the relay; it is connected to Firebase for storing the value in the cloud and is interfaced with the Python code via a Raspberry Pi. For three hand gestures, a voice clip is added for intimation to the attendee. This is done with the help of Google's text-to-speech and the built-in audio file option on the Raspberry Pi 4. All five gestures are detected when shown via the webcam, which is placed for gesture detection. A personal computer is used for displaying the gestures and for running the code in the Raspberry Pi Imager.
Keywords: NodeMCU, AI technology, gesture, patient
Procedia PDF Downloads 168
2129 Developed Text-Independent Speaker Verification System
Authors: Mohammed Arif, Abdessalam Kifouche
Abstract:
Speech is a very convenient way of communication between people and machines, and it conveys information about the identity of the talker. Since speaker recognition technology increasingly secures our everyday lives, the objective of this paper is to develop two automatic text-independent speaker verification (TI-SV) systems using low-level spectral features and machine learning methods: (i) the first system is based on a support vector machine (SVM), which is widely used in voice signal processing for speaker recognition, i.e., verifying the identity of a speaker based on voice characteristics; and (ii) the second is based on a Gaussian Mixture Model (GMM) with a Universal Background Model (UBM), combining functions from different resources to complement the SVM-based system.
Keywords: speaker verification, text-independent, support vector machine, Gaussian mixture model, cepstral analysis
Procedia PDF Downloads 58
2128 The Effect of the Hemispheres of the Brain and the Tone of Voice on Persuasion
Authors: Rica Jell de Laza, Jose Alberto Fernandez, Andrea Marie Mendoza, Qristin Jeuel Regalado
Abstract:
This study investigates whether participants experience different levels of persuasion depending on the hemisphere of the brain and the tone of voice. The experiment was performed on 96 volunteer undergraduate students taking an introductory course in psychology. The participants took part in a 2 x 3 (Hemisphere: left, right x Tone of Voice: positive, neutral, negative) mixed factorial design measuring how much a person was persuaded. Results showed that neither the hemisphere of the brain nor the tone of voice significantly affected persuasion individually, and there was no interaction effect. Therefore, the hemispheres of the brain and the tone of voice employed play insignificant roles in persuading a person.
Keywords: dichotic listening, brain hemisphere, tone of voice, persuasion
Procedia PDF Downloads 309
2127 Experimental Study on the Heat Transfer Characteristics of the 200W Class Woofer Speaker
Authors: Hyung-Jin Kim, Dae-Wan Kim, Moo-Yeon Lee
Abstract:
The objective of this study is to experimentally investigate the heat transfer characteristics of 200 W class woofer speaker units under input voice signals. The temperature and heat transfer characteristics of the 200 W class woofer speaker unit were experimentally tested with several input voice signals of 1500 Hz, 2500 Hz, and 5000 Hz, respectively. From the experiments, it can be observed that the temperature of the woofer speaker unit, including the voice-coil part, increases as the frequency of the input voice signal decreases. Also, the temperature difference between the measured points of the voice coil increases as the input signal frequency decreases. In addition, the heat transfer of the woofer speaker with the 1500 Hz input signal is 40% higher than with the 5000 Hz input signal at a measuring time of 200 seconds. It can be concluded from the experiments that the temperature initially increases rapidly with time and, after a certain period, increases exponentially. During this time-dependent temperature change, the high-frequency signal is more stable than the low-frequency signal.
Keywords: heat transfer, temperature, voice coil, woofer speaker
Procedia PDF Downloads 360
2126 Automatic Speech Recognition Systems Performance Evaluation Using Word Error Rate Method
Authors: João Rato, Nuno Costa
Abstract:
Human verbal communication is a two-way process that requires mutual understanding. This kind of communication, also called dialogue, besides occurring between human agents, can also be performed between human agents and machines. Interaction between humans and machines by means of natural language plays an important role in improving the communication between them. To assess the performance of some speech recognition systems, this document shows the results of tests accomplished according to the Word Error Rate evaluation method. Besides that, a set of information linked to man-machine communication systems is also given. After this work was done, conclusions were drawn regarding the speech recognition systems, among which their poor performance at interpreting voice in noisy environments can be mentioned.
Keywords: automatic speech recognition, man-machine conversation, speech recognition, spoken dialogue systems, word error rate
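The Word Error Rate metric the paper evaluates with has a standard definition: the word-level edit distance (substitutions + deletions + insertions) between the reference transcript and the recognizer output, divided by the reference length. A self-contained sketch; the example sentences are illustrative, not from the paper's test set:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed via dynamic-programming edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[-1][-1] / len(ref)

# One substitution ("sat" -> "sit") and one deletion ("the") in 6 words.
wer = word_error_rate("the cat sat on the mat", "the cat sit on mat")
```

Note that WER can exceed 1.0 when the recognizer inserts many spurious words, which is why it is an error rate rather than an accuracy.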
Procedia PDF Downloads 322
2125 The Functions of the Student Voice and Student-Centred Teaching Practices in Classroom-Based Music Education
Authors: Sofia Douklia
Abstract:
The present paper aims to present the important role of the 'student voice' and of the music teacher in the classroom, both of which contribute to more student-centered music education. The aim is to focus on the functions of the student voice across the music spectrum as it emerges in the music classroom, and on the methodologies and techniques the teacher uses there. The music curriculum, the principles of student-centered music education, and the role of students and teachers as music ambassadors have been considered the major musical parameters of the student voice. The student voice is a noteworthy aspect of a student-centered education, and all teachers should consider and promote its existence in their classrooms.
Keywords: student's voice, student-centered education, music ambassadors, music teachers
Procedia PDF Downloads 95
2124 Unsupervised Reciter Recognition Using Gaussian Mixture Models
Authors: Ahmad Alwosheel, Ahmed Alqaraawi
Abstract:
This work proposes an unsupervised, text-independent probabilistic approach to recognizing the voices of Quran reciters. It is an accurate approach that works in real-time applications and does not require prior information about reciter models. It has two phases: in the training phase, the reciters' acoustic features are modeled using Gaussian Mixture Models, while in the testing phase, an unlabeled reciter's acoustic features are examined against the GMM models. Using this approach, highly accurate results are achieved with efficient computation time.
Keywords: Quran, speaker recognition, reciter recognition, Gaussian Mixture Model
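The testing-phase scoring step described above amounts to evaluating the log-likelihood of the unlabeled features under each reciter's GMM and picking the best. A minimal 1-D NumPy sketch; the two toy models and the scalar features are illustrative assumptions (real systems model multi-dimensional cepstral features):

```python
import numpy as np

def gmm_loglik(x, weights, means, variances):
    """Total log-likelihood of scalar features x under a 1-D Gaussian
    mixture; the reciter whose GMM scores highest is selected."""
    x = np.asarray(x, dtype=float)[:, None]          # frames x 1
    log_comp = (np.log(weights)
                - 0.5 * np.log(2 * np.pi * variances)
                - 0.5 * (x - means) ** 2 / variances)  # frames x components
    return float(np.logaddexp.reduce(log_comp, axis=1).sum())

# Two hypothetical reciter models over a scalar acoustic feature.
model_a = (np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([0.5, 0.5]))
model_b = (np.array([0.5, 0.5]), np.array([4.0, 6.0]), np.array([0.5, 0.5]))
features = np.array([-0.9, 1.1, 0.2])                # unlabeled test frames
scores = [gmm_loglik(features, *m) for m in (model_a, model_b)]
best = int(np.argmax(scores))                        # 0 -> model_a wins
```

Summing per-frame log-likelihoods assumes frames are independent, which is the standard simplification in GMM speaker recognition.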
Procedia PDF Downloads 382
2123 Voice over IP Quality of Service Evaluation for Mobile Ad Hoc Network in an Indoor Environment for Different Voice Codecs
Authors: Lina Abou Haibeh, Nadir Hakem, Ousama Abu Safia
Abstract:
In this paper, the performance and quality of Voice over IP (VoIP) calls carried over a Mobile Ad Hoc Network (MANET) with a number of SIP nodes registered on a SIP proxy are analyzed. The testing campaigns are carried out in an indoor corridor structure with well-defined channel characteristics and models for the different voice codecs G.711, G.727, and G.723.1, which are commonly used in VoIP technology. Call quality is evaluated using four Quality of Service (QoS) metrics, namely mean opinion score (MOS), jitter, delay, and packet loss. The relationship between the wireless channel's parameters and the optimum codec is well established. According to the experimental results, the voice codec G.711 has the best performance for the proposed MANET topology.
Keywords: wireless channel modelling, VoIP, MANET, session initiation protocol (SIP), QoS
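Three of the four QoS metrics named above can be computed directly from per-packet send and receive timestamps. A minimal sketch; the function name and the toy timestamps are assumptions, and the jitter here is a simple mean absolute difference of successive transit times (a simplification of the smoothed RFC 3550 estimator):

```python
def qos_metrics(send_times, recv_times):
    """Mean one-way delay, mean jitter, and packet loss rate.
    Lost packets are marked with a recv_time of None."""
    transits = [r - s for s, r in zip(send_times, recv_times) if r is not None]
    loss = 1 - len(transits) / len(send_times)      # fraction never received
    delay = sum(transits) / len(transits)           # mean one-way delay
    # Jitter: mean absolute difference between successive transit times.
    jitter = (sum(abs(b - a) for a, b in zip(transits, transits[1:]))
              / max(len(transits) - 1, 1))
    return delay, jitter, loss

send = [0.00, 0.02, 0.04, 0.06]     # packet send times, seconds
recv = [0.05, 0.08, None, 0.11]     # third packet lost
delay, jitter, loss = qos_metrics(send, recv)
```

MOS, the fourth metric, is either collected from listeners or estimated from delay and loss via a model such as the ITU-T E-model, which is beyond this sketch.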
Procedia PDF Downloads 229
2122 Reconceptualising the Voice of Children in Child Protection
Authors: Sharon Jackson, Lynn Kelly
Abstract:
This paper proposes a conceptual review of the interdisciplinary literature which has theorised the concept of ‘children’s voices’. The primary aim is to identify and consider the theoretical relevance of conceptual thought on ‘children’s voices’ for research and practice in child protection contexts. Attending to the ‘voice of the child’ has become a core principle of social work practice in contemporary child protection contexts. Discourses of voice permeate the legislative, policy and practice frameworks of child protection practices within the UK and internationally. Voice is positioned within a ‘child-centred’ moral imperative to ‘hear the voices’ of children and take their preferences and perspectives into account. This practice is now considered to be central to working in a child-centred way. The genesis of this call to voice is revealed through sociological analysis of twentieth-century child welfare reform as rooted, inter alia, in intersecting political, social and cultural discourses which have situated children and childhood as sites of state intervention, as enshrined in the 1989 United Nations Convention on the Rights of the Child, ratified by the UK government in 1991, and more specifically in Article 12 of the convention. From a policy and practice perspective, the professional ‘capturing’ of children’s voices has come to saturate child protection practice. This has incited a stream of directives, resources, advisory publications and ‘how-to’ guides which attempt to articulate practice methods to ‘listen’, ‘hear’ and, above all, ‘capture’ the ‘voice of the child’. The idiom ‘capturing the voice of the child’ is frequently invoked within the literature to express the requirements of the child-centred practice task to be accomplished.
Despite the centrality of voice, and an obsession with ‘capturing’ voices, evidence from research, inspection processes, serious case reviews, and child abuse and death inquiries has consistently highlighted professional neglect of ‘the voice of the child’. Notable research studies have highlighted the relative absence of the child’s voice in social work assessment practices, a troubling lack of meaningful engagement with children, and the need to examine communicative practices in child protection contexts more thoroughly. As a consequence, the project of capturing ‘the voice of the child’ has intensified, and there has been an increasing focus on developing methods and professional skills to attend to voice. This has been guided by a recognition that professionals often lack the skills and training to engage with children in age-appropriate ways. We argue, however, that the problem with ‘capturing’ and [re]presenting ‘voice’ in child protection contexts is, more fundamentally, a failure to adequately theorise the concept of ‘voice’ in the ‘voice of the child’. For the most part, ‘the voice of the child’ incorporates psychological conceptions of child development. While these concepts are useful in the context of direct work with children, they fail to consider other strands of sociological thought, which position ‘the voice of the child’ within an agentic paradigm to emphasise the active agency of the child.
Keywords: child-centered, child protection, views of the child, voice of the child
Procedia PDF Downloads 139
2121 On Voice in English: An Awareness Raising Attempt on Passive Voice
Authors: Meral Melek Unver
Abstract:
This paper aims to explore ways to help English as a Foreign Language (EFL) learners notice and revise voice in English, and to raise their awareness of when and how to use active and passive voice to convey meaning in their written and spoken work. Because passive voice is commonly preferred in certain genres, such as academic essays and news reports, despite current trends promoting active voice, it is essential for learners to be fully aware of the meaning, use, and form of passive voice in order to communicate better. The participants in the study are 22 EFL learners taking a one-year intensive English course at a university, who will receive English-medium education (EMI) in their departmental studies in the following academic year. Data from students' written and oral work was collected over a four-week period, and misuse or inaccurate use of passive voice was identified. The analysis of the data proved that they failed to make sensible decisions about when and how to use passive voice, partly because of the differences between their mother tongue and English, and partly because they were not aware that active and passive voice are not always interchangeable. To overcome this, a Test-Teach-Test shape lesson, as opposed to a Present-Practice-Produce shape lesson, was designed and implemented to raise their awareness of the decisions they needed to make in choosing the voice, and to help them notice the meaning and use of passive voice through concept checking questions. The results first suggested that awareness-raising activities on the meaning and use of voice in English are beneficial for obtaining accurate and meaningful outcomes from students. Also, helping students notice and re-notice passive voice through carefully designed activities helps them internalize its use and form.
As a result of the study, a number of activities are suggested for revising and noticing passive voice, as well as a short questionnaire to help EFL teachers self-reflect on their teaching.
Keywords: voice in English, test-teach-test, passive voice, English language teaching
Procedia PDF Downloads 222
2120 Phone Number Spoofing Attack in VoLTE 4G
Authors: Joo-Hyung Oh
Abstract:
The number of users of 4G VoLTE (Voice over LTE) services carried over LTE data networks is growing rapidly. VoLTE, based on an all-IP network, enables clearer and higher-quality voice calls than 3G. It does, however, pose new challenges: a voice call carried over an IP network is vulnerable to security threats such as wiretapping and forged or falsified information. In particular, stealing other users’ phone numbers and forging or falsifying the call request messages of outgoing VoLTE voice calls can result in considerable losses, including fraudulent user billing and voice phishing against acquaintances. This paper focuses on the threat of caller phone number spoofing in VoLTE and on countermeasure technologies as safety measures for mobile communication networks.
Keywords: LTE, 4G, VoLTE, phone number spoofing
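The spoofing threat described in this abstract hinges on the caller identity carried in the SIP INVITE that sets up a VoLTE call. As a rough illustration only (not the authors' countermeasure), a network element can cross-check the user-supplied `From` header against the network-asserted `P-Asserted-Identity` header defined in RFC 3325. The header names are standard SIP, but the parsing, the decision logic, and the phone numbers below are simplified assumptions for the sketch:

```python
import re

def extract_number(header_value):
    """Pull the user part (phone number) out of a SIP URI such as
    '"Alice" <sip:+15551234567@operator.example>'."""
    match = re.search(r"sips?:\+?([0-9]+)@", header_value)
    return match.group(1) if match else None

def is_spoofed(invite_headers):
    """Flag an INVITE whose user-supplied From number disagrees with the
    network-asserted P-Asserted-Identity number; None means unverifiable."""
    claimed = extract_number(invite_headers.get("From", ""))
    asserted = extract_number(invite_headers.get("P-Asserted-Identity", ""))
    if claimed is None or asserted is None:
        return None  # no network-asserted identity to check against
    return claimed != asserted

# A forged INVITE: the caller claims one number, the network asserts another.
headers = {
    "From": '"Mallory" <sip:+15550001111@operator.example>',
    "P-Asserted-Identity": "<sip:+15559992222@operator.example>",
}
print(is_spoofed(headers))  # -> True
```

In a real IMS core this check (and any header rewriting) would happen at a trusted boundary such as the P-CSCF, rather than trusting headers presented by the handset.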
Procedia PDF Downloads 433
2119 Voice Signal Processing and Coding in MATLAB Generating a Plasma Signal in a Tesla Coil for a Security System
Authors: Juan Jimenez, Erika Yambay, Dayana Pilco, Brayan Parra
Abstract:
This paper presents an investigation of voice signal processing and coding using MATLAB, with the objective of generating a plasma signal on a Tesla coil within a security system. The approach focuses on using advanced voice signal processing techniques to encode and modulate the audio signal, which is then amplified and applied to a Tesla coil. The result is a striking visual effect of voice-controlled plasma, with specific applications in security systems. The article explores the technical aspects of voice signal processing, the generation of the plasma signal, and its relationship to security. The implications and creative potential of this technology are discussed, highlighting its relevance at the forefront of research in signal processing and visual effect generation in the field of security systems.
Keywords: voice signal processing, voice signal coding, MATLAB, plasma signal, Tesla coil, security system, visual effects, audiovisual interaction
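The abstract does not detail the authors' MATLAB encoding, but so-called "singing" Tesla coils are commonly driven by converting an audio signal into an on/off pulse train that gates the coil's interrupter. The sketch below, in Python/NumPy rather than MATLAB, shows one generic pulse-width-modulation scheme under that assumption; the carrier frequency, normalization, and toy input are illustrative choices, not values from the paper:

```python
import numpy as np

def voice_to_pulse_train(voice, fs, carrier_hz=200.0):
    """Encode an audio waveform as an on/off pulse train by comparing its
    normalized envelope against a sawtooth carrier (pulse-width modulation).

    voice      : 1-D float array of audio samples
    fs         : sample rate in Hz
    carrier_hz : PWM carrier (interrupter) frequency in Hz
    """
    # Normalize to [0, 1] so amplitude maps directly to duty cycle.
    peak = np.abs(voice).max()
    envelope = np.abs(voice) / peak if peak > 0 else np.zeros_like(voice)
    # Sawtooth carrier sweeping [0, 1) at the interrupter frequency.
    t = np.arange(len(voice)) / fs
    saw = (t * carrier_hz) % 1.0
    # Pulse is "on" whenever the envelope exceeds the carrier.
    return (envelope > saw).astype(np.uint8)

# Toy input: 10 ms of a 440 Hz tone sampled at 8 kHz.
fs = 8000
t = np.arange(80) / fs
tone = np.sin(2 * np.pi * 440 * t)
pulses = voice_to_pulse_train(tone, fs)
print(pulses.shape, pulses.min(), pulses.max())  # -> (80,) 0 1
```

The duty cycle tracks the voice envelope, so louder passages hold the interrupter (and hence the plasma arc) on for a larger fraction of each carrier period.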
Procedia PDF Downloads 95
2118 Phone Number Spoofing Attack in VoLTE
Authors: Joo-Hyung Oh, Sekwon Kim, Myoungsun Noh, Chaetae Im
Abstract:
The number of users of 4G VoLTE (Voice over LTE) services carried over LTE data networks is growing rapidly. VoLTE, based on an all-IP network, enables clearer and higher-quality voice calls than 3G. It does, however, pose new challenges: a voice call carried over an IP network is vulnerable to security threats such as wiretapping and forged or falsified information. In particular, stealing other users’ phone numbers and forging or falsifying the call request messages of outgoing VoLTE voice calls can result in considerable losses, including fraudulent user billing and voice phishing against acquaintances. This paper focuses on the threat of caller phone number spoofing in VoLTE and on countermeasure technologies as safety measures for mobile communication networks.
Keywords: LTE, 4G, VoLTE, phone number spoofing
Procedia PDF Downloads 524
2117 The Effect of Voice Recognition Dictation Software on Writing Quality in Third Grade Students: An Action Research Study
Authors: Timothy J. Grebec
Abstract:
This study investigated whether using voice dictation software (i.e., Google Voice Typing) has an impact on student writing quality. The research took place in a third-grade general education classroom in a suburban school setting. Because the study involved minors, all data were encrypted and de-identified before analysis. The students completed a series of writings before the intervention began to determine their attitudes toward, and skill level in, writing. During the intervention phase, the students were introduced to the voice dictation software, given an opportunity to practice using it, and then assigned writing prompts to be completed with the software. The prompts written by the nineteen student participants, together with surveys of student opinions on writing, established a baseline for the study. The data showed that using the dictation software resulted in a 34% increase in response quality, measured against the Pennsylvania State Standardized Assessment (PSSA) writing guidelines. Of particular interest was the increase in students' proficiency in demonstrating mastery of English language conventions and in elaborating on content. Although this type of research is relatively new, it has the potential to reshape the strategies educators have at their disposal when instructing students in written language.
Keywords: educational technology, accommodations, students with disabilities, writing instruction, 21st century education
Procedia PDF Downloads 75