Search results for: audio description
1269 Spatial Audio Player Using Musical Genre Classification
Authors: Jun-Yong Lee, Hyoung-Gook Kim
Abstract:
In this paper, we propose a smart music player that combines musical genre classification with spatial audio processing. The musical genre is classified by content analysis of the musical segment detected in the audio stream. In parallel with the classification, spatial audio quality is achieved by adding artificial reverberation in a virtual acoustic space to the input mono sound. The spatial sound is then boosted, when played back, with frequency gains chosen according to the musical genre. Experiments measured the accuracy of detecting the musical segment in the audio stream and of its genre classification, and a listening test evaluated the spatial audio processing based on the virtual acoustic space.
Keywords: automatic equalization, genre classification, music segment detection, spatial audio processing
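The abstract outlines two processing stages: artificial reverberation in a virtual acoustic space and a genre-dependent frequency boost. The sketch below shows one plausible way to realise both stages; the impulse response, genre table, and gain values are illustrative assumptions, not the authors' parameters.

```python
# Sketch of the two stages: synthetic reverberation plus a genre-dependent band boost.
import numpy as np
from scipy.signal import butter, lfilter, fftconvolve

FS = 44100

# Hypothetical per-genre EQ settings: (low cut-off Hz, high cut-off Hz, gain in dB).
GENRE_EQ = {"rock": (60, 250, 6.0), "classical": (2000, 8000, 3.0)}

def artificial_reverb(mono, rt60=0.8):
    """Convolve with a synthetic exponentially decaying noise tail."""
    n = int(rt60 * FS)
    ir = np.random.randn(n) * np.exp(-6.9 * np.arange(n) / n)   # ~60 dB decay over rt60
    wet = fftconvolve(mono, ir)[: len(mono)]
    return 0.7 * mono + 0.3 * wet / (np.max(np.abs(wet)) + 1e-12)

def genre_boost(signal, genre):
    """Boost the genre-dependent band by the configured gain."""
    lo, hi, gain_db = GENRE_EQ[genre]
    b, a = butter(2, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    band = lfilter(b, a, signal)
    return signal + (10 ** (gain_db / 20) - 1.0) * band

mono = np.random.randn(FS)                # stand-in for a detected music segment
out = genre_boost(artificial_reverb(mono), "rock")
```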
Procedia PDF Downloads 429
1268 Online Delivery Approaches of Post Secondary Virtual Inclusive Media Education
Authors: Margot Whitfield, Andrea Ducent, Marie Catherine Rombaut, Katia Iassinovskaia, Deborah Fels
Abstract:
In North America, learning how to create inclusive media, such as closed captioning (CC) and audio description (AD), is restricted to proprietary, company-based training in the private sector. We are delivering (through synchronous and asynchronous online learning) the first Canadian post-secondary, practice-based continuing education course package in inclusive media for broadcast production and processes. Despite the prevalence of CC and AD taught within the field of translation studies in Europe, North America has no comparable field of study. This novel approach to audio visual translation (AVT) education develops evidence-based methodology innovations, stemming from user study research with blind/low vision and Deaf/hard of hearing audiences for television and theatre, undertaken at Ryerson University. Knowledge outcomes from the courses include a) Understanding how CC/AD fit within disability/regulatory frameworks in Canada. b) Knowledge of how CC/AD could be employed in the initial stages of production development within broadcasting. c) Writing and/or speaking techniques designed for media. d) Hands-on practice in captioning re-speaking techniques and open source technologies, or in AD techniques. e) Understanding of audio production technologies and editing techniques. The case study of the curriculum development and deployment, involving first-time online course delivery from academic and practitioner-based instructors in the introductory Captioning and Audio Description courses (CDIM 101 and 102), will compare the two instructors' approaches to learning design, including the ratio of synchronous and asynchronous classroom time and technological engagement tools on meeting software platforms, such as breakout rooms and polling. Student reception of these two approaches will be analysed using qualitative thematic and quantitative survey analysis. Thus far, anecdotal conversations with students suggest that they prefer synchronous to asynchronous learning within our hands-on online course delivery method.
Keywords: inclusive media theory, broadcasting practices, AVT post secondary education, respeaking, audio description, learning design, virtual education
Procedia PDF Downloads 183
1267 Mathematical Model That Using Scrambling and Message Integrity Methods in Audio Steganography
Authors: Mohammed Salem Atoum
Abstract:
The success of audio steganography depends on ensuring the imperceptibility of the embedded message in the stego file and on withstanding any intentional or unintentional degradation of the message (robustness). Audio steganographic schemes that utilize the LSB of the audio stream to embed the message have gained a lot of popularity over the years for meeting perceptual transparency, robustness, and capacity requirements. This research proposes an XLSB technique in order to circumvent the weaknesses observed in the LSB technique. A scrambling technique is introduced in two steps: partitioning the message into blocks, followed by permuting the blocks in order to confuse the contents of the message. The message is embedded in an MP3 audio sample. After extracting the message, the permutation codebook is used to re-order it into its original form. Md5sum and SHA-256 are used to verify whether the message was altered during transmission. Experimental results show that XLSB performs better than LSB.
Keywords: XLSB, scrambling, audio steganography, security
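A minimal sketch of the pipeline the abstract outlines: partition the message into blocks, permute the blocks with a keyed permutation (the scrambling step), append a SHA-256 digest for integrity checking, and write the bits into the least-significant bits of audio samples. The block size, the seeded permutation standing in for the permutation codebook, and the raw-sample embedding (rather than the paper's MP3 handling) are simplified assumptions, not the authors' XLSB scheme.

```python
import hashlib
import numpy as np

def scramble(message: bytes, block: int, seed: int) -> bytes:
    """Partition into blocks and permute them; the seed plays the role of the codebook."""
    blocks = [message[i:i + block] for i in range(0, len(message), block)]
    order = np.random.default_rng(seed).permutation(len(blocks))
    return b"".join(blocks[i] for i in order)

def embed_lsb(samples: np.ndarray, payload: bytes) -> np.ndarray:
    """Append a SHA-256 digest and overwrite the sample LSBs with the resulting bits."""
    digest = hashlib.sha256(payload).digest()
    bits = np.unpackbits(np.frombuffer(payload + digest, dtype=np.uint8))
    stego = samples.copy()
    stego[: len(bits)] = (stego[: len(bits)] & ~1) | bits
    return stego

samples = (np.random.randn(20000) * 1000).astype(np.int16)   # stand-in cover audio
stego = embed_lsb(samples, scramble(b"secret message", block=4, seed=42))
```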
Procedia PDF Downloads 363
1266 Freedom of Expression and Its Restriction in Audiovisual Media
Authors: Sevil Yildiz
Abstract:
Audio visual communication is a type of collective expression. Collective expression activity informs the masses, gives direction to opinions, and establishes public opinion. Due to these characteristics, audio visual communication must be subjected to special restrictions. This has been stipulated in both the Constitution and the European Human Rights Agreement. This paper aims to review freedom of expression and its restriction in audio visual media. For this purpose, the authorisation of the Radio and Television Supreme Council to impose sanctions, as an independent administrative authority empowered to regulate the field of audio visual communication, is reviewed with regard to freedom of expression and its limits.
Keywords: audio visual media, freedom of expression, its limits, radio and television supreme council
Procedia PDF Downloads 326
1265 The Influence of Audio on Perceived Quality of Segmentation
Authors: Silvio Ricardo Rodrigues Sanches, Bianca Cogo Barbosa, Beatriz Regina Brum, Cléber Gimenez Corrêa
Abstract:
To evaluate the quality of a segmentation algorithm, the authors use subjective or objective metrics. Although subjective metrics are more accurate than objective ones, objective metrics do not require user feedback to test an algorithm; they require subjective experiments only during their development. Subjective experiments typically display to users videos (generated from frames with segmentation errors) that simulate the environment of an application domain, and this user feedback is crucial information for metric definition. In the subjective experiments used to develop some state-of-the-art metrics for testing segmentation algorithms, the videos displayed during the experiments did not contain audio. However, audio is an essential component in applications such as videoconferencing and augmented reality. If audio influences the user's perception, using only videos without audio in subjective experiments can compromise the efficiency of an objective metric generated from the data of these experiments. This work aims to identify whether audio influences the user's perception of segmentation quality in background substitution applications with audio. The proposed approach used a subjective method based on formal video quality assessment methods. The results showed that audio influences the quality of segmentation perceived by a user.
Keywords: background substitution, influence of audio, segmentation evaluation, segmentation quality
Procedia PDF Downloads 116
1264 Audio Information Retrieval in Mobile Environment with Fast Audio Classifier
Authors: Bruno T. Gomes, José A. Menezes, Giordano Cabral
Abstract:
With the popularity of smartphones, mobile apps have emerged to meet diverse needs; however, the resources at their disposal are limited, either by the hardware, due to low computing power, or by the software, which does not have the same robustness as the desktop environment. For example, automatic audio classification (AC) tasks, a subarea of musical information retrieval (MIR), require fast processing and a good success rate, but the mobile platform has limited computing power and the best AC tools are only available for desktop. To solve these problems, the fast classifier adapts the most widespread MIR technologies to mobile environments, seeking a balance in terms of speed and robustness. In the end, we found that it is possible to enjoy the best of MIR in mobile environments. This paper presents the results obtained and the difficulties encountered.
Keywords: audio classification, audio extraction, environment mobile, musical information retrieval
Procedia PDF Downloads 544
1263 Standardized Description and Modeling Methods of Semiconductor IP Interfaces
Authors: Seongsoo Lee
Abstract:
IP reuse is an effective design methodology for modern SoC design, reducing effort and time. However, the description and modeling methods of IP interfaces differ from one IP designer to another. In this paper, standardized description and modeling methods for IP interfaces are proposed. They consist of 11 items: IP information, model provision, data type, description level, interface information, port information, signal information, protocol information, modeling level, modeling information, and source file. The proposed description and modeling methods enable easy understanding, simulation, verification, and modification in IP reuse.
Keywords: interface, standardization, description, modeling, semiconductor IP
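As an illustration of the kind of record such a standard implies, the following hypothetical example covers the 11 items the abstract lists; the field names and values are invented for illustration and are not the paper's schema.

```python
# Hypothetical interface description record covering the 11 listed items.
ip_interface_description = {
    "ip_information":        {"name": "uart_ctrl", "vendor": "example", "version": "1.2"},
    "model_provision":       "RTL + SystemC",
    "data_type":             "std_logic_vector",
    "description_level":     "register-transfer",
    "interface_information": {"bus": "APB", "clock": "pclk", "reset": "presetn"},
    "port_information":      [{"name": "paddr", "width": 12, "dir": "in"}],
    "signal_information":    {"paddr": "peripheral address bus"},
    "protocol_information":  "AMBA APB handshake",
    "modeling_level":        "cycle-accurate",
    "modeling_information":  {"simulator": "any HDL simulator", "testbench": True},
    "source_file":           "uart_ctrl_if.xml",
}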
Procedia PDF Downloads 502
1262 Genetic Algorithms for Feature Generation in the Context of Audio Classification
Authors: José A. Menezes, Giordano Cabral, Bruno T. Gomes
Abstract:
Choosing good features is an essential part of machine learning, and recent techniques aim to automate this process. For instance, feature learning intends to learn the transformation of raw data into a representation useful for machine learning tasks. In automatic audio classification tasks this is interesting, since the audio, usually complex information, needs to be transformed into a computationally convenient input to process. Another technique tries to generate features by searching a feature space. Genetic algorithms, for instance, have been used to generate audio features by combining or modifying them. We find this approach particularly interesting and, despite the undeniable advances of feature learning approaches, we wanted to take a step forward in the use of genetic algorithms to find audio features, combining them with more conventional methods, like PCA, and inserting search control mechanisms, such as constraints over a confusion matrix. This work presents the results obtained on particular audio classification problems.
Keywords: feature generation, feature learning, genetic algorithm, music information retrieval
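A minimal sketch of this kind of search, assuming a toy feature matrix: a genetic algorithm evolves feature subsets, scores each subset with a classifier, and penalises individuals whose confusion matrix shows a poorly handled class. Population size, mutation rate, and the worst-class penalty are illustrative choices, not the authors' configuration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Mean per-class recall, with a penalty when any class falls below 30% recall."""
    if mask.sum() == 0:
        return 0.0
    pred = cross_val_predict(KNeighborsClassifier(), X[:, mask.astype(bool)], y, cv=3)
    cm = confusion_matrix(y, pred)
    per_class_recall = np.diag(cm) / cm.sum(axis=1)
    return per_class_recall.mean() - 0.5 * (per_class_recall.min() < 0.3)

def evolve(X, y, n_pop=20, n_gen=15):
    n_feat = X.shape[1]
    pop = rng.integers(0, 2, size=(n_pop, n_feat))
    for _ in range(n_gen):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[-n_pop // 2:]]                      # selection
        children = np.vstack([np.hstack([a[:n_feat // 2], b[n_feat // 2:]])
                              for a, b in zip(parents, parents[::-1])])      # one-point crossover
        children = children ^ (rng.random(children.shape) < 0.05)            # bit-flip mutation
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(ind, X, y) for ind in pop])]

X, y = rng.normal(size=(120, 12)), rng.integers(0, 3, 120)   # toy "audio features" and labels
best_mask = evolve(X, y)
```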
Procedia PDF Downloads 434
1261 Mood Recognition Using Indian Music
Authors: Vishwa Joshi
Abstract:
The study of mood recognition in the field of music has gained a lot of momentum in recent years, with machine learning and data mining techniques and many audio features contributing considerably to analyzing and identifying the relation between mood and music. In this paper, we carry this idea forward and make an effort to build a system for automatic recognition of the mood underlying audio song clips by mining their audio features; we evaluate several data classification algorithms in order to learn, train, and test a model describing the moods of these audio songs, and we develop an open-source framework. Before classification, preprocessing and feature extraction phases are necessary for removing noise and gathering features, respectively.
Keywords: music, mood, features, classification
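A minimal sketch of the classification stage such a system needs, assuming features have already been extracted: preprocessing followed by a conventional classifier evaluated by cross-validation. The random feature matrix, the mood labels, and the choice of an SVM are illustrative assumptions, not the paper's framework.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 30))                                  # e.g. timbre/rhythm features per clip
y = rng.choice(["happy", "sad", "calm", "excited"], size=200)   # assumed mood labels

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(model, X, y, cv=5).mean())                # mean accuracy over 5 folds
```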
Procedia PDF Downloads 495
1260 Musical Tesla Coil Controlled by an Audio Signal Processed in Matlab
Authors: Sandra Cuenca, Danilo Santana, Anderson Reyes
Abstract:
The following project is based on the manipulation of audio signals through the Matlab software. An audio signal is modified in Matlab, and the result, taken from the auxiliary port of the computer, is passed through a signal amplifier whose output is connected to a Tesla coil. The coil behaves like a vumeter: the flashes at its output increase and decrease in intensity depending on the audio signal in the computer and on the voltage source driving it. The amplified signal then reaches the Tesla coil and is shown in the plasma sphere with the respective flashes; this activation is governed by the parameters specified in the MATLAB algorithm, which contains the digital filters used to manipulate the audio signal sent to the Tesla coil. The result is displayed in a plasma sphere with flashes in a combination of colors, commonly pink and purple, that varies according to the tone of the song.
Keywords: auxiliary port, tesla coil, vumeter, plasma sphere
Procedia PDF Downloads 90
1259 Audio-Visual Aids and the Secondary School Teaching
Authors: Shrikrishna Mishra, Badri Yadav
Abstract:
In today's complex society, where experiences are innumerable and varied, it is not possible to present every situation in its original colors, and hence opportunities for learning by actual experience are not always available. It is only through the use of proper audio visual aids that a life situation can be brought into the classroom by an enlightened teacher, in its simplest form and representing the original to the highest point of similarity, which is totally absent in the verbal or lecture method. In the presence of audio aids, attention is attracted, interest is roused, and a suitable atmosphere for proper understanding is automatically created, whereas in the existing traditional method greater efforts must be made in order to achieve these essential requisites. In spite of the best and most sincere efforts on the side of the teacher, the net effect as regards understanding or learning in general is quite negligible.
Keywords: Audio-Visual Aids, the secondary school teaching, complex society, audio
Procedia PDF Downloads 482
1258 Effective Parameter Selection for Audio-Based Music Mood Classification for Christian Kokborok Song: A Regression-Based Approach
Authors: Sanchali Das, Swapan Debbarma
Abstract:
Music mood classification is developing in both music information retrieval (MIR) and natural language processing (NLP). Some of the Indian languages, like Hindi, and English have considerable exposure in MIR, but research on mood classification in regional languages is very limited. In this paper, powerful audio-based features for Kokborok Christian songs are identified, and a mood classification task is performed. Kokborok is an Indo-Burman language spoken mainly in the northeastern part of India and also in other countries such as Bangladesh and Myanmar. For the audio-based classification task, useful audio features are extracted with the jMIR software. Standard audio parameters exist for audio-based tasks, but every language has its own unique characteristics, so the most significant features that best fit the database of Kokborok songs are analysed here. A regression-based model is used to find the independent parameters that act as predictors, to predict the dependencies among parameters, and to show how they impact the overall classification result. For classification, WEKA 3.5 is used, and the selected parameters create a classification model; another model is developed using all the standard audio features used by most researchers. In this experiment, the essential parameters responsible for effective audio-based mood classification and the parameters that do not change significantly across the Christian Kokborok songs are analysed, and a comparison between the two models is also given.
Keywords: Christian Kokborok song, mood classification, music information retrieval, regression
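A minimal sketch of a regression-based selection step of this kind: fit a regression model on the extracted audio parameters, keep the predictors with the largest standardized coefficients, and compare a classifier trained on that subset against one trained on all features. The toy feature matrix and the coefficient threshold are illustrative assumptions, not the paper's jMIR/WEKA setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(150, 25))                       # stand-in for extracted audio parameters
y = rng.integers(0, 2, 150)                          # binary mood label

Xs = StandardScaler().fit_transform(X)
coefs = np.abs(LogisticRegression(max_iter=1000).fit(Xs, y).coef_[0])
selected = coefs > np.median(coefs)                  # keep the stronger predictors

full = cross_val_score(DecisionTreeClassifier(), X, y, cv=5).mean()
reduced = cross_val_score(DecisionTreeClassifier(), X[:, selected], y, cv=5).mean()
print(f"all features: {full:.2f}, selected features: {reduced:.2f}")
```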
Procedia PDF Downloads 221
1257 A Study on the Improvement of Mobile Device Call Buzz Noise Caused by Audio Frequency Ground Bounce
Authors: Jangje Park, So Young Kim
Abstract:
The market demand for audio quality in mobile devices continues to increase, and the audible buzz noise generated in time-division communication is a chronic problem that goes against this demand. In time-division communication, the RF Power Amplifier (RF PA) is driven at the audio-frequency cycle, and it affects the audio signal in various ways. In this paper, we measured the ground bounce noise generated by the peak current flowing through the ground network of the RF PA at the audio frequency, and confirmed that this noise is the cause of the audible buzz noise during a call. In addition, a grounding method for the microphone device that can improve the buzz noise is proposed. Considering that the level of the audio signal generated by the microphone device is -38 dBV at 94 dB Sound Pressure Level (SPL), even a ground bounce noise of several hundred microvolts will fall within the range of audible noise if it is induced into the audio amplifier. With the grounding method of the microphone device proposed in this paper, it was confirmed that the audible buzz noise power density at the RF PA driving frequency improved by more than 5 dB under the conditions of the Printed Circuit Board (PCB) used in the experiment. A fundamental improvement method is thus presented for the buzz noise during a mobile phone call.
Keywords: audio frequency, buzz noise, ground bounce, microphone grounding
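A back-of-the-envelope check of the level comparison the abstract relies on: -38 dBV corresponds to roughly 12.6 mV, so a ground bounce of a few hundred microvolts sits only about 30 dB below the wanted microphone signal and is easily audible once amplified. The 300 µV figure is an assumed example value, not a measurement from the paper.

```python
import math

mic_level_dbv = -38.0
mic_volts = 10 ** (mic_level_dbv / 20)          # ~0.0126 V at 94 dB SPL
bounce_volts = 300e-6                           # assumed ground bounce amplitude
margin_db = 20 * math.log10(bounce_volts / mic_volts)
print(f"mic: {mic_volts * 1e3:.1f} mV, bounce sits {margin_db:.1f} dB below it")
```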
Procedia PDF Downloads 136
1256 Audio-Visual Entrainment and Acupressure Therapy for Insomnia
Authors: Mariya Yeldhos, G. Hema, Sowmya Narayanan, L. Dhiviyalakshmi
Abstract:
Insomnia is one of the most prevalent psychological disorders worldwide. Some of the deficiencies of current treatments of insomnia are side effects in the case of sleeping pills and high costs in the case of psychotherapeutic treatment. In this paper, we propose a device which provides a combination of audio-visual entrainment and acupressure-based compression therapy for insomnia. This device provides drug-free treatment of insomnia through a user-friendly and portable device that enables relaxation of brain and muscles, with advantages such as low cost and wide accessibility to a large number of people. Tools adapted towards the treatment of insomnia: audio, continuous exposure to binaural beats of a particular frequency in the audible range; visual, flashes of LED light; acupressure points, GB-20, GV-16, and B-10.
Keywords: insomnia, acupressure, entrainment, audio-visual entrainment
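A minimal sketch of the audio entrainment component mentioned above: binaural beats are produced by playing two pure tones whose frequencies differ by the desired beat rate, one per ear. The carrier frequency, beat rate, and duration are illustrative values, not the device's settings.

```python
import numpy as np

FS = 44100

def binaural_beats(carrier_hz=220.0, beat_hz=4.0, seconds=10.0):
    """Return a stereo buffer: left ear at the carrier, right ear offset by the beat rate."""
    t = np.arange(int(FS * seconds)) / FS
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.stack([left, right], axis=1)

stereo = binaural_beats()   # a 4 Hz beat on a 220 Hz carrier
```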
Procedia PDF Downloads 429
1255 Perception of Value Affecting Engagement Through Online Audio Communication
Authors: Apipol Penkitti
Abstract:
The new normal, or new way of life, that stemmed from the COVID-19 outbreak gave rise to a new form of social media: audio-based social platforms (ABSPs), such as Clubhouse, Twitter Spaces, and Facebook Live Audio Rooms. These platforms, which feature audio-based communication, became popular in a short span of time. The objective of this research study is to understand ABSP users' behaviors in Thailand. The study, which draws on functional attitude theory, uses and gratifications theory, and social influence theory, is conducted through consumers' perceived utilitarian, hedonic, and social value that affect engagement. The research follows a mixed-methods paradigm, using the triangulation model as its framework. Data were acquired through questionnaires from a sample of 384 male, female, and LGBTQA+ individuals aged 25-34, from various occupations, who have used audio-based social platform applications. The study employs structural equation modeling to analyze the relationships between variables and semi-structured interviews to understand the rationale behind the variables in the study. The study found that hedonic value directly affects engagement.
Keywords: audio based social platform, engagement, hedonic, perceived value, social, utilitarian
Procedia PDF Downloads 126
1254 A Non-Parametric Based Mapping Algorithm for Use in Audio Fingerprinting
Authors: Analise Borg, Paul Micallef
Abstract:
Over the past few years, online multimedia collections have grown at a fast pace. Several companies have shown interest in studying different ways to organize this amount of audio information without the need for human intervention to generate metadata. In the past few years, many applications have emerged on the market which are capable of identifying a piece of music in a short time. Different audio effects and degradations make it much harder to identify the unknown piece. In this paper, an audio fingerprinting system which makes use of a non-parametric based algorithm is presented. Parametric analysis is also performed using Gaussian Mixture Models (GMMs). The feature extraction methods employed are the Mel spectrum coefficients and the MPEG-7 basic descriptors. Bin numbers replaced the extracted feature coefficients during the non-parametric modelling. The results show that non-parametric analysis offers results comparable to those mentioned in the literature.
Keywords: audio fingerprinting, mapping algorithm, Gaussian Mixture Models, MFCC, MPEG-7
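A minimal sketch of the two modelling routes the abstract compares: extract cepstral features, then either quantize each coefficient into a bin number (the non-parametric route) or fit a Gaussian Mixture Model (the parametric route). The bin count and GMM size are illustrative, librosa is assumed as the feature extractor, and MFCCs stand in for the Mel spectrum coefficients and MPEG-7 descriptors used in the paper.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

y, sr = librosa.load(librosa.example("trumpet"))            # any reference track
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T        # frames x coefficients

# Non-parametric route: replace each coefficient by the index of its quantization bin.
edges = np.quantile(mfcc, np.linspace(0, 1, 17)[1:-1], axis=0)
bin_numbers = np.stack([np.digitize(mfcc[:, i], edges[:, i]) for i in range(13)], axis=1)

# Parametric route: model the same frames with a Gaussian Mixture.
gmm = GaussianMixture(n_components=8, covariance_type="diag").fit(mfcc)
print(bin_numbers.shape, gmm.score(mfcc))                   # fingerprint symbols vs. log-likelihood
```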
Procedia PDF Downloads 421
1253 Digital Recording System Identification Based on Audio File
Authors: Michel Kulhandjian, Dimitris A. Pados
Abstract:
The objective of this work is to develop a theoretical framework for reliable digital recording system identification from digital audio files alone, for forensic purposes. A digital recording system consists of a microphone and a digital sound processing card. We view the cascade as a system of unknown transfer function. We expect same manufacturer and model microphone-sound card combinations to have very similar/near identical transfer functions, bar any unique manufacturing defect. Input voice (or other) signals are modeled as non-stationary processes. The technical problem under consideration becomes blind deconvolution with non-stationary inputs as it manifests itself in the specific application of digital audio recording equipment classification.
Keywords: blind system identification, audio fingerprinting, blind deconvolution, blind dereverberation
Procedia PDF Downloads 304
1252 Audio-Visual Recognition Based on Effective Model and Distillation
Authors: Heng Yang, Tao Luo, Yakun Zhang, Kai Wang, Wei Qin, Liang Xie, Ye Yan, Erwei Yin
Abstract:
Recent years have seen audio-visual recognition show great potential in strong-noise environments. Existing audio-visual recognition methods have explored ResNet backbones and feature fusion. However, on the one hand, ResNet always occupies a large amount of memory resources, restricting its application in engineering; on the other hand, feature merging also introduces interference in a high-noise environment. In order to solve these problems, we propose an effective framework with bidirectional distillation. First, in consideration of its good performance in feature extraction, we chose the lightweight model EfficientNet as our extractor of spatial features. Secondly, self-distillation was applied to learn more information from the raw data. Finally, we propose bidirectional distillation in decision-level fusion. Our experimental results are based on a multi-modal dataset from 24 volunteers. Eventually, the lipreading accuracy of our framework increased by 2.3% compared with existing systems, and our framework made progress in audio-visual fusion in a high-noise environment compared with an audio-only recognition system without visual input.
Keywords: lipreading, audio-visual, Efficientnet, distillation
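A minimal sketch of decision-level bidirectional distillation as the abstract outlines it: the audio branch and the visual branch each act as teacher for the other through a temperature-scaled KL term added to the usual cross-entropy. The temperature, loss weights, and class count are illustrative choices, not the authors' configuration.

```python
import torch
import torch.nn.functional as F

def bidirectional_distillation_loss(audio_logits, visual_logits, labels, T=2.0, alpha=0.5):
    # Supervised terms for both branches.
    ce = F.cross_entropy(audio_logits, labels) + F.cross_entropy(visual_logits, labels)
    # Audio branch teaches the visual branch, and vice versa (teachers are detached).
    a2v = F.kl_div(F.log_softmax(visual_logits / T, dim=1),
                   F.softmax(audio_logits.detach() / T, dim=1),
                   reduction="batchmean") * T * T
    v2a = F.kl_div(F.log_softmax(audio_logits / T, dim=1),
                   F.softmax(visual_logits.detach() / T, dim=1),
                   reduction="batchmean") * T * T
    return ce + alpha * (a2v + v2a)

audio_logits = torch.randn(8, 40, requires_grad=True)    # e.g. 40 word classes
visual_logits = torch.randn(8, 40, requires_grad=True)
labels = torch.randint(0, 40, (8,))
loss = bidirectional_distillation_loss(audio_logits, visual_logits, labels)
```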
Procedia PDF Downloads 134
1251 Satisfaction of Distance Education University Students with the Use of Audio Media as a Medium of Instruction: The Case of Mountains of the Moon University in Uganda
Authors: Mark Kaahwa, Chang Zhu, Moses Muhumuza
Abstract:
This study investigates the satisfaction of distance education university students (DEUS) with the use of audio media as a medium of instruction. Studying students' satisfaction is vital because it shows whether learners are comfortable with a certain instructional strategy or not. Although previous studies have investigated the use of audio media, students' satisfaction with an instructional strategy that combines radio teaching and podcasts as an independent teaching strategy has not been fully investigated. In this study, all lectures were delivered through the radio, and students had no direct contact with their instructors. No modules or any other material in the form of text were given to the students. Instead, they revised the taught content by listening to podcasts saved on their mobile electronic gadgets. Prior to data collection, DEUS received orientation through workshops on how to use audio media in distance education. To achieve the objectives of the study, a survey, naturalistic observations, and face-to-face interviews were used to collect data from a sample of 211 undergraduate and graduate students. Findings indicate that there was no statistically significant difference in the levels of satisfaction between male and female students. The results from post hoc analysis show that there is a statistically significant difference in the levels of satisfaction regarding the use of audio media between diploma and graduate students: diploma students are more satisfied than their graduate counterparts. T-test results reveal that there was no statistically significant difference in general satisfaction with audio media between rural and urban-based students, and ANOVA results indicate that there is no statistically significant difference in the levels of satisfaction with the use of audio media across age groups. Furthermore, results from observations and interviews reveal that DEUS found audio media a pleasurable medium of instruction. This is an indication that audio media can be considered as an instructional strategy on its own merit.
Keywords: audio media, distance education, distance education university students, medium of instruction, satisfaction
Procedia PDF Downloads 120
1250 Robust and Transparent Spread Spectrum Audio Watermarking
Authors: Ali Akbar Attari, Ali Asghar Beheshti Shirazi
Abstract:
In this paper, we propose a blind and robust audio watermarking scheme based on spread spectrum in the Discrete Wavelet Transform (DWT) domain. Watermarks are embedded in the low-frequency coefficients, which are less audible. The key idea is to divide the audio signal into small frames and modify the magnitude of the 6th-level DWT approximation coefficients based upon the Direct Sequence Spread Spectrum (DSSS) technique. A psychoacoustic model is used to enhance imperceptibility, and a Savitzky-Golay filter is used to increase accuracy in extraction. The experimental results illustrate high robustness against most common attacks, i.e., Gaussian noise addition, low-pass filtering, resampling, requantization, and MP3 compression, without significant perceptual distortion (ODG is higher than -1). The proposed scheme has a data payload of about 83 bps.
Keywords: audio watermarking, spread spectrum, discrete wavelet transform, psychoacoustic, Savitzky-Golay filter
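A minimal sketch of the embedding rule described above: take a 6-level DWT of an audio frame, spread one watermark bit over the approximation coefficients with a pseudo-noise sequence, and reconstruct. The wavelet, frame length, and embedding strength are illustrative assumptions, and the psychoacoustic model and Savitzky-Golay extraction stage are omitted.

```python
import numpy as np
import pywt

def embed_bit(frame, bit, key=1234, alpha=0.01):
    """Spread one bit over the level-6 approximation coefficients with a keyed PN sequence."""
    coeffs = pywt.wavedec(frame, "db4", level=6)        # [cA6, cD6, ..., cD1]
    pn = np.sign(np.random.default_rng(key).standard_normal(len(coeffs[0])))
    coeffs[0] = coeffs[0] + alpha * (1 if bit else -1) * pn * np.abs(coeffs[0])
    return pywt.waverec(coeffs, "db4")[: len(frame)]

frame = np.random.randn(4096)                            # one audio frame
watermarked = embed_bit(frame, bit=1)
```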
Procedia PDF Downloads 199
1249 Using Audio-Visual Aids and Computer-Assisted Language Instruction (CALI) to Overcome Learning Difficulties of Listening in Students of Special Needs
Authors: Sadeq Al Yaari, Muhammad Alkhunayn, Ayman Al Yaari, Montaha Al Yaari, Adham Al Yaari, Sajedah Al Yaari, Fatehi Eissa
Abstract:
Background & Aims: Audio-visual aids and computer-assisted language instruction (CALI) have been documented to improve receptive skills, namely listening skills, in normal students. The improved listening has been attributed to better understanding of other interlocutors' speech, but recent experiments have suggested that audio-visual aids and CALI should also be tested on the listening of students of special needs to see their effects on this group. This investigation describes the effect of audio-visual aids and CALI on the performance of these students. Methods: Pre- and posttests were administered to 40 students of special needs of both sexes, aged between 8 and 18 years old, at the al-Malādh school for students of special needs. A comparison was made between this group of students and a similar group (control group). Whereas the former group underwent a listening course using audio-visual aids and CALI, the latter studied the same course with the same speech language therapist (SLT) using the classical method. The outcomes of the two tests for the two groups were qualitatively and quantitatively analyzed. Results: Significant improvement in performance was found in the first group (treatment group) (posttest = 72.45% vs. pretest = 25.55%) in comparison to the second (control) (posttest = 25.55% vs. pretest = 23.72%). Compared with the males' scores, the females' scores were higher (1487 vs. 1411). These results support the necessity of using audio-visual aids and CALI in teaching listening at schools for students of special needs.
Keywords: listening, receptive skills, audio-visual aids, CALI, special needs
Procedia PDF Downloads 48
1248 Multi-Level Pulse Width Modulation to Boost the Power Efficiency of Switching Amplifiers for Analog Signals with Very High Crest Factor
Authors: Jan Doutreloigne
Abstract:
The main goal of this paper is to develop a switching amplifier with optimized power efficiency for analog signals with a very high crest factor, such as audio or DSL signals. Theoretical calculations show that a switching amplifier architecture based on multi-level pulse width modulation outperforms all other types of linear or switching amplifiers in that respect. Simulations on a 2 W multi-level switching audio amplifier, designed in a 50 V 0.35 µm IC technology, confirm its superior performance in terms of power efficiency. A real silicon implementation of this audio amplifier design is currently underway to provide experimental validation.
Keywords: audio amplifier, multi-level switching amplifier, power efficiency, pulse width modulation, PWM, self-oscillating amplifier
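A minimal sketch of multi-level pulse width modulation in general: the input is compared against a stack of level-shifted triangular carriers, and the count of carriers below the signal yields a staircase waveform whose local average tracks the input. The carrier frequency and number of levels are illustrative and unrelated to the 2 W amplifier's design values.

```python
import numpy as np
from scipy.signal import sawtooth

FS, F_CARRIER, LEVELS = 200_000, 4_000, 5
t = np.arange(int(0.01 * FS)) / FS
audio = 0.9 * np.sin(2 * np.pi * 200 * t)                  # test tone in [-1, 1]

tri = sawtooth(2 * np.pi * F_CARRIER * t, width=0.5)       # triangle carrier in [-1, 1]
offsets = np.linspace(-1, 1, LEVELS + 1)[:-1]              # one offset per level band
carriers = (tri + 1) / LEVELS + offsets[:, None]           # each carrier spans one band
pwm = np.sum(audio[None, :] > carriers, axis=0) / LEVELS * 2 - 1   # staircase PWM output
```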
Procedia PDF Downloads 342
1247 Implementation and Performance Analysis of Data Encryption Standard and RSA Algorithm with Image Steganography and Audio Steganography
Authors: S. C. Sharma, Ankit Gambhir, Rajeev Arya
Abstract:
In today's era, data security is an important concern and one of the most demanding issues because it is essential for people using online banking, e-shopping, reservations, etc. The two major techniques used for secure communication are cryptography and steganography. Cryptographic algorithms scramble the data so that an intruder will not be able to retrieve it; steganography, however, hides that data in some cover file so that the presence of communication itself is concealed. This paper presents the implementation of the Rivest-Shamir-Adleman (RSA) algorithm with image and audio steganography and of the Data Encryption Standard (DES) algorithm with image and audio steganography. The coding for both algorithms has been done using MATLAB, and it is observed that these combined techniques performed better than the individual techniques. The risk of unauthorized access is alleviated up to a certain extent by using these techniques, which could be used in banks, RAW agencies, etc., where highly confidential data is transferred. Finally, a comparison of the two techniques is also given in tabular form.
Keywords: audio steganography, data security, DES, image steganography, intruder, RSA, steganography
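A minimal sketch of the steganography half of the combination evaluated here, reduced to audio LSB hiding: a payload assumed to be ciphertext (produced by DES or RSA in the paper) is written into the least-significant bits of 16-bit audio samples and recovered again. The cover samples and payload are placeholders, not the paper's MATLAB setup.

```python
import numpy as np

def hide(samples: np.ndarray, ciphertext: bytes) -> np.ndarray:
    """Overwrite the LSB of the first len(ciphertext)*8 samples with the payload bits."""
    bits = np.unpackbits(np.frombuffer(ciphertext, dtype=np.uint8))
    stego = samples.copy()
    stego[: bits.size] = (stego[: bits.size] & ~1) | bits
    return stego

def recover(stego: np.ndarray, n_bytes: int) -> bytes:
    """Read the LSBs back and repack them into bytes."""
    bits = (stego[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

cover = (np.random.randn(50_000) * 3000).astype(np.int16)
ciphertext = b"\x8a\x13\xf0\x42" * 8                       # stand-in for DES/RSA output
assert recover(hide(cover, ciphertext), len(ciphertext)) == ciphertext
```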
Procedia PDF Downloads 290
1246 Agricultural Education by Media in Yogyakarta, Indonesia
Authors: Retno Dwi Wahyuningrum, Sunarru Samsi Hariadi
Abstract:
Education in agriculture is very significant in that it can support farmers in improving their businesses. This can be done through certain media, such as printed, audio, and audio-visual media. To find out the effects of these media on the knowledge, attitude, and motivation of farmers to adopt innovation, the study was conducted on 342 farmers, randomly selected from 12 farmer groups in the districts of Sleman and Bantul, Special Region of Yogyakarta Province. The study ran from October 2014 to November 2015, interviewing the respondents using a questionnaire which included 20 questions on knowledge, 20 questions on attitude, and 20 questions on adopting motivation. The data for attitude and adopting motivation were processed into a Likert scale and then tested for validity and reliability. Differences in the levels of knowledge, attitude, and motivation were tested based on percentages of average score intervals and categorized into five interpretation levels. The results show that printed, audio, and audio-visual media have different impacts on the farmers. First, all media make farmers very aware of agricultural innovation, but the highest percentage is for theatrical play. Second, the most effective medium for raising attitude is interactive dialogue on radio. Finally, printed media, especially comics, are the most effective way to improve the adopting motivation of farmers.
Keywords: agricultural education, printed media, audio media, audio-visual media, farmer knowledge, farmer attitude, farmer adopting motivation
Procedia PDF Downloads 211
1245 Using Audio-Visual Aids and Computer-Assisted Language Instruction to Overcome Learning Difficulties of Reading in Students of Special Needs
Authors: Sadeq Al Yaari, Ayman Al Yaari, Adham Al Yaari, Montaha Al Yaari, Aayah Al Yaari, Sajedah Al Yaari
Abstract:
Background & aims: Reading is a receptive skill whose importance can involve variance of abilities from the linguistic standard. Several pieces of evidence support the hypothesis that the more you read, the better you write, with a different impact for speech language therapists (SLTs) who use audio-visual aids and computer-assisted language instruction (CALI) and those who do not. Methods: Here we made use of audio-visual aids and CALI to teach the reading skill to a group of 40 students of special needs of both sexes (aged between 8 and 18 years old) at the al-Malādh school for teaching students of special needs in Dhamar (Yemen), while another group of the same size was taught using ordinary teaching methods. Pre- and posttests were administered at the beginning and the end of the semester (before and after teaching the reading course). The purpose was to understand the differences between the levels of the students of special needs and to see to what extent audio-visual aids and CALI are useful for them. The two groups were taught by the same instructor under the same circumstances in the same school. Both quantitative and qualitative procedures were used to analyze the data. Results: The overall findings revealed that audio-visual aids and CALI are very useful for teaching reading to students of special needs, as can be seen in the scores of the treatment group's subjects (7.0% in the posttest vs. 2.5% in the pretest). In comparison to the scores of the second group's subjects, where audio-visual aids and CALI were not used (2.2% in both pre- and posttests), the first group's subjects overcame the reading tasks, as can be observed in their performance in the posttest. Compared with males, females' performance was better (1466 scores (7.3%) vs. 1371 scores (6.8%)). Qualitative and statistical analyses showed that such comprehension is due to the use of audio-visual aids and CALI and nothing else. These outcomes confirm the significance of using audio-visual aids and CALI as effective means for teaching receptive skills in general and the reading skill in particular.
Keywords: reading, receptive skills, audio-visual aids, CALI, students, special needs, SLTs
Procedia PDF Downloads 48
1244 Designing Space through Narratives: The Role of the Tour Description in the Architectural Design Process
Authors: A. Papadopoulou
Abstract:
When people are asked to provide an oral description of a space, they usually provide a Tour description, which is a dynamic type of spatial narrative centered on the narrator's body, rather than a Map description, which is a static type of spatial narrative focused on the organization of the space as seen from above. Also, subjects with training in the architecture discipline tend to adopt a Tour perspective when the narrative refers to a space they have actually experienced but tend to adopt a Map perspective when the narrative refers to a space they have merely imagined. This pilot study aims to investigate whether the Tour description, which is the most common mode in oral descriptions of experienced space, is a cognitive perspective taken in the process of designing a space. The study investigates whether a spatial description provided by a subject with architecture training in the form of a Tour description would be accurately translated into a spatial layout by other subjects with architecture training. The subjects were given the Tour description in written form and were asked to make a plan drawing of the described space. The results demonstrate that when we conceive and design space, we do not adopt the same rules and cognitive patterns that we adopt when we reconstruct space from our memory. As shown by the results of this pilot study, the rules that underlie the Tour description were not detected in the translation from narratives to drawings. In a different phase, the study also investigates how subjects with architecture training describe space when forced to take a Tour perspective in their oral description of a space. The results of this second phase demonstrate that, if intentionally taken, the Tour perspective leads to descriptions of space that are more detailed and focused on experiential aspects.
Keywords: architecture, design process, embodied cognition, map description, oral narratives, tour description
Procedia PDF Downloads 158
1243 Drone Classification Using Classification Methods Using Conventional Model With Embedded Audio-Visual Features
Authors: Hrishi Rakshit, Pooneh Bagheri Zadeh
Abstract:
This paper investigates the performance of drone classification methods using conventional DCNNs with different hyperparameters when additional drone audio data is embedded in the dataset for training and further classification. First, a custom dataset is created using different images of drones from University of Southern California (USC) datasets and Leeds Beckett University datasets, with embedded drone audio signals. Three well-known DCNN architectures, namely ResNet50, Darknet53, and ShuffleNet, are employed on the created dataset, tuning their hyperparameters, such as learning rate, maximum epochs, and mini-batch size, with different optimizers. Precision-recall curves and F1 score-threshold curves are used to evaluate the performance of the named classification algorithms. Experimental results show that ResNet50 has the highest efficiency compared to the other DCNN methods.
Keywords: drone classifications, deep convolutional neural network, hyperparameters, drone audio signal
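A minimal sketch of the kind of experiment described above: a pretrained ResNet50 is re-headed for a binary drone / no-drone decision and fine-tuned while sweeping a learning-rate hyperparameter. The learning rates, batch size, and placeholder data are illustrative assumptions, the audio embedding used in the paper is not reproduced, and the weights API assumes a recent torchvision.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_model(num_classes=2):
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)   # new classifier head
    return model

for lr in (1e-3, 1e-4):                                        # hyperparameter sweep
    model = build_model()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    images = torch.randn(8, 3, 224, 224)                       # stand-in mini-batch
    labels = torch.randint(0, 2, (8,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"lr={lr}: loss={loss.item():.3f}")
```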
Procedia PDF Downloads 104
1242 Musical Tesla Coil with Faraday Box Controlled by a GNU Radio
Authors: Jairo Vega, Fabian Chamba, Jordy Urgiles
Abstract:
In this work, the implementation of a Matlab-controlled musical Tesla coil driven by external audio signals is presented. First, the audio signal was obtained from a mobile device and processed in Matlab to modify it, adding noise or other desired effects. Then, the processed signal was passed through a preamplifier to increase its amplitude to a level suitable for further amplification by a power amplifier, which was part of the current driver circuit of the Tesla coil. To get the Tesla coil to generate music, a circuit capable of modulating and generating the audio signal by manipulating electrical discharges was used. To visualize and listen to these discharges, a small Faraday cage was built to attenuate the external electric fields. Finally, the implementation of the musical Tesla coil was concluded. However, it was observed that the audio signal volume was very low and that the components used heated up quickly. Due to these limitations, it was determined that the project could not remain connected to power for long periods of time.
Keywords: Tesla coil, plasma, electrical signals, GNU Radio
Procedia PDF Downloads 97
1241 Digital Musical Organology: The Audio Games: The Question of “A-Musicological” Interfaces
Authors: Hervé Zénouda
Abstract:
This article seeks to shed light on an emerging creative field: "audio games," at the crossroads between video games and computer music. Indeed, many applications offering entertaining audio-visual experiences aimed at musical creation are available today on different platforms (game consoles, computers, cell phones). The originality of this field is the use of video game gameplay for music composition. Thus, composing music using interfaces, but also cognitive logics, that we qualify as "a-musicological" seems to us particularly interesting from the perspective of musical digital organology. This field raises questions about the representation of sound and musical structures and develops new instrumental gestures and strategies of musical composition. We will try in this article to define the characteristics of this field by highlighting some historical milestones (abstract cinema, game theory in music, actions, and graphic scores) as well as the novelties brought by digital technologies.
Keywords: audio-games, video games, computer generated music, gameplay, interactivity, synesthesia, sound interfaces, relationships image/sound, audiovisual music
Procedia PDF Downloads 112
1240 Using Audio-Visual Aids and Computer-Assisted Language Instruction to Overcome Learning Difficulties of Sound System in Students of Special Needs
Authors: Sadeq Al Yaari, Ayman Al Yaari, Adham Al Yaari, Montaha Al Yaari, Aayah Al Yaari, Sajedah Al Yaari
Abstract:
Background & Objectives: Audio-visual aids and computer-assisted language instruction (CALI) have strong effects in teaching language components (sound system, grammatical structures, and vocabulary) to students of special needs. To explore the effects of audio-visual aids and CALI in the teaching of the sound system to this class of students by speech language therapists (SLTs), an experiment was undertaken to evaluate the students' performance during their study of the sound system course. Methods: Forty students (males and females) of special needs at the al-Malādh school for teaching students of special needs in Dhamar (Yemen), ranging between 8 and 18 years old, underwent this experimental study while they were studying the language sound system course. Pre- and posttests were administered at the beginning and end of the semester. The students' treatment was compared to that of a similar group (control group) of the same size in the same environment. Whereas the first group was taught using audio-visual aids and CALI, the second was not. Students' performances were linguistically and statistically evaluated. Results & conclusions: Compared with the control group, the treatment group showed significantly higher scores in the posttest (72.32% vs. 31%). Compared with females, males scored higher marks (1421 vs. 1472). Thus, audio-visual aids and CALI should be taken into consideration in teaching the sound system to students of special needs.
Keywords: language components, sound system, audio-visual aids, CALI, students, special needs, SLTs
Procedia PDF Downloads 46