Search results for: audio/visual peer learning
1157 The Role of the Tehran Conservatory Program in Providing a Supportive, Adaptable Music Learning Environment for Children with Autism Spectrum Disorder and Their Families
Authors: Ailin Agaahi, Nafise Daneshvar Hoseini, Shahnaz Tamizi, Mehrdad Sabet
Abstract:
Music education has been recognized as a valuable therapeutic and educational intervention for children with Autism Spectrum Disorder (ASD). This study explores the experiences and perceptions of parents whose children with ASD have participated in music lessons at the Tehran Conservatory. The aim is to understand the impacts and barriers of this educational approach, providing insights into the real-world experiences of families integrating music into the lives of their children. Qualitative research was conducted through in-depth interviews with parents of children with ASD enrolled in the Tehran Conservatory's music program. The interviews examined parental motivations, observations of their child's progress, and evaluations of the program's effectiveness. Preliminary findings suggest that the music program positively impacts social interaction, emotional regulation, and communication. Parents highlighted the program's adaptability to meet the unique needs of children with ASD and the supportive environment fostered by specialized instructors. However, several barriers were identified, including the need for greater awareness and acceptance of music education for children with ASD and the limited availability of similar programs in the region. This research contributes valuable insights from parents and caregivers, emphasizing the importance of inclusive and effective music programs to support the needs of children with ASD and their families.
Keywords: autism spectrum disorder, music education, therapeutic intervention, parental perspectives
Procedia PDF Downloads 24
1156 A Methodology for Automatic Diversification of Document Categories
Authors: Dasom Kim, Chen Liu, Myungsu Lim, Su-Hyeon Jeon, ByeoungKug Jeon, Kee-Young Kwahk, Namgyu Kim
Abstract:
Recently, numerous documents including unstructured data and text have been created due to the rapid increase in the usage of social media and the Internet. Each document is usually provided with a specific category for the convenience of the users. In the past, the categorization was performed manually. However, in the case of manual categorization, not only is the accuracy of the categorization not guaranteed, but the process also requires a large amount of time and incurs huge costs. Many studies have been conducted towards the automatic creation of categories to solve the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorizing complex documents with multiple topics because the methods work by assuming that one document can be categorized into one category only. In order to overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, they are also limited in that their learning process involves training using a multi-categorized document set. These methods therefore cannot be applied to multi-categorization of most documents unless multi-categorized training sets are provided. To overcome the requirement of a multi-categorized training set imposed by traditional multi-categorization algorithms, we previously proposed a new methodology that can extend a category of a single-categorized document to multiple categories by analyzing relationships among categories, topics, and documents. In this paper, we design a survey-based verification scenario for estimating the accuracy of our automatic categorization methodology.
Keywords: big data analysis, document classification, multi-category, text mining, topic analysis
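(A rough illustration, not the authors' algorithm: the abstract gives no implementation details, so the Python sketch below shows only one possible way to extend single-category labels to multiple categories through document-topic relationships. The LDA model, the similarity threshold, and the toy corpus are all assumptions.)

```python
# Illustrative sketch: extend single-category labels to multiple categories
# via document-topic relationships (assumed approach, not the authors' method).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "stock market investment portfolio returns",
    "football match score league players",
    "election parliament vote policy",
    "sports sponsorship deal market value",   # arguably sports AND finance
]
labels = ["finance", "sports", "politics", "sports"]  # one label per document

X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topic = lda.fit_transform(X)              # shape: (n_docs, n_topics)

# Profile each known category as the mean topic distribution of its documents.
categories = sorted(set(labels))
profiles = {c: doc_topic[[i for i, l in enumerate(labels) if l == c]].mean(axis=0)
            for c in categories}

def diversify(doc_vec, threshold=0.8):
    # A document receives every category whose profile it resembles strongly enough.
    sims = {c: float(doc_vec @ p / (np.linalg.norm(doc_vec) * np.linalg.norm(p)))
            for c, p in profiles.items()}
    best = max(sims.values())
    return [c for c, s in sims.items() if s >= threshold * best]

for d, dv in zip(docs, doc_topic):
    print(d[:40], "->", diversify(dv))
```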
Procedia PDF Downloads 276
1155 „Real and Symbolic in Poetics of Multiplied Screens and Images“
Authors: Kristina Horvat Blazinovic
Abstract:
In the context of a work of art, one can talk about the idea-concept-term-intention expressed by the artist by using various forms of repetition (external, material, visible repetition). Such repetitions of elements (images in space or moving visual and sound images in time) suggest a "covert", "latent" ("dressed") repetition – i.e., a "hidden", "latent" term-intention-idea. Repeating in this way reveals a "deeper truth" that the viewer needs to decode and which is hidden "under" the technical manifestation of the multiplied images. It is not only images, sounds, and screens that are repeated - something else is repeated through them as well, even if, in some cases, the very idea of repetition is repeated. This paper examines serial images and single-channel or multi-channel artwork in the field of video/film art and video installations, which in a way implies the concept of repetition and multiplication. Moving or static images and screens (as multi-screens) are repeated in time and space. The categories of the real and the symbolic partly refer to the Lacanian registers of reality, i.e., the Imaginary – Symbolic – Real trinity that represents the orders within which human subjectivity is established. Authors such as Bruce Nauman, VALIE EXPORT, Ragnar Kjartansson, Wolf Vostell, Shirin Neshat, Paul Sharits, Harun Farocki, Dalibor Martinis, Andy Warhol, Douglas Gordon, Bill Viola, Frank Gillette and Ira Schneider, and Marina Abramovic problematize, in different ways, the concept and procedures of multiplication - repetition, but not in the sense of "copying" and "repetition" of reality or the original, but of repeated repetitions of the simulacrum. The referenced works of art are often connected by the theme of the traumatic. Repetitions of images and situations are a response to the traumatic (experience) - repetition itself is a symptom of trauma. On the other hand, repeating and multiplying traumatic images either produces a new traumatic effect or cancels it. Reflections on repetition as a temporal and spatial phenomenon are in line with the chapters that link philosophical considerations of space, time, and experienced temporality with their manifestation in works of art. The observations about time and the relation of perception and memory follow Henri Bergson and his conception of duration (durée) as a "quality of quantity." The video works intended to be displayed as a video loop express the idea of infinite duration ("pure time," according to Bergson). The loop wants to be always present - to fixate in time. Wholeness is unrecognizable because the intention is to make the effect infinitely cyclic. Reflections on time and space end with considerations about the occurrence and effects of time and space intervals as places and moments "between" – the points of connection and separation, of continuity and stopping - by reference to the "interval theory" of Soviet filmmaker Dziga Vertov. The scale of opportunities that can be explored in interval mode is wide. Intervals represent the perception of time and space in the form of pauses, interruptions, and breaks (e.g., emotional, dramatic, or rhythmic) and denote emptiness or silence, distance, proximity, interstitial space, or a gap between various states.
Keywords: video installation, performance, repetition, multi-screen, real and symbolic, loop, video art, interval, video time
Procedia PDF Downloads 176
1154 Genomic Sequence Representation Learning: An Analysis of K-Mer Vector Embedding Dimensionality
Authors: James Jr. Mashiyane, Risuna Nkolele, Stephanie J. Müller, Gciniwe S. Dlamini, Rebone L. Meraba, Darlington S. Mapiye
Abstract:
When performing language tasks in natural language processing (NLP), the dimensionality of word embeddings is chosen either ad hoc or is calculated by optimizing the Pairwise Inner Product (PIP) loss. The PIP loss is a metric that measures the dissimilarity between word embeddings, and it is obtained through matrix perturbation theory by utilizing the unitary invariance of word embeddings. In genomics, especially in genome sequence processing, unlike in natural language processing, there is no notion of a "word"; rather, there are sequence substrings of length k called k-mers. K-mer sizes matter, and they vary depending on the goal of the task at hand. The dimensionality of word embeddings in NLP has been studied using matrix perturbation theory and the PIP loss. In this paper, the sufficiency and reliability of applying word-embedding algorithms to various genomic sequence datasets are investigated to understand the relationship between the k-mer size and the embedding dimension. This is accomplished by studying the scaling capability of three embedding algorithms, namely Latent Semantic Analysis (LSA), Word2Vec, and Global Vectors (GloVe), with respect to the k-mer size. Utilising the PIP loss as a metric to train embeddings on different datasets, we also show that Word2Vec outperforms LSA and GloVe in accurately computing embeddings as both the k-mer size and vocabulary increase. Finally, the shortcomings of natural language processing embedding algorithms in performing genomic tasks are discussed.
Keywords: word embeddings, k-mer embedding, dimensionality reduction
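(Illustrative sketch only: the snippet below shows the kind of pipeline the abstract describes: tokenizing a sequence into k-mers, training Word2Vec embeddings with gensim, and comparing two embedding dimensionalities via the PIP loss, taken here as the Frobenius norm of the difference of the PIP matrices EE^T. The k value, the dimensions, and the toy sequence are assumptions.)

```python
# Sketch: k-mer tokenisation, Word2Vec embedding, and PIP loss comparison.
import numpy as np
from gensim.models import Word2Vec

def kmers(seq, k):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

sequence = "ATGCGTACGTTAGCCGTATGCGTACGTTAGCC" * 50   # toy genomic sequence
sentences = [kmers(sequence, k=4)]                   # k = 4 chosen arbitrarily

def train(dim):
    m = Word2Vec(sentences, vector_size=dim, window=5, min_count=1, sg=1, seed=1)
    vocab = sorted(m.wv.index_to_key)
    return np.array([m.wv[w] for w in vocab])

def pip_loss(e1, e2):
    # PIP matrix = E E^T; loss = Frobenius norm of the difference.
    return np.linalg.norm(e1 @ e1.T - e2 @ e2.T, ord="fro")

e_small, e_large = train(16), train(64)
print("PIP loss between 16-d and 64-d embeddings:", pip_loss(e_small, e_large))
```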
Procedia PDF Downloads 144
1153 The Changing Role of the Chief Academic Officer in American Higher Education: Causes and Consequences
Authors: Michael W. Markowitz, Jeffrey Gingerich
Abstract:
The landscape of higher education in the United States has undergone significant changes in the last 25 years. What was once a domain of competition among prospective students for a limited number of college and university seats has become a marketplace in which institutions vie for the enrollment of educational consumers. A central figure in this paradigm shift has been the Chief Academic Officer (CAO), whose institutional role has also evolved beyond academics to include such disparate responsibilities as strategic planning, fiscal oversight, student recruitment, fundraising and personnel management. This paper explores the scope and impact of this transition by, first, explaining its context: the intersection of key social, economic and political factors in neo-conservative, late 20th Century America that redefined the value and accountability of institutions of higher learning. This context, in turn, is shown to have redefined the role and function of the CAO from a traditional academic leader to one centered on the successful application of corporate principles of organizational and fiscal management. Information gathered from a number of sitting Provosts, Vice-Presidents of Academic Affairs and Deans of Faculty is presented to illustrate the parameters of this change, as well as the extent to which today's academic officers feel prepared and equipped to fulfill this broader institutional role. The paper concludes with a discussion of the impact of this transition on the American academy and whether it serves as a portent of change to come in higher education systems around the globe.
Keywords: academic administration, higher education, leadership, organizational management
Procedia PDF Downloads 222
1152 Prediction of Distillation Curve and Reid Vapor Pressure of Dual-Alcohol Gasoline Blends Using Artificial Neural Network for the Determination of Fuel Performance
Authors: Leonard D. Agana, Wendell Ace Dela Cruz, Arjan C. Lingaya, Bonifacio T. Doma Jr.
Abstract:
The purpose of this paper is to predict the fuel performance parameters, which include the drivability index (DI), vapor lock index (VLI), and vapor lock potential, from the distillation curve and Reid vapor pressure (RVP) of dual alcohol-gasoline fuel blends. The distillation curve and Reid vapor pressure were predicted using artificial neural networks (ANN) with macroscopic properties such as boiling points, RVP, and molecular weights as the input layers. The ANN consists of 5 hidden layers and was trained using Bayesian regularization. The training mean square error (MSE) and R-value for the ANN of RVP are 91.4113 and 0.9151, respectively, while the training MSE and R-value for the distillation curve are 33.4867 and 0.9927. Fuel performance analysis of the dual alcohol-gasoline blends indicated that highly volatile gasoline blended with dual alcohols results in fuel blends that do not comply with the D4814 standard. Mixtures of low-volatility gasoline and 10% methanol or 10% ethanol can still be blended with up to 10% C3 and C4 alcohols. Intermediate-volatility gasoline containing 10% methanol or 10% ethanol can still be blended with C3 and C4 alcohols that have low RVPs, such as 1-propanol, 1-butanol, 2-butanol, and i-butanol.
Keywords: dual alcohol-gasoline blends, distillation curve, machine learning, reid vapor pressure
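(Illustrative sketch only: the abstract's network uses five hidden layers trained with Bayesian regularization; the Keras code below substitutes plain L2 regularization and synthetic data, so it shows the general architecture rather than the authors' exact model.)

```python
# Illustrative only: a 5-hidden-layer regression network for RVP prediction.
# Synthetic inputs stand in for boiling points, component RVPs and molecular weights.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, regularizers

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))          # [boiling point, component RVP, mol. weight]
y = 0.5 * X[:, 0] - 1.2 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.1, 500)

model = tf.keras.Sequential(
    [layers.Dense(16, activation="tanh",
                  kernel_regularizer=regularizers.l2(1e-3)) for _ in range(5)]
    + [layers.Dense(1)]                # single regression output (e.g. RVP)
)
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, batch_size=32, verbose=0)
print("training MSE:", model.evaluate(X, y, verbose=0))
```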
Procedia PDF Downloads 105
1151 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection
Authors: S. Delgado, C. Cerrada, R. S. Gómez
Abstract:
This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges in voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times. These repeated voxels incur costly memory operations that carry no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing the triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability. Written as a single compute shader in the Graphics Library Shader Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency but also its ability to produce accurate, 26-tunnel-free voxelizations. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces. Furthermore, we introduce the Slope Consistency Value metric, quantifying the alignment of each triangle with its primary axis. This metric provides insights into the impact of triangle orientation on scan-line-based voxelization methods. It also aids in understanding how the Gap Detection technique effectively improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods. The Gap Detection technique fills a critical gap in the voxelization process. By addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection" presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
Keywords: voxelization, GPU acceleration, computer graphics, compute shaders
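(Toy illustration, not the paper's GPU compute-shader implementation: the Python sketch below sweeps parallel, equidistant scan-lines across a triangle's interior and records the voxels they touch, deduplicating with a set, whereas the paper's method avoids revisiting voxels in the first place. Step counts and the example triangle are assumptions.)

```python
# Toy CPU sketch of scan-line surface voxelization for one triangle.
import numpy as np

def voxelize_triangle(v0, v1, v2, voxel_size):
    v0, v1, v2 = map(np.asarray, (v0, v1, v2))
    voxels = set()
    # Enough scan-lines that consecutive lines are less than one voxel apart.
    n_lines = int(max(np.linalg.norm(v1 - v0), np.linalg.norm(v2 - v0)) / voxel_size) * 2 + 2
    for t in np.linspace(0.0, 1.0, n_lines):
        a = v0 + t * (v1 - v0)          # scan-line endpoints on two triangle edges
        b = v0 + t * (v2 - v0)
        n_steps = int(np.linalg.norm(b - a) / voxel_size) * 2 + 2
        for s in np.linspace(0.0, 1.0, n_steps):
            p = a + s * (b - a)
            voxels.add(tuple(np.floor(p / voxel_size).astype(int)))
    return voxels

tri = ([0, 0, 0], [4, 0, 1], [0, 5, 2])
print(len(voxelize_triangle(*tri, voxel_size=0.25)), "surface voxels")
```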
Procedia PDF Downloads 75
1150 Multimodal Sentiment Analysis With Web Based Application
Authors: Shreyansh Singh, Afroz Ahmed
Abstract:
Sentiment analysis aims to automatically reveal the underlying attitude that we hold towards an entity. Aggregating this sentiment over a population amounts to opinion polling and has various applications. Current text-based sentiment analysis depends on word embeddings and machine learning models that learn sentiment from large text corpora, and it is now widely used for customer satisfaction assessment and brand perception analysis. With the expansion of online media, multimodal sentiment analysis is set to bring new opportunities, as complementary data streams allow us to improve on and go beyond text-based sentiment analysis using new methods. Since sentiment can be detected through the expressive traces it leaves, such as facial and vocal displays, multimodal sentiment analysis offers promising avenues for analysing facial and vocal expressions in addition to transcribed or written content. These approaches use recurrent neural networks (RNNs) with LSTM units to increase their performance. In this study, we define sentiment and the problem of multimodal sentiment analysis and review recent advances in multimodal sentiment analysis in various domains, including spoken reviews, images, video blogs, and human-machine and human-human interactions. Challenges and opportunities of this emerging field are also discussed, supporting our thesis that multimodal sentiment analysis holds significant untapped potential.
Keywords: sentiment analysis, RNN, LSTM, word embeddings
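(Illustrative sketch only: a minimal text-branch LSTM sentiment classifier in Keras, to make the RNN/LSTM component concrete; the fusion of facial and vocal streams described above is not shown, and all layer sizes and the toy data are assumptions.)

```python
# Minimal text-branch sketch of an LSTM sentiment classifier (Keras).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, max_len = 5000, 40
rng = np.random.default_rng(0)
X = rng.integers(1, vocab_size, size=(256, max_len))   # toy tokenised reviews
y = rng.integers(0, 2, size=(256,))                    # 0 = negative, 1 = positive

model = tf.keras.Sequential([
    layers.Embedding(vocab_size, 64),          # word embeddings
    layers.LSTM(32),                           # recurrent encoder
    layers.Dense(1, activation="sigmoid"),     # sentiment score
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
```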
Procedia PDF Downloads 124
1149 Animation: A Footpath for Enhanced Awareness Creation on Malaria Prevention in Rural Communities
Authors: Stephen Osei Akyiaw, Divine Kwabena Atta Kyere-Owusu
Abstract:
Malaria has been a worldwide health menace for several decades, with the majority of casualties occurring on the African continent, and Ghana is no exception. Therefore, this study employed animation to enhance awareness creation on the spread and prevention of malaria in the Effutu communities in the Central Region of Ghana. Working with the interpretivist paradigm, this study adopted Art-Based Research, where the AIDA Model and the Cognitive Theory of Multimedia Learning (CTML) served as the theories underpinning the study. Purposive and convenience sampling techniques were employed in selecting the sample for the study. The data collection instruments included document review and interviews. In addition, the study developed an animation using the local language of the people as the voice-over to foster proper understanding by the rural community folks. Indigenous characters were also used in the animation for the purpose of familiarization with the local folks. The animation was publicized at Health Town Halls within the communities. The outcomes of the study demonstrated that the use of animation was effective in enhancing awareness creation for preventing and controlling malaria in rural communities in the Effutu area. Health officers and community folks expressed interest and a desire to practice the preventive measures outlined in the animation to help reduce the spread of malaria in their communities. The study, therefore, recommended that animation could be used to curtail the spread and enhance the prevention of malaria.
Keywords: malaria, animation, prevention, communities
Procedia PDF Downloads 91
1148 Modeling of Age Hardening Process Using Adaptive Neuro-Fuzzy Inference System: Results from Aluminum Alloy A356/Cow Horn Particulate Composite
Authors: Chidozie C. Nwobi-Okoye, Basil Q. Ochieze, Stanley Okiy
Abstract:
This research reports on the modeling of the age hardening process using an adaptive neuro-fuzzy inference system (ANFIS). The age hardening output (hardness) was predicted using ANFIS. The input parameters were ageing time, temperature and percentage composition of cow horn particles (CHp%). The results show that the correlation coefficient (R) of the predicted hardness values versus the measured values was 0.9985. Subsequently, values outside the experimental data points were predicted. When the temperature was kept constant and the other input parameters were varied, the average relative error of the predicted values was 0.0931%. When the temperature was varied and the other input parameters kept constant, the average relative error of the hardness predictions was 80%. The results show that ANFIS with coarse experimental data points for learning is not very effective in predicting process outputs in the age hardening operation of the A356 alloy/CHp particulate composite. The fine experimental data required by ANFIS makes it more expensive in the modeling and optimization of age hardening operations of the A356 alloy/CHp particulate composite.
Keywords: adaptive neuro-fuzzy inference system (ANFIS), age hardening, aluminum alloy, metal matrix composite
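(Worked example of the two reported evaluation metrics, using made-up predicted and measured hardness values; the numbers are not from the study.)

```python
# Worked example of the correlation coefficient R and the average relative error.
import numpy as np

measured  = np.array([62.0, 68.5, 74.2, 80.1, 85.3])   # made-up hardness values
predicted = np.array([61.4, 69.0, 73.8, 80.9, 84.7])

r = np.corrcoef(measured, predicted)[0, 1]                              # correlation coefficient R
avg_rel_err = np.mean(np.abs(predicted - measured) / measured) * 100.0  # in percent

print(f"R = {r:.4f}, average relative error = {avg_rel_err:.4f}%")
```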
Procedia PDF Downloads 157
1147 Student Teachers' Experiences and Perceptions of a Curriculum Designed to Promote Social Justice
Authors: Emma Groenewald
Abstract:
In 1994, numerous policies of the democratic dispensation envisaged social justice and the transformation of South African society. The drive for transformation and social justice resulted in an increasing number of university students from diverse backgrounds, which, in turn, led to the establishment of Sol Plaatje University (SPU) in 2014. A re-curriculated B.Ed. programme at SPU aims to equip students with knowledge and skills to realise the aim of social justice and to enhance the transformation of South African society. The aim of this study is to explore the experiences and perceptions of students at a diverse university campus of a curriculum that aims to promote social justice. Four education modules, assumed to reflect social justice content, were selected. Four students, representative of the different ethnic and language groupings found at SPU, were chosen as participants. Data were generated by the participants through four reflective exercises on each of the modules, spread over a period of four years. The module aims, linked with the narratives of the participants' perceptions and experiences of each module, provided an overview of the enacted curriculum. A qualitative research design with an interpretivist approach informed by Vygotsky's theory of learning was used. The participants' experiences of the four modules were analysed, and their views were interpreted. The students' narratives shed light on the strengths and weaknesses of how the B.Ed. curriculum works towards social justice and revealed students' perceptions of otherness. From the narratives, it became apparent that the modules did promote a social justice orientation in the prospective teachers trained at the university.
Keywords: student diversity, social justice, transformation, teacher education
Procedia PDF Downloads 144
1146 A Convolutional Neural Network-Based Model for Lassa fever Virus Prediction Using Patient Blood Smear Image
Authors: A. M. John-Otumu, M. M. Rahman, M. C. Onuoha, E. P. Ojonugwa
Abstract:
A Convolutional Neural Network (CNN) model for predicting Lassa fever was built using the Python 3.8.0 programming language, alongside the Keras 2.2.4 and TensorFlow 2.6.1 libraries as the development environment, in order to reduce the current high risk of Lassa fever in West Africa, particularly in Nigeria. The study was prompted by some major flaws in existing conventional laboratory equipment for diagnosing Lassa fever (RT-PCR), as well as flaws in AI-based techniques that have been used for probing and prognosis of Lassa fever reported in the literature. A total of 15,679 blood smear microscopic images were collected. The proposed model was trained on 70% of the dataset and tested on 30% of the microscopic images to avoid overfitting. A 3x3x3 convolution filter was also used in the proposed system to extract features from the microscopic images. The proposed CNN-based model had a recall value of 96%, a precision value of 93%, an F1 score of 95%, and an accuracy of 94% in predicting and accurately classifying the images into clean or infected samples. Based on empirical evidence from the results of the literature consulted, the proposed model outperformed other existing AI-based techniques evaluated. If properly deployed, the model will assist physicians, medical laboratory scientists, and patients in making accurate diagnoses for Lassa fever cases, allowing the mortality rate due to the Lassa fever virus to be reduced through sound decision-making.
Keywords: artificial intelligence, ANN, blood smear, CNN, deep learning, Lassa fever
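(Illustrative sketch only: a small Keras/TensorFlow CNN with 3x3 convolution filters, a 70/30 train/test split, and precision/recall metrics, as described above, trained here on random stand-in images; the architecture, image size, and hyperparameters are assumptions, not the authors' model.)

```python
# Illustrative CNN for binary blood-smear classification (clean vs. infected).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 64, 64, 3), dtype=np.float32)   # stand-in microscopy images
y = rng.integers(0, 2, size=(200,))

# 70/30 train/test split, as in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = tf.keras.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.fit(X_tr, y_tr, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(X_te, y_te, verbose=0))
```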
Procedia PDF Downloads 125
1145 A Questionnaire-Based Survey: Therapists Response towards Upper Limb Disorder Learning Tool
Authors: Noor Ayuni Che Zakaria, Takashi Komeda, Cheng Yee Low, Kaoru Inoue, Fazah Akhtar Hanapiah
Abstract:
Previous studies have shown that there are arguments regarding the reliability and validity of the Ashworth and Modified Ashworth Scales in evaluating patients diagnosed with upper limb disorders. These evaluations depended on the raters' experience. This motivated us to develop an upper limb disorder part-task trainer that is able to simulate consistent upper limb disorder signs, such as spasticity and rigidity, based on the Modified Ashworth Scale, in order to reduce the variability occurring between raters and within raters themselves. By providing consistent signs, novice therapists would be able to increase their training frequency and exposure to various levels of signs. A total of 22 physiotherapists and occupational therapists participated in the study. The majority of the therapists agreed that, with current therapy education, they still face problems with inter-rater and intra-rater variability (strongly agree 54%; n = 12/22, agree 27%; n = 6/22) in evaluating patients' conditions. The therapists strongly agreed (72%; n = 16/22) that therapy trainees need to increase their frequency of training and therefore believe that our initiative to develop an upper limb disorder training tool will help in improving the clinical education field (strongly agree and agree 63%; n = 14/22).
Keywords: upper limb disorder, clinical education tool, inter/intra-raters variability, spasticity, modified Ashworth scale
Procedia PDF Downloads 311
1144 Production of Oral Vowels by Chinese Learners of Portuguese: Problems and Didactic Implications
Authors: Adelina Castelo
Abstract:
The increasing number of learners of Portuguese as a Foreign Language in China justifies the need to define the phonetic profile of these learners and to design didactic materials that are adjusted to their specific problems in pronunciation. Different aspects of this topic have been studied, but the production of oral vowels still needs to be investigated. This study aims: (i) to identify the problems the Chinese learners of Portuguese experience in the pronunciation of oral vowels; (ii) to discuss the didactic implications drawn from those problems. The participants were eight native speakers of Mandarin Chinese who had been learning Portuguese at college for almost a year. They named pictured objects, and their oral productions were recorded and phonetically transcribed. The selection of the objects to name took into account some linguistic variables (e.g. stress pattern, syllable structure, presence of the Portuguese oral vowels in different word positions according to stress location). The results are analysed in two ways: the impact of the linguistic variables on the success rate in the vowels' production, and the replacement strategies used in the non-target productions. Both analyses show that the Chinese learners of Portuguese (i) have significantly more difficulties with the mid vowels as well as the high central vowel and (ii) do not master the vowel height feature. These findings contribute to defining the phonetic profile of these learners in terms of oral vowel production. In addition, they have important didactic implications for pronunciation teaching to these specific learners. Those implications are discussed and exemplified.
Keywords: Chinese learners, learners' phonetic profile, linguistic variables, Portuguese as foreign language, production data, pronunciation teaching, oral vowels
Procedia PDF Downloads 225
1143 Graph-Based Semantical Extractive Text Analysis
Authors: Mina Samizadeh
Abstract:
In the past few decades, there has been an explosion in the amount of available data produced from various sources on different topics. The availability of this enormous amount of data necessitates the adoption of effective computational tools to explore it. This has led to an intense and growing interest in the research community in developing computational methods focused on processing this text data. One line of study focuses on condensing the text so that we are able to reach a higher level of understanding in a shorter time. The two important tasks for doing this are keyword extraction and text summarization. In keyword extraction, we are interested in finding the key, important words in a text. This familiarizes us with the general topic of the text. In text summarization, we are interested in producing a short text which includes the important information in the document. The TextRank algorithm, an unsupervised learning method that is an extension of PageRank (the base algorithm of the Google search engine for searching and ranking pages), has shown its efficacy in large-scale text mining, especially for text summarization and keyword extraction. This algorithm can automatically extract the important parts of a text (keywords or sentences) and declare them as the result. However, this algorithm neglects the semantic similarity between the different parts. In this work, we improved the results of the TextRank algorithm by incorporating the semantic similarity between parts of the text. Aside from keyword extraction and text summarization, we develop a topic clustering algorithm based on our framework, which can be used individually or as part of generating the summary to overcome coverage problems.
Keywords: keyword extraction, n-gram extraction, text summarization, topic clustering, semantic analysis
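(Illustrative sketch only: a TextRank-style extractive summarizer in which pairwise sentence similarity feeds PageRank; TF-IDF cosine similarity is used here as a simple stand-in for the semantic similarity measure proposed above, and the toy sentences are assumptions.)

```python
# TextRank-style extractive summarisation over a sentence-similarity graph.
import numpy as np
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "Huge volumes of text are produced on the web every day.",
    "Keyword extraction finds the most important words in a document.",
    "Text summarization produces a short text with the key information.",
    "Graph-based ranking such as PageRank can score sentences by importance.",
]

tfidf = TfidfVectorizer().fit_transform(sentences)
sim = cosine_similarity(tfidf)           # stand-in for semantic similarity
np.fill_diagonal(sim, 0.0)               # no self-loops

graph = nx.from_numpy_array(sim)         # weighted sentence graph
scores = nx.pagerank(graph, weight="weight")

top = sorted(scores, key=scores.get, reverse=True)[:2]
print([sentences[i] for i in sorted(top)])   # two highest-ranked sentences as summary
```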
Procedia PDF Downloads 76
1142 The Impact of Data Science on Geography: A Review
Authors: Roberto Machado
Abstract:
We conducted a systematic review using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses methodology, analyzing 2,996 studies and synthesizing 41 of them to explore the evolution of data science and its integration into geography. By employing optimization algorithms, we accelerated the review process, significantly enhancing the efficiency and precision of literature selection. Our findings indicate that data science has developed over five decades, facing challenges such as the diversified integration of data and the need for advanced statistical and computational skills. In geography, the integration of data science underscores the importance of interdisciplinary collaboration and methodological innovation. Techniques like large-scale spatial data analysis and predictive algorithms show promise in natural disaster management and transportation route optimization, enabling faster and more effective responses. These advancements highlight the transformative potential of data science in geography, providing tools and methodologies to address complex spatial problems. The relevance of this study lies in the use of optimization algorithms in systematic reviews and the demonstrated need for deeper integration of data science into geography. Key contributions include identifying specific challenges in combining diverse spatial data and the necessity for advanced computational skills. Examples of connections between these two fields encompass significant improvements in natural disaster management and transportation efficiency, promoting more effective and sustainable environmental solutions with a positive societal impact.
Keywords: data science, geography, systematic review, optimization algorithms, supervised learning
Procedia PDF Downloads 38
1141 Investor Sentiment and Satisfaction in Automated Investment: A Sentimental Analysis of Robo-Advisor Platforms
Authors: Vertika Goswami, Gargi Sharma
Abstract:
The rapid evolution of fintech has led to the rise of robo-advisor platforms that utilize artificial intelligence (AI) and machine learning to offer personalized investment solutions efficiently and cost-effectively. This research paper conducts a comprehensive sentiment analysis of investor experiences with these platforms, employing natural language processing (NLP) and sentiment classification techniques. The study investigates investor perceptions, engagement, and satisfaction, identifying key drivers of positive sentiment such as clear communication, low fees, consistent returns, and robust security. Conversely, negative sentiment is linked to issues like inconsistent performance, hidden fees, poor customer support, and a lack of transparency. The analysis reveals that addressing these pain points, through improved transparency, enhanced customer service, and ongoing technological advancements, can significantly boost investor trust and satisfaction. This paper contributes valuable insights into the fields of behavioral finance and fintech innovation, offering actionable recommendations for stakeholders, practitioners, and policymakers. Future research should explore the long-term impact of these factors on investor loyalty, the role of emerging technologies, and the effects of ethical investment choices and regulatory compliance on investor sentiment.
Keywords: artificial intelligence in finance, automated investment, financial technology, investor satisfaction, investor sentiment, robo-advisors, sentimental analysis
Procedia PDF Downloads 24
1140 Pedagogy to Involve Research Process in an Undergraduate Physical Fitness Course: A Case Study
Authors: Indhumathi Gopal
Abstract:
Undergraduate research is well documented in the Science, Technology, Engineering, and Mathematics (STEM), neuroscience, and microbiology disciplines, though it is hardly part of the physical fitness & wellness discipline. However, students need experiential learning opportunities, like internships and research assistantships, to get ahead with graduate schools and be gainfully employed. The first step towards this goal is to have students do a simple research project in a semester-long course. The value of research experiences and how to integrate research activity into a physical fitness & wellness course are discussed. The investigator looks into a mini research project, "Awareness of Obesity among College Students", and explains how to guide students through the research process, including journal searches, data collection, and basic statistics. In addition, students will be introduced to the statistical package SPSS 22.0 to assist with data evaluation. The lab component of the combined lecture-physical activity course could include the measurement of students' weight and height to obtain the body mass index (BMI). Students could categorize themselves in accordance with the World Health Organization's guidelines. Results obtained after completing the data analysis help students become aware of their own potential health risks associated with overweight and obesity. Overweight and obesity are risk factors for hypertension, hypercholesterolemia, heart disease, stroke, diabetes, and certain types of cancer. It is hoped that this experience will get students interested in scientific studies and help them gain confidence, think critically, and develop problem-solving and good communication skills.
Keywords: physical fitness, undergraduate research experience, obesity, BMI
Procedia PDF Downloads 84
1139 Bacterial Exposure and Microbial Activity in Dental Clinics during Cleaning Procedures
Authors: Atin Adhikari, Sushma Kurella, Pratik Banerjee, Nabanita Mukherjee, Yamini M. Chandana Gollapudi, Bushra Shah
Abstract:
Different sharp instruments, drilling machines, and high-speed rotary instruments are routinely used in dental clinics during dental cleaning. Therefore, these cleaning procedures release a lot of oral microorganisms, including bacteria, into clinic air and may cause significant occupational bioaerosol exposure risks for dentists, dental hygienists, patients, and dental clinic employees. Two major goals of this study were to quantify volumetric airborne concentrations of bacteria and to assess overall microbial activity in this type of occupational environment. The study was conducted in several dental clinics of southern Georgia, and 15 dental cleaning procedures were targeted for sampling of airborne bacteria and testing of overall microbial activity in settled dusts over clinic floors. For air sampling, a Biostage viable cascade impactor was utilized, which comprises an inlet cone, a precision-drilled 400-hole impactor stage, and a base that holds an agar plate (Tryptic soy agar). A high-flow Quick-Take-30 pump connected to this impactor pulls microorganisms in air at a 28.3 L/min flow rate through the holes (jets), where they are collected on the agar surface for approx. five minutes. After sampling, agar plates containing the samples were placed in an ice chest with blue ice, and the plates were incubated at 30±2°C for 24 to 72 h. Colonies were counted and converted to airborne concentrations (CFU/m3), followed by positive hole corrections. The most abundant bacterial colonies (selected by visual screening) were identified by PCR amplicon sequencing of 16S rRNA genes. For understanding overall microbial activity on clinic floors and estimating the general cleanliness of the clinic surfaces during or after dental cleaning procedures, ATP levels were determined in swabbed dust samples collected from 10 cm2 floor surfaces. The concentration of ATP may indicate both the cell viability and the metabolic status of settled microorganisms in this situation. An ATP measuring kit was used, which utilized the standard luciferin-luciferase fluorescence reaction and a luminometer, which quantified ATP levels as relative light units (RLU). Three air and dust samples were collected during each cleaning procedure (at the beginning, during cleaning, and immediately after the procedure was completed) (n = 45). Concentrations at the beginning, during, and after dental cleaning procedures were 671±525, 917±1203, and 899±823 CFU/m3, respectively, for airborne bacteria and 91±101, 243±129, and 139±77 RLU/sample, respectively, for ATP levels. The concentrations of bacteria were significantly higher than in typical indoor residential environments. Although an increasing trend for airborne bacteria was observed during cleaning, the data collected at the three different time points were not significantly different (ANOVA: p = 0.38), probably due to the high standard deviations of the data. The ATP levels, however, demonstrated a significant difference (ANOVA: p < 0.05) in this scenario, indicating a significant change in microbial activity on floor surfaces during dental cleaning. The most common bacterial genera identified were Neisseria sp., Streptococcus sp., Chryseobacterium sp., Paenisporosarcina sp., and Vibrio sp., in order of frequency of occurrence. The study concluded that bacterial exposure in dental clinics could be a notable occupational biohazard, and appropriate respiratory protections for the employees are urgently needed.
Keywords: bioaerosols, hospital hygiene, indoor air quality, occupational biohazards
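(Worked example of the conversion from an impactor colony count to CFU/m3: the positive-hole correction shown is the standard correction for a 400-hole impactor stage, and the colony count of 120 is an assumed example value, not data from the study.)

```python
# Worked example: converting an impactor colony count to CFU per cubic metre.
N_HOLES = 400            # holes in the impactor stage
FLOW_L_PER_MIN = 28.3    # pump flow rate, L/min
SAMPLING_MIN = 5.0       # sampling time, minutes

def positive_hole_correction(r, n=N_HOLES):
    # Expected particle count when r of n holes show colonies:
    # n * (1/n + 1/(n-1) + ... + 1/(n-r+1))
    return n * sum(1.0 / (n - i) for i in range(r))

colonies = 120                                             # assumed plate count
corrected = positive_hole_correction(colonies)
air_volume_m3 = FLOW_L_PER_MIN * SAMPLING_MIN / 1000.0     # 141.5 L = 0.1415 m^3
print(f"{corrected / air_volume_m3:.0f} CFU/m^3")
```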
Procedia PDF Downloads 318
1138 Review of Research on Effectiveness Evaluation of Technology Innovation Policy
Authors: Xue Wang, Li-Wei Fan
Abstract:
Technological innovation has become the driving force of social and economic development and transformation. The guidance and support of public policies are an important condition for promoting the realization of technology innovation goals. Policy effectiveness evaluation is instructive in policy learning and adjustment. This paper reviews existing studies and systematically evaluates the effectiveness of policy-driven technological innovation. We used 167 articles from the WOS and CNKI databases as samples to clarify the measurement of technological innovation indicators and analyze the classification and application of policy evaluation methods. In general, technology innovation input and technological output are the two main aspects of technological innovation index design. Patents are the focus of research: the number of patents reflects the scale of technological innovation, while the quality of patents reflects the value of innovation from multiple aspects. As for policy evaluation methods, statistical analysis methods are applied to the formulation, selection and after-effect evaluation of policies to analyze the effect of policy implementation qualitatively and quantitatively. Bibliometric methods are mainly based on public policy texts, discerning inter-government relationships and the multi-dimensional value of policies. Decision analysis focuses on the establishment and measurement of comprehensive evaluation index systems for public policy. Economic analysis methods focus on the performance and output of technological innovation to test the policy effect. Finally, this paper puts forward prospects for future research directions.
Keywords: technology innovation, index, policy effectiveness, evaluation of policy, bibliometric analysis
Procedia PDF Downloads 75
1137 Critical Design Futures: A Foresight 3.0 Approach to Business Transformation and Innovation
Authors: Nadya Patel, Jawn Lim
Abstract:
Foresight 3.0 is a synergistic methodology that encompasses systems analysis, future studies, capacity building, and forward planning. These components are interconnected, fostering a collective anticipatory intelligence that promotes societal resilience (Ravetz, 2020). However, traditional applications of these strands can often fall short, leading to missed opportunities and narrow perspectives. Therefore, Foresight 3.0 champions a holistic approach to tackling complex issues, focusing on systemic transformations and power dynamics. Businesses are pivotal in preparing the workforce for an increasingly uncertain and complex world. This necessitates the adoption of innovative tools and methodologies, such as Foresight 3.0, that can better equip young employees to anticipate and navigate future challenges. Firstly, the incorporation of its methodology into workplace training can foster a holistic perspective among employees. This approach encourages employees to think beyond the present and consider wider social, economic, and environmental contexts, thereby enhancing their problem-solving skills and resilience. This paper discusses our research on integrating Foresight 3.0's transformative principles with a newly developed Critical Design Futures (CDF) framework to equip organisations with the ability to innovate for the world's most complex social problems. This approach is grounded in 'collective forward intelligence,' enabling mutual learning, co-innovation, and co-production among a diverse stakeholder community, where business transformation and innovation are achieved.
Keywords: business transformation, innovation, foresight, critical design
Procedia PDF Downloads 88
1136 Effect of the Incorporation of Modified Starch on the Physicochemical Properties and Consumer Acceptance of Puff Pastry
Authors: Alejandra Castillo-Arias, Santiago Amézquita-Murcia, Golber Carvajal-Lavi, Carlos M. Zuluaga-Domínguez
Abstract:
The intricate relationship between health and nutrition has driven the food industry to seek healthier and more sustainable alternatives. A key strategy currently employed is the reduction of saturated fats and the incorporation of ingredients that align with new consumer trends. Modified starch, a polysaccharide widely used in baking, also serves as a functional ingredient to boost dietary fiber content. However, its use in puff pastry remains challenging due to the technological difficulties in achieving a buttery pastry with the necessary strength to create thin, flaky layers. This study explored the potential of incorporating modified starch into puff pastry formulations. To evaluate the physicochemical properties of wheat flour mixed with modified starch, five different flour samples were prepared: T1, T2, T3, and T4, containing 10 g, 20 g, 30 g, and 40 g of modified starch per 100 g of mixture, respectively, alongside a control sample (C) with no added starch. The analysis focused on various physicochemical indices, including the Water Absorption Index (WAI), Water Solubility Index (WSI), Swelling Power (SP), and Water Retention Capacity (WRC). The puff pastry was further characterized by color measurement and sensory analysis. For the preparation of the puff pastry dough, the flour, modified starch, and salt were mixed, followed by the addition of water until a homogeneous dough was achieved. The margarine was later incorporated into the dough, which was folded and rolled multiple times to create the characteristic layers of puff pastry. The dough was then cut into equal pieces, baked at 170°C, and allowed to cool. The results indicated that the addition of modified starch did not significantly alter the specific volume or texture of the puff pastries, as reflected by the stable WAI and SP values across the samples. However, the WRC increased with higher starch content, highlighting the hydrophilic nature of the modified starch, which necessitated additional water during dough preparation. Color analysis revealed significant variations in the L* (lightness) and a* (red-green) parameters, with no consistent relationship between the modified starch treatments and the control. However, the b* (yellow-blue) parameter showed a strong correlation across most samples, except for treatment T3. Thus, modified starch affected the a* component of the CIELAB color spectrum, influencing the reddish hue of the puff pastries. Variations in baking time due to increased water content in the dough likely contributed to differences in lightness among the samples. Sensory analysis revealed that consumers preferred the sample with a 20% starch substitution (T2), which was rated similarly to the control in terms of texture. However, treatment T3 exhibited unusual behavior in texture analysis, and the color analysis showed that treatment T1 most closely resembled the control, indicating that starch addition is most noticeable to consumers in the visual aspect of the product. In conclusion, while the modified starch successfully maintained the desired texture and internal structure of puff pastry, its impact on water retention and color requires careful consideration in product formulation. This study underscores the importance of balancing product quality with consumer expectations when incorporating modified starches in baked goods.
Keywords: consumer preferences, modified starch, physicochemical properties, puff pastry
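(For reference, the sketch below applies commonly used definitions of WAI, WSI, and SP; the exact procedures and formulas used in the study may differ, and the sample weights are invented.)

```python
# Commonly used definitions of the flour/starch indices (assumed, for illustration).
def indices(dry_sample_g, sediment_gel_g, dissolved_solids_g):
    wai = sediment_gel_g / dry_sample_g                        # Water Absorption Index (g/g)
    wsi = dissolved_solids_g / dry_sample_g * 100.0            # Water Solubility Index (%)
    sp = sediment_gel_g / (dry_sample_g - dissolved_solids_g)  # Swelling Power (g/g)
    return wai, wsi, sp

# Invented example weights for a 2.5 g flour/starch sample.
print(indices(dry_sample_g=2.5, sediment_gel_g=5.8, dissolved_solids_g=0.15))
```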
Procedia PDF Downloads 33
1135 Brain-Computer Interface Based Real-Time Control of Fixed Wing and Multi-Rotor Unmanned Aerial Vehicles
Authors: Ravi Vishwanath, Saumya Kumaar, S. N. Omkar
Abstract:
Brain-computer interfacing (BCI) is a technology that is almost four decades old, and it was originally developed for the purpose of developing and enhancing the impact of neuroprosthetics. However, in recent times, with the commercialization of non-invasive electroencephalogram (EEG) headsets, the technology has seen a wide variety of applications like home automation, wheelchair control, vehicle steering, etc. One of the latest developed applications is the mind-controlled quadrotor unmanned aerial vehicle. These applications, however, do not require a very high-speed response and give satisfactory results when standard classification methods like Support Vector Machines (SVM) and Multi-Layer Perceptrons (MLP) are used. Issues are faced when there is a requirement for high-speed control in the case of fixed-wing unmanned aerial vehicles, where such methods are rendered unreliable due to the low speed of classification. Such an application requires the system to classify data at high speeds in order to retain the controllability of the vehicle. This paper proposes a novel method of classification which uses a combination of Common Spatial Patterns and Linear Discriminant Analysis that provides an improved classification accuracy in real time. A non-linear SVM based classification technique has also been discussed. Further, this paper discusses the implementation of the proposed method on fixed-wing and VTOL unmanned aerial vehicles.
Keywords: brain-computer interface, classification, machine learning, unmanned aerial vehicles
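(Generic sketch only: a CSP-plus-LDA pipeline of the kind proposed above, built with MNE and scikit-learn on synthetic EEG epochs; the channel count, epoch length, and all hyperparameters are assumptions.)

```python
# Sketch of a CSP + LDA motor-imagery classifier on synthetic EEG epochs.
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times = 120, 14, 256       # e.g. a 14-channel consumer headset
X = rng.normal(size=(n_epochs, n_channels, n_times))
y = rng.integers(0, 2, size=n_epochs)              # two mental commands

clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),        # spatial filtering / feature extraction
    ("lda", LinearDiscriminantAnalysis()),         # fast linear classifier
])
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```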
Procedia PDF Downloads 286
1134 Priming through Open Book MCQ Test: A Tool for Enhancing Learning in Medical Undergraduates
Authors: Bharti Bhandari, Bharati Mehta, Sabyasachi Sircar
Abstract:
Medical education is advancing in India, and with this advancement, newer innovations are being incorporated into teaching and assessment methodology. Our study focuses on a teaching innovation that is more student-centric than teacher-centric and is the need of the day. The teaching innovation was carried out on 1st year MBBS students of our institute. Students were assigned to control and test groups. Priming was done for the students in the test group with an open-book MCQ-based test on a particular topic before delivering a formal didactic lecture on that topic. The control group was not assigned any such exercise. This was followed by a formal didactic lecture on the same topic. Thereafter, both groups were assessed on the same topic. The marks were compiled and analysed using appropriate statistical tests. Students were also given a questionnaire to elicit their views on the benefits of "self-priming". The mean marks scored in the theory assessment by the test group were statistically higher than the marks scored by the controls. According to the students' feedback, the "self-priming" process was interesting, helped in better orientation during classroom lectures, and led to a better understanding of the topic. They want it to be repeated for other topics with a moderate difficulty level. The better performance of the students in the primed group validates the combination of a student-centric priming model and a didactic lecture as superior to the conventional, teacher-centric methods alone. If this system is successfully followed, the present teacher-centric pedagogy should increasingly give way to student-centric activities where the teacher is only a facilitator.
Keywords: medical education, open-book test, pedagogy, priming
Procedia PDF Downloads 448
1133 Rectus Sheath Block to Extend the Effectiveness of Post Operative Epidural Analgesia
Authors: Sugam Kale, Arif Uzair Bin Mohammed Roslan, Cindy Lee, Syed Beevee Mohammed Ismail
Abstract:
Preemptive analgesia is an established concept in the modern practice of anaesthesia. To be most effective, it is best instituted earlier than the surgical stimulus and should last beyond the offset of surgically induced pain, till healing is complete. Whereas starting afferent pain blockade with regional anaesthesia is common, its effect often falls short of covering the entire period during which pain impulses make their way to the CNS in the post-operative period. We tried to use a combination of two regional anaesthetic techniques, used sequentially, to overcome this handicap. Madam S., a 56-year-old lady, was scheduled for elective surgery for pancreatic cancer. She underwent laparotomy and distal pancreatectomy, splenectomy, bilateral salpingo-oophorectomy, and sigmoid colectomy. Surgery was expected to be extensive, and it was presumed that standard pain relief with opiate PCA and oral analgesics would not be adequate. After counselling the patient pre-operatively about the regional anaesthesia techniques, including epidural catheterization and rectus sheath catheter placement, their benefits, and potential complications, informed consent was obtained. The epidural catheter was placed with the patient awake, and general anaesthesia was then induced. Epidural infusion of local anaesthetics was started prior to the surgical incision and was continued till 60 hours into the postoperative period. Before skin closure, the surgeons inserted commercially available rectus sheath catheters bilaterally along the midline incision used for the laparotomy. At 46 hours post-op, local anaesthetic infusion via these catheters was started as a bridge while the epidural infusion rate was tapered off. The epidural catheter was removed at 75 hours. Elastomeric pumps were used to provide the local anaesthetic infusion, with the ability to vary infusion rates. The acute pain service followed up the patient's vital signs and the effectiveness of pain relief twice daily, or more frequently as required. The rectus sheath catheters were removed 137 hours post-op. The patient had good post-op analgesia with a minimal additional analgesic requirement. For the most part, the visual analog score (VAS) for pain remained at 1-3 on a scale of 1 to 10. Haemodynamics remained stable, and surgical recovery was as expected. A minimal opiate requirement after an extensive laparotomy also translates to an early return of intestinal motility. Our experience was encouraging, and we are hoping to extend this combination of two regional anaesthetic techniques to patients undergoing similar surgeries. Epidural analgesia is denser and offers excellent relief of both visceral and somatic pain in the first few days after surgery. As the pain intensity grows weaker, the rectus sheath block and oral analgesics provide almost the same degree of pain relief after the epidural catheter is removed. We discovered that the background infusion of local anaesthetic down the rectus sheath catheter largely reduced the requirement for other classes of analgesics. We aim to study this further with a larger patient cohort and hope that it may become an established clinical practice that benefits patients everywhere.
Keywords: rectus sheath, epidural infusion, post operative analgesia, elastomeric
Procedia PDF Downloads 138
1132 A Study of Patriotism through History Education in Primary School
Authors: Abdul Razak Bin Ahmad, Mohd Mahzan Awang
Abstract:
Appreciation of the value of patriotism is important for every student in order to become a quality citizen who is good for the country. Realizing this, Malaysia introduced history education for primary school students in 2014. One of the aims is to provide basic knowledge of patriotism as well as to promote patriotic behaviour among school pupils. In order to examine the relationship between the students' knowledge and their behaviour, a survey study was carried out. A questionnaire was designed and developed based on prior studies on history education and patriotism. The sample of this survey was 153 primary school students aged 12 years old (Standard Six). Data were collected and analysed using SPSS (Statistical Package for the Social Sciences) 20.0. The results showed that the levels of knowledge and patriotism practice were moderate. Inferential statistics revealed that there is no significant difference between genders with regard to patriotism knowledge and patriotism practice through the history education subject. Results also demonstrated that there is a significant relationship between knowledge and the practice of patriotism values among the students. This means that knowledge of patriotism is important for promoting patriotic behaviour and practice in primary schools. This study implies that teaching students to understand and comprehend the concept of patriotism is vital to promote patriotic behaviour among students. Therefore, teachers should master pedagogical skills and good content knowledge of patriotism as mechanisms to promote effective learning in history education subjects. Creativity in teaching history education subjects is also needed.
Keywords: history education, knowledge, primary school, patriotism values, teachers
Procedia PDF Downloads 383
1131 Composite Approach to Extremism and Terrorism Web Content Classification
Authors: Kolade Olawande Owoeye, George Weir
Abstract:
Terrorist and extremist activities on the internet are becoming among the most significant threats to national security because of their potential dangers. In response to this challenge, law enforcement and security authorities are implementing comprehensive measures to counter the use of the internet for terrorism. These measures require intelligence gathering via the internet, including real-time monitoring of websites used by extremist groups for recruitment, information dissemination, and other operations. However, with billions of active webpages, real-time monitoring of all of them is practically impossible. To narrow down the search domain, efficient webpage classification techniques are needed. This research proposes a new approach, the SentiPosit-based method, which combines features of the Posit-based method and the SentiStrength-based method for the classification of terrorism and extremism webpages. The experiment was carried out on 7,500 webpages obtained through the TENE-webcrawler by the International Cyber Crime Research Centre (ICCRC). The webpages were manually grouped into three classes, 'pro-extremist', 'anti-extremist', and 'neutral', with 2,500 webpages in each category. A supervised learning algorithm was then applied to the labelled dataset to build the model. The results were compared with existing classification methods in terms of prediction accuracy and runtime. Our proposed hybrid approach produced better classification accuracy than existing approaches within a reasonable runtime.
Keywords: sentiposit, classification, extremism, terrorism
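For illustration only, and not the authors' implementation, the hybrid idea of combining Posit-style lexical features with sentiment-strength scores before supervised three-class learning could be sketched as follows. The lexicons, the feature extractors, and the toy corpus are assumptions for demonstration; TF-IDF n-grams stand in for Posit features and a simple lexicon counter stands in for SentiStrength.

```python
# Minimal sketch of a SentiPosit-style hybrid classifier (illustrative only).
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline

POSITIVE = {"peace", "dialogue", "tolerance"}   # hypothetical lexicons
NEGATIVE = {"attack", "enemy", "violence"}

class SentimentStrength(BaseEstimator, TransformerMixin):
    """Crude SentiStrength stand-in: per-document lexicon hit counts."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        rows = []
        for doc in X:
            tokens = doc.lower().split()
            pos = sum(t in POSITIVE for t in tokens)
            neg = sum(t in NEGATIVE for t in tokens)
            rows.append([pos, neg, pos - neg])
        return np.array(rows, dtype=float)

model = Pipeline([
    ("features", FeatureUnion([
        ("posit_like", TfidfVectorizer(ngram_range=(1, 2))),   # lexical stand-in
        ("sentiment", SentimentStrength()),
    ])),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Toy corpus; the study uses 7,500 manually labelled crawled webpages.
docs = ["call to attack the enemy now", "we urge peace and dialogue",
        "weather report for the weekend", "violence against them is justified",
        "tolerance builds lasting peace", "stock prices rose slightly today"]
labels = ["pro-extremist", "anti-extremist", "neutral",
          "pro-extremist", "anti-extremist", "neutral"]

model.fit(docs, labels)
print(model.predict(["they promote dialogue and tolerance"]))
```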
Procedia PDF Downloads 282
1130 Being a Lay Partner in Jesuit Higher Education in the Philippines: A Grounded Theory Application
Authors: Janet B. Badong-Badilla
Abstract:
In Jesuit universities, laypersons, who may come from the same or different faith backgrounds or traditions, are considered collaborators in mission. The Jesuits themselves support the contributions of lay partners in realizing the mission of the Society of Jesus and recognize the important role they play in education. This study aims to investigate and articulate particular notions and understandings of the lived experience of being a lay partner in Jesuit universities in the Philippines, particularly among those involved in higher education. Using the qualitative approach introduced by grounded theorist Barney Glaser, the lay partners' concept of being a partner, as lived in higher education, is generated systematically from field data collected primarily through in-depth interviews, field notes, and observations. Glaser's constant comparative method of data analysis is used, moving through the phases of open coding, theoretical coding, and selective coding, from memoing to theoretical sampling to sorting and then writing. In this study, Glaser's grounded theory methodology provides substantial insight into, and articulation of, the layperson's actual experience of being a partner of the Jesuits in education. Such articulation offers a phenomenological framework for understanding the meaning and core characteristics of Jesuit-lay partnership in Jesuit institutions of higher learning in the country. The study is expected to provide a framework or model for lay partnership in academic institutions with a similar practice of engaging lay partners in mission.
Keywords: grounded theory, Jesuit mission in higher education, lay partner, lived experience
Procedia PDF Downloads 165
1129 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark
Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos
Abstract:
This paper investigates the performance impact of varying five factors (input data size, number of nodes, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: the algorithm's performance depends on multiple factors, and knowing the effect of each factor beforehand is especially critical because hardware is priced by time slice in cloud environments. Objectives: to explain the functional relationship between factors and performance and to develop linear predictor models for time and cost. Methods: the statistical principles of Design of Experiments (DoE), in particular a randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measuring each factor's impact. Results: our findings include prediction models and reveal some non-intuitive results: the small influence of cores, the neutrality of memory and disks with respect to total execution time, and the non-significant impact of input data scale on costs, although it notably impacts execution time.
Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark
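For illustration only, a distributed multinomial Naïve Bayes text classifier of the kind whose performance is measured above can be expressed in a few Spark ML stages. This is a minimal sketch under assumed column names ("text", "language") and an assumed input path, not the authors' actual pipeline.

```python
# Minimal PySpark sketch: TF-IDF features feeding multinomial Naive Bayes.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF, IDF, StringIndexer
from pyspark.ml.classification import NaiveBayes
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("corpus-language-classification").getOrCreate()

df = spark.read.parquet("corpus.parquet")          # hypothetical input path
train, test = df.randomSplit([0.8, 0.2], seed=42)

pipeline = Pipeline(stages=[
    Tokenizer(inputCol="text", outputCol="tokens"),
    HashingTF(inputCol="tokens", outputCol="tf", numFeatures=1 << 18),
    IDF(inputCol="tf", outputCol="features"),
    StringIndexer(inputCol="language", outputCol="label"),
    NaiveBayes(featuresCol="features", labelCol="label", modelType="multinomial"),
])

model = pipeline.fit(train)
evaluator = MulticlassClassificationEvaluator(metricName="accuracy")
print("accuracy =", evaluator.evaluate(model.transform(test)))
# Measured execution time and cost can then be modelled as linear functions of
# the DoE factors (data size, nodes, cores, memory, disks) across the 48 runs.
```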
Procedia PDF Downloads 123
1128 Voice Liveness Detection Using Kolmogorov Arnold Networks
Authors: Arth J. Shah, Madhu R. Kamble
Abstract:
Voice biometric liveness detection aims to certify that the voice data presented during authentication is genuine and not a recording or a synthetic voice. With the rise of deepfakes and similarly sophisticated spoofing techniques, it is becoming challenging to determine whether the person on the other end is a live speaker. A Voice Liveness Detection (VLD) system is a set of security measures that detects and prevents voice spoofing attacks. Motivated by the recent development of the Kolmogorov-Arnold Network (KAN), based on the Kolmogorov-Arnold representation theorem, we propose KAN for the VLD task. To date, multilayer perceptron (MLP) based classifiers have typically been used for such classification tasks. We aim not only to capture the compositional structure of the model but also to optimize the univariate functions themselves. This study presents both a mathematical and an experimental analysis of KAN for VLD, thereby opening a new perspective for work on speech and signal processing tasks. The approach combines traditional signal processing features with a new deep learning model, a combination that proved effective for VLD. The experiments are performed on the POCO and ASVSpoof 2017 V2 databases. We used constant-Q transform (CQT), Mel, and short-time Fourier transform (STFT) based front-end features with CNN, BiLSTM, and KAN back-end classifiers. The best accuracy was 91.26% on the POCO database, using STFT features with the KAN classifier. On the ASVSpoof 2017 V2 database, the lowest equal error rate (EER) we obtained was 26.42%, using CQT features and KAN as the classifier.
Keywords: Kolmogorov Arnold networks, multilayer perceptron, pop noise, voice liveness detection
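To illustrate the idea only: a KAN-style layer replaces the fixed activations of an MLP with learnable univariate functions on each edge. The sketch below is a simplified stand-in, not the authors' architecture; real KANs typically use B-spline bases and adaptive grids, whereas here the edge functions are parameterized by Gaussian basis functions, and the feature dimension and layer sizes are assumed.

```python
# Minimal sketch of a KAN-style classifier for a liveness task (illustrative).
import torch
import torch.nn as nn

class SimpleKANLayer(nn.Module):
    """Each edge applies a learnable univariate function expressed as a linear
    combination of fixed Gaussian basis functions; edge outputs are summed
    per output unit (a simplification of spline-based KAN layers)."""
    def __init__(self, d_in, d_out, n_basis=8, x_min=-2.0, x_max=2.0):
        super().__init__()
        self.register_buffer("centers", torch.linspace(x_min, x_max, n_basis))
        self.width = (x_max - x_min) / n_basis
        self.coef = nn.Parameter(torch.randn(d_in, d_out, n_basis) * 0.1)

    def forward(self, x):                       # x: (batch, d_in)
        # Gaussian basis evaluated per input feature: (batch, d_in, n_basis)
        phi = torch.exp(-((x.unsqueeze(-1) - self.centers) / self.width) ** 2)
        # Sum over input features and basis functions -> (batch, d_out)
        return torch.einsum("bik,iok->bo", phi, self.coef)

# Toy two-class (live vs. spoofed) classifier over STFT-derived features;
# the 257-dimensional input (e.g., mean STFT magnitude per bin) is assumed.
model = nn.Sequential(
    SimpleKANLayer(d_in=257, d_out=64),
    SimpleKANLayer(d_in=64, d_out=2),
)
x = torch.randn(4, 257)                         # dummy batch of feature vectors
print(model(x).shape)                           # torch.Size([4, 2])
```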
Procedia PDF Downloads 47