Search results for: quest based learning
29007 A Taxonomy of the Informational Content of Virtual Heritage Serious Games
Authors: Laurence C. Hanes, Robert J. Stone
Abstract:
Video games have reached a point of huge commercial success as well as wide familiarity with audiences both young and old. Much attention and research have also been directed towards serious games and their potential learning affordances. It is little surprise that the field of virtual heritage has taken a keen interest in using serious games to present cultural heritage information to users, with applications ranging from museums and cultural heritage institutions, to academia and research, to schools and education. Many researchers have already documented their efforts to develop and distribute virtual heritage serious games. Although attempts have been made to create classifications of the different types of virtual heritage games (somewhat akin to the idea of game genres), no formal taxonomy has yet been produced to define the different types of cultural heritage and historical information that can be presented through these games at a content level, and how the information can be manifested within the game. This study proposes such a taxonomy. First the informational content is categorized as heritage or historical, then further divided into tangible, intangible, natural, and analytical. Next, the characteristics of the manifestation within the game are covered. The means of manifestation, level of demonstration, tone, and focus are all defined and explained. Finally, the potential learning outcomes of the content are discussed. A demonstration of the taxonomy is then given by describing the informational content and corresponding manifestations within several examples of virtual heritage serious games as well as commercial games. It is anticipated that this taxonomy will help designers of virtual heritage serious games to think about and clearly define the information they are presenting through their games, and how they are presenting it. Another result of the taxonomy is that it will enable us to frame cultural heritage and historical information presented in commercial games with a critical lens, especially where there may not be explicit learning objectives. Finally, the results will also enable us to identify shared informational content and learning objectives between any virtual heritage serious and/or commercial games.Keywords: informational content, serious games, taxonomy, virtual heritage
Procedia PDF Downloads 367
29006 An Initiative for Improving Pre-Service Teachers’ Pedagogical Content Knowledge in Mathematics
Authors: Taik Kim
Abstract:
Mathematics anxiety has important consequences for teacher practices that influence students’ attitudes and achievement. Elementary prospective teachers have the highest levels of mathematics anxiety in comparison with other college majors. In his teaching practice, the researcher developed a highly successful teaching model to reduce pre-service teachers’ math anxiety and simultaneously improve their pedagogical math content knowledge. There were eighty-one participants from 2015 to 2018 who took Mathematics for Elementary Teachers I and II. As the analysis of the data indicated, elementary prospective teachers’ math anxiety was greatly reduced while their pedagogical math knowledge improved. The U.S. faces a critical shortage of well-qualified educators. To solve the issue, it is essential to engage students in a long-term commitment to shaping better teachers, who will, in turn, produce K-12 students who are better prepared for college. It is imperative that new instructional strategies are implemented to improve student learning and address declining interest, poor preparedness, a lack of diverse representation, and low persistence of students in mathematics. Many four-year college students take math courses from the math department in the College of Arts and Sciences and then take methodology courses from the College of Education. Before taking pedagogy, many students struggle in learning mathematics and lose their confidence. Since the content courses focus on college-level math rather than the pre-service teachers’ future teaching area, namely elementary math, they do not have a chance to improve their teaching skills on the topics they will eventually teach. The researcher, holding a joint appointment in math and math education, has been involved in teaching both content and pedagogy. As the results indicated, participants were able to learn math content and, at the same time, how to teach it. In conclusion, the new initiative of using several teaching strategies was able not only to increase elementary prospective teachers’ mathematical skills and knowledge but also to improve their attitude toward mathematics. We need an innovative teaching strategy that implements evidence-based tactics in redesigning math education to improve pre-service teachers’ math skills and that can improve students’ attitude toward math as well as their logical and reasoning skills. Implementation of these best practices in the local school district is particularly important because K-8 teachers are not generally familiar with lab-based instruction. At the same time, local school teachers will learn a new way to teach math. This study can serve as a vital teacher education model to be expanded statewide and nationwide. In summary, this study yields invaluable information on how to improve teacher education at the elementary level and, eventually, how to enhance K-8 students’ math achievement.
Keywords: quality of education and improvement method, teacher education, innovative teaching and learning methodologies, math education
Procedia PDF Downloads 104
29005 Comparing the Detection of Autism Spectrum Disorder within Males and Females Using Machine Learning Techniques
Authors: Joseph Wolff, Jeffrey Eilbott
Abstract:
Autism Spectrum Disorders (ASD) are a spectrum of social disorders characterized by deficits in social communication, verbal ability, and interaction that can vary in severity. In recent years, researchers have used magnetic resonance imaging (MRI) to help detect how neural patterns in individuals with ASD differ from those of neurotypical (NT) controls for classification purposes. This study analyzed the classification of ASD within males and females using functional MRI data. Functional connectivity (FC) correlations among brain regions were used as feature inputs for machine learning algorithms. Analysis was performed on 558 cases from the Autism Brain Imaging Data Exchange (ABIDE) I dataset. When trained specifically on females, the algorithm underperformed in classifying the ASD subset of the testing population. Although the sample size was relatively smaller in the female group, the manual matching of the male and female training groups helps explain the algorithm’s bias, pointing to sex-related differences in functional brain networks compared to typically developing peers. These results highlight the importance of taking sex into account when considering how generalizations of findings on males with ASD apply to females.
Keywords: autism spectrum disorder, machine learning, neuroimaging, sex differences
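A minimal sketch of the kind of pipeline this abstract describes, written with scikit-learn: region-to-region functional connectivity correlations are flattened into a feature vector per subject and passed to a classifier. The array shapes, the linear SVM, and the random placeholder data are assumptions for illustration only; the abstract does not specify the exact algorithms or preprocessing used.

```python
# Illustrative sketch: classify ASD vs. neurotypical controls from
# functional-connectivity (FC) correlation features. ABIDE preprocessing is omitted
# and the data below are random placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def fc_features(ts):
    """Upper triangle of the region-by-region correlation matrix for one subject."""
    corr = np.corrcoef(ts.T)                      # (regions, regions)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]                               # flattened FC vector

rng = np.random.default_rng(0)
# Suppose each subject has a time series of shape (timepoints, regions).
X = np.stack([fc_features(rng.normal(size=(200, 110))) for _ in range(558)])
y = rng.integers(0, 2, size=558)                  # 1 = ASD, 0 = NT (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```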
Procedia PDF Downloads 209
29004 Data Modeling and Calibration of In-Line Pultrusion and Laser Ablation Machine Processes
Authors: David F. Nettleton, Christian Wasiak, Jonas Dorissen, David Gillen, Alexandr Tretyak, Elodie Bugnicourt, Alejandro Rosales
Abstract:
In this work, preliminary results are given for the modeling and calibration of two inline processes, pultrusion, and laser ablation, using machine learning techniques. The end product of the processes is the core of a medical guidewire, manufactured to comply with a user specification of diameter and flexibility. An ensemble approach is followed which requires training several models. Two state of the art machine learning algorithms are benchmarked: Kernel Recursive Least Squares (KRLS) and Support Vector Regression (SVR). The final objective is to build a precise digital model of the pultrusion and laser ablation process in order to calibrate the resulting diameter and flexibility of a medical guidewire, which is the end product while taking into account the friction on the forming die. The result is an ensemble of models, whose output is within a strict required tolerance and which covers the required range of diameter and flexibility of the guidewire end product. The modeling and automatic calibration of complex in-line industrial processes is a key aspect of the Industry 4.0 movement for cyber-physical systems.Keywords: calibration, data modeling, industrial processes, machine learning
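A hedged sketch of how one of the two benchmarked algorithms, Support Vector Regression (SVR), could be evaluated for predicting guidewire diameter from process settings. The feature names, synthetic data, and cross-validation setup are illustrative assumptions; KRLS has no scikit-learn implementation and is omitted.

```python
# Illustrative sketch: benchmark SVR for guidewire diameter prediction.
# Process variables and data are hypothetical placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
# Hypothetical process variables: pull speed, die temperature, laser power, friction proxy
X = rng.normal(size=(500, 4))
y = 0.3 + 0.05 * X[:, 0] - 0.02 * X[:, 2] + rng.normal(scale=0.005, size=500)  # diameter (mm)

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.001))
scores = cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error")
print("RMSE per fold (mm):", -scores)
```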
Procedia PDF Downloads 299
29003 Managing the Cognitive Load of Medical Students during Anatomy Lecture
Authors: Siti Nurma Hanim Hadie, Asma’ Hassan, Zul Izhar Ismail, Ahmad Fuad Abdul Rahim, Mohd. Zarawi Mat Nor, Hairul Nizam Ismail
Abstract:
Anatomy is a medical subject that imposes a high cognitive load during learning. Despite its complexity, anatomy remains the most important basic science subject with high clinical relevance. Although anatomy knowledge is required for safe practice, many medical students graduate without sufficient knowledge. In fact, anatomy knowledge among medical graduates has been reported to be declining, and this has led to various medico-legal problems. Applying cognitive load theory (CLT) to anatomy teaching, particularly lectures, would help address this issue, since anatomy information is often perceived as cognitively challenging material. CLT identifies three types of load, namely intrinsic, extraneous, and germane load, which combine to form the total cognitive load. CLT holds that learning can only occur when the total cognitive load does not exceed human working memory capacity. Hence, managing these three types of load with the aim of optimizing working memory capacity would benefit students in learning anatomy and retaining the knowledge for future application.
Keywords: cognitive load theory, intrinsic load, extraneous load, germane load
Procedia PDF Downloads 467
29002 Synergizing Additive Manufacturing and Artificial Intelligence: Analyzing and Predicting the Mechanical Behavior of 3D-Printed CF-PETG Composites
Authors: Sirine Sayed, Mostapha Tarfaoui, Abdelmalek Toumi, Youssef Qarssis, Mohamed Daly, Chokri Bouraoui
Abstract:
This paper delves into the combination of additive manufacturing (AM) and artificial intelligence (AI) to solve challenges related to the mechanical behavior of AM-produced parts. The article highlights the fundamentals and benefits of additive manufacturing, including creating complex geometries, optimizing material use, and streamlining manufacturing processes. The paper also addresses the challenges associated with additive manufacturing, such as ensuring stable mechanical performance and consistent material properties. The role of AI, in particular machine learning and neural networks, in improving the static behavior of AM-produced parts is to build regression models that analyze the large amounts of data generated during experimental tests. The paper investigates the potential synergies between AM and AI to achieve enhanced functionality and personalized mechanical properties. The mechanical behavior of parts produced using additive manufacturing methods can be further improved using design optimization, structural analysis, and AI-based adaptive manufacturing. The article concludes by emphasizing the importance of integrating AM and AI to enhance mechanical performance, increase reliability, and enable advanced functions, paving the way for innovative applications in different fields.
Keywords: additive manufacturing, mechanical behavior, artificial intelligence, machine learning, neural networks, reliability, advanced functionalities
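As a rough illustration of the AI role described above, the sketch below fits a small neural-network regression model that maps hypothetical printing parameters to a tensile-strength response. The parameter ranges and synthetic data are placeholders, not values from the study.

```python
# Illustrative sketch: neural-network regression relating printing parameters
# to a measured mechanical property of 3D-printed parts.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
# Hypothetical inputs: layer height (mm), infill (%), print temperature (C), raster angle (deg)
X = rng.uniform([0.1, 20, 220, 0], [0.3, 100, 260, 90], size=(400, 4))
y = 40 + 0.2 * X[:, 1] - 50 * X[:, 0] + rng.normal(scale=2.0, size=400)  # tensile strength (MPa)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(64, 64),
                                                   max_iter=2000, random_state=0))
net.fit(X_tr, y_tr)
print("R^2 on held-out prints:", r2_score(y_te, net.predict(X_te)))
```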
Procedia PDF Downloads 11
29001 The Impact of Professional Development on Teachers’ Instructional Practice
Authors: Karen Koellner, Nanette Seago, Jennifer Jacobs, Helen Garnier
Abstract:
Although studies of teacher professional development (PD) are prevalent, surprisingly most have only produced incremental shifts in teachers’ learning and their impact on students. There is a critical need to understand what teachers take up and use in their classroom practice after attending PD and why we often do not see greater changes in learning and practice. This paper is based on a mixed methods efficacy study of the Learning and Teaching Geometry (LTG) video-based mathematics professional development materials. The extent to which the materials produce a beneficial impact on teachers’ mathematics knowledge, classroom practices, and their students’ knowledge in the domain of geometry through a group-randomized experimental design are considered. In this study, we examine a small group of teachers to better understand their interpretations of the workshops and their classroom uptake. The participants included 103 secondary mathematics teachers serving grades 6-12 from two states in different regions. Randomization was conducted at the school level, with 23 schools and 49 teachers assigned to the treatment group and 18 schools and 54 teachers assigned to the comparison group. The case study examination included twelve treatment teachers. PD workshops for treatment teachers began in Summer 2016. Nine full days of professional development were offered to teachers, beginning with the one-week institute (Summer 2016) and four days of PD throughout the academic year. The same facilitator-led all of the workshops, after completing a facilitator preparation process that included a multi-faceted assessment of fidelity. The overall impact of the LTG PD program was assessed from multiple sources: two teacher content assessments, two PD embedded assessments, pre-post-post videotaped classroom observations, and student assessments. Additional data was collected from the case study teachers including additional videotaped classroom observations and interviews. Repeated measures ANOVA analyses were used to detect patterns of change in the treatment teachers’ content knowledge before and after completion of the LTG PD, relative to the comparison group. No significant effects were found across the two groups of teachers on the two teacher content assessments. Teachers were rated on the quality of their mathematics instruction captured in videotaped classroom observations using the Math in Common Observation Protocol. On average, teachers who attended the LTG PD intervention improved their ability to engage students in mathematical reasoning and to provide accurate, coherent, and well-justified mathematical content. In addition, the LTG PD intervention and instruction that engaged students in mathematical practices both positively and significantly predicted greater student knowledge gains. Teacher knowledge was not a significant predictor. Twelve treatment teachers were self-selected to serve as case study teachers to provide additional videotapes in which they felt they were using something from the PD they learned and experienced. Project staff analyzed the videos, compared them to previous videos and interviewed the teachers regarding their uptake of the PD related to content knowledge, pedagogical knowledge and resources used.Keywords: teacher learning, professional development, pedagogical content knowledge, geometry
Procedia PDF Downloads 169
29000 Exploring the Impact of Input Sequence Lengths on Long Short-Term Memory-Based Streamflow Prediction in Flashy Catchments
Authors: Farzad Hosseini Hossein Abadi, Cristina Prieto Sierra, Cesar Álvarez Díaz
Abstract:
Predicting streamflow accurately in flashy catchments prone to floods is a major research and operational challenge in hydrological modeling. Recent advancements in deep learning, particularly Long Short-Term Memory (LSTM) networks, have shown promise in achieving accurate hydrological predictions at daily and hourly time scales. In this work, a multi-timescale LSTM (MTS-LSTM) network was applied to regional hydrological prediction at an hourly time scale in flashy catchments. The case study includes 40 catchments located in the Basque Country, in the north of Spain. We explore the impact of hyperparameters on the performance of streamflow predictions given by regional deep learning models through systematic hyperparameter tuning, in which optimal regional values for different catchments are identified. The results show that predictions are highly accurate, with Nash-Sutcliffe (NSE) and Kling-Gupta (KGE) metric values as high as 0.98 and 0.97, respectively. A principal component analysis reveals that a hyperparameter related to the length of the input sequence contributes most significantly to the prediction performance. The findings suggest that input sequence lengths have a crucial impact on model prediction performance. Moreover, catchment-scale analysis reveals distinct sequence lengths for individual basins, highlighting the necessity of customizing this hyperparameter based on each catchment’s characteristics. This aligns with the well-known “uniqueness of the place” paradigm. In prior research, tuning the length of the input sequence of LSTMs has received limited attention in the field of streamflow prediction. Initially, it was set to 365 days to capture a full annual water cycle; later, limited systematic hyperparameter tuning using grid search suggested a modification to 270 days. However, despite the significance of this hyperparameter in hydrological predictions, studies have usually overlooked its tuning and fixed it at 365 days. This study, employing a simultaneous systematic hyperparameter tuning approach, emphasizes the critical role of input sequence length as an influential hyperparameter in configuring LSTMs for regional streamflow prediction. Proper tuning of this hyperparameter is essential for achieving accurate hourly predictions using deep learning models.
Keywords: LSTMs, streamflow, hyperparameters, hydrology
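A minimal sketch of how the input sequence length can be treated as an explicit hyperparameter when windowing an hourly series for an LSTM, in the spirit of the tuning described above. The candidate window lengths, features, and synthetic streamflow series are assumptions; the study's MTS-LSTM architecture is not reproduced here.

```python
# Illustrative sketch: vary the LSTM input sequence length and compare fits.
import numpy as np
import tensorflow as tf

def make_windows(features, target, seq_len):
    """Slice (time, n_features) arrays into (samples, seq_len, n_features) windows."""
    X = np.stack([features[i:i + seq_len] for i in range(len(features) - seq_len)])
    y = target[seq_len:]
    return X.astype("float32"), y.astype("float32")

rng = np.random.default_rng(3)
feats = rng.normal(size=(5000, 3))        # e.g., precipitation, temperature, past flow
flow = rng.gamma(2.0, 1.0, size=5000)     # placeholder hourly discharge series

for seq_len in (72, 168, 336):            # candidate input sequence lengths (hours)
    X, y = make_windows(feats, flow, seq_len)
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(seq_len, feats.shape[1])),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=2, batch_size=64, verbose=0)
    print("seq_len", seq_len, "-> train MSE", float(model.evaluate(X, y, verbose=0)))
```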
Procedia PDF Downloads 70
28999 Domain Adaptation Save Lives - Drowning Detection in Swimming Pool Scene Based on YOLOV8 Improved by Gaussian Poisson Generative Adversarial Network Augmentation
Authors: Simiao Ren, En Wei
Abstract:
Drowning is a significant safety issue worldwide, and a robust computer vision-based alert system can easily prevent such tragedies in swimming pools. However, due to the domain shift caused by the visual gap (potentially due to lighting, indoor scene changes, pool floor color, etc.) between the training swimming pool and the test swimming pool, the robustness of such algorithms has been questionable. The annotation cost of labeling each new swimming pool is too expensive for mass adoption of such a technique. To address this issue, we propose a domain-aware data augmentation pipeline based on the Gaussian Poisson Generative Adversarial Network (GP-GAN). Combined with YOLOv8, we demonstrate that such a domain adaptation technique can significantly improve model performance (from 0.24 mAP to 0.82 mAP) on new test scenes. As the augmentation method only requires background imagery from the new domain (no annotation needed), we believe this is a promising, practical route for preventing swimming pool drowning.
Keywords: computer vision, deep learning, YOLOv8, detection, swimming pool, drowning, domain adaptation, generative adversarial network, GAN, GP-GAN
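A hedged sketch of the overall augmentation-and-training loop: swimmer foregrounds are composited onto imagery from a new, unlabeled pool and YOLOv8 is fine-tuned on the result. Simple alpha pasting stands in for GP-GAN blending, and the file paths and dataset YAML are hypothetical.

```python
# Illustrative sketch: domain-aware augmentation followed by YOLOv8 fine-tuning.
from pathlib import Path
from PIL import Image
from ultralytics import YOLO  # assumes the ultralytics package is installed

def paste_augment(fg_path, bg_path, out_path, box):
    """Paste a cut-out swimmer (RGBA) onto a new pool background at the given box."""
    fg = Image.open(fg_path).convert("RGBA").resize((box[2] - box[0], box[3] - box[1]))
    bg = Image.open(bg_path).convert("RGBA")
    bg.paste(fg, box[:2], fg)               # GP-GAN would blend gradients here instead
    bg.convert("RGB").save(out_path)

Path("augmented").mkdir(exist_ok=True)
for i, bg in enumerate(sorted(Path("new_pool_backgrounds").glob("*.jpg"))):
    paste_augment("swimmer_cutout.png", bg, f"augmented/img_{i}.jpg", (100, 200, 220, 320))

# Fine-tune YOLOv8 on the augmented set; labels are reused from the source domain.
model = YOLO("yolov8n.pt")
model.train(data="drowning_augmented.yaml", epochs=50, imgsz=640)  # hypothetical dataset YAML
metrics = model.val()
print(metrics.box.map)                       # mAP on held-out target-domain scenes
```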
Procedia PDF Downloads 101
28998 Hybrid GNN Based Machine Learning Forecasting Model For Industrial IoT Applications
Authors: Atish Bagchi, Siva Chandrasekaran
Abstract:
Background: According to World Bank national accounts data, the estimated global manufacturing value-added output in 2020 was 13.74 trillion USD. These manufacturing processes are monitored, modelled, and controlled by advanced, real-time, computer-based systems, e.g., Industrial IoT, PLC, SCADA, etc. These systems measure and manipulate a set of physical variables, e.g., temperature, pressure, etc. Despite the use of IoT, SCADA, etc., in manufacturing, studies suggest that unplanned downtime leads to economic losses of approximately 864 billion USD each year. Therefore, real-time, accurate detection, classification and prediction of machine behaviour are needed to minimise financial losses. Although vast literature exists on time-series data processing using machine learning, the challenges faced by the industries that lead to unplanned downtimes are: the current algorithms do not efficiently handle the high-volume streaming data from industrial IoT sensors and were tested on static and simulated datasets; while the existing algorithms can detect significant 'point' outliers, most do not handle contextual outliers (e.g., values within the normal range but occurring at an unexpected time of day) or subtle changes in machine behaviour; and machines are revamped periodically as part of planned maintenance programmes, which changes the assumptions on which the original AI models were created and trained. Aim: This research study aims to deliver a Graph Neural Network (GNN) based hybrid forecasting model that interfaces with the real-time machine control system and can detect and predict machine behaviour and behavioural changes (anomalies) in real-time. This research will help manufacturing industries and utilities, e.g., water, electricity etc., reduce unplanned downtimes and consequential financial losses. Method: The data stored within a process control system, e.g., Industrial IoT or a Data Historian, is generally sampled during data acquisition from the sensor (source) and when persisting in the Data Historian, to optimise storage and query performance. The sampling may inadvertently discard values that might contain subtle aspects of behavioural changes in machines. This research proposes a hybrid forecasting and classification model which combines the expressive and extrapolation capability of a GNN, enhanced with estimates of entropy and spectral changes in the sampled data and additional temporal contexts, to reconstruct the likely temporal trajectory of machine behavioural changes. The proposed real-time model belongs to the deep learning category of machine learning and interfaces with the sensors directly or through a 'Process Data Historian', SCADA, etc., to perform forecasting and classification tasks. Results: The model was interfaced with a Data Historian holding time-series data from 4 flow sensors within a water treatment plant for 45 days. The recorded sampling interval for a sensor varied from 10 sec to 30 min. Approximately 65% of the available data was used for training the model, 20% for validation, and the rest for testing. The model identified the anomalies within the water treatment plant and predicted the plant's performance. These results were compared with the data reported by the plant SCADA-Historian system and the official data reported by the plant authorities. The model's accuracy was much higher (20%) than that reported by the SCADA-Historian system and matched the validated results declared by the plant auditors.
Conclusions: The research demonstrates that a hybrid GNN-based approach enhanced with entropy calculation and spectral information can effectively detect and predict a machine's behavioural changes. The model can interface with a plant's process control system in real-time to perform forecasting and classification tasks, aiding asset management engineers to operate their machines more efficiently and reduce unplanned downtimes. A series of trials is planned for this model in the future in other manufacturing industries.
Keywords: GNN, Entropy, anomaly detection, industrial time-series, AI, IoT, Industry 4.0, Machine Learning
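An illustrative sketch of the entropy and spectral-change estimates that the hybrid model is described as using to enrich sampled sensor data. The window sizes and the synthetic flow series are assumptions; the GNN itself is not reproduced.

```python
# Illustrative sketch: per-window entropy and dominant-frequency features for a
# sampled sensor series, with an injected behavioural change.
import numpy as np
from scipy.stats import entropy
from scipy.signal import welch

def window_features(x, fs=1.0):
    """Shannon entropy of the value histogram plus the dominant spectral frequency."""
    hist, _ = np.histogram(x, bins=20, density=True)
    h = entropy(hist + 1e-12)
    freqs, psd = welch(x, fs=fs, nperseg=min(256, len(x)))
    return h, freqs[np.argmax(psd)]

rng = np.random.default_rng(4)
flow = np.sin(np.linspace(0, 200, 4000)) + rng.normal(scale=0.1, size=4000)
flow[3000:] += 0.8                      # injected behavioural change

baseline = [window_features(flow[i:i + 256]) for i in range(0, 2500, 256)]
recent = window_features(flow[3200:3456])
print("baseline mean entropy:", np.mean([b[0] for b in baseline]))
print("recent window entropy:", recent[0])
```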
Procedia PDF Downloads 150
28997 Virtual Science Laboratory (ViSLab): The Effects of Visual Signalling Principles towards Students with Different Spatial Ability
Authors: Ai Chin Wong, Wan Ahmad Jaafar Wan Yahaya, Balakrishnan Muniandy
Abstract:
This study aims to explore the impact of Virtual Reality (VR) using visual signaling principles in learning about the science laboratory safety guide; the study involves students with different spatial ability. There are two types of science laboratory safety lessons, namely Virtual Reality with Signaling (VRS) and Virtual Reality Non-Signaling (VRNS). This research adopted a 2 x 2 quasi-experimental factorial design. Two types of variables are involved in this research: the two modes of courseware form the independent variable, with spatial ability as the moderator variable. The dependent variable is the students’ performance. The study sample consisted of 141 students. Descriptive and inferential statistics were conducted to analyze the collected data. The main effects and the interaction effects of the independent variables on the dependent variable were explored using Analysis of Covariance (ANCOVA). Based on the findings of this research, the results showed that low spatial ability students in VRS outperformed their counterparts in VRNS. However, there was no significant difference between students with high spatial ability using VRS and VRNS. Effective learning for students with different spatial ability can be boosted by implementing Virtual Reality with Signaling (VRS) in the design and development of the Virtual Science Laboratory (ViSLab).
Keywords: spatial ability, science laboratory safety, visual signaling principles, virtual reality
Procedia PDF Downloads 257
28996 Monitor Student Concentration Levels on Online Education Sessions
Authors: M. K. Wijayarathna, S. M. Buddika Harshanath
Abstract:
Monitoring student engagement has become a crucial part of the educational process and a reliable indicator of the capacity to retain information. As online learning classrooms are now more common, students' attention levels have become increasingly important, yet it is difficult to check each student's concentration level in an online classroom setting. To profile student attention across various gradients of engagement, a study is planned using machine learning models. Using a convolutional neural network, the findings and the confidence score of a high-accuracy model are obtained. In this research, convolutional neural networks are used to help discover essential emotions that are critical in defining various levels of participation. Students' attention levels were shown to be influenced by emotions such as calm, enjoyment, surprise, and fear. An improved virtual learning system was created as a result of these data, which allowed teachers to focus their support and advice on those students who needed it. Student participation has emerged as a crucial component of the learning technique and a consistent predictor of a student's capacity to retain material in the classroom. Convolutional neural networks are planned to implement the platform. As a preliminary step, a video of the pupil is taken. The researchers then use a convolutional neural network built with the Keras toolkit on frames extracted from the recordings. Two convolutional neural network methods are planned to be used to determine the pupils' attention level. Finally, the predicted student attention level results are planned to be displayed on the graphical user interface of the system.
Keywords: HTML5, JavaScript, Python Flask framework, AI, graphical user interface
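A minimal sketch, using the Keras toolkit mentioned above, of a small CNN that maps face crops to emotion classes which are then grouped into attention levels. The input size, class list, and emotion-to-attention mapping are illustrative assumptions rather than the study's actual configuration.

```python
# Illustrative sketch: emotion classification from face crops, mapped to attention levels.
import numpy as np
import tensorflow as tf

EMOTIONS = ["calm", "enjoyment", "surprise", "fear"]
ATTENTION = {"calm": "high", "enjoyment": "high", "surprise": "medium", "fear": "low"}  # assumed mapping

model = tf.keras.Sequential([
    tf.keras.Input(shape=(48, 48, 1)),                      # grayscale face crops
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(len(EMOTIONS), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

faces = np.random.rand(32, 48, 48, 1).astype("float32")    # placeholder face crops from video frames
labels = np.random.randint(0, len(EMOTIONS), size=32)
model.fit(faces, labels, epochs=1, verbose=0)

probs = model.predict(faces[:1], verbose=0)[0]
emotion = EMOTIONS[int(np.argmax(probs))]
print("predicted emotion:", emotion, "-> attention:", ATTENTION[emotion])
```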
Procedia PDF Downloads 100
28995 Effectiveness of the Model in the Development of Teaching Materials for Malay Language in Primary Schools in Singapore
Authors: Salha Mohamed Hussain
Abstract:
As part of the review on the Malay Language curriculum and pedagogy in Singapore conducted in 2010, some recommendations were made to nurture active learners who are able to use the Malay Language efficiently in their daily lives. In response to the review, a new Malay Language teaching and learning package for primary school, called CEKAP (Cungkil – Elicit; Eksplorasi – Exploration; Komunikasi – Communication; Aplikasi – Application; Penilaian – Assessment), was developed from 2012 and implemented for Primary 1 in all primary schools from 2015. Resources developed in this package include the text book, activity book, teacher’s guide, big books, small readers, picture cards, flash cards, a game kit and Information and Communication Technology (ICT) resources. The development of the CEKAP package is continuous until 2020. This paper will look at a model incorporated in the development of the teaching materials in the new Malay Language Curriculum for Primary Schools and the rationale for each phase of development to ensure that the resources meet the needs of every pupil in the teaching and learning of Malay Language in the primary schools. This paper will also focus on the preliminary findings of the effectiveness of the model based on the feedback given by members of the working and steering committees. These members are academicians and educators who were appointed by the Ministry of Education to provide professional input on the soundness of pedagogical approach proposed in the revised syllabus and to make recommendations on the content of the new instructional materials. Quantitative data is derived from the interviews held with these members to gather their input on the model. Preliminary findings showed that the members provided positive feedback on the model and that the comprehensive process has helped to develop good and effective instructional materials for the schools. Some recommendations were also gathered from the interview sessions. This research hopes to provide useful information to those involved in the planning of materials development for teaching and learning.Keywords: Malay language, materials development, model, primary school
Procedia PDF Downloads 112
28994 Hull Detection from Handwritten Digit Image
Authors: Sriraman Kothuri, Komal Teja Mattupalli
Abstract:
In this paper, we propose a novel algorithm for recognizing hulls in handwritten digits. This is an extension of the work on “Digit Recognition Using Freeman Chain Code”. In order to find the hulls in a user-given digit, it is necessary to follow three steps: pre-processing, boundary extraction, and finally applying the hull detection system so as to attain better results. The detection of hull regions is mainly intended to increase the machine learning capability in the detection of characters or digits. This work can also be extended to obtain hull regions and their intensities, for example in the study of black holes in space exploration.
Keywords: chain code, machine learning, hull regions, hull recognition system, SASK algorithm
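A hedged sketch of the three steps listed above (pre-processing, boundary extraction, hull detection) using OpenCV's convex hull and convexity-defect routines as a stand-in for the proposed SASK algorithm; the input image path is hypothetical.

```python
# Illustrative sketch: hull regions of a handwritten digit via OpenCV.
import cv2
import numpy as np

img = cv2.imread("digit.png", cv2.IMREAD_GRAYSCALE)           # hypothetical user-given digit image
assert img is not None, "digit.png not found"

# Step 1: pre-processing (binarise the digit)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Step 2: boundary extraction (outer contour of the digit)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
digit = max(contours, key=cv2.contourArea)

# Step 3: hull detection (convex hull and its interior "hull regions")
hull_idx = cv2.convexHull(digit, returnPoints=False)
defects = cv2.convexityDefects(digit, hull_idx)

if defects is not None:
    for start, end, far, depth in defects[:, 0]:
        print("hull region between", tuple(digit[start][0]), "and", tuple(digit[end][0]),
              "depth:", depth / 256.0)
```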
Procedia PDF Downloads 400
28993 AI-based Radio Resource and Transmission Opportunity Allocation for 5G-V2X HetNets: NR and NR-U Networks
Authors: Farshad Zeinali, Sajedeh Norouzi, Nader Mokari, Eduard Jorswieck
Abstract:
The capacity of fifth-generation (5G) vehicle-to-everything (V2X) networks poses significant challenges. To address this challenge, this paper utilizes New Radio (NR) and New Radio Unlicensed (NR-U) networks to develop a heterogeneous vehicular network (HetNet). We propose a new framework, named joint BS assignment and resource allocation (JBSRA), for mobile V2X users and also consider coexistence schemes based on a flexible duty cycle (DC) mechanism for unlicensed bands. Our objective is to maximize the average throughput of vehicles while guaranteeing the WiFi users' throughput. In simulations based on deep reinforcement learning (DRL) algorithms such as deep deterministic policy gradient (DDPG) and deep Q network (DQN), our proposed framework outperforms existing solutions that rely on fixed DC or schemes without consideration of unlicensed bands.
Keywords: vehicle-to-everything (V2X), resource allocation, BS assignment, new radio (NR), new radio unlicensed (NR-U), coexistence NR-U and WiFi, deep deterministic policy gradient (DDPG), deep Q-network (DQN), joint BS assignment and resource allocation (JBSRA), duty cycle mechanism
Procedia PDF Downloads 103
28992 Self-Evaluation of the Foundation English Language Programme at the Center for Preparatory Studies Offered at the Sultan Qaboos University, Oman: Process and Findings
Authors: Meenalochana Inguva
Abstract:
The context: The Center for Preparatory Studies is one of the strongest and most vibrant academic teaching units of Sultan Qaboos University (SQU). The Foundation Programme English Language (FPEL) is part of a larger foundation programme which was implemented at SQU in fall 2010. The programme has been designed to prepare students who have been accepted to study at the university to achieve the required educational goals (the learning outcomes), which have been designed according to the Oman Academic Standards published by the Omani Authority for Academic Accreditation (OAAA) for the English language component. The curriculum: At the CPS, the English language curriculum is based on the learning outcomes drafted for each level. These learning outcomes guide the students in meeting what is expected of them by the end of each level. The six levels are progressive in nature and are seen as a continuum. The study: A periodic evaluation of language programmes is necessary to improve the quality of the programmes and to meet their set goals. An evaluation may be carried out internally or externally, depending on the purpose and context. A self-study programme was initiated at the beginning of the spring semester 2015 with a team comprising a total of 11 members who worked within the assigned course areas (level and programme specific). Only areas specific to the FPEL have been included in the study. The study was divided into smaller tasks, and members focused on their assigned courses. The self-study primarily focused on analyzing the programme LOs, curriculum planning, the materials used, and their relevance against the GFP exit standards. The review team also reflected on the assessment methods and procedures followed to reflect on student learning. The team paid attention to having standard criteria for assessment and transparency in procedures. Special attention was paid to the staging of LOs across levels to determine students’ language and study skills ability to cope with higher level courses. Findings: The findings showed that most of the LOs are met through the materials used for teaching. Students score low on objective tests and high on subjective tests. Motivated students take advantage of academic support activities; others do not utilize the student support activities to their advantage. Reading should get more hours. In listening, the format of the listening materials in CT 2 does not match the test format. Some of the course materials need revision, e.g., APA citation and referencing. No specific time is allotted for teaching grammar. Conclusion: The findings resulted in taking actions to bridge gaps. They will also help the center to be better prepared for the external review of its FPEL curriculum and will provide a useful base for preparing the self-study portfolio for the GFP standards assessment and future audit.
Keywords: curriculum planning, learning outcomes, reflections, self-evaluation
Procedia PDF Downloads 226
28991 Exploring Equity and Inclusion in the Context of Distance Education Using a Social Location Perspective
Authors: Boadi Agyekum
Abstract:
In this study, a social location perspective is used to explore the challenges of creating opportunities that will foster lifelong education, inclusion, and equity for residents of rural communities in Ghana. The differentiated experiences of rural adults are under-researched and often unacknowledged in lifelong education literature and distance education policy. There is a need to examine carefully the structural inequalities that create disadvantages for residents of rural communities and women in pursuing distance education in designated cities in Ghana. The paper uses in-depth interviews to explore participants’ experiences of learning at a distance and to scrutinise the narratives of lifelong education. The paper reflects on the implications of the framework employed for educators and social justice in lifelong education. It further recommends the need to provide IT laboratories and fully online programs that would require stable and regular internet and access to ICT equipment for potential learning in rural communities. The social location approach presented a number of axes of diversity as comparatively more important than others; these included gender, age, education, work commitment, geography, and degree of social connectedness. This can inform lifelong education policy and programs to sustain quality education.Keywords: equity, distance education, lifelong learning, social location, intersectionality, rural communities
Procedia PDF Downloads 101
28990 Investigating the performance of machine learning models on PM2.5 forecasts: A case study in the city of Thessaloniki
Authors: Alexandros Pournaras, Anastasia Papadopoulou, Serafim Kontos, Anastasios Karakostas
Abstract:
The air quality of modern cities is an important concern, as poor air quality contributes to human health and environmental issues. Reliable air quality forecasting has thus gained scientific and governmental attention as an essential tool that enables authorities to take proactive measures for public safety. In this study, the potential of Machine Learning (ML) models to forecast PM2.5 at local scale is investigated in the city of Thessaloniki, the second largest city in Greece, which has been struggling with the persistent issue of air pollution. ML models, with a proven ability to address time-series forecasting, are employed to predict the PM2.5 concentrations and the respective Air Quality Index 5 days ahead by learning from daily historical air quality and meteorological data from 2014 to 2016, gathered from two stations with different land use characteristics in the urban fabric of Thessaloniki. The performance of the ML models on PM2.5 concentrations is evaluated with common statistical measures, such as R squared (r²) and Root Mean Squared Error (RMSE), utilizing a portion of the stations’ measurements as a test set. A multi-categorical evaluation is utilized for the assessment of their performance on the respective AQIs. Several conclusions were drawn from the experiments conducted. Experimenting with the MLs’ configuration revealed a moderate effect of various parameters and training schemas on the models’ predictions. All of these models were found to produce satisfactory results on PM2.5 concentrations. In addition, their application to untrained stations showed that these models can perform well, indicating generalized behavior. Moreover, their performance on the AQI was even better, showing that the MLs can be used as predictors for the AQI, which is the direct information provided to the general public.
Keywords: Air Quality, AQ Forecasting, AQI, Machine Learning, PM2.5
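A minimal sketch of the evaluation loop described above: a regressor is trained on lagged PM2.5 and meteorological features and scored with r² and RMSE on a held-out period. The CSV file, column names, and the random forest choice are assumptions; the abstract does not name the specific ML models used.

```python
# Illustrative sketch: fit and score a PM2.5 regressor with r^2 and RMSE.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error

df = pd.read_csv("thessaloniki_station.csv", parse_dates=["date"])   # hypothetical file
features = ["pm25_lag1", "pm25_lag2", "temperature", "wind_speed", "humidity"]
train = df[df["date"] < "2016-01-01"]
test = df[df["date"] >= "2016-01-01"]

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(train[features], train["pm25"])

pred = model.predict(test[features])
print("r2:", r2_score(test["pm25"], pred))
print("RMSE:", np.sqrt(mean_squared_error(test["pm25"], pred)))
```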
Procedia PDF Downloads 77
28989 Detection of Safety Goggles on Humans in Industrial Environment Using Faster-Region Based on Convolutional Neural Network with Rotated Bounding Box
Authors: Ankit Kamboj, Shikha Talwar, Nilesh Powar
Abstract:
To successfully deliver our products to the market, employees need to be in a safe environment, especially in an industrial and manufacturing environment. The consequences of negligence in wearing safety glasses while working in industrial plants could pose a high risk to employees, hence the need to develop a real-time automatic detection system that detects persons (violators) not wearing safety glasses. In this study, a convolutional neural network (CNN) algorithm called faster region-based CNN (Faster RCNN) with rotated bounding boxes has been used for detecting safety glasses on persons; the algorithm has the advantage of detecting safety glasses at different orientation angles on the persons. The proposed method of rotated bounding boxes with a convolutional neural network first detects a person in the images, and then the method detects whether the person is wearing safety glasses or not. The video data is captured at the entrance of restricted zones of the industrial environment (manufacturing plant) and is further converted into images at 2 frames per second. In the first step, a CNN with weights pre-trained on the COCO dataset is used for person detection, and the detections are cropped as images. Then the safety goggles are labelled on the cropped images using the image labelling tool called roLabelImg, which is used to annotate the ground truth values of rotated objects more accurately, and the annotations obtained are further modified to depict the four coordinates of the rectangular bounding box. Next, the Faster RCNN with rotated bounding boxes is used to detect the safety goggles, and it is compared with the traditional bounding box Faster RCNN in terms of detection accuracy (average precision), which shows the effectiveness of the proposed method for the detection of rotated objects. The deep learning benchmarking is done on a Dell workstation with a 16GB Nvidia GPU.
Keywords: CNN, deep learning, faster RCNN, roLabelImg rotated bounding box, safety goggle detection
Procedia PDF Downloads 130
28988 The Use of Authentic Materials in the Chinese Language Classroom
Authors: Yiwen Jin, Jing Xiao, Pinfang Su
Abstract:
The idea of adapting authentic materials in language teaching comes from the communicative method of the 1970s. Unlike the language found in textbooks, authentic materials are not deliberately written; they come from native speakers' real lives and contain real information, which can meet social needs. They can improve learners' interest, create an authentic context, and improve learners' communicative competence. Authentic materials play an important role in the CFL (Chinese as a Foreign Language) classroom. Different types of authentic materials can be used in different ways during learning and teaching. Because of the COVID-19 pandemic, a lot of Chinese learners are learning Chinese without a real language environment. Although there are some well-written textbooks, there is a certain distance between textbook language materials and daily life, and learners cannot automatically fill this gap. That is why it is necessary to apply authentic materials as a supplement to the language textbook to create a real context. Chinese teachers around the world are working together, trying to integrate resources and apply authentic materials through different approaches. They apply authentic materials in the form of new textbooks, manuals, apps, and short videos that they collect and create to help Chinese learning and teaching. A review of previous research on authentic materials and of Chinese teachers' attempts to adapt them in the classroom is offered in this manuscript.
Keywords: authentic materials, Chinese as a second language, developmental use of digital resources, materials development for language teaching
Procedia PDF Downloads 174
28987 Comparison of Different Machine Learning Algorithms for Solubility Prediction
Authors: Muhammet Baldan, Emel Timuçin
Abstract:
Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms for predicting molecular solubility using the AqSolDB dataset: linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted for every SMILES representation in the dataset. A total of 189 features were used for training and testing for every molecule. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores. Additionally, the computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
Keywords: random forest, machine learning, comparison, feature extraction
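A hedged sketch of the featurisation pipeline described above, combining MACCS keys with a few RDKit descriptors and training a random forest. It is shown as a regressor on raw solubility values, whereas the study reports accuracy scores; the file and column names are assumptions.

```python
# Illustrative sketch: MACCS keys + RDKit descriptors -> random forest solubility model.
import numpy as np
import pandas as pd
from rdkit import Chem
from rdkit.Chem import MACCSkeys, Descriptors
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def featurise(smiles):
    """MACCS key bits plus a few RDKit descriptors for one SMILES string."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    maccs = np.array(MACCSkeys.GenMACCSKeys(mol))            # 167-bit key vector
    descs = [Descriptors.MolWt(mol), Descriptors.MolLogP(mol), Descriptors.TPSA(mol)]
    return np.concatenate([maccs, descs])

df = pd.read_csv("aqsoldb.csv")                               # hypothetical export of AqSolDB
rows = [(featurise(s), sol) for s, sol in zip(df["SMILES"], df["Solubility"])]
rows = [(f, sol) for f, sol in rows if f is not None]
X = np.stack([f for f, _ in rows])
y = np.array([sol for _, sol in rows])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", rf.score(X_te, y_te))
```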
Procedia PDF Downloads 41
28986 Using Machine Learning to Classify Human Fetal Health and Analyze Feature Importance
Authors: Yash Bingi, Yiqiao Yin
Abstract:
Reduction of child mortality is an ongoing struggle and a commonly used factor in determining progress in the medical field. The number of under-5 deaths is around 5 million worldwide, with many of these deaths being preventable. In light of this issue, Cardiotocograms (CTGs) have emerged as a leading tool to determine fetal health. By using ultrasound pulses and reading the responses, CTGs help healthcare professionals assess the overall health of the fetus to determine the risk of child mortality. However, interpreting the results of the CTGs is time-consuming and inefficient, especially in underdeveloped areas where an expert obstetrician is hard to come by. Using a support vector machine (SVM) and oversampling, this paper proposes a model that classifies fetal health with an accuracy of 99.59%. To further explain the CTG measurements, an algorithm based on Randomized Input Sampling for Explanation of Black-box Models (RISE) was created, called Feature Alteration for Explanation of Black Box Models (FAB), and its findings were compared to Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME). This allows doctors and medical professionals to classify fetal health with high accuracy and determine which features were most influential in the process.
Keywords: machine learning, fetal health, gradient boosting, support vector machine, Shapley values, local interpretable model agnostic explanations
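A minimal sketch of the classification pipeline described above: minority classes are oversampled and an SVM is trained on CTG features. The CSV file and column names follow the public fetal-health dataset layout and are assumptions here; the explanation methods (FAB, SHAP, LIME) are not reproduced.

```python
# Illustrative sketch: oversampling + SVM for fetal health classification from CTG features.
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

df = pd.read_csv("fetal_health.csv")                             # hypothetical CTG dataset
X, y = df.drop(columns=["fetal_health"]), df["fetal_health"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_tr, y_tr = SMOTE(random_state=0).fit_resample(X_tr, y_tr)      # oversample only the training split

scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf", C=10).fit(scaler.transform(X_tr), y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(scaler.transform(X_te))))
```

Oversampling is applied only to the training split so that the reported test accuracy is not inflated.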
Procedia PDF Downloads 144
28985 Analyse of User Interface Design in Mobile Teaching Apps
Authors: Asma Ashoul
Abstract:
Nowadays, smartphones play a major role in our lives, whether for communicating with family and friends or for learning different things. Using smartphones to learn and teach is now common in places like schools or colleges. Therefore, developing an app that teaches the Arabic language may help some groups in society learn a second language. For example, children under the age of five or older would learn quickly by using smartphones. The problem addressed concerns the Arabic language, which is most likely to fall out of use. The developer set out to build an app that would help the younger generation learn the Arabic language. Research on user interface design was completed to help the developer choose appropriate layouts and designs. Developing the artefact involved different stages. First, the requirements to be developed were analysed with the client. Second, the user interface was designed based on the literature review. Third, the application was developed and tested, documenting all the tools that were used. Lastly, evaluation and future recommendations presented the overall view of the application, followed by the client's feedback. The requirements were gathered through client meetings focused on the interface design. The project was completed following an agile development methodology, which helped the developer to finish the work on time.
Keywords: developer, application, interface design, layout, Agile, client
Procedia PDF Downloads 116
28984 A Comparative Evaluation of Cognitive Load Management: Case Study of Postgraduate Business Students
Authors: Kavita Goel, Donald Winchester
Abstract:
In a world of information overload and work complexities, academics often struggle to create an online instructional environment enabling efficient and effective student learning. Research has established that students’ learning styles are different, some learn faster when taught using audio and visual methods. Attributes like prior knowledge and mental effort affect their learning. ‘Cognitive load theory’, opines learners have limited processing capacity. Cognitive load depends on the learner’s prior knowledge, the complexity of content and tasks, and instructional environment. Hence, the proper allocation of cognitive resources is critical for students’ learning. Consequently, a lecturer needs to understand the limits and strengths of the human learning processes, various learning styles of students, and accommodate these requirements while designing online assessments. As acknowledged in the cognitive load theory literature, visual and auditory explanations of worked examples potentially lead to a reduction of cognitive load (effort) and increased facilitation of learning when compared to conventional sequential text problem solving. This will help learner to utilize both subcomponents of their working memory. Instructional design changes were introduced at the case site for the delivery of the postgraduate business subjects. To make effective use of auditory and visual modalities, video recorded lectures, and key concept webinars were delivered to students. Videos were prepared to free up student limited working memory from irrelevant mental effort as all elements in a visual screening can be viewed simultaneously, processed quickly, and facilitates greater psychological processing efficiency. Most case study students in the postgraduate programs are adults, working full-time at higher management levels, and studying part-time. Their learning style and needs are different from other tertiary students. The purpose of the audio and visual interventions was to lower the students cognitive load and provide an online environment supportive to their efficient learning. These changes were expected to impact the student’s learning experience, their academic performance and retention favourably. This paper posits that these changes to instruction design facilitates students to integrate new knowledge into their long-term memory. A mixed methods case study methodology was used in this investigation. Primary data were collected from interviews and survey(s) of students and academics. Secondary data were collected from the organisation’s databases and reports. Some evidence was found that the academic performance of students does improve when new instructional design changes are introduced although not statistically significant. However, the overall grade distribution of student’s academic performance has changed and skewed higher which shows deeper understanding of the content. It was identified from feedback received from students that recorded webinars served as better learning aids than material with text alone, especially with more complex content. The recorded webinars on the subject content and assessments provides flexibility to students to access this material any time from repositories, many times, and this enhances students learning style. Visual and audio information enters student’s working memory more effectively. 
Also, as each assessment included the application of the concepts, conceptual knowledge interacted with the pre-existing schema in long-term memory and lowered students' cognitive load.
Keywords: cognitive load theory, learning style, instructional environment, working memory
Procedia PDF Downloads 145
28983 Developing a Framework for Open Source Software Adoption in a Higher Education Institution in Uganda. A case of Kyambogo University
Authors: Kafeero Frank
Abstract:
This study aimed at developing a framework for open source software adoption in an institution of higher learning in Uganda, with KIU as the study area. There were mainly four research questions, based on individual staff interaction with the open source software forum, perceived FOSS characteristics, organizational characteristics, and external characteristics as factors that affect open source software adoption. The researcher used a causal-correlational research design to study the effects of these variables on open source software adoption. A quantitative approach was used, with a self-administered questionnaire given to a purposively and randomly sampled group of university ICT staff. The resultant data was analyzed using means, correlation coefficients, and multivariate multiple regression analysis as statistical tools. The study reveals that individual staff interaction with the open source software forum and perceived FOSS characteristics were the primary factors that significantly affect FOSS adoption, while organizational and external factors were secondary, with no significant effect but a significant correlation to open source software adoption. It was concluded that for effective open source software adoption to occur, more effort must be placed on the primary factors, with subsequent reinforcement of the secondary factors to support the primary factors and the adoption of open source software. Lastly, recommendations were made, in line with the conclusions, for producing the Kyambogo University framework for open source software adoption in institutions of higher learning. Areas of further research recommended include: stakeholders' analysis of open source software adoption in Uganda, its challenges and way forward; evaluation of the Kyambogo University framework for open source software adoption in institutions of higher learning; framework development for cloud computing adoption in Ugandan universities; and a framework for FOSS development in the Ugandan IT industry.
Keywords: open source software, organisational characteristics, external characteristics, cloud computing adoption
Procedia PDF Downloads 72
28982 The Implications in the Use of English as the Medium of Instruction in Business Management Courses at Vavuniya Campus
Authors: Jeyaseelan Gnanaseelan, Subajana Jeyaseelan
Abstract:
The paper presents, in a systematic form, some of the results of an investigation into the nature, functions, problems, and implications of the use of English as the medium of instruction (EMI) in the Business Management courses at the Vavuniya Campus of the University of Jaffna, located in the conflict-affected northern part of Sri Lanka. It is a case study of the responses of students and teachers from the Tamil and Sinhala language communities of the Faculty of Business Studies. The paper analyzes perceptions of the use of the medium, the EMI background, the resources available and accessible, the language abilities of the teachers and learners, learning style and pedagogy, the EMI methodology, and the socio-economic and socio-political contexts typical of a non-native English learning setting. The analysis is both quantitative and qualitative. It identifies the functional perspective of EMI in Sri Lanka and suggests practical strategies of contextualization and acculturation in how EMI is organized and positioned. The paper assesses learner and teacher capacity in the use of English. The ethnic conflict and linguistic politics in Sri Lanka have contributed multiple factors to the current use of English as the medium, which has been in tension with domestic realities and with the globalization trends of the wider world that determine efficiency and effectiveness.
Keywords: medium of instruction, English, business management, teaching and learning
Procedia PDF Downloads 127
28981 A Game-Based Product Modelling Environment for Non-Engineer
Authors: Guolong Zhong, Venkatesh Chennam Vijay, Ilias Oraifige
Abstract:
In the last 20 years, Knowledge Based Engineering (KBE) has shown its advantages in product development in different engineering areas such as automation, mechanical, civil and aerospace engineering in terms of digital design automation and cost reduction by automating repetitive design tasks through capturing, integrating, utilising and reusing the existing knowledge required in various aspects of the product design. However, in primary design stages, the descriptive information of a product is discrete and unorganized while knowledge is in various forms instead of pure data. Thus, it is crucial to have an integrated product model which can represent the entire product information and its associated knowledge at the beginning of the product design. One of the shortcomings of the existing product models is a lack of required knowledge representation in various aspects of product design and its mapping to an interoperable schema. To overcome the limitation of the existing product model and methodologies, two key factors are considered. First, the product model must have well-defined classes that can represent the entire product information and its associated knowledge. Second, the product model needs to be represented in an interoperable schema to ensure a steady data exchange between different product modelling platforms and CAD software. This paper introduced a method to provide a general product model as a generative representation of a product, which consists of the geometry information and non-geometry information, through a product modelling framework. The proposed method for capturing the knowledge from the designers through a knowledge file provides a simple and efficient way of collecting and transferring knowledge. Further, the knowledge schema provides a clear view and format on the data that needed to be gathered in order to achieve a unified knowledge exchange between different platforms. This study used a game-based platform to make product modelling environment accessible for non-engineers. Further the paper goes on to test use case based on the proposed game-based product modelling environment to validate the effectiveness among non-engineers.Keywords: game-based learning, knowledge based engineering, product modelling, design automation
Procedia PDF Downloads 154
28980 Application of Granular Computing Paradigm in Knowledge Induction
Authors: Iftikhar U. Sikder
Abstract:
This paper illustrates an application of the granular computing approach, namely rough set theory, in data mining. The paper outlines the formalism of granular computing and elucidates the mathematical underpinning of rough set theory, which has been widely used by the data mining and machine learning communities. A real-world application is illustrated, and the classification performance is compared with other contending machine learning algorithms. The predictive performance of the rough set rule induction model shows comparable success with respect to the other contending algorithms.Keywords: concept approximation, granular computing, reducts, rough set theory, rule induction
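To make the concept-approximation step behind rough set rule induction concrete, here is a minimal sketch in plain Python: objects that are indiscernible on the chosen condition attributes form granules, and a target concept is approximated from below (certain members) and from above (possible members). The toy decision table and attribute names are invented for illustration, not drawn from the paper's real-world dataset.

```python
from collections import defaultdict

def equivalence_classes(objects, attributes):
    """Group objects that are indiscernible on the given condition attributes."""
    classes = defaultdict(set)
    for name, description in objects.items():
        key = tuple(description[a] for a in attributes)
        classes[key].add(name)
    return list(classes.values())

def approximations(objects, attributes, concept):
    """Return the lower and upper approximations of a target concept."""
    lower, upper = set(), set()
    for granule in equivalence_classes(objects, attributes):
        if granule <= concept:   # granule lies entirely inside the concept
            lower |= granule
        if granule & concept:    # granule overlaps the concept
            upper |= granule
    return lower, upper

# Toy decision table with two condition attributes
data = {
    "o1": {"outlook": "sunny", "windy": "no"},
    "o2": {"outlook": "sunny", "windy": "no"},
    "o3": {"outlook": "rainy", "windy": "yes"},
    "o4": {"outlook": "rainy", "windy": "yes"},
}
concept = {"o1", "o2", "o3"}      # objects labelled with the target class

lower, upper = approximations(data, ["outlook", "windy"], concept)
print("lower approximation:", lower)   # {'o1', 'o2'} -- certainly in the concept
print("upper approximation:", upper)   # all four objects -- possibly in the concept
```

The boundary region (upper minus lower) is where rule induction must hedge, and attribute subsets that preserve these approximations correspond to the reducts mentioned in the keywords.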
Procedia PDF Downloads 531
28979 A Comparative Time-Series Analysis and Deep Learning Projection of Innate Radon Gas Risk in Canadian and Swedish Residential Buildings
Authors: Selim M. Khan, Dustin D. Pearson, Tryggve Rönnqvist, Markus E. Nielsen, Joshua M. Taron, Aaron A. Goodarzi
Abstract:
Accumulation of radioactive radon gas in indoor air poses a serious risk to human health by increasing the lifetime risk of lung cancer, and it is classified by the IARC as a category one carcinogen. Radon exposure risks are a function of geologic, geographic, design, and human behavioural variables and can change over time. Using time-series and deep machine learning modelling, we analyzed long-term radon test outcomes as a function of building metrics from 25,489 Canadian and 38,596 Swedish residential properties constructed between 1945 and 2020. While Canadian and Swedish properties built between 1970 and 1980 are comparable (96–103 Bq/m³), innate radon risks subsequently diverge, rising in Canada and falling in Sweden, such that 21st-century Canadian houses show 467% greater average radon (131 Bq/m³) relative to Swedish equivalents (28 Bq/m³). These trends are consistent across housing types and regions within each country. The introduction of energy efficiency measures within Canadian and Swedish building codes coincided with opposing radon level trajectories in each nation. Deep machine learning modelling predicts that, without intervention, average Canadian residential radon levels will increase to 176 Bq/m³ by 2050, emphasizing the importance and urgency of future building code intervention to achieve systemic radon reduction in Canada.Keywords: radon health risk, time-series, deep machine learning, lung cancer, Canada, Sweden
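Purely to illustrate the kind of projection described (not the authors' model), the sketch below fits a small scikit-learn neural-network regressor to invented decade-averaged radon levels and extrapolates to 2050. Both the placeholder numbers and the single construction-year feature are assumptions; the study itself draws on far richer building metrics.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical decade-averaged radon levels (Bq/m3) by construction year --
# placeholder values, NOT measurements from the study
years = np.array([1950, 1960, 1970, 1980, 1990, 2000, 2010, 2020], dtype=float)
radon = np.array([70, 80, 96, 103, 110, 118, 126, 131], dtype=float)

# Rescale the year so the network trains on small, well-conditioned inputs
x = ((years - 1950.0) / 70.0).reshape(-1, 1)

model = MLPRegressor(hidden_layer_sizes=(16, 16), solver="lbfgs",
                     max_iter=5000, random_state=0)
model.fit(x, radon)

x_2050 = np.array([[(2050.0 - 1950.0) / 70.0]])
projected = float(model.predict(x_2050)[0])
print(f"Projected 2050 average (Bq/m3): {projected:.1f}")
```

With real data one would also hold out the most recent years to sanity-check such an extrapolation, since neural networks extrapolate poorly outside their training range.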
Procedia PDF Downloads 85
28978 The Strategic Importance of Technology in the International Production: Beyond the Global Value Chains Approach
Authors: Marcelo Pereira Introini
Abstract:
The global value chains (GVC) approach contributes to a better understanding of the organization of international production amid globalization's second unbundling from the 1970s onwards. Largely because of the tools it offers for understanding the importance of critical competences, technological capabilities, and the functions performed by each player, GVC research has flourished in recent years, rooted in discussions of the possibilities of integration and repositioning along regional and global value chains. In this context, part of the literature endorsed the more optimistic view that engaging in fragmented production networks could represent learning opportunities for developing countries' firms, since the relationship with transnational corporations could allow them to build skills and competences. Increasing recognition that GVCs are based on asymmetric power relations, however, provided another view of the benefits, costs, and development possibilities. Since leading companies tend to restrict the replication of their technologies and capabilities by their suppliers, alternative strategies beyond functional specialization, seen as a way to integrate value chains, began to be widely highlighted. This paper organizes a coherent narrative about the shortcomings of the GVC analytical framework, while recognizing its multidimensional contributions and recent developments. We adopt two different and complementary perspectives to explore the idea of integration in international production. On one hand, we emphasize obstacles beyond production components, analyzing the role played by intangible assets and intellectual property regimes. On the other hand, we consider the importance of domestic production and innovation systems for technological development. In order to provide a deeper understanding of the restrictions on technological learning faced by developing countries' firms, we first build on the notion of intellectual monopoly to analyze how flagship companies can prevent subordinated firms from improving their positions in fragmented production networks. Drawing on intellectual property protection regimes, we discuss the increasing asymmetries between these players and the decreasing access of some of them to strategic intangible assets. Second, we debate the role of productive-technological ecosystems and of interactive and systemic technological development processes, as concepts of the Innovation Systems approach. Supporting the idea that endogenous advantages are not only important for the international competitiveness of developing countries' firms, but that building these advantages is itself a source of technological learning, we focus on local efforts as a crucial element that cannot be replaced by technology imported from abroad. Finally, the paper contributes to the discussion of technological development as a two-dimensional dynamic. If GVC analysis tends to adopt a company-based perspective, stressing the learning opportunities associated with GVC integration, the historical involvement of national states brings up the debate about technology as a central aspect of interstate disputes. In this sense, technology is seen as part of military modernization before also being used in civil contexts, which presupposes its role in national security and productive autonomy strategies. From this outlook, technology should be considered an asset that, embodied in sophisticated machinery, can be the target of state policies beyond the protection provided by intellectual property regimes, such as export controls and inward-investment restrictions.Keywords: global value chains, innovation systems, intellectual monopoly, technological development
Procedia PDF Downloads 81