Search results for: quest based learning
30771 Determination of Water Pollution and Water Quality with Decision Trees
Authors: Çiğdem Bakır, Mecit Yüzkat
Abstract:
With the increasing emphasis on water quality worldwide, the search for new and intelligent monitoring systems, and the market for them, has grown. The current method is the laboratory process, where samples are taken from bodies of water and tests are carried out in laboratories. This method is time-consuming, labour-intensive, and uneconomical. To solve this problem, we used machine learning methods to detect water pollution in our study. We created decision trees with the Orange3 software and tried to determine all the factors that cause water pollution. An automatic prediction model based on water quality was developed using machine learning methods, taking many model inputs such as water temperature, pH, transparency, conductivity, dissolved oxygen, and ammonia nitrogen. The proposed approach consists of three stages: preprocessing of the data, feature detection, and classification. We evaluated the success of our study with different accuracy metrics and presented the results comparatively. In addition, we achieved approximately 98% accuracy with the decision tree.
Keywords: decision tree, water quality, water pollution, machine learning
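For illustration only, the following is a minimal scikit-learn sketch of the kind of pipeline the abstract describes (preprocess, select features, classify with a decision tree). The CSV file, column names, and hyperparameters are hypothetical placeholders; the authors' actual workflow used Orange3.

```python
# Hypothetical sketch of a water-quality decision tree (not the authors' Orange3 workflow).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, classification_report

# Assumed measurement columns; substitute the real field names.
FEATURES = ["temperature", "ph", "transparency", "conductivity",
            "dissolved_oxygen", "ammonia_nitrogen"]

df = pd.read_csv("water_samples.csv").dropna(subset=FEATURES + ["quality_class"])

X_train, X_test, y_train, y_test = train_test_split(
    df[FEATURES], df["quality_class"], test_size=0.2, random_state=42)

clf = DecisionTreeClassifier(max_depth=6, random_state=42)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("Accuracy:", accuracy_score(y_test, pred))
print(classification_report(y_test, pred))
```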
Procedia PDF Downloads 83
30770 A Machine Learning Approach for Anomaly Detection in Environmental IoT-Driven Wastewater Purification Systems
Authors: Giovanni Cicceri, Roberta Maisano, Nathalie Morey, Salvatore Distefano
Abstract:
The main goal of this paper is to present a solution for a water purification system based on an Environmental Internet of Things (EIoT) platform to monitor and control water quality, together with machine learning (ML) models to support decision making and speed up the water purification processes. A real case study has been implemented by deploying an EIoT platform and a network of devices, called Gramb meters and belonging to the Gramb project, on wastewater purification systems located in Calabria, in the south of Italy. The data thus collected are used to control the wastewater quality, detect anomalies, and predict the behaviour of the purification system. To this extent, three different statistical and machine learning models have been adopted and compared: Autoregressive Integrated Moving Average (ARIMA), a Long Short-Term Memory (LSTM) autoencoder, and Facebook Prophet (FP). The results demonstrated that the ML solution (LSTM) outperforms the classical statistical approaches (ARIMA, FP) in terms of accuracy, efficiency, and effectiveness in monitoring and controlling the wastewater purification processes.
Keywords: environmental internet of things, EIoT, machine learning, anomaly detection, environment monitoring
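A minimal Keras sketch of the LSTM-autoencoder idea described above: the model learns to reconstruct windows of normal sensor readings, and windows with high reconstruction error are flagged as anomalies. The window length, sensor count, data files, and threshold rule are assumptions, not the Gramb project's implementation.

```python
# Hypothetical LSTM-autoencoder anomaly detector for multivariate sensor windows.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

TIMESTEPS, N_FEATURES = 48, 4  # assumed window length and number of sensors

def build_autoencoder():
    inputs = keras.Input(shape=(TIMESTEPS, N_FEATURES))
    x = layers.LSTM(64, return_sequences=False)(inputs)        # encode the window
    x = layers.RepeatVector(TIMESTEPS)(x)                      # repeat the latent code
    x = layers.LSTM(64, return_sequences=True)(x)              # decode the sequence
    outputs = layers.TimeDistributed(layers.Dense(N_FEATURES))(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mae")
    return model

# X_normal: array of shape (n_windows, TIMESTEPS, N_FEATURES) holding normal data only.
X_normal = np.load("normal_windows.npy")                       # placeholder data source
model = build_autoencoder()
model.fit(X_normal, X_normal, epochs=20, batch_size=32, validation_split=0.1)

# Flag windows whose reconstruction error exceeds a percentile of the training error.
train_err = np.mean(np.abs(model.predict(X_normal) - X_normal), axis=(1, 2))
threshold = np.percentile(train_err, 99)

def is_anomalous(window):
    err = np.mean(np.abs(model.predict(window[None, ...])[0] - window))
    return err > threshold
```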
Procedia PDF Downloads 151
30769 The Effects of Self-Graphing on the Reading Fluency of an Elementary Student with Learning Disabilities
Authors: Matthias Grünke
Abstract:
In this single-case study, we evaluated the effects of a self-graphing intervention to help students improve their reading fluency. Our participant was a 10-year-old girl with a suspected learning disability in reading. We applied an ABAB reversal design to test the efficacy of our approach. The dependent measure was the number of correctly read words from a children’s book within five minutes. Our participant recorded her daily performance using a simple line diagram. Results indicate that her reading rate improved simultaneously with the intervention and dropped as soon as the treatment was suspended. The findings give reasons for optimism that our simple strategy can be a very effective tool in supporting students with learning disabilities to boost their reading fluency.
Keywords: single-case study, learning disabilities, elementary education, reading problems, reading fluency
Procedia PDF Downloads 111
30768 A Machine Learning Pipeline for Real-Time Activity Detection on Low Computational Power Devices for Metaverse Applications
Authors: Amit Kumar, Amanpreet Chander, Ashish Sahani
Abstract:
This paper presents our recent work on real-time human activity detection based on the MediaPipe pipeline and machine learning algorithms. The proposed system can detect human activities, including running, jumping, squatting, bending to the left or right, and standing still. This is a robust solution for developing yoga, dance, metaverse, and fitness applications that check for the correctness of a pose without any additional monitor such as a personal trainer. MediaPipe offers an open-source, cross-platform solution which utilizes a two-step detector-tracker ML pipeline for live detection of key landmarks on the body, which can be used for motion data collection. The prediction of real-time poses uses a variety of machine learning techniques and different types of analysis. Without relying primarily on powerful desktop environments for inference, our method achieves real-time performance on the majority of contemporary mobile phones, desktops/laptops, in Python, or even on the web. Experimental results show that our method outperforms the existing method in terms of accuracy and real-time capability, achieving an accuracy of 99.92% on testing datasets.
Keywords: human activity detection, MediaPipe, machine learning, metaverse applications
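As a rough illustration of the detector-tracker-plus-classifier idea, here is a sketch that extracts MediaPipe pose landmarks per frame and feeds them to a lightweight scikit-learn classifier. It uses the legacy `mp.solutions.pose` API; the feature files, classifier choice, and activity labels are assumptions, not the paper's exact pipeline.

```python
# Hypothetical sketch: per-frame MediaPipe pose landmarks -> activity classifier.
import cv2
import numpy as np
import mediapipe as mp
from sklearn.neighbors import KNeighborsClassifier

mp_pose = mp.solutions.pose  # legacy "solutions" API

def frame_to_features(frame_bgr, pose):
    """Flatten the 33 pose landmarks (x, y, z, visibility) into one feature vector."""
    results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return None
    return np.array([[lm.x, lm.y, lm.z, lm.visibility]
                     for lm in results.pose_landmarks.landmark]).flatten()

# Assume X_train / y_train were built offline from labelled videos of each activity.
X_train, y_train = np.load("pose_features.npy"), np.load("pose_labels.npy")
clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

cap = cv2.VideoCapture(0)
with mp_pose.Pose(static_image_mode=False, min_detection_confidence=0.5) as pose:
    ok, frame = cap.read()
    while ok:
        feats = frame_to_features(frame, pose)
        if feats is not None:
            print("predicted activity:", clf.predict(feats[None, :])[0])
        ok, frame = cap.read()
cap.release()
```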
Procedia PDF Downloads 179
30767 Assessment of Physical Learning Environments in ECE: Interdisciplinary and Multivocal Innovation for Chilean Kindergartens
Authors: Cynthia Adlerstein
Abstract:
Physical learning environment (PLE) has been considered, after family and educators, as the third teacher. There have been conflicting and converging viewpoints on the role of the physical dimensions of places to learn in facilitating educational innovation and quality. Despite the different approaches, PLE has been widely recognized as a key factor in the quality of the learning experience and in the levels of learning achievement in ECE. The conceptual frameworks of the field assume that PLE consists of a complex web of factors that shape the overall conditions for learning, and that much more interdisciplinary and complementary methodologies of research and development are required. Although the relevance of PLE attracts a broad international consensus, in Chile it remains under-researched and weakly regulated by public policy. Gaining a deeper contextual understanding and more thoughtfully designed recommendations requires the use of innovative assessment tools that cross cultural and disciplinary boundaries to produce new hybrid approaches and improvements. When considering a PLE-based change process for ECE improvement, a central question is what dimensions, variables, and indicators could allow a comprehensive assessment of PLE in Chilean kindergartens. Based on a grounded theory social justice inquiry, we adopted a mixed-method design that enabled a multivocal and interdisciplinary construction of data. By using in-depth interviews, discussion groups, questionnaires, and documental analysis, we elicited the PLE discourses of politicians, early childhood practitioners, experts in architectural design and ergonomics, ECE stakeholders, and 3- to 5-year-olds. A constant comparison method enabled the construction of the dimensions, variables, and indicators through which PLE assessment is possible. Subsequently, the instrument was applied to a sample of 125 early childhood classrooms to test reliability (internal consistency) and validity (content and construct). As a result, an interdisciplinary and multivocal tool for assessing physical learning environments was constructed and validated for Chilean kindergartens. The tool is structured upon 7 dimensions (wellbeing, flexibility, empowerment, inclusiveness, symbolically meaningful, pedagogically intentioned, institutional management), 19 variables, and 105 indicators that are assessed through observation and registration on a mobile app. The overall reliability of the instrument is .938, while the consistency of each dimension varies between .773 (inclusiveness) and .946 (symbolically meaningful). The validation process through expert opinion and factorial analysis (chi-square test) has shown that the dimensions of the assessment tool reflect the factors of physical learning environments. The constructed assessment tool for kindergartens highlights the significance of the physical environment in early childhood educational settings. The relevance of the instrument lies in its interdisciplinary approach to PLE and in its capability to guide innovative learning environments, based on educational habitability. Though further analyses are required for concurrent validation and standardization, the tool has been considered by practitioners and ECE stakeholders as an intuitive, accessible, and remarkable instrument for raising awareness of PLE and of the equitable distribution of learning opportunities.
Keywords: Chilean kindergartens, early childhood education, physical learning environment, third teacher
Procedia PDF Downloads 357
30766 Virtual Science Hub: An Open Source Platform to Enrich Science Teaching
Authors: Enrique Barra, Aldo Gordillo, Juan Quemada
Abstract:
This paper presents the Virtual Science Hub platform. It is an open source platform that combines a social network, an e-learning authoring tool, a video conference service, and a learning object repository for science teaching enrichment. These four main functionalities fit very well together. The platform was released in April 2012 and since then it has not stopped growing. Finally, we present the results of the surveys conducted and the statistics gathered to validate this approach.
Keywords: e-learning, platform, authoring tool, science teaching, educational sciences
Procedia PDF Downloads 397
30765 A Valid Professional Development Framework For Supporting Science Teachers In Relation To Inquiry-Based Curriculum Units
Authors: Fru Vitalis Akuma, Jenna Koenen
Abstract:
The science education community is increasingly calling for learning experiences that mirror the work of scientists. Although inquiry-based science education is aligned with these calls, the implementation of this strategy is a complex and daunting task for many teachers. Thus, policymakers and researchers have noted the need for continued teacher Professional Development (PD) in the enactment of inquiry-based science education, coupled with effective ways of reaching the goals of teacher PD. This is a complex problem for which educational design research is suitable. The purpose at this stage of our design research is to develop a generic PD framework that is valid as the blueprint of a PD program for supporting science teachers in relation to inquiry-based curriculum units. The seven components of the framework are the goal, learning theory, strategy, phases, support, motivation, and an instructional model. Based on a systematic review of the literature on effective (science) teacher PD, coupled with developer screening, we have generated a design principle per component of the PD framework. For example, as per the associated design principle, the goal of the framework is to provide science teachers with experiences in authentic inquiry, coupled with enhancing their competencies linked to the adoption, customization, and design, and then the classroom implementation and revision, of inquiry-based curriculum units. The seven design principles have allowed us to synthesize the PD framework, which, together with the design principles, constitutes the preliminary outcome of the current research. We are in the process of evaluating the content and construct validity of the framework, based on nine one-on-one interviews with experts in inquiry-based classroom and teacher learning. To this end, we have developed an interview protocol with the input of eight such experts in South Africa and Germany. Using the protocol, the expert appraisal of the PD framework will involve three experts from Germany, South Africa, and Cameroon, respectively. These countries, where we originate and/or work, provide a variety of inquiry-based science education contexts, making them suitable for the evaluation of the generic PD framework. Based on the evaluation, we will revise the framework and its seven design principles to arrive at the final outcomes of the current research. While the final content- and construct-valid version of the framework will serve as an example of how effective inquiry-based science teacher PD may be achieved, the final design principles will be useful to researchers when transforming the framework for use in any specific educational context. For example, in our further research, we will transform the framework into one that is practical and effective in supporting inquiry-based practical work in resource-constrained physical sciences classrooms in South Africa. Researchers in other educational contexts may similarly consider the final framework and design principles in their work. Thus, our final outcomes will inform practice and research around the support of teachers to increase the incorporation of learning experiences that mirror the work of scientists worldwide.
Keywords: design principles, educational design research, evaluation, inquiry-based science education, professional development framework
Procedia PDF Downloads 149
30764 Analysing Tertiary Lecturers’ Teaching Practices and Their English Major Students’ Learning Practices with Information and Communication Technology (ICT) Utilization in Promoting Higher-Order Thinking Skills (HOTs)
Authors: Malini Ganapathy, Sarjit Kaur
Abstract:
Maximising learning through higher-order thinking skills with Information and Communications Technology (ICT) has long been emphasised in various developed countries such as the United Kingdom, the United States of America, and Singapore. The transformation of the education curriculum in the Malaysia Education Development Plan (PPPM) 2013-2025 focuses on the concept of Higher Order Thinking (HOT) skills, which aim to produce knowledgeable students who are critical and creative in their thinking and can compete at the international level. HOT skills encourage students to apply, analyse, evaluate, and think creatively in and outside the classroom. In this regard, the National Education Blueprint (2013-2025) is grounded in high-performing systems which promote a transformation of the Malaysian education system in line with the vision of Malaysia’s National Philosophy in achieving educational outcomes of world-class status. This study was designed to investigate ESL students’ learning practices, with an emphasis on promoting HOTs while using ICT in their curricula. Data were collected using stratified random sampling, in which 100 participants were selected to take part in the study. These respondents were a group of undergraduate students who undertook ESL courses in a public university in Malaysia. A three-part questionnaire consisting of demographic information, students’ learning experience, and ICT utilization practices was administered in the data collection process. Findings from this study provide several important insights on students’ learning experiences and ICT utilization in developing HOT skills.
Keywords: English as a second language students, critical and creative thinking, learning, information and communication technology and higher order thinking skills
Procedia PDF Downloads 490
30763 Combining Shallow and Deep Unsupervised Machine Learning Techniques to Detect Bad Actors in Complex Datasets
Authors: Jun Ming Moey, Zhiyaun Chen, David Nicholson
Abstract:
Bad actors are often hard to detect in data that imprints their behaviour patterns because they are comparatively rare events embedded in non-bad actor data. An unsupervised machine learning framework is applied here to detect bad actors in financial crime datasets that record millions of transactions undertaken by hundreds of actors (<0.01% bad). Specifically, the framework combines ‘shallow’ (PCA, Isolation Forest) and ‘deep’ (Autoencoder) methods to detect outlier patterns. Detection performance analysis for both the individual methods and their combination is reported.
Keywords: detection, machine learning, deep learning, unsupervised, outlier analysis, data science, fraud, financial crime
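To make the shallow/deep combination concrete, here is a hedged sketch that scores each transaction with PCA reconstruction error, Isolation Forest, and a small dense autoencoder, then averages the rank of the three scores. The feature file, autoencoder size, contamination rate, and rank-averaging rule are illustrative assumptions, not the paper's framework.

```python
# Hypothetical sketch combining "shallow" and "deep" unsupervised outlier scores.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest
from tensorflow import keras
from tensorflow.keras import layers

X = StandardScaler().fit_transform(np.load("transaction_features.npy"))  # rows = transactions

# Shallow score 1: PCA reconstruction error.
pca = PCA(n_components=0.9).fit(X)
pca_err = np.sum((X - pca.inverse_transform(pca.transform(X))) ** 2, axis=1)

# Shallow score 2: Isolation Forest (sign flipped so higher = more anomalous).
iso_score = -IsolationForest(contamination=0.0001, random_state=0).fit(X).score_samples(X)

# Deep score: dense autoencoder reconstruction error.
inp = keras.Input(shape=(X.shape[1],))
dec = layers.Dense(X.shape[1])(layers.Dense(8, activation="relu")(inp))
ae = keras.Model(inp, dec)
ae.compile(optimizer="adam", loss="mse")
ae.fit(X, X, epochs=10, batch_size=256, verbose=0)
ae_err = np.mean((ae.predict(X) - X) ** 2, axis=1)

def rank(s):
    return np.argsort(np.argsort(s)) / len(s)   # normalised rank in [0, 1)

combined = (rank(pca_err) + rank(iso_score) + rank(ae_err)) / 3
suspects = np.argsort(combined)[-100:]          # top-100 most anomalous transactions
```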
Procedia PDF Downloads 95
30762 Predicting Costs in Construction Projects with Machine Learning: A Detailed Study Based on Activity-Level Data
Authors: Soheila Sadeghi
Abstract:
Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
Keywords: cost prediction, machine learning, project management, random forest, neural networks
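A short sketch of the Random Forest part of the approach described above: predict activity-level overrun and read off feature importances to surface candidate cost drivers. The CSV file, column names, and target definition are hypothetical placeholders, not the study's dataset.

```python
# Hypothetical sketch: Random Forest overrun prediction with feature importances.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("activities.csv")
features = ["planned_cost", "planned_duration", "scope_changes",
            "material_delay_days", "crew_size"]          # assumed predictors
target = "cost_overrun"

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, random_state=1)

rf = RandomForestRegressor(n_estimators=300, random_state=1)
rf.fit(X_train, y_train)

print("MAE:", mean_absolute_error(y_test, rf.predict(X_test)))
for name, imp in sorted(zip(features, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")                          # candidate cost drivers
```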
Procedia PDF Downloads 57
30761 Enhancing Higher Education Teaching and Learning Processes: Examining How Lecturer Evaluation Makes a Difference
Authors: Daniel Asiamah Ameyaw
Abstract:
This research attempts to investigate how lecturer evaluation makes a difference in enhancing higher education teaching and learning processes. The research questions guiding this work are, first, “What are the perspectives on the difference made by evaluating academic teachers in order to enhance higher education teaching and learning processes?” and, second, “What are the implications of the findings for policy and practice?” Data for this research were collected mainly through interviews and partly through document review. Data analysis was conducted under the framework of grounded theory. The findings showed that at the individual lecturer level, lecturer evaluation provides a continuous improvement of teaching strategies and serves as a source of data for research on teaching. At the individual student level, it enhances students' learning process, serves as a source of information for course selection by students, and makes students feel recognised in the educational process. At the institutional level, lecturer evaluation is useful in personnel and management decision making; it assures stakeholders of quality teaching and learning by setting up standards for lecturers; and it enables institutions to identify skill requirements and needs as a basis for organising workshops. Lecturer evaluation is useful at the national level in terms of guaranteeing the competencies of graduates, who then meet the manpower requirements of the nation. In addition, resource allocation to higher education institutions is based largely on the quality of the programmes they run. The researcher concluded that the findings have implications for policy and practice; therefore, higher education managers are expected to ensure that policy is implemented as planned by policy-makers so that the objectives can be achieved successfully.
Keywords: academic quality, higher education, lecturer evaluation, teaching and learning processes
Procedia PDF Downloads 143
30760 Pedagogy of Possibility: Exploring the TVET of Southern African Workers on Foreign Vessels Mediated by Ubiquitous Google and Microsoft apps
Authors: Robin Ferguson
Abstract:
The context which this paper explores is the provision of Technical Vocational Education and Training (TVET) of southern African workers at sea on local and foreign vessels using a blended learning approach. The pedagogical challenge of providing quality education in this context is that multiple African and foreign languages and cultural norms are found amongst the all-male crew, and there are widely differing levels of education, low levels of digital literacy, and limited connectivity. The methodology used is a nested case study. The study describes the mechanisms used to provide ongoing, real-time workplace TVET on two foreign vessels. Some training was done in person when the vessels came into port; however, the majority of the TVET was achieved from shore to ship using a combination of commonly available Google and Microsoft apps and WhatsApp. Voice, video, and text in multiple languages were used to accommodate different learning styles. The learning was supported by the development of learning networks using social media. This paper also reflects on the shore-based organisational change processes required to support learning at sea. The conceptual framework used is the Theory of Practice Architectures (TPA), as it provides a site-ontological perspective of the sayings/thinkings, doings, and relatings of this workplace training, which is multiplanar as it plays out at sea and ashore, in person and online. Using TPA, the overarching practice architectures and supporting structures which confound or enable these learning practices are revealed. The contribution which this paper makes is an insight into an innovative vocational pedagogy which promotes ICT-mediated learning amongst workers who suffer from low levels of literacy and limited ICT access and who work and live in remote places. It is a pedagogy of possibility which crosses the digital divide.
Keywords: theory of practice architectures, Microsoft, Google, WhatsApp, vocational pedagogy, mariners, distributed workplaces
Procedia PDF Downloads 81
30759 A Case Study of Deep Learning for Disease Detection in Crops
Authors: Felipe A. Guth, Shane Ward, Kevin McDonnell
Abstract:
In the precision agriculture area, one of the main tasks is the automated detection of diseases in crops. Machine learning algorithms have been studied in recent decades for such tasks in view of the potential economic benefits that automated disease detection may bring to crop fields. The latest generation of deep learning convolutional neural networks has presented significant results in the area of image classification. Accordingly, this work tested the implementation of a deep convolutional neural network architecture for the detection of diseases in different types of crops. A data augmentation strategy was used to meet the requirements of the algorithm implemented with a deep learning framework. Two test scenarios were deployed. The first scenario trained a neural network on images extracted from a controlled environment, while the second used images from both the field and the controlled environment. The results evaluated the generalisation capacity of the neural networks in relation to the two types of images presented. Results yielded a general classification accuracy of 59% in scenario 1 and 96% in scenario 2.
Keywords: convolutional neural networks, deep learning, disease detection, precision agriculture
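For illustration, a minimal Keras sketch of a small convolutional network with in-model data augmentation for leaf-image classification. The directory layout, image size, class count, and network depth are assumptions; the paper's actual architecture and framework are not specified here.

```python
# Hypothetical CNN with data augmentation for crop-disease image classification.
from tensorflow import keras
from tensorflow.keras import layers

IMG_SIZE, N_CLASSES = (128, 128), 10                     # assumed values

train_ds = keras.utils.image_dataset_from_directory(
    "crop_images/train", image_size=IMG_SIZE, batch_size=32)
val_ds = keras.utils.image_dataset_from_directory(
    "crop_images/val", image_size=IMG_SIZE, batch_size=32)

augment = keras.Sequential([                              # applied only in training
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

model = keras.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    augment,
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=15)
```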
Procedia PDF Downloads 259
30758 Multi-Classification Deep Learning Model for Diagnosing Different Chest Diseases
Authors: Bandhan Dey, Muhsina Bintoon Yiasha, Gulam Sulaman Choudhury
Abstract:
Chest diseases are among the most problematic ailments in everyday life, and many of them are known. Diagnosing them correctly plays a vital role in the process of treatment. There are many methods available explicitly developed for different chest diseases. However, the most common approach for diagnosing these diseases is through X-ray imaging. In this paper, we proposed a multi-classification deep learning model for diagnosing COVID-19, lung cancer, pneumonia, tuberculosis, and atelectasis from chest X-rays. In the present work, we used the transfer learning method for better accuracy and a faster training phase. The performance of three architectures is considered: InceptionV3, VGG-16, and VGG-19. We evaluated these deep learning architectures using public digital chest X-ray datasets with six classes (i.e., COVID-19, lung cancer, pneumonia, tuberculosis, atelectasis, and normal). The experiments are conducted on the six-class problem, and we found that VGG-16 outperforms the other proposed models with an accuracy of 95%.
Keywords: deep learning, image classification, X-ray images, TensorFlow, Keras, chest diseases, convolutional neural networks, multi-classification
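A hedged Keras sketch of the transfer-learning setup described above, using a frozen VGG-16 backbone and a six-class head. The dataset paths, head layers, and hyperparameters are assumptions, not the paper's exact configuration.

```python
# Hypothetical VGG-16 transfer-learning sketch for six chest X-ray classes.
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG16

IMG_SIZE, N_CLASSES = (224, 224), 6

base = VGG16(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False                                    # freeze pretrained features

inputs = keras.Input(shape=IMG_SIZE + (3,))
x = keras.applications.vgg16.preprocess_input(inputs)     # VGG-style preprocessing
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(N_CLASSES, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer=keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

train_ds = keras.utils.image_dataset_from_directory(
    "chest_xrays/train", image_size=IMG_SIZE, batch_size=16)
val_ds = keras.utils.image_dataset_from_directory(
    "chest_xrays/val", image_size=IMG_SIZE, batch_size=16)
model.fit(train_ds, validation_data=val_ds, epochs=10)
```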
Procedia PDF Downloads 92
30757 Subspace Rotation Algorithm for Implementing Restricted Hopfield Network as an Auto-Associative Memory
Authors: Ci Lin, Tet Yeap, Iluju Kiringa
Abstract:
This paper introduces the subspace rotation algorithm (SRA) to train the Restricted Hopfield Network (RHN) as an auto-associative memory. The subspace rotation algorithm is a gradient-free subspace tracking approach based on the singular value decomposition (SVD). In comparison with Backpropagation Through Time (BPTT) for training the RHN, it is observed that SRA always converges to the optimal solution, whereas BPTT cannot achieve the same performance when the model becomes complex and the number of patterns is large. The AUTS case study showed that the RHN model trained by SRA could achieve a better structure of attraction basins, with a larger radius (in general), than the Hopfield Network (HNN) model trained by the Hebbian learning rule. Through learning 10,000 patterns from the MNIST dataset with RHN models with different numbers of hidden nodes, it is observed that several components can be adjusted to achieve a balance between recovery accuracy and noise resistance.
Keywords: Hopfield neural network, restricted Hopfield network, subspace rotation algorithm, Hebbian learning rule
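The SRA itself is not specified in enough detail in this abstract to reproduce; as a point of reference, the following sketch illustrates only the classical baseline it is compared against: a Hopfield network with Hebbian storage of bipolar patterns and asynchronous recall. The pattern sizes and noise level are illustrative.

```python
# Sketch of the Hebbian-rule Hopfield baseline (not the SRA or the RHN).
import numpy as np

def hebbian_weights(patterns):
    """patterns: array of shape (n_patterns, n_units) with entries in {-1, +1}."""
    W = patterns.T @ patterns / patterns.shape[0]
    np.fill_diagonal(W, 0.0)                      # no self-connections
    return W

def recall(W, probe, n_sweeps=10, rng=np.random.default_rng(0)):
    state = probe.copy()
    for _ in range(n_sweeps):
        for i in rng.permutation(len(state)):     # asynchronous unit updates
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

patterns = np.sign(np.random.default_rng(1).standard_normal((5, 100)))
W = hebbian_weights(patterns)
noisy = patterns[0] * np.where(np.random.default_rng(2).random(100) < 0.1, -1, 1)
print("bits recovered:", int(np.sum(recall(W, noisy) == patterns[0])))
```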
Procedia PDF Downloads 118
30756 Spontaneous and Posed Smile Detection: Deep Learning, Traditional Machine Learning, and Human Performance
Authors: Liang Wang, Beste F. Yuksel, David Guy Brizan
Abstract:
A computational model of affect that can distinguish between spontaneous and posed smiles with no errors on a large, popular data set using deep learning techniques is presented in this paper. A Long Short-Term Memory (LSTM) classifier, a type of Recurrent Neural Network, is utilized and compared to human classification. Results showed that while human classification (mean of 0.7133) was above chance, the LSTM model was more accurate than human classification and other comparable state-of-the-art systems. Additionally, a high accuracy rate was maintained with a small number of training videos (70 instances). Important features were derived and analyzed to further understand the success of our computational model, and it was inferred that thousands of pairs of points within the eyes and mouth are important throughout all time segments in a smile. This suggests that distinguishing between a posed and spontaneous smile is a complex task, one which may account for the difficulty and lower accuracy of human classification compared to machine learning models.
Keywords: affective computing, affect detection, computer vision, deep learning, human-computer interaction, machine learning, posed smile detection, spontaneous smile detection
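A minimal sketch of an LSTM binary classifier over per-frame facial-feature sequences of the kind described above. The sequence length, feature dimension, and data files are placeholders, not the paper's setup.

```python
# Hypothetical LSTM classifier: spontaneous (1) vs. posed (0) smile sequences.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

SEQ_LEN, N_FEATURES = 60, 136                     # e.g. 68 (x, y) landmark pairs (assumed)

# X: (n_videos, SEQ_LEN, N_FEATURES); y: 1 = spontaneous, 0 = posed.
X, y = np.load("smile_sequences.npy"), np.load("smile_labels.npy")

model = keras.Sequential([
    keras.Input(shape=(SEQ_LEN, N_FEATURES)),
    layers.Masking(mask_value=0.0),               # ignore zero-padded frames
    layers.LSTM(64),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, validation_split=0.2, epochs=30, batch_size=16)
```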
Procedia PDF Downloads 125
30755 Relationship between Learning Methods and Learning Outcomes: Focusing on Discussions in Learning
Authors: Jaeseo Lim, Jooyong Park
Abstract:
Although there is ample evidence that student involvement enhances learning, college education is still mainly centered on lectures. However, in recent years, the effectiveness of discussions and the use of collective intelligence have attracted considerable attention. This study intends to examine the empirical effects of discussions on learning outcomes in various conditions. Eighty-eight college students participated in the study and were randomly assigned to three groups. Group 1 was told to review material after a lecture, as in a traditional lecture-centered class. Students were given time to review the material for themselves after watching the lecture in a video clip. Group 2 participated in a discussion in groups of three or four after watching the lecture. Group 3 participated in a discussion after studying on their own. Unlike the previous two groups, students in Group 3 did not watch the lecture. The participants in the three groups were tested after studying. The test questions consisted of memorization problems, comprehension problems, and application problems. The results showed that the groups where students participated in discussions had significantly higher test scores. Moreover, the group where students studied on their own did better than that where students watched a lecture. Thus, discussions are shown to be effective for enhancing learning. In particular, discussions seem to play a role in preparing students to solve application problems. This is a preliminary study, and other age groups and various academic subjects need to be examined in order to generalize these findings. We also plan to investigate what kind of support is needed to facilitate discussions.
Keywords: discussions, education, learning, lecture, test
Procedia PDF Downloads 176
30754 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions
Authors: Vikrant Gupta, Amrit Goswami
Abstract:
The fixed income market forms the basis of the modern financial market. All other assets in financial markets derive their value from the bond market. Owing to their over-the-counter nature, corporate bonds have relatively little publicly available data and are thus researched far less than equities. Bond price prediction is a complex financial time series forecasting problem and is considered very crucial in the domain of finance. Bond prices are highly volatile and full of noise, which makes it very difficult for traditional statistical time-series models to capture the complexity of the series patterns and leads to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate forecasting of time series. However, simple machine learning methods such as linear regression, support vector machines, and random forests fail to provide efficient results when tested on highly complex sequences such as stock prices and bond prices. Hence, to capture these intricate sequence patterns, various deep learning-based methodologies have been discussed in the literature. In this study, a recurrent neural network-based deep learning model using long short-term memory networks for the prediction of corporate bond prices has been discussed. Long Short-Term Memory networks (LSTMs) have been widely used in the literature for various sequence learning tasks in domains such as machine translation, speech recognition, etc. In recent years, various studies have discussed the effectiveness of LSTMs in forecasting complex time-series sequences and have shown promising results when compared to other methodologies. LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies, due to their memory function, which traditional neural networks fail to capture. In this study, a simple LSTM, a stacked LSTM, and a masked LSTM-based model have been discussed with respect to varying input sequences (three days, seven days, and 14 days). In order to facilitate faster learning and to gradually decompose the complexity of the bond price sequence, Empirical Mode Decomposition (EMD) has been used, which has improved the accuracy of the standalone LSTM model. With a variety of technical indicators and EMD-decomposed time series, the masked LSTM outperformed its two counterparts in terms of prediction accuracy. To benchmark the proposed model, the results have been compared with traditional time series models (ARIMA), shallow neural networks, and the three LSTM models discussed above. In summary, our results show that the use of LSTM models provides more accurate results and should be explored more within the asset management industry.
Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition
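For illustration, here is a sketch of only the simplest variant described above: a single LSTM that forecasts the next price from a 14-day window. The EMD step, technical indicators, stacking, and masking used in the study are omitted, and the price file, scaling, and hyperparameters are assumptions.

```python
# Hypothetical one-step-ahead LSTM forecaster over 14-day price windows.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW = 14

def make_windows(prices, window=WINDOW):
    X, y = [], []
    for i in range(len(prices) - window):
        X.append(prices[i:i + window])
        y.append(prices[i + window])
    return np.array(X)[..., None], np.array(y)     # X shape: (n, window, 1)

prices = np.load("bond_prices.npy")                # placeholder daily price series
scale = prices.max()
X, y = make_windows(prices / scale)

model = keras.Sequential([
    keras.Input(shape=(WINDOW, 1)),
    layers.LSTM(32),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, batch_size=32, validation_split=0.1)

next_price = model.predict(X[-1:])[0, 0] * scale   # one-step-ahead forecast
```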
Procedia PDF Downloads 136
30753 A Neurosymbolic Learning Method for Uplink LTE-A Channel Estimation
Authors: Lassaad Smirani
Abstract:
In this paper, we propose a Neurosymbolic Learning System (NLS) as a channel estimator for the Long Term Evolution Advanced (LTE-A) uplink. The main idea of the proposed system, which is based on a neural network, is to have modules capable of performing bidirectional information transfer between a symbolic module and a connectionist module. We demonstrate various strengths of the NLS, especially the ability to integrate theoretical knowledge (rules) and experiential knowledge (examples), and to convert an initial knowledge base (rules) into a connectionist network. It can also use empirical knowledge which, through learning, can revise the theoretical knowledge, acquire new knowledge and explain it, and finally improve the performance of symbolic or connectionist systems. Compared with conventional SC-FDMA channel estimation systems, the performance of the NLS in terms of complexity and quality is confirmed by theoretical analysis and simulation, which show that this system can improve channel estimation accuracy and decrease the bit error rate.
Keywords: channel estimation, SC-FDMA, neural network, hybrid system, BER, LTE-A
Procedia PDF Downloads 394
30752 Use of Smartphone in Practical Classes to Facilitate Teaching and Learning of Microscopic Analysis and Interpretation of Tissue Sections
Authors: Lise P. Labéjof, Krisnayne S. Ribeiro, Nicolle P. dos Santos
Abstract:
This article presents a previously unrecorded experiment on the use of the smartphone as a tool in practical histology classes. The behavior and learning of students from three science courses at the university were analyzed and compared, as well as the mode of teaching of this discipline and the students' appreciation of it, using either digital photographs taken by phone or drawings to record, analyze, and interpret microscopic observations of histological sections of human or animal tissues.
Keywords: cell phone, digital micrographies, learning of sciences, teaching practices
Procedia PDF Downloads 596
30751 Videoconference Technology: An Attractive Vehicle for Challenging and Changing Tutors' Practice in Open and Distance Learning Environment
Authors: Ramorola Mmankoko Ziphorah
Abstract:
Videoconference technology represents a recent experiment of technology integration into teaching and learning in South Africa. Increasingly, videoconference technology is used as a substitute for traditional face-to-face approaches to teaching and learning, helping tutors to reshape and change their teaching practices. Interestingly, though, some studies point out that videoconference technology is commonly used for knowledge dissemination by tutors and not so much for the actual teaching of course content in the Open and Distance Learning context. Though videoconference technology has become one of the dominant technologies available in Open and Distance Learning institutions, it is not clear that it has been used effectively to bridge the learning distance in time, geography, and economy. While tutors are prepared theoretically, in most tutor preparation programs, on the use of videoconference technology, there are still no practical guidelines on how they should go about integrating this technology into their course teaching. Therefore, there is an urgent need to focus on tutor development, specifically on their capacities and skills to use videoconference technology. The assumption is that if tutors become competent in the use of videoconference technology for course teaching, then its use in the Open and Distance Learning environment will become more commonplace. This is the imperative of the 4th Industrial Revolution (4IR) for education generally. Against the current vacuum in the practice of using videoconference technology for course teaching, the current study proposes a qualitative phenomenological approach to investigate the efficacy of videoconferencing as an approach to student learning. Using interviews and observation data from ten participants in an Open and Distance Learning institution, the author discusses how dialogue and structure interacted to provide the participating tutors with a rich set of opportunities to deliver course content. The findings of this study highlight various challenges experienced by tutors when using videoconference technology. The study suggests tutor development programs focused on tutors' capacities and skills and on how to integrate this technology with various teaching strategies in order to enhance student learning. The author argues that it is not merely the existence of the structure, namely the videoconference technology, that provides the opportunity for effective teaching, but rather the interactions, namely the dialogue amongst tutors and learners, that make videoconference technology an attractive vehicle for challenging and changing tutors' practice.
Keywords: open distance learning, transactional distance, tutor, videoconference
Procedia PDF Downloads 129
30750 The Relationships between How and Why Students Learn and Academic Achievement
Authors: S. Chee Choy, Daljeet Singh Sedhu
Abstract:
This study examines the relationships between how and why students learned and academic achievement for 2646 university students from various faculties. The LALQ, a self-report measure of student approaches to learning, was administered, and academic achievement data were obtained from students' CGPA. The results showed significant differences in the approach to learning of male and female students. How and why students learned can influence their achievement and efficacy as well. High and low achievers have different learning behaviours. High female achievers were more likely to learn for a better future and be persistent in it. Meanwhile, high male achievers were more likely to seek approval from their peers and be more confident about graduating on time from their university. The implications of individual differences and limitations of the study are discussed.
Keywords: student learning, learner awareness, student achievement, LALQ
Procedia PDF Downloads 346
30749 Creation of an Integrated Development Environment to Assist and Optimize the Learning of the Languages C and C++
Authors: Francimar Alves, Marcos Castro, Marllus Lustosa
Abstract:
In the context of the teaching of computer programming, the choice of tool is very important in the initiation and continuity of learning a programming language. The tools in the literature do not always provide usability and pedagogical dynamism clearly and accurately for effective learning. This hypothesis implies a fall in productivity and difficulty in learning a particular programming language by students. The integrated development environments (IDEs) Dev-C++ and Code::Blocks are widely used in introductory courses of undergraduate Computer Science programs for learning the C and C++ languages. However, after several years of discontinued maintenance of the Dev-C++ source code, the continued use of this tool in the teaching and learning process of the students of these institutions has led to difficulties, mainly due to the lack of updates by the official developers, which resulted in a sequence of problems in using it in educational settings. Many of the users, dissatisfied with the Dev-C++ IDE, migrated to the Code::Blocks platform, seeking a more dynamic experience in the process of learning the C and C++ languages. Nevertheless, there is still the need to create a tool that can provide the resources of most IDEs in the software development literature, but in a more interactive, simple, accurate, and efficient way. This motivation led to the creation of the Falcon C++ tool, an IDE with features that turn it into an educational platform, which focuses primarily on increasing student learning in the early programming and algorithms courses that use the C and C++ languages. As a working methodology, a field study was used to validate the proposed tool. The test results and interviews with entry-level and intermediate students at a postsecondary institution provided the basis for this work, demonstrating a positive impact of the use of the tool in teaching programming and showing that the use of the Falcon C++ software is beneficial in the process of teaching the C and C++ programming languages.
Keywords: IDE, education, learning, development, language
Procedia PDF Downloads 443
30748 Promoting Non-Formal Learning Mobility in the Field of Youth
Authors: Juha Kettunen
Abstract:
The purpose of this study is to develop a framework for the assessment of research and development projects. The assessment map is developed in this study based on the strategy map of the balanced scorecard approach. The assessment map is applied in a project that aims to reduce the inequality and risk of exclusion of young people from disadvantaged social groups. The assessment map denotes that not only funding but also necessary skills and qualifications should be carefully assessed in the implementation of project plans so as to achieve the objectives of projects and the desired impact. The results of this study are useful for those who want to develop the implementation of the Erasmus+ Programme and for the project teams of research and development projects.
Keywords: non-formal learning, youth work, social inclusion, innovation
Procedia PDF Downloads 294
30747 Developing Abbreviated Courses
Authors: Lynette Nickleberry Stewart
Abstract:
This presentation seeks to explore distinctions across disciplines in the appropriateness of accelerated courses and to offer suggestions for implementing accelerated courses in various disciplines. Grounded in a review of research on accelerated learning (AL), this presentation will discuss the intradisciplinary appropriateness of accelerated courses for various topics and student types, and make suggestions for implementing augmented courses. Meant to inform an emerging ‘handbook’ of accelerated course development, facilitators will lead participants in a discussion of personal challenges and triumphs in their attempts at accelerated course design.
Keywords: adult learning, abbreviated courses, accelerated learning, course design
Procedia PDF Downloads 120
30746 Re-identification Risk and Mitigation in Federated Learning: Human Activity Recognition Use Case
Authors: Besma Khalfoun
Abstract:
In many current Human Activity Recognition (HAR) applications, users' data is frequently shared and centrally stored by third parties, posing a significant privacy risk. This practice makes these entities attractive targets for extracting sensitive information about users, including their identity, health status, and location, thereby directly violating users' privacy. To tackle the issue of centralized data storage, a relatively recent paradigm known as federated learning has emerged. In this approach, users' raw data remains on their smartphones, where they train the HAR model locally. However, users still share updates of their local models originating from raw data. These updates are vulnerable to several attacks designed to extract sensitive information, such as determining whether a data sample is used in the training process, recovering the training data with inversion attacks, or inferring a specific attribute or property from the training data. In this paper, we first introduce PUR-Attack, a parameter-based user re-identification attack developed for HAR applications within a federated learning setting. It involves associating anonymous model updates (i.e., local models' weights or parameters) with the originating user's identity using background knowledge. PUR-Attack relies on a simple yet effective machine learning classifier and produces promising results. Specifically, we have found that by considering the weights of a given layer in a HAR model, we can uniquely re-identify users with an attack success rate of almost 100%. This result holds when considering a small attack training set and various data splitting strategies in the HAR model training. Thus, it is crucial to investigate protection methods to mitigate this privacy threat. Along this path, we propose SAFER, a privacy-preserving mechanism based on adaptive local differential privacy. Before sharing the model updates with the FL server, SAFER adds the optimal noise based on the re-identification risk assessment. Our approach can achieve a promising tradeoff between privacy, in terms of reducing re-identification risk, and utility, in terms of maintaining acceptable accuracy for the HAR model.
Keywords: federated learning, privacy risk assessment, re-identification risk, privacy preserving mechanisms, local differential privacy, human activity recognition
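To make the defence idea concrete, here is a minimal sketch of a client clipping its local model update and adding Gaussian noise before upload, in the spirit of local differential privacy. The clipping bound and noise scale shown are illustrative constants, not SAFER's risk-calibrated values.

```python
# Hypothetical sketch: perturb a local update before sending it to the FL server.
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_std=0.1, rng=np.random.default_rng()):
    """update: flat numpy array of weight deltas from local training."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))   # bound the sensitivity
    return clipped + rng.normal(0.0, noise_std, size=clipped.shape)

# Example: a client perturbs its update before upload.
local_update = np.random.default_rng(0).standard_normal(10_000) * 0.01
shared_update = privatize_update(local_update, clip_norm=1.0, noise_std=0.05)
```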
Procedia PDF Downloads 11
30745 Automatic Multi-Label Image Annotation System Guided by Firefly Algorithm and Bayesian Method
Authors: Saad M. Darwish, Mohamed A. El-Iskandarani, Guitar M. Shawkat
Abstract:
Nowadays, the amount of available multimedia data is continuously on the rise. Finding a required image is a challenging task for an ordinary user. Content-based image retrieval (CBIR) computes relevance based on the visual similarity of low-level image features such as color, textures, etc. However, there is a gap between low-level visual features and the semantic meanings required by applications. The typical method of bridging the semantic gap is through automatic image annotation (AIA), which extracts semantic features using machine learning techniques. In this paper, a multi-label image annotation system guided by the Firefly algorithm and a Bayesian method is proposed. Firstly, images are segmented using the maximum intra-cluster variance criterion and the Firefly algorithm, a swarm-based approach with high convergence speed and low computational cost that searches for the optimal multiple thresholds. Feature extraction techniques based on color features and region properties are applied to obtain the representative features. After that, the images are annotated using a translation model based on the Net Bayes system, which is efficient for multi-label learning, with high precision and low complexity. Experiments are performed using the Corel database. The results show that the proposed system is better than traditional ones for automatic image annotation and retrieval.
Keywords: feature extraction, feature selection, image annotation, classification
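As a heavily simplified stand-in for the final annotation step only, the sketch below maps per-region colour/shape features to candidate keywords with a Gaussian Naive Bayes classifier and aggregates region posteriors into image-level labels. The firefly-based segmentation and the Net Bayes translation model are not reproduced; the feature files and keyword vocabulary are placeholders.

```python
# Hypothetical per-region Naive Bayes keyword annotator (not the paper's Net Bayes model).
import numpy as np
from sklearn.naive_bayes import GaussianNB

# region_features: (n_regions, n_features); region_labels: keyword index per region.
region_features = np.load("region_features.npy")
region_labels = np.load("region_labels.npy")
keywords = ["sky", "water", "grass", "building", "animal"]   # illustrative vocabulary

nb = GaussianNB().fit(region_features, region_labels)

def annotate(image_regions, top_k=3):
    """Average per-region posteriors and return the image's top-k keywords."""
    probs = nb.predict_proba(image_regions).mean(axis=0)
    order = np.argsort(probs)[::-1][:top_k]
    return [keywords[nb.classes_[i]] for i in order]
```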
Procedia PDF Downloads 586
30744 Effects of Closed-Caption Programs on EFL Learners' Listening Comprehension and Vocabulary Learning
Authors: Bahman Gorjian
Abstract:
This study investigated the effects of closed-captioning on vocabulary learning and listening comprehension of English-language movies. Captioning is thus an effective language-learning tool for persons learning English as a second language. Because students may learn a foreign language "passively," utilizing subtitles on television could make learning English enjoyable for them. Closed captioning is an electronic technique that converts spoken words from a television program's audio into written text that mimics subtitles in another language. The findings of this study showed the importance of using closed-captioning software when learning a foreign language. As a result, such tools must be considered when teaching EFL/ESL. The influence of watching movies with closed captions on vocabulary and listening comprehension is compared in this study. This goal can be reached by employing a closed-captioned movie as a teaching tool in the classroom. This research is important because it demonstrates the advantages of closed-captioning programs in EFL classrooms for both teachers and students. The study's findings assisted teachers in better understanding how to employ closed captioning as a teaching tool in the classroom. The effects will be seen as even more significant for language learners who use the method.
Keywords: closed captions, listening comprehension, vocabulary
Procedia PDF Downloads 89
30743 Web 2.0 in Higher Education: The Instructors’ Acceptance in Higher Educational Institutes in Kingdom of Bahrain
Authors: Amal M. Alrayes, Hayat M. Ali
Abstract:
Since the beginning of distance education and with the rapid evolution of technology, social networks have played a vital role in the educational process by reinforcing the interaction between learners and teachers. There are many Web 2.0 technologies, services, and tools designed for educational purposes. This research aims to investigate instructors' acceptance of web-based learning systems in higher educational institutes in the Kingdom of Bahrain. A questionnaire was used to investigate the instructors' usage of Web 2.0 and the factors affecting their acceptance. The results confirm that instructors had high accessibility to such technologies. However, patterns of use were complex. Whilst most expressed interest in using online technologies to support learning activities, learners seemed cautious about other values associated with web-based systems, such as the shared construction of knowledge in a public format. The research concludes that the main factors affecting instructors' adoption are security, performance expectation, perceived benefits, subjective norm, and perceived usefulness.
Keywords: Web 2.0, higher education, acceptance, students' perception
Procedia PDF Downloads 337
30742 An Energy Efficient Clustering Approach for Underwater Wireless Sensor Networks
Authors: Mohammad Reza Taherkhani
Abstract:
Wireless sensor networks that are used to monitor a specific environment are formed from a large number of sensor nodes. The role of these sensors is to sense particular parameters from the environment and to establish connections. In these networks, the most important challenge is the management of energy usage. Clustering is one of the methods that are broadly used to face this challenge. In this paper, a distributed clustering protocol based on learning automata is proposed for underwater wireless sensor networks. The proposed algorithm, called LA-Clustering, forms clusters at the same energy level, based on the energy level of the nodes and the connection radius, regardless of the size and structure of the sensor network. The proposed approach is simulated and compared with other protocols, considering metrics such as network lifetime, number of alive nodes, and amount of transmitted data. The simulation results demonstrate the efficiency of the proposed approach.
Keywords: underwater sensor networks, clustering, learning automata, energy consumption
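To illustrate the learning-automaton idea in general terms, the sketch below has a sensor node keep a probability vector over candidate cluster heads and reinforce (with a linear reward-inaction update) choices that fall inside its connection radius and match its energy level. The reward conditions and all numeric values are assumptions for illustration; this is not the LA-Clustering protocol itself.

```python
# Hypothetical learning automaton for cluster-head selection (illustration only).
import numpy as np

class ClusterHeadAutomaton:
    def __init__(self, n_candidates, learning_rate=0.1, rng=np.random.default_rng(0)):
        self.p = np.full(n_candidates, 1.0 / n_candidates)  # action probabilities
        self.a = learning_rate
        self.rng = rng

    def choose(self):
        return self.rng.choice(len(self.p), p=self.p)

    def reward(self, chosen):
        # Linear reward-inaction: move probability mass toward the rewarded action.
        self.p = (1 - self.a) * self.p
        self.p[chosen] += self.a
        self.p /= self.p.sum()

# Assumed environment: reward when the candidate head is within range and at a
# similar energy level to the node.
node_energy, radius = 0.8, 30.0
heads_energy = np.array([0.75, 0.4, 0.82])
heads_dist = np.array([25.0, 10.0, 40.0])

automaton = ClusterHeadAutomaton(len(heads_energy))
for _ in range(200):
    k = automaton.choose()
    if heads_dist[k] <= radius and abs(heads_energy[k] - node_energy) < 0.1:
        automaton.reward(k)
print("preferred cluster head:", int(np.argmax(automaton.p)))
```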
Procedia PDF Downloads 361