Search results for: multimedia learning
4997 Probing Syntax Information in Word Representations with Deep Metric Learning
Authors: Bowen Ding, Yihao Kuang
Abstract:
In recent years, with the development of large-scale pre-trained language models, building vector representations of text with deep neural network models has become standard practice in natural language processing. Performance on downstream tasks shows that the text representations these models construct contain linguistic information, but how, and to what extent, it is encoded remains unclear. In this work, a structural probe is proposed to detect whether the vector representations produced by a deep neural network embed a syntax tree. The probe is trained with deep metric learning, so that the distance between word vectors in the metric space it defines encodes the distance between words in the syntax tree, and the norm of a word vector encodes the depth of the word in the syntax tree. Experimental results on ELMo and BERT show that the syntax tree is encoded in their parameters and in the word representations they produce.
Keywords: deep metric learning, syntax tree probing, natural language processing, word representations
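For illustration, a minimal sketch of such a distance probe follows (in PyTorch). The class name, rank, and L1 loss are assumptions in the spirit of the abstract, not the authors' code; the random tensors stand in for contextual embeddings and gold tree distances.

```python
import torch

class StructuralProbe(torch.nn.Module):
    """Linear map B: the squared L2 distance ||B(h_i - h_j)||^2 is trained
    to match the syntax-tree distance between words i and j."""
    def __init__(self, model_dim, probe_rank):
        super().__init__()
        self.B = torch.nn.Parameter(torch.randn(model_dim, probe_rank) * 0.01)

    def forward(self, embeddings):               # (seq_len, model_dim)
        z = embeddings @ self.B                  # (seq_len, rank)
        diffs = z.unsqueeze(1) - z.unsqueeze(0)
        return (diffs ** 2).sum(-1)              # pairwise squared distances

probe = StructuralProbe(model_dim=768, probe_rank=64)
h = torch.randn(12, 768)                         # stand-in for BERT states
gold = torch.randint(1, 8, (12, 12)).float()     # stand-in tree distances
loss = torch.abs(probe(h) - gold).mean()         # L1 probe loss
loss.backward()
```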
Procedia PDF Downloads 68
4996 Designing Automated Embedded Assessment to Assess Student Learning in a 3D Educational Video Game
Authors: Mehmet Oren, Susan Pedersen, Sevket C. Cetin
Abstract:
Despite its frequently criticized disadvantages, traditional paper-and-pencil assessment remains the most common method in our schools. Although such assessments provide acceptable measurement, they cannot capture every aspect and the full richness of learning and knowledge. Many assessments used in schools also decontextualize assessment from learning: they focus on a learner's standing on a particular topic but not on how student learning changes over time. For these reasons, many scholars argue that using simulations and games (S&G) as assessment tools has significant potential to overcome the problems of traditional methods. S&G can benefit from changing technology and provide a contextualized medium for assessment and teaching. Furthermore, S&G can serve as an instructional tool rather than a method to test students' learning at a single time point. To investigate the potential of educational games as an assessment and teaching tool, this study presents the implementation and validation of an automated embedded assessment (AEA), which can continuously monitor student learning in the game and assess performance without interrupting learning. The experiment was conducted in an undergraduate engineering course (Digital Circuit Design) with 99 participating students over five weeks of the Spring 2016 semester. The purpose of this study is to examine whether the proposed AEA method is valid for assessing student learning in a 3D educational game and to present the implementation steps. To address this question, the study inspects three aspects of the AEA for validation. First, the evidence-centered design model was used to lay out the design and measurement steps of the assessment. Then, a confirmatory factor analysis was conducted to test whether the assessment measures the targeted latent constructs. Finally, assessment scores were compared with an external measure (a validated test of student learning in digital circuit design) to evaluate the convergent validity of the assessment. The confirmatory factor analysis showed an acceptable fit for a model with three latent factors and one higher-order factor (RMSEA = 0.00, CFI = 1, TLI = 1.013, WRMR = 0.390). All observed variables loaded significantly on the latent factors. In the second analysis, multiple regression was used to test whether the external measure significantly predicts students' performance in the game. The two predictors explained 36.3% of the variance (R² = .36, F(2, 96) = 27.42, p < .001), and students' posttest scores significantly predicted game performance (β = .60, p < .001). These statistical results show that the AEA can distinctly measure three major components of the digital circuit design course. This study is intended to help researchers understand how to design an AEA and to showcase an implementation, providing an example methodology for validating this type of assessment.
Keywords: educational video games, automated embedded assessment, assessment validation, game-based assessment, assessment design
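For illustration, the convergent-validity regression described above can be sketched as follows. The assumption that the two predictors are pretest and posttest scores is hypothetical (the abstract names only the posttest explicitly), and the data below are synthetic stand-ins.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-ins: external-test scores predicting AEA game performance.
rng = np.random.default_rng(6)
pretest = rng.normal(60, 10, 99)
posttest = pretest + rng.normal(10, 8, 99)
game = 0.6 * posttest + rng.normal(0, 6, 99)

X = sm.add_constant(np.column_stack([pretest, posttest]))  # two predictors
fit = sm.OLS(game, X).fit()
print(fit.rsquared, fit.fvalue)        # R^2 and F statistic
print(fit.params[2], fit.pvalues[2])   # posttest coefficient and p-value
```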
Procedia PDF Downloads 421
4995 A Report on the eLearning Programme of the Irish College of General Practitioners Which Can Address Continuing Education Needs of Primary Care Physicians
Authors: Nicholas P. Fenlon, Aisling Lavelle, David McLean, Margaret O'Riordan
Abstract:
Background: The case for continuing professional development (CPD) has been well made and was formalized in Ireland in recent years through the enactment of the Medical Practitioners Act, which requires registered medical practitioners to complete a minimum of 50 hours of CPD each year. The ICGP, which has provided CPD opportunities to its members for many years, has responded to this need by developing a series of evidence-based, high-quality multimedia modules across a range of clinical and non-clinical areas. (The college also continues to provide more traditional education opportunities.) Overview of Programme: The first module was released in September 2011, and the eLearning programme has grown steadily since; there are currently almost 20 modules available, with a further 5 in production. Each module contains three to six 10-minute video lessons, which use a combination of graphics, images, text, voice-over and clinical clips. These are supported by supplementary videos of expert pieces-to-camera, Q&As with content experts, clinical scenarios, external links and other relevant documentation and resources. Successful completion of MCQs results in a Certificate of Completion, which can be printed or stored in the Professional Competence portfolio. The Medical Practitioners Act requires doctors to gather CPD credits across 8 domains of practice, and various eLearning modules have been developed to address each. For instance, modules with strong clinical content include Management of Hypertension, Management of COPD, and Management of Asthma. Other modules focus on health promotion, such as Promoting Smoking Cessation, Promoting Physical Activity, and Addressing Childhood Obesity. Modules where communication skills are key include those on Suicide Prevention and Management of Depression. Other modules currently in development cover non-clinical topics around risk management, including confidentiality and consent. Each module is developed by a core group which includes, where possible, a GP with a special interest in the area and one or more content experts. The college works closely with a medical education consultant and a production company in developing and producing the modules. Modules can be accessed (with a password) through the ICGP website and are available free to all ICGP members. Summary of Evaluation: There are over 1,700 registered users to date (over 55% of college membership). The programme was evaluated using an online survey in 2013 (N = 144/950 – 12%), and results were very positive overall while also providing material for further improvement of the programme. Future Plans: While knowledge can be imparted well through eLearning, skills and attitudes are more difficult to influence in an online environment. The college is now developing a series of linked workshops, which will lead to ICGP Professional Competence Awards. The first pilot workshop, scheduled for February 2015, is cardiology-themed. Participants will be required to complete the following 4 modules in advance of attending: Management of Hypertension, Management of Heart Failure, Promoting Smoking Cessation, and Promoting Physical Activity. The workshop will be case-based and interactive, addressing ECG interpretation in general practice.
Conclusions: The ICGP has responded to members' needs for high-quality, evidence-based education delivered in a way that suits GPs.
Keywords: CPD opportunities, evidence-based, high quality, multimedia modules across a range of clinical and non-clinical areas, medical practitioner's act
Procedia PDF Downloads 599
4994 Assessing Performance of Data Augmentation Techniques for a Convolutional Network Trained for Recognizing Humans in Drone Images
Authors: Masood Varshosaz, Kamyar Hasanpour
Abstract:
In recent years, there has been growing interest in recognizing humans in drone images for post-disaster search and rescue operations. Deep learning algorithms have shown great promise in this area, but they often require large amounts of labeled data to train the models. To keep data acquisition costs low, augmentation techniques can be used to create additional data from existing images. Many such techniques can generate variations of an original image and improve the performance of deep learning algorithms. While data augmentation is generally assumed to improve the accuracy and robustness of models, it is important to ensure that the performance gains are not outweighed by the additional computational cost or complexity of implementing the techniques. To this end, the impact of data augmentation on model performance must be evaluated. In this paper, we evaluated the most common currently available 2D data augmentation techniques on a standard convolutional network trained for recognizing humans in drone images. The techniques include rotation, scaling, random cropping, flipping, shifting, and their combination. The results showed that the augmented models perform 1–3% better than the base network. However, as the augmented images only contain the human parts already visible in the original images, a new data augmentation approach is needed to include the parts of the human body that are not visible. We therefore suggest a new method that employs simulated 3D human models to generate new training data for the network.
Keywords: human recognition, deep learning, drones, disaster mitigation
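For reference, the augmentations the paper lists compose naturally into a single pipeline; a minimal TensorFlow/Keras sketch follows. The layer choices and magnitudes are illustrative assumptions, not the authors' settings.

```python
import tensorflow as tf

# The augmentations listed in the abstract, composed into one pipeline:
# flip, rotation, zoom (scaling), translation (shifting), random crop.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),           # ~±36 degrees
    tf.keras.layers.RandomZoom(0.2),
    tf.keras.layers.RandomTranslation(0.1, 0.1),
    tf.keras.layers.RandomCrop(224, 224),
])

# Toy batch standing in for drone imagery
images = tf.random.uniform((8, 256, 256, 3))
augmented = augment(images, training=True)         # (8, 224, 224, 3)
```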
Procedia PDF Downloads 94
4993 Constructivist Design Approaches to Video Production for Distance Education in Business and Economics
Authors: C. von Essen
Abstract:
This study outlines and evaluates a constructivist design approach to creating educational video for postgraduate business degree programmes. Many online courses are tapping into the educational affordances of video, as this form of online learning can create rich, multimodal experiences. And yet, in many learning contexts, video is still used to transmit instruction to passive learners rather than to promote learner engagement and knowledge creation. Constructivism posits that learning takes shape as students make connections between their experiences and ideas. This paper pivots on the following research question: how can we design educational video in ways that promote constructivist learning and stimulate analytic viewing? By exploring and categorizing over two thousand educational videos created since 2014 for over thirty postgraduate courses in business, economics, mathematics and statistics, this paper presents and critically reflects on a taxonomy of video styles and features. It links the pedagogical intent of video – be it concept explanation, skill demonstration, feedback, real-world application of ideas, community creation, or the cultivation of course narrative – to specific presentational characteristics such as visual effects (including diagrammatic and real-life graphics and animations), commentary and sound options, chronological sequencing, interactive elements, and presenter set-up. The findings inform a framework that captures the pedagogical, technological and production considerations instructional designers and educational media specialists should be conscious of when planning and preparing video. More broadly, the paper demonstrates how learning theory and technology can coalesce to produce informed and pedagogically grounded instructional design choices, and how crafting video in a more conscious and critical manner can produce powerful new educational design.
Keywords: educational video, constructivism, instructional design, business education
Procedia PDF Downloads 236
4992 Spontaneous Message Detection of Annoying Situations in Community Networks Using a Mining Algorithm
Authors: P. Senthil Kumari
Abstract:
A main concern in data mining research is handling ambiguity, noise, and incompleteness in text data. We describe an innovative approach to detecting unplanned (spontaneous) text in community networks, achieved through a classification mechanism. The practical setting is a community network with modest privacy safeguards, where annoying content must be filtered from the stream of user messages. To this end, the mining methodology provides the capability to process messages directly and thereby improve the quality of their classification. We adopt learning-based mining approaches with a pre-processing technique to accomplish this. Our work deals with rule-based personalization for automatic text categorization, which is applicable in many different frameworks and offers a tolerance value that permits comments to be classified according to a variety of conditions associated with the policy or rule arrangements processed by the learning algorithm. Notably, we find that the choice of classifier strongly affects how well class labels are predicted when filtering inadequate documents on a community network.
Keywords: text mining, data classification, community network, learning algorithm
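As a generic illustration of learning-based text categorization with pre-processing (not the paper's specific mining algorithm), a scikit-learn sketch follows; the toy messages and the probability threshold standing in for the "tolerance value" are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy stand-ins for community-network posts (1 = annoying/unwanted)
train_messages = ["buy cheap meds now", "meeting moved to 3pm",
                  "you idiot, shut up", "thanks for the update"]
train_labels = [1, 0, 1, 0]

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, ngram_range=(1, 2))),  # pre-processing
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(train_messages, train_labels)

# A probability threshold stands in for the abstract's "tolerance value":
# posts above it are filtered, borderline ones can be routed for review.
proba = pipeline.predict_proba(["free meds, click here"])[:, 1]
print(proba > 0.5)
```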
Procedia PDF Downloads 508
4991 Working with Interpreters: Using Role Play to Teach Social Work Students
Authors: Yuet Wah Echo Yeung
Abstract:
Working with people from minority ethnic groups and refugee and asylum-seeking communities who have limited proficiency in the language of the host country often presents a major challenge for social workers. Because of language differences, social workers need to work with interpreters to ensure accurate information is collected for their assessment and intervention. Drawing on social learning theory, this paper discusses how role play was used as an experiential learning exercise in a training session to help social work students develop skills for working with interpreters. Social learning theory posits that learning is a cognitive process that takes place in a social context when people observe, imitate and model others' behaviours. The role play also helped students understand the role of the interpreter and the challenges they may face when they rely on interpreters to communicate with service users and their families. The first part of the session involved role play: a tutor played the role of social worker and deliberately behaved in an unprofessional manner and used inappropriate body language when working alongside the interpreter during a home visit. The purpose of the role play was not to provide a positive role model for students to 'imitate'. Rather, it aimed to activate and provoke students' internal thinking processes and encourage them to critically consider the impact of poor practice on relationship building and the intervention process. Having critically reflected on the implications of poor practice, students were then asked to play the role of social worker and demonstrate what good practice should look like. At the end of the session, students remarked that they learnt a lot by observing good and bad examples; it showed them what not to do. The exercise served to remind students how easily practitioners can slip into bad habits, and of the importance of respecting cultural differences when working with people from different cultural backgrounds.
Keywords: role play, social learning theory, social work practice, working with interpreters
Procedia PDF Downloads 179
4990 PREP: Pause, Reset, Establish Expectations, and Proceed. A Practical Approach for Classroom Transitions
Authors: Shane-Anthony Smith
Abstract:
Teachers across grade levels and content areas face a myriad of challenges in the classroom. From inconsistent attendance to disruptive behaviors, these challenges can have a dire impact on the educational space, ultimately leading to a loss of instructional time and student disenfranchisement from learning. While these challenges are not new to the educational landscape, the post-COVID classroom has, in many instances, been more severely affected by behaviors that are not conducive to learning. Despite the mounting challenges, the role of the teacher remains unchanged: to create and maintain a safe environment that is conducive to learning and promotes successful learning outcomes. Accomplishing this is no easy task, yet there are steps teachers can, indeed must, take to set themselves and their students up for success. The key to achieving this success is effective classroom transitions. This paper presents a four-step approach teachers can use to make classroom transitions that promote meaningful student engagement and positive learning outcomes. The transition strategy is called PREP (Pause, Reset, Establish Expectations, and Proceed). I developed this strategy in my work as a Residency Director for my university's teacher residency program. In this role, I coach emerging teachers and their in-service teaching mentors in the field and mentor special education resident teachers pursuing teaching degrees in the program. As a teacher educator, being in middle and high school classrooms provides an intricate and critical understanding of the challenges, opportunities, and possibilities in the classroom. In this paper, I explore how teachers can optimize the opportunities PREP provides to keep students engaged and thus improve student achievement. I describe the approach, explain its use, and provide case-study examples of its classroom application.
Keywords: classroom management, teaching strategies, student engagement, classroom transition
Procedia PDF Downloads 79
4989 The Reflections of the K-12 English Language Teachers on the Implementation of the K-12 Basic Education Program in the Philippines
Authors: Dennis Infante
Abstract:
This paper examined teachers' reflections on curriculum reform, specifically the implementation of the K-12 Basic Education Program in the Philippines. The results revealed that the problems and concerns raised by teachers could be classified into: curriculum materials and design; teachers' competence, readiness and motivation; the learning environment and support systems; students' readiness, competence and motivation; and other relevant factors. The best features of the K-12 curriculum reforms included (1) the components and curriculum materials; (2) the design, structure and delivery of the lessons; (3) the framework and theoretical approach; (4) the qualities of the teaching-learning activities; and (5) other relevant features. Facing the demanding task of implementing the new curriculum, the teachers expressed needs that included (1) making the curriculum materials available so the goals of the reforms can be achieved; (2) enrichment of the learning environments; (3) motivating and encouraging teachers to embrace change; (4) providing appropriate support systems; (5) re-tooling and empowering teachers to implement the curriculum reforms; and (6) other relevant factors. The research concluded with a synthesis that provides a paradigm for implementing curriculum reforms which recognizes the needs of the teachers and the features of the new curriculum.
Keywords: curriculum reforms, K-12, teachers' reflections, implementing curriculum change
Procedia PDF Downloads 279
4988 Assessing the Corporate Identity of Malaysian Universities in the East Coast Region with the Market Conditions in Ensuring Self-Sustainability: A Study on Universiti Sultan Zainal Abidin
Authors: Suffian Hadi Ayub, Mohammad Rezal Hamzah, Nor Hafizah Abdullah, Sharipah Nur Mursalina Syed Azmy, Hishamuddin Salim
Abstract:
The liberalisation of the education industry has exposed institutes of higher learning (IHL) in Malaysia to financial challenges. Without good financial standing, public institutions must rely on government funding. Ostensibly, this contradicts the government's aspiration to make universities self-sufficient. Facing stiff competition from private institutes of higher learning, public IHLs need to be prepared to compete at the forefront. The corporate identity itself is the entrance to the world of higher learning, and it is this uniqueness that enables a university to distinguish itself from competitors. This paper examined the perceptions of stakeholders at one of the public universities in the east coast region of Malaysia regarding the university's reputation and how the university communicates its preparedness for self-sustainability through its corporate identity. The findings indicated that while the stakeholders embraced the challenges of stiff competition and struggling market conditions, most of them felt the university should put more effort into mobilising its corporate identity to its constituencies.
Keywords: communication, corporate identity, market conditions, universities
Procedia PDF Downloads 314
4987 Machine Learning Invariants to Detect Anomalies in Secure Water Treatment
Authors: Jonathan Heng, Yoong Cheah Huei
Abstract:
A strategic model for detecting anomalies in the Secure Water Treatment (SWaT) testbed without triggering false alarms is presented. The model uses machine learning invariants formulated by streamlining the general form of Auto-Regressive models with eXogenous input (ARX). A generalized CUSUM algorithm integrating the invariants with the detection strategy was developed and tested in the SWaT Programmable Logic Controllers (PLCs). Three steps for fine-tuning the parameters b and τ in the generalized algorithm are stated, and an example demonstrating the tuning process is discussed. This approach can swiftly and effectively detect cyber-attacks of various scopes, such as multiple-point single-stage and multiple-point multi-stage attacks, in SWaT. The technique can also be applied in water treatment plants and other cyber-physical systems, such as power and gas plants.
Keywords: machine learning invariants, generalized CUSUM algorithm with invariants and detection strategy, scope of cyber attacks, strategic model, tuning parameters
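To make the detection idea concrete, here is a minimal one-sided CUSUM run over the residuals of a first-order ARX invariant. This illustrates the general technique only, not the paper's generalized algorithm; the coefficients and injected anomaly are synthetic.

```python
import numpy as np

def cusum_alarms(residuals, bias, tau):
    """One-sided CUSUM: accumulate residual drift above `bias`;
    raise an alarm when the statistic exceeds the threshold `tau`
    (standing in for the abstract's parameters b and τ)."""
    s, alarms = 0.0, []
    for r in residuals:
        s = max(0.0, s + abs(r) - bias)
        alarms.append(s > tau)
    return np.array(alarms)

# Residuals of an ARX invariant: y[t] - (a*y[t-1] + b*u[t-1])
rng = np.random.default_rng(1)
y, u = rng.normal(size=200), rng.normal(size=200)
y[120:] += 1.5                                   # injected anomaly
residuals = y[1:] - (0.8 * y[:-1] + 0.1 * u[:-1])
print(np.where(cusum_alarms(residuals, bias=0.5, tau=8.0))[0][:3])
```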
Procedia PDF Downloads 181
4986 Promoting Non-Formal Learning Mobility in the Field of Youth
Authors: Juha Kettunen
Abstract:
The purpose of this study is to develop a framework for assessing research and development projects. The assessment map developed here is based on the strategy map of the balanced scorecard approach. The map is applied in a project that aims to reduce inequality and the risk of exclusion among young people from disadvantaged social groups. The assessment map shows that not only funding but also the necessary skills and qualifications should be carefully assessed when implementing project plans, so that the objectives of projects and the desired impact are achieved. The results of this study are useful for those who want to develop the implementation of the Erasmus+ Programme and for the project teams of research and development projects.
Keywords: non-formal learning, youth work, social inclusion, innovation
Procedia PDF Downloads 294
4985 Satisfaction Among Preclinical Medical Students with Low-Fidelity Simulation-Based Learning
Authors: Shilpa Murthy, Hazlina Binti Abu Bakar, Juliet Mathew, Chandrashekhar Thummala Hlly Sreerama Reddy, Pathiyil Ravi Shankar
Abstract:
Simulation is defined as a technique that replaces or expands real experiences with guided experiences that interactively imitate real-world processes or systems, enabling learners to train in a safe and non-threatening environment. For decades, simulation has been considered an integral part of clinical teaching and learning strategy in medical education. The several types of simulation used in medical education and the clinical environment include full-body mannequins, task trainers, standardized simulated patients, virtual or computer-generated simulation, and hybrid simulation, all of which can be used to facilitate learning. Simulation allows healthcare practitioners to acquire skills and experience while protecting patient safety. The recent COVID-19 pandemic also increased the use of simulation, as placements for medical students in hospitals and clinics were limited. Learning is tailored to students' educational needs to make the experience more valuable. Simulation in the pre-clinical years faces challenges with resource constraints, effective curricular integration, student engagement and motivation, and evidence of educational impact, to mention a few. As instructors, we may rely heavily on simulation for pre-clinical students, yet students' confidence levels and perceived competence still need to be evaluated. Our research question was whether the implementation of simulation-based learning positively influences preclinical medical students' confidence levels and perceived competence. This study was done to align teaching activities with the students' learning experience, to introduce more low-fidelity simulation-based teaching sessions in the pre-clinical years, and to obtain students' input for curriculum development as part of inclusivity. The study was carried out at the International Medical University and involved pre-clinical year (medical) students who began with low-fidelity simulation-based medical education in their first semester and were gradually introduced to medium fidelity as well. The Student Satisfaction and Self-Confidence in Learning Scale questionnaire from the National League for Nursing was employed to collect responses. The internal consistency reliability of the survey items was tested with Cronbach's alpha using an Excel file, and IBM SPSS for Windows version 28.0 was used to analyze the data. Spearman's rank correlation was used to analyze the correlation between students' satisfaction and self-confidence in learning, with the significance level set at p < 0.05. The results of this study have prompted the researchers to undertake a larger-scale evaluation, which is currently underway. The current results show that 70% of students agreed that the teaching methods used in the simulation were helpful and effective. The sessions depend on the learning materials provided and on how well the facilitators engage the students and make the session enjoyable. The feedback highlighted the following areas to focus on when designing simulations for pre-clinical students: quality learning materials, an interactive environment, motivating content, the skills and knowledge of the facilitator, and effective feedback.
Keywords: low-fidelity simulation, pre-clinical simulation, students satisfaction, self-confidence
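For readers reproducing the reliability and correlation analysis, both statistics are straightforward to compute; a sketch with synthetic stand-ins for the NLN survey subscales follows.

```python
import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Synthetic data standing in for the survey subscales
rng = np.random.default_rng(0)
satisfaction = rng.integers(1, 6, size=(30, 5))   # 30 students, 5 items
confidence = rng.integers(1, 6, size=(30, 8))     # 8 items

print("alpha:", cronbach_alpha(satisfaction))
rho, p = spearmanr(satisfaction.sum(axis=1), confidence.sum(axis=1))
print(f"Spearman rho={rho:.2f}, p={p:.3f}")
```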
Procedia PDF Downloads 78
4984 Partial Knowledge Transfer Between the Source Problem and the Target Problem in Genetic Algorithms
Authors: Terence Soule, Tami Al Ghamdi
Abstract:
To study how partial knowledge transfer affects Genetic Algorithm (GA) performance, we model the Transfer Learning (TL) process using a GA as the model solver. The objective of TL is to transfer knowledge from one problem to a related problem, imitating how humans reason in daily life. In this paper, we study a case where the knowledge transferred from the source (S) problem carries less information than the target (T) problem needs. We sampled the transferred population using different TL strategies. The results showed that transferring part of the knowledge is helpful and speeds up the GA's search for a solution to the problem.
Keywords: transfer learning, partial transfer, evolutionary computation, genetic algorithm
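A minimal sketch of the population-seeding idea follows, assuming bit-string genomes; the transfer fraction and toy source solutions are illustrative, not the paper's experimental settings.

```python
import random

def seed_population(source_solutions, pop_size, genome_len, transfer_frac=0.3):
    """Seed the target-problem GA population: part of it is transferred
    from solutions evolved on the source problem, the rest is random."""
    n = min(int(pop_size * transfer_frac), len(source_solutions))
    transferred = random.sample(source_solutions, n)
    randoms = [[random.randint(0, 1) for _ in range(genome_len)]
               for _ in range(pop_size - n)]
    return transferred + randoms

# Toy source solutions (bit strings) carrying partial knowledge
source = [[1, 1, 0, 1, 0, 0, 1, 0] for _ in range(10)]
population = seed_population(source, pop_size=50, genome_len=8)
print(len(population), population[0])
```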
Procedia PDF Downloads 132
4983 Optimizing Production Yield Through Process Parameter Tuning Using Deep Learning Models: A Case Study in Precision Manufacturing
Authors: Tolulope Aremu
Abstract:
This paper explores using deep learning to optimize production yield by tuning a few key process parameters in a manufacturing environment. The study examines how to maximize production yield and minimize operational costs using advanced neural network models, specifically Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs), implemented with the Python-based frameworks TensorFlow and Keras. The research targets precision molding processes in which temperature ranges from 150°C to 220°C, pressure from 5 to 15 bar, and material flow rate from 10 to 50 kg/h; these are critical parameters with a strong effect on yield. A dataset of 1 million production cycles over five consecutive years was considered, with detailed logs of exact parameter settings and yield output. The LSTM model captures time-dependent trends in the production data, while the CNN analyzes spatial correlations between parameters. The models were trained in a supervised manner with an MSE loss optimized by the Adam optimizer. After 100 training epochs, the models achieved 95% accuracy in recommending optimal parameter configurations. Compared with the traditional RSM and DOE methods, results indicated a 12% increase in production yield. In addition, the error margin was reduced by 8%, so the deep learning models delivered consistently high-quality products. The monetary value was around $2.5 million annually, saved through reduced material waste, energy consumption, and equipment wear resulting from the optimized process parameters. The system was deployed in an industrial production environment on a hybrid cloud: Microsoft Azure for data storage, with model training and deployment on Google Cloud AI. Real-time process monitoring and automatic parameter tuning depend on this cloud infrastructure. In short, deep learning models, especially those employing LSTMs and CNNs, optimize production yield by fine-tuning process parameters. Future research will consider reinforcement learning to further enhance system autonomy and scalability across manufacturing sectors.
Keywords: production yield optimization, deep learning, tuning of process parameters, LSTM, CNN, precision manufacturing, TensorFlow, Keras, cloud infrastructure, cost saving
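A minimal Keras sketch of the LSTM branch described above follows (synthetic data, fewer epochs than the paper's 100, and assumed window length and layer sizes):

```python
import numpy as np
import tensorflow as tf

# Toy windows of (temperature, pressure, flow_rate); target: cycle yield.
# Shapes, window length, and layer sizes are illustrative assumptions.
X = np.random.rand(512, 60, 3).astype("float32")
y = np.random.rand(512, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(60, 3)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),                  # predicted yield
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
model.fit(X, y, epochs=5, validation_split=0.2, verbose=0)
```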
Procedia PDF Downloads 31
4982 A Dynamic Ensemble Learning Approach for Online Anomaly Detection in Alibaba Datacenters
Authors: Wanyi Zhu, Xia Ming, Huafeng Wang, Junda Chen, Lu Liu, Jiangwei Jiang, Guohua Liu
Abstract:
Anomaly detection is a first and imperative step in responding to unexpected problems and in assuring high performance and security in large data center management. This paper presents an online anomaly detection system built on an innovative combination of ensemble machine learning and adaptive differentiation algorithms, applied to performance data collected from a continuous monitoring system for multi-tier web applications running in Alibaba data centers. We evaluate the effectiveness and efficiency of this algorithm on production traffic data and compare it with traditional anomaly detection approaches such as static thresholds and other deviation-based detection techniques. The experimental results show that our algorithm correctly identifies unexpected performance variances of any running application, with an acceptable false-positive rate. The approach has already been deployed in real-time production environments to enhance the efficiency and stability of daily data center operations.
Keywords: Alibaba data centers, anomaly detection, big data computation, dynamic ensemble learning
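As a generic illustration of detector voting (not Alibaba's algorithm), the sketch below combines a rolling z-score, a static threshold, and a rate-of-change rule, flagging a point only on majority agreement:

```python
import numpy as np

def rolling_zscore(x, window=60, k=3.0):
    mu = np.convolve(x, np.ones(window) / window, mode="same")
    sd = np.sqrt(np.convolve((x - mu) ** 2, np.ones(window) / window, mode="same"))
    return np.abs(x - mu) > k * (sd + 1e-9)

def ensemble_detect(x, static_hi, min_votes=2):
    """Flag a point only when enough base detectors agree; majority
    voting is what keeps the false-positive rate acceptable."""
    votes = rolling_zscore(x).astype(int)
    votes += (x > static_hi).astype(int)               # static threshold
    deriv = np.abs(np.diff(x, prepend=x[0]))
    votes += (deriv > 3 * deriv.std()).astype(int)     # rate of change
    return votes >= min_votes

latency = np.random.default_rng(2).normal(100, 5, 1000)
latency[700:710] += 60                                 # injected spike
print(np.where(ensemble_detect(latency, static_hi=130))[0])
```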
Procedia PDF Downloads 201
4981 From Bureaucracy to Organizational Learning Model: An Organizational Change Process Study
Authors: Vania Helena Tonussi Vidal, Ester Eliane Jeunon
Abstract:
This article analyzes the process of changing from bureaucratic management to a learning organization model. The theoretical framework was based on the Beer and Nohria (2001) model, known as Theory E and Theory O. Based on this theory, empirical research was conducted around six key dimensions: goals, leadership, focus, process, reward systems, and consulting. We used a case study of an educational institution located in Barbacena, Minas Gerais. This traditional center of technical knowledge long operated under a bureaucratic management model. After many changes to its business model, such as the creation of graduate and undergraduate courses, the institution decided to deeply change its management model; that change is the focus of our research. Data were collected through semi-structured interviews with the director, managers, and course supervisors. The analysis followed the Collective Subject Discourse (CSD) method developed by Lefèvre & Lefèvre (2000). Results showed incremental movement of the management model toward a learning organization. Several impacts were observed. Negative factors included staff resistance; poor information about the planning and implementation process; and old politics persisting inside the new model. Positive impacts included new procedures in human resources, mainly related to managers' skills and empowerment; structural downsizing; open discussion channels; and an integrated information system. The process is still under construction, and strong encouragement is now given to manager and employee commitment to it.
Keywords: bureaucracy, organizational learning, organizational change, E and O theory
Procedia PDF Downloads 434
4980 Reflective Thinking and Experiential Learning – A Quasi-Experimental Quanti-Quali Response to Greater Diversification of Activities, Greater Integration of Student Profiles
Authors: Paulo Sérgio Ribeiro de Araújo Bogas
Abstract:
Although several studies have assumed (at least implicitly) that learners' approaches to learning develop into deeper approaches during higher education, there appears to be no clear theoretical basis for this assumption and no empirical evidence. As a scientific contribution to this discussion, a quasi-experimental pedagogical intervention was developed with a mixed methodology, evaluating the intervention within a single curricular unit of Marketing using cases based on real brand challenges, business simulation, and customer projects. Primary and secondary experiences were incorporated in the intervention: the primary experiences are the experiential activities themselves; the secondary experiences result from the primary ones, such as reflection and discussion in work teams. A diversified learning relationship was encouraged through the various connections between the different members of the learning community. The present study concludes that, in the same context, students' responses fall into three groups: students who reinforce their initial deep approach, students who maintain their initial deep-approach level, and others who shift from an emphasis on the deep approach to one closer to the surface approach. This typology did not always confirm studies reported in the literature, namely on whether the initial level of deep processing influences surface processing and vice versa. The results of this investigation point to the inclusion of pedagogical and didactic activities that integrate different motivations and initial strategies, leading to the possible adoption of deep approaches to learning, since the intervention revealed statistically significant differences in deep/surface approach scores and in experiential level. In the case of real challenges, the categories of 'attribution of meaning to what is studied' and the possibility of 'contact with an aspirational context' for students' future profession stand out. Also revealed in this category were the dimensions of autonomy that will be required of students, when comparing the classroom context of real cases with the future professional context and the impact they may have on the world. Regarding the simulated practice, two categories of response stand out: on the one hand, the motivation associated with being able to measure the results of decisions taken and an awareness of oneself; on the other hand, the additional effort this practice required of some students.
Keywords: experiential learning, higher education, mixed methods, reflective learning, marketing
Procedia PDF Downloads 83
4979 An Interactive Voice Response Storytelling Model for Learning Entrepreneurial Mindsets in Media Dark Zones
Authors: Vineesh Amin, Ananya Agrawal
Abstract:
In a prolonged period of uncertainty and disruption to the previously normal order, non-cognitive skills, especially entrepreneurial mindsets, have become a pillar that can reform educational models to inform the economy. Dreamverse Learning Lab's IVR-based storytelling program, Call-a-Kahaani, is an evolving experiment that aims to kindle entrepreneurial mindsets in the remotest locations of India in an accessible and engaging manner. At the heart of this experiment is the belief that at every phase in our life's story, we have a choice that brings us closer to achieving our true potential. The interactive program is designed using real-time storytelling principles to empower learners, ages 24 and below, to make choices and take decisions as they become more self-aware, practice grit, and try new things through stories, guided activities, and interactions, simply over a phone call. This paper describes the framework behind this ongoing scalable, data-oriented, low-tech program, supported by iterative design and prototyping. Within one and a half years of its inception, the program reached 13,700+ unique learners who made 59,000+ calls totalling 183,900+ minutes of listening to content pieces of around 3 to 4 minutes, with serious listenership last measured (March 2022) at 34%. The paper provides an in-depth account of the technical development, content creation, learning, and assessment frameworks, as well as the mobilization models leveraged to build this end-to-end system.
Keywords: non-cognitive skills, entrepreneurial mindsets, speech interface, remote learning, storytelling
Procedia PDF Downloads 209
4978 Developing Second Language Learners' Reading Comprehension through Content and Language Integrated Learning
Authors: Kaine Gulozer
Abstract:
Content and language integrated learning (CLIL), a strong methodological conception in teaching practice, was adapted to boost the efficiency of second language (L2) instruction across a range of proficiency levels. This study investigates whether two different mediums of meaningful CLIL reading activities (in-school and out-of-school settings) influence L2 students' development of comprehension skills differently. A CLIL-based instructional methodology was adopted, and a total of 50 preparatory-year students (N = 50, 25 per proficiency level) at two distinct proficiency levels (elementary and intermediate), majoring in engineering, were recruited for the study. Both qualitative and quantitative methods were adopted in a post-test design. Data were collected through a questionnaire, a reading comprehension test, and a semi-structured interview addressed to the two proficiency groups. The results show that both settings benefit the development of reading comprehension, although based on the reported interview results, the reading activities conducted in school settings had a greater impact on elementary-level students than those conducted in out-of-school settings. This study suggests that incorporating meaningful CLIL reading activities in both settings and at both proficiency levels could build students' self-awareness of their language learning process and a sense of ownership of successful improvements in field-specific reading comprehension. Further suggestions and implications of the study are discussed.
Keywords: content and language integrated learning, in-school setting, language proficiency, out-of-school setting, reading comprehension
Procedia PDF Downloads 146
4977 Opinions of Pre-Service Teachers on Online Language Teaching: COVID-19 Pandemic Perspective
Authors: Neha J. Nandaniya
Abstract:
This paper focuses on the opinions of pre-service teachers regarding online language teaching, which began during the COVID-19 pandemic and is still ongoing. The researcher developed a three-point rating scale in Google Forms to elicit trainees' views on online language learning; 167 B.Ed. trainees with language content and methods courses gave their responses. After scoring the responses, the chi-square value was calculated and the findings were drawn. The major finding of the study is that online language learning is not as effective as offline teaching.
Keywords: online language teaching, ICT competency, B. Ed. trainees, COVID-19 pandemic
Procedia PDF Downloads 84
4976 Modelling the Impact of Installation of Heat Cost Allocators in District Heating Systems Using Machine Learning
Authors: Danica Maljkovic, Igor Balen, Bojana Dalbelo Basic
Abstract:
Under the EU Energy Efficiency Directive, specifically Article 9, individual metering in district heating systems had to be introduced by the end of 2016. These requirements have been implemented in member states' legal frameworks; Croatia is one of these states. The directive allows the installation of both heat metering devices and heat cost allocators. Mainly due to poor communication and PR, the general public formed the false impression that heat cost allocators are devices that save energy. Since this notion is wrong, the aim of this work is to develop a model that precisely expresses the influence of installing heat cost allocators on potential energy savings in each unit within multifamily buildings. At the same time, in recent years machine learning has gained wider application in various fields, as it has proven to give good results where large amounts of data must be processed to recognize patterns and correlations among relevant parameters, and where problems are too complex for human intelligence to solve. A particular machine learning method, the decision tree, has achieved over 92% accuracy in predicting general building consumption. In this paper, machine learning algorithms are used to isolate the sole impact of installing heat cost allocators on a single building in multifamily houses connected to district heating systems. Special emphasis is given to regression analysis, logistic regression, support vector machines, decision trees, and the random forest method.
Keywords: district heating, heat cost allocator, energy efficiency, machine learning, decision tree model, regression analysis, logistic regression, support vector machines, decision trees and random forest method
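A minimal sketch of the isolation idea with a random forest follows; the synthetic features and the counterfactual toggling of the "allocator installed" flag are illustrative assumptions, not the paper's model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy per-unit data: [floor_area, floor_number, outdoor_temp, allocator(0/1)]
rng = np.random.default_rng(3)
X = np.column_stack([rng.uniform(40, 120, 500), rng.integers(0, 10, 500),
                     rng.uniform(-10, 15, 500), rng.integers(0, 2, 500)])
y = 30 * X[:, 0] - 50 * X[:, 2] - 300 * X[:, 3] + rng.normal(0, 200, 500)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)

# Isolate the allocator effect: toggle the installed flag and compare
X_off, X_on = X.copy(), X.copy()
X_off[:, 3], X_on[:, 3] = 0, 1
print("estimated effect:", (model.predict(X_on) - model.predict(X_off)).mean())
```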
Procedia PDF Downloads 249
4975 Recommender System Based on Mining Graph Databases for Data-Intensive Applications
Authors: Mostafa Gamal, Hoda K. Mohamed, Islam El-Maddah, Ali Hamdi
Abstract:
In recent years, many digital documents have been created on the web due to the rapid growth of 'social application' communities and data-intensive applications. The evolution of online multimedia data poses new challenges in storing and querying large amounts of data for online recommender systems. Graph data models have been shown to be more efficient than relational data models for processing complex data. This paper explains the key differences between graph and relational databases, their strengths and weaknesses, and why graph databases are the best technology for building a real-time recommendation system. The paper also discusses several similarity metrics that can be used to compute a similarity score for pairs of nodes based on their neighbourhoods or their properties. Finally, the paper shows how NLP strategies offer a premise to improve the accuracy and coverage of real-time recommendations by extracting information from stored unstructured knowledge, which makes up the bulk of the world's data, and using it to enrich the graph database. As the size and number of data items increase rapidly, the proposed system should meet current and future needs.
Keywords: graph databases, NLP, recommendation systems, similarity metrics
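For illustration, a neighbourhood-overlap (Jaccard) similarity over an adjacency-set view of a graph can drive a simple recommender; the toy graph below is an assumption, not the paper's dataset:

```python
def jaccard(graph, a, b):
    """Neighbourhood-overlap similarity; graph maps node -> set of neighbours."""
    na, nb = graph[a], graph[b]
    union = na | nb
    return len(na & nb) / len(union) if union else 0.0

graph = {"alice": {"film1", "film2"}, "bob": {"film2", "film3"},
         "carol": {"film1", "film2", "film4"}}

def recommend(user, top_k=1):
    scores = {u: jaccard(graph, user, u) for u in graph if u != user}
    similar = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return {item for u in similar for item in graph[u]} - graph[user]

print(recommend("alice"))   # items liked by the most similar user
```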
Procedia PDF Downloads 104
4974 Integrating Natural Language Processing (NLP) and Machine Learning in Lung Cancer Diagnosis
Authors: Mehrnaz Mostafavi
Abstract:
The assessment and categorization of incidental lung nodules present a considerable challenge in healthcare, often necessitating resource-intensive multiple computed tomography (CT) scans for growth confirmation. This research addresses this issue by introducing a distinct computational approach leveraging radiomics and deep-learning methods. However, understanding local services is essential before implementing these advancements. With diverse tracking methods in place, there is a need for efficient and accurate identification approaches, especially in the context of managing lung nodules alongside pre-existing cancer scenarios. This study explores the integration of text-based algorithms in medical data curation, indicating their efficacy in conjunction with machine learning and deep-learning models for identifying lung nodules. Combining medical images with text data has demonstrated superior data retrieval compared to using each modality independently. While deep learning and text analysis show potential in detecting previously missed nodules, challenges persist, such as increased false positives. The presented research introduces a Structured-Query-Language (SQL) algorithm designed for identifying pulmonary nodules in a tertiary cancer center, externally validated at another hospital. Leveraging natural language processing (NLP) and machine learning, the algorithm categorizes lung nodule reports based on sentence features, aiming to facilitate research and assess clinical pathways. The hypothesis posits that the algorithm can accurately identify lung nodule CT scans and predict concerning nodule features using machine-learning classifiers. Through a retrospective observational study spanning a decade, CT scan reports were collected, and an algorithm was developed to extract and classify data. Results underscore the complexity of lung nodule cohorts in cancer centers, emphasizing the importance of careful evaluation before assuming a metastatic origin. The SQL and NLP algorithms demonstrated high accuracy in identifying lung nodule sentences, indicating potential for local service evaluation and research dataset creation. Machine-learning models exhibited strong accuracy in predicting concerning changes in lung nodule scan reports. While limitations include variability in disease group attribution, the potential for correlation rather than causality in clinical findings, and the need for further external validation, the algorithm's accuracy and potential to support clinical decision-making and healthcare automation represent a significant stride in lung nodule management and research.
Keywords: lung cancer diagnosis, structured-query-language (SQL), natural language processing (NLP), machine learning, CT scans
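As a rough illustration only: the sketch below substitutes a regex retrieval step for the paper's SQL algorithm and a naive Bayes classifier for its machine-learning models, with toy labelled sentences.

```python
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

NODULE = re.compile(r"\bnodul\w*", re.I)

def nodule_sentences(report):
    """Keyword retrieval standing in for the paper's SQL-based step."""
    return [s for s in re.split(r"(?<=[.?!])\s+", report) if NODULE.search(s)]

# Toy labelled sentences (1 = concerning change, 0 = stable)
sents = ["the nodule has increased in size", "stable 4 mm nodule",
         "new spiculated nodule in the left lobe", "nodule unchanged"]
labels = [1, 0, 1, 0]

vec = CountVectorizer(ngram_range=(1, 2))
clf = MultinomialNB().fit(vec.fit_transform(sents), labels)

report = "Heart size normal. The right upper lobe nodule has increased."
print(clf.predict(vec.transform(nodule_sentences(report))))
```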
Procedia PDF Downloads 100
4973 Efficient Storage and Intelligent Retrieval of Multimedia Streams Using H.265
Authors: S. Sarumathi, C. Deepadharani, Garimella Archana, S. Dakshayani, D. Logeshwaran, D. Jayakumar, Vijayarangan Natarajan
Abstract:
The need of the hour for customers who use dial-up or low-bandwidth broadband connections for their internet services is access to HD video data. This can be achieved by developing a new video format using H.265, the latest video codec standard developed by the ISO/IEC Moving Picture Experts Group (MPEG) and the ITU-T Video Coding Experts Group (VCEG) in April 2013. This new standard for video compression has the potential to deliver higher performance than earlier standards such as H.264/AVC. Compared with H.264, HEVC offers a clearer, higher-quality image at half the original bitrate; at this lower bitrate, it is possible to transmit high-definition video over low bandwidth. It doubles the data compression ratio and supports 8K Ultra HD and resolutions up to 8192×4320. In the proposed model, we design a new video format that supports the H.265 standard. Major future application areas include performance enhancements for digital television services such as Tata Sky and Sun Direct, Blu-ray discs, mobile video, video conferencing, and internet and live video streaming.
Keywords: access HD video, H.265 video standard, high performance, high quality image, low bandwidth, new video format, video streaming applications
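For reference, transcoding a source into HEVC/H.265 is a one-liner with the widely used ffmpeg tool (libx265); the CRF and preset values below are common defaults, not settings from the paper.

```python
import subprocess

# Re-encode an H.264 source to HEVC/H.265 with ffmpeg's libx265 encoder.
# CRF 28 for x265 is commonly cited as roughly comparable to x264 CRF 23.
subprocess.run([
    "ffmpeg", "-i", "input_h264.mp4",
    "-c:v", "libx265", "-crf", "28", "-preset", "medium",
    "-c:a", "copy",                      # keep the audio stream as-is
    "output_h265.mp4",
], check=True)
```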
Procedia PDF Downloads 354
4972 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutron Sources
Authors: Mustafa Alhamdi
Abstract:
An industrial application for classifying gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using convolutional neural networks and recurrent neural networks has shown significant improvements in prediction accuracy across a variety of applications. The ability to identify isotope type and activity from spectral information depends on feature extraction methods followed by classification. Features extracted from spectrum profiles seek patterns and relationships that represent the actual spectrum energy in a low-dimensional space; increasing the separation between classes in feature space improves classification accuracy. Neural network feature extraction is nonlinear, involving a variety of transformations and mathematical optimization, while principal component analysis relies on linear transformations to extract features and subsequently improve classification accuracy. In this paper, the isotope spectrum information is preprocessed by finding the frequency components relative to time and using them as the training dataset. The Fourier transform implementation used to extract the frequency components was optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal were simulated using Geant4. Readout electronic noise was simulated by tuning the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, improved the classification accuracy of the neural networks. Discriminating gamma and neutron events in a single prediction approach achieved high accuracy with deep learning. The findings show that classification accuracy improves when the spectrogram preprocessing stage is applied to the gamma and neutron spectra of different isotopes. Tuning the deep learning models through hyperparameter optimization enhanced the separation in the latent space and made it possible to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification
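A minimal sketch of the spectrogram preprocessing step follows, using SciPy's STFT-based spectrogram with a Hann window; the synthetic pulse train and noise parameters are stand-ins for the Geant4-simulated data.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 10_000
t = np.arange(0, 1, 1 / fs)
pulse_train = np.sin(2 * np.pi * 60 * t) * (np.random.rand(t.size) > 0.99)
noisy = pulse_train + np.random.normal(0.0, 0.05, t.size)  # tuned readout noise

# A Hann window limits spectral leakage, in the spirit of the abstract's
# "suitable windowing function".
f, tt, Sxx = spectrogram(noisy, fs=fs, window="hann",
                         nperseg=256, noverlap=128)
cnn_input = np.log1p(Sxx)          # log-scaled time-frequency image
print(cnn_input.shape)
```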
Procedia PDF Downloads 150
4971 Mapping Feature Models to Code Using a Reference Architecture: A Case Study
Authors: Karam Ignaim, Joao M. Fernandes, Andre L. Ferreira
Abstract:
Mapping the artifacts of a family of similar products, developed in an ad-hoc manner, onto the resulting software product line (SPL) plays a key role in maintaining consistency between requirements and code. This paper presents a feature mapping approach that traces the artifact coming from the migration process, the current feature model (FM), to the other artifacts of the resulting SPL: the reference architecture and the code. Our approach relates each feature of the current FM to its locations in the implementation code, using the reference architecture as an intermediate artifact (a central point) to preserve consistency among them during SPL evolution. The approach uses a particular artifact, a traceability tree, to manage the mapping process. Tool support is provided by friendlyMapper. We evaluated the feature mapping approach and tool support by putting the approach into practice in a case study from the automotive domain, on the Classical Sensor Variants Family at Bosch Car Multimedia S.A. The evaluation reveals that the mapping approach presented in this paper fits the automotive domain.
Keywords: feature location, feature models, mapping, software product lines, traceability
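For illustration, a traceability tree can be modelled as a small recursive data structure; the feature, component, and file names below are hypothetical, not from the Bosch case study.

```python
from dataclasses import dataclass, field

@dataclass
class TraceNode:
    """One level of a traceability tree: feature -> reference-architecture
    component -> code locations (names below are illustrative)."""
    name: str
    code_locations: list = field(default_factory=list)
    children: list = field(default_factory=list)

feature = TraceNode("SensorVariantA", children=[
    TraceNode("SignalProcessingComponent",
              code_locations=["src/sp/filter.c:42", "src/sp/calib.c:17"]),
])

def locations(node):
    # Walk the tree and yield every traced code location
    yield from node.code_locations
    for child in node.children:
        yield from locations(child)

print(list(locations(feature)))
```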
Procedia PDF Downloads 127
4970 Land Suitability Prediction Modelling for Agricultural Crops Using Machine Learning Approach: A Case Study of Khuzestan Province, Iran
Authors: Saba Gachpaz, Hamid Reza Heidari
Abstract:
Sharp population growth puts more pressure on agricultural areas to satisfy the food supply. Meeting this demand consumes more resources and, alongside other environmental concerns, highlights the need for sustainable agricultural development. Land-use management is a crucial factor in obtaining optimum productivity. Machine learning is a widely used technique in the agricultural sector, from yield prediction to customer behavior; it focuses on learning patterns and correlations from a dataset. In this study, nine physical control factors, namely soil classification, electrical conductivity, normalized difference water index (NDWI), groundwater level, elevation, annual precipitation, pH of water, annual mean temperature, and slope, are used to decide the best agricultural land use for both rainfed and irrigated agriculture for ten different crops in the alluvial plain of Khuzestan (an agricultural hotspot in Iran). For this purpose, each variable was imported into ArcGIS and a raster layer was obtained. Next, using training samples, all layers were imported into the Python environment. A random forest model was applied, and the weight of each variable was determined. In the final step, results were visualized using a digital elevation model, and the importance of all factors for each crop was obtained. Our results show that although 62% of the study area is allocated to agricultural purposes, only 42.9% of these areas can be classed as suitable for cultivation.
Keywords: land suitability, machine learning, random forest, sustainable agriculture
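A minimal sketch of the random forest step with per-factor weights follows; the synthetic samples stand in for the raster cells extracted from the nine GIS layers.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FACTORS = ["soil_class", "ec", "ndwi", "groundwater", "elevation",
           "precipitation", "water_ph", "mean_temp", "slope"]

# Toy raster-cell samples standing in for the nine GIS layers
rng = np.random.default_rng(4)
X = rng.random((1000, len(FACTORS)))
y = (X[:, 0] + X[:, 5] - X[:, 8] > 1.0).astype(int)   # suitable / not

rf = RandomForestClassifier(n_estimators=500, random_state=42).fit(X, y)
for name, w in sorted(zip(FACTORS, rf.feature_importances_),
                      key=lambda p: -p[1]):
    print(f"{name:13s} {w:.3f}")       # per-factor weight, as in the abstract
```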
Procedia PDF Downloads 84
4969 DeepNic, a Method to Transform Each Variable into an Image for Deep Learning
Authors: Nguyen J. M., Lucas G., Brunner M., Ruan S., Antonioli D.
Abstract:
Deep learning based on convolutional neural networks (CNNs) is a very powerful technique for classifying information from an image. We propose a new method, DeepNic, to transform each variable of a tabular dataset into an image where each pixel represents a set of conditions that allow the variable to make an error-free prediction. The contrast of each pixel is proportional to its prediction performance, and the color of each pixel corresponds to a sub-family of NICs. NICs are probabilities that depend on the number of inputs to each neuron and the range of coefficients of the inputs. Each variable can therefore be expressed as a function of a matrix of two vectors corresponding to an image whose pixels express predictive capabilities. Our objective is to transform each variable of tabular data into an image that can be analysed by CNNs, unlike other methods, which use all the variables to construct a single image. We analyse the NIC information of each variable and express it as a function of the number of neurons and the range of coefficients used. The predictive value and the category of the NIC are expressed by the contrast and the color of the pixel. We have developed a pipeline to implement this technology and have successfully applied it to genomic expression data from an Affymetrix chip.
Keywords: tabular data, deep learning, perfect trees, NICs
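As a loose, hypothetical stand-in for the DeepNic idea (the NIC construction itself is not specified here), the sketch below builds a per-variable image whose pixel intensity reflects how predictive a value range is:

```python
import numpy as np

def variable_to_image(x, y, n_bins=16):
    """Illustrative stand-in for the DeepNic idea, not the authors' exact
    method: pixel (i, j) holds the best single-class purity among samples
    whose value of this variable falls between quantile edges i and j+1,
    so pixel intensity reflects how predictive that value range is."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    img = np.zeros((n_bins, n_bins))
    for i in range(n_bins):
        for j in range(i, n_bins):
            mask = (x >= edges[i]) & (x <= edges[j + 1])
            if mask.any():
                img[i, j] = max((y[mask] == c).mean() for c in np.unique(y))
    return img          # stack one image per variable as CNN input

rng = np.random.default_rng(5)
x = rng.normal(size=500)
y = (x > 0.3).astype(int)
print(variable_to_image(x, y).round(2)[:3, :3])
```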
Procedia PDF Downloads 90
4968 Community Arts-Based Learning for Interdisciplinary Pedagogy: Measuring Program Effectiveness Using Design Imperatives for 'a New American University'
Authors: Kevin R. Wilson, Roger Mantie
Abstract:
Community arts-based learning and participatory education are pedagogical techniques that benefit students, curriculum development, and local communities. Using an interpretive approach, this study examines arts-informed research against the eight 'design imperatives' proposed as the model for measuring quality in scholarship at Arizona State University as 'A New American University'; its purpose was to investigate the personal, social, and cultural benefits of student engagement in interdisciplinary community-based projects. Students from a graduate-level music education class at the ASU Tempe campus (n = 7) teamed with students from an undergraduate-level community development class at the ASU Downtown Phoenix campus (n = 14) to plan, facilitate, and evaluate seven community-based projects in several locations around the Phoenix metro area. Data were collected using photo evidence, student reports, and evaluative measures designed by the students. The effectiveness of each project was measured by its ability to meet the eight design imperatives: 1) leverage place; 2) transform society; 3) value entrepreneurship; 4) conduct use-inspired research; 5) enable student success; 6) fuse intellectual disciplines; 7) be socially embedded; and 8) engage globally. Results indicated that this community arts-based project sufficiently captured the essence of each of the eight imperatives. Implications for how the nature of this interdisciplinary initiative allowed the eight imperatives to manifest are provided, and project success is discussed in relation to the utility of each imperative. Discussion is also provided on how this type of service-learning project, framed within the 'New American University' model for measuring quality in academia, can be a beneficial pedagogical tool in higher education.
Keywords: community arts-based learning, participatory education, pedagogy, service learning
Procedia PDF Downloads 401