Search results for: machine learning tools and techniques
15103 Eye Tracking: Biometric Evaluations of Instructional Materials for Improved Learning
Authors: Janet Holland
Abstract:
Eye tracking offers a way to triangulate multiple data sources for a deeper, more complete picture of how instructional materials are actually used and of the emotional connections learners make. Sensor-based biometrics provide detailed local analysis in real time, expanding our ability to collect science-based data for a level of understanding of teaching and learning not previously possible. The knowledge gained will be used to make future improvements to instructional materials, tools, and interactions. The literature was examined and a preliminary pilot test implemented to develop a methodology for research in Instructional Design and Technology. Eye tracking adds objective metrics, obtained from eye tracking and other biometric data collection and analysis, for a fresh perspective.
Keywords: area of interest, eye tracking, biometrics, fixation, fixation count, fixation sequence, fixation time, gaze points, heat map, saccades, time to first fixation
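As an illustration of how such metrics can be derived, the sketch below computes fixation count, time to first fixation, and fixation times from raw gaze samples using a simple dispersion threshold; the sample data and threshold are invented for illustration and are not from the pilot test.

```python
# Toy sketch of dispersion-based fixation detection (I-DT style).
# Threshold and gaze samples are illustrative assumptions.
import numpy as np

# (timestamp_ms, x, y) gaze samples; a fixation = consecutive samples
# within a small spatial dispersion.
gaze = np.array([[0, 100, 100], [20, 101, 99], [40, 99, 102],      # fixation 1
                 [60, 300, 250], [80, 301, 251], [100, 299, 249]])  # fixation 2

def fixations(samples, max_dispersion=5.0):
    groups, current = [], [samples[0]]
    for s in samples[1:]:
        pts = np.array(current + [s])[:, 1:]
        # dispersion = (max x - min x) + (max y - min y)
        if pts.max(0).sum() - pts.min(0).sum() <= max_dispersion:
            current.append(s)
        else:
            groups.append(np.array(current))
            current = [s]
    groups.append(np.array(current))
    return groups

fx = fixations(gaze)
print("fixation count:", len(fx))
print("time to first fixation (ms):", fx[0][0, 0])
print("fixation times (ms):", [g[-1, 0] - g[0, 0] for g in fx])
```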
Procedia PDF Downloads 131
15102 Using Equipment Telemetry Data for Condition-Based Maintenance Decisions
Authors: John Q. Todd
Abstract:
Given that modern equipment can provide comprehensive health, status, and error-condition data via built-in sensors, maintenance organizations have a new and valuable source of insight to take advantage of. This presentation will show what these data payloads look like and how they can be filtered, visualized, calculated into metrics, used for machine learning, and used to generate alerts for further action.
Keywords: condition based maintenance, equipment data, metrics, alerts
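A minimal sketch of that filter-metric-alert flow, assuming a generic pandas-style payload rather than any particular vendor's format:

```python
# Illustrative telemetry pipeline: filter by status, compute a rolling
# metric, raise an alert past a threshold. Schema and values are assumed.
import pandas as pd

telemetry = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01", periods=6, freq="h"),
    "bearing_temp_C": [61.0, 62.5, 64.0, 71.5, 83.0, 90.5],
    "status_code": ["OK", "OK", "OK", "WARN", "WARN", "FAULT"],
})

# Filter: keep samples while the machine reports a usable status.
usable = telemetry[telemetry["status_code"] != "FAULT"]

# Metric: rolling mean temperature over the last 3 samples.
usable = usable.assign(temp_ma=usable["bearing_temp_C"].rolling(3).mean())

# Alert: flag readings whose rolling mean exceeds a maintenance threshold.
ALERT_LIMIT_C = 70.0
for _, row in usable[usable["temp_ma"] > ALERT_LIMIT_C].iterrows():
    print(f"ALERT {row['timestamp']}: rolling temp {row['temp_ma']:.1f} C")
```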
Procedia PDF Downloads 188
15101 Comparative Study of Traditional Classroom Learning and Distance Learning in Pakistan
Authors: Muhammad Afzal Malik
Abstract:
Traditional classroom learning and distance-based learning are the two systems prevailing in Pakistan, and both affect the standard of education. The purpose of this study was to compare traditional classroom learning and distance learning in Pakistan: (a) to explore the effectiveness of traditional relative to distance learning in Pakistan; and (b) to identify the factors that affect traditional and distance learning. This review found that, on average, students in traditional classroom conditions performed better than those receiving distance learning. The difference between student outcomes for traditional classroom and distance learning classes, measured as the difference between treatment and control means divided by the pooled standard deviation, was larger in studies contrasting conditions that blended elements of online and face-to-face instruction with conditions taught entirely face-to-face. This research was conducted to highlight the impact of the distance learning education system on education standards. The education standards considered were institutional support, course development, learning process, student support, faculty support, and evaluation and assessment. A well-developed questionnaire was administered and distributed among 26 faculty members each from GCET H-9 and the Virtual University of Pakistan. Data were analyzed through correlation and regression analysis. Results confirmed that there is a significant relationship and impact of the DLE system on education standards. This will also provide a baseline for future research and add value to the existing body of knowledge.
Keywords: distance learning education, higher education, education standards, student performance
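The effect size the abstract describes, the difference between treatment and control means divided by the pooled standard deviation, is Cohen's d; a short sketch with invented scores:

```python
# Cohen's d: standardized mean difference with pooled SD.
# The score lists are invented for illustration.
import numpy as np

def cohens_d(treatment, control):
    nt, nc = len(treatment), len(control)
    pooled_sd = np.sqrt(((nt - 1) * np.var(treatment, ddof=1) +
                         (nc - 1) * np.var(control, ddof=1)) / (nt + nc - 2))
    return (np.mean(treatment) - np.mean(control)) / pooled_sd

classroom = [72, 75, 78, 70, 74, 77]   # illustrative exam scores
distance = [65, 70, 68, 72, 66, 69]
print("Cohen's d:", round(cohens_d(classroom, distance), 2))
```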
Procedia PDF Downloads 280
15100 Autonomous Quantum Competitive Learning
Authors: Mohammed A. Zidan, Alaa Sagheer, Nasser Metwally
Abstract:
Real-time learning is an important goal that much artificial intelligence research tries to achieve. Many problems and applications require low-cost learning, such as teaching a robot to classify and recognize patterns in real time and to perform real-time recall. In this contribution, we suggest a model of quantum competitive learning based on a series of quantum gates and an additional operator. The proposed model is able to recognize incomplete patterns, where we can increase the probability of recognizing the desired pattern at the expense of the undesired ones. Moreover, these undesired patterns could be utilized as new patterns for the system. The proposed model compares favorably with classical approaches and is more powerful than current quantum competitive learning approaches.
Keywords: competitive learning, quantum gates, winner-take-all
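For readers unfamiliar with the classical baseline the quantum model is compared against, below is a minimal winner-take-all competitive-learning sketch; it is a classical analog with invented data, not the authors' quantum circuit.

```python
# Classical winner-take-all competitive learning: the prototype closest
# to each input "wins" and moves toward it. Data and rate are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)),     # two pattern clusters
               rng.normal(3, 0.3, (50, 2))])
W = rng.normal(1.5, 1.0, (2, 2))                # two prototype vectors
lr = 0.1

for epoch in range(20):
    for x in rng.permutation(X):
        winner = np.argmin(np.linalg.norm(W - x, axis=1))
        W[winner] += lr * (x - W[winner])        # only the winner learns

print("learned prototypes:\n", W)  # approximately the two cluster centers
```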
Procedia PDF Downloads 472
15099 Creative Thinking through Mindful Practices: A Business Class Case Study
Authors: Malavika Sundararajan
Abstract:
This study introduces the use of mindfulness techniques in the classroom to make individuals aware of how the creative thinking process works, resulting in more constructive learning and application. A case observation method was utilized within a classroom setting in a graduate class in the Business School. It entailed briefing the student participants on the use of a template called the dots and depths map, having them complete it for themselves, compare it to their team members' maps, and reflect on the outputs. Finally, they were debriefed on the use of the template and its value to their learning and creative application process. The major finding is the increase in awareness levels of the participants following the use of the template, leading to a subsequent pursuit of diverse knowledge and acquisition of relevant information rather than jumping directly to solutions, which increased their overall creative outputs for the given assignment. The significant value of this study is that it can be applied to any classroom on any subject as a powerful mindfulness tool that increases creative problem solving through constructive knowledge building.
Keywords: connecting dots, mindful awareness, constructive knowledge building, learning creatively
Procedia PDF Downloads 149
15098 A Deep Learning Model with Greedy Layer-Wise Pretraining Approach for Optimal Syngas Production by Dry Reforming of Methane
Authors: Maryam Zarabian, Hector Guzman, Pedro Pereira-Almao, Abraham Fapojuwo
Abstract:
Dry reforming of methane (DRM) has sparked significant industrial and scientific interest, not only as a viable alternative for addressing the environmental concerns of two main contributors to the greenhouse effect, i.e., carbon dioxide (CO₂) and methane (CH₄), but also because it produces syngas, i.e., a mixture of hydrogen (H₂) and carbon monoxide (CO) utilized by a wide range of downstream processes as a feedstock for other chemical production. In this study, we develop an AI-enabled syngas production model to tackle the problem of achieving an equivalent H₂/CO ratio [1:1] with respect to the most efficient conversion. Firstly, the unsupervised density-based spatial clustering of applications with noise (DBSCAN) algorithm removes outlier data points from the original experimental dataset. Then, random forest (RF) and deep neural network (DNN) models employ the cleaned dataset to predict the DRM results. DNN models inherently cannot obtain accurate predictions without a huge dataset. To cope with this limitation, we employ layer-reuse approaches such as transfer learning and greedy layer-wise pretraining. Compared to the other deep models (i.e., the pure deep model and the transferred deep model), the greedy layer-wise pre-trained deep model provides the most accurate prediction, with accuracy similar to the RF model: R² values of 1.00, 0.999, 0.999, 0.999, 0.999, and 0.999 for the total outlet flow, H₂/CO ratio, H₂ yield, CO yield, CH₄ conversion, and CO₂ conversion outputs, respectively.
Keywords: artificial intelligence, dry reforming of methane, artificial neural network, deep learning, machine learning, transfer learning, greedy layer-wise pretraining
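A condensed sketch of the two-stage pipeline, DBSCAN outlier removal followed by greedy layer-wise pretraining, is shown below; feature names, network sizes, and the synthetic data are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch: DBSCAN noise removal, then a DNN trained one layer at a time
# (earlier layers frozen), then end-to-end fine-tuning.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # stand-ins for operating conditions
y = X @ rng.normal(size=(4, 1)) + 0.1 * rng.normal(size=(500, 1))  # e.g. H2/CO

# 1) Drop outliers: DBSCAN labels noise points as -1.
Xs = StandardScaler().fit_transform(X)
labels = DBSCAN(eps=1.5, min_samples=5).fit_predict(Xs)
X_clean, y_clean = Xs[labels != -1], y[labels != -1]

# 2) Greedy layer-wise pretraining: add and train one hidden layer at a
#    time, freezing previously trained layers.
trained = []
for units in [32, 16, 8]:
    inp = keras.Input(shape=(X_clean.shape[1],))
    h = inp
    for layer in trained:
        layer.trainable = False          # freeze earlier layers
        h = layer(h)
    new = keras.layers.Dense(units, activation="relu")
    out = keras.layers.Dense(1)(new(h))  # fresh regression head each stage
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    model.fit(X_clean, y_clean, epochs=20, verbose=0)
    trained.append(new)

# 3) Fine-tune the full stack end to end.
for layer in trained:
    layer.trainable = True
model.compile(optimizer="adam", loss="mse")
model.fit(X_clean, y_clean, epochs=50, verbose=0)
print("final MSE:", model.evaluate(X_clean, y_clean, verbose=0))
```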
Procedia PDF Downloads 86
15097 Improvement of Students' Active Experience through the Provision of Foundational Architecture Pedagogy by Virtual Reality Tools
Authors: Mehdi Khakzand, Flora Fakourian
Abstract:
In recent years, architects have been using virtual modeling to help them visualize their projects. Research has indicated that virtual media, particularly virtual reality, enhances architects' comprehension of design and spatial perception. Creating a communal experience for active learning is an essential component of the design process in architecture pedagogy. Replicating design principles as a critical teaching function has been particularly challenging, and this complex issue demands comprehension. Nonetheless, the use of simulation should be studied and limited as appropriate. In conjunction with extensive technology, 3D geometric illustration can bridge the gap between the real and virtual worlds. This research intends to deliver a pedagogical experience in the architecture basics course that improves the architectural design process through virtual reality tools. These tools seek to tackle current challenges in architectural illustration by offering building geometry illustration, building information (data from the building information model), and simulation results. The tools were tested over three days in a design workshop with 12 architecture students. This article presents an architectural VR-based course and explores its application in boosting students' active experiences. According to the research, this technology can improve students' cognitive skills from challenging simulations by boosting visual understanding.
Keywords: active experience, architecture pedagogy, virtual reality, spatial perception
Procedia PDF Downloads 87
15096 Automated Java Testing: JUnit versus AspectJ
Authors: Manish Jain, Dinesh Gopalani
Abstract:
The growing dependency of mankind on software technology increases the need for thorough testing of software applications and for automated testing techniques that support testing activities. We have outlined our testing strategy for performing various types of automated testing of Java applications using AspectJ, which has become the de facto standard for Aspect Oriented Programming (AOP). Likewise, JUnit, a unit testing framework, is the most popular Java testing tool. In this paper, we have evaluated our proposed AOP approach for automated testing against JUnit on various parameters. First we have discussed the similarities between the two approaches, and then we have done a detailed comparison of the two testing techniques on factors like lines of testing code, learning curve, testing of private members, etc. We established that our AOP testing approach using AspectJ has several advantages and is thus particularly more effective than JUnit.
Keywords: aspect oriented programming, AspectJ, aspects, JUnit, software testing
Procedia PDF Downloads 331
15095 A General Framework for Knowledge Discovery Using High Performance Machine Learning Algorithms
Authors: S. Nandagopalan, N. Pradeep
Abstract:
The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high-performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as the active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, a Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks like clustering and classification with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than the results reported previously.
Keywords: active contour, Bayesian, echocardiographic image, feature vector
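As one concrete component, the sketch below runs scikit-image's reference active contour ("snake") on a synthetic bright disk standing in for a cardiac chamber; parameters and data are illustrative, not the framework's actual implementation.

```python
# Active contour ("snake") delineating a chamber-like region on a
# synthetic image, using scikit-image's reference implementation.
import numpy as np
from skimage.draw import disk
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Synthetic "echo" image: a smoothed bright disk.
img = np.zeros((200, 200))
rr, cc = disk((100, 100), 40)
img[rr, cc] = 1.0
img = gaussian(img, sigma=3)

# Initialize the snake as a circle around the target region.
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([100 + 60 * np.sin(s), 100 + 60 * np.cos(s)])

snake = active_contour(img, init, alpha=0.015, beta=10, gamma=0.001)
print(snake.shape)  # (200, 2) contour points hugging the disk boundary
```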
Procedia PDF Downloads 420
15094 Early Installation Effect on the Machines' Generated Vibration
Authors: Maitham Al-Safwani
Abstract:
Several studies have analyzed motor vibration issues. It is generally accepted that vibration issues result from poor equipment installation. We had a water injection pump tested in the factory, and it exceeded the vibration limit. Once the pump was brought to site, its half-size shim plates were replaced with full-size shim plates, which drastically reduced the vibration. In this study, vibration data were recorded for several similar motors run at the same and different speeds. The vibration values were recorded for two and a half hours, and the readings were analyzed to determine when they became consistent. This was further supported by recording the audible noise produced by some machines, seeking a relationship between changes in machine noise and machine abnormalities such as vibration.
Keywords: vibration, noise, installation, machine
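A short sketch of the consistency analysis described, flagging the time at which a rolling measure of variation drops below a tolerance; the signal model and tolerance are assumed, not the recorded data.

```python
# Detect when vibration readings "settle": rolling std below a tolerance.
import numpy as np
import pandas as pd

t = np.arange(0, 9000, 10)  # ~2.5 h of samples, one every 10 s
vib = (4.0 * np.exp(-t / 1500) + 1.2
       + 0.05 * np.random.default_rng(1).normal(size=t.size))
s = pd.Series(vib, index=t)

rolling_std = s.rolling(window=30).std()
settled = rolling_std[rolling_std < 0.07]
print("readings consistent after ~", settled.index[0], "seconds")
```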
Procedia PDF Downloads 183
15093 Deep Learning-Based Approach to Automatic Abstractive Summarization of Patent Documents
Authors: Sakshi V. Tantak, Vishap K. Malik, Neelanjney Pilarisetty
Abstract:
A patent is an exclusive right granted for an invention. It can be a product or a process that provides an innovative method of doing something or offers a new technical solution to a problem. A patent is obtained by making the technical information and details about the invention publicly available. The patent owner has exclusive rights to prevent or stop anyone from using the patented invention for commercial purposes; any commercial usage, distribution, import, or export of a patented invention or product requires the patent owner's consent. It has been observed that the central and important parts of patents are scripted in idiosyncratic and complex linguistic structures that can be difficult to read, comprehend, or interpret for the masses. The abstracts of these patents tend to obfuscate the precise nature of the patent instead of clarifying it via direct and simple linguistic constructs. This makes it necessary to provide efficient access to this knowledge via concise and transparent summaries. However, due to complex and repetitive linguistic constructs and extremely long sentences, common extraction-oriented automatic text summarization methods cannot be expected to perform remarkably well when applied to patent documents. More content-oriented, abstractive summarization techniques are able to perform much better and generate more concise summaries. This paper proposes an efficient summarization system for patents using artificial intelligence, natural language processing, and deep learning techniques to condense the knowledge and essential information from a patent document into a single summary that is easier to understand, without redundant formatting and difficult jargon.
Keywords: abstractive summarization, deep learning, natural language processing, patent document
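A minimal sketch of an abstractive summarizer of this kind, using a generic pre-trained Hugging Face model; the paper does not name its model, so BART is an illustrative stand-in, not the authors' system.

```python
# Abstractive summarization with a generic pre-trained seq2seq model.
# The patent text below is a placeholder.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

patent_text = (
    "A method and apparatus for ... (claims and description of the patent, "
    "typically long, repetitive sentences in legal register) ..."
)
summary = summarizer(patent_text, max_length=120, min_length=30,
                     do_sample=False)
print(summary[0]["summary_text"])
```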
Procedia PDF Downloads 123
15092 Electrocardiogram-Based Heartbeat Classification Using Convolutional Neural Networks
Authors: Jacqueline Rose T. Alipo-on, Francesca Isabelle F. Escobar, Myles Joshua T. Tan, Hezerul Abdul Karim, Nouar Al Dahoul
Abstract:
Electrocardiogram (ECG) signal analysis and processing are crucial in the diagnosis of cardiovascular diseases, which are considered one of the leading causes of mortality worldwide. However, traditional rule-based analysis of large volumes of ECG data is time-consuming, labor-intensive, and prone to human error. With advances in computing, algorithms based on machine learning have been increasingly used to analyze ECG signals. In this paper, various deep learning algorithms were adapted to classify five classes of heartbeat types. The dataset used in this work is the synthetic MIT-BIH Arrhythmia dataset produced from generative adversarial networks (GANs). Various deep learning models, such as the ResNet-50 convolutional neural network (CNN), a 1-D CNN, and long short-term memory (LSTM), were evaluated and compared. ResNet-50 was found to outperform the other models in terms of recall and F1 score, with five-fold average scores of 98.88% and 98.87%, respectively. The 1-D CNN, on the other hand, was found to have the highest average precision of 98.93%.
Keywords: heartbeat classification, convolutional neural network, electrocardiogram signals, generative adversarial networks, long short-term memory, ResNet-50
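A compact sketch of a 1-D CNN baseline of the type evaluated, with an assumed beat length of 187 samples (common in MIT-BIH-derived beat datasets) and illustrative layer sizes:

```python
# 1-D CNN for 5-class heartbeat classification; architecture is a sketch,
# not the paper's exact configuration.
from tensorflow import keras

num_classes = 5  # the five heartbeat types
model = keras.Sequential([
    keras.Input(shape=(187, 1)),                 # one beat, single channel
    keras.layers.Conv1D(32, 5, activation="relu"),
    keras.layers.MaxPooling1D(2),
    keras.layers.Conv1D(64, 5, activation="relu"),
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(X_train, y_train, ...) on the GAN-generated beats
```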
Procedia PDF Downloads 128
15091 Impact of Knowledge Management on Learning Organizations
Authors: Gunmala Suri
Abstract:
The purpose of this study was to investigate the relationship between various dimensions of Knowledge Management and Learning Organizations. Hypotheses were formulated on the basis of the dimensions of the Learning Organization. Knowledge Management (KM) is taken as the independent variable and Learning Organization (LO) as the dependent variable. KM had 5 dimensions and LO had 7. A total of 92 participants took part in this study and answered the questionnaire. The respondents were selected using judgmental and snowball sampling and came from SMEs in and around Chandigarh. SPSS was used for data analysis. The results showed that the dimensions of KM had a positive influence on the dimensions of LO, and the hypotheses were accepted.
Keywords: knowledge management leadership, knowledge management, learning organization, knowledge management culture
Procedia PDF Downloads 418
15090 Brain Tumor Detection and Classification Using Pre-Trained Deep Learning Models
Authors: Aditya Karade, Sharada Falane, Dhananjay Deshmukh, Vijaykumar Mantri
Abstract:
Brain tumours pose a significant challenge in healthcare due to their complex nature and impact on patient outcomes. The application of deep learning (DL) algorithms in medical imaging has shown promise for accurate and efficient brain tumour detection. This paper explores the performance of various pre-trained DL models (ResNet50, Xception, InceptionV3, EfficientNetB0, DenseNet121, NASNetMobile, VGG19, VGG16, and MobileNet) on a brain tumour dataset sourced from Figshare. The dataset consists of MRI scans categorizing different types of brain tumours, including meningioma, pituitary, glioma, and no tumour. The study involves a comprehensive evaluation of these models' accuracy and effectiveness in classifying brain tumour images. Data preprocessing, augmentation, and fine-tuning techniques are employed to optimize model performance. Among the evaluated deep learning models for brain tumour detection, ResNet50 emerges as the top performer with an accuracy of 98.86%. Following closely is Xception, exhibiting a strong accuracy of 97.33%. These models showcase robust capabilities in accurately classifying brain tumour images. On the other end of the spectrum, VGG16 trails with the lowest accuracy at 89.02%.
Keywords: brain tumour, MRI image, detecting and classifying tumour, pre-trained models, transfer learning, image segmentation, data augmentation
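A sketch of the transfer-learning setup described, a frozen ImageNet-pretrained ResNet50 backbone with a new four-class head (meningioma, pituitary, glioma, no tumour); the hyperparameters are assumptions:

```python
# Transfer learning: frozen pre-trained backbone + new classification head.
from tensorflow import keras

base = keras.applications.ResNet50(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # freeze backbone for the first training stage

model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(4, activation="softmax"),  # four tumour classes
])
model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# After the head converges, unfreeze some top blocks and fine-tune with a
# lower learning rate, as in the paper's fine-tuning step.
```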
Procedia PDF Downloads 74
15089 Predictive Semi-Empirical NOx Model for Diesel Engine
Authors: Saurabh Sharma, Yong Sun, Bruce Vernham
Abstract:
Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to solve that issue. NOx formation is highly dependent on the burned gas temperature and the O₂ concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against the measured NOx, which limits the prediction of purely empirical models to the region where they have been calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlation. The model is developed using steady-state data collected over the entire operating region of the engine and a predictive combustion model developed in GT-Power by Gamma Technologies using the Direct Injected (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases are tested for different engine configurations over a large span of speed and load points. Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea level and altitude with different ambient conditions. Its advantages, such as high accuracy and robustness at different operating conditions, low computational time, and the lower number of data points required for calibration, establish a platform where the model-based approach can be used for engine calibration and development. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), NO₂/NOx ratio, etc.
Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical
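To illustrate the statistical stage, the sketch below fits an ensemble regressor to synthetic in-cylinder parameters of the kind the abstract names (burned-zone temperature, O₂ concentration, trapped fuel mass); the toy target and value ranges are invented, not engine data.

```python
# Ensemble regression on physical combustion features; data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 400
T_burn = rng.uniform(1800, 2600, n)   # burned-zone temperature [K]
o2 = rng.uniform(0.02, 0.12, n)       # burned-zone O2 mole fraction
m_fuel = rng.uniform(10, 60, n)       # trapped fuel mass [mg]
X = np.column_stack([T_burn, o2, m_fuel])
nox = 1e-4 * np.exp(T_burn / 400) * o2 * m_fuel  # Zeldovich-like toy target

model = GradientBoostingRegressor()
print("CV R^2:", cross_val_score(model, X, nox, cv=5, scoring="r2").mean())
```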
Procedia PDF Downloads 114
15088 A Study on Pre-Service English Teachers' Language Self Efficacy and Learning Goal Orientation
Authors: Ertekin Kotbaş
Abstract:
Teaching English as a Foreign Language (EFL) is a priority in many countries of the world, in particular for the English language teaching departments that train EFL teachers. Under the heading of motivational theories in foreign language education, there is numerous research in the literature. However, research combining English language self-efficacy with teachers' learning goal orientation, which has a positive impact on learning teaching skills, is scarce. Examination of the English language self-efficacy beliefs and learning goal orientations of pre-service EFL teachers may broaden horizons, considering the importance of self-efficacy and goal orientation in learning and teaching activities. At this juncture, the present study aims to investigate the relationship between English language self-efficacy and teachers' learning goal orientation in the Turkish context, in addition to the effect of the student teachers' grade level.
Keywords: English language, learning goal orientation, self efficacy, pre-service teachers
Procedia PDF Downloads 462
15087 An Intelligent Search and Retrieval System for Mining Clinical Data Repositories Based on Computational Imaging Markers and Genomic Expression Signatures for Investigative Research and Decision Support
Authors: David J. Foran, Nhan Do, Samuel Ajjarapu, Wenjin Chen, Tahsin Kurc, Joel H. Saltz
Abstract:
The large-scale data and computational requirements of investigators throughout the clinical and research communities demand an informatics infrastructure that supports both existing and new investigative and translational projects in a robust, secure environment. In some subspecialties of medicine and research, the capacity to generate data has outpaced the methods and technology used to aggregate, organize, access, and reliably retrieve this information. Leading health care centers now recognize the utility of establishing an enterprise-wide clinical data warehouse. The primary benefits that can be realized through such efforts include cost savings, efficient tracking of outcomes, advanced clinical decision support, improved prognostic accuracy, and more reliable clinical trials matching. The overarching objective of the work presented here is the development and implementation of a flexible Intelligent Retrieval and Interrogation System (IRIS) that exploits the combined use of computational imaging, genomics, and data-mining capabilities to facilitate clinical assessments and translational research in oncology. The proposed System includes a multi-modal Clinical & Research Data Warehouse (CRDW) that is tightly integrated with a suite of computational and machine-learning tools to provide insight into underlying tumor characteristics that are not apparent by human inspection alone. A key distinguishing feature of the System is a configurable Extract, Transform and Load (ETL) interface that enables it to adapt to different clinical and research data environments. This project is motivated by the growing emphasis on establishing Learning Health Systems, in which cyclical hypothesis generation and evidence evaluation become integral to improving the quality of patient care. To facilitate iterative prototyping and optimization of the algorithms and workflows for the System, the team has already implemented a fully functional Warehouse that can reliably aggregate information originating from multiple data sources including EHRs, clinical trial management systems, tumor registries, biospecimen repositories, radiology PACS, digital pathology archives, unstructured clinical documents, and next-generation sequencing services. The System enables physicians to systematically mine and review the molecular, genomic, image-based, and correlated clinical information about patient tumors, individually or as part of large cohorts, to identify patterns that may influence treatment decisions and outcomes. The CRDW core system has facilitated peer-reviewed publications and funded projects, including an NIH-sponsored collaboration to enhance the cancer registries in Georgia, Kentucky, New Jersey, and New York with machine-learning-based classifications and quantitative pathomics feature sets. The CRDW has also resulted in a collaboration with the Massachusetts Veterans Epidemiology Research and Information Center (MAVERIC) at the U.S. Department of Veterans Affairs to develop algorithms and workflows to automate the analysis of lung adenocarcinoma. Those studies showed that combining computational nuclear signatures with traditional WHO criteria through the use of deep convolutional neural networks (CNNs) led to improved discrimination among tumor growth patterns. The team has also leveraged the Warehouse to support studies investigating the potential of utilizing a combination of genomic and computational imaging signatures to characterize prostate cancer.
The results of those studies show that integrating image biomarkers with genomic pathway scores is more strongly correlated with disease recurrence than using standard clinical markers.
Keywords: clinical data warehouse, decision support, data-mining, intelligent databases, machine-learning
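A highly simplified sketch of the kind of configurable ETL step such an interface performs, where a per-source mapping adapts a source schema to the warehouse schema; the table, column, and file names are invented:

```python
# Configurable ETL: a mapping config adapts each source's columns to the
# warehouse schema. "tumor_registry.csv" is a placeholder input file.
import csv
import sqlite3

CONFIG = {  # per-source mapping: source column -> warehouse column
    "tumor_registry.csv": {"pt_id": "patient_id", "dx_code": "diagnosis"},
}

conn = sqlite3.connect("crdw.db")
conn.execute("CREATE TABLE IF NOT EXISTS cases (patient_id TEXT, diagnosis TEXT)")

for path, mapping in CONFIG.items():
    with open(path, newline="") as f:
        for row in csv.DictReader(f):                                 # extract
            record = {dst: row[src] for src, dst in mapping.items()}  # transform
            conn.execute("INSERT INTO cases VALUES (:patient_id, :diagnosis)",
                         record)                                      # load
conn.commit()
```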
Procedia PDF Downloads 127
15086 The Impact of Culture in Teaching English, the Case Study of Preparatory School of Sciences and Techniques
Authors: Nouzha Yasmina Soulimane-Benhabib
Abstract:
Language is a medium of communication and a means of expression; that is why today the learning of foreign languages, especially English, has become a basic necessity for every ambitious student. It is known that culture and language are inseparable and complementary; however, in the process of teaching a foreign language, teachers have tended to focus mainly on preparing adequate syllabi for ESP students, yet some other parameters should be considered. For instance, the culture of the target language may play an important role, since students' attitudes towards a foreign language can enhance their learning or hinder it. The aim of this study is to analyse how culture can influence the teaching of a foreign language; we take the example of English, as it is considered the second foreign language in Algeria after French. The study was conducted at the Preparatory School of Sciences and Techniques, Tlemcen, where twenty-five students participated in this research. The reasons behind learning English are various, and since English is the most widely spoken language in the world, the language of research and education, and a language used in many other fields, we have to take into consideration one important factor: the social distance between the culture of the Algerian learner and the culture of the target language, a gap that may lead to culture shock. Two steps are followed in this research: the first is to collect data from the students of the Preparatory School in the form of a questionnaire, with an interview given to six of them in order to reinforce the research and obtain precise results; the second step is to analyse these data, taking into consideration the diversity of the learners within this institution. The results obtained show that learners' attitudes towards the English community and culture are mixed and may influence their curiosity and attention to learn. Despite the large difference between Algerian and European cultures, some of the students focused mainly on the benefits of English, since they need it in their studies, research, and future career; the others, however, manifest a reluctance towards this language, mainly due to the profound impact of the English culture, which is different from the Algerian one.
Keywords: Algeria, culture, English, impact
Procedia PDF Downloads 388
15085 A Framework for Teaching Distributed Requirements Engineering in Latin American Universities
Authors: G. Sevilla, S. Zapata, F. Giraldo, E. Torres, C. Collazos
Abstract:
This work describes a framework for teaching global software engineering (GSE) in university undergraduate programs. The framework proposes a method of teaching that incorporates adequate techniques for software requirements elicitation and validated communication tools, both critical in global software development scenarios. Use of the proposed framework allows teachers to simulate small software development companies, formed by Latin American students, which build information systems. Students from three Latin American universities played the roles of engineers by applying an iterative development of a requirements specification in a global software project. The framework involves the use of a special-purpose wiki for asynchronous communication between the participants of the process. It also includes a practice to improve the quality of the software requirements formulated by the students. The additional motivation of students to participate in these practices, in conjunction with peers from other countries, is a significant factor that positively contributes to the learning process. The framework promotes skills in communication, negotiation, and other complementary competencies that are useful for working in GSE scenarios.
Keywords: requirements analysis, distributed requirements engineering, practical experiences, collaborative support
Procedia PDF Downloads 204
15084 Disruptions to Medical Education during COVID-19: Perceptions and Recommendations from Students at the University of the West Indies, Jamaica
Authors: Charléa M. Smith, Raiden L. Schodowski, Arletty Pinel
Abstract:
Due to the COVID-19 pandemic, the Faculty of Medical Sciences of The University of the West Indies (UWI) Mona in Kingston, Jamaica, had to migrate rapidly to digital and blended learning. Students in the preclinical stage of the program transitioned to full-time online learning, while students in the clinical stage experienced decreased daily patient contact and the implementation of a blend of online lectures and virtual clinical practice. Such sudden changes were coupled with the institutional pressure of introducing a novel approach to education with little time for preparation, as well as the additional strain endured by faculty, who were overwhelmed by serving as frontline workers. During the period July 20 to August 23, 2021, this study surveyed preclinical and clinical students to capture their experiences of these changes and their recommendations for the future use of digital modalities to enhance medical education. It was conducted with a fellow student of the 2021 cohort of the MultiPod mentoring program. A questionnaire was developed and distributed digitally via WhatsApp to all medical students of the UWI Mona campus to assess students' experiences and perceptions of the advantages, challenges, and impact on individual knowledge proficiency brought about by the transition to predominantly digital learning environments. 108 students replied, 53.7% preclinical and 46.3% clinical; 67.6% of the total were female and 30.6% male, while 1.8% did not identify themselves by gender. 67.2% of preclinical students preferred blended learning, and 60.3% considered that the content presented did not prepare them for clinical work. Only 31% considered the online classes interactive and encouraging of student participation. 84.5% missed socialization with classmates and friends, and 79.3% missed a focused environment for learning. 80% of the clinical students felt that they had not learned all that they expected, and only 34% had virtual interaction with patients, mostly by telephone and video calls. Observing direct consultations was considered the most useful modality, yet it was the least used. 96% of the preclinical students and 100% of the clinical students supplemented their learning with additional online tools. The main recommendations from the survey are the use of interactive teaching strategies, more discussion time with lecturers, and increased virtual interactions with patients. Universities are returning to face-to-face learning, yet it is unlikely that blended education will disappear. This study demonstrates that students' perceptions of their experience during mobility restrictions must be taken into consideration in creating more effective, inclusive, and efficient blended learning opportunities.
Keywords: blended learning, digital learning, medical education, student perceptions
Procedia PDF Downloads 166
15083 Analytical Study of Data Mining Techniques for Software Quality Assurance
Authors: Mariam Bibi, Rubab Mehboob, Mehreen Sirshar
Abstract:
Satisfying customer requirements is the ultimate goal of producing or developing any product, and the quality of the product is decided on the basis of the level of customer satisfaction. This survey reports different techniques that enhance the quality of the product through software defect prediction and by locating missing software requirements. Some mining techniques have been proposed to assess individual performance indicators in collaborative environments to reduce errors at the individual level. The basic intention is to produce a product with zero or few defects, thereby producing the best possible quality. In the survey, techniques like genetic algorithms, artificial neural networks, classification and clustering techniques, and decision trees are studied. The analysis reveals that these techniques contribute substantially to the improvement and enhancement of product quality.
Keywords: data mining, defect prediction, missing requirements, software quality
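As a concrete instance of one surveyed technique, below is a decision-tree defect-prediction sketch on synthetic code metrics; the feature names follow common public defect datasets, but the data are invented:

```python
# Decision-tree defect prediction on synthetic code metrics.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
# columns: lines of code, cyclomatic complexity, churn
X = rng.integers(1, 500, size=(300, 3)).astype(float)
y = (X[:, 1] + 0.05 * X[:, 0] + rng.normal(0, 20, 300) > 120).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)
print("defect-prediction accuracy:", clf.score(X_te, y_te))
```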
Procedia PDF Downloads 468
15082 The Difference of Learning Outcomes in Reading Comprehension between Text and Film as the Media in Indonesian Language for Foreign Speakers at the Intermediate Level
Authors: Siti Ayu Ningsih
Abstract:
This study aims to find the differences in learning outcomes for reading comprehension when text and film are used as media in Indonesian Language for Foreign Speakers (BIPA) learning at the intermediate level. Using quantitative and qualitative research methods, the study has a single respondent, a grade-nine secondary-level student from D'Royal Morocco Integrative Islamic School. The quantitative method was used to calculate the learning outcomes after the appropriate action cycle, whereas the qualitative method was used to interpret and describe the findings derived from the quantitative method. The techniques used in this study were observation and tests of the respondent's work. Based on the research, the use of text media is more effective than film for the intermediate-level BIPA learner. This is because, when using film, the learner does not have enough time to note difficult vocabulary or to look up its meaning in the dictionary. The use of text media shows better effectiveness because it does not require additional time to note difficult words; for words that are difficult or strange, the learner can immediately find their meaning in the dictionary. The presence of the text also helps the BIPA learner find the answers to questions more easily, by matching the vocabulary of the question to references in the text.
Keywords: Indonesian language for foreign speakers, learning outcome, media, reading comprehension
Procedia PDF Downloads 197
15081 Quality Assurance in Cardiac Disorder Detection Images
Authors: Anam Naveed, Asma Andleeb, Mehreen Sirshar
Abstract:
In this article, image processing techniques are applied to cardiac images to enhance image quality. Two types of methodologies are considered in the survey: invasive techniques and non-invasive techniques. Different image processes for improving cardiac image quality and reducing the amount of radiation exposure in invasive techniques are explored. Different image processing algorithms for enhancing non-invasive cardiac image quality are described. Besides these two methodologies, a third methodology is applied to live streaming of the heart rate in an ECG window, extracting necessary information, removing noise, and enhancing quality. Sensitivity analyses have been carried out to investigate the impact of cardiac images on the diagnosis of cardiac artery disease and how image enhancement helps the cardiologist diagnose disease. The paper evaluates the strengths and weaknesses of the different techniques applied to improve image quality and draws a conclusion. Some specific limitations must be considered throughout the survey; for example, the patient's heart rate must be 70-75 beats/minute during angiography, and patient weight and radiation exposure are similarly limited.
Keywords: cardiac images, CT angiography, critical analysis, exposure radiation, invasive techniques, non-invasive techniques
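A small sketch of one enhancement step of the kind the survey covers, contrast-limited adaptive histogram equalization (CLAHE) on a grayscale angiography frame; the file names are placeholders:

```python
# CLAHE contrast enhancement on a grayscale frame (placeholder file names).
import cv2

frame = cv2.imread("angio_frame.png", cv2.IMREAD_GRAYSCALE)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(frame)
cv2.imwrite("angio_frame_enhanced.png", enhanced)
```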
Procedia PDF Downloads 352
15080 Understanding the Heart of the Matter: A Pedagogical Framework for Apprehending Successful Second Language Development
Authors: Cinthya Olivares Garita
Abstract:
Untangling language processing in second language development has been either a taken-for-granted, overlooked task for some English language teaching (ELT) instructors or a considerable feat for others. From the most traditional language instruction to the most communicative methodologies, how to assist L2 learners in processing language in the classroom has become a challenging matter in second language teaching. Amidst an ample array of methods, strategies, and techniques for teaching a target language, finding a suitable model to lead learners to process, interpret, and negotiate meaning to communicate in a second language imposes a great responsibility on language teachers; committed teachers are those who are aware of their role in equipping learners with the appropriate tools to communicate in the target language in a 21st-century society. Unfortunately, one might find some English language teachers convinced that their job is only to lecture students; others are advocates of textbook-based instruction that might hinder second language processing, and just a few might courageously struggle to facilitate second language learning effectively. Grounded in the most representative empirical studies on comprehensible input, processing instruction, and focus on form, this analysis aims to facilitate the understanding of how second language learners process and automatize input, and it proposes a pedagogical framework for the successful development of a second language. In light of this, the paper is structured to tackle noticing, attention, and structured input as the heart of processing instruction; comprehensible input as the missing link in second language learning; and form-meaning connections as opposed to traditional grammar approaches to language teaching. The author finishes by suggesting a pedagogical framework involving noticing, attention, comprehensible input, and form (NACIF, based on the acronym) to support ELT instructors, teachers, and scholars in the challenging task of facilitating the understanding of effective second language development.
Keywords: second language development, pedagogical framework, noticing, attention, comprehensible input, form
Procedia PDF Downloads 29
15079 Block N LVI from the Northern Side of the Parthenon Frieze: A Case Study of Augmented Reality for Museum Application
Authors: Donato Maniello, Alessandra Cirafici, Valeria Amoretti
Abstract:
This paper aims to present a new method that consists in the use of video mapping techniques, a particular form of augmented reality, which could produce new tools, different from the ones actually in use, for an interactive museum experience. By "augmented reality" we mean the addition of more information than the visitor would normally perceive; this information is mediated by the use of a computer and a projector. The proposed application involves the creation of a documentary that depicts and explains the history of the artifact and illustrates its features; this is projected onto the surface of a faithful copy of the frieze (obtained at full scale with a 3D printer). This mode of operation uses different techniques that allow passing from the creation of the model to the creation of content, through an accurate historical and artistic analysis, and finally to the warping phase, which permits the real and virtual models to be overlapped. The final step, which is still being studied, includes the creation of interactive content that would be activated by visitors through appropriate motion sensors.
Keywords: augmented reality, multimedia, Parthenon frieze, video mapping
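A minimal sketch of the warping phase, computing a homography from the corners of the projector image to measured corners of the physical replica and warping the documentary frame; all coordinates and file names are placeholders:

```python
# Perspective warp for projection mapping (placeholder coordinates/files).
import cv2
import numpy as np

frame = cv2.imread("documentary_frame.png")          # content to project
h, w = frame.shape[:2]
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
dst = np.float32([[32, 41], [1210, 55], [1195, 690], [40, 672]])  # measured corners

H = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(frame, H, (1280, 720))  # projector resolution
cv2.imwrite("warped_for_projection.png", warped)
```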
Procedia PDF Downloads 387
15078 A Survey of Response Generation of Dialogue Systems
Authors: Yifan Fan, Xudong Luo, Pingping Lin
Abstract:
An essential task in the field of artificial intelligence is to allow computers to interact with people through natural language. Therefore, research on virtual assistants and dialogue systems has received widespread attention from industry and academia. Response generation plays a crucial role in dialogue systems, so to push forward research on this topic, this paper surveys various methods for response generation. We sort these methods into three categories. The first includes finite state machine methods, framework methods, and instance methods. The second contains full-text indexing methods, ontology methods, large knowledge base methods, and some other methods. The third covers retrieval methods and generative methods. We also discuss some hybrid methods based on knowledge and deep learning. We compare their advantages and disadvantages and point out ways these studies can be improved further. Our discussion covers studies published in leading conferences such as IJCAI and AAAI in recent years.
Keywords: deep learning, generative, knowledge, response generation, retrieval
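As a toy instance of the retrieval category, the sketch below returns the stored response whose question is closest to the user's utterance under TF-IDF cosine similarity; the mini-corpus is invented for illustration:

```python
# Retrieval-based response generation: nearest stored question wins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pairs = [
    ("what time do you open", "We open at 9 am."),
    ("where are you located", "We are at 12 Main Street."),
    ("do you take card payments", "Yes, all major cards are accepted."),
]
questions = [q for q, _ in pairs]
vec = TfidfVectorizer().fit(questions)

def respond(utterance: str) -> str:
    sims = cosine_similarity(vec.transform([utterance]),
                             vec.transform(questions))[0]
    return pairs[int(sims.argmax())][1]

print(respond("when do you open"))  # -> "We open at 9 am."
```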
Procedia PDF Downloads 134
15077 Leveraging Multimodal Neuroimaging Techniques to in vivo Address Compensatory and Disintegration Patterns in Neurodegenerative Disorders: Evidence from Cortico-Cerebellar Connections in Multiple Sclerosis
Authors: Efstratios Karavasilis, Foteini Christidi, Georgios Velonakis, Agapi Plousi, Kalliopi Platoni, Nikolaos Kelekis, Ioannis Evdokimidis, Efstathios Efstathopoulos
Abstract:
Introduction: Advanced structural and functional neuroimaging techniques contribute to the study of anatomical and functional brain connectivity and its role in the pathophysiology and symptom heterogeneity of several neurodegenerative disorders, including multiple sclerosis (MS). Aim: In the present study, we applied multiparametric neuroimaging techniques to investigate structural and functional cortico-cerebellar changes in MS patients. Material: We included 51 MS patients (28 with clinically isolated syndrome [CIS], 31 with relapsing-remitting MS [RRMS]) and 51 age- and gender-matched healthy controls (HC) who underwent MRI in a 3.0T scanner. Methodology: The acquisition protocol included high-resolution 3D T1-weighted, diffusion-weighted imaging and echo planar imaging sequences for the analysis of volumetric, tractography, and functional resting-state data, respectively. We performed between-group comparisons (CIS, RRMS, HC) using the CAT12 and CONN16 MATLAB toolboxes for the analysis of volumetric (cerebellar gray matter density) and functional (cortico-cerebellar resting-state functional connectivity) data, respectively. The Brainance suite was used for the analysis of tractography data (cortico-cerebellar white matter integrity: fractional anisotropy [FA], axial and radial diffusivity [AD, RD]) to reconstruct the cerebellar tracts. Results: Patients with CIS did not show significant gray matter (GM) density differences compared with HC. However, they showed decreased FA and increased diffusivity measures in cortico-cerebellar tracts, and increased cortico-cerebellar functional connectivity. Patients with RRMS showed decreased GM density in cerebellar regions, decreased FA and increased diffusivity measures in cortico-cerebellar WM tracts, as well as a pattern of increased and mostly decreased functional cortico-cerebellar connectivity compared to HC. The comparison between CIS and RRMS patients revealed significant GM density differences, reduced FA and increased diffusivity measures in WM cortico-cerebellar tracts, and increased/decreased functional connectivity. The identification of decreased WM integrity and increased functional cortico-cerebellar connectivity without GM changes in CIS, and the pattern of decreased GM density, decreased WM integrity, and mostly decreased functional connectivity in RRMS patients, emphasizes the role of compensatory mechanisms in early disease stages and the disintegration of structural and functional networks with disease progression. Conclusions: Our study highlights the added value of multimodal neuroimaging techniques for the in vivo investigation of cortico-cerebellar brain changes in neurodegenerative disorders. A future opportunity to leverage multimodal neuroimaging data remains the integration of such data into machine learning approaches to more accurately classify and predict patients' disease course.
Keywords: advanced neuroimaging techniques, cerebellum, MRI, multiple sclerosis
Procedia PDF Downloads 140
15076 Assessment of Procurement-Demand of Milk Plant Using Quality Control Tools: A Case Study
Authors: Jagdeep Singh, Prem Singh
Abstract:
Milk is considered an essential and complete food. The present study was conducted at Milk Plant Mohali, with particular reference to the procurement section, where the cash inflow was maximum, with the objective of achieving higher productivity and reducing wastage of milk. It was observed that during the period January 2014 to March 2014 the average procurement of milk was 4,19,361 litres per month, at a procurement cost of Rs. 35 per litre. The total cost of procurement was thereby about Rs. 1 crore 46 lakh per month, but there was a mismatch between procurement and production of milk, which led to an average loss of Rs. 12,94,405 per month. To solve the procurement-production problem, quality control tools such as brainstorming, flow charts, cause-and-effect diagrams, and Pareto analysis were applied wherever applicable. With the successful implementation of these quality control tools, an average saving of Rs. 4,59,445 per month was achieved.
Keywords: milk, procurement-demand, quality control tools
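A small sketch of the Pareto-analysis step, ranking causes of wastage by contribution to find the "vital few"; the cause labels and volumes are invented, not the plant's data:

```python
# Pareto analysis: rank causes by contribution, track cumulative share.
causes = {"sour/curdled milk": 5200, "spillage in transit": 2100,
          "testing rejections": 900, "can residue": 400, "other": 300}

total = sum(causes.values())
cum = 0.0
for cause, qty in sorted(causes.items(), key=lambda kv: -kv[1]):
    cum += qty
    print(f"{cause:22s} {qty:5d} L  cumulative {100*cum/total:5.1f}%")
# Causes above the ~80% cumulative line are the first targets for action.
```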
Procedia PDF Downloads 532
15075 Collaboration of Game Based Learning with the Roaming The Stairs Model Using the Tajribi Method in PAI Lessons at the Ummul Mukminin Islamic Boarding School, Makassar, South Sulawesi
Authors: Ratna Wulandari, Shahidin
Abstract:
This article aims to show how the Game Based Learning model with the Roaming The Stairs game, combined with the tajribi method, can make PAI lessons active and interactive. This research uses a qualitative approach with a case study design. Data were collected using interviews, observation, and documentation, and analyzed through the stages of data reduction, data display, and verification and drawing of conclusions. Data validity was tested using the triangulation method. The results of the research show that (1) children in grades 9A, 9B, and 9C like learning PAI using the Roaming The Stairs game; (2) children in grades 9A, 9B, and 9C are active and can work in groups to solve problems in the Roaming The Stairs game; and (3) the class atmosphere becomes fun with this learning method, namely learning while playing.
Keywords: game based learning, Roaming The Stairs, tajribi method, PAI
Procedia PDF Downloads 22
15074 A Deep Learning Based Approach for Dynamically Selecting Pre-processing Technique for Images
Authors: Revoti Prasad Bora, Nikita Katyal, Saurabh Yadav
Abstract:
Pre-processing plays an important role in various image processing applications. Most of the time, due to the similar nature of the images, a particular pre-processing step or a set of pre-processing steps is sufficient to produce the desired results. However, in the education domain there is a wide variety of images, including images with line-based diagrams, chemical formulas, mathematical equations, etc. Hence a single pre-processing step or a fixed set of steps may not yield good results. Therefore, a deep learning based approach for dynamically selecting a relevant pre-processing technique for each image is proposed. The proposed method works as a classifier that detects hidden patterns in the images and predicts the relevant pre-processing technique needed for the image. The approach was tested on an image similarity matching problem, but it can be adapted to other use cases too. Experimental results showed significant improvement in average similarity ranking with the proposed method as opposed to static pre-processing techniques.
Keywords: deep learning, classification, pre-processing, computer vision, image processing, educational data mining
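A sketch of the core idea, a small CNN that maps an input image to the index of a pre-processing routine to apply; the routine set and network are assumptions, as the paper does not list its exact configuration:

```python
# Classifier that selects a pre-processing routine per image (assumed set).
from tensorflow import keras

PREPROCESSORS = ["binarize", "deskew", "denoise", "none"]  # assumed set

selector = keras.Sequential([
    keras.Input(shape=(128, 128, 1)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(len(PREPROCESSORS), activation="softmax"),
])
selector.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# At inference: idx = selector.predict(img[None])[0].argmax(), then
# dispatch to PREPROCESSORS[idx] before similarity matching.
```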
Procedia PDF Downloads 163