Search results for: deep learning network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12020

11090 Project and Module Based Teaching and Learning

Authors: Jingyu Hou

Abstract:

This paper proposes a new teaching and learning approach: Project and Module Based Teaching and Learning (PMBTL). The PMBTL approach incorporates the merits of project/problem-based and module-based learning methods and overcomes their limitations. The correlation between teaching, learning, practice, and assessment is emphasized in this approach, and new methods have been proposed accordingly. The distinct features of these new methods differentiate the PMBTL approach from conventional teaching approaches. Evaluation of this approach in practical teaching and learning activities demonstrates its effectiveness and stability in improving the performance and quality of teaching and learning. The approach proposed in this paper can also inform the design of other teaching units.

Keywords: computer science education, project and module based, software engineering, module based teaching and learning

Procedia PDF Downloads 487
11089 State of the Art on the Recommendation Techniques of Mobile Learning Activities

Authors: Nassim Dennouni, Yvan Peter, Luigi Lancieri, Zohra Slama

Abstract:

The objective of this article is to present a bibliographic study on the recommendation of mobile learning activities used as part of field trip scenarios. Recommendation systems are widely used in mobile contexts because they can be used to deliver learning activities. These systems should take into account the history of visits and the teacher's pedagogy in order to provide adaptive learning according to the learner's instantaneous position. To achieve this objective, we review the existing literature on field trip scenarios that recommend mobile learning activities.

Keywords: mobile learning, field trip, mobile learning activities, collaborative filtering, recommendation system, point of interest, ACO algorithm

Procedia PDF Downloads 441
11088 Empowering Transformers for Evidence-Based Medicine

Authors: Jinan Fiaidhi, Hashmath Shaik

Abstract:

Breaking the barrier to practicing evidence-based medicine relies on effective methods for rapidly identifying relevant evidence in the body of biomedical literature. An important challenge confronting medical practitioners is the long time needed to browse, filter, summarize, and compile information from different medical resources. Deep learning can help to solve this through automatic question answering (Q&A) and transformers. However, Q&A and transformer technologies are not trained to answer clinical queries that can be used for evidence-based practice, nor can they respond to structured clinical questioning protocols like PICO (Patient/Problem, Intervention, Comparison, and Outcome). This article describes the use of deep learning techniques for Q&A based on transformer models like BERT and GPT to answer PICO clinical questions for evidence-based practice, with evidence extracted from sound medical research resources like PubMed. We report acceptable clinical answers that are supported by findings from PubMed. Our transformer methods reach acceptable state-of-the-art performance based on a two-stage bootstrapping process: first filtering relevant articles, then identifying articles that support the requested outcome expressed by the PICO question. Moreover, we also report experiments that augment our bootstrapping techniques with patched attention to the most important keywords in the clinical case and the PICO question. Our bootstrapping patched with attention shows the relevance of the collected evidence based on entropy metrics.
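To make the two-stage idea above concrete, the following is a minimal illustrative sketch (not the authors' system) of answering a PICO question over pre-filtered PubMed abstracts with an off-the-shelf transformer QA model; the model name, the example question, and the retrieval step are assumptions made only for illustration.

```python
# Hedged sketch: retrieval-then-read answering of a PICO question with a transformer QA model.
# The model name and the retrieval step are illustrative assumptions, not the authors' pipeline.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

pico_question = ("In adult patients with type 2 diabetes (P), does metformin (I) "
                 "compared with placebo (C) reduce HbA1c (O)?")

# Stage 1 (assumed): abstracts already filtered for relevance, e.g. by a keyword
# or embedding search against PubMed.
retrieved_abstracts = [
    "Metformin reduced HbA1c by 1.1% versus placebo in a 24-week randomized trial...",
    "No significant change in fasting glucose was observed in the control arm...",
]

# Stage 2: read each candidate abstract and keep the highest-scoring answer span.
answers = [qa(question=pico_question, context=abstract) for abstract in retrieved_abstracts]
best = max(answers, key=lambda a: a["score"])
print(best["answer"], best["score"])
```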

Keywords: automatic question answering, PICO questions, evidence-based medicine, generative models, LLM transformers

Procedia PDF Downloads 35
11087 AI for Efficient Geothermal Exploration and Utilization

Authors: Velimir "monty" Vesselinov, Trais Kliplhuis, Hope Jasperson

Abstract:

Artificial intelligence (AI) is a powerful tool in the geothermal energy sector, aiding in both exploration and utilization. Identifying promising geothermal sites can be challenging due to limited surface indicators and the need for expensive drilling to confirm subsurface resources. Geothermal reservoirs can be located deep underground and exhibit complex geological structures, making traditional exploration methods time-consuming and imprecise. AI algorithms can analyze vast datasets of geological, geophysical, and remote sensing data, including satellite imagery, seismic surveys, geochemistry, and geology. Machine learning algorithms can identify subtle patterns and relationships within this data, potentially revealing hidden geothermal potential in areas previously overlooked. To address these challenges, a Science-Informed Machine Learning (SIML) technology has been developed. SIML methods differ from traditional ML techniques. In both cases, the ML models are trained to predict the spatial distribution of an output (e.g., pressure, temperature, heat flux) based on a series of inputs (e.g., permeability, porosity, etc.). Traditional ML relies on deep and wide neural networks (NNs) based on simple algebraic mappings to represent complex processes. In contrast, the SIML neurons incorporate complex mappings (including constitutive relationships and physics/chemistry models). This results in ML models that have a physical meaning and satisfy physical laws and constraints. The prototype of the developed software, called GeoTGO, is accessible through the cloud. Our software prototype demonstrates how different data sources can be made available for processing, executes demonstrative SIML analyses, and presents the results in tabular and graphic form.
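As a rough illustration of the science-informed idea (not the GeoTGO implementation), the sketch below trains a small network to predict temperature while penalizing violations of a steady-state heat-conduction residual; the architecture, coordinate layout, conductivity value, and loss weighting are assumptions.

```python
# Hedged sketch of a physics-constrained loss in the spirit of science-informed ML.
# All layer sizes, the input layout (depth, permeability, porosity), and the
# conduction residual along the depth coordinate are illustrative assumptions.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

def loss_fn(x, t_obs, conductivity=2.5):
    # x: [N, 3] inputs (column 0 assumed to be depth), t_obs: [N, 1] observed temperatures
    x = x.requires_grad_(True)
    t_pred = net(x)
    data_loss = nn.functional.mse_loss(t_pred, t_obs)
    # Physics penalty: residual of d/dz (k * dT/dz) = 0 along the depth coordinate.
    grad_t = torch.autograd.grad(t_pred.sum(), x, create_graph=True)[0][:, 0:1]
    flux = conductivity * grad_t
    div_flux = torch.autograd.grad(flux.sum(), x, create_graph=True)[0][:, 0:1]
    physics_loss = (div_flux ** 2).mean()
    return data_loss + 0.1 * physics_loss

# One training step (assumed usage):
# loss = loss_fn(x_batch, t_batch); optimizer.zero_grad(); loss.backward(); optimizer.step()
```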

Keywords: science-informed machine learning, artificial intelligence, exploration, utilization, hidden geothermal

Procedia PDF Downloads 44
11086 A Neuron Model of Facial Recognition and Detection of an Authorized Entity Using Machine Learning System

Authors: J. K. Adedeji, M. O. Oyekanmi

Abstract:

This paper critically examines the use of machine learning procedures to curb unauthorized access to valuable areas of an organization. The use of passwords, PIN codes, and user identification has in recent times been only partially successful in curbing identity-related crimes, hence the need for a system that incorporates biometric characteristics such as DNA and pattern recognition of variations in facial expressions. The facial model used is the OpenCV library, which is based on certain physiological features; a Raspberry Pi 3 module is used to compile the OpenCV library, which extracts and stores the detected faces in the datasets directory through the use of a camera. The model is trained with 50 epoch runs on the database and recognized by the Local Binary Pattern Histogram (LBPH) recognizer contained in OpenCV. The training algorithm used by the neural network is backpropagation, coded in Python with 200 epoch runs, to identify specific resemblance in the exclusive OR (XOR) output neurons. The research confirmed that physiological parameters are more effective measures for curbing identity-related crimes.
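A minimal sketch of the detection-plus-LBPH-recognition stage described above follows; it is a generic OpenCV example rather than the authors' code, requires the opencv-contrib-python package, and the confidence threshold and dataset handling are assumptions.

```python
# Hedged sketch: Haar-cascade face detection followed by LBPH recognition (generic OpenCV usage).
# Requires opencv-contrib-python; the threshold and the training data are illustrative assumptions.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()

# Training data: grayscale face crops collected by the camera into the datasets directory.
faces, labels = [], []   # filled elsewhere; labels are integer identities
# recognizer.train(faces, np.array(labels))

def identify(frame, threshold=70.0):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        label, confidence = recognizer.predict(gray[y:y + h, x:x + w])
        # Lower LBPH confidence means a closer match; the cutoff is an assumption.
        if confidence < threshold:
            return label          # authorized identity
    return None                   # unknown / unauthorized
```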

Keywords: biometric characters, facial recognition, neural network, OpenCV

Procedia PDF Downloads 250
11085 Monitoring the Effect of Deep Frying and the Type of Food on the Quality of Oil

Authors: Omar Masaud Almrhag, Frage Lhadi Abookleesh

Abstract:

Different types of food, such as banana, potato, and chicken, affect the quality of oil during deep fat frying. The changes in oil quality were evaluated and compared. Four different types of edible oil, namely corn, soybean, canola, and palm oil, were used for deep fat frying at 180°C ± 5°C for 5 h/day for six consecutive days. Potatoes were sliced into wedges 7-8 cm in length, and chicken was cut into uniform pieces of 100 g each. The parameters used to assess the quality of the oil were total polar compounds (TPC), iodine value (IV), specific extinction E1% at 233 nm and 269 nm, fatty acid composition (FAC), free fatty acids (FFA), viscosity (cP), and changes in thermal properties. Results showed that TPC, IV, FAC, viscosity (cP), and FFA composition changed significantly with time (P < 0.05) and type of food. Significant differences (P < 0.05) were noted for the measured parameters during frying of the three products mentioned above.

Keywords: frying potato, chicken, frying deterioration, quality of oil

Procedia PDF Downloads 416
11084 Using the Dokeos Platform for Industrial E-Learning Solution

Authors: Kherafa Abdennasser

Abstract:

The application of Information and Communication Technologies (ICT) to training has led to the creation of a new reality called e-learning, which can be described as the marriage of multimedia (sound, image, and text) and the Internet (online delivery, interactivity). Distance learning has become an important mode of training, and it relies in particular on the setup of a distance learning platform. In this work, we use an open-source platform named Dokeos to manage a distance training course on GPS, called e-GPS. The learner is followed throughout the training. In this system, trainers and learners communicate individually or in groups, while the administrator sets up and maintains the system.

Keywords: ICT, e-learning, learning platform, Dokeos, GPS

Procedia PDF Downloads 474
11083 Quality and Quantity in the Strategic Network of Higher Education Institutions

Authors: Juha Kettunen

Abstract:

The study analyzes the quality and the size of the strategic network of higher education institutions and the concept of fitness for purpose in quality assurance. It also analyzes the transaction costs of networking, which affect the number of members in the network. Empirical evidence is presented from the Consortium on Applied Research and Professional Education, a European strategic network of six higher education institutions. The results of the study support the argument that the number of members in a strategic network should be relatively small to provide high-quality results. The practical importance is that networking has been able to promote international research and development projects. The results of this study are important for those who want to design and improve international networks in higher education.

Keywords: higher education, network, research and development, strategic management

Procedia PDF Downloads 341
11082 Energy Efficient Firefly Algorithm in Wireless Sensor Network

Authors: Wafa’ Alsharafat, Khalid Batiha, Alaa Kassab

Abstract:

A wireless sensor network (WSN) is comprised of a huge number of small and cheap devices known as sensor nodes. Usually, these sensor nodes are massively and randomly deployed, in an ad-hoc fashion, over hostile and harsh environments to sense, collect, and transmit data to the needed locations (i.e., the base station). One of the main advantages of a WSN is the ability to work in unattended and scattered environments regardless of the presence of humans, such as around remote active volcanoes or earthquake zones. In an expanding WSN, network lifetime is a major concern, and clustering techniques are important for maximizing it. Nature-inspired algorithms are developed and optimized to find solutions to various optimization problems. We propose an Energy-Efficient Firefly Algorithm to extend the network lifetime as long as possible.
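The sketch below illustrates, in general terms, how a firefly algorithm could be used to place cluster heads in a WSN; it is not the authors' protocol, and the fitness function, parameter values, and network layout are assumptions for illustration only.

```python
# Hedged sketch: firefly algorithm searching for cluster-head positions in a WSN.
# Fireflies encode candidate head positions; "brightness" is an assumed cost to minimize.
import numpy as np

rng = np.random.default_rng(0)
n_fireflies, dim, iters = 20, 10, 100       # dim = 5 cluster heads * (x, y), flattened
alpha, beta0, gamma = 0.2, 1.0, 1.0

def fitness(pos, nodes):
    # Assumed objective: total distance from each sensor node to its nearest cluster head.
    heads = pos.reshape(-1, 2)
    d = np.linalg.norm(nodes[:, None, :] - heads[None, :, :], axis=2)
    return d.min(axis=1).sum()

nodes = rng.uniform(0, 100, size=(50, 2))            # sensor node coordinates (assumed field)
x = rng.uniform(0, 100, size=(n_fireflies, dim))     # firefly positions
light = np.array([fitness(p, nodes) for p in x])

for _ in range(iters):
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if light[j] < light[i]:                   # j is "brighter" (lower cost)
                r2 = np.sum((x[i] - x[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2 / 100.0)   # scaled attractiveness (assumption)
                x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                light[i] = fitness(x[i], nodes)

best_heads = x[np.argmin(light)].reshape(-1, 2)       # chosen cluster-head positions
```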

Keywords: wireless network, SN, Firefly, energy efficiency

Procedia PDF Downloads 387
11081 An Analysis of a Canadian Personalized Learning Curriculum

Authors: Ruthanne Tobin

Abstract:

The shift to a personalized learning (PL) curriculum in Canada represents an innovative approach to teaching and learning that is also evident in various initiatives across the 32-nation OECD. The premise behind PL is that empowering individual learners to have more input into how they access and construct knowledge, and express their understanding of it, will result in more meaningful school experiences and academic success. In this paper presentation, the author reports on a document analysis of the new curriculum in the province of British Columbia. Three theoretical frameworks are used to analyze the new curriculum. Framework 1 focuses on five dominant aspects (FDA) of PL at the classroom level. Framework 2 focuses on conceptualizing and enacting personalized learning (CEPL) within three spheres of influence. Framework 3 focuses on the integration of three types of knowledge (content, technological, and pedagogical). Analysis is ongoing, but preliminary findings suggest that the new curriculum addresses framework 1, which identifies five areas of personalized learning, quite well: 1) assessment for learning; 2) effective teaching and learning; 3) curriculum entitlement (choice); 4) school organization; and 5) “beyond the classroom walls” (learning in the community). Framework 2 appears to be less well developed in the new curriculum. This framework speaks to the dynamics of PL within three spheres of interaction: 1) nested agency, comprised of overarching constraints [and enablers] from policy makers, school administrators and community; 2) relational agency, which refers to a capacity for professionals to develop a network of expertise to serve shared goals; and 3) students’ personalized learning experience, which integrates differentiation with self-regulation strategies. Framework 3 appears to be well executed in the new PL curriculum, as it employs the theoretical model of technological pedagogical content knowledge (TPACK), in which there are three interdependent bodies of knowledge. Notable within this framework is the emphasis on the pairing of technologies with excellent pedagogies to significantly assist students and teachers. This work will be of high relevance to educators interested in innovative school reform.

Keywords: curriculum reform, K-12 school change, innovations in education, personalized learning

Procedia PDF Downloads 276
11080 Benchmarking Machine Learning Approaches for Forecasting Hotel Revenue

Authors: Rachel Y. Zhang, Christopher K. Anderson

Abstract:

A critical aspect of revenue management is a firm’s ability to predict demand as a function of price. Historically, hotels have used simple time series models (regression and/or pick-up based models) owing to the complexities of trying to build causal models of demand. Machine learning approaches are slowly attracting attention owing to their flexibility in modeling relationships. This study provides an overview of approaches to forecasting hospitality demand, focusing on the opportunities created by machine learning approaches, including k-nearest-neighbors, support vector machine, regression tree, and artificial neural network algorithms. The out-of-sample performance of these approaches to forecasting hotel demand is illustrated using a proprietary sample of market-level (24 properties) transactional data for Las Vegas, NV. Causal predictive models can be built and evaluated owing to the availability of market-level (versus firm-level) data. This research also compares and contrasts the accuracy of firm-level models (i.e., predictive models for hotel A using only hotel A’s data) with models using market-level data (prices, review scores, location, chain scale, etc., for all hotels within the market). The proposed models will be valuable for hotel revenue prediction given the basic characteristics of a hotel property and can be applied in performance evaluation for an existing hotel. The findings will unveil the features that play key roles in a hotel’s revenue performance, which has considerable potential usefulness in both revenue prediction and evaluation.
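A minimal sketch of this kind of benchmarking with scikit-learn follows; it is not the authors' code, and the stand-in features, hyperparameters, and error metric are assumptions.

```python
# Hedged sketch: out-of-sample comparison of the four model families named above.
# The synthetic data, feature names, and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_percentage_error

# X: e.g., price, lead time, day of week, review score, chain scale; y: rooms sold.
rng = np.random.default_rng(42)
X, y = rng.random((1000, 5)), rng.random(1000) * 100      # stand-in data

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "kNN": KNeighborsRegressor(n_neighbors=10),
    "SVM": SVR(C=10.0),
    "Regression tree": DecisionTreeRegressor(max_depth=6),
    "ANN": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    err = mean_absolute_percentage_error(y_test, model.predict(X_test))
    print(f"{name}: out-of-sample MAPE = {err:.3f}")
```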

Keywords: hotel revenue, k-nearest-neighbors, machine learning, neural network, prediction model, regression tree, support vector machine

Procedia PDF Downloads 126
11079 LGG Architecture for Brain Tumor Segmentation Using Convolutional Neural Network

Authors: Sajeeha Ansar, Asad Ali Safi, Sheikh Ziauddin, Ahmad R. Shahid, Faraz Ahsan

Abstract:

The most aggressive form of brain tumor is called glioma. Glioma is a kind of tumor that arises from the glial tissue of the brain and occurs quite often. A fully automatic 2D-CNN model for brain tumor segmentation is presented in this paper. We performed pre-processing steps to remove noise and intensity variances using N4ITK and standard intensity correction, respectively. We used the Keras open-source library with Theano as the backend for fast implementation of the CNN model. In addition, we used the BRATS 2015 MRI dataset to evaluate our proposed model, and the SimpleITK open-source library to analyze the images. Moreover, we extracted random 2D patches for the proposed 2D-CNN model for efficient brain segmentation. We extract 2D patches instead of 3D because 2D data carry less dimensional information, which helps reduce computational time. The Dice Similarity Coefficient (DSC) is used as the performance measure for the evaluation of the proposed method. Our method achieved DSC scores of 0.77 for the complete, 0.76 for the core, and 0.77 for the enhanced tumor regions. These results are comparable with previously implemented 2D CNN architectures.
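The following is a hedged sketch of a patch-based 2D CNN of the kind described (not the authors' exact architecture); the patch size, layer widths, and the five BRATS 2015 classes are assumptions used only to illustrate the approach of classifying each patch by the label of its centre voxel.

```python
# Hedged sketch: a small 2D CNN that classifies multi-modal MRI patches; segmentation is
# obtained by sliding the classifier over the volume slice. Sizes are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 5          # background, necrosis, edema, non-enhancing, enhancing (BRATS 2015)
model = keras.Sequential([
    layers.Input(shape=(33, 33, 4)),              # 33x33 patch, 4 MRI modalities (assumed)
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# Assumed usage:
# model.fit(train_patches, train_labels, validation_data=(val_patches, val_labels), epochs=20)
```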

Keywords: brain tumor segmentation, convolutional neural networks, deep learning, LGG

Procedia PDF Downloads 176
11078 SiamMask++: More Accurate Object Tracking through Layer Wise Aggregation in Visual Object Tracking

Authors: Hyunbin Choi, Jihyeon Noh, Changwon Lim

Abstract:

In this paper, we propose SiamMask++, an architecture that performs layer-wise aggregation and depth-wise cross-correlation and introduces a multi-RPN module and a multi-MASK module to improve EAO (Expected Average Overlap), a representative performance evaluation metric for the Visual Object Tracking (VOT) challenge. The proposed architecture, SiamMask++, has two versions: bi_SiamMask++, which runs in real time (56 fps) on systems equipped with GPUs (Titan XP), and rf_SiamMask++, which adds mask refinement modules for EAO improvements. Tests are performed on VOT2016, VOT2018, and VOT2019, representative Visual Object Tracking datasets labeled with rotated bounding boxes. SiamMask++ performs better than SiamMask on all three datasets tested. On the VOT2018 dataset in particular, SiamMask++ achieves 62.6% accuracy, 26.2% robustness, and 39.8% EAO; compared to SiamMask, this is an improvement of 4.18%, 37.17%, and 23.99%, respectively. In addition, we present an in-depth experimental analysis of how much the features extracted from the backbone and the introduced multi modules affect the performance of our model in the VOT task.
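Depth-wise cross-correlation, the core operation named above, is commonly implemented with grouped convolution; the sketch below is a generic reimplementation rather than the SiamMask++ code, and the layer-wise fusion shown (a simple mean over per-stage responses) is an assumption.

```python
# Hedged sketch: depth-wise cross-correlation between template and search features,
# as used in Siamese trackers, plus an assumed simple fusion across backbone stages.
import torch
import torch.nn.functional as F

def depthwise_xcorr(search: torch.Tensor, kernel: torch.Tensor) -> torch.Tensor:
    """search: (B, C, Hs, Ws) search-region features; kernel: (B, C, Hk, Wk) template features."""
    b, c, hk, wk = kernel.shape
    search = search.reshape(1, b * c, search.size(2), search.size(3))
    kernel = kernel.reshape(b * c, 1, hk, wk)
    out = F.conv2d(search, kernel, groups=b * c)      # correlate each channel independently
    return out.reshape(b, c, out.size(2), out.size(3))

# Layer-wise aggregation (assumed form): correlate features from several backbone stages
# and fuse the per-stage response maps, e.g. by averaging or learnable 1x1 convolutions.
search_feats = [torch.randn(1, 256, 31, 31) for _ in range(3)]
template_feats = [torch.randn(1, 256, 7, 7) for _ in range(3)]
responses = [depthwise_xcorr(s, k) for s, k in zip(search_feats, template_feats)]
fused = torch.stack(responses).mean(0)                # (1, 256, 25, 25)
```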

Keywords: visual object tracking, video, deep learning, layer wise aggregation, Siamese network

Procedia PDF Downloads 149
11077 The Outcome of Using Machine Learning in Medical Imaging

Authors: Adel Edwar Waheeb Louka

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to improve the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, however, X-rays have not been widely used to detect and diagnose COVID-19, mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. Research in this field has nevertheless suggested that artificial neural networks can diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database, which includes images and masks of chest X-rays under the labels COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4,035 images and validated on 807 separate images; the training images are cropped beforehand to eliminate distractions. The image segmentation model uses an improved U-Net architecture to extract the lung mask from the chest X-ray image; it is trained on 8,577 images and validated on a 20% validation split. The models are evaluated on an external dataset, and their accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to improve the experience of medical professionals and provide insight into the future of the methods used.
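A hedged sketch of the transfer-learning classifier described above follows; it uses a frozen DenseNet201 backbone with a small three-class head, while the autoencoder stage and the exact head sizes are omitted or assumed.

```python
# Hedged sketch: DenseNet201 transfer learning for three-class chest X-ray classification.
# Head sizes, optimizer, and the omission of the autoencoder stage are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

base = keras.applications.DenseNet201(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                      # use the pre-trained backbone as a feature extractor

model = keras.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(3, activation="softmax"),  # COVID-19 / normal / pneumonia
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# Assumed usage: model.fit(train_ds, validation_data=val_ds, epochs=15)
```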

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 61
11076 How to Modernise the European Competition Network (ECN)

Authors: Dorota Galeza

Abstract:

This paper argues that networks, such as the ECN and the American network, are affected by certain small events which are inherent to path dependence and preclude full evolution towards efficiency. It is advocated that the American network is superior to the ECN in many respects due to its greater flexibility and longer history. This stems in particular from the creation of the American network, which was based on a small number of cases; such a structure encourages further changes and modifications which are not necessarily radical. The ECN, by contrast, was established by legislative action, which explains its rigid structure and resistance to change. This paper is an attempt to transpose the superiority of the American network onto the ECN. It looks at concepts such as judicial cooperation, harmonisation of procedure, peer review, regulatory impact assessments (RIAs), and dispute resolution procedures.

Keywords: antitrust, competition, networks, path dependence

Procedia PDF Downloads 309
11075 Services-Oriented Model for the Regulation of Learning

Authors: Mohamed Bendahmane, Brahim Elfalaki, Mohammed Benattou

Abstract:

One of the major sources of learners' professional difficulties is their heterogeneity. Whether on the cognitive, social, cultural, or emotional level, learners who are part of the same group differ in many ways. These differences make it impossible to apply the same learning process to all learners; an optimal learning path for one is not necessarily optimal for another. We present in this paper a service-oriented model that offers each learner a personalized learning path for acquiring the targeted skills.

Keywords: learning path, web service, trace analysis, personalization

Procedia PDF Downloads 350
11074 Faculty Members' Acceptance of Mobile Learning in Kingdom of Saudi Arabia: Case Study of a Saudi University

Authors: Omran Alharbi

Abstract:

It is difficult to find an aspect of our modern lives that has been untouched by mobile technology. Indeed, the use of mobile learning in Saudi Arabia may enhance students’ learning and increase overall educational standards. However, within tertiary education, the success of e-learning implementation depends on the degree to which students and educators accept mobile learning and are willing to utilise it. Therefore, this research targeted the factors that influence Hail University instructors’ intentions to use mobile learning. An online survey was completed by eighty instructors, and it was found that their use of mobile learning was strongly predicted by performance expectancy, effort expectancy, social influence, and facilitating conditions; the multiple regression analysis revealed that 67% of the variation was accounted for by these variables. Among these variables, effort expectancy was shown to be the strongest predictor of instructors' intention to use e-learning.

Keywords: acceptance, faculty member, mobile learning, KSA

Procedia PDF Downloads 149
11073 A Neural Network Classifier for Identifying Duplicate Image Entries in Real-Estate Databases

Authors: Sergey Ermolin, Olga Ermolin

Abstract:

A deep convolutional neural network with triplet loss is used to identify duplicate images in real-estate advertisements in the presence of image artifacts such as watermarking, cropping, hue/brightness adjustment, and others. The effects of batch normalization, spatial dropout, and various convergence methodologies on the resulting detection accuracy are discussed. For a comparative return-on-investment study (per industry request), end-to-end performance is benchmarked on both Nvidia Titan GPUs and Intel Xeon CPUs. A new real-estate dataset from the San Francisco Bay Area is used for this work. Sufficient duplicate detection accuracy is achieved to supplement other database-grounded methods of duplicate removal. The implemented method is used in a proof-of-concept project in the real-estate industry.
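A generic sketch of training an embedding with triplet loss for duplicate detection is shown below; the backbone, margin, and distance threshold are assumptions, and inputs are assumed to be batched tensors of shape (1, 3, 224, 224). It is not the authors' production model.

```python
# Hedged sketch: learn an image embedding with triplet loss, then flag duplicates by
# thresholding embedding distance. Backbone choice, margin, and threshold are assumptions.
import torch
import torch.nn as nn
import torchvision

embed_dim = 128
backbone = torchvision.models.resnet18(weights=None)       # recent torchvision API
backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
criterion = nn.TripletMarginLoss(margin=0.5)
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

def train_step(anchor, positive, negative):
    # anchor/positive: two variants of the same listing photo (crop, watermark, hue shift);
    # negative: a photo of a different property. All are (N, 3, 224, 224) tensors.
    optimizer.zero_grad()
    loss = criterion(backbone(anchor), backbone(positive), backbone(negative))
    loss.backward()
    optimizer.step()
    return loss.item()

def is_duplicate(img_a, img_b, threshold=0.7):
    with torch.no_grad():
        return torch.dist(backbone(img_a), backbone(img_b)).item() < threshold
```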

Keywords: visual recognition, convolutional neural networks, triplet loss, spatial batch normalization with dropout, duplicate removal, advertisement technologies, performance benchmarking

Procedia PDF Downloads 332
11072 Teaching Professional Competences through Projects: Experiencing Curriculum Development through Active Learning

Authors: Flavio Campos, Patricia Masmo, Fernanda Yamamoto

Abstract:

This report presents research on teaching professional competencies through projects, considering the student as an active learner, and on curriculum development. Drawing on project-based learning, the report articulates the results of research about curriculum development for professional competencies and about teaching-learning strategies that help develop professional competencies in the learning environments of the courses of the National Learning Service in São Paulo, Brazil. It thus intends to present the fundamentals for elaborating a curriculum for these learning environments, specifically regarding teaching methodologies that enrich the student learning process through projects. The practice, in place since 2013, indicates the need to rethink knowledge and practice in courses that prepare students for work.

Keywords: curriculum design, active learning, professional competencies, project based-learning

Procedia PDF Downloads 422
11071 Enhanced Constraint-Based Optical Network (ECON) for Enhancing OSNR

Authors: G. R. Kavitha, T. S. Indumathi

Abstract:

With the constantly rising demands of multimedia services, the requirements of long-haul transport networks are constantly changing in the area of optical networking. Maximizing data transmission through optimization of the communication channel poses the biggest challenge. Although there has been a constant focus on this area over the past decade, there is little evidence of significant results. Hence, after reviewing some potential optical network designs from the literature, it was understood that the optical signal-to-noise ratio (OSNR) is one of the elementary attributes that define the performance of an optical network. In this paper, we propose a framework termed ECON (Enhanced Constraint-based Optical Network) that primarily optimizes the optical signal-to-noise ratio using ROADMs. The simulation is performed in Matlab, and the optical signal-to-noise ratio is extracted considering the system matrix. The outcome of the proposed study shows an optimized OSNR compared to existing studies.
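For readers unfamiliar with the metric, the sketch below shows the basic OSNR bookkeeping such a framework relies on (the definition in dB and the cascade rule for multiple spans); it is illustrative only and not part of the ECON implementation, and the example span values are assumptions.

```python
# Hedged sketch: basic OSNR arithmetic. Per-span OSNR values are illustrative assumptions.
import math

def osnr_db(p_signal_mw, p_noise_mw):
    """OSNR in dB from signal and ASE noise power measured in the same reference bandwidth."""
    return 10 * math.log10(p_signal_mw / p_noise_mw)

def combine_osnr_db(per_span_osnr_db):
    """End-to-end OSNR of cascaded spans: noise contributions add, so inverse linear OSNRs sum."""
    inv_total = sum(1.0 / (10 ** (o / 10.0)) for o in per_span_osnr_db)
    return 10 * math.log10(1.0 / inv_total)

print(combine_osnr_db([28.0, 27.5, 26.0]))   # end-to-end OSNR (dB) after three assumed spans
```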

Keywords: component, optical network, reconfigurable optical add-drop multiplexer, optical signal-to-noise ratio

Procedia PDF Downloads 483
11070 A Proposed Algorithm for Obtaining the Map of Subscribers’ Density Distribution for a Mobile Wireless Communication Network

Authors: C. Temaneh-Nyah, F. A. Phiri, D. Karegeya

Abstract:

This paper presents an algorithm for obtaining the map of subscribers' density distribution for a mobile wireless communication network, based on actual subscriber traffic data obtained from the base stations. This is useful in the statistical characterization of the mobile wireless network.

Keywords: electromagnetic compatibility, statistical analysis, simulation of communication network, subscriber density

Procedia PDF Downloads 307
11069 A Semantic E-Learning and E-Assessment System of Learners

Authors: Wiem Ben Khalifa, Dalila Souilem, Mahmoud Neji

Abstract:

The evolution of the Social Web and the Semantic Web leads us to ask how to support the personalization of learning by means of intelligent filtering of the educational resources published on digital networks. We recommend personalized learning courses articulated around an initial educational course defined upstream. Considering the context and the stakes of personalization, we also suggest anchoring the personalization of learning in a community of interest within a group of learners enrolled in the same training. This reflection is supported by the presentation of an active, semantic learning system dedicated to the construction of personalized, tailor-made courses delivered in due time.

Keywords: Semantic Web, semantic system, ontology, evaluation, e-learning

Procedia PDF Downloads 327
11068 Ubiquitous Collaborative Learning Activities with Virtual Teams Using CPS Processes to Develop Creative Thinking and Collaboration Skills

Authors: Sitthichai Laisema, Panita Wannapiroon

Abstract:

This study is research and development intended to: 1) design ubiquitous collaborative learning activities with virtual teams using CPS processes to develop creative thinking and collaboration skills, and 2) assess the suitability of these ubiquitous collaborative learning activities. Its method is divided into two phases: phase 1 is the design of ubiquitous collaborative learning activities with virtual teams using CPS processes; phase 2 is the assessment of the suitability of the learning activities. The samples used in this study are five professionals in the fields of learning activity design, ubiquitous learning, information technology, creative thinking, and collaboration skills. The results showed that ubiquitous collaborative learning activities with virtual teams using CPS processes to develop creative thinking and collaboration skills consist of three main steps: 1) preparation before learning, 2) learning activity processing, and 3) performance appraisal. The professionals assessed the suitability of the learning activities at the highest level.

Keywords: ubiquitous learning, collaborative learning, virtual team, creative problem solving

Procedia PDF Downloads 507
11067 Automatic Adult Age Estimation Using Deep Learning of the ResNeXt Model Based on CT Reconstruction Images of the Costal Cartilage

Authors: Ting Lu, Ya-Ru Diao, Fei Fan, Ye Xue, Lei Shi, Xian-e Tang, Meng-jun Zhan, Zhen-hua Deng

Abstract:

Accurate adult age estimation (AAE) is a significant and challenging task in the forensic and archeology fields. Attempts have been made to explore optimal adult age metrics, and the rib is considered a potential age marker. The traditional way is to extract age-related features designed by experts from macroscopic or radiological images, followed by classification or regression analysis. These results still have not met the high-level requirements for practice, and the limitation of feature design and manual extraction methods is loss of information, since the features are likely not designed explicitly for extracting information relevant to age. Deep learning (DL) has recently garnered much interest in imaging and computer vision. It enables learning features that are important without a prior bias or hypothesis and could be supportive of AAE. This study aimed to develop DL models for AAE based on CT images and compare their performance to the manual visual scoring method. Chest CT data were reconstructed using volume rendering (VR). Retrospective data of 2500 patients aged 20.00-69.99 years were obtained between December 2019 and September 2021. Five-fold cross-validation was performed, and datasets were randomly split into training and validation sets in a 4:1 ratio for each fold. Before feeding the inputs into networks, all images were augmented with random rotation and vertical flip, normalized, and resized to 224×224 pixels. ResNeXt was chosen as the DL baseline due to its advantages of higher efficiency and accuracy in image classification. Mean absolute error (MAE) was the primary parameter. Independent data from 100 patients acquired between March and April 2022 were used as a test set. The manual method completely followed the prior study, which reported the lowest MAEs (5.31 in males and 6.72 in females) among similar studies. CT data and VR images were used. The radiation density of the first costal cartilage was recorded using CT data on the workstation. The osseous and calcified projections of the first to seventh costal cartilages were scored based on VR images using an eight-stage staging technique. According to the results of the prior study, the optimal models were the decision tree regression model in males and the stepwise multiple linear regression equation in females. Predicted ages of the test set were calculated separately using different models by sex. A total of 2600 patients (training and validation sets, mean age=45.19 years±14.20 [SD]; test set, mean age=46.57±9.66) were evaluated in this study. During ResNeXt model training, MAEs of 3.95 in males and 3.65 in females were obtained. On the test set, DL achieved MAEs of 4.05 in males and 4.54 in females, which were far better than the manual method's MAEs of 8.90 and 6.42, respectively. These results showed that the ResNeXt DL model outperformed the manual method in AAE based on CT reconstruction of the costal cartilage, and the developed system may be a supportive tool for AAE.
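A hedged sketch of an age-regression setup with a ResNeXt backbone follows; the specific ResNeXt variant, pretrained weights, optimizer, and the treatment of AAE as direct regression with an L1 (MAE) objective are assumptions rather than the authors' exact configuration, although the augmentations mirror those described above.

```python
# Hedged sketch: ResNeXt backbone fine-tuned to regress age from 224x224 reconstruction images.
# Model variant, weights string (recent torchvision), and optimizer settings are assumptions.
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms

transform = transforms.Compose([
    transforms.RandomRotation(10),
    transforms.RandomVerticalFlip(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = torchvision.models.resnext50_32x4d(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 1)         # single continuous output: age in years
criterion = nn.L1Loss()                                # optimizes MAE directly
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, ages):
    # images: (N, 3, 224, 224) transformed batch; ages: (N,) float tensor of chronological ages
    optimizer.zero_grad()
    loss = criterion(model(images).squeeze(1), ages)   # MAE between predicted and true ages
    loss.backward()
    optimizer.step()
    return loss.item()
```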

Keywords: forensic anthropology, age determination by the skeleton, costal cartilage, CT, deep learning

Procedia PDF Downloads 67
11066 Using Machine Learning to Classify Different Body Parts and Determine Healthiness

Authors: Zachary Pan

Abstract:

Our general mission is to solve the problem of classifying images into different body part types and deciding whether each of them is healthy. For now, however, we determine healthiness for only one-sixth of the body parts, specifically the chest, by detecting pneumonia in X-ray scans of those chest images. With this type of AI, doctors can use it as a second opinion when they are taking CT or X-ray scans of their patients. Another advantage of using this machine learning classifier is that it has no human weaknesses like fatigue. The overall approach to this problem is to split it into two parts: first, classify the image, then determine if it is healthy. In order to classify the image into a specific body part class, the body parts dataset must be split into test and training sets. We can then use many models, like neural networks or logistic regression models, and fit them using the training set. Using the test set, we can then obtain a realistic estimate of the accuracy the models will have on images in the real world, since these testing images have never been seen by the models before. In order to increase this testing accuracy, we can also apply algorithms such as multiplicative weight update to the models. For the second part of the problem, determining whether the body part is healthy, we can have another dataset consisting of healthy and non-healthy images of the specific body part and once again split it into test and training sets. We then use another neural network to train on those training images and use the testing set to estimate its accuracy. We do this process only for the chest images. A major conclusion reached is that convolutional neural networks are the most reliable and accurate at image classification. In classifying the images, the logistic regression model, the neural network, the neural network with multiplicative weight update, the neural network with the black box algorithm, and the convolutional neural network achieved 96.83 percent, 97.33 percent, 97.83 percent, 96.67 percent, and 98.83 percent accuracy, respectively. On the other hand, the overall accuracy of the model that determines whether the images are healthy is around 78.37 percent.
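The multiplicative-weights idea mentioned above can be illustrated as follows for an ensemble of classifiers; this is a generic sketch, not the authors' exact algorithm, and the learning rate, number of rounds, and scikit-learn-style model interface are assumptions.

```python
# Hedged sketch: multiplicative weights over an ensemble of trained classifiers.
# Each model's weight is shrunk multiplicatively whenever it misclassifies a validation sample.
import numpy as np

def multiplicative_weights(models, X_val, y_val, eta=0.1, rounds=5):
    w = np.ones(len(models))
    for _ in range(rounds):
        for x, y in zip(X_val, y_val):
            losses = np.array([0.0 if m.predict(x.reshape(1, -1))[0] == y else 1.0
                               for m in models])
            w *= (1.0 - eta * losses)           # penalize the experts that were wrong
        w /= w.sum()
    return w                                     # final weights for weighted-vote prediction

def weighted_vote(models, w, x, classes):
    scores = {c: 0.0 for c in classes}
    for m, wi in zip(models, w):
        scores[m.predict(x.reshape(1, -1))[0]] += wi
    return max(scores, key=scores.get)
```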

Keywords: body part, healthcare, machine learning, neural networks

Procedia PDF Downloads 95
11065 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

Over recent years, with the rapid increase in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of different industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited by a particular challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, in comparison with other industries they are adopted less frequently in commercial banking, especially for scoring purposes. This is because Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model has been developed at Dun and Bradstreet that focuses on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce the observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards with sparse cases that cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results of the analysis show that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concerns about the difficulty of explaining the models for regulatory purposes.
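A sketch of the Weight-of-Evidence step described above follows; it bins a score (which, in the hybrid setting, would be the ML model's output) and computes per-bin WoE and information-value contributions. It is an illustration with assumed smoothing and bin count, not the Dun and Bradstreet implementation.

```python
# Hedged sketch: per-bin Weight of Evidence (WoE) and information value from a binned score.
# The quantile binning, smoothing constant, and bin count are illustrative assumptions.
import numpy as np
import pandas as pd

def woe_table(score, is_bad, n_bins=10):
    df = pd.DataFrame({"score": score, "bad": is_bad})
    df["bin"] = pd.qcut(df["score"], q=n_bins, duplicates="drop")
    grouped = df.groupby("bin", observed=True)["bad"].agg(["count", "sum"])
    bad = grouped["sum"]
    good = grouped["count"] - grouped["sum"]
    dist_good = (good + 0.5) / (good.sum() + 0.5)      # smoothing avoids log(0) in sparse bins
    dist_bad = (bad + 0.5) / (bad.sum() + 0.5)
    grouped["woe"] = np.log(dist_good / dist_bad)
    grouped["iv_contrib"] = (dist_good - dist_bad) * grouped["woe"]
    return grouped

# In the hybrid setting, 'score' would be the ML model's output, so the resulting per-bin
# WoE approximates the (possibly non-linear) relationship the ML model has learned.
```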

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 130
11064 The Design and Applied of Learning Management System via Social Media on Internet: Case Study of Operating System for Business Subject

Authors: Pimploi Tirastittam, Sawanath Treesathon, Amornrath Ongkawat

Abstract:

A Learning Management System (LMS) is a system used to manage learning by grouping content and learning activities between the lecturer and learners, including online examination and evaluation. Nowadays, in the borderless learning era, learning activities can be accessed from everywhere in the world and at any time via information technology and media. Learners can easily access knowledge, so differences in time and distance are no longer a constraint for learning. The learning pattern used in this research is the integration of in-class learning and online learning via the Internet, with progress monitored by the learning management system, which creates a fast-response and accessible learning process via social media. In order to increase the capability and freedom of the learner, the system can show current and historical learning documents and video conferences, and it also has a chat room for the learner and lecturer to interact with each other. The objectives of “The Design and Applied of Learning Management System via Social Media on Internet: Case Study of Operating System for Business Subject” are therefore to expand the opportunity for learning, to increase the efficiency of learning, and to increase the communication channels between lecturer and student. The data for this research were collected from 30 users of the system, all students enrolled in the subject. The result of the research is at the “Very Good” level, which conforms to the hypothesis.

Keywords: Learning Management System, social media, Operating System, information technology

Procedia PDF Downloads 350
11063 Analyzing the Quality of Cloud-Based E-Learning Systems on the Perception of the Learners and the Teachers

Authors: R. W. C. Devindi, S. M. Buddika Harshanath

Abstract:

E-learning is a widely used technology for learning in the modern world, and its popularity has increased considerably with the pandemic situation. E-learning systems usually require both software and hardware resources, which are hard for most educational institutions to afford; with a massive user load, e-learning also has to broaden its server-side resources. Therefore, cloud computing has been adopted in order to make e-learning systems more efficient. The researcher has analyzed the quality of cloud-based e-learning systems from the perception of learners and teachers with the aid of hypothesis testing, and presents the analyzed results and discussion in this report, so that future research can take steps to further increase the quality of online learning systems. In the case of e-learning, quality assurance and cost effectiveness are essential, and a complex quality assurance system is used in the stated project. There are no well-defined standard evaluation measures in this field; as a result, accurately assessing an e-learning system's overall quality is challenging. The researcher has done the analysis with the aid of standard methods and software.

Keywords: LMS–learning management system, SPSS–statistical package for social sciences (software), eigen value, hypothesis

Procedia PDF Downloads 103
11062 Songwriting in the Postdigital Age: Using TikTok and Instagram as Online Informal Learning Technologies

Authors: Matthias Haenisch, Marc Godau, Julia Barreiro, Dominik Maxelon

Abstract:

In times of ubiquitous digitalization and the increasing entanglement of humans and technologies in musical practices in the 21st century, the question is how popular musicians learn in the (post)digital age. Against the backdrop of increasing interest in transferring informal learning practices into formal settings of music education, the interdisciplinary research association »MusCoDA – Musical Communities in the (Post)Digital Age« (University of Erfurt/University of Applied Sciences Clara Hoffbauer Potsdam, funded by the German Ministry of Education and Research) pursues the goal of deriving an empirical model of collective songwriting practices from the study of the informal learning of songwriters and bands that can be translated into pedagogical concepts for music education in schools. Drawing on concepts from Communities of Musical Practice and Actor-Network Theory, learning is considered not only as social practice and as participation in online and offline communities, but also as an effect of heterogeneous networks composed of human and non-human actors. Learning is not seen as an individual, cognitive process, but as the formation and transformation of actor networks, i.e., as a practice of assembling and mediating humans and technologies. Based on video-stimulated recall interviews and videography of online and offline activities, songwriting practices are followed from the initial idea to different forms of performance and distribution. The data evaluation combines coding and mapping methods from Grounded Theory Methodology and Situational Analysis. This results in network maps in which both the temporality of creative practices and the material and spatial relations of human and technological actors are reconstructed. In addition, positional analyses document the power relations between the participants that structure the learning process of the field. In the area of online informal learning, initial key research findings reveal a transformation of the learning subject through the specific technological affordances of TikTok and Instagram and the accompanying changes in the learning practices of the corresponding online communities. Learning is explicitly shaped by the material agency of online tools and features and by the social practices entangled with these technologies. Thus, any human online community member can be invited to intervene directly in creative decisions that contribute to the further compositional and structural development of songs. At the same time, participants can provide each other with intimate insights into songwriting processes in progress and have the opportunity to perform together with strangers and idols. Online learning is characterized by an increase in social proximity, distribution of creative agency, and informational exchange between participants. While it seems obvious that traditional notions not only of learning but also of the learning subject cannot be maintained, the question arises how exactly the observed informal learning practices, and the subject that emerges from the use of social media as online learning technologies, can be transferred into contexts of formal learning.

Keywords: informal learning, postdigitality, songwriting, actor-network theory, community of musical practice, social media, TikTok, Instagram, apps

Procedia PDF Downloads 121
11061 When Ideological Intervention Backfires: The Case of the Iranian Clerical System’s Intervention in the Pandemic-Era Elementary Education

Authors: Hasti Ebrahimi

Abstract:

This study sheds light on the challenges and difficulties caused by the Iranian clerical system’s intervention in the country’s school education during the COVID-19 pandemic, when schools remained closed for almost two years. The pandemic brought Iranian elementary school education to a standstill for almost 6 months before the country developed a nationwide learning platform – a customized television network. While the initiative seemed to have been welcomed by the majority of Iranian parents, it resented some of the more traditional strata of the society, including the influential Friday Prayer Leaders who found the televised version of the elementary education ‘less spiritual’ and ‘more ‘material’ or science-based. That prompted the Iranian Channel of Education, the specialized television network that had been chosen to serve as a nationally televised school during the pandemic, to try to redefine much of its online elementary school educational content within the religious ideology of the Islamic Republic of Iran. As a result, young clergies appeared on the television screen as preachers of Islamic morality, religious themes and even sociology, history, and arts. The present research delves into the consequences of such an intervention, how it might have impacted the infrastructure of Iranian elementary education and whether or not the new ideology-infused curricula would withstand the opposition of students and mainstream teachers. The main methodology used in this study is Critical Discourse Analysis with a cognitive approach. It systematically finds and analyzes the alternative ideological structures of discourse in the Iranian Channel of Education from September 2021 to July 2022, when the clergy ‘teachers’ replaced ‘regular’ history and arts teachers on the television screen for the first time. It has aimed to assess how the various uses of the alternative ideological discourse in elementary school content have influenced the processes of learning: the acquisition of knowledge, beliefs, opinions, attitudes, abilities, and other cognitive and emotional changes, which are the goals of institutional education. This study has been an effort aimed at understanding and perhaps clarifying the relationships between the traditional textual structures and processing on the one hand and socio-cultural contexts created by the clergy teachers on the other. This analysis shows how the clerical portion of elementary education on the Channel of Education that seemed to have dominated the entire televised teaching and learning process faded away as the pandemic was contained and mainstream classes were restored. It nevertheless reflects the deep ideological rifts between the clerical approach to school education and the mainstream teaching process in Iranian schools. The semantic macrostructures of social content in the current Iranian elementary school education, this study suggests, have remained intact despite the temporary ideological intervention of the ruling clerical elite in their formulation and presentation. Finally, using thematic and schematic frameworks, the essay suggests that the ‘clerical’ social content taught on the Channel of Education during the pandemic cannot have been accepted cognitively by the channel’s target audience, including students and mainstream teachers.

Keywords: televised elementary school learning, Covid 19, critical discourse analysis, Iranian clerical ideology

Procedia PDF Downloads 47