Search results for: deep learning model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22514

22064 Students’ Learning Effects in Physical Education between Sport Education Model with TPSR and Traditional Teaching Model with TPSR

Authors: Yi-Hsiang Pan, Chen-Hui Huang, Ching-Hsiang Chen, Wei-Ting Hsu

Abstract:

The purpose of the study was to compare students' learning effects in physical education between a curriculum merging Teaching Personal and Social Responsibility (TPSR) with the sport education model and one merging TPSR with a traditional teaching model. The learning effects examined included sport self-efficacy, sport enthusiasm, group cohesion, responsibility, and game performance. The participants comprised 3 high school physical education teachers and 6 physical education classes with 133 students (75 in the experimental group and 58 in the control group); each teacher taught an experimental group and a control group for 16 weeks. The research methods included questionnaire investigation, interviews, and focus group meetings. The research instruments included a personal and social responsibility questionnaire, a sport enthusiasm scale, a group cohesion scale, a sport self-efficacy scale, and a game performance assessment instrument. Multivariate analysis of covariance and repeated-measures ANOVA were used to test differences in students' learning effects between the TPSR-sport education model and the TPSR-traditional teaching model. The findings were: 1) The sport education model with TPSR improved students' learning effects, including sport self-efficacy, game performance, sport enthusiasm, group cohesion, and responsibility. 2) The traditional teaching model with TPSR improved students' learning effects, including sport self-efficacy, responsibility, and game performance. 3) The sport education model with TPSR improved more learning effects than the traditional teaching model with TPSR, including sport self-efficacy, sport enthusiasm, responsibility, and game performance. 4) Based on qualitative data about the learning experiences of teachers and students, the sport education model with TPSR significantly improved learning motivation, group interaction, and game sense. The conclusions indicated that the sport education model with TPSR improves more learning effects in the physical education curriculum. On the other hand, the hybrid TPSR-Sport Education model and the TPSR-Traditional Teaching model are both sound curricular projects for moral character education and may be applied in school physical education.

Keywords: character education, sport season, game performance, sport competence

Procedia PDF Downloads 429
22063 CyberSteer: Cyber-Human Approach for Safely Shaping Autonomous Robotic Behavior to Comply with Human Intention

Authors: Vinicius G. Goecks, Gregory M. Gremillion, William D. Nothwang

Abstract:

Modern approaches to train intelligent agents rely on prolonged training sessions, high amounts of input data, and multiple interactions with the environment. This restricts the application of these learning algorithms in robotics and real-world applications, in which there is low tolerance for inadequate actions, interactions are expensive, and real-time processing and action are required. This paper addresses this issue by introducing CyberSteer, a novel approach to efficiently design intrinsic reward functions based on human intention to guide deep reinforcement learning agents with no environment-dependent rewards. CyberSteer uses non-expert human operators for the initial demonstration of a given task or desired behavior. The trajectories collected are used to train a behavior cloning deep neural network that asynchronously runs in the background and suggests actions to the deep reinforcement learning module. An intrinsic reward is computed based on the similarity between actions suggested and taken by the deep reinforcement learning algorithm commanding the agent. This intrinsic reward can also be reshaped through additional human demonstration or critique. This approach removes the need for environment-dependent or hand-engineered rewards while still being able to safely shape the behavior of autonomous robotic agents, in this case, based on human intention. CyberSteer is tested in a high-fidelity unmanned aerial vehicle simulation environment, Microsoft AirSim. The simulated aerial robot performs collision avoidance through a cluttered forest environment using forward-looking depth sensing and roll, pitch, and yaw reference angle commands to the flight controller. This approach shows that the behavior of robotic systems can be shaped in a reduced amount of time when guided by a non-expert human who is only aware of the high-level goals of the task. Decreasing the amount of training time required and increasing safety during training maneuvers will allow for faster deployment of intelligent robotic agents in dynamic real-world applications.
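
The core of the intrinsic-reward idea can be illustrated with a minimal Python sketch (the similarity metric, the exponential shaping, and the scale parameter below are illustrative assumptions; the abstract does not specify them):

```python
import numpy as np

def intrinsic_reward(action_agent, action_cloned, scale=1.0):
    """Reward the RL agent for staying close to the action suggested by the
    behavior-cloning network. The negative-exponential-of-distance form is a
    hypothetical similarity measure, used here only for illustration."""
    action_agent = np.asarray(action_agent, dtype=float)
    action_cloned = np.asarray(action_cloned, dtype=float)
    distance = np.linalg.norm(action_agent - action_cloned)
    return float(np.exp(-scale * distance))  # 1.0 when identical, approaches 0 when far apart

# Example: roll/pitch/yaw reference commands from the RL policy vs. the cloned policy
print(intrinsic_reward([0.10, -0.05, 0.00], [0.12, -0.04, 0.01]))
```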

Keywords: human-robot interaction, intelligent robots, robot learning, semisupervised learning, unmanned aerial vehicles

Procedia PDF Downloads 246
22062 Numerical Evaluation of Deep Ground Settlement Induced by Groundwater Changes During Pumping and Recovery Test in Shanghai

Authors: Shuo Wang

Abstract:

The hydrogeological parameters of an engineering site and the hydraulic connection between aquifers can be obtained by a pumping test. Through a recovery test, the characteristics of water level recovery and the pattern of surface subsidence recovery can be understood. These two tests can provide the basis for subsequent engineering design. At present, the deformation of deep soil caused by pumping tests is often neglected. However, some studies have shown that the maximum settlement caused by groundwater drawdown is not necessarily at the surface but may occur in the deep soil. In addition, the pattern of settlement recovery of each soil layer in response to water level recovery is not clear. If a deformation-sensitive structure lies deep within the test site, safety accidents may occur. In this study, a pumping test and a recovery test of a confined aquifer in Shanghai are introduced, and the measured groundwater changes and surface subsidence are analyzed. In addition, a fluid-solid coupling model was established in ABAQUS based on Biot consolidation theory. The models are verified by comparing the computed and measured results. Further, the variation of the water level and the deformation of deep soil during pumping and recovery tests are discussed using the above model for different site conditions and at different times and locations. It is found that the maximum soil settlement caused by pumping in a confined aquifer is related to the permeability of the overlying aquitard and the pumping time. There is a lag between soil deformation and groundwater changes, and the recovery rate of settlement deformation of each soil layer caused by the rise of the water level is different. Finally, some possible research directions are proposed to provide new ideas for academic research in this field.
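
For reference, the coupled hydro-mechanical analysis mentioned above rests on Biot consolidation theory; one commonly cited textbook form of the governing equations (the notation below is generic and not taken from the paper) is:

```latex
% Equilibrium of the soil skeleton (displacements u_i, excess pore pressure p)
G\,\nabla^{2} u_i + (\lambda + G)\,\frac{\partial \varepsilon_v}{\partial x_i}
  - \alpha\,\frac{\partial p}{\partial x_i} = 0
% Mass balance of the pore fluid (consolidation / storage equation)
\alpha\,\frac{\partial \varepsilon_v}{\partial t} + \frac{1}{M}\,\frac{\partial p}{\partial t}
  = \frac{k}{\gamma_w}\,\nabla^{2} p
```

Here u_i are the skeleton displacements, ε_v the volumetric strain, G and λ the Lamé constants, α the Biot coefficient, M the Biot modulus, k the permeability, and γ_w the unit weight of water.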

Keywords: coupled hydro-mechanical analysis, deep ground settlement, numerical simulation, pumping test, recovery test

Procedia PDF Downloads 24
22061 AI-Driven Forecasting Models for Anticipating Oil Market Trends and Demand

Authors: Gaurav Kumar Sinha

Abstract:

The volatility of the oil market, influenced by geopolitical, economic, and environmental factors, presents significant challenges for stakeholders in predicting trends and demand. This article explores the application of artificial intelligence (AI) in developing robust forecasting models to anticipate changes in the oil market more accurately. We delve into various AI techniques, including machine learning, deep learning, and time series analysis, that have been adapted to analyze historical data and current market conditions to forecast future trends. The study evaluates the effectiveness of these models in capturing complex patterns and dependencies in market data, which traditional forecasting methods often miss. Additionally, the paper discusses the integration of external variables such as political events, economic policies, and technological advancements that influence oil prices and demand. By leveraging AI, stakeholders can achieve a more nuanced understanding of market dynamics, enabling better strategic planning and risk management. The article concludes with a discussion on the potential of AI-driven models in enhancing the predictive accuracy of oil market forecasts and their implications for global economic planning and strategic resource allocation.
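
As a hedged illustration of the time-series side of such forecasting models (the abstract names no specific architecture; the window length, hidden size, and synthetic price series below are assumptions):

```python
import torch
import torch.nn as nn

class PriceLSTM(nn.Module):
    """Univariate LSTM forecaster: a window of past prices in, next price out.
    Hidden size and window length are illustrative assumptions."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, lookback, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # use the last time step's hidden state

# Synthetic stand-in for a historical oil price series; real inputs would also include
# the external variables discussed above (policy, geopolitical, and technology indicators).
prices = torch.cumsum(torch.randn(1000), dim=0)
lookback = 30
windows = torch.stack([prices[i:i + lookback] for i in range(len(prices) - lookback)])
X = windows.unsqueeze(-1)              # (970, 30, 1)
forecast = PriceLSTM()(X[-1:])         # one-step-ahead forecast for the latest window
print(forecast.shape)                  # torch.Size([1, 1])
```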

Keywords: AI forecasting, oil market trends, machine learning, deep learning, time series analysis, predictive analytics, economic factors, geopolitical influence, technological advancements, strategic planning

Procedia PDF Downloads 12
22060 Image Segmentation with Deep Learning of Prostate Cancer Bone Metastases on Computed Tomography

Authors: Joseph M. Rich, Vinay A. Duddalwar, Assad A. Oberai

Abstract:

Prostate adenocarcinoma is the most common cancer in males, with osseous metastases as the most common site of metastatic prostate carcinoma (mPC). Treatment monitoring is based on the evaluation and characterization of lesions on multiple imaging studies, including Computed Tomography (CT). Monitoring of the osseous disease burden, including follow-up of lesions and identification and characterization of new lesions, is a laborious task for radiologists. Deep learning algorithms are increasingly used to perform tasks such as identification and segmentation of osseous metastatic disease and provide accurate information regarding metastatic burden. Here, nnUNet was used to produce a model which can segment CT scan images of prostate adenocarcinoma vertebral bone metastatic lesions. nnUNet is an open-source Python package that adds optimizations to the deep learning-based U-Net architecture but has not been extensively combined with transfer learning techniques due to the absence of readily available functionality for this method. The IRB-approved study data set includes imaging studies from patients with mPC who were enrolled in clinical trials at the University of Southern California (USC) Health Science Campus and Los Angeles County (LAC)/USC medical center. Manual segmentation of metastatic lesions was completed by an expert radiologist, Dr. Vinay Duddalwar (20+ years in radiology and oncologic imaging), to serve as the ground truth for the automated segmentation. Despite nnUNet’s success on some medical segmentation tasks, it only produced an average Dice Similarity Coefficient (DSC) of 0.31 on the USC dataset. DSC results fell in a bimodal distribution, with most scores falling either over 0.66 (reasonably accurate) or at 0 (no lesion detected). Applying more aggressive data augmentation techniques dropped the DSC to 0.15, and reducing the number of epochs reduced the DSC to below 0.1. Datasets have been identified for transfer learning, which involves balancing dataset size against similarity. Identified datasets include the Pancreas data from the Medical Segmentation Decathlon, Pelvic Reference Data, and CT volumes with multiple organ segmentations (CT-ORG). Some of the challenges of producing an accurate model from the USC dataset include the small dataset size (115 images), 2D data (as nnUNet generally performs better on 3D data), and the limited amount of public data capturing annotated CT images of bone lesions. Optimizations and improvements will be made by applying transfer learning and generative methods, including incorporating generative adversarial networks and diffusion models in order to augment the dataset. Performance with different libraries, including MONAI and custom architectures with PyTorch, will be compared. In the future, molecular correlations will be tracked with radiologic features for the purpose of multimodal composite biomarker identification. Once validated, these models will be incorporated into evaluation workflows to optimize radiologist evaluation. Our work demonstrates the challenges of applying automated image segmentation to small medical datasets and lays a foundation for techniques to improve performance. As machine learning models become increasingly incorporated into the workflow of radiologists, these findings will help improve the speed and accuracy of vertebral metastatic lesion detection.
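
The Dice Similarity Coefficient used to score the segmentations can be computed directly from binary masks; a minimal sketch (numpy arrays are assumed as the mask format):

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice Similarity Coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|). Returns 1.0 for perfect overlap, 0.0 for none."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Toy example: two 4x4 masks overlapping on two pixels
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[2:4, 1:3] = True
print(round(dice_coefficient(a, b), 3))  # 0.5
```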

Keywords: deep learning, image segmentation, medicine, nnUNet, prostate carcinoma, radiomics

Procedia PDF Downloads 75
22059 Quality Assessment of New Zealand Mānuka Honeys Using Hyperspectral Imaging Combined with Deep 1D-Convolutional Neural Networks

Authors: Hien Thi Dieu Truong, Mahmoud Al-Sarayreh, Pullanagari Reddy, Marlon M. Reis, Richard Archer

Abstract:

New Zealand mānuka honey is a honeybee product derived mainly from Leptospermum scoparium nectar. The potent antibacterial activity of mānuka honey derives principally from methylglyoxal (MGO), in addition to the hydrogen peroxide and other lesser activities present in all honey. MGO is formed from dihydroxyacetone (DHA), which is unique to L. scoparium nectar. Mānuka honey also has an idiosyncratic phenolic profile that is useful as a chemical marker. Authentic mānuka honey is highly valuable, but almost all honey is formed from natural mixtures of nectars harvested by a hive over a time period. Once diluted by other nectars, mānuka honey irrevocably loses value. We aimed to apply hyperspectral imaging to honey frames before bulk extraction to minimise the dilution of genuine mānuka by other honey and ensure authenticity at the source. This technology is non-destructive and suitable for an industrial setting. Chemometrics using linear Partial Least Squares (PLS) and Support Vector Machine (SVM) showed limited efficacy in interpreting chemical footprints due to large non-linear relationships between predictor and predictand in a large sample set, likely due to honey quality variability across geographic regions. Therefore, an advanced modelling approach, one-dimensional convolutional neural networks (1D-CNN), was investigated for analysing hyperspectral data and extracting biochemical information from honey. The 1D-CNN model showed superior prediction of honey quality (R² = 0.73, RMSE = 2.346, RPD = 2.56) to PLS (R² = 0.66, RMSE = 2.607, RPD = 1.91) and SVM (R² = 0.67, RMSE = 2.559, RPD = 1.98). Classification of mono-floral mānuka honey from multi-floral and non-mānuka honey exceeded 90% accuracy for all models tried. Overall, this study reveals the potential of HSI and deep learning modelling for automating the evaluation of honey quality in frames.
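
A minimal sketch of a 1D-CNN regressor for per-pixel reflectance spectra is shown below (the number of bands, layer widths, and the regression target are illustrative assumptions, not the authors' architecture):

```python
import torch
import torch.nn as nn

class Spectral1DCNN(nn.Module):
    """Minimal 1D-CNN regressor for per-pixel reflectance spectra.
    The number of bands (224) and layer sizes are illustrative assumptions."""
    def __init__(self, n_bands=224):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_bands // 4), 64), nn.ReLU(),
            nn.Linear(64, 1),  # e.g. a quality indicator such as MGO or DHA concentration
        )

    def forward(self, x):              # x: (batch, 1, n_bands)
        return self.head(self.features(x))

spectra = torch.randn(8, 1, 224)       # 8 dummy spectra
print(Spectral1DCNN()(spectra).shape)  # torch.Size([8, 1])
```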

Keywords: mānuka honey, quality, purity, potency, deep learning, 1D-CNN, chemometrics

Procedia PDF Downloads 116
22058 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging

Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen

Abstract:

Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm to 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests where the protein value has been determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. The first dataset is used for protein regression analysis, while the second is used for variety classification analysis. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required for classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted. These results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested. These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
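
Two of the chemometric preprocessing steps named above, SNV and Savitzky-Golay filtering, can be sketched as follows (the window length and polynomial order are illustrative choices, not the settings used in the study):

```python
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra):
    """Standard Normal Variate: center and scale each spectrum individually."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def preprocess(spectra, window=11, polyorder=2):
    """SNV followed by Savitzky-Golay smoothing along the wavelength axis."""
    return savgol_filter(snv(spectra), window_length=window, polyorder=polyorder, axis=1)

X = np.random.rand(100, 200)   # 100 dummy spectra, 200 wavelength bins
X_prep = preprocess(X)
print(X_prep.shape)            # (100, 200)
```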

Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques

Procedia PDF Downloads 77
22057 Semantic Platform for Adaptive and Collaborative e-Learning

Authors: Massra M. Sabeima, Myriam Lamolle, Mohamedade Farouk Nanne

Abstract:

Adapting the learning resources of an e-learning system to the characteristics of the learners is an important aspect to consider when designing an adaptive e-learning system. However, this adaptation is not a simple process; it requires the extraction, analysis, and modeling of user information. This implies a good representation of the user's profile, which is the backbone of the adaptation process. Moreover, during the e-learning process, collaboration with similar users (same geographic province or knowledge context) is important. Productive collaboration motivates users to continue rather than abandon the course and increases the assimilation of learning objects. The contribution of this work is the following: we propose an adaptive e-learning semantic platform that recommends learning resources to learners, using an ontology to model the user profile and the course content. Furthermore, we implement a multi-agent system able to progressively generate the learning graph (taking into account the user's progress and the changes that occur) for each user during the learning process, and to synchronize the users who collaborate on a learning object.

Keywords: adaptive learning, collaboration, multi-agent, ontology

Procedia PDF Downloads 156
22056 IoT and Deep Learning Approach for Growth Stage Segregation and Harvest Time Prediction of Aquaponic and Vermiponic Swiss Chards

Authors: Praveen Chandramenon, Andrew Gascoyne, Fideline Tchuenbou-Magaia

Abstract:

Aquaponics offers a simple, conclusive solution to the world's food and environmental crises. This approach combines aquaculture (growing fish) with hydroponics (growing vegetables and plants without soil). Smart aquaponics explores the use of smart technology, including artificial intelligence and IoT, to assist farmers with better decision-making and online monitoring and control of the system. Identifying the different growth stages of Swiss chard plants and predicting their harvest time are important in aquaponic yield management. This paper presents a comparative analysis of a standard aquaponic system and a vermiponic system (aquaponics with worms), both grown in a controlled environment, implementing IoT and deep learning-based growth stage segregation and harvest time prediction of Swiss chards before and after applying an optimal freshwater replenishment. Data collection, growth stage classification, and harvest time prediction have been performed with and without water replenishment. The paper discusses the experimental design, the IoT and sensor communication architecture, the data collection process, image segmentation, the various regression and classification models, and the error estimation used in the project. The paper concludes with a comparison of the results, including the best-performing models for growth stage segregation and harvest time prediction on the aquaponic and vermiponic testbeds with and without freshwater replenishment.

Keywords: aquaponics, deep learning, internet of things, vermiponics

Procedia PDF Downloads 50
22055 Analysis of Biomarkers in Intractable Epileptogenic Brain Networks with Independent Component Analysis and Deep Learning Algorithms: A Comprehensive Framework for Scalable Seizure Prediction with Unimodal Neuroimaging Data in Pediatric Patients

Authors: Bliss Singhal

Abstract:

Epilepsy is a prevalent neurological disorder affecting approximately 50 million individuals worldwide and 1.2 million Americans. There exist millions of pediatric patients with intractable epilepsy, a condition in which seizures fail to come under control. The occurrence of seizures can result in physical injury, disorientation, unconsciousness, and additional symptoms that could impede children's ability to participate in everyday tasks. Predicting seizures can help parents and healthcare providers take precautions, prevent risky situations, and mentally prepare children to minimize the anxiety and nervousness associated with the uncertainty of a seizure. This research proposes a comprehensive framework to predict seizures in pediatric patients by evaluating machine learning algorithms on unimodal neuroimaging data consisting of electroencephalogram signals. Bandpass filtering and independent component analysis proved effective in reducing noise and artifacts in the dataset. The performance of various machine learning algorithms is evaluated on important metrics such as accuracy, precision, specificity, sensitivity, F1 score, and MCC. The results show that the deep learning algorithms are more successful in predicting seizures than logistic regression and k-nearest neighbors. The recurrent neural network (RNN) gave the highest precision and F1 score, long short-term memory (LSTM) outperformed the RNN in accuracy, and the convolutional neural network (CNN) achieved the highest specificity. This research has significant implications for healthcare providers in proactively managing seizure occurrence in pediatric patients, potentially transforming clinical practices and improving pediatric care.
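
The preprocessing pipeline of bandpass filtering followed by independent component analysis can be sketched as follows (the cutoff frequencies, sampling rate, and number of components are assumptions, not values from the study):

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

def bandpass(eeg, low=0.5, high=40.0, fs=256.0, order=4):
    """Zero-phase Butterworth bandpass; cutoffs and sampling rate are assumptions."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=1)

def ica_components(eeg, n_components=8):
    """Unmix channels into independent components (artifact removal would then
    zero out components dominated by eye blinks or muscle noise)."""
    ica = FastICA(n_components=n_components, random_state=0)
    return ica.fit_transform(eeg.T).T   # (components, samples)

eeg = np.random.randn(16, 2560)         # 16 channels, 10 s at 256 Hz (dummy data)
sources = ica_components(bandpass(eeg))
print(sources.shape)                    # (8, 2560)
```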

Keywords: intractable epilepsy, seizure, deep learning, prediction, electroencephalogram channels

Procedia PDF Downloads 62
22054 Detecting Memory-Related Gene Modules in sc/snRNA-seq Data by Deep Learning

Authors: Yong Chen

Abstract:

Understanding the detailed molecular mechanisms of memory formation in engram cells is one of the most fundamental questions in neuroscience. Recent single-cell RNA-seq (scRNA-seq) and single-nucleus RNA-seq (snRNA-seq) techniques have allowed us to explore the sparsely activated engram ensembles, enabling access to the molecular mechanisms that underlie experience-dependent memory formation and consolidation. However, the absence of specific and powerful computational methods to detect memory-related genes (modules) and their regulatory relationships in sc/snRNA-seq datasets has severely limited the analysis of underlying mechanisms and memory coding principles in mammalian brains. Here, we present a deep learning method named SCENTBOX to detect memory-related gene modules and causal regulatory relationships among them from sc/snRNA-seq datasets. SCENTBOX first constructs a co-differential expression gene network (CEGN) from case versus control sc/snRNA-seq datasets. It then detects highly correlated modules of differentially expressed genes (DEGs) in the CEGN. Deep network embedding and attention-based convolutional neural network strategies are employed to precisely detect regulatory relationships among DEGs in a module. We applied SCENTBOX to scRNA-seq datasets of TRAP;Ai14 mouse neurons with fear memory and detected not only known memory-related genes but also gene modules and potential causal regulations. Our results revealed novel regulations within an interesting module including Arc, Bdnf, Creb, Dusp1, Rgs4, and Btg2. Overall, our method provides a general computational tool for processing sc/snRNA-seq data from case versus control studies and for systematic investigation of fear-memory-related gene modules.

Keywords: sc/snRNA-seq, memory formation, deep learning, gene module, causal inference

Procedia PDF Downloads 105
22053 Leveraging Learning Analytics to Inform Learning Design in Higher Education

Authors: Mingming Jiang

Abstract:

This literature review aims to offer an overview of existing research on learning analytics and learning design, the alignment between the two, and how learning analytics has been leveraged to inform learning design in higher education. Current research suggests a need to create more alignment and integration between learning analytics and learning design in order to not only ground learning analytics on learning sciences but also enable data-driven decisions in learning design to improve learning outcomes. In addition, multiple conceptual frameworks have been proposed to enhance the synergy and alignment between learning analytics and learning design. Future research should explore this synergy further in the unique context of higher education, identifying learning analytics metrics in higher education that can offer insight into learning processes, evaluating the effect of learning analytics outcomes on learning design decision-making in higher education, and designing learning environments in higher education that make the capturing and deployment of learning analytics outcomes more efficient.

Keywords: learning analytics, learning design, big data in higher education, online learning environments

Procedia PDF Downloads 138
22052 Developing Learning in Organizations with Innovation Pedagogy Methods

Authors: T. Konst

Abstract:

Most jobs include training and communication tasks, but often the people in these jobs lack the pedagogical competences to plan, implement, and assess learning. This paper aims to discuss how a learning approach called innovation pedagogy, developed in higher education, can be utilized for learning development in various organizations. The methods presented for implementing innovation pedagogy, such as process consultation and the train-the-trainer model, can provide added value in developing pedagogical know-how in organizations and thus support their internal learning and development.

Keywords: innovation pedagogy, learning, organizational development, process consultation

Procedia PDF Downloads 345
22051 Numerical Modeling of Various Support Systems to Stabilize Deep Excavations

Authors: M. Abdallah

Abstract:

Urban development requires deep excavations near buildings and other structures. Deep excavation has become more of a necessity for better utilization of space as the world's population has dramatically increased. In Lebanon, some urban areas are very crowded and lack space for new buildings and underground projects, which makes the use of underground space indispensable. In this paper, numerical modeling is performed using the finite element method to study the soil-structure interaction of a deep excavation supported by a diaphragm wall in the case of nonlinear soil behavior. The study is focused on a comparison of the results obtained using different support systems. Furthermore, a parametric study is performed with respect to the distance of the adjacent structure from the excavation.

Keywords: deep excavation, ground anchors, soil-structure interaction, struts

Procedia PDF Downloads 391
22050 Social Collaborative Learning Model Based on Proactive Involvement to Promote the Global Merit Principle in Cultivating Youths' Morality

Authors: Wera Supa, Panita Wannapiroon

Abstract:

This paper reports on the design of a social collaborative learning model based on proactive involvement to promote the global merit principle in cultivating youths' morality. The research proceeds in two phases: the first is to design the social collaborative learning model based on proactive involvement to promote the global merit principle in cultivating youths' morality, and the second is to evaluate the model. The sample group in this study consists of 15 experts in proactive participation, the moral merit principle, and the cultivation of youths' morality, drawn from executives, lecturers, and professionals in information and communication technology, selected using the purposive sampling method. Data were analyzed using the arithmetic mean and standard deviation. The study found four significant factors in promoting hands-on collaboration under the global merit scheme in order to instill virtues in adolescents: 1) information and communication technology usage; 2) proactive involvement; 3) morality cultivation policy; and 4) the global merit principle. The experts agree that the social collaborative learning model based on proactive involvement is highly appropriate.

Keywords: social collaborative learning, proactive involvement, global merit principle, morality

Procedia PDF Downloads 369
22049 A U-Net Based Architecture for Fast and Accurate Diagram Extraction

Authors: Revoti Prasad Bora, Saurabh Yadav, Nikita Katyal

Abstract:

In the context of educational data mining, the use case of extracting information from images containing both text and diagrams is of high importance. Hence, document analysis requires the extraction of diagrams from such images so that the text and diagrams can be processed separately. To the authors' best knowledge, none of the many approaches for extracting tables, figures, etc., satisfies the need for real-time processing with high accuracy as required in multiple applications. In the education domain, diagrams can have varied characteristics, viz. line-based content such as geometric diagrams, chemical bonds, mathematical formulas, etc. There are two broad categories of approaches that try to solve similar problems: traditional computer vision-based approaches and deep learning approaches. The traditional computer vision-based approaches mainly leverage connected components and distance-transform-based processing and hence perform well only in very limited scenarios. The existing deep learning approaches leverage either YOLO or faster-RCNN architectures. These approaches suffer from a performance-accuracy tradeoff. This paper proposes a U-Net based architecture that formulates diagram extraction as a segmentation problem. The proposed method provides similar accuracy with a much faster extraction time compared to the aforementioned state-of-the-art approaches. Further, the segmentation mask in this approach allows the extraction of diagrams of irregular shapes.
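
A minimal two-level U-Net style encoder-decoder of the kind described above might look as follows (channel widths and depth are illustrative assumptions, not the proposed architecture):

```python
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level U-Net producing a 1-channel diagram mask; channel widths are
    illustrative, not the ones used in the paper."""
    def __init__(self):
        super().__init__()
        self.enc1 = double_conv(1, 16)
        self.enc2 = double_conv(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec1 = double_conv(32, 16)          # 16 (skip) + 16 (upsampled)
        self.out = nn.Conv2d(16, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                        # (B, 16, H, W)
        e2 = self.enc2(self.pool(e1))            # (B, 32, H/2, W/2)
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return torch.sigmoid(self.out(d1))       # per-pixel diagram probability

page = torch.randn(1, 1, 128, 128)               # dummy grayscale page crop
print(TinyUNet()(page).shape)                    # torch.Size([1, 1, 128, 128])
```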

Keywords: computer vision, deep-learning, educational data mining, faster-RCNN, figure extraction, image segmentation, real-time document analysis, text extraction, U-Net, YOLO

Procedia PDF Downloads 110
22048 Nonparametric Sieve Estimation with Dependent Data: Application to Deep Neural Networks

Authors: Chad Brown

Abstract:

This paper establishes general conditions for the convergence rates of nonparametric sieve estimators with dependent data. We present two key results: one for nonstationary data and another for stationary mixing data. Previous theoretical results often lack practical applicability to deep neural networks (DNNs). Using these conditions, we derive convergence rates for DNN sieve estimators in nonparametric regression settings with both nonstationary and stationary mixing data. The DNN architectures considered adhere to current industry standards, featuring fully connected feedforward networks with rectified linear unit activation functions, unbounded weights, and a width and depth that grow with sample size.
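
The kind of sieve architecture described, a fully connected ReLU network whose size grows with the sample size, can be sketched as follows (the n^(1/3) width growth is purely illustrative, not the rate derived in the paper):

```python
import torch
import torch.nn as nn

def sieve_dnn(n_samples, input_dim, depth=3):
    """Fully connected ReLU network whose width grows with the sample size n.
    The growth rate used here is an illustrative assumption."""
    width = max(8, int(n_samples ** (1 / 3)))
    layers, d = [], input_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, 1))   # unbounded weights, as in the architecture described
    return nn.Sequential(*layers)

print(sieve_dnn(n_samples=1000, input_dim=5))
```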

Keywords: sieve extremum estimates, nonparametric estimation, deep learning, neural networks, rectified linear unit, nonstationary processes

Procedia PDF Downloads 12
22047 Predicting Shortage of Hospital Beds during COVID-19 Pandemic in the United States

Authors: Saba Ebrahimi, Saeed Ahmadian, Hedie Ashrafi

Abstract:

The worldwide spread of the coronavirus has heightened concern about planning for the excess demand for hospital services in response to the COVID-19 pandemic. A surge in demand for hospital services beyond current capacity leads to shortages of ICU beds and ventilators in some parts of the US. In this study, we forecast the required number of hospital beds and the possible shortage of beds in the US during the COVID-19 pandemic, to be used in the planning and hospitalization of new cases. We used data on COVID-19 deaths and patient hospitalizations, along with data on hospital capacities and utilization in the US, from publicly available sources and national government websites. We used a novel ensemble of deep learning networks, based on stacking different linear and non-linear layers, to predict the shortage in hospital beds. The results showed that our proposed approach can predict the excess demand for hospital beds very well, and this can be helpful in developing strategies and plans to mitigate this gap.
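
A hedged sketch of stacking linear and non-linear learners for bed-demand regression is given below (scikit-learn is used as a stand-in; the actual composition of the authors' ensemble is not specified in the abstract, and the features are dummy placeholders):

```python
import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor

# Dummy stand-ins for the real features (deaths, hospitalizations, capacity, utilization)
X = np.random.rand(300, 4)
y = X @ np.array([120.0, 300.0, -80.0, 50.0]) + np.random.randn(300) * 5

# Stack a linear learner and a non-linear (neural network) learner, in the spirit of
# the ensemble described above.
stack = StackingRegressor(
    estimators=[("linear", Ridge(alpha=1.0)),
                ("mlp", MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))],
    final_estimator=Ridge(alpha=1.0),
)
stack.fit(X, y)
predicted_beds = stack.predict(X[:5])  # forecasted bed demand for five observations
```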

Keywords: COVID-19, deep learning, ensemble models, hospital capacity planning

Procedia PDF Downloads 137
22046 AI/ML Atmospheric Parameters Retrieval Using the “Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN)”

Authors: Thomas Monahan, Nicolas Gorius, Thanh Nguyen

Abstract:

Exoplanet atmospheric parameter retrieval is a complex, computationally intensive, inverse modeling problem in which an exoplanet’s atmospheric composition is extracted from an observed spectrum. Traditional Bayesian sampling methods require extensive time and computation, involving algorithms that compare large numbers of known atmospheric models to the input spectral data. Runtimes are directly proportional to the number of parameters under consideration. These increased power and runtime requirements are difficult to accommodate in space missions, where model size, speed, and power consumption are of particular importance. The use of traditional Bayesian sampling methods therefore compromises model complexity or sampling accuracy. The Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN) is a deep convolutional generative adversarial network that improves on the previous model’s speed and accuracy. We demonstrate the efficacy of artificial intelligence to quickly and reliably predict atmospheric parameters and present it as a viable alternative to slow and computationally heavy Bayesian methods. In addition to its broad applicability across instruments and planetary types, ARcGAN has been designed to function on low-power application-specific integrated circuits. The application of edge computing to atmospheric retrievals allows for real or near-real-time quantification of atmospheric constituents at the instrument level. Additionally, edge computing provides both high-performance and power-efficient computing for AI applications, both of which are critical for space missions. With the edge computing chip implementation, ARcGAN serves as a strong basis for the development of a similar machine-learning algorithm to reduce the downlinked data volume from the Compact Ultraviolet to Visible Imaging Spectrometer (CUVIS) onboard the DAVINCI mission to Venus.
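
A conditional generator of the general kind described can be sketched as follows (input dimensions, layer sizes, and the number of retrieved parameters are assumptions, not the ARcGAN architecture):

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Sketch of a cGAN generator: a noise vector plus the observed spectrum in,
    candidate atmospheric parameters out. All sizes here are illustrative."""
    def __init__(self, noise_dim=32, spectrum_dim=100, n_params=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + spectrum_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_params),
        )

    def forward(self, noise, spectrum):
        return self.net(torch.cat([noise, spectrum], dim=1))

g = ConditionalGenerator()
params = g(torch.randn(4, 32), torch.randn(4, 100))  # 4 candidate retrievals
print(params.shape)                                  # torch.Size([4, 5])
```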

Keywords: deep learning, generative adversarial network, edge computing, atmospheric parameters retrieval

Procedia PDF Downloads 154
22045 Shedding Light on the Black Box: Explaining Deep Neural Network Prediction of Clinical Outcome

Authors: Yijun Shao, Yan Cheng, Rashmee U. Shah, Charlene R. Weir, Bruce E. Bray, Qing Zeng-Treitler

Abstract:

Deep neural network (DNN) models are being explored in the clinical domain, following their recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation, but due to their multiple non-linear inner transformations, DNN models are viewed by many as a black box. In this study, we developed a deep neural network model for predicting 1-year mortality of patients who underwent major cardiovascular procedures (MCVPs), using a temporal image representation of past medical history as input. The dataset was obtained from the electronic medical data warehouse administered by the Veterans Affairs Informatics and Computing Infrastructure (VINCI). We identified 21,355 veterans who had their first MCVP in 2014. Features for prediction included demographics, diagnoses, procedures, medication orders, hospitalizations, and frailty measures extracted from clinical notes. Temporal variables were created based on the patient history data in the 2-year window prior to the index MCVP. A temporal image was created based on these variables for each individual patient. To generate the explanation for the DNN model, we defined a new concept called the impact score, based on the impact of the presence/value of clinical conditions on the predicted outcome. Like the (log) odds ratios reported by a logistic regression (LR) model, impact scores are continuous variables intended to shed light on the black-box model. For comparison, a logistic regression model was fitted on the same dataset. In our cohort, about 6.8% of patients died within one year. The prediction of the DNN model achieved an area under the curve (AUC) of 78.5%, while the LR model achieved an AUC of 74.6%. A strong but not perfect correlation was found between the aggregated impact scores and the log odds ratios (Spearman’s rho = 0.74), which helped validate our explanation.
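
One way to picture a presence-based impact score is the perturbation sketch below (the paper's exact definition is not reproduced; toggling a binary feature and differencing the predicted risk is an illustrative approximation, and all names in the usage comment are hypothetical):

```python
import numpy as np

def impact_score(model_predict, x, feature_index):
    """Illustrative impact score: the change in predicted risk when a binary clinical
    feature is toggled from absent to present, all else held fixed. This is a hedged
    approximation, not the paper's exact definition."""
    x_off, x_on = x.copy(), x.copy()
    x_off[feature_index], x_on[feature_index] = 0.0, 1.0
    diff = model_predict(x_on[None, :]) - model_predict(x_off[None, :])
    return float(np.ravel(diff)[0])

# Usage with any fitted model exposing a probability-like callable (hypothetical names):
# score = impact_score(lambda a: dnn.predict(a)[:, 0], patient_vector, feature_index=12)
```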

Keywords: deep neural network, temporal data, prediction, frailty, logistic regression model

Procedia PDF Downloads 137
22044 Active Development of Tacit Knowledge Using Social Media and Learning Communities

Authors: John Zanetich

Abstract:

This paper uses a pragmatic research approach to investigate the relationships between the Active Development of Tacit Knowledge (ADTK), social media (Facebook), and classroom learning communities. It investigates the use of learning communities and social media as the context and means for converting tacit knowledge to explicit knowledge and presents a dynamic model of the development of a classroom learning community. The goal of this study is to identify the point at which explicit knowledge is converted to tacit knowledge and to test a way to quantify the exchange using social media and learning communities.

Keywords: tacit knowledge, knowledge management, college programs, experiential learning, learning communities

Procedia PDF Downloads 343
22043 Interaction Tasks of CUE Model in Virtual Language Learning in Travel English for Taiwanese College EFL Learners

Authors: Kuei-Hao Li, Eden Huang

Abstract:

Motivation refers to the willingness a person has to take action. Learners’ motivation has frequently been regarded as the most crucial factor in successful language acquisition; without sufficient motivation, learners cannot achieve long-term learning goals despite remarkable abilities. Therefore, the study aims to investigate the motivation generated by interaction tasks designed by the researchers for college EFL learners in a Travel English class in a virtual reality environment, integrating the CUE model (Cognition, Usage, and Expansion) into the course. Thirty college learners were asked to join the virtual language learning website designed by the researchers. Data were collected via a feedback questionnaire, interviews, and learner interactions. The findings indicated that the CUE-model course delivered through the virtual reality language learning website was effective at motivating EFL learners and improving their oral communication and social interactions in the learning process. Some pedagogical implications are also provided to help both language instructors and EFL learners in virtual reality environments.

Keywords: motivation, virtual reality, virtual language learning, second language acquisition

Procedia PDF Downloads 366
22042 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely developed in the deep learning framework. It is generally considered a challenging problem to derive visual interpretation from high-dimensional imagery data. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to the training set, which is generally required to be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of consistently small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model where convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters of varying sizes and associate the filter responses with scalar weights that correspond to the standard deviations of the random filters. This allows a large number of random filters to be used at the cost of one scalar unknown for each filter. The computational cost in the back-propagation procedure does not increase with larger filter sizes, even though additional computational cost is required for the convolution in the feed-forward procedure. The use of random kernels of varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which a quantitative comparison is performed between well-known CNN architectures and our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks within the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and by NRF-2014R1A2A1A11051941 and NRF-2017R1A2B4006023.
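
The idea of fixed random kernels of several sizes tied to trainable scalar weights can be sketched as follows (this coarser version attaches one scalar per random filter bank rather than per individual filter, and the kernel sizes are illustrative, not the authors' exact layer):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomKernelConv(nn.Module):
    """Sum of fixed (non-trainable) random kernel banks of several sizes, each scaled
    by one trainable scalar weight -- a sketch of the idea, not the proposed layer."""
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.kernels = nn.ParameterList([
            nn.Parameter(torch.randn(out_ch, in_ch, k, k), requires_grad=False)
            for k in kernel_sizes
        ])
        self.scales = nn.Parameter(torch.ones(len(kernel_sizes)))  # trainable scalars

    def forward(self, x):
        outputs = [s * F.conv2d(x, w, padding=w.shape[-1] // 2)
                   for s, w in zip(self.scales, self.kernels)]
        return sum(outputs)

layer = RandomKernelConv(3, 8)
print(layer(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 8, 32, 32])
```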

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 265
22041 Reflection on Using Bar Model Method in Learning and Teaching Primary Mathematics: A Hong Kong Case Study

Authors: Chui Ka Shing

Abstract:

This case study research examines the use of the Bar Model Method approach in learning and teaching mathematics in a primary school in Hong Kong. The objectives of the study are to find out to what extent (a) the Bar Model Method approach enhances the construction of students’ mathematics concepts, and (b) the school-based mathematics curriculum is developed through adopting the Bar Model Method approach. The case study illuminates the effectiveness of using the Bar Model Method to solve mathematics problems from Primary 1 to Primary 6. Some effective pedagogies and assessments were developed to strengthen the use of the Bar Model Method across year levels. Suggestions, including school-based curriculum development for using the Bar Model Method, and directions for further study are discussed.

Keywords: bar model method, curriculum development, mathematics education, problem solving

Procedia PDF Downloads 203
22040 Graph Clustering Unveiled: ClusterSyn - A Machine Learning Framework for Predicting Anti-Cancer Drug Synergy Scores

Authors: Babak Bahri, Fatemeh Yassaee Meybodi, Changiz Eslahchi

Abstract:

In the pursuit of effective cancer therapies, the exploration of combinatorial drug regimens is crucial to leverage synergistic interactions between drugs, thereby improving treatment efficacy and overcoming drug resistance. However, identifying synergistic drug pairs poses challenges due to the vast combinatorial space and the limitations of experimental approaches. This study introduces ClusterSyn, a machine learning (ML)-powered framework for classifying anti-cancer drug synergy scores. ClusterSyn employs a two-step approach involving drug clustering and synergy score prediction using a fully connected deep neural network. For each cell line in the training dataset, a drug graph is constructed, with nodes representing drugs and edge weights denoting synergy scores between drug pairs. Drugs are clustered using the Markov clustering (MCL) algorithm, and vectors representing the similarity of drug pairs to each cluster are input into the deep neural network for synergy score prediction (synergy or antagonism). Clustering results demonstrate effective grouping of drugs based on synergy scores, aligning similar synergy profiles. Subsequently, the neural network predictions and the synergy scores of the two drugs with others within their clusters are used to predict the synergy score of the considered drug pair. This approach facilitates comparative analysis with clustering and regression-based methods, revealing the superior performance of ClusterSyn over state-of-the-art methods like DeepSynergy and DeepDDS on diverse datasets such as O'Neil and ALMANAC. The results highlight the remarkable potential of ClusterSyn as a versatile tool for predicting anti-cancer drug synergy scores.

Keywords: drug synergy, clustering, prediction, machine learning, deep learning

Procedia PDF Downloads 55
22039 A Collaborative Teaching and Learning Model between Academy and Industry for Multidisciplinary Engineering Education

Authors: Moon-Soo Kim

Abstract:

In order to cope with the increasing demand for multidisciplinary learning between academy and industry, a collaborative teaching and learning model and related operational tools enabling application to engineering education are essential. This study proposes a web-based collaborative framework for interactive teaching and learning between academy and industry as an initial step toward the development of a web- and mobile-based integrated system for both engineering students and industrial practitioners. The proposed web-based collaborative teaching and learning framework defines several entities, such as the learner, the solver, and the supporter or sponsor of industrial problems, and has a systematic architecture for building an information system with diverse functions that enable effective interaction among the defined entities regardless of time and place. Furthermore, the framework, which includes a knowledge and information self-reinforcing mechanism, focuses on previous problem-solving records as well as subsequent learners’ creative reuse of them in the process of solving new problems.

Keywords: collaborative teaching and learning model, academy and industry, web-based collaborative framework, self-reinforcing mechanism

Procedia PDF Downloads 306
22038 Observational Learning in Ecotourism: An Investigation into Ecotourists' Environmentally Responsible Behavioral Intentions in South Korea

Authors: Benjamin Morse, Michaela Zint, Jennifer Carman

Abstract:

This study proposes a behavioral model in which ecotourists’ level of observational learning shapes, through ecotourism participation, their subsequent environmentally responsible behavioral intentions. Unlike past studies that have focused on individual attributes such as attitudes, locus of control, personal responsibility, knowledge, skills, or affect, the present study explores select social attributes as potential antecedents to environmentally responsible behaviors. A total of 207 completed questionnaires were obtained from ecotourists in Korea, and path analyses were conducted to explore the degree to which the hypothesized model directly and indirectly explained ecotourists’ environmentally responsible behavioral intentions. Results suggest that observational learning and its associated predictors (i.e., engagement, observation, reproduction, and reinforcement) are key determinants of ecotourists’ environmentally responsible behavioral intentions. The application of observational learning proved informative and has a number of implications for improving ecotourism programs. Our model also lays out a theoretical framework for future research.

Keywords: ecotourism, observational learning, environmentally responsible behavior, social learning theory

Procedia PDF Downloads 311
22037 Serious Game for Learning: A Model for Efficient Game Development

Authors: Zahara Abdulhussan Al-Awadai

Abstract:

In recent years, serious games have started to gain increasing interest as a tool to support learning across different educational and training fields, and they have begun to serve as a powerful educational tool for improving learning outcomes. In this research, we discuss the potential of virtual experiences and games research outside of the games industry and explore the multifaceted impact of serious games and related technologies on various aspects of our lives. We highlight the use of serious games as a tool to improve education and other applications with a purpose beyond the entertainment industry. One of the main contributions of this research is a proposed model that facilitates the design and development of serious games in a flexible and easy-to-use way. This is achieved by exploring different requirements to develop a model that describes a serious game structure with a focus on both aspects of serious games (the educational and the entertainment aspects).

Keywords: game development, requirements, serious games, serious game model

Procedia PDF Downloads 40
22036 Assisting Dating of Greek Papyri Images with Deep Learning

Authors: Asimina Paparrigopoulou, John Pavlopoulos, Maria Konstantinidou

Abstract:

Dating papyri accurately is crucial not only for editing their texts but also for our understanding of palaeography and the history of writing, ancient scholarship, material culture, networks in antiquity, etc. Most ancient manuscripts offer little evidence regarding the time of their production, forcing papyrologists to date them on palaeographical grounds, a method often criticized for its subjectivity. By experimenting with data obtained from the Collaborative Database of Dateable Greek Bookhands and the PapPal online collections of objectively dated Greek papyri, this study shows that deep learning dating models, pre-trained on generic images, can achieve accurate chronological estimates for a test subset (67.97% accuracy for book hands and 55.25% for documents). To compare the estimates of these models with those of humans, experts were asked to complete a questionnaire with samples of literary and documentary hands that had to be sorted chronologically by century. The same samples were dated by the models in question. The results are presented and analysed.
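
The "pre-trained on generic images" setup can be sketched by swapping the classifier head of an ImageNet backbone for a century classifier (the ResNet-18 backbone and the number of century classes are assumptions, not the models evaluated in the study):

```python
import torch
import torch.nn as nn
from torchvision import models

# Replace the ImageNet classifier head with a century classifier and fine-tune.
n_centuries = 10  # illustrative number of century classes
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Linear(backbone.fc.in_features, n_centuries)

images = torch.randn(4, 3, 224, 224)        # dummy papyrus image crops
logits = backbone(images)
predicted_century = logits.argmax(dim=1)    # index of the predicted century class
```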

Keywords: image classification, papyri images, dating

Procedia PDF Downloads 62
22035 FMR1 Gene Carrier Screening for Premature Ovarian Insufficiency in Females: An Indian Scenario

Authors: Sarita Agarwal, Deepika Delsa Dean

Abstract:

Like the task of transferring photo images to artistic images, image-to-image translation aims to translate data into imitated data belonging to a target domain. Neural Style Transfer and CycleGAN are two well-known deep learning architectures used for photo-to-art image transfer. However, studies involving these two models concentrate on one-to-one domain translation, not one-to-many domain translation. Our study investigates deep learning architectures that can be controlled to yield multiple artistic style translations simply by adding a conditional vector. We have expanded CycleGAN and constructed a Conditional CycleGAN for translation across five categories. Our study found that an architecture inserting a conditional vector into the middle layer of the generator could output multiple artistic images.

Keywords: genetic counseling, FMR1 gene, fragile X-associated primary ovarian insufficiency, premutation

Procedia PDF Downloads 105