Search results for: covariation reasoning
11 Active Learning Methods in Mathematics
Authors: Daniela Velichová
Abstract:
Plenty of ideas on how to adopt active learning methods in education are available nowadays. Mathematics in particular is a subject where the active involvement of students is required in order to achieve desirable results in terms of sustainable knowledge and deep understanding. The present article is based on the outcomes of the Erasmus+ project DrIVE-MATH, which aimed at developing a novel and integrated framework for teaching maths classes in engineering courses at the university level. It is fundamental for students to have agile minds from the early years of their academic life. They must be prepared to adapt to their future working environments, where enterprises' views are always evolving, where everyone collaborates in teams, and where relations between peers serve the well-being of the whole, workers and company alike. This reality imposes new requirements on higher education in terms of adopting different pedagogical methods, such as project-based and active-learning methods, within course curricula. Active learning methodologies are regarded as an effective way to prepare students to meet the challenges posed by enterprises and to help them build critical thinking, analytic reasoning, and the insight to view complex problems from different perspectives. Fostering learning-by-doing activities in the pedagogical process can help students achieve learning independence, as they acquire deeper conceptual understanding by experimenting with abstract concepts in a more interesting, useful, and meaningful way. Clear information about learning outcomes and goals might also help students take more responsibility for their learning results. Active learning methods implemented by the project team members in their teaching practice, eduScrum and Jigsaw in particular, proved to provide better scientific and soft-skills support to students than classical teaching methods. The eduScrum method enables teachers to generate a working environment that stimulates students' working habits and self-initiative, as they become aware of their responsibilities within the team, their own acquired knowledge, and their ability to solve problems independently, though in collaboration with other team members. This method enhances collaborative learning: students work in teams towards a common goal, knowledge acquisition, while interacting with each other and being evaluated individually. Teams of 4-5 students work together on a list of problems, a 'sprint'; each member is responsible for solving one of them, while the group leader, the 'master', is responsible for the whole team. A similar principle is behind the Jigsaw technique, where the classroom activity makes students dependent on each other to succeed. Students are divided into groups, and assignments are split into pieces, which need to be assembled by the whole group to complete the (Jigsaw) puzzle. In this paper, an analysis of students' perceptions concerning the achievement of deeper conceptual understanding in mathematics and the development of soft skills, such as self-motivation, critical thinking, flexibility, leadership, responsibility, teamwork, negotiation, and conflict management, is presented. New challenges brought by introducing active learning methods into basic mathematics courses are also discussed.
In addition, a few examples of sprints developed and used in teaching basic maths courses at technical universities are presented.
Keywords: active learning methods, collaborative learning, conceptual understanding, eduScrum, Jigsaw, soft skills
10 A Digital Environment for Developing Mathematical Abilities in Children with Autism Spectrum Disorder
Authors: M. Isabel Santos, Ana Breda, Ana Margarida Almeida
Abstract:
Research on the academic abilities of individuals with autism spectrum disorder (ASD) underlines the importance of mathematics interventions. Yet digital applications for children and youth with ASD continue to attract little attention, namely regarding the development of mathematical reasoning, even though digital technologies are an area of great interest for individuals with this disorder and their use is certainly a facilitative strategy in the development of mathematical abilities. Digital technologies can be an effective way to create innovative learning opportunities for these students and to develop creative, personalized, and constructive environments where they can develop differentiated abilities. Children with ASD often respond well to learning activities involving information presented visually. In this context, we present the digital Learning Environment on Mathematics for Autistic children (LEMA), a research project conducive to a PhD in Multimedia in Education, developed by the Thematic Line Geometrix, located in the Department of Mathematics, in a collaborative effort with the DigiMedia Research Center of the Department of Communication and Art (University of Aveiro, Portugal). LEMA is a digital mathematical learning environment whose activities are dynamically adapted to the user's profile, towards the development of the mathematical abilities of children aged 6-12 years diagnosed with ASD. LEMA has already been evaluated with end-users (both students and expert teachers), and readjustments were made based on the analysis of the collected data, enabling the continuous improvement of the prototype, namely through the integration of universal design for learning (UDL) approaches, which are of great importance in ASD due to its heterogeneity. The learning strategies incorporated in LEMA: (i) provide options for a custom choice of math activities according to the user's profile; (ii) integrate simple interfaces with few elements, presenting only the features and content needed for the ongoing task; (iii) use simple visual and textual language; (iv) use different types of feedback (auditory, visual, positive/negative reinforcement, hints with helpful instructions including math concept definitions, math activities solved through smaller and easier tasks, and videos/animations that show a solution to the proposed activity); (v) provide information in multiple representations, such as text, video, audio, and image, for better content and vocabulary understanding, in order to stimulate, motivate, and engage users in mathematical learning and help them focus on content; (vi) avoid elements that distract or interfere with focus and attention; (vii) provide clear instructions and orientation about tasks to ease the user's understanding of the content and its language; and (viii) use buttons, familiar icons, and contrast between font and background. Since these children may have low sensory tolerance and impaired motor skills, the user can interact with LEMA not only through the mouse (point and click with a single button) but also through a Kinect device (using simple gesture moves).
Keywords: autism spectrum disorder, digital technologies, inclusion, mathematical abilities, mathematical learning activities
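The profile-driven adaptation LEMA describes can be pictured with a small rule-based sketch. The profile fields, activity pool, and selection rule below are illustrative assumptions, not the project's actual data model.

```python
# Illustrative sketch of profile-adapted activity selection; field
# names and rules are assumptions, not LEMA's actual implementation.
from dataclasses import dataclass

@dataclass
class Profile:
    age: int
    skill_level: int          # 1 (beginner) .. 3 (advanced)
    prefers_video: bool

ACTIVITIES = [
    {"topic": "counting", "level": 1, "feedback": ["visual", "auditory"]},
    {"topic": "addition", "level": 2, "feedback": ["visual", "video"]},
    {"topic": "geometry", "level": 3, "feedback": ["video", "hints"]},
]

def next_activity(profile: Profile) -> dict:
    """Pick the hardest activity at or below the child's skill level,
    preferring video feedback when the profile asks for it."""
    suitable = [a for a in ACTIVITIES if a["level"] <= profile.skill_level]
    suitable.sort(key=lambda a: (a["level"],
                                 profile.prefers_video and "video" in a["feedback"]))
    return suitable[-1]

print(next_activity(Profile(age=8, skill_level=2, prefers_video=True)))
```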
9 Understanding New Zealand’s 19th Century Timber Churches: Techniques in Extracting and Applying Underlying Procedural Rules
Authors: Samuel McLennan, Tane Moleta, Andre Brown, Marc Aurel Schnabel
Abstract:
The development of ecclesiastical buildings within New Zealand has produced some unique design characteristics that take influence from both international styles and local building methods. This research looks at how procedural modelling can be used to define such common characteristics and to understand how they are shared and developed within different examples of a similar architectural style. This is achieved through the creation of procedural digital reconstructions of the various timber Gothic churches built during the 19th century in the city of Wellington, New Zealand. 'Procedural modelling' is a digital modelling technique that has been growing in popularity, particularly within the game and film industries, as well as in other fields such as industrial design and architecture. Such a design method entails the creation of a parametric 'ruleset' that can be easily adjusted to produce many variations of geometry, rather than the single geometry typically found in traditional CAD software. Key precedents within this area of digital heritage include work by Haegler, Müller, and Gool, by Nicholas Webb and Andre Brown, and most notably by Mark Burry. What these precedents all share is that the forms of the reconstructed architecture have been generated using computational rules and an understanding of the architects' geometric reasoning. This is also true within this research, as Gothic architecture makes use of only a select range of forms (such as the pointed arch) that can be accurately replicated using the same standard geometric techniques originally used by the architect. The methodology of this research involves first establishing a sample group of similar buildings, documenting the existing samples, researching any lost samples to find evidence such as architectural plans, photos, and written descriptions, and then consolidating all the findings into a single 3D procedural asset within the software 'Houdini'. The end result is an adjustable digital model that contains all the architectural components of the sample group, such as the various naves, buttresses, and windows. These components can then be selected and arranged to create visualisations of the sample group. Because timber Gothic churches in New Zealand share many details between designs, the created collection of architectural components can also be used to approximate similar designs not included in the sample group, such as designs found beyond the Wellington region. This creates an initial library of architectural components that can be further expanded to encapsulate as wide a sample as desired. Such a methodology greatly improves the efficiency and adjustability of digital modelling compared to current practices in digital heritage reconstruction. It also gives greater accuracy to speculative design, as lost structures lacking evidence can be approximated using components from still-existing or better-documented examples. This research also brings attention to the cultural significance these types of buildings have within the local area, addressing the public's general unawareness of architectural history identified in the Wellington-based research 'Moving Images in Digital Heritage' by Serdar Aydin et al.
Keywords: digital forensics, digital heritage, gothic architecture, Houdini, procedural modelling
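To make the idea of a parametric ruleset concrete, here is a minimal sketch of the kind of rule that could generate the pointed arch mentioned above, assuming a two-centred arch drawn from two mirrored circular arcs. The function and its parameters are illustrative, not taken from the project's Houdini asset (Houdini itself scripts in Python, which is used here).

```python
import math

def pointed_arch(span: float, rise: float, n: int = 16):
    """Generate 2D points for a two-centred pointed (Gothic) arch.

    The arch is modelled as two mirrored circular arcs whose centres
    lie on the springing line; rise must exceed span/2 for a point.
    """
    s = span / 2.0
    if rise <= s:
        raise ValueError("a pointed arch needs rise > span/2")
    # Centre offset and radius follow from requiring the right arc to
    # pass through the springing point (s, 0) and the apex (0, rise).
    c = (rise**2 - s**2) / (2.0 * s)   # centre of right arc at (-c, 0)
    r = s + c
    end = math.atan2(rise, c)          # angle of apex seen from centre
    right = [(-c + r * math.cos(t * end / n), r * math.sin(t * end / n))
             for t in range(n + 1)]
    left = [(-x, y) for x, y in reversed(right[:-1])]  # mirror, drop apex dup
    return right + left

# Adjusting the ruleset's parameters yields variations of the geometry.
for span, rise in [(2.0, 1.5), (2.0, 2.5)]:
    apex = max(y for _, y in pointed_arch(span, rise))
    print(f"span={span}, rise={rise}, apex height={apex:.2f}")
```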
8 Blended Learning Instructional Approach to Teach Pharmaceutical Calculations
Authors: Sini George
Abstract:
Active learning pedagogies are valued for their success in increasing 21st-century learners' engagement, developing transferable skills like critical thinking or quantitative reasoning, and creating deeper and more lasting educational gains. 'Blended learning' is an active learning pedagogical approach in which direct instruction moves from the group learning space to the individual learning space, and the resulting group space is transformed into a dynamic, interactive learning environment where the educator guides students as they apply concepts and engage creatively in the subject matter. This project aimed to develop a blended learning instructional approach to teaching concepts around pharmaceutical calculations to year 1 pharmacy students. The wrong dose, strength, or frequency of a medication accounts for almost a third of medication errors in the NHS; therefore, progression to year 2 requires a 70% pass in this calculation test, in addition to the standard progression requirements. In the past, many students struggled to achieve this requirement. It was also challenging to teach these concepts to a large class (>130 students) with mixed mathematical abilities, especially within a traditional didactic lecture format. Therefore, short screencasts with a voice-over by the lecturer were provided in advance of a total of four teaching sessions (two hours per session), incorporating the core content of each session and talking through how the lecturer approached the calculations, to model metacognition. Links to the screencasts were posted on the learning management system. Viewership counts were used to confirm that the students were indeed accessing and watching the screencasts on schedule. In the classroom, students had to apply the knowledge learned beforehand to a series of increasingly difficult questions. Students were then asked to create a question in group settings (two students per group) and to discuss the questions created by their peers within their groups, to promote deep conceptual learning. Students were also given a question-and-answer period to seek clarification on the concepts covered. Student responses to this instructional approach and their test grades were collected. After collecting and organizing the data, statistical analysis was carried out to calculate binomial statistics for the two data sets, the test grades of students who received blended learning instruction and those of students who received instruction in a standard in-class lecture format, to compare the effectiveness of each type of instruction. Student responses and performance data on the assessment indicate that learning the content through the blended learning instructional approach led to higher levels of student engagement and satisfaction and more substantial learning gains. The blended learning approach enabled each student to learn how to do the calculations at their own pace, freeing class time for interactive application of this knowledge. Although time-consuming for an instructor to implement, the findings of this research demonstrate that the blended learning instructional approach improves student academic outcomes and represents a valuable way to incorporate active learning methodologies while still maintaining broad content coverage.
Satisfaction with this approach was high, and we are currently developing more pharmacy content for delivery in this format.
Keywords: active learning, blended learning, deep conceptual learning, instructional approach, metacognition, pharmaceutical calculations
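As a rough illustration of the kind of binomial comparison described above, the sketch below runs a two-proportion z-test on hypothetical pass counts; the numbers are invented, since the abstract does not report them.

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical pass counts; the study's actual numbers are not given.
blended_pass, blended_n = 110, 130   # blended-learning cohort
lecture_pass, lecture_n = 88, 130    # standard-lecture cohort

p1, p2 = blended_pass / blended_n, lecture_pass / lecture_n
p_pool = (blended_pass + lecture_pass) / (blended_n + lecture_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / blended_n + 1 / lecture_n))
z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))        # two-sided test

print(f"pass rates: {p1:.2%} vs {p2:.2%}, z = {z:.2f}, p = {p_value:.4f}")
```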
7 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network
Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman
Abstract:
We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real-time to infer a suspect's intended destination, chosen from a list of pre-determined high-value targets. Previously, we presented our work on the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect's behavior. The network of cameras is represented by a directional graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of Caltrans' 'Performance Measurement System' (PeMS) dataset. We propose a Bayesian approach in which a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of 'soft interventions', inspired by the field of causal inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect's movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect's current location, which may require the suspect to change course. The objective of these interventions is to gain the maximum amount of information about the suspect's intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode, where at each step a set of recommendations is presented to the operator to aid in decision-making. In principle, the system could operate autonomously, only prompting the operator for critical decisions, allowing it to scale up significantly to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities for further action. Other recommendations include a selection of road closures (i.e., soft interventions) or continued monitoring. We evaluate the performance of the proposed system using simulated scenarios in which the suspect, starting at a random location, takes a noisy shortest path to their intended target. In all scenarios, the suspect's intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect's intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach to motivate a machine learning approach based on reinforcement learning, in order to relax some of the current limiting assumptions.
Keywords: autonomous surveillance, Bayesian reasoning, decision support, interventions, patterns of life, predictive analytics, predictive insights
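A minimal sketch of the posterior update on a road graph, assuming a toy network and made-up detection likelihoods; the node names, travel times, and 0.8/0.2 likelihoods are illustrative, not values from the PeMS-based experiments.

```python
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([          # weight = average travel time (min)
    ("A", "B", 4), ("B", "C", 3), ("B", "D", 9),
    ("A", "E", 5), ("E", "D", 4),
])
targets = ["C", "D"]
posterior = {t: 1 / len(targets) for t in targets}   # uniform prior

def update(observed_node, posterior):
    """Reweight each target by how consistent a camera detection is
    with the fastest route from the start node to that target."""
    likelihood = {}
    for t in targets:
        path = nx.shortest_path(G, "A", t, weight="weight")
        likelihood[t] = 0.8 if observed_node in path else 0.2
    unnorm = {t: likelihood[t] * posterior[t] for t in targets}
    z = sum(unnorm.values())
    return {t: v / z for t, v in unnorm.items()}

for detection in ["B", "C"]:         # suspect seen at these camera nodes
    posterior = update(detection, posterior)
    print(detection, {t: round(p, 3) for t, p in posterior.items()})
```

Each detection sharpens the posterior; a soft intervention would follow the same update after the induced route change.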
6 Predictive Analytics for Theory Building
Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim
Abstract:
Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) on a single person or unit. It applies empirical methods from statistics, operations research, and machine learning to predict future, or otherwise unknown, events or outcomes on a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate the hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model is formed from causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can perform vital roles in explanatory studies, i.e., in scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how to use our predictive analytics to support theory building (i.e., hypothesis generation). For this purpose, the study utilized a big-data predictive analytics platform™ based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where items in a basket are fully connected. A cluster is a collection of fully connected items, where a specific group of items has co-occurred in several rows of a data set. Clusters can be ranked using importance metrics such as node size (number of items), frequency, and surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge amounts of transactions can be represented and processed efficiently. For a demonstration, a total of 13,254 metabolic syndrome training observations were plugged into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors associated, for example, with sociodemographics, habits, and activities. Some, such as cancer examination, house type, and vaccination, are intentionally included to gain predictive analytics insights on variable selection. The platform automatically generates plausible hypotheses (rules) without statistical modeling. The rules are then validated with an external testing dataset of 4,090 observations. The results, a form of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation. A set of rules (many estimated equations from a statistical perspective), on the other hand, may imply heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, i.e., theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the generated hypotheses are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods that utilize a part of the observations, such as bootstrap resampling with an appropriate sample size.
Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building
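The co-occurrence ranking can be sketched in a few lines, assuming toy basket rows and a simple independence baseline for 'surprise'; the items and counts are invented, not drawn from the metabolic syndrome dataset.

```python
# Minimal sketch: items in each row are fully connected, and item
# pairs are ranked by "surprise" (observed vs. expected co-occurrence).
from collections import Counter
from itertools import combinations

rows = [
    {"smoker", "high_bp", "low_activity"},
    {"smoker", "high_bp"},
    {"vaccinated", "low_activity"},
    {"smoker", "high_bp", "vaccinated"},
]
n = len(rows)
item_freq = Counter(i for row in rows for i in row)
pair_freq = Counter(frozenset(p) for row in rows
                    for p in combinations(sorted(row), 2))

for pair, observed in pair_freq.most_common():
    a, b = tuple(pair)
    expected = item_freq[a] * item_freq[b] / n   # independence baseline
    print(f"{a} & {b}: observed={observed}, "
          f"surprise={observed / expected:.2f}")
```

High-surprise pairs play the role of the automatically generated rules, i.e., candidate hypotheses for later statistical testing.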
5 Religion and Risk: Unmasking Noah's Narratives in the Pacific Islands
Authors: A. Kolendo
Abstract:
The Pacific Islands are among the areas most vulnerable to climate change. Sea level rise and accelerating storm surge continuously threaten communities' habitats on low-lying atolls. With scientific predictions of tides encroaching on their land, the Islanders have been informed about the need for future relocation planning. However, some communities oppose such retreat strategies, reasoning about current climatic changes through the lens of the biblical ark of Noah. This parable states God's promise never to flood the Earth again and never to deprive people of their land and habitats. Several interpretations of this parable have emerged in Oceania, prompting either climate action or denial. Resistance to relocation planning expressed through Christian thought has led religion to be perceived as a barrier to dialogue between the Islanders and scientists. Since climate change concerns natural processes, attitudes towards environmental stewardship shape the communities' responses to it; some Christian teachings indicate humanity's responsibility over the environment, whereas others assert people's dominion, which prompts resistance and sometimes denial. With church denominations and their various environmental standpoints, competing responses to climate change have emerged in Oceania. Before missionization, traditional knowledge guided the environmental sphere, and it still influences current Christian teachings. Each atoll has its own distinctive body of traditional knowledge; however, a unique relationship with nature unites all the islands. The interconnectedness between the land, sea, and people indicates the integrity between the communities and their environments. This factor influences the comprehension of Noah's story in the context of climate change that threatens their habitats. Pacific Islanders experience climate change through the slow disappearance of their homelands, whereas the Western world perceives it as a global issue that will affect the population in the long-term perspective. Therefore, the Islanders seek to comprehend this global phenomenon in a local context that reads climate change as the Great Deluge. Accordingly, the safety measures that this parable promotes compensate for the danger of climate change. The rainbow covenant gives hope in God's promise never to flood the Earth again. At the same time, Noah's survival relates to the Islanders' current situation. Since these communities have one of the lowest rates of carbon emissions, their contribution to anthropogenic climate change is scarce. The lack of environmental sin would therefore contextualize them as a contemporary Noah, with the ultimate survival of sea level rise. Through secondary data analysis from a risk-compensation perspective, this study aims to defy the idea that religion constitutes a barrier. Instead, religion is portrayed as a source of knowledge that enables comprehension of the communities' situation. By demonstrating that the Pacific Islanders utilize Noah's story as a vessel for coping with the danger of climate change, the study argues that religion provides safety measures that compensate for the projected disappearance of their land. The purpose is to build a bridge between religious communities and scientific bodies, and ultimately to bring an understanding of two diverse perspectives.
By addressing the practical challenges of interdisciplinary research with faith-based systems, this study uplifts the voices of communities and portrays their experiences expressed through Christian thought.
Keywords: Christianity, climate change, existential threat, Pacific Islands, story of Noah
4 Computational, Human, and Material Modalities: An Augmented Reality Workflow for Building Form-Found Textile Structures
Authors: James Forren
Abstract:
This research paper details a recent demonstrator project in which digitally form-found textile structures were built by human craftspersons wearing augmented reality (AR) head-worn displays (HWDs). The project utilized a wet-state natural fiber / cementitious matrix composite to generate minimal-bending shapes in tension which, when cured and rotated, performed as minimal-bending compression members. The significance of the project is that it synthesizes computational structural simulations with visually guided handcraft production. Computational and physical form-finding methods with textiles are well characterized in the development of architectural form. One difficulty, however, is physically building the computer simulations, which often requires complicated digital fabrication workflows. AR HWDs, by contrast, have been used to build complex digital forms from bricks, wood, plastic, and steel without digital fabrication devices; such projects utilize, instead, the tacit-knowledge motor schemas of the human craftsperson. Computational simulations offer unprecedented speed and performance in solving complex structural problems. Human craftspersons possess highly efficient motor schemas for complex spatial reasoning. And textiles offer efficient form-generating possibilities for individual structural members and overall structural forms. This project proposes that the synthesis of these three modalities of structural problem-solving (computational, human, and material) may not only develop efficient structural form but also offer further creative potentialities when the respective intelligence of each modality is productively leveraged. The project methodology pertains to its three modalities of production: 1) computational, 2) human, and 3) material. A proprietary three-dimensional graphic statics simulator generated a three-legged arch as a wireframe model. This wireframe was discretized into nine modules, three per leg. Each module was modeled as a woven matrix of one-inch diameter chords, and each woven matrix was transmitted to a holographic engine running on the HWDs. Craftspersons wearing the HWDs then wove wet cementitious chords within a simple falsework frame to match the minimal-bending form displayed in front of them. Once the woven components cured, they were demounted from the frame and assembled into a full structure using the holographically displayed computational model as a guide. The assembled structure was approximately eighteen feet in diameter and ten feet in height and matched the holographic model to under an inch of tolerance. The construction validated the computational simulation of the minimal-bending form, as the structure was dimensionally stable for a ten-day period, after which it was disassembled. The demonstrator illustrated the facility with which a computationally derived, structurally stable form could be achieved through the holographically guided, complex three-dimensional motor schemas of the human craftsperson. However, the workflow traveled unidirectionally from computer to human to material, failing to fully leverage the intelligence of each modality. Subsequent research (a workshop testing human interaction with a physics-engine simulation of string networks, and work on the use of HWDs to capture hand gestures in weaving) seeks to develop further interactivity with rope and chord towards a bi-directional workflow within full-scale building environments.
Keywords: augmented reality, cementitious composites, computational form finding, textile structures
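The form-finding step can be illustrated with a force-density toy example: a single free node hanging in a cable net whose equilibrium shape, once inverted, acts in compression, echoing the cure-and-rotate logic above. The net, force densities, and load below are invented; the project's own graphic statics simulator is proprietary, and force density is a related but distinct form-finding method.

```python
# Minimal force-density sketch of computational form finding, assuming
# a tiny cable net: one free node tied to four fixed corners.
import numpy as np

fixed = np.array([[0, 0, 0], [2, 0, 0], [2, 2, 0], [0, 2, 0]], float)
q = np.array([1.0, 1.0, 1.0, 1.0])   # force density per edge
load = np.array([0.0, 0.0, -1.0])    # vertical load on the free node

# Each edge joins the free node to one fixed node, so equilibrium of
# the free node reduces to a weighted average plus the load term:
#   sum_i q_i (f_i - x) + p = 0  =>  x = (sum_i q_i f_i + p) / sum_i q_i
x_free = (q @ fixed + load) / q.sum()
print("tension (hanging) form:", x_free)
print("compression form (inverted):", x_free * np.array([1, 1, -1]))
```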
3 EcoTeka, an Open-Source Software for Urban Ecosystem Restoration through Technology
Authors: Manon Frédout, Laëtitia Bucari, Mathias Aloui, Gaëtan Duhamel, Olivier Rovellotti, Javier Blanco
Abstract:
Ecosystems must be resilient to ensure cleaner air, better water and soil quality, and thus healthier citizens. Technology can be an excellent tool to support urban ecosystem restoration projects, especially when based on open source and promoting open data. This is the goal of the ecoTeka application: a single digital tool for tree management that allows decision-makers to improve their urban forestry practices, enabling more responsible urban planning and climate change adaptation. EcoTeka provides city councils with three main functionalities tackling three of their challenges: easier biodiversity inventories, better green space management, and more efficient planning. To answer the cities' need for reliable tree inventories, the application was first built with open data coming from OpenStreetMap and OpenTrees, but it will soon also include the possibility of creating new data. To achieve this, a multi-source algorithm will be elaborated, based on the existing DeepForest artificial intelligence model, integrating open-source satellite images, 3D representations from LiDAR, and street views from Mapillary. This data processing will permit the identification of each individual tree's position, height, crown diameter, and taxonomic genus. To support urban forestry management, ecoTeka offers a dashboard for monitoring the city's tree inventory and triggers alerts about upcoming due interventions. This tool was co-constructed with the green space departments of the French cities of Alès, Marseille, and Rouen. The third functionality of the application is a decision-making tool for urban planning, promoting biodiversity and landscape connectivity metrics to drive an ecosystem restoration roadmap. Based on landscape graph theory, we are currently experimenting with new methodological approaches to scale down regional ecological connectivity principles to local biodiversity conservation and urban planning policies. This methodological framework will couple a graph-theoretic approach with biological data, mainly biodiversity occurrence (presence/absence) data available on international (e.g., GBIF), national (e.g., Système d'Information Nature et Paysage), and local (e.g., Atlas de la Biodiversité Communale) biodiversity data-sharing platforms, in order to inform new decisions for the conservation and restoration of ecological networks in urban areas. An experiment on this subject is currently ongoing with Montpellier Méditerranée Métropole. These projects and studies have shown that only 26% of tree inventory data in France is currently geo-localized; the rest is still kept on paper or in Excel sheets. It seems that technology is not yet used enough to enrich the knowledge city councils have about biodiversity in their cities, and that existing open biodiversity data (e.g., occurrence, telemetry, or genetic data), species distribution models, and landscape graph connectivity metrics are still underexploited for making rational decisions in landscape and urban planning projects. This is the goal of ecoTeka: to support easier inventories of urban biodiversity and better management of urban spaces through rational planning and decisions relying on open databases.
Future studies and projects will focus on the development of tools for reducing the artificialization of soils, selecting plant species adapted to climate change, and highlighting the need for ecosystem and biodiversity services in cities.
Keywords: digital software, ecological design of urban landscapes, sustainable urban development, urban ecological corridor, urban forestry, urban planning
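One landscape-graph connectivity metric of the kind mentioned above is the Integral Index of Connectivity (IIC). The sketch below computes it on a toy patch graph; the patch areas, corridors, and landscape area are invented for illustration, not ecoTeka data.

```python
# IIC = sum over patch pairs of (a_i * a_j / (1 + nl_ij)) / A_L^2,
# where nl_ij is the number of links on the shortest path i -> j.
import networkx as nx

patches = {"P1": 3.0, "P2": 1.5, "P3": 2.0, "P4": 0.5}  # habitat area (ha)
links = [("P1", "P2"), ("P2", "P3"), ("P3", "P4")]      # corridors
landscape_area = 20.0                                    # total area (ha)

G = nx.Graph(links)
G.add_nodes_from(patches)            # keep isolated patches in the graph

iic = 0.0
for i, ai in patches.items():
    for j, aj in patches.items():
        try:
            nl = nx.shortest_path_length(G, i, j)
        except nx.NetworkXNoPath:
            continue                 # unreachable pairs contribute zero
        iic += ai * aj / (1 + nl)
iic /= landscape_area ** 2
print(f"IIC = {iic:.4f}")            # higher = better-connected network
```

Recomputing the index after adding or removing a corridor shows how a planned restoration would change connectivity, which is the decision-support use sketched in the abstract.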
2 Prospects of Acellular Organ Scaffolds for Drug Discovery
Authors: Inna Kornienko, Svetlana Guryeva, Natalia Danilova, Elena Petersen
Abstract:
Drug toxicity often goes undetected until clinical trials, the most expensive and dangerous phase of drug development. Both human cell culture and animal studies have limitations that cannot be overcome by improvements in drug testing protocols. Tissue engineering is an emerging alternative approach to creating models of human malignant tumors for experimental oncology, personalized medicine, and drug discovery studies. This new generation of bioengineered tumors provides an opportunity to control and explore the role of every component of the model system, including cell populations, supportive scaffolds, and signaling molecules. An area that could greatly benefit from these models is cancer research. Recent advances in tissue engineering have demonstrated that decellularized tissue is an excellent scaffold for tissue engineering. Decellularization of donor organs such as heart, liver, and lung can provide an acellular, naturally occurring three-dimensional biologic scaffold material that can then be seeded with selected cell populations. Preliminary studies in animal models have provided encouraging proof-of-concept results. Decellularized organs preserve the organ microenvironment, which is critical for cancer metastasis. Utilizing 3D tumor models brings the morphological characteristics of a cell culture model closer to its in vivo counterpart and allows more accurate simulation of the processes within a functioning tumor and its pathogenesis. 3D models also allow the study of migration processes and cell proliferation with higher reliability. Moreover, cancer cells in a 3D model bear a closer resemblance to living conditions in terms of gene expression, cell surface receptor expression, and signaling. 2D cell monolayers do not provide the geometrical and mechanical cues of tissues in vivo and are, therefore, not suitable to accurately predict the responses of living organisms. 3D models can provide several levels of complexity: from simple monocultures of cancer cell lines in a liquid environment, comprising oxygen and nutrient gradients and cell-cell interaction, to more advanced models, which include co-culturing with other cell types, such as endothelial and immune cells. Following this reasoning, spheroids cultivated from one or multiple patient-derived cell lines can be utilized to seed the matrix rather than monolayer cells. This approach furthers the progress towards personalized medicine. As an initial step towards creating a new ex vivo tissue-engineered model of a cancer tumor, optimized protocols have been designed to obtain organ-specific acellular matrices and to evaluate their potential as tissue-engineered scaffolds for cultures of normal and tumor cells. Decellularized biomatrix was prepared from animal kidneys, urethra, lungs, heart, and liver by two decellularization methods: perfusion in a bioreactor system and immersion-agitation on an orbital shaker, with the use of various detergents (SDS, Triton X-100) in different concentrations, and freezing. Acellular scaffolds and tissue-engineered constructs have been characterized and compared using morphological methods.
Models using decellularized matrix have certain advantages: they maintain native extracellular matrix properties and a biomimetic microenvironment for cancer cells; they are compatible with multiple cell types for cell culture and drug screening; and they can be used to culture patient-derived cells in vitro to evaluate different anticancer therapeutics for developing personalized medicines.
Keywords: 3D models, decellularization, drug discovery, drug toxicity, scaffolds, spheroids, tissue engineering
1 XAI Implemented Prognostic Framework: Condition Monitoring and Alert System Based on RUL and Sensory Data
Authors: Faruk Ozdemir, Roy Kalawsky, Peter Hubbard
Abstract:
Accurate estimation of remaining useful life (RUL) provides a basis for effective predictive maintenance, reducing unexpected downtime for industrial equipment. However, while models such as the Random Forest have effective predictive capabilities, they are so-called 'black box' models, whose limited interpretability is a threshold issue for the critical diagnostic decisions involved in industries such as aviation. The purpose of this work is to present a prognostic framework that embeds Explainable Artificial Intelligence (XAI) techniques to provide essential transparency in the decision-making mechanisms of machine learning methods based on sensor data, with the objective of procuring actionable insights for the aviation industry. Sensor readings are gathered from critical equipment such as turbofan jet engines and landing gear, and the RUL is predicted by a Random Forest model, through steps of data gathering, feature engineering, model training, and evaluation. The datasets for these critical components are trained and evaluated independently. While suitable predictions are served and their performance metrics are reasonably good, such complex models obscure the reasoning behind their predictions and may even undermine the confidence of the decision-maker or the maintenance teams. To bridge this reliability gap in industrial contexts, the second phase adds global explanations using SHAP and local explanations using LIME. These tools analyze model decisions, highlighting feature importance and explaining how each input variable affects the output. This dual approach offers a general comprehension of the overall model behavior and detailed insight into specific predictions. In its third component, the proposed framework incorporates causal analysis in the form of Granger causality tests, in order to move beyond correlation toward causation. This will not only allow the model to predict failures but also present the reasons to relevant personnel, linking key sensor features to possible failure mechanisms. Establishing causality between sensor behaviors and equipment failures creates much value for maintenance teams through better root-cause identification and effective preventive measures, making the system more explainable. In yet another stage, several simple surrogate models, including decision trees and linear models, can be used to approximately represent the complex Random Forest model. These simpler models act as backups, replicating important aspects of the original model's behavior. If the feature explanations obtained from the surrogate model are cross-validated against the primary model, the derived insights are more reliable and provide an intuitive sense of how the input variables affect the predictions. We then create an iterative explainable feedback loop, where the knowledge learned from the explainability methods feeds back into the training of the models. This drives a cycle of continuous improvement in both model accuracy and interpretability over time. By systematically integrating new findings, the model is expected to adapt to changed conditions and further develop its prognostic capability. These components are then presented to decision-makers through a fully transparent condition monitoring and alert system.
The system provides a holistic tool for maintenance operations by leveraging RUL predictions, feature importance scores, persistent sensor threshold values, and autonomous alert mechanisms. Since the system provides explanations for its predictions along with active alerts, maintenance personnel can make informed decisions about the correct interventions to extend the life of critical machinery.
Keywords: predictive maintenance, explainable artificial intelligence, prognostic, RUL, machine learning, turbofan engines, C-MAPSS dataset
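A minimal sketch of the RUL-plus-explanation idea described above, assuming synthetic sensor data and the shap library's TreeExplainer; the feature names and data generator are invented stand-ins for the C-MAPSS channels.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for sensor data: 4 channels, RUL driven by s1, s3.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
rul = 200 - 50 * X[:, 0] - 20 * X[:, 2] + rng.normal(0, 5, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, rul)

# Global explanation: mean |SHAP| per feature ranks sensor importance.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
importance = np.abs(shap_values).mean(axis=0)
for name, score in zip(["s1", "s2", "s3", "s4"], importance):
    print(f"{name}: mean |SHAP| = {score:.1f}")
```

On data like this, s1 and s3 dominate the ranking, which is the kind of feature-importance evidence the framework surfaces to maintenance teams alongside each RUL prediction.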