Search results for: repositories
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 60

30 Comparative Analysis of Automation Testing Tools

Authors: Amit Bhanushali

Abstract:

In the ever-changing landscape of software development, automated software testing has emerged as a critical component of the Software Development Life Cycle (SDLC). This research undertakes a comparative study of three major automated testing tools, UFT, Selenium, and RPA, evaluating them on usability, maintenance, and effectiveness. Leveraging existing Java-based applications as test cases, the study aims to guide testers in selecting the optimal tool for specific applications. By exploring key features such as source and licensing, testing expenses, object repositories, usability, and language support, the research provides practical insights into UFT, Selenium, and RPA. Acknowledging the pivotal role of these tools in streamlining testing processes amid time constraints and resource limitations, the study assists professionals in making informed choices aligned with their organizational needs.
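
As a concrete illustration of the kind of scripted test these tools automate, below is a minimal Selenium test using its Python bindings; the URL, element IDs, and expected title are placeholders for illustration, not artifacts from the study.

```python
# A minimal Selenium (Python bindings) login test; the target URL and element
# locators below are placeholders for illustration only.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()                      # assumes chromedriver on PATH
try:
    driver.get("https://example.org/login")      # placeholder application URL
    driver.find_element(By.ID, "username").send_keys("tester")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    assert "Dashboard" in driver.title           # simple pass/fail check
finally:
    driver.quit()
```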

Keywords: software testing tools, software development lifecycle (SDLC), test automation frameworks, automated software, JAVA-based, UFT, selenium and RPA (robotic process automation), source and licensing, object repository

Procedia PDF Downloads 60
29 Repositioning Nigerian University Libraries for Effective Information Provision and Delivery in This Age of Globalization

Authors: S. O. Uwaifo

Abstract:

The paper examines the pivotal role of the library in university education through the provision of a wide range of information materials (print and non-print) required for the teaching, learning, and research activities of the university. However, certain impediments to the effectiveness of Nigerian university libraries were identified, such as financial constraints, high foreign exchange rates, global disparities in accessing the internet, lack of local area networks, erratic electric power supply, absence of ICT literacy, and poor maintenance culture. The necessity of repositioning Nigerian university libraries for effective information provision and delivery was also stressed by pointing out its dividends, such as users’ access to the Directory of Open Access Journals (DOAJ), Online Public Access Catalogue (OPAC), institutional repositories, electronic document delivery, and social media networks. It therefore becomes necessary for the libraries to be repositioned by being adequately automated or digitized for effective service delivery in this age of globalization. Based on the barriers identified in this paper, some recommendations are proffered.

Keywords: repositioning, Nigerian university libraries, effective information provision and delivery, globalization

Procedia PDF Downloads 301
28 Introducing, Testing, and Evaluating a Unified JavaScript Framework for Professional Online Studies

Authors: Caspar Goeke, Holger Finger, Dorena Diekamp, Peter König

Abstract:

Online-based research has recently gained increasing attention from various fields of research in the cognitive sciences. Technological advances in the form of online crowdsourcing (Amazon Mechanical Turk), open data repositories (Open Science Framework), and online analysis (IPython notebook) offer rich possibilities to improve, validate, and speed up research. However, to date there is no cross-platform integration of these subsystems. Furthermore, implementation of online studies still suffers from complexity (server infrastructure, database programming, security considerations, etc.). Here we propose and test a new JavaScript framework that enables researchers to conduct any kind of behavioral research in the browser without the need to program a single line of code. In particular, our framework offers the possibility to manipulate and combine the experimental stimuli via a graphical editor, directly in the browser. Moreover, we included an action-event system that can be used to handle user interactions, interactively change stimulus properties, or store participants’ responses. Besides traditional recordings such as reaction time and mouse and keyboard presses, the tool offers webcam-based eye and face tracking. On top of these features, our framework also takes care of participant recruitment via crowdsourcing platforms such as Amazon Mechanical Turk. Furthermore, built-in Google Translate functionality ensures automatic translation of the experimental content. Thereby, thousands of participants from different cultures and nationalities can be recruited literally within hours. Finally, the recorded data can be visualized and cleaned online and then exported into the desired formats (csv, xls, sav, mat) for statistical analysis. Alternatively, the data can be analyzed online within our framework using the integrated IPython notebook. The framework was designed such that studies can be shared and reused between researchers. This supports not only the idea of open data repositories but also makes it possible to share and reuse experimental designs and analyses, improving the validity and consistency of experimental paradigms. To demonstrate the functionality of the framework, we present the results of a pilot study in the field of spatial navigation that was conducted using the framework. Specifically, we recruited over 2000 subjects with various cultural backgrounds and analyzed performance differences as a function of culture, gender, and age. Overall, our results demonstrate a strong influence of cultural factors in spatial cognition. Such an influence has not been reported before and would not have been possible to show without the massive amount of data collected via our framework. In fact, these findings shed new light on cultural differences in spatial navigation. We therefore conclude that our new framework offers a wide range of advantages for online research and constitutes a methodological innovation by which new insights can be revealed on the basis of massive data collection.
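
To make the export step concrete, the following is a hedged sketch of the kind of downstream analysis the framework's csv export would enable; the file name and column names are assumptions, not the framework's documented schema.

```python
# Hypothetical analysis of a csv exported from the framework; column names
# ("culture", "gender", "score", "reaction_time_ms") are assumed.
import pandas as pd

df = pd.read_csv("spatial_navigation_export.csv")   # hypothetical export file

# Mean performance broken down by the factors analyzed in the pilot study.
summary = (df.groupby(["culture", "gender"])
             .agg(mean_score=("score", "mean"),
                  mean_rt_ms=("reaction_time_ms", "mean")))
print(summary)
```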

Keywords: cultural differences, crowdsourcing, JavaScript framework, methodological innovation, online data collection, online study, spatial cognition

Procedia PDF Downloads 233
27 Strategies by National Health Systems in the Northern Hemisphere Against COVID-19

Authors: Aysha Zahidie, Meesha Iqbal

Abstract:

This paper aims to assess the effectiveness of strategies adopted by national health systems in different geographical regions of the Northern Hemisphere to combat the COVID-19 pandemic. Data are included from the first case reported in November 2019 until mid-April 2020. Sources of information are COVID-19 case repositories, official country websites, university research teams’ perspectives, official briefings, and research articles published to date. We triangulated all data to formulate a comprehensive illustration of the COVID-19 situation in each country included. We found that China, Taiwan, and South Korea, drawing on their experience of the 2002-2004 SARS outbreak, adopted better strategies for COVID-19 containment than Iran, Italy, and the United States of America. Saudi Arabia has so far been successful in the implementation of containment strategies, as there have been no large outbreaks in major cities or confined areas such as prisons. The situation has yet to unfold in India and Pakistan, which exhibit their own weaknesses in policy formulation and implementation in response to health crises.

Keywords: national health systems, COVID-19, prevention, response

Procedia PDF Downloads 91
26 Identifying Metabolic Pathways Associated with Neuroprotection Mediated by Tibolone in Human Astrocytes under an Induced Inflammatory Model

Authors: Daniel Osorio, Janneth Gonzalez, Andres Pinzon

Abstract:

In this work, proteins and metabolic pathways associated with the neuroprotective response mediated by the synthetic neurosteroid tibolone under a palmitate-induced inflammatory model were identified by flux balance analysis (FBA). Three different metabolic scenarios (‘healthy’, ‘inflamed’ and ‘medicated’) were modeled over a tissue-specific metabolic reconstruction of mature astrocytes constructed from gene expression data. The astrocyte reconstruction was built, validated, and constrained using three open-source software packages (‘minval’, ‘g2f’ and ‘exp2flux’) released through the Comprehensive R Archive Network repositories during the development of this work. From our analysis, we predict that tibolone executes its neuroprotective effects through a reduction of neurotoxicity mediated by L-glutamate in astrocytes, inducing the activation of several metabolic pathways with associated neuroprotective actions, such as taurine metabolism, gluconeogenesis, and the calcium and Peroxisome Proliferator-Activated Receptor signaling pathways. We also found a tibolone-associated increase in growth rate, probably in concordance with previously reported side effects of steroid compounds in other human cell types.
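
For readers unfamiliar with FBA, the sketch below shows the three-scenario idea in Python using the cobrapy library; the study itself used the R packages named above, and the model file and exchange reaction IDs here are hypothetical.

```python
# Scenario comparison by flux balance analysis with cobrapy (illustrative
# Python analogue of the R-based workflow; file and reaction IDs are made up).
import cobra

model = cobra.io.read_sbml_model("astrocyte_reconstruction.xml")

scenarios = {
    "healthy": {},                                  # unconstrained baseline
    "inflamed": {"EX_palmitate": (-10.0, 0.0)},     # force palmitate uptake
    "medicated": {"EX_palmitate": (-10.0, 0.0),
                  "EX_tibolone": (-1.0, 0.0)},      # add tibolone uptake
}

for name, bounds in scenarios.items():
    with model:                      # changes revert when the block exits
        for rxn_id, b in bounds.items():
            model.reactions.get_by_id(rxn_id).bounds = b
        solution = model.optimize()
        print(name, solution.objective_value)       # compare predicted growth
```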

Keywords: astrocytes, flux balance analysis, genome scale metabolic reconstruction, inflammation, neuroprotection, tibolone

Procedia PDF Downloads 199
25 User-Driven Product Line Engineering for Assembling Large Families of Software

Authors: Zhaopeng Xuan, Yuan Bian, C. Cailleaux, Jing Qin, S. Traore

Abstract:

Traditional software engineering allows engineers to propose to their clients multiple specialized software distributions assembled from a shared set of software assets. The management of these assets, however, requires a trade-off between client satisfaction and the software engineering process. Clients find it more and more difficult to locate a distribution or components matching their needs across all the distributed repositories. This paper proposes a software engineering approach for a user-driven software product line in which engineers define a feature model but users drive the actual software distribution on demand. This approach makes the user the final actor, acting as a release manager in the software engineering process, which increases user satisfaction with the product and simplifies the search for required components. In addition, it provides a way for engineers to manage and assemble large software families. As a proof of concept, a user-driven software product line is implemented for Eclipse, an integrated development environment. An Eclipse feature model is defined and exposed to users on a cloud-based build platform from which clients can download individualized Eclipse distributions.

Keywords: software product line, model-driven development, reverse engineering and refactoring, agile method

Procedia PDF Downloads 409
24 Prototyping the Problem Oriented Medical Record for Connected Health Based on TypeGraphQL

Authors: Sabah Mohammed, Jinan Fiaidhi, Darien Sawyer

Abstract:

Data integration of health through connected services can save lives in the event of a medical emergency or provide efficient and effective interventions for the benefit of patients through the integration of bedside and benchside clinical research. Such integration will support the winds of change in healthcare by being predictive, pre-emptive, personalized, problem-oriented, and participatory. Prototyping a healthcare system that enables data integration has long been a big challenge for healthcare. However, an innovative solution started to emerge by focusing on problem lists, where everything can connect to the problem list, forming a growing graph. This notion was introduced by Dr. Lawrence Weed in the early 1970s, but the enabling technologies were not mature enough to support a successful implementation prototype. In this article, we describe our efforts in prototyping Dr. Lawrence Weed's problem-oriented medical record (POMR) and his patient case schema (SOAP) to shape a prototype for connected health. For this, we use the TypeGraphQL API and our enterprise-based QL4POMR to describe a web-based gateway for healthcare services connectivity. Our prototype has reported success in connecting to the HL7 FHIR medical record and the OpenTarget biomedical repositories.
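
As an illustration of how a client might talk to such a gateway, here is a hedged sketch of a GraphQL query over HTTP in Python; the endpoint URL and schema fields are hypothetical stand-ins, not the actual QL4POMR schema.

```python
# Hypothetical client query against a QL4POMR-style GraphQL gateway.
import requests

QUERY = """
query ($patientId: ID!) {
  problemList(patientId: $patientId) {
    problem
    soap { subjective objective assessment plan }
  }
}
"""

resp = requests.post(
    "https://example.org/ql4pomr/graphql",          # hypothetical gateway URL
    json={"query": QUERY, "variables": {"patientId": "123"}},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["data"]["problemList"])
```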

Keywords: connected health, problem-oriented healthcare record, SOAP, QL4POMR, typegraphQL

Procedia PDF Downloads 70
23 An Enhanced MEIT Approach for Itemset Mining Using Levelwise Pruning

Authors: Tanvi P. Patel, Warish D. Patel

Abstract:

Association rule mining forms the core of data mining and is one of its best-known methodologies. The objective of mining is to find interesting correlations, frequent patterns, associations, or causal structures among sets of items in transaction databases or other data repositories. Hence, association rule mining is imperative to mine patterns and then generate rules from the obtained patterns. For efficient targeted query processing, finding frequent patterns, and itemset mining, the Memory Efficient Itemset Tree (MEIT) offers an efficient way to generate an itemset tree structure. The memory-efficient IT is efficient for storing itemsets but takes more time compared to the traditional IT. The proposed strategy generates maximal frequent itemsets from the memory-efficient itemset tree by using levelwise pruning. First, pre-pruning of items based on a minimum support count is carried out, followed by itemset tree reconstruction. With maximal frequent itemsets, fewer patterns are generated and the tree size is also reduced compared to MEIT. Therefore, the enhanced memory-efficient IT approach proposed here helps to optimize main-memory overhead as well as reduce processing time.
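
The sketch below illustrates the two underlying ideas, support-based levelwise pruning and maximality checking, on toy data; it is a plain Python illustration of the concepts, not the MEIT tree structure itself.

```python
# Levelwise generation of frequent itemsets with support pruning, followed by
# extraction of the maximal ones (toy illustration of the concepts above).
from itertools import combinations

transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
min_support = 3

def support(itemset):
    return sum(itemset <= t for t in transactions)

items = {i for t in transactions for i in t}
frequent = []
for k in range(1, len(items) + 1):
    level = [frozenset(c) for c in combinations(sorted(items), k)
             if support(set(c)) >= min_support]
    if not level:                       # levelwise pruning: stop when empty
        break
    frequent.extend(level)

# An itemset is maximal if no frequent proper superset exists.
maximal = [s for s in frequent if not any(s < t for t in frequent)]
print(maximal)                          # the frozensets for ab, ac, and bc
```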

Keywords: association rule mining, itemset mining, itemset tree, MEIT, maximal frequent pattern

Procedia PDF Downloads 346
22 Personas Help Understand Users’ Needs, Goals and Desires in an Online Institutional Repository

Authors: Maha ALjohani, James Blustein

Abstract:

Communicating users' needs, goals, and problems helps designers and developers overcome challenges faced by end users. Personas are used to represent end users’ needs. In our research, creating personas allowed the following questions to be answered: Who are the potential user groups? What do they want to achieve by using the service? What are the problems that users face? What should the service provide to them? To develop realistic personas, we conducted a focus group discussion with undergraduate and graduate students and also interviewed a university librarian. The personas were created to help evaluate the Institutional Repository, which is based on the DSpace system. The profiles helped to communicate users' needs, abilities, tasks, and problems, and the task scenarios used in the heuristic evaluation were based on these personas. Four personas resulted from the focus group discussion and the librarian interview. We then used these personas to create focused task scenarios for a heuristic evaluation of the system interface, to ensure that it met users' needs, goals, problems, and desires. In this paper, we present the process we used to create the personas that led to the task scenarios used in the heuristic evaluation, as a follow-up study of the DSpace university repository.

Keywords: heuristic evaluation, institutional repositories, user experience, human computer interaction, user profiles, personas, task scenarios, heuristics

Procedia PDF Downloads 478
21 Agile Methodology for Modeling and Design of Data Warehouses -AM4DW-

Authors: Nieto Bernal Wilson, Carmona Suarez Edgar

Abstract:

Organizations hold structured and unstructured information in different formats, sources, and systems. Part of this comes from ERP systems under OLTP processing that support the information system; however, at the OLAP processing level, these organizations present some deficiencies. Part of the problem lies in the lack of interest in extracting knowledge from their data sources, as well as the absence of the operational capabilities needed to tackle this kind of project. Data warehouses and their applications are considered non-proprietary tools of great interest to business intelligence, since they are the repository basis for creating models or patterns (behavior of customers, suppliers, products, social networks, and genomics) and facilitate corporate decision-making and research. This paper presents a simple, structured methodology inspired by agile development models such as Scrum, XP, and AUP. It also draws on object-relational and spatial data models, and on the baseline of data modeling under UML and Big Data, to deliver an agile methodology for developing data warehouses that is simple and easy to apply. The methodology naturally takes into account processes for information analysis, visualization, and data mining, particularly for generating patterns and models derived from the structured fact objects.
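
As a minimal illustration of the fact/dimension idea that underlies any such warehouse, consider the following pandas sketch; the tables and columns are invented for illustration only.

```python
# Toy star-schema join: a fact table aggregated through a dimension table.
import pandas as pd

customers = pd.DataFrame({"customer_id": [1, 2],
                          "segment": ["retail", "corporate"]})   # dimension
sales = pd.DataFrame({"customer_id": [1, 1, 2],
                      "amount": [120.0, 80.0, 300.0]})           # fact table

# Joining facts to a dimension and aggregating is the basic OLAP operation.
report = (sales.merge(customers, on="customer_id")
               .groupby("segment")["amount"].sum())
print(report)
```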

Keywords: data warehouse, model data, big data, object fact, object relational fact, process developed data warehouse

Procedia PDF Downloads 384
20 Multimodal Database of Retina Images for Africa: The First Open Access Digital Repository for Retina Images in Sub Saharan Africa

Authors: Simon Arunga, Teddy Kwaga, Rita Kageni, Michael Gichangi, Nyawira Mwangi, Fred Kagwa, Rogers Mwavu, Amos Baryashaba, Luis F. Nakayama, Katharine Morley, Michael Morley, Leo A. Celi, Jessica Haberer, Celestino Obua

Abstract:

Purpose: The main aim of creating the Multimodal Database of Retinal Images for Africa (MoDRIA) was to provide a publicly available repository of retinal images for responsible researchers to conduct algorithm development, in a bid to curb the challenges of ophthalmic artificial intelligence (AI) in Africa. Methods: Data and retina images were ethically sourced from sites in Uganda and Kenya. Data on medical history, visual acuity, ocular examination, blood pressure, and blood sugar were collected. Retina images were captured using fundus cameras (Foru3-nethra and Canon CR-Mark-1). Images were stored on a secure online database. Results: The database consists of 7,859 retinal images in portable network graphics format from 1,988 participants. Images from patients with human immunodeficiency virus made up 18.9%, 18.2% of images were from hypertensive patients, 12.8% from diabetic patients, and the rest from ‘normal’ participants. Conclusion: Publicly available data repositories are a valuable asset in the development of AI technology. Therefore, there is a need to expand MoDRIA so as to provide larger datasets that are more representative of Sub-Saharan data.

Keywords: retina images, MoDRIA, image repository, African database

Procedia PDF Downloads 89
19 From Colonial Outpost to Cultural India: Folk Epics of India

Authors: Jyoti Brahma

Abstract:

Folk epics of India are found in various Indian languages. The study of folk epics and their importance in folkloristic study in India came into prominence only during the nineteenth century. British administrators and missionaries collected and documented folk epics from various parts of the country. The paper is an attempt to investigate how the colonial outpost appears to have penetrated the interiors of Indian land and society and triggered off the Indian Renaissance. It takes into account the compositions of the epics of India and the attention they received during the nineteenth century, which in turn gave rise to the national consciousness shaping the culture of India. Composed as oral traditions, these folk epics are now seen as repositories of historical consciousness, whereas in earlier times societies without literacy were said to be without history. So, there is an urgent need to re-examine the British impact on Indian literary traditions. The Bhakti poets, through their nuanced responses in their efforts to change the behavior of Indian society, give us the perfect example of deferment of the clear-cut distinction between the folk and the classical in the context of India. It evades a pure categorization and classification of the classical and constitutes part of the folk traditions of the cultural heritage of India. Therefore, the ethical question of what is ontologically known as ordinary discourse, in the case of ‘folk’ forms, metaphors, and folk language, gains importance once more. The paper also seeks to outline the significant factors responsible for shaping the destiny of folklore in South India, particularly in the four political states of the Indian Union: Andhra Pradesh, Karnataka, Kerala, and Tamil Nadu, in what could be termed the South Indian “cultural zones”.

Keywords: colonial, folk, folklore, tradition

Procedia PDF Downloads 291
18 The Discussions of Love, Determinism, and Providence in Ibn Sina (Avicenna) and al-Kirmani

Authors: Maria De Cillis

Abstract:

This paper addresses the subject of love in the work of two of the most prominent Islamic philosophers: Ibn Sīnā (known in the Latin world as Avicenna, d. 1037) and al-Kirmānī (d. c. 1021). By surveying the connection that the concept of love entertains with the notions of divine providence and determinism in the luminaries’ theoretical systems, the present paper highlights differences and similarities in their respective approaches to the subjects. Through a thorough analysis of primary and secondary literature, it will be shown that Avicenna’s thought, which is mainly informed by Aristotelian and Farābīan metaphysical and cosmological stances, is also integrated with mystical underpinnings. Particularly, in Avicenna’s Risāla fī’l-ʿishq, love becomes the expression of the divine providence which operates through the intellectual striving the souls undertake in their desire to return to their First Cause. Love is also portrayed as an instrument helping the divine decree to remain unadulterated by keeping existing beings within their species and genera, as well as an instrument employed by God to know and be known. This paper also discusses that if, on the one hand, al-Kirmānī speaks of love as the Aristotelian and Farābīan motive-force spurring existents to achieve perfection and as a tool which facilitates the status quo of divine creation, on the other hand, he remains steadily positioned within Ismā‘īlī and Neoplatonic paradigms: the return of all loving beings to their Source is interrupted at the level of the first Intellect, whilst God remains inaccessible and ineffable. By investigating his opus magnum, the Rāḥat al-ʿaql, we shall highlight how al-Kirmānī also emphasizes the notion of divine providence which allows humans to attain their ultimate completeness by following the teachings of the Imams, repositories of the knowledge necessary to serve the unreachable deity.

Keywords: Avicenna, determinism, love, al-Kirmani, Ismaili philosophy

Procedia PDF Downloads 204
17 An Intelligent Search and Retrieval System for Mining Clinical Data Repositories Based on Computational Imaging Markers and Genomic Expression Signatures for Investigative Research and Decision Support

Authors: David J. Foran, Nhan Do, Samuel Ajjarapu, Wenjin Chen, Tahsin Kurc, Joel H. Saltz

Abstract:

The large-scale data and computational requirements of investigators throughout the clinical and research communities demand an informatics infrastructure that supports both existing and new investigative and translational projects in a robust, secure environment. In some subspecialties of medicine and research, the capacity to generate data has outpaced the methods and technology used to aggregate, organize, access, and reliably retrieve this information. Leading health care centers now recognize the utility of establishing an enterprise-wide clinical data warehouse. The primary benefits that can be realized through such efforts include cost savings, efficient tracking of outcomes, advanced clinical decision support, improved prognostic accuracy, and more reliable clinical trials matching. The overarching objective of the work presented here is the development and implementation of a flexible Intelligent Retrieval and Interrogation System (IRIS) that exploits the combined use of computational imaging, genomics, and data-mining capabilities to facilitate clinical assessments and translational research in oncology. The proposed System includes a multi-modal Clinical & Research Data Warehouse (CRDW) that is tightly integrated with a suite of computational and machine-learning tools to provide insight into underlying tumor characteristics that are not apparent by human inspection alone. A key distinguishing feature of the System is a configurable Extract, Transform and Load (ETL) interface that enables it to adapt to different clinical and research data environments. This project is motivated by the growing emphasis on establishing Learning Health Systems, in which cyclical hypothesis generation and evidence evaluation become integral to improving the quality of patient care. To facilitate iterative prototyping and optimization of the algorithms and workflows for the System, the team has already implemented a fully functional Warehouse that can reliably aggregate information originating from multiple data sources including EHRs, Clinical Trial Management Systems, Tumor Registries, Biospecimen Repositories, Radiology PACS systems, Digital Pathology archives, unstructured clinical documents, and Next Generation Sequencing services. The System enables physicians to systematically mine and review the molecular, genomic, image-based, and correlated clinical information about patient tumors, individually or as part of large cohorts, to identify patterns that may influence treatment decisions and outcomes. The CRDW core system has facilitated peer-reviewed publications and funded projects, including an NIH-sponsored collaboration to enhance the cancer registries in Georgia, Kentucky, New Jersey, and New York with machine-learning-based classifications and quantitative pathomics feature sets. The CRDW has also resulted in a collaboration with the Massachusetts Veterans Epidemiology Research and Information Center (MAVERIC) at the U.S. Department of Veterans Affairs to develop algorithms and workflows to automate the analysis of lung adenocarcinoma. Those studies showed that combining computational nuclear signatures with traditional WHO criteria through the use of deep convolutional neural networks (CNNs) led to improved discrimination among tumor growth patterns. The team has also leveraged the Warehouse to support studies investigating the potential of utilizing a combination of genomic and computational imaging signatures to characterize prostate cancer. The results of those studies show that integrating image biomarkers with genomic pathway scores is more strongly correlated with disease recurrence than standard clinical markers.
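
To make the configurable ETL idea concrete, here is a minimal sketch of a dictionary-driven mapping in Python; the source fields, target columns, and transforms are illustrative, not the IRIS system's actual interface.

```python
# Config-driven transform step: each source field maps to a warehouse column
# plus a conversion function, so new data environments only need a new config.
from typing import Callable

ETL_CONFIG: dict[str, tuple[str, Callable]] = {
    "pt_mrn":   ("patient_id", str.strip),
    "dx_code":  ("diagnosis",  str.upper),
    "ki67_pct": ("ki67",       float),
}

def transform(record: dict) -> dict:
    """Apply the config-driven mapping to one extracted source record."""
    return {dst: fn(record[src]) for src, (dst, fn) in ETL_CONFIG.items()}

row = transform({"pt_mrn": " 0042 ", "dx_code": "c61", "ki67_pct": "12.5"})
print(row)  # {'patient_id': '0042', 'diagnosis': 'C61', 'ki67': 12.5}
```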

Keywords: clinical data warehouse, decision support, data-mining, intelligent databases, machine-learning

Procedia PDF Downloads 91
16 Qualitative and Quantitative Methods in Multidisciplinary Fields Collection Development

Authors: Hui Wang

Abstract:

Traditional collection-building approaches are limited in breadth and scope and are not necessarily suitable for developing multidisciplinary fields in the institutes of the Chinese Academy of Sciences. The increase in multidisciplinary research requires a viable approach to collection development in these libraries. This study uses qualitative and quantitative analysis to assess collections. The quantitative analysis consists of three levels of evaluation, which include realistic demand, potential demand, and trend demand analysis. For each institute, three samples were selected: from the institute itself, from one or more top international institutes in closely related research fields, and from future research hotspots. Each sample contains an appropriate number of papers published in the past five years. Several keywords and organization names were reasonably combined to search commercial databases and institutional repositories. The publishing information and citations in the bibliographies of these papers were selected to build the dataset. A weighted evaluation model and citation analysis were used to calculate the demand intensity index of every journal and book. Principal Investigator selection and database traffic provide qualitative evidence to describe the demand frequency. The demand intensity, demand frequency, and academic committee recommendations were comprehensively considered to recommend collection development. Collection gaps or weaknesses were ascertained by comparing the current collection with the recommended collection. This approach was applied in more than 80 institutes' libraries in the Chinese Academy of Sciences over the past three years. The evaluation results provided important evidence for collection building in the second year. The latest user survey results showed that the updated collection's capacity to support research in a multidisciplinary subject area has increased significantly.
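
Since the paper does not spell out its weighting model, the following is only a hedged sketch of how a weighted demand intensity index might be computed; the weights and citation counts are invented.

```python
# Illustrative weighted demand-intensity index for one journal.
weights = {"realistic": 0.5, "potential": 0.3, "trend": 0.2}   # assumed weights

# Citation counts of the journal within each demand-level sample of papers.
journal_citations = {"realistic": 120, "potential": 45, "trend": 80}

demand_intensity = sum(weights[level] * count
                       for level, count in journal_citations.items())
print(f"demand intensity index: {demand_intensity:.1f}")       # 89.5
```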

Keywords: citation analysis, collection assessment, collection development, quantitative analysis

Procedia PDF Downloads 179
15 Epigenetic Modifying Potential of Dietary Spices: Link to Cure Complex Diseases

Authors: Jeena Gupta

Abstract:

In today’s world of pharmaceutical products, one should not forget the healing properties of inexpensive food materials, especially spices. They are known to possess hidden pharmaceutical ingredients, imparting to them anti-microbial, anti-oxidant, anti-inflammatory, and anti-carcinogenic qualities. Further, aberrant epigenetic regulatory mechanisms such as DNA methylation, histone modifications, or altered microRNA expression patterns, which regulate gene expression without changing the DNA sequence, contribute significantly to the development of various diseases. Changing lifestyles and diets exert their effect by influencing these epigenetic mechanisms, which are thus the target of dietary phytochemicals. Bioactive components of plants have been in use for ages, but their potential to reverse epigenetic alterations and prevent disease is yet to be explored. Spices are rich repositories of many bioactive constituents, which are responsible for their unique aroma and taste. Some spices, like curcuma and garlic, have been well evaluated for their epigenetic regulatory potential, but for others it is largely unknown. We have evaluated the biological activity of phytoactive components of fennel, cardamom, and fenugreek by in silico molecular modeling and in vitro and in vivo studies. Ligand-based similarity studies were conducted to identify structurally similar compounds to understand their biological phenomena. The database searching was done using fenchone from fennel, sabinene from cardamom, and protodioscin from fenugreek as query molecules in different small-molecule databases. The results of the database searching showed that these compounds have potential binding to different targets found in the Protein Data Bank. Furthermore, in addition to their potential as epigenetic modifiers, in vitro studies demonstrated the antimicrobial, antifungal, antioxidant, and cytoprotective effects of fenchone, sabinene, and protodioscin. To the best of our knowledge, such studies facilitate target fishing and provide a roadmap in the drug design and discovery process for the identification of novel therapeutics.
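
For illustration, a ligand-based similarity search of the kind described can be sketched with RDKit as follows; the fenchone SMILES is an assumption to be checked against a reference database, and the "database" entries are simple placeholders.

```python
# Minimal Tanimoto similarity screen with RDKit Morgan fingerprints.
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

query = Chem.MolFromSmiles("CC1(C)C(=O)C2(C)CCC1C2")   # fenchone (assumed SMILES)
database = {                                           # placeholder entries
    "ethanol": "CCO",
    "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
}

qfp = AllChem.GetMorganFingerprintAsBitVect(query, 2, nBits=2048)
for name, smi in database.items():
    fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smi), 2, nBits=2048)
    print(name, round(DataStructs.TanimotoSimilarity(qfp, fp), 2))
```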

Keywords: epigenetics, spices, phytochemicals, fenchone

Procedia PDF Downloads 127
14 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists amongst accuracy, computing resources, and explainability of classification results. Overall, the analysis provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
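
A minimal version of the k-mer representation and classification pipeline can be sketched as follows; the sequences, labels, and model choice are toy stand-ins, not the study's data or exact method.

```python
# k-mer feature extraction feeding a simple classifier (illustrative only).
from collections import Counter
from sklearn.linear_model import LogisticRegression

def kmer_counts(seq: str, k: int) -> Counter:
    """Count all overlapping k-mers in a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

# Toy genomes and phenotype labels (0 = susceptible, 1 = resistant).
genomes = ["ACGTACGTACGTAAAT", "ACGTACGTACGTCCCG", "TTTTACGTACGTAAAT"]
labels = [0, 1, 0]

k = 4                                        # small k for the toy example
counts = [kmer_counts(g, k) for g in genomes]
vocab = sorted(set().union(*counts))         # shared k-mer vocabulary
X = [[c[m] for m in vocab] for c in counts]  # count matrix, one row per genome

model = LogisticRegression(max_iter=1000).fit(X, labels)
print(model.predict(X))
```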

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 137
12 Reactive Transport Modeling in Carbonate Rocks: A Single Pore Model

Authors: Priyanka Agrawal, Janou Koskamp, Amir Raoof, Mariette Wolthers

Abstract:

Calcite is the main mineral found in carbonate rocks, which form significant hydrocarbon reservoirs and subsurface repositories for CO2 sequestration. The injected CO2 mixes with the reservoir fluid and disturbs the geochemical equilibrium, triggering calcite dissolution. Different combinations of fluid chemistry and injection rate may therefore result in different evolution of porosity, permeability, and dissolution patterns. To model the changes in porosity and permeability, the Kozeny-Carman relation K ∝ φ^n is used, where K is permeability and φ is porosity. The value of n is mostly based on experimental data or pore network models. In pore network models, this derivation depends on the accuracy of the relation used for conductivity and pore volume change. In fact, at the single pore scale, this relationship is the result of the pore shape development due to dissolution. We have prepared a new reactive transport model for a single pore which simulates the complex chemical reaction of carbonic-acid-induced calcite dissolution and the subsequent pore-geometry evolution at a single pore scale. We use the COMSOL Multiphysics package 5.3 for the simulation. COMSOL utilizes the arbitrary Lagrangian-Eulerian (ALE) method for the free-moving domain boundary. We examined the effect of flow rate on the evolution of single pore shape profiles due to calcite dissolution. We used three flow rates to cover diffusion-dominated and advection-dominated transport regimes. The fluid in diffusion-dominated flow (Pe numbers 0.037 and 0.37) becomes less reactive along the pore length and thus produces non-uniform pore shapes. However, for the advection-dominated flow (Pe number 3.75), the fast velocity of the fluid keeps it relatively more reactive towards the end of the pore length, thus yielding a uniform pore shape. Different pore shapes, in terms of inlet opening versus overall pore opening, will have an impact on the relation between changing volumes and conductivity. We have related the pore shape to the Pe number, which controls the transport regime. For every Pe number, we have derived the relation between conductivity and porosity. These relations will be used in pore network models to obtain the porosity and permeability variation.
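
The porosity-permeability update that such relations feed into can be sketched in a few lines; k0, phi0, and n below are illustrative values, not the paper's fitted parameters.

```python
# Power-law (Kozeny-Carman style) porosity-permeability update.
def permeability(phi: float, k0: float = 1e-15,
                 phi0: float = 0.2, n: float = 3.0) -> float:
    """K = K0 * (phi / phi0)**n, with K0 the permeability at porosity phi0."""
    return k0 * (phi / phi0) ** n

# Porosity grows as calcite dissolves; track the permeability response.
for phi in (0.20, 0.25, 0.30):
    print(f"phi = {phi:.2f} -> K = {permeability(phi):.2e} m^2")
```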

Keywords: single pore, reactive transport, calcite system, moving boundary

Procedia PDF Downloads 346
11 Building Information Modeling-Based Information Exchange to Support Facilities Management Systems

Authors: Sandra T. Matarneh, Mark Danso-Amoako, Salam Al-Bizri, Mark Gaterell

Abstract:

Today’s facilities are ever more sophisticated, and the need for available and reliable information for operation and maintenance activities is vital. The key challenge for facilities managers is to have real-time, accurate, and complete information to perform their day-to-day activities and to provide their senior management with accurate information for the decision-making process. Currently, various technology platforms, data repositories, or database systems such as Computer-Aided Facility Management (CAFM) are used for these purposes in different facilities. In most current practice, the data is extracted from paper construction documents and re-entered manually into one of these computerized information systems. Construction Operations Building information exchange (COBie) is a non-proprietary data format that contains the non-geometric asset data captured and collected during the design and construction phases for owners' and facility managers' use. Recently, software vendors have developed add-in applications to generate COBie spreadsheets automatically. However, most of these add-in applications can generate only a limited amount of COBie data, so considerable time is still required to enter the remaining data manually to complete the COBie spreadsheet. Some of the data that cannot be generated by these COBie add-ins are essential for facilities managers' day-to-day activities, such as job sheets, which include preventive maintenance schedules. To facilitate a seamless data transfer between BIM models and facilities management systems, we developed a framework that automatically exports the data extracted directly from BIM models to an external web database and then enables different stakeholders to access the external web database and enter the required asset data directly, generating a rich COBie spreadsheet that contains most of the asset data required for efficient facilities management operations. The proposed framework is part of ongoing research and will be demonstrated and validated on a typical university building. Moreover, the proposed framework supplements the existing body of knowledge in the facilities management domain by providing a novel framework that facilitates seamless data transfer between BIM models and facilities management systems.
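
A hedged sketch of the merge step, combining BIM-extracted rows with stakeholder-entered fields into a COBie-style sheet, is shown below; the column names follow COBie conventions only loosely and the data are invented.

```python
# Merge BIM-extracted asset rows with later stakeholder input, then write a
# COBie-style component sheet (illustrative, not the framework's actual code).
import csv

bim_export = [   # fields typically extractable from the BIM model
    {"Name": "AHU-01", "TypeName": "Air Handling Unit", "Space": "Plant Room"},
]
stakeholder_input = {   # fields entered later via the external web database
    "AHU-01": {"SerialNumber": "SN-4711", "WarrantyDurationYears": "5"},
}

with open("cobie_component.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[
        "Name", "TypeName", "Space", "SerialNumber", "WarrantyDurationYears"])
    writer.writeheader()
    for row in bim_export:
        row.update(stakeholder_input.get(row["Name"], {}))
        writer.writerow(row)
```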

Keywords: building information modeling, BIM, facilities management systems, interoperability, information management

Procedia PDF Downloads 92
10 A Web-Based Systems Immunology Toolkit Allowing the Visualization and Comparative Analysis of Publically Available Collective Data to Decipher Immune Regulation in Early Life

Authors: Mahbuba Rahman, Sabri Boughorbel, Scott Presnell, Charlie Quinn, Darawan Rinchai, Damien Chaussabel, Nico Marr

Abstract:

Collections of large-scale datasets made available in public repositories can be used to identify and fill gaps in biomedical knowledge. But first, these data need to be made readily accessible to researchers for analysis and interpretation. Here, a collection of transcriptome datasets was made available to investigate the functional programming of human hematopoietic cells in early life. Thirty-two datasets were retrieved from the NCBI Gene Expression Omnibus (GEO) and loaded into a custom, interactive web application called the Gene Expression Browser (GXB), designed for visualization and query of integrated large-scale data. Multiple sample groupings and gene rank lists were created based on the study design and variables in each dataset. Web links to customized graphical views can be generated by users and subsequently used to present data graphically in manuscripts for publication. The GXB tool also enables browsing of a single gene across datasets, which can provide information on the role of a given molecule across biological systems. The dataset collection is available online. As a proof of principle, one of the datasets (GSE25087) was re-analyzed to identify genes that are differentially expressed by regulatory T cells in early life. Re-analysis of this dataset and a cross-study comparison using multiple other datasets in the above-mentioned collection revealed that PMCH, a gene encoding a precursor of melanin-concentrating hormone (MCH), a cyclic neuropeptide, is highly expressed in a variety of other hematopoietic cell types, including neonatal erythroid cells as well as plasmacytoid dendritic cells upon viral infection. Our findings suggest an as yet unrecognized role of MCH in immune regulation, thereby highlighting the unique potential of the curated dataset collection and systems biology approach to generate new hypotheses which can be tested in future mechanistic studies.
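
For readers who want to reproduce such a re-analysis, the cited dataset can be pulled directly from GEO; the sketch below assumes the GEOparse library, and the column handling depends on the platform annotation.

```python
# Retrieve GSE25087 from GEO for local re-analysis (assumes GEOparse).
import GEOparse

gse = GEOparse.get_GEO(geo="GSE25087", destdir="./data")

for gsm_name, gsm in list(gse.gsms.items())[:3]:   # peek at three samples
    print(gsm_name, gsm.metadata.get("title"))
    print(gsm.table.head())                        # probe-level values (ID_REF, VALUE)
```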

Keywords: early-life, GEO datasets, PMCH, interactive query, systems biology

Procedia PDF Downloads 269
9 Relationship of Macro-Concepts in Educational Technologies

Authors: L. R. Valencia Pérez, A. Morita Alexander, Peña A. Juan Manuel, A. Lamadrid Álvarez

Abstract:

This research reflects on and identifies the explanatory variables involved in educational technology and the relationships between them, all encompassed in four macro-concepts: cognitive inequality, economy, food, and language. These give the guideline for a more detailed knowledge of educational systems, communication and equipment, physical space, and teachers; all of them, interacting with each other, give rise to what is called educational technology management. These elements contribute to a very specific knowledge of communications equipment, networks and computer equipment, systems, and content repositories. This is intended to establish the importance of knowing the global environment in the transfer of knowledge to poor countries, so that it does not diminish their capacity to be authentic and to preserve their cultures, their languages or dialects, their hierarchies, and their real needs; in short, to respect the customs of the different towns, villages, or cities that are intended to be reached through the use of internationally agreed professional educational technologies. The methodology used in this research is analytical-descriptive, which allows each of the variables that, in our opinion, must be taken into account to be explained, in order to achieve an optimal incorporation of educational technology in a model that gives results in the medium term. The idea is that, in an encompassing way, the concepts will be integrated with others of greater coverage until reaching macro-concepts that have national coverage in the countries and that are elements of conciliation in the different federal and international reforms. At the center of the model is educational technology, which is directly related to the concepts contained in factors such as the educational system, communication and equipment, spaces, and teachers, which are globally immersed in the macro-concepts of cognitive inequality, economy, food, and language. One of the major contributions of this article is to express this idea as an algorithm that is as unbiased as possible when evaluating this indicator, since other indicators are taken from international entities of reference, such as the OECD, in the education systems studied, so that they are not influenced by particular political or interest-group pressures. This work opens the way for a relationship between the entities involved, conceptual, procedural, and human, to clearly identify the convergence of their impact on the problem of education and how this relationship can contribute to an improvement, while also showing the possibility of reaching a comprehensive education reform for all.

Keywords: relationships of macro-concepts, cognitive inequality, economics, food and language

Procedia PDF Downloads 177
8 Integration of “FAIR” Data Principles in Longitudinal Mental Health Research in Africa: Lessons from a Landscape Analysis

Authors: Bylhah Mugotitsa, Jim Todd, Agnes Kiragga, Jay Greenfield, Evans Omondi, Lukoye Atwoli, Reinpeter Momanyi

Abstract:

The INSPIRE network aims to build an open, ethical, sustainable, and FAIR (Findable, Accessible, Interoperable, Reusable) data science platform, particularly for longitudinal mental health (MH) data. While studies have been done at the clinical and population levels, there still exist limitations in data and research in LMICs, which pose a risk of underrepresentation of mental disorders. It is vital to examine the existing longitudinal MH data, focusing on how FAIR the datasets are. This landscape analysis aimed to provide both an overall level of evidence on the availability of longitudinal datasets and the degree of consistency in the longitudinal studies conducted. Utilizing prompters proved instrumental in streamlining the analysis process, facilitating access, crafting code snippets, and categorizing and analyzing extensive data repositories related to depression, anxiety, and psychosis in Africa. Leveraging artificial intelligence (AI), we filtered through over 18,000 scientific papers spanning from 1970 to 2023. This AI-driven approach enabled the identification of 228 longitudinal research papers meeting the inclusion criteria. Quality assurance revealed 10% incorrectly identified articles and two duplicates, and the results underscore the prevalence of longitudinal MH research in South Africa, with a focus on depression. From the analysis, evaluating data and metadata adherence to FAIR principles remains crucial for enhancing the accessibility and quality of MH research in Africa. While AI has the potential to enhance research processes, challenges such as privacy concerns and data security risks must be addressed. Ethical and equity considerations in data sharing and reuse are also vital. There is a need for collaborative efforts across disciplinary and national boundaries to improve the Findability and Accessibility of data. Current efforts should also focus on creating integrated data resources and tools to improve the Interoperability and Reusability of MH data. Practical steps for researchers include careful study planning, data preservation, machine-actionable metadata, and promoting data reuse to advance science and improve equity. Metrics and recognition should be established to incentivize adherence to FAIR principles in MH research.

Keywords: longitudinal mental health research, data sharing, fair data principles, Africa, landscape analysis

Procedia PDF Downloads 33
7 Personalized Infectious Disease Risk Prediction System: A Knowledge Model

Authors: Retno A. Vinarti, Lucy M. Hederman

Abstract:

This research describes a knowledge model for a system which gives personalized alerts to users about infectious disease risks in the context of weather, location, and time. The knowledge model is based on established epidemiological concepts augmented by information gleaned from infection-related data repositories. Existing disease risk prediction research focuses more on utilizing raw historical data to yield seasonal patterns of infectious disease risk emergence. This research incorporates both data and epidemiological concepts gathered from the Atlas of Human Infectious Diseases (AHID) and the Centers for Disease Control and Prevention (CDC) as the basic reasoning for infectious disease risk prediction. Using the CommonKADS methodology, the disease risk prediction task is a synthetic assignment task, proceeding from knowledge identification through specification and refinement to implementation. First, knowledge is gathered from AHID, primarily from the epidemiology and risk group chapters for each infectious disease. The result of this stage is five major elements (Person, Infectious Disease, Weather, Location and Time) and their properties. At the knowledge specification stage, the initial tree model of each element and detailed relationships are produced. This research also includes a validation step as part of knowledge refinement: on the basis that the best model is formed using the most common features, Frequency-based Selection (FBS) is applied. The portion of the Infectious Disease risk model relating to Person comes out strongest, with Location next, and Weather weaker. For the Person attribute, Age is the strongest, Activity and Habits are moderate, and Blood type is weakest. For the Location attribute, the General category (e.g., continent, region, country, and island) comes out much stronger than the Specific category (i.e., terrain features). For the Weather attribute, the Less Precise category (i.e., season) comes out stronger than the Precise category (i.e., exact temperature or humidity interval). However, given that some infectious diseases are significantly more serious than others, a frequency-based metric may not be appropriate. Future work will incorporate epidemiological measurements of disease seriousness (e.g., odds ratio, hazard ratio, and fatality rate) into the validation metrics. This research is limited to modelling existing knowledge about epidemiology and chain-of-infection concepts. A further step, verification in the knowledge refinement stage, might cause some minor changes to the shape of the tree.
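
The FBS step can be illustrated in a few lines; the feature lists below are invented toy data, whereas the paper derives its features from the AHID and CDC sources.

```python
# Frequency-based Selection: keep features shared by many disease descriptions.
from collections import Counter

disease_features = {
    "malaria": ["age", "location_general", "season"],
    "cholera": ["age", "location_general", "habits"],
    "dengue":  ["age", "season", "location_general"],
}

freq = Counter(f for feats in disease_features.values() for f in feats)
threshold = 2                           # keep features shared by >= 2 diseases
selected = [f for f, n in freq.items() if n >= threshold]
print(selected)                         # ['age', 'location_general', 'season']
```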

Keywords: epidemiology, knowledge modelling, infectious disease, prediction, risk

Procedia PDF Downloads 210
6 Data Mining in Healthcare for Predictive Analytics

Authors: Ruzanna Muradyan

Abstract:

Medical data mining is a crucial field in contemporary healthcare that offers cutting-edge tactics with enormous potential to transform patient care. This abstract examines how sophisticated data mining techniques could transform the healthcare industry, with a special focus on how they might improve patient outcomes. Healthcare data repositories have dynamically evolved, producing a rich tapestry of diverse, multi-dimensional information that includes genetic profiles, lifestyle markers, electronic health records, and more. By utilizing data mining techniques within this vast library, a variety of prospects for precision medicine, predictive analytics, and insight production become visible. Predictive modeling for illness prediction, risk stratification, and therapy efficacy evaluation is an important point of focus. Healthcare providers may use this abundance of data to tailor treatment plans, identify high-risk patient populations, and forecast disease trajectories by applying machine learning algorithms and predictive analytics. Better patient outcomes, more efficient use of resources, and early treatments are made possible by this proactive strategy. Furthermore, data mining techniques act as catalysts to reveal complex relationships between apparently unrelated data pieces, providing enhanced insights into the causes of disease, genetic susceptibilities, and environmental factors. Healthcare practitioners can gain practical insights that guide disease prevention, customized patient counseling, and focused therapies by analyzing these associations. The abstract explores the problems and ethical issues that come with using data mining techniques in the healthcare industry. In order to properly use these approaches, it is essential to find a balance between data privacy, security issues, and the interpretability of complex models. Finally, this abstract demonstrates the revolutionary power of modern data mining methodologies in transforming the healthcare sector. Healthcare practitioners and researchers can uncover unique insights, enhance clinical decision-making, and ultimately elevate patient care to unprecedented levels of precision and efficacy by employing cutting-edge methodologies.
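
As a toy illustration of the risk-stratification idea, consider the following scikit-learn sketch; the features and labels are synthetic stand-ins for the EHR-derived variables discussed above, not a clinical model.

```python
# Synthetic risk stratification: fit a model, score patients, bin into tiers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # e.g., scaled age, BMI, blood pressure
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]      # per-patient risk scores

# Stratify patients into low/medium/high risk tiers for targeted follow-up.
tiers = np.digitize(risk, bins=[0.33, 0.66])
print("patients per tier (low/medium/high):", np.bincount(tiers, minlength=3))
```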

Keywords: data mining, healthcare, patient care, predictive analytics, precision medicine, electronic health records, machine learning, predictive modeling, disease prognosis, risk stratification, treatment efficacy, genetic profiles, precision health

Procedia PDF Downloads 33
5 Re-interpreting Ruskin with Respect to the Wall

Authors: Anjali Sadanand, R. V. Nagarajan

Abstract:

Architecture morphs with advances in technology, and the roof, wall, and floor, as basic elements of a building, follow in redefining themselves over time. Their contribution is bound by time and held by design principles that deal with function, sturdiness, and beauty. Architecture engages with people to give joy through its form, material, design structure, and spatial qualities. This paper attempts to re-interpret John Ruskin’s “The Seven Lamps of Architecture” in the context of the architecture of the modern and present period. The paper focuses on the “wall” as an element of study in this context. Four of Ruskin’s seven lamps will be discussed, namely beauty, truth, life, and memory, through examples of architecture ranging from modernism to the contemporary architecture of today. The study will focus on the relevance of Ruskin’s principles to the “wall” specifically, in buildings of different materials and over a range of typologies from all parts of the world. Two examples will be analyzed for each lamp. It will be shown that in each case there is relevance to the significance of Ruskin’s lamps in modern and contemporary architecture. The nature to which Ruskin alludes for his lamp of “beauty” is found in the different expressions of interpretation used by Corbusier in his Villa Stein facade, based on proportion found in nature, and in Toyo Ito’s direct translation of an understanding of the structure of trees into his facade design of the showroom for a Japanese bag boutique. “Truth” is shown in Mies van der Rohe’s Crown Hall building, with its clarity of material and structure, and in Studio Mumbai’s Palmyra House, which celebrates the use of natural materials and local craftsmanship. “Life” is reviewed with a sustainable house in Kerala by Ashrams Ravi and Alvar Aalto’s summer house, which illustrate walls as repositories of intellectual thought and craft. “Memory” is discussed with Charles Correa’s Jawahar Kala Kendra and Venturi’s Vanna Venturi House, disclosing facades as text in the context of their materiality and iconography. Beauty is reviewed in Villa Stein and Toyo Ito’s branded retail building in Tokyo. The paper thus concludes that Ruskin’s lamps can be interpreted in today’s context and add richness and meaning to the understanding of architecture.

Keywords: beauty, design, facade, modernism

Procedia PDF Downloads 97
4 Disaggregating Communities and the Making of Factional States: Evidence from Joint Forest Management in Sundarban, India

Authors: Amrita Sen

Abstract:

In the face of a growing insurgent movement and the perceived failure of the state and the market to manage resources sustainably, a range of decentralized forest management policies was formulated over the last two decades, recognizing the need for community representation within statutory methods of forest management. This recognition rested on the virtues of ecological sustainability and traditional environmental knowledge, of which forest-dependent communities were considered the principal repositories. The present study, in the light of empirical insights, reflects on contemporary disjunctions between the preconceived communitarian ethic in environmentalism and the lived reality of forest-based life-worlds. Many of the popular as well as dominant ideologies that have historically shaped the conceptual and theoretical understanding of sociology need further scrutiny in the context of the emerging contours of empirical knowledge, which lend themselves to substantive reworking and analysis. The image of the community is one such concept: an identity that has long defined perspectives and processes associated with people living together harmoniously in small physical spaces. Through an ethnographic account of the implementation of Joint Forest Management (JFM) in a forest-fringe village in Sundarban, the study explores the ways in which the idea of ‘community’ is transformed through the process of state-making, and why it must depart from the standard, conventional assumption of homogeneity and internal equity. The study directs attention towards the anthropology of micro-politics, disaggregating an essentially constructivist anthropology of ‘collective identities’ so that political mobilizations become visible within the seemingly culturalist production of communities. The two critical questions the paper asks in this context are: how is the ‘local’ constituted within community-based conservation practices? And within efforts at collaborative forest management, how accurately does the depiction of ‘indigenous environmental knowledge’ correspond to its purported role in sustainable conservation practices? Reflecting on the execution of JFM in Sundarban, the study critically explores the ways in which the state ceases to be ‘trans-national’ and interacts with rural life-worlds through its local factions. Simultaneously, the study attempts to articulate a competing representation of community, shaped by increasing political negotiations and bureaucratic alignments, that strains against the usual preoccupations with tradition, primordiality, and non-material culture, as well as the romanticized construction of indigeneity.

Keywords: community, environmentalism, JFM, state-making, identities, indigenous

Procedia PDF Downloads 177
3 Predicting Long-Term Performance of Concrete under Sulfate Attack

Authors: Elakneswaran Yogarajah, Toyoharu Nawa, Eiji Owaki

Abstract:

Cement-based materials have been used in various reinforced concrete structural components as well as in nuclear waste repositories. Sulfate attack has been an environmental issue for cement-based materials exposed to sulfate-bearing groundwater or soils, and it plays an important role in the durability of concrete structures. The reaction between penetrating sulfate ions and cement hydrates can result in swelling, spalling, and cracking of the cement matrix in concrete. These processes reduce mechanical properties and decrease the service life of an affected structure. It has been identified that the precipitation of secondary sulfate-bearing phases such as ettringite, gypsum, and thaumasite can cause the damage. Furthermore, crystallization of soluble salts such as sodium sulfate induces degradation through crystal formation and phase changes. Crystallization of mirabilite (Na₂SO₄·10H₂O) and thenardite (Na₂SO₄), or their phase changes (mirabilite to thenardite or vice versa) due to temperature or sodium sulfate concentration, does not involve any chemical interaction with cement hydrates. Over the past couple of decades, intensive work has been carried out on sulfate attack in cement-based materials. However, several uncertainties still exist regarding the mechanisms of damage to concrete in sulfate environments. In this study, modelling work has been conducted to investigate the chemical degradation of cementitious materials in various sulfate environments. Both internal and external sulfate attack are considered in the simulation. For internal sulfate attack, the hydrate assemblage and pore solution chemistry of Portland cement (PC) and slag co-hydrating with sodium sulfate solution are calculated to determine the degradation of PC and slag-blended cementitious materials. Pitzer interaction coefficients were used to calculate the activity coefficients of the solution chemistry at high ionic strength. The deterioration mechanism of co-hydrating cementitious materials with 25% Na₂SO₄ by weight is the formation of mirabilite crystals and ettringite; their formation depends strongly on sodium sulfate concentration and temperature. For external sulfate attack, the deterioration of various types of cementitious materials under external sulfate ingress is simulated through a reactive transport model. The reactive transport model is verified against experimental data in terms of the phase assemblage of various cementitious materials, with spatial distribution, for different sulfate solutions. Finally, the reactive transport model is used to predict the long-term performance of cementitious materials exposed to 10% Na₂SO₄ for 1000 years. The dissolution of cement hydrates and the secondary formation of sulfate-bearing products, mainly ettringite, are the dominant degradation mechanisms, but not sodium sulfate crystallization.
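
To make the reactive-transport idea concrete, the following Python sketch solves a much-simplified 1-D diffusion-reaction model of external sulfate ingress, with a first-order sink standing in for the binding of sulfate into phases such as ettringite. The diffusivity, rate constant, and geometry are assumed illustrative values; the study itself used PHREEQC-based thermodynamic and reactive transport modelling, which this sketch does not reproduce.

```python
# Much-simplified 1-D diffusion-reaction sketch of external sulfate ingress:
# Fick's second law plus a first-order sink that stands in for sulfate binding
# into products such as ettringite. All parameter values are assumed for
# illustration only.
import numpy as np

D = 2e-13          # effective sulfate diffusivity in concrete, m^2/s (assumed)
k = 1e-8           # first-order binding rate, 1/s (assumed; no capacity limit)
L, nx = 0.10, 100  # 10 cm domain on a 1 mm grid
dx = L / nx
dt = 0.4 * dx**2 / D                 # within the explicit stability limit
years = 100
steps = int(years * 365.25 * 24 * 3600 / dt)

c = np.zeros(nx)   # free sulfate, normalised to the external concentration
s = np.zeros(nx)   # bound sulfate (reaction products), same normalisation
c[0] = 1.0         # exposed face held at the external concentration

for _ in range(steps):
    lap = np.zeros(nx)
    lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
    react = k * c                    # removal of free sulfate by binding
    c[1:-1] += dt * (D * lap[1:-1] - react[1:-1])
    s += dt * react
    c[0] = 1.0                       # re-assert the boundary condition

depth_mm = dx * np.argmax(c < 0.05) * 1000.0
print(f"approx. penetration depth (c > 0.05) after {years} y: {depth_mm:.1f} mm")
```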

Keywords: thermodynamic calculations, reactive transport, radioactive waste disposal, PHREEQC

Procedia PDF Downloads 136
2 A Path to Reducing Industrial Energy Intensity in a Chinese Province for Energy Conservation

Authors: John Doe

Abstract:

This paper presents the research project “Escape Through Culture”, which is co-funded by the European Union and national resources through the Operational Programme “Competitiveness, Entrepreneurship and Innovation” 2014-2020 and the Single RTDI State Aid Action "RESEARCH - CREATE - INNOVATE". The project is implemented by three partners: (1) the Computer Technology Institute and Press "Diophantus" (CTI), experienced in the design and implementation of serious games, natural language processing, and ICT in education; (2) the Laboratory of Environmental Communication and Audiovisual Documentation (LECAD), part of the University of Thessaly, Department of Architecture, experienced in the study of creative transformation and reframing of urban and environmental multimodal experiences through AR and VR technologies; and (3) “Apoplou”, an IT company with experience in the implementation of interactive digital applications. The project proposes the design of an innovative infrastructure of digital educational escape games for mobile devices and computers, using virtual reality and augmented reality to promote Greek cultural heritage in Greece and abroad. In particular, the project advocates combining Greek cultural heritage and literature with advances in digital technologies and innovative gamification practices. The cultural experience of the players will take place in three layers: (1) in space: the digital games will utilize the dual character of space as a cultural landscape (the real space-landscape, but also the space-landscape as presented through augmented and virtual reality technologies); (2) in literary texts: selected texts by Greek writers will support the sense of place and the multi-sensory involvement of the user through the context of space-time, language, and cultural characteristics; and (3) in the philosophy of the “escape game” tool: whether played in a computer environment, indoors or outdoors, the spatial experience is one of the key components of escape games. The innovation of the project lies both in the junction of augmented/virtual reality with the promotion of cultural points of interest and in the interactive, gamified treatment of literary texts. The digital escape game infrastructure will be highly interactive, integrating the projection of Greek landscape cultural elements and digital literary text analysis, supporting the creation of escape games, and establishing and highlighting new playful ways of experiencing iconic cultural places such as Elefsina and Skiathos. The content of the literary texts will relate to specific elements of Greek cultural heritage depicted by prominent Greek writers and poets. The majority of the texts will originate from Greek educational content available in digital libraries and repositories developed and maintained by CTI. The escape games produced will be available for use during educational field trips, thematic tourism holidays, and similar activities. In this paper, the methodology adopted for the development of the infrastructure is presented. The research is based on theories of place, gamification, and game development, making use of corpus linguistics concepts and digital humanities practices for the compilation and analysis of the literary texts.
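
As a purely illustrative aside (not the project's documented pipeline), the Python sketch below shows one way a corpus-linguistics pass might surface gazetteer place names and candidate theme words from a literary excerpt, which could then seed location-based clues in an escape game. The gazetteer, stop-word list, and sample sentence are hypothetical.

```python
# Illustrative corpus pass: scan a literary excerpt for known place names and
# frequent content words that could anchor location-based escape-game clues.
# The gazetteer, stop words, and sample text are hypothetical assumptions.
import re
from collections import Counter

GAZETTEER = {"Elefsina", "Skiathos", "Athens"}   # assumed places of interest
STOP_WORDS = {"the", "a", "and", "of", "in", "to", "on", "at", "as"}

def analyse(text: str):
    # Match runs of Latin or Greek letters (basic + extended Greek blocks)
    tokens = re.findall(r"[A-Za-z\u0370-\u03FF\u1F00-\u1FFF]+", text)
    places = Counter(t for t in tokens if t in GAZETTEER)
    content = Counter(t.lower() for t in tokens
                      if t.lower() not in STOP_WORDS and len(t) > 3)
    return places, content

places, content = analyse(
    "The ferry left Skiathos at dawn; Skiathos faded as the pilgrims "
    "spoke of the sanctuary at Elefsina."
)
print("place mentions:", dict(places))           # clue anchors per location
print("candidate theme words:", content.most_common(3))
```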

Keywords: escape games, cultural landscapes, gamification, digital humanities, literature

Procedia PDF Downloads 200
1 A Comparative Evaluation of Cognitive Load Management: Case Study of Postgraduate Business Students

Authors: Kavita Goel, Donald Winchester

Abstract:

In a world of information overload and work complexity, academics often struggle to create an online instructional environment that enables efficient and effective student learning. Research has established that students’ learning styles differ: some learn faster when taught using audio and visual methods. Attributes such as prior knowledge and mental effort affect their learning. Cognitive load theory holds that learners have limited processing capacity; cognitive load depends on the learner’s prior knowledge, the complexity of content and tasks, and the instructional environment. Hence, the proper allocation of cognitive resources is critical for students’ learning. Consequently, a lecturer needs to understand the limits and strengths of human learning processes and the various learning styles of students, and to accommodate these requirements when designing online assessments. As acknowledged in the cognitive load theory literature, visual and auditory explanations of worked examples potentially reduce cognitive load (effort) and facilitate learning when compared to conventional sequential text problem solving, helping learners utilize both subcomponents of their working memory. Instructional design changes were introduced at the case site for the delivery of the postgraduate business subjects. To make effective use of auditory and visual modalities, video-recorded lectures and key-concept webinars were delivered to students. Videos were prepared to free students’ limited working memory from irrelevant mental effort, since all elements on a visual screen can be viewed simultaneously and processed quickly, facilitating greater psychological processing efficiency. Most case study students in the postgraduate programs are adults, working full-time at higher management levels and studying part-time; their learning styles and needs differ from those of other tertiary students. The purpose of the audio and visual interventions was to lower students’ cognitive load and provide an online environment supportive of their efficient learning. These changes were expected to favourably affect students’ learning experience, academic performance, and retention. This paper posits that these changes to instructional design help students integrate new knowledge into their long-term memory. A mixed-methods case study methodology was used in this investigation. Primary data were collected from interviews and surveys of students and academics; secondary data were collected from the organisation’s databases and reports. Some evidence was found that students’ academic performance improves when the new instructional design changes are introduced, although the effect was not statistically significant. However, the overall grade distribution shifted higher, indicating deeper understanding of the content. Feedback from students indicated that recorded webinars served as better learning aids than text-only material, especially for more complex content. The recorded webinars on subject content and assessments give students the flexibility to access the material at any time, and repeatedly, from repositories, which suits their learning styles; visual and audio information enters students’ working memory more effectively. Also, as each assessment required applying the concepts, conceptual knowledge interacted with pre-existing schemas in long-term memory and lowered students’ cognitive load.

Keywords: cognitive load theory, learning style, instructional environment, working memory

Procedia PDF Downloads 119