Search results for: arrhythmia database
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1627

1327 Graph Neural Network-Based Classification for Disease Prediction in Health Care Heterogeneous Data Structures of Electronic Health Record

Authors: Raghavi C. Janaswamy

Abstract:

In the healthcare sector, heterogeneous data elements such as patients, diagnoses, symptoms, conditions, observation text from physician notes, and prescriptions form the essentials of the Electronic Health Record (EHR). In most systems, the data, in the form of clear text and images, are stored or processed in a relational format. However, the intrinsic structural restrictions and complex joins of relational databases limit their widespread utility. In this regard, the design and development of realistic mappings and deep connections as real-time objects offer unparalleled advantages. Herein, a graph neural network-based classification of EHR data has been developed. Patient conditions have been predicted as a node classification task using graph-based open-source EHR data, the Synthea database, stored in TigerGraph. The Synthea dataset is leveraged because it closely represents real-world data and is voluminous. The graph model is built from the heterogeneous EHR data using Python modules: pyTigerGraph to retrieve nodes and edges from the TigerGraph database, PyTorch to tensorize the nodes and edges, and PyTorch Geometric (PyG) to train the Graph Neural Network (GNN), adopting self-supervised learning techniques with autoencoders to generate node embeddings and eventually performing node classification using those embeddings. The model predicts patient conditions ranging from common to rare. The outcome is deemed to open up opportunities for data querying toward better predictions and accuracy.
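The abstract describes this pipeline in prose only. Below is a minimal PyTorch Geometric sketch of the final node-classification step, assuming a simple two-layer GCN trained with supervised cross-entropy on a toy graph; the authors' actual pipeline uses pyTigerGraph extraction and autoencoder-based self-supervised embeddings, which are not reproduced here.

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

class NodeClassifier(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, p=0.5, training=self.training)
        return self.conv2(x, edge_index)

# Toy stand-in for the patient/condition graph: 4 nodes, 16-dim features
x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])  # directed edge pairs
y = torch.tensor([0, 1, 0, 1])                           # condition labels
data = Data(x=x, edge_index=edge_index, y=y)

model = NodeClassifier(in_dim=16, hidden_dim=32, num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
model.train()
for _ in range(100):
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out, data.y)
    loss.backward()
    optimizer.step()
print(out.argmax(dim=1))  # predicted condition per node
```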

Keywords: electronic health record, graph neural network, heterogeneous data, prediction

Procedia PDF Downloads 63
1326 Decision Support System for the Management of the Shandong Peninsula, China

Authors: Natacha Fery, Guilherme L. Dalledonne, Xiangyang Zheng, Cheng Tang, Roberto Mayerle

Abstract:

A Decision Support System (DSS) for supporting decision-makers in the management of the Shandong Peninsula has been developed, with emphasis on coastal protection, coastal cage aquaculture, and harbors. The investigations were carried out in the framework of a joint research project funded by the German Ministry of Education and Research (BMBF) and the Chinese Academy of Sciences (CAS). In this paper, a description of the DSS, the development of its components, and the results of its application are presented. The system integrates in-situ measurements, process-based models, and a database management system. Numerical models for the simulation of flow, waves, sediment transport, and morphodynamics covering the entire Bohai Sea were set up based on the Delft3D modelling suite (Deltares). Calibration and validation of the models were performed using measurements from moored Acoustic Doppler Current Profilers (ADCP) and High Frequency (HF) radars. In order to enable cost-effective and scalable applications, a database management system was developed; it enhances information processing and data evaluation, and supports the generation of data products. Results of the application of the DSS to the management of coastal protection, coastal cage aquaculture, and harbors are presented here. Model simulations covering the most severe storms observed during recent decades were carried out, leading to an improved understanding of hydrodynamics and morphodynamics. The results helped identify coastal stretches subjected to higher levels of energy and improved the support for coastal protection measures.

Keywords: coastal protection, decision support system, in-situ measurements, numerical modelling

Procedia PDF Downloads 168
1325 Making the Right Call for Falls: Evaluating the Efficacy of a Multi-Faceted Trust Wide Approach to Improving Patient Safety Post Falls

Authors: Jawaad Saleem, Hannah Wright, Peter Sommerville, Adrian Hopper

Abstract:

Introduction: Inpatient falls are the most commonly reported patient safety incidents and carry a significant burden in terms of resources, morbidity, and mortality. Ensuring adequate post-fall management of patients by staff is therefore paramount to maintaining patient safety, especially in out-of-hours and resource-stretched settings. Aims: This quality improvement project aims to improve the current practice of falls management at Guy's and St Thomas' Hospital, London, as compared with our 2016 quality improvement project findings. Furthermore, it looks to increase junior doctors' confidence in managing falls and their use of the new guidance protocols. Methods: The multifaceted interventions implemented included the development of new trust-wide guidelines detailing management pathways for patients after falls, available on the intranet, and the production of 2,000 lanyard cards summarising these guidelines, distributed amongst junior doctors and staff. Additionally, a 'safety signal' email was sent from the Trust chief medical officer to all staff raising awareness of falls and the guidelines. Formal falls teaching was also implemented for new doctors at induction. Using an established incident database, 189 consecutive falls in 2017 were retrospectively analysed electronically and compared against the variables measured in 2016, post interventions. A separate serious incident database was used to analyse 50 falls from May 2015 to March 2018 to ascertain the statistical significance of the impact of our interventions on serious incidents. A questionnaire similar to that used in 2016 was administered to the 2017 cohort of foundation year one (FY1) doctors, and the results were compared. Results: Questionnaire data demonstrated improved awareness and use of the guidelines, increased confidence, and an increase in training. 97% of FY1 trainees felt that the interventions had increased their awareness of the impact of falls on patients in the trust. Data from the incident database demonstrated that the time to review patients after a fall had decreased from an average of 130 to 86 minutes. Improvement was also demonstrated in the reduced time to order and schedule X-ray and CT imaging (3 and 5 hours, respectively). Data from the serious incident database showed that the time from fall until harm was detected was statistically significantly lower (p = 0.044) post intervention. We also showed that the incidence of significant delays in detecting harm (> 10 hours) reduced post intervention. Conclusions: Our interventions have helped to significantly reduce the average time to assess patients and to order and schedule appropriate imaging after falls. Delays of over ten hours to detect serious injuries after falls were commonplace; since the intervention, their frequency has markedly reduced. We suggest this will lead to identifying patient harm sooner, reduce clinical incidents relating to falls, and thus improve overall patient safety. Our interventions have also helped increase clinical staff confidence, management, and awareness of falls in the trust. Next steps include expanding teaching sessions and improving multidisciplinary team involvement to build on this improvement.

Keywords: patient safety, quality improvement, serious incidents, falls, clinical care

Procedia PDF Downloads 104
1324 Communication Infrastructure Required for a Driver Behaviour Monitoring System, ‘SiaMOTO’ IT Platform

Authors: Dogaru-Ulieru Valentin, Sălișteanu Ioan Corneliu, Ardeleanu Mihăiță Nicolae, Broscăreanu Ștefan, Sălișteanu Bogdan, Mihai Mihail

Abstract:

The SiaMOTO system is a communications and data-processing platform for vehicle traffic. The human factor is the most important factor in the generation of these data, as the driver is the one who dictates the trajectory of the vehicle. Like any trajectory, it is characterised by specific parameters of position, speed, and acceleration, and constant knowledge of these parameters allows complex analyses. Roadways allow many vehicles to travel through their confined space, and the overlapping trajectories of several vehicles increase the likelihood of collision events, known as road accidents. Any such event has causes that lead to its occurrence, so the conditions for its occurrence are known. Since the human factor is predominant in deciding the trajectory parameters of the vehicle on the road, monitoring it through the events reported by the DiaMOTO device over time will generate a guide to target any potentially high-risk driving behavior and reward those who control the driving task well. In this paper, we have focused on detailing the communication infrastructure between the DiaMOTO device and the traffic data collection server, the infrastructure through which the database that will be used for complex AI/DLM analysis is built. The central element of this description is the data string in Codec-8 format sent by the DiaMOTO device to the SiaMOTO collection server database. The data presented are specific to a functional infrastructure implemented at the experimental-model stage by installing DiaMOTO devices with unique codes, integrating ADAS and GPS functions, on 50 vehicles, through which vehicle trajectories can be monitored 24 hours a day.
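The abstract names the Codec-8 record format but does not reproduce its layout. The sketch below parses the header and the GPS element of the first AVL record, assuming the publicly documented Teltonika Codec-8 field layout (4-byte preamble, 4-byte data length, codec ID, record count, then timestamp, priority, and a 15-byte GPS element); the actual DiaMOTO payload may differ.

```python
import struct

def parse_codec8_first_record(packet: bytes) -> dict:
    """Parse the header and the GPS element of the first AVL record
    from a Codec-8 packet (field layout assumed from the public
    Teltonika Codec-8 specification)."""
    preamble, data_len = struct.unpack_from(">II", packet, 0)
    codec_id, record_count = struct.unpack_from(">BB", packet, 8)
    assert codec_id == 0x08, "not a Codec-8 packet"
    # First AVL record: timestamp (ms since epoch), priority, GPS element
    timestamp_ms, priority = struct.unpack_from(">QB", packet, 10)
    lon, lat, alt, angle, sats, speed = struct.unpack_from(">iihHBH", packet, 19)
    return {
        "records": record_count,
        "timestamp_ms": timestamp_ms,
        "lon_deg": lon / 1e7,      # int32 with 1e-7 degree precision
        "lat_deg": lat / 1e7,
        "altitude_m": alt,
        "heading_deg": angle,
        "satellites": sats,
        "speed_kmh": speed,
    }
```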

Keywords: DiaMOTO, Codec-8, ADAS, GPS, driver monitoring

Procedia PDF Downloads 46
1323 Analysis of the Effect of Increased Self-Awareness on the Amount of Food Thrown Away

Authors: Agnieszka Dubiel, Artur Grabowski, Tomasz Przerywacz, Mateusz Roganowicz, Patrycja Zioty

Abstract:

Food waste is one of the most significant challenges humanity is facing nowadays. Every year, reports from global organizations show the scale of the phenomenon, yet society's awareness is still insufficient. One-third of the food produced in the world is wasted at various points in the food supply chain: waste occurs from delivery through food preparation and distribution to the end of sale and consumption. The first step in understanding and counteracting the phenomenon is a thorough analysis of everyday human behavior, understood here as finding the correlation between the type of food, the reason for throwing it away, and the amount wasted. Such an analysis was identified as a critical first step in developing technology to prevent food waste. In this paper, the problem was analyzed by focusing on inhabitants of Central Europe, especially Poland, aged 20-30. The paper provides insight into collecting data through dedicated software and an organized database; the proposed database contains information on the amount, type, and reasons for wasting food in households. The work was supported by a literature review to answer the research questions, compare the situation in Poland with the problem as analyzed in other countries, and find research gaps. The article examines the causes of food waste and its quantity in detail, complementing previous reviews by emphasizing social and economic innovation in Poland's food waste management. The paper recommends a course of action for future research on food waste management and prevention related to the handling and disposal of food, emphasizing households, i.e., the last link in the supply chain.
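The abstract mentions dedicated software and an organized database for logging household waste but gives no schema. A hypothetical SQLite sketch of such a log, with illustrative table and column names, might look like this:

```python
import sqlite3

# Hypothetical schema for the household food-waste log described above;
# table and column names are illustrative, not taken from the study.
conn = sqlite3.connect("food_waste.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS waste_events (
    id INTEGER PRIMARY KEY,
    household_id INTEGER NOT NULL,
    logged_at TEXT NOT NULL,        -- ISO-8601 timestamp
    food_type TEXT NOT NULL,        -- e.g. 'bread', 'dairy'
    amount_grams REAL NOT NULL,
    reason TEXT NOT NULL            -- e.g. 'expired', 'leftovers'
)""")
conn.commit()
```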

Keywords: food waste, food waste reduction, consumer food waste, human-food interaction

Procedia PDF Downloads 81
1322 Online Multilingual Dictionary Using Hamburg Notation for Avatar-Based Indian Sign Language Generation System

Authors: Sugandhi, Parteek Kumar, Sanmeet Kaur

Abstract:

Sign Language (SL) is used by deaf people and others who cannot speak but can hear, or who have problems with spoken languages due to some disability. It is a visual gesture language that makes use of one hand or both hands, the arms, the face, and the body to convey meanings and thoughts. An SL automation system is an effective way to provide an interface for communicating with hearing people using a computer. In this paper, an avatar-based dictionary has been proposed for a text-to-Indian Sign Language (ISL) generation system. This research work also presents a literature review of the SL corpora available for various SLs over the years. An ISL generation system requires a written form of SL, and there are certain techniques available for writing SL. The system uses the Hamburg Notation System (HamNoSys) and Signing Gesture Mark-up Language (SiGML) for ISL generation. It is developed in PHP using Web Graphics Library (WebGL) technology for 3D avatar animation. A multilingual ISL dictionary is developed using HamNoSys for both the English and Hindi languages. This dictionary is used as a database to associate signs with words or phrases of a spoken language. It provides an admin-panel interface to manage the dictionary, i.e., modification, addition, or deletion of a word. Through this interface, HamNoSys notations can be composed and stored in a database, and these notations can be converted into corresponding SiGML files manually. The system takes a natural-language input sentence in English or Hindi and generates a 3D sign animation using an avatar. SL generation systems have potential applications in many domains, such as the healthcare sector, media, educational institutes, commercial sectors, and transportation services. This research work will help researchers understand the various techniques used for writing SL and for generating sign language systems.
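As a rough illustration of the dictionary-to-SiGML step described above, the following Python sketch looks up a stored HamNoSys string and wraps it in SiGML-style XML. The table layout and element names are assumptions for illustration; the actual system is written in PHP, and the exact SiGML schema consumed by the avatar player may differ.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical lookup: map an input word to its stored HamNoSys string
# and wrap it in a SiGML-style XML document.
def word_to_sigml(conn: sqlite3.Connection, word: str) -> str:
    row = conn.execute(
        "SELECT hamnosys FROM dictionary WHERE word = ?", (word.lower(),)
    ).fetchone()
    if row is None:
        raise KeyError(f"no sign stored for '{word}'")
    sigml = ET.Element("sigml")
    sign = ET.SubElement(sigml, "hns_sign", gloss=word.upper())
    ET.SubElement(sign, "hamnosys_manual").text = row[0]
    return ET.tostring(sigml, encoding="unicode")
```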

Keywords: avatar, dictionary, HamNoSys, hearing impaired, Indian sign language (ISL), sign language

Procedia PDF Downloads 200
1321 A Bibliometric Analysis of Ukrainian Research Articles on SARS-COV-2 (COVID-19) in Compliance with the Standards of Current Research Information Systems

Authors: Sabina Auhunas

Abstract:

These days in Ukraine, Open Science is developing rapidly for the benefit of scientists of all branches, providing an opportunity to take a closer look at studies by foreign scientists, as well as to deliver their own scientific data to national and international journals. However, when it comes to summarizing the scientific activities of Ukrainian scientists, the data are often integrated into e-systems that operate on inconsistent and barely related information sources. To resolve these issues, developed countries productively use e-systems designed to store and manage research data, such as Current Research Information Systems, which enable the combination of uncompiled data obtained from different sources. An algorithm for selecting SARS-CoV-2 research articles was designed, by means of which we collected the set of papers published by Ukrainian scientists and uploaded by August 1, 2020. The resulting metadata (document type, open access status, citation count, h-index, most cited documents, international research funding, author counts, and the bibliographic relationships of journals) were taken from the Scopus and Web of Science databases. The study also considered COVID-19/SARS-CoV-2-related documents published from December 2019 to September 2020, drawn directly from documents published by authors with a territorial affiliation to Ukraine. These databases provide the information needed for bibliometric analysis, including details such as copyright that may not be available in other databases (e.g., ScienceDirect). Search criteria and results for each online database were considered according to the WHO classification of the virus and the disease caused by it (Table 1). First, we identified 89 research papers, which provided the final data set after consolidation and deduplication; however, only 56 papers were used for the analysis. The total number of documents retrieved came to 21,641 in the WoS database (48 affiliated with Ukraine) and 32,478 in the Scopus database (41 affiliated with Ukraine). In the publication activity of Ukrainian scientists, the following areas prevailed: education and educational research (9 documents, 20.58%); social sciences, interdisciplinary (6 documents, 11.76%); and economics (4 documents, 8.82%). The highest publication activity by institution type was reported for the Ministry of Education and Science of Ukraine (36% of the published scientific papers, or 7 documents), followed by Danylo Halytsky Lviv National Medical University (5 documents, 15%) and the P. L. Shupyk National Medical Academy of Postgraduate Education (4 documents, 12%). Research activities by Ukrainian scientists were funded mainly by five entities: the Belgian Development Cooperation, the National Institutes of Health (NIH, U.S.), the United States Department of Health & Human Services, a grant from the Whitney and Betty MacMillan Center for International and Area Studies at Yale, and a grant from the Yale Women Faculty Forum. Based on the results of the analysis, we obtained a set of published articles and preprints to be assessed on a variety of features in upcoming studies, including citation count, most cited documents, the bibliographic relationships of journals, and reference linking. Further research on the development of the national scientific e-database continues using new analytical methods.
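As an illustration of the consolidation and deduplication step mentioned above, the following sketch merges Scopus and WoS exports and drops duplicates, matching primarily on DOI with a normalized-title fallback. File and column names are assumptions based on typical database exports, not the authors' actual pipeline.

```python
import pandas as pd

scopus = pd.read_csv("scopus_export.csv")
wos = pd.read_csv("wos_export.csv")
records = pd.concat([scopus, wos], ignore_index=True)

# Match primarily on DOI; fall back to a normalized title when missing
title_key = (records["Title"].str.lower()
             .str.replace(r"[^a-z0-9 ]", "", regex=True).str.strip())
key = records["DOI"].str.lower().fillna(title_key)
deduped = records.loc[~key.duplicated()]
print(f"{len(records)} raw records -> {len(deduped)} unique papers")
```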

Keywords: content analysis, COVID-19, scientometrics, text mining

Procedia PDF Downloads 92
1320 Design and Optimization of a Small Hydraulic Propeller Turbine

Authors: Dario Barsi, Marina Ubaldi, Pietro Zunino, Robert Fink

Abstract:

A design and optimization procedure is proposed and developed to provide the geometry of a high-efficiency compact hydraulic propeller turbine for low heads. For the preliminary design of the machine, classic design criteria are used, based on statistical correlations for the definition of the fundamental geometric parameters and the blade shapes. These relationships are based on the fundamental design parameters (i.e., specific speed, flow coefficient, work coefficient) in order to provide a simple yet reliable procedure. Particular attention is paid, from the initial steps, to the correct conformation of the meridional channel and the correct arrangement of the blade rows. The preliminary geometry thus obtained is used as the starting point for the hydrodynamic optimization procedure, carried out using CFD software coupled with a genetic algorithm that generates and updates a large database of turbine geometries. The optimization process is performed using a commercial solver for the Reynolds-averaged Navier-Stokes (RANS) equations that exploits the axisymmetric geometry of the machine. The geometries generated within the database are then simulated in order to determine the corresponding overall performance. In order to speed up the optimization, an artificial neural network (ANN) is employed as a fast surrogate for evaluating the objective function. The procedure was applied to the specific case of a propeller turbine with an innovative modular design, intended for applications characterized by very low heads. The procedure was tested in order to verify its validity and its ability to automatically reach the targeted net head and the maximum total-to-total internal efficiency.
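The coupling of the genetic algorithm, the geometry database, and the ANN surrogate can be sketched as below. Here `surrogate` and `evaluate_cfd` are placeholders for the trained network and the RANS solver, and the genetic operators are deliberately simplified; this is an illustration of the approach, not the authors' implementation.

```python
import random

def genetic_step(population, surrogate, evaluate_cfd, n_keep=10):
    """One generation: screen candidates with the cheap surrogate,
    verify the elite with full CFD, then refill by crossover/mutation."""
    # Rank candidate geometries (lists of parameters) by surrogate score
    ranked = sorted(population, key=surrogate, reverse=True)
    elite = ranked[:n_keep]
    # Verify the elite with the expensive solver; store in the database
    scores = {tuple(g): evaluate_cfd(g) for g in elite}
    # Blend-crossover plus Gaussian mutation to refill the population
    children = []
    while len(children) < len(population) - n_keep:
        a, b = random.sample(elite, 2)
        child = [(x + y) / 2 + random.gauss(0, 0.01) for x, y in zip(a, b)]
        children.append(child)
    return elite + children, scores
```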

Keywords: renewable energy conversion, hydraulic turbines, low head hydraulic energy, optimization design

Procedia PDF Downloads 121
1319 Hsa-miR-192-5p, and Hsa-miR-129-5p Prominent Biomarkers in Regulation Glioblastoma Cancer Stem Cells Genes Microenvironment

Authors: Rasha Ahmadi

Abstract:

Glioblastoma is one of the most frequent brain malignancies, with a high mortality rate and limited survival. Despite varied treatments and surgery, glioblastoma cancer stem cells may give rise to recurrent tumors. For this reason, it is crucial to research the markers associated with glioblastoma stem cells and, specifically, their microenvironment. In this study, using bioinformatics analysis, we analyzed and nominated genes in the microenvironment pathways of glioblastoma stem cells. An appropriate dataset was selected for analysis from the GEO database, comprising gene expression profiles of stem cells derived from glioblastoma patients. Genes were divided into high- and low-expression clusters. Enrichment databases such as Enrichr, STRING, and GEPIA were utilized to analyze the data. Finally, we extracted 2,700 high-expression and 1,100 low-expression genes implicated in the metabolic pathways of glioblastoma progression. Cellular senescence, MAPK, TNF, hypoxia, zymosterol biosynthesis, and phosphatidylinositol metabolism pathways were substantially represented among the high-expression genes, while metabolic pathways were downregulated. After assessing the associations in the protein networks, the MSMP, SOX2, FGD4, and CNTNAP3 genes (high expression) and the DMKN and SBSN genes (low expression) were selected. All of these genes were observed in the survival curve, with survival of fewer than 10 percent over around 15 months. hsa-mir-192-5p, hsa-mir-129-5p, hsa-mir-215-5p, hsa-mir-335-5p, and hsa-mir-340-5p played key roles in the glioblastoma cancer stem cell microenvironment. Through integrated and systematic bioinformatics analysis of gene expression profile data, we introduced critical genes that can play an important role in targeting genes involved in the energy metabolism and microenvironment of glioblastoma cancer stem cells. This study indicated that hsa-mir-192-5p and hsa-mir-129-5p are appropriate candidate biomarkers.
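As a minimal illustration of the high-/low-expression split described above, the following sketch partitions genes from a GEO-style expression matrix by quartile. The file name and the quartile cutoffs are placeholders, not the study's dataset or thresholds.

```python
import pandas as pd

# Genes in rows, samples in columns; cutoffs here are illustrative
expr = pd.read_csv("gse_expression_matrix.csv", index_col=0)
mean_expr = expr.mean(axis=1)
high = mean_expr[mean_expr >= mean_expr.quantile(0.75)].index
low = mean_expr[mean_expr <= mean_expr.quantile(0.25)].index
print(f"{len(high)} high-expression genes, {len(low)} low-expression genes")
```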

Keywords: glioblastoma, cancer stem cells, biomarker discovery, gene expression profiles, bioinformatics analysis, tumor microenvironment

Procedia PDF Downloads 109
1318 Identification and Validation of Co-Dominant Markers for Selection of the CO-4 Anthracnose Disease Resistance Gene in Common Bean Cultivar G2333

Authors: Annet Namusoke, Annet Namayanja, Peter Wasswa, Shakirah Nampijja

Abstract:

Common bean cultivar G2333, which offers broad resistance to anthracnose, has been widely used as a source of resistance in breeding for anthracnose resistance. The cultivar is pyramided with three genes, namely CO-4, CO-5, and CO-7, and of these, the CO-4 gene has been found to offer the broadest resistance. The main aim of this work was to identify and validate easily assayable, PCR-based, co-dominant molecular markers for selection of the CO-4 gene in segregating populations derived from crosses of G2333 with RWR 1946 and RWR 2075, two commercial Andean cultivars highly susceptible to anthracnose. Marker sequences for the study were obtained by a BLAST search of the COK-4 gene sequence against the Phaseolus gene database. Primer sequence pairs that were not provided by the Phaseolus gene database were designed using the Primer3 software. PCR conditions were optimized, and the PCR products were run on a 6% HPAGE gel. The polymorphism test indicated that, of the 18 identified markers, only two, namely BM588 and BM211, behaved co-dominantly. Phenotypic evaluation of the reaction to anthracnose was done by inoculating 21-day-old seedlings of the three parents and of the F1 and F2 populations with race 7 of Colletotrichum lindemuthianum in a humid chamber. DNA testing of the BM588 marker on the F2 segregating populations of the crosses RWR 1946 x G2333 and RWR 2075 x G2333 further revealed that the marker co-segregated with disease resistance, with co-dominance of two alleles of 200 bp and 400 bp, fitting the expected segregation ratio of 1:2:1. The BM588 marker was significantly associated with disease resistance and gave promising results for marker-assisted selection of the CO-4 gene in the breeding lines. Activities to validate the BM211 marker are underway.
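The 1:2:1 fit mentioned above is the standard chi-square goodness-of-fit test; a sketch with invented genotype counts:

```python
from scipy.stats import chisquare

# Illustrative test of a 1:2:1 segregation ratio; the AA : Aa : aa
# counts below are made up for the example, not the study's data.
observed = [24, 52, 24]
total = sum(observed)
expected = [total * 0.25, total * 0.5, total * 0.25]
stat, p = chisquare(observed, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p:.3f}  (p > 0.05 -> consistent with 1:2:1)")
```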

Keywords: codominant, Colletotrichum lindemuthianum, MAS, Phaseolus vulgaris

Procedia PDF Downloads 268
1317 Methotrexate Associated Skin Cancer: A Signal Review of Pharmacovigilance Center

Authors: Abdulaziz Alakeel, Abdulrahman Alomair, Mohammed Fouda

Abstract:

Introduction: Methotrexate (MTX) is an antimetabolite used to treat multiple conditions, including neoplastic diseases, severe psoriasis, and rheumatoid arthritis. Skin cancer is the out-of-control growth of abnormal cells in the epidermis, the outermost skin layer, caused by unrepaired DNA damage that triggers mutations. These mutations lead the skin cells to multiply rapidly and form malignant tumors. The aim of this review is to evaluate the risk of skin cancer associated with the use of methotrexate and to suggest regulatory recommendations if required. Methodology: The Signal Detection team at the Saudi Food and Drug Authority (SFDA) performed a safety review using the National Pharmacovigilance Center (NPC) database as well as the World Health Organization (WHO) database VigiBase, alongside literature screening, to retrieve related information for assessing the causality between skin cancer and methotrexate. The search was conducted in July 2020. Results: Four published articles support the association seen in the literature search. A recent randomized controlled trial published in 2020 revealed a statistically significant increase in skin cancer among MTX users. Another study reported that methotrexate increases the risk of non-melanoma skin cancer when used in combination with immunosuppressant and biologic agents. In addition, the incidence of melanoma among methotrexate users was 3-fold that of the general population in a cohort study of rheumatoid arthritis patients. The last article, a cohort study estimating the risk of cutaneous malignant melanoma (CMM), showed a statistically significant risk increase for CMM in MTX-exposed patients. The WHO database (VigiBase) was searched for individual case safety reports (ICSRs) reported for 'skin cancer' and 'methotrexate' use, which yielded 121 ICSRs. The initial review revealed that 106 cases were insufficiently documented for proper medical assessment. The remaining fifteen cases were extensively evaluated by applying the WHO criteria of causality assessment. As a result, 30 percent of the cases showed that MTX could possibly cause skin cancer; five cases provided an unlikely association, and five were unassessable due to lack of information. The Saudi NPC database was searched to retrieve any reported cases for the combined terms methotrexate/skin cancer; however, no local cases had been reported to date. The disproportionality between the observed and the expected reporting rates for the drug/adverse drug reaction pair was estimated using the information component (IC), a tool developed by the WHO Uppsala Monitoring Centre to measure the reporting ratio. A positive IC reflects a higher statistical association, while negative values indicate a lower statistical association, with the null value equal to zero. Results showed that the combination 'methotrexate' and 'skin cancer' was observed more often than expected when compared to other medications in the WHO database (IC value of 1.2). Conclusion: The weighted cumulative evidence identified from global cases, data mining, and published literature is sufficient to support a causal association between the risk of skin cancer and methotrexate. Therefore, health care professionals should be aware of this possible risk and may consider monitoring for any signs or symptoms of skin cancer in patients treated with methotrexate.
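The IC can be sketched as the shrinkage observed-to-expected ratio published by the WHO-UMC, IC = log2((O + 0.5)/(E + 0.5)) with E = (N_drug x N_reaction)/N_total; the counts below are invented for illustration, not VigiBase figures.

```python
import math

def information_component(n_drug_reaction, n_drug, n_reaction, n_total):
    """Shrinkage observed-to-expected information component."""
    expected = n_drug * n_reaction / n_total
    return math.log2((n_drug_reaction + 0.5) / (expected + 0.5))

ic = information_component(121, 200_000, 15_000, 20_000_000)
print(f"IC = {ic:.2f}")  # positive -> reported more often than expected
```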

Keywords: methotrexate, skin cancer, signal detection, pharmacovigilance

Procedia PDF Downloads 92
1316 Expert System: Debugging Using MD5 Process Firewall

Authors: C. U. Om Kumar, S. Kishore, A. Geetha

Abstract:

An operating system (OS) is software that manages computer hardware and software resources by providing services to computer programs. One important user expectation of the operating system is that it defend information from unauthorized access, disclosure, modification, inspection, recording, or destruction. Operating systems are perpetually vulnerable to attacks by malware such as computer viruses, worms, Trojan horses, backdoors, ransomware, spyware, adware, scareware, and more. Anti-virus software was therefore created to ensure security against prominent computer viruses by applying a dictionary-based approach; however, anti-virus programs are not guaranteed to provide security against the new viruses proliferating every day. To address this issue and to secure the computer system, our proposed expert system concentrates on letting the administrator authorize processes as wanted or unwanted for execution. The expert system maintains a database consisting of hash codes of the processes that are to be allowed. These hash codes are generated using the MD5 message-digest algorithm, a widely used cryptographic hash function. The administrator approves the processes that are to be executed on clients in a local area network by implementing a client-server architecture, and only processes that match entries in the database table will be executed, so that many malicious processes are prevented from infecting the operating system. An added advantage of the proposed expert system is that it limits CPU usage and minimizes resource utilization. Thus, data and information security is ensured by our system, along with increased operating system performance.
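A minimal sketch of the whitelist check described above: hash the executable with MD5 and allow execution only if the digest appears in the administrator-approved table. The path and hash below are illustrative. (MD5 is no longer collision-resistant, so a modern variant of this design would typically use SHA-256; the sketch follows the paper's choice of MD5.)

```python
import hashlib

# Administrator-populated table of approved process digests (illustrative)
APPROVED = {"5d41402abc4b2a76b9719d911017c592"}

def md5_of(path: str) -> str:
    """Compute the MD5 digest of an executable, streaming in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_allowed(path: str) -> bool:
    """Only processes whose digest matches the table may execute."""
    return md5_of(path) in APPROVED
```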

Keywords: virus, worm, Trojan horse, backdoors, ransomware, spyware, adware, scareware, sticky software, process table, MD5, CPU usage, resource utilization

Procedia PDF Downloads 391
1315 Transcriptomine: The Nuclear Receptor Signaling Transcriptome Database

Authors: Scott A. Ochsner, Christopher M. Watkins, Apollo McOwiti, David L. Steffen, Lauren B. Becnel, Neil J. McKenna

Abstract:

Understanding signaling by nuclear receptors (NRs) requires an appreciation of their cognate ligand- and tissue-specific transcriptomes. While target gene regulation data are abundant in this field, they reside in hundreds of discrete publications in formats refractory to routine query and analysis and, accordingly, their full value to the NR signaling community has not been realized. One of the mandates of the Nuclear Receptor Signaling Atlas (NURSA) is to facilitate the community's access to existing public datasets. Pursuant to this mandate, we are developing a freely accessible community web resource, Transcriptomine, to bring together the sum total of available expression array and RNA-Seq data points generated by the field in a single location. Transcriptomine currently contains over 25,000,000 gene fold-change data points from over 1,200 contrasts relevant to over 100 NRs, ligands, and coregulators in over 200 tissues and cell lines. Transcriptomine is designed to accommodate a spectrum of end users, ranging from bench researchers to those with advanced bioinformatic training. Visualization tools allow users to build custom charts to compare and contrast patterns of gene regulation across different tissues and in response to different ligands. Our resource affords an entirely new paradigm for leveraging gene expression data in the NR signaling field, empowering users to run queries across diverse regulatory molecules, tissues and cell lines, target genes, biological functions, and disease associations that would otherwise be prohibitive in terms of time and effort. Transcriptomine will be regularly updated with gene lists from future genome-wide expression array and expression-sequencing datasets in the NR signaling field.

Keywords: target gene database, informatics, gene expression, transcriptomics

Procedia PDF Downloads 252
1314 Methodology for Temporary Analysis of Production and Logistic Systems on the Basis of Distance Data

Authors: M. Mueller, M. Kuehn, M. Voelker

Abstract:

In small and medium-sized enterprises (SMEs), the challenge is to create a well-grounded and reliable basis for process analysis, optimization, and planning despite a lack of data. SMEs have limited access to methods with which they can effectively and efficiently analyse processes and identify cause-and-effect relationships in order to generate the necessary database and derive optimization potential from it. The implementation of digitalization within the framework of Industry 4.0 thus becomes a particular necessity for SMEs. For these reasons, this abstract presents an analysis methodology whose objective is an SME-appropriate approach to efficient, temporarily deployable data collection and evaluation in flexible production and logistics systems, as a basis for process analysis and optimization. The overall methodology focuses on retrospective, event-based tracing and analysis of material flow objects. The technological basis consists of Bluetooth Low Energy (BLE) transmitters, so-called beacons, and smart mobile devices (SMDs), e.g., smartphones, as receivers, between which distance data can be measured and motion profiles derived. The distance is determined using the Received Signal Strength Indicator (RSSI), a measure of the signal field strength between transmitter and receiver. The focus is the development of a software-based methodology for interpreting the relative movements of transmitters and receivers based on distance data. The main research concerns the selection and implementation of pattern recognition methods for automatic process recognition, as well as methods for visualizing relative distance data. Because the database is already categorized by process type, classification methods from the field of supervised learning (e.g., Support Vector Machines) are used. The required data quality demands the selection of suitable methods as well as filters for smoothing the signal variations of the RSSI, the integration of methods for determining correction factors depending on possible sources of signal interference (columns, pallets), and the configuration of the technology used. The parameter settings on which the respective algorithms are based have a further significant influence on the result quality of the classification methods, the correction models, and the visualization of the position profiles. Studies have already shown that selected parameter variation can improve the accuracy of classification algorithms by up to 30%; similar potential can be observed for the parameters of the smoothing methods and filters. There is therefore increased interest in obtaining detailed results on the influence of parameter and factor combinations on data quality in this area. The overall methodology is realized with a modular software architecture consisting of independent modules for data acquisition, data preparation, and data storage. The demonstrator for initialization and data acquisition is available as a mobile Java-based application. The data preparation, including the signal-smoothing methods, is Python-based, with the possibility of varying parameter settings and storing them in the database (SQLite). The evaluation is divided into two separate software modules with database connections: automated assignment of defined process classes to distance data using selected classification algorithms, and visualization and reporting via a graphical user interface (GUI).
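The RSSI-to-distance step can be illustrated with the standard log-distance path-loss model; the reference power and path-loss exponent below are typical indoor values, not calibrated parameters from the study's beacons.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Estimate transmitter-receiver distance in metres.

    tx_power_dbm: RSSI measured at the 1 m reference distance.
    path_loss_exp: environment-dependent exponent (~2 in free space,
    higher indoors with obstacles such as columns or pallets).
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

print(rssi_to_distance(-75))  # ~6.3 m with the assumed defaults
```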

Keywords: event-based tracing, machine learning, process classification, parameter settings, RSSI, signal smoothing

Procedia PDF Downloads 105
1313 Exploring Simple Sequence Repeats within Conserved microRNA Precursors Identified from Tea Expressed Sequence Tag (EST) Database

Authors: Anjan Hazra, Nirjhar Dasgupta, Chandan Sengupta, Sauren Das

Abstract:

Tea (Camellia sinensis) has received substantial attention from the scientific world from time to time, not only for its commercial importance but also for its demand among health-conscious people across the world as a potential source of antioxidant supplements. These health-benefit traits primarily rely on regulatory networks of different metabolic pathways. The development of microsatellite markers from conserved genomic regions is worthwhile for studying the genetic diversity of closely related or self-pollinated species. Although several SSR markers have been reported in tea, trait-specific simple sequence repeats (SSRs) that can be used for marker-assisted breeding are yet to be identified. MicroRNAs are endogenous, noncoding, short RNAs directly involved in regulating gene expression at the post-transcriptional level. It has been found that diversity in a miRNA gene interferes with the formation of its characteristic hairpin structure and its subsequent function. In the present study, the precursors of small regulatory RNAs (microRNAs) have been mined from the tea Expressed Sequence Tag (EST) database. Furthermore, the simple sequence repeat motifs within the putative miRNA precursor genes have been identified in order to experimentally validate their existence and function. Genic SSR markers are already known to be an efficient and breeder-friendly source for genetic diversity analysis. The potential outcome of this in-silico study would therefore provide novel clues for understanding miRNA-triggered polymorphic genic expression controlling the specific metabolic pathways accountable for tea quality.
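A minimal sketch of the SSR scan implied above: find 2-6 nt motifs tandemly repeated in a precursor sequence. The repeat threshold and the toy sequence are illustrative, not the study's actual search criteria.

```python
import re

def find_ssrs(seq, min_repeats=3):
    """Return (motif, repeat_count, start) for 2-6 nt motifs tandemly
    repeated at least min_repeats times."""
    ssrs = []
    pattern = re.compile(r"(([ACGT]{2,6}?)\2{%d,})" % (min_repeats - 1))
    for m in pattern.finditer(seq.upper()):
        unit = m.group(2)
        ssrs.append((unit, len(m.group(1)) // len(unit), m.start()))
    return ssrs

# Toy precursor fragment containing an (AT)5 repeat
print(find_ssrs("GGCATATATATATGCC"))  # [('AT', 5, 3)]
```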

Keywords: microRNA, simple sequence repeats, tea quality, trait-specific marker

Procedia PDF Downloads 281
1312 A Multifactorial Algorithm to Automate Screening of Drug-Induced Liver Injury Cases in Clinical and Post-Marketing Settings

Authors: Osman Turkoglu, Alvin Estilo, Ritu Gupta, Liliam Pineda-Salgado, Rajesh Pandey

Abstract:

Background: Hepatotoxicity can be linked to a variety of clinical symptoms and histopathological signs, posing a great challenge in the surveillance of suspected drug-induced liver injury (DILI) cases in a safety database. Additionally, the majority of such cases are rare, idiosyncratic, and highly unpredictable, and tend to demonstrate unique individual susceptibility; these qualities make the pharmacovigilance monitoring process tedious and time-consuming. Objective: To develop a multifactorial algorithm to assist pharmacovigilance physicians in identifying high-risk hepatotoxicity cases associated with DILI in the sponsor's safety database (Argus). Methods: Multifactorial selection criteria were established using Structured Query Language (SQL) and the TIBCO Spotfire® visualization tool, via a combination of word fragments, wildcard strings, and mathematical constructs based on Hy's law criteria and the pattern of injury (R-value). These criteria excluded non-eligible cases from monthly line listings mined from the Argus safety database. The capabilities and limitations of the criteria were verified by comparing a manual review of all monthly cases with the system-generated monthly listings over six months. Results: On average, over a period of six months, the algorithm accurately identified 92% of DILI cases meeting the established criteria. The automated process easily compared liver enzyme elevations with baseline values, reducing the screening time to under 15 minutes, as opposed to the multiple hours consumed by a cognitively laborious manual process. Limitations of the algorithm include its inability to identify cases associated with non-standard laboratory tests, naming conventions, and/or incomplete or incorrectly entered laboratory values. Conclusions: The newly developed multifactorial algorithm proved to be extremely useful in detecting potential DILI cases while heightening the vigilance of the drug safety department. Additionally, the algorithm may be useful in identifying a potential signal for DILI in drugs not yet known to cause liver injury (e.g., drugs in the initial phases of development). It also carries the potential for universal application due to its product-agnostic data and keyword mining features. Plans for the tool include developing it into a fully automated application, thereby completely eliminating the manual screening process.
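The Hy's law and R-value constructs that the criteria encode can be sketched as follows, using the published definitions (R = (ALT/ULN)/(ALP/ULN); Hy's law requires ALT >= 3x ULN together with bilirubin >= 2x ULN). The SQL word-fragment criteria themselves are not reproduced, and the laboratory values below are invented.

```python
def r_value(alt, alt_uln, alp, alp_uln):
    """Pattern-of-injury ratio: >= 5 hepatocellular, <= 2 cholestatic,
    in between mixed."""
    return (alt / alt_uln) / (alp / alp_uln)

def meets_hys_law(alt, alt_uln, tbili, tbili_uln, alp, alp_uln):
    """Simplified Hy's law screen over one set of lab values."""
    return (alt >= 3 * alt_uln
            and tbili >= 2 * tbili_uln
            and r_value(alt, alt_uln, alp, alp_uln) >= 5)

print(meets_hys_law(alt=240, alt_uln=40, tbili=3.1, tbili_uln=1.2,
                    alp=110, alp_uln=120))  # True for these invented labs
```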

Keywords: automation, drug-induced liver injury, pharmacovigilance, post-marketing

Procedia PDF Downloads 125
1311 Research on Health Emergency Management Based on the Bibliometrics

Authors: Meng-Na Dai, Bao-Fang Wen, Gao-Pei Zhu, Chen-Xi Zhang, Jing Sun, Chang-Hai Tang, Zhi-Qiang Feng, Wen-Qiang Yin

Abstract:

Based on an analysis of the literature on health emergency management in China over the past 10 years, this paper discusses the current research hotspots, development trends, and shortcomings in this field in China, and provides references for scholars conducting follow-up research. CNKI (China National Knowledge Infrastructure), Weipu, and Wanfang were the source databases. The keywords used in the database search were health, emergency, and management, covering the period from 2009 to 2018. Duplicate, non-academic, and unrelated documents were excluded, leaving 901 articles in the literature review database. The main indicators abstracted were the number of articles published each year, authors, institutions, periodicals, etc. The analysis yielded several findings. Overall, the number of publications on health emergency management in China has shown a fluctuating downward trend over the past 10 years. Specifically, there is a lack of close cooperation between authors, and no core research teams have yet formed. Meanwhile, high-level periodicals and quality literature are scarce in this field. In addition, there are many research hotspots, such as research on emergency management systems and mechanisms, capacity evaluation index systems, plans, and capacity building. In the future, scientific research funding for health emergency management should be increased, collaborative innovation among authors in multi-disciplinary fields encouraged, and high-quality, high-impact journals created in this field. The state should encourage scholars in this field to carry out more academic cooperation and communication worldwide and to improve the research in breadth and depth. Generally speaking, research on health emergency management in China is still insufficient and needs to be improved.

Keywords: health emergency management, research situation, bibliometrics, literature

Procedia PDF Downloads 116
1310 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), examined across types of loss functions and optimizers. The CNNs used in this paper are AlexNet, VGGNet, and ResNet. By using various loss functions, including cross-entropy, center loss, cosine proximity, and hinge loss, and various optimizers, including Adam, SGD, RMSprop, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach and compare generalization power. It is important to note that the same subset of LivDet is used across all training and testing for each model; this way, we can compare the performance, in terms of generalization to unseen data, across all the models. The best CNN (AlexNet), with the appropriate loss function and optimizer, yields more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports parameter counts and mean average error rates for the high-accuracy models, in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has less complexity than the other CNN models, it proved to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, were applied to the final model.
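The loss/optimizer grid can be illustrated in PyTorch as below. Only some of the paper's options map directly onto built-ins (center loss and cosine proximity would need custom code), so this sketch covers a standard subset with a dummy batch in place of real fingerprint images.

```python
import torch
import torch.nn as nn
from torchvision.models import alexnet

model = alexnet(num_classes=2)  # live vs. spoof
losses = {
    "cross_entropy": nn.CrossEntropyLoss(),
    "hinge": nn.MultiMarginLoss(),  # multi-class hinge loss
}
optimizers = {
    "adam": lambda p: torch.optim.Adam(p, lr=1e-4),
    "sgd": lambda p: torch.optim.SGD(p, lr=1e-3, momentum=0.9),
    "rmsprop": lambda p: torch.optim.RMSprop(p, lr=1e-4),
}
for loss_name, loss_fn in losses.items():
    for opt_name, make_opt in optimizers.items():
        opt = make_opt(model.parameters())
        x = torch.randn(8, 3, 224, 224)   # dummy fingerprint batch
        y = torch.randint(0, 2, (8,))
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        print(f"{loss_name}/{opt_name}: loss={loss.item():.3f}")
```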

Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer

Procedia PDF Downloads 109
1309 Using Linear Logistic Regression to Evaluation the Patient and System Delay and Effective Factors in Mortality of Patients with Acute Myocardial Infarction

Authors: Firouz Amani, Adalat Hoseinian, Sajjad Hakimian

Abstract:

Background: Mortality due to myocardial infarction (MI) often occurs during the first hours after symptom onset, so a timely visit to the hospital for the necessary treatment can be effective in decreasing the mortality rate. The aim of this study was to investigate the impact of relevant factors on the mortality of MI patients using linear logistic regression. Materials and Methods: In this case-control study, all patients with acute MI who were referred to the Ardabil city hospital were studied. All deceased patients were considered the case group (n=27), and 27 matched patients without acute MI were selected as the control group. Data were collected for all patients in both groups using the same checklist and then analyzed with SPSS version 24 using statistical methods. We used a linear logistic regression model to determine the factors affecting the mortality of MI patients. Results: The mean age of patients in the case group was significantly higher than in the control group (75.1±11.7 vs. 63.1±11.6, p=0.001). The history of non-cardiac diseases in the case group, at 44.4%, was significantly higher than in the control group, at 7.4% (p=0.002). The proportion of performed PCIs in the case group, at 40.7%, was significantly lower than in the control group, at 74.1% (p=0.013). The time from hospital admission to PCI in the case group, at 110.9 min, was significantly longer than in the control group, at 56 min (p=0.001). The mean delay from symptom onset to hospital admission (patient delay) and the mean delay from hospital admission to treatment (system delay) were similar between the two groups. Using the logistic regression model, we found that a history of non-cardiac diseases (OR=283) and the number of performed PCIs (OR=24.5) had a significant impact on the mortality of MI patients compared to other factors. Conclusion: The results of this study showed that, of all the studied factors, the number of performed PCIs, a history of non-cardiac illness, and the interval between symptom onset and PCI have a significant relationship with the mortality of MI patients, while the other factors were not meaningful. Further studies with larger samples investigating other factors, such as smoking and weather, are recommended.
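For readers unfamiliar with how the reported odds ratios arise, a logistic regression of this form can be sketched with statsmodels; the file and column names are placeholders, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data frame: one row per patient, 'died' coded 0/1
df = pd.read_csv("mi_patients.csv")
X = sm.add_constant(df[["age", "noncardiac_history", "pci_performed",
                        "admission_to_pci_min"]])
fit = sm.Logit(df["died"], X).fit()
print(fit.summary())
print(np.exp(fit.params))  # exponentiated coefficients = odds ratios
```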

Keywords: acute MI, mortality, heart failure, arrhythmia

Procedia PDF Downloads 106
1308 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite

Authors: F. Lazzeri, I. Reiter

Abstract:

Optimizing energy production has traditionally been very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there is a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R, and web services built and deployed with different components of Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that makes it easy to build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools for analyzing data and sharing insights. Our results show that Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful for predicting hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity, and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R and different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile, and ARIMA) are presented, and results and performance metrics discussed.
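The paper's experiments are built in R and Azure Machine Learning; purely as an illustration of the boosted-tree quantile approach on lagged load and weather features, a Python analogue on synthetic hourly data might look like this.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic hourly load driven by temperature and a weekday effect
rng = np.random.default_rng(0)
hours = pd.date_range("2024-01-01", periods=24 * 60, freq="h")
temp = 10 + 8 * np.sin(2 * np.pi * hours.hour / 24) + rng.normal(0, 1, len(hours))
load = 50 + 2 * temp + 5 * (hours.dayofweek < 5) + rng.normal(0, 3, len(hours))

X = pd.DataFrame({"hour": hours.hour, "dow": hours.dayofweek,
                  "temp": temp, "lag24": pd.Series(load).shift(24)}).dropna()
y = load[24:]

# Median (alpha=0.5) quantile model, cf. Fast Forest Quantile regression
model = GradientBoostingRegressor(loss="quantile", alpha=0.5)
model.fit(X[:-168], y[:-168])            # hold out the final week
print(model.score(X[-168:], y[-168:]))   # R^2 on the held-out week
```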

Keywords: time-series, features engineering methods for forecasting, energy demand forecasting, Azure Machine Learning

Procedia PDF Downloads 276
1307 A Survey of Digital Health Companies: Opportunities and Business Model Challenges

Authors: Iris Xiaohong Quan

Abstract:

The global digital health market reached 175 billion U.S. dollars in 2019 and is expected to grow at about 25% CAGR to over 650 billion USD by 2025. Different terms, such as digital health, e-health, mHealth, and telehealth, have been used in the field, which can sometimes cause confusion. The term digital health was originally introduced to refer specifically to the use of interactive media, tools, platforms, applications, and solutions that are connected to the Internet to address the health concerns of providers as well as consumers. While mHealth emphasizes the use of mobile phones in healthcare, telehealth means using technology to remotely deliver clinical health services to patients. According to the FDA, "the broad scope of digital health includes categories such as mobile health (mHealth), health information technology (IT), wearable devices, telehealth and telemedicine, and personalized medicine." Some researchers believe that digital health is nothing other than the cultural transformation healthcare has been going through in the 21st century because of digital health technologies that provide data to both patients and medical professionals. As digital health is burgeoning but research in the area is still inadequate, our paper aims to clear up the definitional confusion and provide an overall picture of digital health companies. We further investigate how business models are designed and differentiated in the emerging digital health sector. Both quantitative and qualitative methods are adopted in the research. For the quantitative analysis, our research data came from two databases, Crunchbase and CB Insights, which are well-recognized information sources for researchers, entrepreneurs, managers, and investors. We searched a few keywords in the Crunchbase database based on companies' self-descriptions: digital health, e-health, and telehealth. A search for "digital health" returned 941 unique results, "e-health" returned 167 companies, and "telehealth" 427. We also searched the CB Insights database for similar information. After merging the lists, removing duplicates, and cleaning up the database, we arrived at a list of 1,464 digital health companies. A qualitative method is used to complement the quantitative analysis: an in-depth case analysis of three successful unicorn digital health companies to understand how business models evolve and to discuss the challenges faced in this sector. Our research returned some interesting findings. For instance, we found that 86% of the digital health startups were founded in the decade since 2010, 75% of the digital health companies have fewer than 50 employees, and almost 50% have fewer than 10 employees. This shows that digital health companies are relatively young and small in scale. On the business model analysis, while traditional healthcare businesses emphasize the so-called "3P" (patient, physician, and payer), digital health companies extend this to "5P" by adding patents, which result from technology requirements (such as the development of artificial intelligence models), and platform, an effective value-creation approach that brings the stakeholders together. Our case analysis details this 5P framework and contributes to the extant knowledge on business models in the healthcare industry.

Keywords: digital health, business models, entrepreneurship opportunities, healthcare

Procedia PDF Downloads 157
1306 The Current State Of Human Gait Simulator Development

Authors: Stepanov Ivan, Musalimov Viktor, Monahov Uriy

Abstract:

This report examines the current state of human gait simulator development based on a model of the human hip joint. The unit will create a database of human gait types, useful for setting up and calibrating mechano-devices, as well as for the creation of new rehabilitation systems, exoskeletons, and walking robots. The system offers ample scope for configuring dimensions and stiffness while maintaining relative simplicity.

Keywords: hip joint, human gait, physiotherapy, simulation

Procedia PDF Downloads 379
1305 Research Trends in Using Virtual Reality for the Analysis and Treatment of Lower-Limb Musculoskeletal Injury of Athletes: A Literature Review

Authors: Hannah K. M. Tang, Muhammad Ateeq, Mark J. Lake, Badr Abdullah, Frederic A. Bezombes

Abstract:

There is little research applying virtual reality (VR) to the treatment of musculoskeletal injury in athletes, despite the prevalence of such injuries and their implications for physical and psychological health. Nevertheless, developments in wireless VR headsets are making dynamic movement in VR environments (VREs) easier to facilitate, and more research is expected in this emerging field. This systematic review identified publications that used VR interventions for the analysis or treatment of lower-limb musculoskeletal injury in athletes. It established a search protocol and, through narrative discussion, identified existing trends. The database searches encompassed four term sets: 1) VR systems; 2) musculoskeletal injuries; 3) sporting population; 4) movement outcome analysis. Overall, a total of 126 publications were identified through database searching, and twelve were included in the final analysis and discussion. Many of the studies were pilot and proof-of-concept work, and seven of the twelve publications were observational studies; however, these may provide preliminary data from which clinical trials will branch. Where specified, the focus of the literature was very narrow, with very similar population demographics and injuries. The trends in the findings emphasised the role of VR and attentional focus, the strategic manipulation of movement outcomes, and the transfer of skill to the real world. Causal inferences may have been undermined by flaws, as most studies were limited by the practicality of conducting a two-factor clinical VR-based study. In conclusion, by building on the exploratory studies and combining numerous developments, techniques, and tools, a novel application could be established that utilises VR with dynamic movement for the effective treatment of specific musculoskeletal injuries in athletes.

Keywords: athletes, lower-limb musculoskeletal injury, rehabilitation, return-to-sport, virtual reality

Procedia PDF Downloads 209
1304 Ibrutinib and the Potential Risk of Cardiac Failure: A Review of Pharmacovigilance Data

Authors: Abdulaziz Alakeel, Roaa Alamri, Abdulrahman Alomair, Mohammed Fouda

Abstract:

Introduction: Ibrutinib is a selective, potent, and irreversible small-molecule inhibitor of Bruton's tyrosine kinase (BTK). It forms a covalent bond with a cysteine residue (CYS-481) at the active site of BTK, leading to inhibition of BTK enzymatic activity. The drug is indicated to treat certain types of cancer, such as mantle cell lymphoma (MCL), chronic lymphocytic leukaemia, and Waldenström's macroglobulinaemia (WM). Cardiac failure is the inability of the heart muscle to pump adequate blood to the body's organs; there are multiple types, including left- and right-sided heart failure and systolic and diastolic heart failure. The aim of this review is to evaluate the risk of cardiac failure associated with the use of ibrutinib and to suggest regulatory recommendations if required. Methodology: The Signal Detection team at the National Pharmacovigilance Center (NPC) of the Saudi Food and Drug Authority (SFDA) performed a comprehensive signal review using its national database as well as the World Health Organization (WHO) database (VigiBase) to retrieve information for assessing the causality between cardiac failure and ibrutinib. We used the WHO-Uppsala Monitoring Centre (UMC) criteria as the standard for assessing the causality of the reported cases. Results: Case review: the search returned 212 global ICSRs for the combined drug/adverse drug reaction as of July 2020. The reviewers selected and assessed causality for the well-documented ICSRs with completeness scores of 0.9 and above (35 ICSRs), where a value of 1.0 represents the highest score for the best-documented ICSRs. Among the reviewed cases, more than half provide a supportive association (four probable and 15 possible cases). Data mining: the disproportionality between the observed and the expected reporting rates for the drug/adverse drug reaction pair was estimated using the information component (IC), a tool developed by the WHO-UMC to measure the reporting ratio. A positive IC reflects a higher statistical association, while negative values indicate a lower statistical association, with the null value equal to zero. An IC of 1.5 revealed a positive statistical association for the drug/ADR combination, meaning that 'ibrutinib' with 'cardiac failure' has been observed more often than expected when compared to other medications available in the WHO database. Conclusion: Health regulators and health care professionals must be aware of the potential risk of cardiac failure associated with ibrutinib, and the monitoring of any signs or symptoms in treated patients is essential. The weighted cumulative evidence identified from the causality assessment of the reported cases and from data mining is sufficient to support a causal association between ibrutinib and cardiac failure.

Keywords: cardiac failure, drug safety, ibrutinib, pharmacovigilance, signal detection

Procedia PDF Downloads 105
1303 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks

Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar

Abstract:

A DNA barcode is a short mitochondrial DNA fragment composed of nucleotides, each made up of three subunits: a phosphate group, a sugar, and one of the nucleic bases (A, T, C, and G). Barcodes provide a good source of the information needed to classify living species, an intuition confirmed by many experimental results. Species classification with DNA barcode sequences has been studied by several researchers. The classification problem assigns unknown species to known ones by analyzing their barcodes; this task has to be supported with reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence-similarity methods. A large set of sequences can be simultaneously compared using multiple sequence alignment, which is known to be NP-complete; to make this type of analysis feasible, heuristics like progressive alignment have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs shorter regions of high similarity between a query sequence and matched sequences in the database. However, all these methods are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable; this approach avoids the complex problem of form and structure in different classes of organisms. The method is evaluated on empirical data, and its classification performance is compared with that of other methods. Our system consists of three phases. The first, transformation, is composed of three steps: Electron-Ion Interaction Pseudopotential (EIIP) codification of the DNA barcodes, Fourier transform, and power spectrum signal processing. The second, approximation, is empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). The third, classification of the DNA barcodes, is realized by applying a hierarchical classification algorithm.
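
As a minimal sketch of the transformation phase described above (EIIP codification, Fourier transform, power spectrum), assuming the commonly used nucleotide EIIP values; the wavelet-network approximation and hierarchical classification stages are not reproduced here.

```python
import numpy as np

# EIIP values commonly used for the four nucleotides in genomic
# signal processing.
EIIP = {'A': 0.1260, 'C': 0.1340, 'G': 0.0806, 'T': 0.1335}

def barcode_power_spectrum(sequence: str) -> np.ndarray:
    """Codify a DNA barcode as an EIIP numerical signal, apply the
    Fourier transform, and return the power spectrum feature vector."""
    signal = np.array([EIIP[b] for b in sequence.upper() if b in EIIP])
    spectrum = np.fft.fft(signal)
    return np.abs(spectrum) ** 2

# Toy barcode fragment for illustration.
features = barcode_power_spectrum("ATGCGTACGTTAGC")
print(features[: len(features) // 2])  # spectrum is symmetric for real input
```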

Keywords: DNA barcode, electron-ion interaction pseudopotential, Multi Library Wavelet Neural Networks (MLWNN)

Procedia PDF Downloads 292
1302 Security of Database Using Chaotic Systems

Authors: Eman W. Boghdady, A. R. Shehata, M. A. Azem

Abstract:

Database (DB) security demands permitting the actions of authorized users and prohibiting those of unauthorized users and intruders on the DB and the objects inside it. Successfully running organizations demand confidentiality for their DBs: they do not allow unauthorized access to their data/information, and they demand assurance that their data is protected against any malicious or accidental modification. DB protection and confidentiality are therefore the security concerns. There are four types of controls for DB protection: access control, information flow control, inference control, and cryptographic control. Cryptographic control is considered the backbone of DB security; it secures the DB by encryption during storage and communication. Current cryptographic techniques fall into two types: traditional classical cryptography using standard algorithms (DES, AES, IDEA, etc.) and chaos cryptography using continuous (Chua, Rössler, Lorenz, etc.) or discrete (logistic, Hénon, etc.) systems. The defining characteristic of chaos is its extreme sensitivity to the initial conditions of the system. In this paper, DB-security systems based on chaotic algorithms are described. Pseudo-Random Number Generators (PRNGs) derived from the different chaotic algorithms are implemented using Matlab, and their statistical properties are evaluated using the NIST and other statistical test suites. These algorithms are then used to secure a conventional DB (plaintext), where the statistical properties of the ciphertext are also tested. To increase the complexity of the PRNGs and to pass all the NIST statistical tests, we propose two hybrid PRNGs: one based on two chaotic logistic maps and another based on two chaotic Hénon maps, where the two chaotic algorithms run side by side, each starting from random, independent initial conditions and parameters (the encryption keys). The resulting hybrid PRNGs passed the NIST statistical test suite.
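
The paper's PRNGs are implemented in Matlab; the sketch below transposes the two-logistic-map hybrid idea to Python. The byte-extraction and XOR combination rules are assumptions for illustration, not the authors' construction, and such a toy keystream is not cryptographically vetted.

```python
def logistic_stream(x, r=3.99):
    """Logistic map x_{n+1} = r * x_n * (1 - x_n), chaotic for r near 4;
    yields one byte per iteration (assumed extraction rule)."""
    while True:
        x = r * x * (1.0 - x)
        yield int(x * 256) & 0xFF

def hybrid_prng(seed1, seed2, n_bytes):
    """Run two logistic maps side by side from independent initial
    conditions (the keys) and XOR-combine their byte streams."""
    s1, s2 = logistic_stream(seed1), logistic_stream(seed2)
    return bytes(next(s1) ^ next(s2) for _ in range(n_bytes))

# Encrypt a toy DB record by XOR-ing it with the hybrid keystream.
keystream = hybrid_prng(seed1=0.123456, seed2=0.654321, n_bytes=16)
ciphertext = bytes(p ^ k for p, k in zip(b"record: 42, Ali", keystream))
print(ciphertext.hex())
```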

Keywords: algorithms and data structure, DB security, encryption, chaotic algorithms, Matlab, NIST

Procedia PDF Downloads 244
1301 Bioinformatics Identification of Rare Codon Clusters in Proteins Structure of HBV

Authors: Abdorrasoul Malekpour, Mohammad Ghorbani, Mojtaba Mortazavi, Mohammadreza Fattahi, Mohammad Hassan Meshkibaf, Ali Fakhrzad, Saeid Salehi, Saeideh Zahedi, Amir Ahmadimoghaddam, Parviz Farzadnia, Mohammadreza Hajyani Asl

Abstract:

Hepatitis B, an infectious disease, has eight main genotypes (A–H). The aim of this study is to bioinformatically identify Rare Codon Clusters (RCCs) in the protein structures of HBV. To detect the protein family (Pfam) accession numbers of the HBV proteins, the UniProt database and the Pfam search tool were used. The obtained Pfam IDs were analyzed in the Sherlocc program, and RCCs in HBV proteins were detected. Furthermore, the structures of the TrEMBL-entry proteins were studied in the PDB database, and the 3D structures of the HBV proteins and the locations of the RCCs were visualized and studied using the Swiss PDB Viewer software. The Pfam search tool found nine significant hits and no insignificant hits in three frames. The Sherlocc analysis identified no RCCs in the external core antigen (PF08290) or the truncated HBeAg protein (PF08290). By contrast, RCCs were identified in the hepatitis core antigen (PF00906), large envelope protein S (PF00695), X protein (PF00739), the DNA polymerase (viral) N-terminal domain (PF00242), and protein P (PF00336). In the HBV genome, seven RCCs were identified, located in the hepatitis core antigen, large envelope protein S, and DNA polymerase proteins; the protein structures of the TrEMBL-entry sequences reported in the Sherlocc outputs are not complete. Based on the positions of the RCCs in the structures of the HBV proteins, it is suggested that these RCCs are important in the HBV life cycle. We hope that this study provides a new and deep perspective for protein research and drug design in the treatment of HBV.
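
Sherlocc's actual detection procedure is not reproduced here; as a simplified stand-in, an RCC can be pictured as a sliding window enriched in codons whose usage frequency falls below a cutoff. All frequencies and the codon sequence below are hypothetical.

```python
from typing import Dict, List

def rare_codon_clusters(codons: List[str], usage: Dict[str, float],
                        rare_cutoff: float = 0.01, window: int = 6,
                        min_rare: int = 3) -> List[int]:
    """Flag window start positions containing >= min_rare codons whose
    usage frequency is below rare_cutoff (a toy RCC criterion)."""
    rare = [usage.get(c, 0.0) < rare_cutoff for c in codons]
    return [i for i in range(len(rare) - window + 1)
            if sum(rare[i:i + window]) >= min_rare]

# Hypothetical usage frequencies and a toy codon sequence.
usage = {"CTG": 0.040, "CTA": 0.007, "CGA": 0.006, "AGG": 0.012}
codons = ["CTG", "CTA", "CGA", "CTG", "CTA", "CGA",
          "CTG", "AGG", "CTG", "CTG", "CTG", "CTA"]
print(rare_codon_clusters(codons, usage))  # -> [0, 1, 2]
```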

Keywords: rare codon clusters, hepatitis B virus, bioinformatic study, infectious disease

Procedia PDF Downloads 453
1300 Analysis of Human Toxicity Potential of Major Building Material Production Stage Using Life Cycle Assessment

Authors: Rakhyun Kim, Sungho Tae

Abstract:

Global environmental issues such as abnormal weather due to global warming, resource depletion, and ecosystem distortion have been escalating with the rapid increase in population growth and the expansion of industrial and economic development. Accordingly, initiatives have been implemented by many countries to protect the environment through indirect regulation methods such as Environmental Product Declarations (EPD), in addition to direct regulations such as various emission standards. Following this trend, life cycle assessment (LCA) techniques that provide quantitative environmental information, such as Human Toxicity Potential (HTP), for buildings are being developed in the construction industry. At present, however, studies on environmental databases of building materials are not sufficient to support this adequately. The purpose of this study is to analyze the human toxicity potential of the major building material production stage using life cycle assessment. For this purpose, a theoretical consideration of life cycle assessment and environmental impact categories was performed and the direction of the study was set: the major building materials were identified from a global warming potential perspective, and a life cycle inventory database was selected. Classification was performed for the 17 kinds of substances and the impact indices, such as human toxicity potential, specified in CML2001. The human toxicity potential of the building material production stage was then calculated through characterization, and the environmental impacts of building materials in the same category were analyzed based on the characterization impacts calculated in this study. In this way, environmental impact coefficients of major building materials were established in compliance with ISO 14040. This is believed to effectively support the decisions of stakeholders seeking to improve the environmental performance of buildings and to provide a basis for the voluntary participation of architects in environmental activities.
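
The characterization step referred to above multiplies each inventoried emission by its CML2001 characterization factor and sums the results into a single HTP score in kg 1,4-dichlorobenzene equivalents. The factors and the per-kilogram inventory below are assumed values for illustration, not the study's database entries.

```python
# Assumed CML2001-style HTP characterization factors
# (kg 1,4-DCB-eq per kg emitted) -- illustrative values only.
htp_factors = {
    "benzene_to_air": 1900.0,
    "nickel_to_air": 35000.0,
    "PM10_to_air": 0.82,
}

# Assumed emissions per kg of a building material during production.
inventory = {
    "benzene_to_air": 2.0e-7,
    "nickel_to_air": 1.5e-8,
    "PM10_to_air": 4.0e-4,
}

# Characterization: HTP = sum_j (emission_j * factor_j).
htp = sum(amount * htp_factors[substance]
          for substance, amount in inventory.items())
print(f"HTP: {htp:.3e} kg 1,4-DCB-eq per kg of material")
```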

Keywords: human toxicity potential, major building material, life cycle assessment, production stage

Procedia PDF Downloads 108
1299 A Convolutional Neural Network Based Vehicle Theft Detection, Location, and Reporting System

Authors: Michael Moeti, Khuliso Sigama, Thapelo Samuel Matlala

Abstract:

One of the principal challenges the world is confronted with is insecurity. The crime rate is increasing exponentially, and protecting physical assets, especially in the motoring industry, is becoming impossible through human effort alone. The need to develop technological solutions that detect and report theft without any human interference is inevitable. This is critical, especially for vehicle owners, to ensure theft detection and speedy identification for recovery efforts in cases where a vehicle is missing or an attempted theft is taking place. The vehicle theft detection system uses a Convolutional Neural Network (CNN) to recognize the driver's face, captured using an installed mobile phone device. The location identification function uses the Global Positioning System (GPS) to determine the real-time location of the vehicle. Upon identification of the location, Global System for Mobile Communications (GSM) technology is used to report or notify the vehicle owner of the whereabouts of the vehicle. The mobile app was implemented in Python, as its widely developed library ecosystem allows easy access to machine learning algorithms. The graphical user interface was developed in Java, which is better suited for mobile development. Google's online database (Firebase) was used as the storage back end for the application. The system integration test was performed using a simple percentage analysis. Sixty (60) vehicle owners participated in this study as a sample, and questionnaires were used to establish the acceptability of the system developed. The results indicate the efficiency of the proposed system; consequently, the paper proposes that the system can effectively monitor a vehicle at any given place, even when it is driven outside its normal jurisdiction. Moreover, the system can be used as a database for detecting, locating, and reporting missing vehicles to different security agencies.
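
The abstract does not specify the CNN architecture or framework; the sketch below is a minimal PyTorch stand-in for the owner-versus-unknown face classification step, with an invented layer layout and input size, wired to the alert logic described above.

```python
import torch
import torch.nn as nn

class DriverFaceCNN(nn.Module):
    """Tiny CNN classifying a captured face crop as owner vs. unknown;
    the architecture is illustrative, not the paper's actual model."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = DriverFaceCNN()
frame = torch.randn(1, 3, 64, 64)       # stand-in for a phone-camera crop
is_owner = model(frame).argmax(dim=1).item() == 0
print("authorized" if is_owner else "report location via GPS/GSM")
```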

Keywords: CNN, location identification, tracking, GPS, GSM

Procedia PDF Downloads 130
1298 A Prototype of an Information and Communication Technology Based Intervention Tool for Children with Dyslexia

Authors: Rajlakshmi Guha, Sajjad Ansari, Shazia Nasreen, Hirak Banerjee, Jiaul Paik

Abstract:

Dyslexia is a neurocognitive disorder affecting around fifteen percent of the Indian population. Symptoms include difficulty in reading letters, words, and sentences; the difficulty may occur at the phonemic or recognition level and may further affect lexical structures. Post-assessment therapeutic intervention for dyslexic children is generally done by special educators and psychologists through one-on-one interaction. Considering the large number of children affected and the scarcity of experts, access to care is limited in India. Moreover, the unavailability of resources and of timely communication with caregivers adds to the problem of proper intervention. With the development of educational technology and its use in India, access to information and care has improved in this large and diverse country. In this context, this paper proposes an ICT-enabled, home-based intervention program for dyslexic children that supports the child and provides an interactive interface between experts, parents, and students. The paper discusses the details of the database design and system layout of the program. It also highlights the development of the different technical aids required to build personalized Android applications for the Indian dyslexic population, including speech database creation for children, an automatic speech recognition system, serious game development, and color-coded fonts. The paper also describes the games developed to assist the dyslexic child in cognitive training, primarily for attention, working memory, and spatial reasoning. In addition, it discusses the specific elements of the interactive intervention tool that make it effective for the home-based intervention of dyslexia.

Keywords: Android applications, cognitive training, dyslexia, intervention

Procedia PDF Downloads 274