Search results for: overton database

1367 Characteristics of Tremella fuciformis and Annulohypoxylon stygium for Optimal Cultivation Conditions

Authors: Eun-Ji Lee, Hye-Sung Park, Chan-Jung Lee, Won-Sik Kong

Abstract:

We analyzed the DNA sequence of the ITS (Internal Transcribed Spacer) region of the 18S ribosomal gene and compared it with the gene sequences of T. fuciformis and Hypoxylon sp. in the BLAST database. The sequences of the collected T. fuciformis and Hypoxylon sp. showed over 99% homology with the corresponding sequences in the BLAST database. To select the optimal medium for T. fuciformis, five media were tested: Potato Dextrose Agar (PDA), Mushroom Complete Medium (MCM), Malt Extract Agar (MEA), Yeast extract medium (YM), and Compost Extract Dextrose Agar (CDA). T. fuciformis grew best on PDA, and Hypoxylon sp. grew best on MCM. To investigate the optimum pH and temperature, the pH range was set to pH 4-8 and the temperature range to 15°C-35°C (5°C intervals). Optimum culture conditions were pH 5 at 25°C for T. fuciformis and pH 6 at 25°C for Hypoxylon sp. To identify the most suitable carbon source, we tested fructose, galactose, saccharose, soluble starch, inositol, glycerol, xylose, dextrose, lactose, dextrin, Na-CMC, adonitol, mannitol, mannose, maltose, raffinose, cellobiose, ethanol, salicine, glucose, and arabinose. The optimum carbon source was xylose for T. fuciformis and arabinose for Hypoxylon sp. Using the column test, we identified sawdust types suitable for T. fuciformis, since the composition of the sawdust affects the growth of its fruiting bodies. The substrates tested were oak, pine, poplar, and birch sawdust, as well as cottonseed meal and cottonseed hull. In artificial cultivation of T. fuciformis on sawdust medium, T. fuciformis and Hypoxylon sp. showed fast mycelial growth on a mixture of oak sawdust, cottonseed hull, and wheat bran.
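
The homology check described above can be sketched programmatically. The snippet below is a minimal illustration, assuming Biopython is installed and that "isolates.fasta" (a hypothetical file name, not from the paper) holds the ITS sequences of the collected strains; each sequence is submitted to NCBI BLAST and the top-hit identity is reported.

```python
# Minimal sketch of an ITS homology check against the NCBI nt database.
# "isolates.fasta" is a hypothetical input file; requires network access.
from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

for record in SeqIO.parse("isolates.fasta", "fasta"):
    handle = NCBIWWW.qblast("blastn", "nt", record.seq)   # remote BLAST search
    hit = NCBIXML.read(handle).alignments[0].hsps[0]      # best local alignment
    identity = 100.0 * hit.identities / hit.align_length
    print(f"{record.id}: top-hit identity {identity:.1f}%")
```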

Keywords: cultivation, optimal condition, tremella fuciformis, nutritional source

Procedia PDF Downloads 210
1366 Iris Recognition Based on the Low Order Norms of Gradient Components

Authors: Iman A. Saad, Loay E. George

Abstract:

The iris pattern is an important biometric feature of the human body, and iris recognition has become a very active topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, and a simple, efficient and fast method is introduced to extract a set of discriminatory features using a first-order gradient operator applied to grayscale images. The gradient-based features are robust, to a certain extent, against variations in the contrast or brightness of iris image samples; such variations mostly occur due to lighting differences and camera changes. First, the iris region is located; it is then remapped to a rectangular area of 360x60 pixels. A new method is also proposed for detecting eyelash and eyelid points; it relies on statistical analysis of the image to mark eyelash and eyelid pixels as noise points. In order to accommodate localization variations of the features, the rectangular iris image is partitioned into N overlapped sub-images (blocks); from each block, a set of average directional gradient density values is calculated and used as a texture feature vector. The gradient operators are applied along the horizontal, vertical and diagonal directions, and the low-order norms of the gradient components are used to establish the feature vector. A Euclidean distance based classifier was used as the matching metric for determining the degree of similarity between the feature vector extracted from the tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA V4-Interval database; the attained recognition accuracy reached 99.92%.
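
As a rough illustration of the feature-extraction and matching steps (not the authors' exact operator set or block layout), the sketch below computes average directional gradient densities over overlapping blocks of a normalized iris strip and matches feature vectors by Euclidean distance; the random strip stands in for a real, normalized 60x360 iris image.

```python
import numpy as np

def block_gradient_features(strip, block=(20, 20), step=10):
    """Average directional gradient densities over overlapped blocks
    of a normalized grayscale iris strip (rows x cols)."""
    gy, gx = np.gradient(strip.astype(float))
    gd1 = (gx + gy) / np.sqrt(2.0)   # one diagonal direction
    gd2 = (gx - gy) / np.sqrt(2.0)   # the other diagonal
    feats = []
    for r in range(0, strip.shape[0] - block[0] + 1, step):
        for c in range(0, strip.shape[1] - block[1] + 1, step):
            for g in (gx, gy, gd1, gd2):
                w = np.abs(g[r:r + block[0], c:c + block[1]])
                feats.append(w.mean())   # low-order (L1) norm density
    return np.asarray(feats)

def match(query, templates):
    """Index of the template with minimum Euclidean distance."""
    return int(np.argmin(np.linalg.norm(templates - query, axis=1)))

strip = np.random.default_rng(0).random((60, 360))   # stand-in iris strip
templates = np.stack([block_gradient_features(strip) for _ in range(3)])
print(match(block_gradient_features(strip), templates))
```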

Keywords: iris recognition, contrast stretching, gradient features, texture features, Euclidean metric

Procedia PDF Downloads 334
1365 A Validated Estimation Method to Predict the Interior Wall of Residential Buildings Based on Easy to Collect Variables

Authors: B. Gepts, E. Meex, E. Nuyts, E. Knaepen, G. Verbeeck

Abstract:

The importance of resource efficiency and environmental impact assessment has raised interest in knowing the amount of materials used in buildings. If no BIM model or energy performance certificate is available, material quantities can be obtained through an estimation or a time-consuming calculation. For the interior wall area, no validated estimation method exists. However, for environmental impact assessment or for evaluating the existing building stock as a future material bank, knowledge of the material quantities used in interior walls is indispensable. This paper presents a validated method for the estimation of the interior wall area of dwellings based on easy-to-collect building characteristics. A database of 4963 residential buildings spread all over Belgium is used. The data were collected through on-site measurements of the buildings during the construction phase (between mid-2010 and mid-2017). The interior wall area refers to the area of all interior walls in the building, including the inner leaf of exterior (party) walls, minus the area of windows and doors, unless mentioned otherwise. The two predictive modelling techniques used are 1) a (stepwise) linear regression and 2) a decision tree. The best estimation method is selected based on the best R² k-fold (5) fit. The research shows that the building volume is by far the most important variable for estimating the interior wall area. A stepwise regression based on building volume, building typology, and type of house provides the best fit, with R² k-fold (5) = 0.88. Although the best R² k-fold value is obtained when the parameters 'building typology' and 'type of house' are included, the contribution of these variables is statistically significant but practically irrelevant. Thus, if these parameters are not available, a simplified estimation method based on only the volume of the building can also be applied (R² k-fold = 0.87). The robustness and precision of the method are validated three times. Firstly, the prediction of the interior wall area is checked by means of alternative calculations of the building volume and of the interior wall area; thus, other definitions are applied to the same data. Secondly, the output is tested on an extension of the database, i.e., the same definitions applied to other data. Thirdly, the output is checked against an unrelated database with other definitions and other data. The validation demonstrates that the methods remain accurate when the underlying data are changed. The method can support environmental as well as economic dimensions of impact assessment, as it can be used in early design. As it allows the prediction of the amount of interior wall materials to be produced in the future or that might become available after demolition, the presented estimation method can be part of material flow analyses on input and on output.
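
A hedged sketch of the model-selection step follows: 5-fold R² for a volume-only linear model versus a model with the categorical extras. Column names ("volume_m3", "typology", "house_type", "wall_area_m2") and the synthetic data are illustrative placeholders, not the study's variables or measurements.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "volume_m3": rng.uniform(200, 900, n),                      # synthetic stand-ins
    "typology": rng.choice(["detached", "semi", "terraced"], n),
    "house_type": rng.choice(["open", "closed"], n),
})
df["wall_area_m2"] = 0.45 * df["volume_m3"] + rng.normal(0, 20, n)
y = df["wall_area_m2"]

# simplified method: volume only
simple = LinearRegression()
print(cross_val_score(simple, df[["volume_m3"]], y, cv=5, scoring="r2").mean())

# fuller method: volume plus one-hot encoded categorical variables
full = make_pipeline(
    ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), ["typology", "house_type"])],
        remainder="passthrough"),
    LinearRegression(),
)
print(cross_val_score(full, df[["volume_m3", "typology", "house_type"]], y,
                      cv=5, scoring="r2").mean())
```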

Keywords: buildings as material banks, building stock, estimation method, interior wall area

Procedia PDF Downloads 30
1364 Variable Refrigerant Flow (VRF) Zonal Load Prediction Using a Transfer Learning-Based Framework

Authors: Junyu Chen, Peng Xu

Abstract:

In the context of global efforts to enhance building energy efficiency, accurate thermal load forecasting is crucial for both equipment sizing and predictive control. Variable Refrigerant Flow (VRF) systems are widely used in buildings around the world, yet VRF zonal load prediction has received limited attention. Because building-level prediction methods overlook the differences between VRF zones, zone-level load forecasting can significantly enhance accuracy. Given that modern VRF systems generate high-quality data, this paper introduces transfer learning to leverage these data and further improve prediction performance. The framework also addresses the challenge of predicting loads for building zones with no historical data, offering greater accuracy and usability than pure white-box models. The study first establishes an initial variable set for VRF zonal building loads and generates a foundational white-box database using EnergyPlus. Key variables for VRF zonal loads are identified using methods including SRRC, PRCC, and Random Forest. XGBoost and LSTM are employed to generate pre-trained black-box models based on the white-box database. Finally, real-world data are incorporated into the pre-trained model through transfer learning to enhance its performance in operational buildings. In summary, this paper integrates zone-level load prediction with transfer learning and proposes a framework that improves the accuracy and applicability of VRF zonal load prediction.
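
The sketch below illustrates one plausible form of the pre-train/fine-tune step with an LSTM (the paper also uses XGBoost): pre-train on simulated sequences, freeze the recurrent feature extractor, then fine-tune the head on scarce real data. The random arrays, layer sizes, and epoch counts are stand-ins, not the study's configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)                 # synthetic stand-in data:
X_sim, y_sim = rng.normal(size=(1000, 24, 8)), rng.normal(size=(1000, 1))
X_real, y_real = rng.normal(size=(200, 24, 8)), rng.normal(size=(200, 1))

model = models.Sequential([
    layers.Input(shape=(24, 8)),               # e.g., 24 time steps, 8 features
    layers.LSTM(64, name="encoder"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                           # zonal thermal load
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_sim, y_sim, epochs=5, batch_size=64)      # pre-train on white-box data

model.get_layer("encoder").trainable = False          # freeze the feature extractor
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
model.fit(X_real, y_real, epochs=5, batch_size=32)    # fine-tune on real data
```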

Keywords: zonal load prediction, variable refrigerant flow (VRF) system, transfer learning, energyplus

Procedia PDF Downloads 28
1363 Classification of Small Towns: Three Methodological Approaches and Their Results

Authors: Jerzy Banski

Abstract:

Small towns represent a key element of the settlement structure and serve a number of important functions associated with servicing the rural areas that surround them. It is in light of this that scientific studies have paid considerable attention to the functional structure of centers of this kind, as well as to their relationships with both the surrounding rural areas and other urban centers. A preliminary step to such research, however, has typically been the classification of the urban centers themselves, which also assists with the planning and shaping of development policy on different spatial scales. The purpose of this work is to test the methods underpinning three different classifications of small urban centers and to offer a preliminary interpretation of the outcomes obtained. The research covered 722 settlement units in Poland that hold town rights and have fewer than 20,000 inhabitants. A morphologically based classification, drawing on the database of topographic objects for land cover within the administrative boundaries of towns and cities, made it possible to distinguish the categories of "housing-estate", industrial and R&R towns, as well as towns characterized by dichotomy. Equally, a functional/morphological approach to the same database allowed for the identification, via an alternative method, of three main categories of small towns (monofunctional, multifunctional or oligofunctional), which could then be described in far greater detail. A third, multi-criterion classification made simultaneous reference to structural, location-related, and administrative hierarchy-related conditioning, allowing small towns to be distinguished into nine categories. The results obtained allow for a multifaceted analysis and interpretation of the geographical differentiation in the distribution of Poland's urban centers across the country.

Keywords: small towns, classification, local planning, Poland

Procedia PDF Downloads 87
1362 Innovative Predictive Modeling and Characterization of Composite Material Properties Using Machine Learning and Genetic Algorithms

Authors: Hamdi Beji, Toufik Kanit, Tanguy Messager

Abstract:

This study aims to construct a predictive model capable of predicting the linear elastic and thermal characteristics of composite materials, drawing on a multitude of influencing parameters. These parameters encompass the shape of inclusions (circular, elliptical, square, triangular), their spatial coordinates within the matrix, orientation, volume fraction (ranging from 0.05 to 0.4), and variations in contrast (spanning from 10 to 200). A variety of machine learning techniques are deployed to build this predictive model, including decision trees, random forests, support vector machines, k-nearest neighbors, and an artificial neural network (ANN). Moreover, this research goes beyond prediction by delving into an inverse analysis using genetic algorithms; the intent is to unveil the intrinsic characteristics of composite materials by evaluating their thermomechanical responses. The foundation of this research lies in the establishment of a comprehensive database covering the array of input parameters mentioned earlier. This database, enriched with its diversity of input variables, serves as the bedrock for the creation of the machine learning and genetic algorithm-based models. These models are trained not only to predict but also to elucidate the mechanical and thermal behavior of composite materials. Remarkably, the coupling of machine learning and genetic algorithms has proven highly effective, yielding predictions with remarkable accuracy, with scores ranging between 0.97 and 0.99. This achievement demonstrates the potential of this approach in the field of materials engineering.
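
A toy version of the coupling described above is sketched below: a random-forest surrogate learns a response from microstructure parameters, and a simple genetic algorithm inverts it to find parameters reproducing a target response. The parameter ranges follow the abstract; the response formula and GA settings are illustrative, not the paper's.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
lo, hi = np.array([0.05, 10.0]), np.array([0.4, 200.0])  # (volume fraction, contrast)
X = rng.uniform(lo, hi, size=(2000, 2))
y = X[:, 0] * np.log(X[:, 1])            # stand-in for database responses
surrogate = RandomForestRegressor(n_estimators=200).fit(X, y)

def ga_invert(target, pop=100, gens=60):
    """Search for (vf, contrast) whose predicted response matches `target`."""
    P = rng.uniform(lo, hi, size=(pop, 2))
    for _ in range(gens):
        fitness = -np.abs(surrogate.predict(P) - target)
        elite = P[np.argsort(fitness)[-pop // 2:]]             # selection
        kids = elite[rng.integers(len(elite), size=pop - len(elite))]
        kids = kids + rng.normal(0, [0.01, 5.0], kids.shape)   # mutation
        P = np.clip(np.vstack([elite, kids]), lo, hi)
    return P[np.argmax(-np.abs(surrogate.predict(P) - target))]

print(ga_invert(target=0.8))             # identified microstructure parameters
```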

Keywords: machine learning, composite materials, genetic algorithms, mechanical and thermal properties

Procedia PDF Downloads 54
1361 The Role of Artificial Intelligence in Criminal Procedure

Authors: Herke Csongor

Abstract:

Artificial intelligence (AI) has been used in the decision-making process of the criminal justice system in the United States of America for decades. In the field of law, including criminal law, AI can provide serious assistance in decision-making in many places. The paper reviews four main areas where AI already plays a role in the criminal justice system and where it is expected to play an increasingly important role. The first area is predictive policing: a number of algorithms are used to prevent the commission of crimes (by predicting potential crime locations or perpetrators). This includes so-called hot-spot analysis, crime linking, and predictive coding. The second area is Big Data analysis: huge data sets are already opaque to human review and therefore unprocessable by hand. Law is one of the largest producers of digital documents (because not only decisions but nowadays the entire document material is available digitally), and this volume can only be handled with the help of computer programs, on which the development of AI systems can have an increasing impact. The third area is criminal statistical data analysis. The collection of statistical data using traditional methods required enormous human resources. AI is a huge step forward in that it can analyze the database itself: a compilation according to any requested aspect can be available in a few seconds, and the AI can indicate if it finds a connection that is important from the point of view of either crime prevention or crime detection. Finally, the use of AI in decision-making in both the investigative and judicial fields is analyzed in detail. While some are skeptical about the future role of AI in decision-making, many believe that the question is not whether AI will participate in decision-making, but only when and to what extent it will transform the current decision-making system.

Keywords: artificial intelligence, international criminal cooperation, planning and organizing of the investigation, risk assessment

Procedia PDF Downloads 38
1360 C-eXpress: A Web-Based Analysis Platform for Comparative Functional Genomics and Proteomics in Human Cancer Cell Line, NCI-60 as an Example

Authors: Chi-Ching Lee, Po-Jung Huang, Kuo-Yang Huang, Petrus Tang

Abstract:

Background: Recent advances in high-throughput research technologies, such as next-generation sequencing and multi-dimensional liquid chromatography, make it possible to dissect the complete transcriptome and proteome in a single run for the first time. However, it is almost impossible for many laboratories to handle and analyze these "big" data without the support of a bioinformatics team. We aimed to provide a web-based analysis platform for users with only limited knowledge of bio-computing to study functional genomics and proteomics. Method: We use NCI-60 as an example dataset to demonstrate the power of the web-based analysis platform and data delivery system. C-eXpress takes a simple text file containing standard NCBI gene or protein IDs and expression levels (RPKM or fold change) as input and generates a distribution map of gene/protein expression levels in a heatmap diagram organized by color gradients. The diagram is hyperlinked to a dynamic HTML table that allows users to filter the datasets based on various gene features, and a dynamic summary chart is generated automatically after each filtering step. Results: We implemented an integrated database that contains pre-defined annotations such as gene/protein properties (ID, name, length, MW, pI); pathways based on KEGG and GO biological process; subcellular localization based on GO cellular component; and functional classification based on GO molecular function, kinase, peptidase and transporter. Multiple ways of sorting columns and rows are also provided for comparative analysis and visualization of multiple samples.
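
The core visualization can be approximated in a few lines; the sketch below draws a color-gradient heatmap from an expression matrix. The synthetic gene/cell-line table stands in for the parsed C-eXpress input file, which is not reproduced here.

```python
# Illustrative heatmap of expression levels (genes x cell lines).
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
expr = pd.DataFrame(rng.lognormal(2, 1, size=(40, 20)),          # stand-in RPKMs
                    index=[f"GENE{i}" for i in range(40)],
                    columns=[f"CL{j}" for j in range(20)])

fig, ax = plt.subplots(figsize=(6, 9))
im = ax.imshow(expr.values, aspect="auto", cmap="RdBu_r")
ax.set_xticks(range(expr.shape[1]), expr.columns, rotation=90, fontsize=6)
ax.set_yticks(range(expr.shape[0]), expr.index, fontsize=6)
fig.colorbar(im, ax=ax, label="expression (RPKM)")
fig.tight_layout()
fig.savefig("heatmap.png", dpi=200)
```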

Keywords: cancer, visualization, database, functional annotation

Procedia PDF Downloads 618
1359 Ebola Virus Glycoprotein Inhibitors from Natural Compounds: Computer-Aided Drug Design

Authors: Driss Cherqaoui, Nouhaila Ait Lahcen, Ismail Hdoufane, Mehdi Oubahmane, Wissal Liman, Christelle Delaite, Mohammed M. Alanazi

Abstract:

The Ebola virus is a highly contagious and deadly pathogen that causes Ebola virus disease. The Ebola virus glycoprotein (EBOV-GP) is a key factor in viral entry into host cells, making it a critical target for therapeutic intervention. Using a combination of computational approaches, this study focuses on the identification of natural compounds that could serve as potent inhibitors of EBOV-GP. The 3D structure of EBOV-GP was selected, with missing residues modeled, and this structure was minimized and equilibrated. Two large natural compound databases, COCONUT and NPASS, were chosen and filtered based on toxicity risks and Lipinski’s Rule of Five to ensure drug-likeness. Following this, a pharmacophore model, built from 22 reported active inhibitors, was employed to refine the selection of compounds with a focus on structural relevance to known Ebola inhibitors. The filtered compounds were subjected to virtual screening via molecular docking, which identified ten promising candidates (five from each database) with strong binding affinities to EBOV-GP. These compounds were then validated through molecular dynamics simulations to evaluate their binding stability and interactions with the target. The top three compounds from each database were further analyzed using ADMET profiling, confirming their favorable pharmacokinetic properties, stability, and safety. These results suggest that the selected compounds have the potential to inhibit EBOV-GP, offering new avenues for antiviral drug development against the Ebola virus.
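
The drug-likeness filtering step mentioned above follows a standard recipe; the sketch below applies Lipinski's Rule of Five with RDKit. The SMILES strings are illustrative stand-ins, not compounds from COCONUT or NPASS.

```python
# Hedged sketch of the Rule-of-Five pre-filter applied before screening.
from rdkit import Chem
from rdkit.Chem import Descriptors

def passes_lipinski(mol):
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Descriptors.NumHDonors(mol) <= 5
            and Descriptors.NumHAcceptors(mol) <= 10)

smiles_list = ["CCO", "CC(=O)Oc1ccccc1C(=O)O"]   # illustrative stand-ins
kept = [s for s in smiles_list
        if (m := Chem.MolFromSmiles(s)) is not None and passes_lipinski(m)]
print(f"{len(kept)} of {len(smiles_list)} compounds pass the Rule of Five")
```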

Keywords: EBOV-GP, Ebola virus glycoprotein, high-throughput drug screening, molecular docking, molecular dynamics, natural compounds, pharmacophore modeling, virtual screening

Procedia PDF Downloads 21
1358 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis

Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara

Abstract:

Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database; two of them use the natural logarithm of the CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with artificial neural networks (ANN) was successfully applied to correlate the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model, which was used to calculate: (i) the number of neurons of the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the Root Mean Squared Error (RMSE), used to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions of sixteen previously developed and well-known gas geothermometers, was statistically evaluated using an external database to avoid bias. The statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from the gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (Mexico). Details of this new development are outlined in the present work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
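
A rough sketch of the architecture-screening loop follows. scikit-learn's MLPRegressor stands in for the ANN; note that the study used Levenberg-Marquardt training, which sklearn does not provide, so the optimizer differs. The synthetic gas-ratio data and RMSE criterion are illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-4, 2, size=(400, 2))             # stand-ins for ln(H2S/H2), ln(CO2/H2)
y = 250 + 40 * X[:, 0] - 25 * X[:, 1] + rng.normal(0, 5, 400)   # synthetic BHT (°C)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

best = None
for n_hidden in range(2, 21):                     # screen hidden-layer sizes
    ann = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=15000, random_state=0)
    ann.fit(X_tr, y_tr)
    rmse = np.sqrt(np.mean((ann.predict(X_te) - y_te) ** 2))
    if best is None or rmse < best[1]:
        best = (n_hidden, rmse)
print(f"best hidden neurons: {best[0]}, RMSE: {best[1]:.1f} °C")
```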

Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy

Procedia PDF Downloads 350
1357 Graph Neural Network-Based Classification for Disease Prediction in Health Care Heterogeneous Data Structures of Electronic Health Record

Authors: Raghavi C. Janaswamy

Abstract:

In the healthcare sector, heterogeneous data elements such as patients, diagnoses, symptoms, conditions, observation text from physician notes, and prescriptions form the essentials of the Electronic Health Record (EHR). The data, in the form of plain text and images, are stored or processed in a relational format in most systems. However, the intrinsic structural restrictions and complex joins of relational databases limit their widespread utility. In this regard, the design and development of realistic mappings and deep connections as real-time objects offer unparalleled advantages. Herein, a graph neural network-based classification of EHR data has been developed. Patient conditions are predicted as a node classification task using graph-based open-source EHR data from the Synthea database, stored in TigerGraph. The Synthea dataset is used because it is voluminous and closely represents real-world data. The graph model is built from the heterogeneous EHR data using Python modules: pyTigerGraph to get nodes and edges from the TigerGraph database, PyTorch to tensorize the nodes and edges, and PyTorch Geometric (PyG) to train the Graph Neural Network (GNN), adopting self-supervised learning techniques with autoencoders to generate the node embeddings and eventually perform the node classification using those embeddings. The model predicts patient conditions ranging from common to rare. The outcome is expected to open up opportunities for data querying toward better predictions and accuracy.
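
A minimal PyG sketch of the node-classification step is given below: a two-layer GCN over a patient graph. It assumes `data` is a torch_geometric.data.Data object (node features, edge_index, labels, train mask) already built from the EHR graph; the TigerGraph extraction and the autoencoder pre-training described in the abstract are omitted.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, n_classes)

    def forward(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

# `data` is assumed to exist (e.g., built from pyTigerGraph exports).
n_classes = int(data.y.max()) + 1
model = GCN(data.num_node_features, 64, n_classes)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(200):
    opt.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    opt.step()
```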

Keywords: electronic health record, graph neural network, heterogeneous data, prediction

Procedia PDF Downloads 86
1356 Decision Support System for the Management of the Shandong Peninsula, China

Authors: Natacha Fery, Guilherme L. Dalledonne, Xiangyang Zheng, Cheng Tang, Roberto Mayerle

Abstract:

A Decision Support System (DSS) for supporting decision makers in the management of the Shandong Peninsula has been developed, with emphasis on coastal protection, coastal cage aquaculture, and harbors. The investigations were carried out within the framework of a joint research project funded by the German Ministry of Education and Research (BMBF) and the Chinese Academy of Sciences (CAS). In this paper, a description of the DSS, the development of its components, and results of its application are presented. The system integrates in-situ measurements, process-based models, and a database management system. Numerical models for the simulation of flow, waves, sediment transport and morphodynamics covering the entire Bohai Sea were set up based on the Delft3D modelling suite (Deltares). Calibration and validation of the models were performed using measurements from moored Acoustic Doppler Current Profilers (ADCP) and High Frequency (HF) radars. In order to enable cost-effective and scalable applications, a database management system was developed; it enhances information processing and data evaluation, and supports the generation of data products. Results of the application of the DSS to the management of coastal protection, coastal cage aquaculture and harbors are presented here. Model simulations covering the most severe storms observed during the last decades were carried out, leading to an improved understanding of hydrodynamics and morphodynamics. The results helped in identifying coastal stretches subjected to higher levels of energy and improved the support for coastal protection measures.

Keywords: coastal protection, decision support system, in-situ measurements, numerical modelling

Procedia PDF Downloads 195
1355 Making the Right Call for Falls: Evaluating the Efficacy of a Multi-Faceted Trust Wide Approach to Improving Patient Safety Post Falls

Authors: Jawaad Saleem, Hannah Wright, Peter Sommerville, Adrian Hopper

Abstract:

Introduction: Inpatient falls are the most commonly reported patient safety incidents and carry a significant burden in terms of resources, morbidity, and mortality. Ensuring adequate post-falls management of patients by staff is therefore paramount to maintaining patient safety, especially in out-of-hours and resource-stretched settings. Aims: This quality improvement project aims to improve the current practice of falls management at Guy's and St Thomas' Hospital, London, as compared to our 2016 quality improvement project findings. Furthermore, it looks to increase junior doctors' confidence in managing falls and their use of the new guidance protocols. Methods: The multifaceted interventions implemented included: the development of new trust-wide guidelines detailing management pathways for patients post falls, available on the intranet; the production of 2000 lanyard cards summarising these guidelines, distributed amongst junior doctors and staff; a 'safety signal' email sent by the Trust chief medical officer to all staff raising awareness of falls and the guidelines; and formal falls teaching at the induction of new doctors. Using an established incident database, 189 consecutive falls in 2017 were retrospectively analysed electronically and compared against the variables measured in 2016, post intervention. A separate serious incident database was used to analyse 50 falls from May 2015 to March 2018 to ascertain the statistical significance of the impact of our interventions on serious incidents. A questionnaire similar to the 2016 one was administered to the 2017 cohort of foundation year one (FY1) doctors and the results compared. Results: Questionnaire data demonstrated improved awareness and use of the guidelines, increased confidence, and an increase in training. 97% of FY1 trainees felt that the interventions had increased their awareness of the impact of falls on patients in the trust. Data from the incident database showed that the time to review patients post fall had decreased from an average of 130 to 86 minutes. Improvement was also demonstrated in the reduced time to order and schedule X-ray and CT imaging: 3 and 5 hours, respectively. Data from the serious incident database showed that 'the time from fall until harm was detected' was statistically significantly lower (P = 0.044) post intervention. The incidence of significant delays in detecting harm (>10 hours) was also reduced post intervention. Conclusions: Our interventions have helped to significantly reduce the average time to assess patients and to order and schedule appropriate imaging post falls. Delays of over ten hours in detecting serious injuries after falls were commonplace; since the intervention, their frequency has markedly reduced. We suggest this will lead to patient harm being identified sooner, fewer clinical incidents relating to falls, and thus improved overall patient safety. Our interventions have also helped increase clinical staff confidence, management, and awareness of falls in the trust. Next steps include expanding the teaching sessions and improving multidisciplinary team involvement to sustain this improvement.

Keywords: patient safety, quality improvement, serious incidents, falls, clinical care

Procedia PDF Downloads 124
1354 Communication Infrastructure Required for a Driver Behaviour Monitoring System, ‘SiaMOTO’ IT Platform

Authors: Dogaru-Ulieru Valentin, Sălișteanu Ioan Corneliu, Ardeleanu Mihăiță Nicolae, Broscăreanu Ștefan, Sălișteanu Bogdan, Mihai Mihail

Abstract:

The SiaMOTO system is a communications and data processing platform for vehicle traffic. The human factor is the most important factor in the generation of these data, as the driver is the one who dictates the trajectory of the vehicle. As with any trajectory, the specific parameters refer to position, speed and acceleration, and constant knowledge of these parameters allows complex analyses. Roadways allow many vehicles to travel through their confined space, and the overlapping trajectories of several vehicles increase the likelihood of collision events, known as road accidents. Any such event has causes that lead to its occurrence, so the conditions of its occurrence can be known. The human factor is predominant in deciding the trajectory parameters of the vehicle on the road, so monitoring it through the events reported over time by the DiaMOTO device will generate a guide for targeting potentially high-risk driving behavior and rewarding those who control the driving phenomenon well. In this paper, we have focused on detailing the communication infrastructure between the DiaMOTO device and the traffic data collection server, the infrastructure through which the database to be used for complex AI/DLM analysis is built. The central element of this description is the data string in CODEC-8 format sent by the DiaMOTO device to the SiaMOTO collection server database. The data presented are specific to a functional infrastructure implemented at the experimental-model stage, by installing uniquely coded DiaMOTO devices, integrating ADAS and GPS functions, on 50 vehicles, through which vehicle trajectories can be monitored 24 hours a day.
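
To make the CODEC-8 data string concrete, the sketch below decodes the fixed-layout head of one frame, following the publicly documented Codec 8 framing (zero preamble, data length, codec ID 0x08, record count, then AVL records with an 8-byte timestamp and a GPS element). Only the first record's timestamp and GPS fields are decoded; the variable-length I/O element and the CRC check are omitted, and the toy frame's coordinates are illustrative.

```python
import struct
from datetime import datetime, timezone

def decode_first_record(frame: bytes):
    preamble, data_length = struct.unpack_from(">II", frame, 0)
    assert preamble == 0, "Codec-8 frames start with four zero bytes"
    codec_id, n_records = struct.unpack_from(">BB", frame, 8)
    assert codec_id == 0x08
    ts_ms, priority = struct.unpack_from(">QB", frame, 10)
    lon, lat, alt, angle, sats, speed = struct.unpack_from(">iiHHBH", frame, 19)
    return {
        "records": n_records,
        "time": datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc),
        "lon": lon / 1e7, "lat": lat / 1e7,       # degrees, 1e-7 precision
        "altitude_m": alt, "heading_deg": angle,
        "satellites": sats, "speed_kmh": speed,
    }

# Toy frame for demonstration (length field left at 0, no I/O element/CRC).
sample = (struct.pack(">IIBB", 0, 0, 8, 1)
          + struct.pack(">QB", 1_700_000_000_000, 0)
          + struct.pack(">iiHHBH", 265_000_000, 457_000_000, 120, 90, 11, 54))
print(decode_first_record(sample))
```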

Keywords: DiaMOTO, Codec-8, ADAS, GPS, driver monitoring

Procedia PDF Downloads 78
1353 Multivariate Data Analysis for Automatic Atrial Fibrillation Detection

Authors: Zouhair Haddi, Stephane Delliaux, Jean-Francois Pons, Ismail Kechaf, Jean-Claude De Haro, Mustapha Ouladsine

Abstract:

Atrial fibrillation (AF) is considered the most common cardiac arrhythmia and a major public health burden associated with significant morbidity and mortality. Nowadays, telemedical approaches targeting cardiac outpatients place AF among the most challenging medical issues, and automatic, early, and fast AF detection remains a major concern for healthcare professionals. Several algorithms based on univariate analysis have been developed to detect atrial fibrillation; however, the published results do not show satisfactory classification accuracy. This work aims to resolve this shortcoming by proposing multivariate data analysis methods for automatic AF detection. Four publicly accessible sets of clinical data (the AF Termination Challenge Database, MIT-BIH AF, the Normal Sinus Rhythm RR Interval Database, and the MIT-BIH Normal Sinus Rhythm Database) were used for assessment. All time series were segmented into 1-minute windows of RR intervals, and four specific features were then calculated. Two pattern recognition methods, Principal Component Analysis (PCA) and a Learning Vector Quantization (LVQ) neural network, were used to develop classification models. PCA, as a feature reduction method, was employed to find the features important for discriminating between AF and normal sinus rhythm. Despite its very simple structure, the results show that the LVQ model performs better on the analyzed databases than existing algorithms, with high sensitivity and specificity (99.19% and 99.39%, respectively). The proposed AF detection method holds several interesting properties and can be implemented with just a few arithmetical operations, which makes it a suitable choice for telecare applications.
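
Since scikit-learn provides no LVQ estimator, the sketch below writes out a small LVQ1 prototype-update loop after PCA feature reduction. The two-class synthetic features stand in for the four RR-interval features; the prototype counts and learning rate are illustrative, not the study's settings.

```python
import numpy as np
from sklearn.decomposition import PCA

def train_lvq1(X, y, n_proto=4, lr=0.05, epochs=30, seed=0):
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):                      # init prototypes per class
        idx = rng.choice(np.flatnonzero(y == c), n_proto, replace=False)
        protos.append(X[idx]); labels.append(np.full(n_proto, c))
    W, Wy = np.vstack(protos), np.concatenate(labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(W - X[i], axis=1))   # winning prototype
            step = lr if Wy[j] == y[i] else -lr               # attract or repel
            W[j] += step * (X[i] - W[j])
    return W, Wy

rng = np.random.default_rng(1)                  # synthetic stand-in features
X = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(3, 1, (200, 4))])
y = np.repeat([0, 1], 200)                      # 0 = sinus rhythm, 1 = AF

Xp = PCA(n_components=2).fit_transform(X)       # feature reduction
W, Wy = train_lvq1(Xp, y)
pred = Wy[np.argmin(np.linalg.norm(Xp[:, None] - W, axis=2), axis=1)]
print(f"training accuracy: {(pred == y).mean():.3f}")
```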

Keywords: atrial fibrillation, multivariate data analysis, automatic detection, telemedicine

Procedia PDF Downloads 267
1352 Analysis of the Effect of Increased Self-Awareness on the Amount of Food Thrown Away

Authors: Agnieszka Dubiel, Artur Grabowski, Tomasz Przerywacz, Mateusz Roganowicz, Patrycja Zioty

Abstract:

Food waste is one of the most significant challenges humanity faces nowadays. Every year, reports from global organizations show the scale of the phenomenon, although society's awareness is still insufficient. One-third of the food produced in the world is wasted at various points in the food supply chain: waste occurs from delivery through food preparation and distribution to sale and final consumption. The first step in understanding and countering the phenomenon is a thorough analysis of everyday human behavior, understood here as finding the correlation between the type of food and the reason for throwing it away. This was identified as a critical first step in the work to develop technology to prevent food waste. In this paper, the problem is analyzed with a focus on inhabitants of Central Europe, especially Poland, aged 20-30. The paper provides insight into data collection through dedicated software and an organized database; the proposed database contains information on the amount, type, and reasons for wasting food in households. A literature review supported the work in answering the research questions, comparing the situation in Poland with the problem as analyzed in other countries, and finding research gaps. The article examines the causes and quantity of food waste in detail, and complements previous reviews by emphasizing social and economic innovation in Poland's food waste management. The paper recommends a course of action for future research on food waste management and prevention related to the handling and disposal of food, with emphasis on households, i.e., the last link in the supply chain.

Keywords: food waste, food waste reduction, consumer food waste, human-food interaction

Procedia PDF Downloads 119
1351 Online Multilingual Dictionary Using Hamburg Notation for Avatar-Based Indian Sign Language Generation System

Authors: Sugandhi, Parteek Kumar, Sanmeet Kaur

Abstract:

Sign Language (SL) is used by deaf people and by others who cannot speak or who have problems with spoken languages due to some disability. It is a visual gesture language that makes use of one or both hands, the arms, the face, and the body to convey meanings and thoughts. An SL automation system is an effective way to provide an interface for communicating with hearing people using a computer. In this paper, an avatar-based dictionary is proposed for a text-to-Indian Sign Language (ISL) generation system. This research work also presents a literature review of the SL corpora that have become available for various sign languages over the years. An ISL generation system requires a written form of SL, and there are certain techniques available for writing SL. The system uses the Hamburg Notation System (HamNoSys) and the Signing Gesture Mark-up Language (SiGML) for ISL generation. It is developed in PHP using Web Graphics Library (WebGL) technology for 3D avatar animation. A multilingual ISL dictionary is developed using HamNoSys for both English and Hindi. This dictionary is used as a database to associate signs with the words or phrases of a spoken language. An admin-panel interface is provided to manage the dictionary, i.e., modification, addition, or deletion of words. Through this interface, HamNoSys notations can be composed and stored in the database, and these notations can be converted manually into their corresponding SiGML files. The system takes a natural-language input sentence in English or Hindi and generates a 3D sign animation using an avatar. SL generation systems have potential applications in many domains, such as the healthcare sector, media, educational institutes, commercial sectors, and transportation services. This research work will help researchers understand the various techniques used for writing SL and for generating sign language systems.

Keywords: avatar, dictionary, HamNoSys, hearing impaired, Indian sign language (ISL), sign language

Procedia PDF Downloads 230
1350 A Bibliometric Analysis of Ukrainian Research Articles on SARS-COV-2 (COVID-19) in Compliance with the Standards of Current Research Information Systems

Authors: Sabina Auhunas

Abstract:

Open Science is developing rapidly in Ukraine these days for the benefit of scientists of all branches, providing an opportunity to take a closer look at studies by foreign scientists, as well as to deliver their own scientific data to national and international journals. However, when it comes to generalizing data on the scientific activities of Ukrainian scientists, these data are often integrated into e-systems that operate on inconsistent and barely related information sources. To resolve these issues, developed countries productively use e-systems designed to store and manage research data, such as Current Research Information Systems, which enable the combination of uncompiled data obtained from different sources. An algorithm for selecting SARS-CoV-2 research articles was designed, by means of which we collected the set of papers published by Ukrainian scientists and uploaded by August 1, 2020. The resulting metadata (document type, open access status, citation count, h-index, most cited documents, international research funding, author counts, the bibliographic relationship of journals) were taken from the Scopus and Web of Science databases. The study also considered COVID-19/SARS-CoV-2-related documents published from December 2019 to September 2020, taken directly from documents published by authors with a territorial affiliation to Ukraine. These databases provide the information necessary for bibliometric analysis, including details such as copyright that may not be available in other databases (e.g., ScienceDirect). Search criteria and results for each online database were set according to the WHO classification of the virus and the disease caused by it (Table 1). First, we identified 89 research papers, which provided the final data set after consolidation and deduplication; however, only 56 papers were used for the analysis. The WoS database returned 21,641 documents in total (48 affiliated with Ukraine), and the Scopus database returned 32,478 documents (41 affiliated with Ukraine). In the publication activity of Ukrainian scientists, the following areas prevailed: education and educational research (9 documents, 20.58%); social sciences, interdisciplinary (6 documents, 11.76%); and economics (4 documents, 8.82%). The highest publication activity by institution type was reported for the Ministry of Education and Science of Ukraine (36% of the published papers, or 7 documents), followed by Danylo Halytsky Lviv National Medical University (5 documents, 15%) and the P. L. Shupyk National Medical Academy of Postgraduate Education (4 documents, 12%). Research activities by Ukrainian scientists were funded by five entities: the Belgian Development Cooperation, the National Institutes of Health (NIH, U.S.), the United States Department of Health & Human Services, a grant from the Whitney and Betty MacMillan Center for International and Area Studies at Yale, and a grant from the Yale Women Faculty Forum. Based on the results of the analysis, we obtained a set of published articles and preprints to be assessed on a variety of features in upcoming studies, including citation count, most cited documents, the bibliographic relationship of journals, and reference linking. Further research on the development of the national scientific e-database continues using new analytical methods.

Keywords: content analysis, COVID-19, scientometrics, text mining

Procedia PDF Downloads 115
1349 Design and Optimization of a Small Hydraulic Propeller Turbine

Authors: Dario Barsi, Marina Ubaldi, Pietro Zunino, Robert Fink

Abstract:

A design and optimization procedure is proposed and developed to provide the geometry of a high-efficiency compact hydraulic propeller turbine for low heads. For the preliminary design of the machine, classic design criteria are used, based on statistical correlations for the definition of the fundamental geometric parameters and the blade shapes. These correlations are expressed in terms of the fundamental design parameters (i.e., specific speed, flow coefficient, work coefficient) in order to provide a simple yet reliable procedure. Particular attention is paid, from the initial steps onward, to the correct conformation of the meridional channel and to the correct arrangement of the blade rows. The preliminary geometry thus obtained is used as the starting point for the hydrodynamic optimization procedure, carried out using CFD software coupled with a genetic algorithm that generates and updates a large database of turbine geometries. The optimization process is performed with a commercial solver for the Reynolds-averaged Navier-Stokes (RANS) equations, exploiting the axial symmetry of the machine. The geometries generated within the database are then simulated in order to determine the corresponding overall performance. In order to speed up the optimization, an artificial neural network (ANN) based on an objective function is employed. The procedure was applied to the specific case of a propeller turbine with an innovative modular design, intended for applications characterized by very low heads. The procedure is tested in order to verify its validity and its ability to automatically attain the targeted net head and the maximum total-to-total internal efficiency.
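
The entry point of such a preliminary design is the specific speed that drives the statistical correlations; a back-of-envelope sketch is shown below. The head, flow, and speed values are illustrative, not figures from the paper.

```python
# Dimensionless specific speed: omega_s = omega * sqrt(Q) / (g * H)**0.75
import math

g = 9.81        # gravitational acceleration, m/s^2
H = 2.5         # net head, m (very low head application)
Q = 1.2         # volumetric flow rate, m^3/s
n = 600.0       # rotational speed, rpm

omega = 2 * math.pi * n / 60.0                     # rad/s
omega_s = omega * math.sqrt(Q) / (g * H) ** 0.75
print(f"specific speed = {omega_s:.2f}")           # propeller turbines: roughly 2-5
```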

Keywords: renewable energy conversion, hydraulic turbines, low head hydraulic energy, optimization design

Procedia PDF Downloads 150
1348 Hsa-miR-192-5p and Hsa-miR-129-5p: Prominent Biomarkers in the Regulation of the Glioblastoma Cancer Stem Cell Gene Microenvironment

Authors: Rasha Ahmadi

Abstract:

Glioblastoma is one of the most frequent brain malignancies, with a high mortality rate and limited survival. Despite different treatments and surgery, glioblastoma cancer stem cells may give rise to a recurrent tumor. For this reason, it is crucial to study the markers associated with glioblastoma stem cells and, specifically, their microenvironment. In this study, using bioinformatics analysis, we analyzed and nominated genes in the microenvironment pathways of glioblastoma cancer stem cells. An appropriate dataset was selected from the GEO database; it comprised gene expression patterns in stem cells derived from glioblastoma patients. Genes were divided into high- and low-expression clusters, and enrichment databases such as Enrichr, STRING, and GEPIA were utilized to analyze the data. We found 2700 high-expression and 1100 low-expression genes implicated in the metabolic pathways of glioblastoma progression. The cellular senescence, MAPK, TNF, hypoxia, zymosterol biosynthesis, and phosphatidylinositol metabolism pathways were substantially expressed, while the metabolic pathways were downregulated. After assessing the associations within protein networks, the high-expression genes MSMP, SOX2, FGD4, and CNTNAP3 and the low-expression genes DMKN and SBSN were selected. All of these genes were observed in the survival curve, with survival of less than 10 percent over around 15 months. hsa-mir-192-5p, hsa-mir-129-5p, hsa-mir-215-5p, hsa-mir-335-5p, and hsa-mir-340-5p played key roles in the glioblastoma cancer stem cell microenvironment. Through integrated and systematic bioinformatics analysis of gene expression profile data, we nominated critical genes that can play an important role in targeting the genes involved in the energy metabolism and microenvironment of glioblastoma cancer stem cells. This study indicates that hsa-mir-192-5p and hsa-mir-129-5p are appropriate candidates for this purpose.

Keywords: glioblastoma, cancer stem cells, biomarker discovery, gene expression profiles, bioinformatics analysis, tumor microenvironment

Procedia PDF Downloads 144
1347 Identification and Validation of Co-Dominant Markers for Selection of the CO-4 Anthracnose Disease Resistance Gene in Common Bean Cultivar G2333

Authors: Annet Namusoke, Annet Namayanja, Peter Wasswa, Shakirah Nampijja

Abstract:

Common bean cultivar G2333, which offers broad resistance to anthracnose, has been widely used as a source of resistance in breeding for anthracnose resistance. The cultivar is pyramided with three genes, namely CO-4, CO-5 and CO-7; of these, the CO-4 gene has been found to offer the broadest resistance. The main aim of this work was to identify and validate easily assayable, PCR-based co-dominant molecular markers for selection of the CO-4 gene in segregating populations derived from crosses of G2333 with RWR 1946 and RWR 2075, two commercial Andean cultivars highly susceptible to anthracnose. Marker sequences for the study were obtained by BLAST searches with the sequence of the COK-4 gene in the Phaseolus gene database. Primer sequence pairs that were not provided by the Phaseolus gene database were designed using the Primer3 software. PCR conditions were optimized, and the PCR products were run on a 6% HPAGE gel. The results of the polymorphism test indicated that, out of 18 identified markers, only two, namely BM588 and BM211, behaved co-dominantly. Phenotypic evaluation of the reaction to anthracnose was done by inoculating 21-day-old seedlings of the three parents and the F1 and F2 populations with race 7 of Colletotrichum lindemuthianum in a humid chamber. DNA testing of the BM588 marker on the F2 segregating populations of the crosses RWR 1946 x G2333 and RWR 2075 x G2333 further revealed that the marker co-segregated with disease resistance, with co-dominance of two alleles of 200 bp and 400 bp, fitting the expected segregation ratio of 1:2:1. The BM588 marker was significantly associated with disease resistance and gave promising results for marker-assisted selection of the CO-4 gene in the breeding lines. Activities to validate the BM211 marker are underway.

Keywords: codominant, Colletotrichum lindemuthianum, MAS, Phaseolus vulgaris

Procedia PDF Downloads 291
1346 Methotrexate Associated Skin Cancer: A Signal Review of Pharmacovigilance Center

Authors: Abdulaziz Alakeel, Abdulrahman Alomair, Mohammed Fouda

Abstract:

Introduction: Methotrexate (MTX) is an antimetabolite used to treat multiple conditions, including neoplastic diseases, severe psoriasis, and rheumatoid arthritis. Skin cancer is the out-of-control growth of abnormal cells in the epidermis, the outermost skin layer, caused by unrepaired DNA damage that triggers mutations; these mutations lead the skin cells to multiply rapidly and form malignant tumors. The aim of this review was to evaluate the risk of skin cancer associated with the use of methotrexate and to suggest regulatory recommendations if required. Methodology: The Signal Detection team at the Saudi Food and Drug Authority (SFDA) performed a safety review using the National Pharmacovigilance Center (NPC) database as well as the World Health Organization (WHO) VigiBase, together with literature screening, to retrieve information for assessing the causality between skin cancer and methotrexate. The search was conducted in July 2020. Results: Four published articles support the association seen in the literature search. A randomized controlled trial published in 2020 revealed a statistically significant increase in skin cancer among MTX users. Another study reported that methotrexate increases the risk of non-melanoma skin cancer when used in combination with immunosuppressant and biologic agents. In addition, the incidence of melanoma among methotrexate users was 3-fold that of the general population in a cohort study of rheumatoid arthritis patients. The last article, a cohort study estimating the risk of cutaneous malignant melanoma (CMM), observed a statistically significant risk increase for CMM in MTX-exposed patients. The WHO database (VigiBase) was searched for individual case safety reports (ICSRs) reported for 'skin cancer' and 'methotrexate' use, which yielded 121 ICSRs. The initial review revealed that 106 cases were insufficiently documented for proper medical assessment. The remaining fifteen cases were extensively evaluated by applying the WHO criteria of causality assessment. As a result, 30 percent of the cases showed that MTX could possibly cause skin cancer, five cases suggested an unlikely association, and five cases were unassessable due to lack of information. The Saudi NPC database was searched for the combined terms methotrexate/skin cancer; however, no local cases have been reported to date. The disproportionality between the observed and expected reporting rates for the drug/adverse drug reaction pair was estimated using the information component (IC), a measure developed by the WHO Uppsala Monitoring Centre. A positive IC reflects a stronger statistical association, while negative values reflect a weaker one, with zero as the null value. The results showed that the combination of 'methotrexate' and 'skin cancer' was observed more often than expected when compared to other medications in the WHO database (IC value of 1.2). Conclusion: The cumulative weight of evidence identified from global cases, data mining, and published literature is sufficient to support a causal association between methotrexate and the risk of skin cancer. Therefore, healthcare professionals should be aware of this possible risk and may consider monitoring for any signs or symptoms of skin cancer in patients treated with methotrexate.
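
The information component described above has a compact form: IC = log2(observed / expected), with the expected count derived from the assumption that drug and reaction reporting are independent. The sketch below uses the common 0.5 shrinkage terms of the point estimate; the counts are illustrative, not VigiBase figures.

```python
import math

def information_component(n_drug_reaction, n_drug, n_reaction, n_total):
    """WHO-UMC style IC point estimate with +0.5 shrinkage."""
    expected = n_drug * n_reaction / n_total
    return math.log2((n_drug_reaction + 0.5) / (expected + 0.5))

# illustrative counts: pair reports, drug reports, reaction reports, all reports
print(round(information_component(121, 50_000, 8_000, 20_000_000), 2))
```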

Keywords: methotrexate, skin cancer, signal detection, pharmacovigilance

Procedia PDF Downloads 114
1345 Expert System: Debugging Using MD5 Process Firewall

Authors: C. U. Om Kumar, S. Kishore, A. Geetha

Abstract:

An operating system (OS) is software that manages computer hardware and software resources by providing services to computer programs. One of the important user expectations of an operating system is that it defends information from unauthorized access, disclosure, modification, inspection, recording, or destruction. Operating systems are always vulnerable to attacks by malware such as computer viruses, worms, Trojan horses, backdoors, ransomware, spyware, adware, scareware, and more. Anti-virus software was therefore created to ensure security against prominent computer viruses by applying a dictionary-based approach, but anti-virus programs are not guaranteed to provide security against the new viruses proliferating every day. To address this issue and to secure the computer system, our proposed expert system concentrates on letting the administrator authorize processes as wanted or unwanted for execution. The expert system maintains a database consisting of the hash codes of the processes that are to be allowed. These hash codes are generated using the MD5 message-digest algorithm, a widely used cryptographic hash function. The administrator approves the wanted processes that are to be executed on clients in a Local Area Network through a client-server architecture, and only the processes that match entries in the database table are executed; thereby, many malicious processes are restricted from infecting the operating system. An add-on advantage of the proposed expert system is that it limits CPU usage and minimizes resource utilization. Thus, data and information security is ensured by our system, along with increased performance of the operating system.
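
The allowlist check at the heart of this design reduces to a few lines; the sketch below hashes an executable with MD5 and permits it only if the digest exists in an administrator-maintained SQLite table. Table and file names are hypothetical, and the demo simply allowlists the running script itself.

```python
import hashlib
import sqlite3

def md5_of_file(path, chunk=65536):
    h = hashlib.md5()
    with open(path, "rb") as fh:
        while block := fh.read(chunk):
            h.update(block)
    return h.hexdigest()

def is_authorized(path, db="allowlist.db"):
    con = sqlite3.connect(db)
    row = con.execute("SELECT 1 FROM allowed_processes WHERE md5 = ?",
                      (md5_of_file(path),)).fetchone()
    con.close()
    return row is not None

# demo: administrator adds this script's own hash, then the check passes
con = sqlite3.connect("allowlist.db")
con.execute("CREATE TABLE IF NOT EXISTS allowed_processes (md5 TEXT PRIMARY KEY)")
con.execute("INSERT OR IGNORE INTO allowed_processes VALUES (?)",
            (md5_of_file(__file__),))
con.commit(); con.close()
print(is_authorized(__file__))    # True: the process is on the allowlist
```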

Keywords: virus, worm, Trojan horse, backdoors, ransomware, spyware, adware, scareware, sticky software, process table, MD5, CPU usage, resource utilization

Procedia PDF Downloads 427
1344 Transcriptomine: The Nuclear Receptor Signaling Transcriptome Database

Authors: Scott A. Ochsner, Christopher M. Watkins, Apollo McOwiti, David L. Steffen, Lauren B. Becnel, Neil J. McKenna

Abstract:

Understanding signaling by nuclear receptors (NRs) requires an appreciation of their cognate ligand- and tissue-specific transcriptomes. While target gene regulation data are abundant in this field, they reside in hundreds of discrete publications in formats refractory to routine query and analysis, and, accordingly, their full value to the NR signaling community has not been realized. One of the mandates of the Nuclear Receptor Signaling Atlas (NURSA) is to facilitate the community's access to existing public datasets. Pursuant to this mandate, we are developing a freely accessible community web resource, Transcriptomine, to bring together the sum total of available expression array and RNA-Seq data points generated by the field in a single location. Transcriptomine currently contains over 25,000,000 gene fold-change data points from over 1200 contrasts relevant to over 100 NRs, ligands and coregulators in over 200 tissues and cell lines. Transcriptomine is designed to accommodate a spectrum of end users, ranging from the bench researcher to those with advanced bioinformatic training. Visualization tools allow users to build custom charts to compare and contrast patterns of gene regulation across different tissues and in response to different ligands. Our resource affords an entirely new paradigm for leveraging gene expression data in the NR signaling field, empowering users to run queries across diverse regulatory molecules, tissues and cell lines, target genes, biological functions and disease associations that would otherwise be prohibitive in terms of time and effort. Transcriptomine will be regularly updated with gene lists from future genome-wide expression array and expression-sequencing datasets in the NR signaling field.

Keywords: target gene database, informatics, gene expression, transcriptomics

Procedia PDF Downloads 273
1343 Methodology for Temporary Analysis of Production and Logistic Systems on the Basis of Distance Data

Authors: M. Mueller, M. Kuehn, M. Voelker

Abstract:

In small and medium-sized enterprises (SMEs), the challenge is to create a well-grounded and reliable basis for process analysis, optimization and planning despite a lack of data. SMEs have limited access to methods with which they can effectively and efficiently analyse processes and identify cause-and-effect relationships in order to generate the necessary database and derive optimization potential from it. The implementation of digitalization within the framework of Industry 4.0 thus becomes a particular necessity for SMEs. For these reasons, this abstract presents an analysis methodology developed with the objective of SME-appropriate, efficient, temporarily feasible data collection and evaluation in flexible production and logistics systems as a basis for process analysis and optimization. The overall methodology focuses on retrospective, event-based tracing and analysis of material flow objects. The technological basis consists of Bluetooth Low Energy (BLE) transmitters, so-called beacons, and smart mobile devices (SMDs), e.g. smartphones, as receivers, between which distance data can be measured and motion profiles derived. The distance is determined using the Received Signal Strength Indicator (RSSI), a measure of the signal field strength between transmitter and receiver. The focus is the development of a software-based methodology for the interpretation of relative movements of transmitters and receivers based on distance data. The main research concerns the selection and implementation of pattern recognition methods for automatic process recognition as well as methods for the visualization of relative distance data. Because the database is already categorized by process type, classification methods (e.g. Support Vector Machines) from the field of supervised learning are used. The necessary data quality requires the selection of suitable methods and filters for smoothing the signal variations of the RSSI, the integration of methods for determining correction factors depending on possible sources of signal interference (columns, pallets), and the configuration of the technology used. The parameter settings on which the respective algorithms are based also have a significant influence on the result quality of the classification methods, correction models and position-profile visualization methods used. The accuracy of classification algorithms can be improved by up to 30% through selective parameter variation, as previous studies have shown; similar potential can be observed for the parameters of signal-smoothing methods and filters. Thus, there is increased interest in obtaining detailed results on the influence of parameter and factor combinations on data quality in this area. The overall methodology is realized with a modular software architecture consisting of independent modules for data acquisition, data preparation and data storage. The demonstrator for initialization and data acquisition is available as a mobile Java-based application. The data preparation, including the signal-smoothing methods, is Python-based, with the possibility of varying parameter settings and storing them in the database (SQLite). The evaluation is divided into two separate software modules with database connections: automated assignment of defined process classes to distance data using selected classification algorithms, and visualization and reporting through a graphical user interface (GUI).
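
The RSSI-to-distance step typically combines a smoothing filter with the log-distance path-loss model d = 10 ** ((txPower - RSSI) / (10 * n)). In the sketch below, the reference power at 1 m (tx_power) and the path-loss exponent n are environment-specific values that would be calibrated per site; the sample values and a simple moving-average filter are illustrative, not the project's configuration.

```python
import numpy as np

def smooth(rssi, window=5):
    """Simple moving average over a series of RSSI samples (dBm)."""
    kernel = np.ones(window) / window
    return np.convolve(rssi, kernel, mode="valid")

def rssi_to_distance(rssi_dbm, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model; returns distance in metres."""
    return 10 ** ((tx_power - rssi_dbm) / (10.0 * n))

samples = np.array([-61, -63, -58, -70, -62, -64, -60], dtype=float)
print(rssi_to_distance(smooth(samples)))   # smoothed distance estimates
```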

Keywords: event-based tracing, machine learning, process classification, parameter settings, RSSI, signal smoothing

Procedia PDF Downloads 131
1342 Exploring Simple Sequence Repeats within Conserved microRNA Precursors Identified from Tea Expressed Sequence Tag (EST) Database

Authors: Anjan Hazra, Nirjhar Dasgupta, Chandan Sengupta, Sauren Das

Abstract:

Tea (Camellia sinensis) has received substantial attention from the scientific community over time, not only for its commercial importance but also for its appeal to health-conscious consumers worldwide as a potential source of antioxidant supplements. These health-benefit traits primarily rely on regulatory networks of different metabolic pathways. Developing microsatellite markers from conserved genomic regions is worthwhile for studying the genetic diversity of closely related or self-pollinated species. Although several SSR markers have been reported in tea, trait-specific Simple Sequence Repeats (SSRs), which could be used for marker-assisted breeding, are yet to be identified. MicroRNAs are endogenous, noncoding, short RNAs directly involved in regulating gene expression at the post-transcriptional level. It has been found that diversity in miRNA genes interferes with the formation of their characteristic hairpin structure and subsequent function. In the present study, precursors of small regulatory RNAs (microRNAs) were mined from the tea Expressed Sequence Tag (EST) database. Furthermore, simple sequence repeat motifs within the putative miRNA precursor genes were identified, in order to experimentally validate their existence and function. Genic SSR markers are already known to be a versatile and breeder-friendly resource for genetic diversity analysis, so the potential outcome of this in-silico study would provide novel clues for understanding miRNA-triggered polymorphic gene expression controlling the specific metabolic pathways accountable for tea quality.
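
As a rough illustration of the in-silico step, the Python sketch below scans a nucleotide sequence for SSR motifs of 1-6 nt using a repeat-capturing regular expression. The minimum repeat counts and the example precursor fragment are illustrative assumptions, not the study's settings; dedicated pipelines (e.g. MISA) additionally handle redundant and compound repeats.

    import re

    def find_ssrs(seq, min_repeats={1: 10, 2: 6, 3: 5, 4: 5, 5: 5, 6: 5}):
        """Naive SSR scan over motif lengths 1-6 nt; minimum repeat
        counts are illustrative MISA-style defaults, not the study's."""
        seq = seq.upper()
        hits = []
        for k, n in min_repeats.items():
            # ([ACGT]{k}) captures a motif; \1{n-1,} requires it to repeat
            pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (k, n - 1))
            for m in pattern.finditer(seq):
                motif = m.group(1)
                hits.append((motif, m.start(), m.end(),
                             (m.end() - m.start()) // k))
        return hits

    # Hypothetical miRNA precursor fragment mined from an EST
    precursor = "GCAGCAGCAGCAGCAGCATTTGGATTGAAGGGAGCTCTAT"
    for motif, start, end, count in find_ssrs(precursor):
        print(f"({motif}){count} at {start}-{end}")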

Keywords: micro RNA, simple sequence repeats, tea quality, trait specific marker

Procedia PDF Downloads 311
1341 A Multifactorial Algorithm to Automate Screening of Drug-Induced Liver Injury Cases in Clinical and Post-Marketing Settings

Authors: Osman Turkoglu, Alvin Estilo, Ritu Gupta, Liliam Pineda-Salgado, Rajesh Pandey

Abstract:

Background: Hepatotoxicity can be linked to a variety of clinical symptoms and histopathological signs, posing a great challenge in the surveillance of suspected drug-induced liver injury (DILI) cases in a safety database. Additionally, the majority of such cases are rare, idiosyncratic, highly unpredictable, and tend to demonstrate unique individual susceptibility; these qualities, in turn, make the pharmacovigilance monitoring process tedious and time-consuming. Objective: To develop a multifactorial algorithm to assist pharmacovigilance physicians in identifying high-risk hepatotoxicity cases associated with DILI in the sponsor’s safety database (Argus). Methods: Multifactorial selection criteria were established using Structured Query Language (SQL) and the TIBCO Spotfire® visualization tool, via a combination of word fragments, wildcard strings, and mathematical constructs based on Hy’s law criteria and the pattern of injury (R-value). These criteria excluded non-eligible cases from monthly line listings mined from the Argus safety database. The capabilities and limitations of the criteria were verified by comparing a manual review of all monthly cases with the system-generated monthly listings over six months. Results: On average, over a period of six months, the algorithm accurately identified 92% of the DILI cases meeting the established criteria. The automated process easily compared liver enzyme elevations with baseline values, reducing screening time to under 15 minutes, as opposed to the multiple hours consumed by a cognitively laborious manual process. Limitations of the algorithm include its inability to identify cases associated with non-standard laboratory tests, naming conventions, and/or incomplete or incorrectly entered laboratory values. Conclusions: The newly developed multifactorial algorithm proved to be extremely useful in detecting potential DILI cases, while heightening the vigilance of the drug safety department. Additionally, the application of this algorithm may be useful in identifying a potential signal for DILI in drugs not yet known to cause liver injury (e.g., drugs in the initial phases of development). This algorithm also carries the potential for universal application, due to its product-agnostic data and keyword mining features. Plans for the tool include developing it into a fully automated application, thereby completely eliminating the manual screening process.
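
A minimal Python sketch of the mathematical constructs named in the abstract, assuming the conventional R-value definition and standard Hy's law thresholds from common guidance; the sponsor's actual SQL criteria, wildcard strings, and baseline-comparison logic are not reproduced here.

    def r_value(alt, alt_uln, alp, alp_uln):
        """Pattern-of-injury R-value: (ALT / ULN_ALT) / (ALP / ULN_ALP)."""
        return (alt / alt_uln) / (alp / alp_uln)

    def classify_injury(r):
        # Conventional cut-offs: R >= 5 hepatocellular, R <= 2 cholestatic
        if r >= 5:
            return "hepatocellular"
        if r <= 2:
            return "cholestatic"
        return "mixed"

    def meets_hys_law(alt, alt_uln, tbil, tbil_uln, alp, alp_uln):
        """Simplified Hy's law screen: marked transaminase elevation plus
        bilirubin elevation without predominant cholestasis. Thresholds
        follow common guidance; a sponsor's criteria may differ."""
        return (alt >= 3 * alt_uln
                and tbil >= 2 * tbil_uln
                and alp < 2 * alp_uln)

    # Hypothetical case from a monthly line listing
    alt, alt_uln = 210, 40       # U/L
    alp, alp_uln = 95, 120       # U/L
    tbil, tbil_uln = 2.8, 1.2    # mg/dL
    r = r_value(alt, alt_uln, alp, alp_uln)
    print(classify_injury(r),
          meets_hys_law(alt, alt_uln, tbil, tbil_uln, alp, alp_uln))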

Keywords: automation, drug-induced liver injury, pharmacovigilance, post-marketing

Procedia PDF Downloads 152
1340 Research on Health Emergency Management Based on the Bibliometrics

Authors: Meng-Na Dai, Bao-Fang Wen, Gao-Pei Zhu, Chen-Xi Zhang, Jing Sun, Chang-Hai Tang, Zhi-Qiang Feng, Wen-Qiang Yin

Abstract:

Based on an analysis of the literature on health emergency management in China over the last 10 years, this paper discusses current Chinese research hotspots, development trends, and shortcomings in this field, and provides references for scholars conducting follow-up research. CNKI (China National Knowledge Infrastructure), Weipu, and Wanfang were the databases searched, using the keywords health, emergency, and management over the period from 2009 to 2018. Duplicate, non-academic, and unrelated documents were excluded, leaving 901 articles in the literature review database. The main indicators abstracted were the number of articles published each year, authors, institutions, periodicals, etc. The analysis of the literature yields several findings. Overall, the volume of literature on health emergency management in China has shown a fluctuating downward trend over the last 10 years. Specifically, there is a lack of close cooperation between authors, and no core research team has yet formed among them. Meanwhile, high-level periodicals and high-quality literature in this field are scarce. In addition, there are many research hotspots, such as emergency management systems, mechanism research, capacity evaluation index systems, and plans and capacity-building research. In the future, scientific research funding for health emergency management should be increased, collaborative innovation among authors across disciplines encouraged, and high-quality, high-impact journals created in this field. The state should encourage scholars in this field to carry out more academic cooperation and communication worldwide and to improve the breadth and depth of their research. Generally speaking, research in health emergency management in China is still insufficient and needs to be improved.
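
A minimal sketch of the indicator-abstraction step, assuming the 901 included records were exported with year, first-author, and journal fields (the column names and sample values are hypothetical):

    import pandas as pd

    # Hypothetical export of the included records
    records = pd.DataFrame({
        "year": [2009, 2010, 2010, 2015, 2018],
        "first_author": ["Li", "Wang", "Li", "Zhang", "Wang"],
        "journal": ["J. A", "J. B", "J. A", "J. C", "J. B"],
    })

    # Core bibliometric indicators: publications per year, top authors/journals
    per_year = records.groupby("year").size()
    top_authors = records["first_author"].value_counts().head(10)
    top_journals = records["journal"].value_counts().head(10)
    print(per_year, top_authors, top_journals, sep="\n")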

Keywords: health emergency management, research situation, bibliometrics, literature

Procedia PDF Downloads 137
1339 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the choice of loss functions and optimizers. The CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach by comparing generalization power. It is important to note that the same LivDet subset is used for training and testing across all models, so that performance on unseen data can be compared fairly between them. The best CNN (AlexNet), with the appropriate loss function and optimizer, yields more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports parameter counts and mean average error rates for the models, to identify the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proved to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, were applied to the final model.
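
The experimental grid can be expressed compactly. The sketch below, written against tf.keras with a small stand-in CNN, loops over loss/optimizer pairs as the study design describes; the architecture, input shape, and label conventions are placeholders, and Center Loss is omitted because it requires a custom implementation rather than a built-in loss string.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_cnn(input_shape=(96, 96, 1)):
        """Small stand-in network; the paper uses AlexNet/VGGNet/ResNet."""
        return models.Sequential([
            layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.GlobalAveragePooling2D(),
            layers.Dense(1, activation="sigmoid"),  # live vs. spoof
        ])

    # Grid over loss functions and optimizers, mirroring the study design.
    # Note: hinge loss expects labels in {-1, +1} rather than {0, 1}.
    losses = ["binary_crossentropy", "hinge", "cosine_similarity"]
    optimizers = ["adam", "sgd", "rmsprop", "adadelta", "adagrad", "nadam"]

    for loss in losses:
        for opt in optimizers:
            model = build_cnn()
            model.compile(optimizer=opt, loss=loss, metrics=["accuracy"])
            # model.fit(x_train, y_train, validation_data=(x_val, y_val), ...)
            # score each (loss, optimizer) pair on the held-out LivDet split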

Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer

Procedia PDF Downloads 136
1338 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite

Authors: F. Lazzeri, I. Reiter

Abstract:

Optimizing energy production has traditionally been very important for utilities seeking to improve resource consumption. Load forecasting, however, is a challenging task, as there is a large number of relevant variables to consider, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements must adjust their performance to future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R and on web services built and deployed with different components of the Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that enables users to easily build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools for analyzing data and sharing insights. Our results show that Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful for predicting hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity, and dew point) can play a crucial role in improving the accuracy of the forecasting solution. The data cleaning and feature engineering methods performed in R and the different machine learning algorithms used (Boosted Decision Tree, Fast Forest Quantile, and ARIMA) will be presented, and results and performance metrics discussed.
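
Although the paper's pipeline is R-based and deployed through Azure Machine Learning, the modeling idea can be sketched in Python with scikit-learn: a gradient-boosted regressor stands in for the Boosted Decision Tree, and quantile-loss boosting stands in for Fast Forest Quantile regression, both trained on calendar and weather regressors. All data and parameters below are synthetic placeholders.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    n = 500

    # Synthetic hourly training set: calendar and weather regressors
    hour = rng.integers(0, 24, n)
    temp = rng.normal(15, 8, n)
    wind = rng.normal(4, 2, n)
    humidity = rng.uniform(30, 95, n)
    dew_point = temp - (100 - humidity) / 5  # rough rule-of-thumb estimate
    X = np.column_stack([hour, temp, wind, humidity, dew_point])
    load = (50 + 2 * np.sin(hour / 24 * 2 * np.pi)
            - 0.8 * temp + rng.normal(0, 3, n))

    # Point forecast (stand-in for the Boosted Decision Tree model)
    point = GradientBoostingRegressor().fit(X, load)

    # 10th/90th percentile forecasts (stand-in for Fast Forest Quantile)
    lo = GradientBoostingRegressor(loss="quantile", alpha=0.1).fit(X, load)
    hi = GradientBoostingRegressor(loss="quantile", alpha=0.9).fit(X, load)

    x_next = [[18, 12.5, 3.0, 70.0, 6.5]]  # next hour's regressors
    print(point.predict(x_next), lo.predict(x_next), hi.predict(x_next))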

Keywords: time series, feature engineering methods for forecasting, energy demand forecasting, Azure Machine Learning

Procedia PDF Downloads 297