Search results for: electronic data interchange
24323 Sales Patterns Clustering Analysis on Seasonal Product Sales Data
Authors: Soojin Kim, Jiwon Yang, Sungzoon Cho
Abstract:
As a seasonal product is only in demand for a short time, inventory management is critical to profits. Both markdowns and stockouts decrease the return on perishable products; therefore, researchers have been interested in the distribution of seasonal products with the aim of maximizing profits. In this study, we propose a data-driven seasonal product sales pattern analysis method for individual retail outlets based on observed sales data clustering; the proposed method helps in determining distribution strategies.
Keywords: clustering, distribution, sales pattern, seasonal product
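As a rough illustration of the clustering step described above, the sketch below clusters season-normalized weekly sales curves with k-means (one simple clustering choice); the outlet count, week count, peak shapes, and data are hypothetical, not taken from the study.

```python
# A minimal sketch, assuming weekly sales counts per outlet for one season.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
weeks = np.arange(12)
early = np.exp(-0.5 * ((weeks - 3) / 2.0) ** 2)   # early-season peak shape
late = np.exp(-0.5 * ((weeks - 9) / 2.0) ** 2)    # late-season peak shape
shapes = np.vstack([early] * 20 + [late] * 20)    # 40 hypothetical outlets
sales = rng.poisson(lam=40 * shapes + 5).astype(float)

# Normalize each outlet's curve so clusters capture the shape of the
# season (early vs. late peak), not the outlet's overall volume.
curves = sales / sales.sum(axis=1, keepdims=True)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(curves)
for k in range(2):
    members = np.where(km.labels_ == k)[0]
    print(f"cluster {k}: {len(members)} outlets, "
          f"peak week {km.cluster_centers_[k].argmax() + 1}")
```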
Procedia PDF Downloads 597
24322 Probability Sampling in Matched Case-Control Study in Drug Abuse
Authors: Surya R. Niraula, Devendra B Chhetry, Girish K. Singh, S. Nagesh, Frederick A. Connell
Abstract:
Background: Although random sampling is generally considered to be the gold standard for population-based research, the majority of drug abuse research is based on non-random sampling despite the well-known limitations of this kind of sampling. Method: We compared the statistical properties of two surveys of drug abuse in the same community: one using snowball sampling of drug users who then identified “friend controls” and the other using a random sample of non-drug users (controls) who then identified “friend cases.” Models to predict drug abuse based on risk factors were developed for each data set using conditional logistic regression. We compared the precision of each model using a bootstrapping method and the predictive properties of each model using receiver operating characteristic (ROC) curves. Results: Analysis of 100 random bootstrap samples drawn from the snowball-sample data set showed a wide variation in the standard errors of the beta coefficients of the predictive model, none of which achieved statistical significance. On the other hand, bootstrap analysis of the random-sample data set showed less variation and did not change the significance of the predictors at the 5% level when compared to the non-bootstrap analysis. The area under the ROC curve for the model derived from the random-sample data set was similar when the model was fitted to either data set (0.93 for the random-sample data vs. 0.91 for the snowball-sample data, p=0.35); however, when the model derived from the snowball-sample data set was fitted to each of the data sets, the areas under the curve were significantly different (0.98 vs. 0.83, p < .001). Conclusion: The proposed method of random sampling of controls appears to be superior from a statistical perspective to snowball sampling and may represent a viable alternative to snowball sampling.
Keywords: drug abuse, matched case-control study, non-probability sampling, probability sampling
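A minimal sketch of the bootstrap comparison described above, using ordinary logistic regression as a stand-in for conditional logistic regression, with simulated data in place of the survey data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 3))                      # three hypothetical risk factors
p = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
y = rng.binomial(1, p)                           # 1 = drug abuse, 0 = control

# Refit the model on 100 bootstrap resamples and inspect how much the
# coefficients vary; wide variation signals an unstable model.
betas = []
for _ in range(100):
    idx = rng.integers(0, n, size=n)
    betas.append(LogisticRegression().fit(X[idx], y[idx]).coef_[0])
betas = np.array(betas)
print("bootstrap SE of coefficients:", betas.std(axis=0).round(3))

model = LogisticRegression().fit(X, y)
print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]).round(3))
```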
Procedia PDF Downloads 493
24321 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the crazy amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists for the first time to gain a profound understanding of the deepest biological functions. Solving biological problems may require High-Performance Computing (HPC) due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even the simulation of an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates their indispensability in meeting the scientific and engineering challenges of the twenty-first century, and how Protein Folding (the structure and function of proteins) and Phylogeny Reconstruction (the evolutionary history of a group of genes) can use HPC that provides sufficient capability for evaluating or solving more limited but meaningful instances. This article also indicates solutions to optimization problems and the benefits for Big Data and computational biology. The article illustrates the current state of the art and the future generation of HPC computing with Big Data in biology.
Keywords: high performance, big data, parallel computation, molecular data, computational biology
Procedia PDF Downloads 364
24320 Evaluating the Effectiveness of Science Teacher Training Programme in National Colleges of Education: A Preliminary Study, Perceptions of Prospective Teachers
Authors: A. S. V Polgampala, F. Huang
Abstract:
This is an overview of what is entailed in an evaluation and issues to be aware of when class observation is being done. This study examined the effects of evaluating the teaching practice of a 7-day ‘block teaching’ session in a pre-service science teacher training program at a reputed National College of Education in Sri Lanka. Effects were assessed in three areas: evaluation of the training process, evaluation of the training impact, and evaluation of the training procedure. Data for this study were collected by class observation of 18 teachers from 9 to 16 February 2017. Prospective teachers of science teaching, the participants of the study, were evaluated based on a format newly introduced by the NIE. The data collected were analyzed qualitatively using the Miles and Huberman procedure for analyzing qualitative data: data reduction, data display and conclusion drawing/verification. It was observed that the trainees showed their confidence in teaching those competencies and skills. Teacher educators’ dissatisfaction had a great impact on the evaluation process.
Keywords: evaluation, perceptions & perspectives, pre-service, science teaching
Procedia PDF Downloads 315
24319 Detecting Venomous Files in IDS Using an Approach Based on Data Mining Algorithm
Authors: Sukhleen Kaur
Abstract:
In security groundwork, the Intrusion Detection System (IDS) has become an important component. The IDS has received increasing attention in recent years. IDS is one of the effective ways to detect different kinds of attacks and malicious codes in a network and helps us to secure the network. Data mining techniques can be applied to IDS to analyse the large amounts of data and give better results. Data mining can contribute to improving intrusion detection by adding a level of focus to anomaly detection. So far, studies have been carried out on finding the attacks, but this paper detects the malicious files. Some intruders do not attack directly, but hide harmful code inside files or corrupt those files and attack the system. These files are detected according to some defined parameters, which form two lists of files: normal files and harmful files. After that, data mining is performed. In this paper, a hybrid classifier has been used via the Naive Bayes and Ripper classification methods. The results show how an uploaded file in the database is tested against the parameters and characterised as either a normal or a harmful file, after which the mining is performed. Moreover, when a user tries to mine a harmful file, an exception is generated, since mining cannot be performed on corrupted or harmful files.
Keywords: data mining, association, classification, clustering, decision tree, intrusion detection system, misuse detection, anomaly detection, naive Bayes, ripper
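A toy sketch of the classify-then-mine idea described above; the file parameters are hypothetical, and a decision tree stands in for the Ripper rule learner, which has no scikit-learn implementation:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Hypothetical parameters: size (KB), byte entropy, suspicious-string count.
X = rng.normal(loc=[200, 5.0, 1.0], scale=[80, 1.5, 1.0], size=(500, 3))
y = (X[:, 1] + X[:, 2] > 7).astype(int)        # 1 = harmful, 0 = normal

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
nb = GaussianNB().fit(Xtr, ytr)
tree = DecisionTreeClassifier(max_depth=4).fit(Xtr, ytr)
print("NB accuracy:", round(nb.score(Xte, yte), 3))
print("tree accuracy:", round(tree.score(Xte, yte), 3))

def mine(features):
    # As in the paper, mining a file classified as harmful raises an exception.
    if nb.predict([features])[0] == 1:
        raise ValueError("mining cannot be performed on a harmful file")
    return "mining performed"

print(mine([150.0, 4.0, 0.0]))                 # a benign-looking file
```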
Procedia PDF Downloads 414
24318 Generalized Approach to Linear Data Transformation
Authors: Abhijith Asok
Abstract:
This paper presents a generalized approach for the simple linear data transformation, Y=bX, through an integration of multidimensional coordinate geometry, vector space theory and polygonal geometry. The scaling is performed by adding an additional ’Dummy Dimension’ to the n-dimensional data, which helps plot two dimensional component-wise straight lines on pairs of dimensions. The end result is a set of scaled extensions of observations in any of the 2n spatial divisions, where n is the total number of applicable dimensions/dataset variables, created by shifting the n-dimensional plane along the ’Dummy Axis’. The derived scaling factor was found to be dependent on the coordinates of the common point of origin for diverging straight lines and the plane of extension, chosen on and perpendicular to the ’Dummy Axis’, respectively. This result indicates the geometrical interpretation of a linear data transformation and hence, opportunities for a more informed choice of the factor ’b’, based on a better choice of these coordinate values. The paper follows on to identify the effect of this transformation on certain popular distance metrics, wherein for many, the distance metric retained the same scaling factor as that of the features.
Keywords: data transformation, dummy dimension, linear transformation, scaling
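A small numeric check of the closing claim, namely that under Y=bX many common distance metrics are rescaled by the same factor |b|; the vectors and the value of b are illustrative only:

```python
import numpy as np

b = 2.5
x1, x2 = np.array([1.0, 4.0, 2.0]), np.array([3.0, 0.0, 5.0])

d_before = np.linalg.norm(x1 - x2)             # Euclidean distance
d_after = np.linalg.norm(b * x1 - b * x2)
print(d_after / d_before)                      # prints 2.5, i.e. |b|

# Manhattan distance picks up the same factor:
print(np.abs(b * x1 - b * x2).sum() / np.abs(x1 - x2).sum())
```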
Procedia PDF Downloads 297
24317 Blockchain Platform Configuration for MyData Operator in Digital and Connected Health
Authors: Minna Pikkarainen, Yueqiang Xu
Abstract:
The integration of digital technology with existing healthcare processes has been painfully slow; a huge gap exists between the field of strictly regulated official medical care and the quickly moving field of health and wellness technology. We claim that the promises of preventive healthcare can only be fulfilled when this gap is closed and health care and self-care become a seamless continuum: “correct information, in the correct hands, at the correct time, allowing individuals and professionals to make better decisions”, what we call the connected health approach. Currently, the issues related to security, privacy, consumer consent and data sharing are hindering the implementation of this new paradigm of healthcare. This could be solved by following the MyData principles, which state that individuals should have the right and practical means to manage their data and privacy. MyData infrastructure enables decentralized management of personal data, improves interoperability, makes it easier for companies to comply with tightening data protection regulations, and allows individuals to change service providers without proprietary data lock-ins. This paper tackles today’s unprecedented challenges of enabling and stimulating multiple healthcare data providers and stakeholders to have more active participation in the digital health ecosystem. First, the paper systematically proposes the MyData approach for the healthcare and preventive health data ecosystem. In this research, the work is targeted at health and wellness ecosystems. Each ecosystem consists of key actors, such as 1) the individual (citizen or professional controlling/using the services), i.e. the data subject, 2) services providing personal data (e.g. startups providing data collection apps or data collection devices), 3) health and wellness services utilizing the aforementioned data, and 4) services authorizing access to this data under the individual’s explicit consent. Second, the research extends the existing four archetypes of orchestrator-driven healthcare data business models for the healthcare industry and proposes a fifth type of healthcare data model, the MyData Blockchain Platform. This new architecture is developed by the Action Design Research approach, which is a prominent research methodology in the information systems domain. The key novelty of the paper is to expand the health data value chain architecture and design from centralization and pseudo-decentralization to full decentralization, enabled by blockchain, thus the MyData blockchain platform. The study not only broadens the healthcare informatics literature but also contributes to the theoretical development of the digital healthcare and blockchain research domains with a systemic approach.
Keywords: blockchain, health data, platform, action design
Procedia PDF Downloads 100
24316 Quantum Dot Biosensing for Advancing Precision Cancer Detection
Authors: Sourav Sarkar, Manashjit Gogoi
Abstract:
In the evolving landscape of cancer diagnostics, optical biosensing has emerged as a promising tool due to its sensitivity and specificity. This study explores the potential of CdS/ZnS core-shell quantum dots (QDs) capped with 3-Mercaptopropionic acid (3-MPA), which aids in the linking chemistry of QDs to various cancer antibodies. The QDs, with their unique optical and electronic properties, have been integrated into the biosensor design. Their high quantum yield and size-dependent emission spectra have been exploited to improve the sensor’s detection capabilities. The study presents the design of this QD-enhanced optical biosensor. The use of these QDs can also aid multiplexed detection, enabling simultaneous monitoring of different cancer biomarkers. This innovative approach holds significant potential for advancing cancer diagnostics, contributing to timely and accurate detection. Future work will focus on optimizing the biosensor design for clinical applications and exploring the potential of QDs in other biosensing applications. This study underscores the potential of integrating nanotechnology and biosensing for cancer research, paving the way for next-generation diagnostic tools. It is a step forward in our quest for achieving precision oncology.
Keywords: quantum dots, biosensing, cancer, device
Procedia PDF Downloads 56
24315 A Recognition Method of Ancient Yi Script Based on Deep Learning
Authors: Shanxiong Chen, Xu Han, Xiaolong Wang, Hui Ma
Abstract:
Yi is an ethnic group mainly living in mainland China, with its own spoken and written language systems developed over thousands of years. Ancient Yi is one of the six ancient languages in the world; it keeps a record of the history of the Yi people and offers documents valuable for research into human civilization. Recognition of the characters in ancient Yi helps to transform the documents into an electronic form, making their storage and spreading convenient. Due to historical and regional limitations, research on the recognition of ancient characters is still inadequate. Thus, deep learning technology was applied to the recognition of such characters. Five models were developed on the basis of a four-layer convolutional neural network (CNN). Alpha-Beta divergence was taken as a penalty term to re-encode the output neurons of the five models. Two fully connected layers fulfilled the compression of the features. Finally, at the softmax layer, the orthographic features of ancient Yi characters were re-evaluated, their probability distributions were obtained, and characters with features of the highest probability were recognized. Tests conducted show that the method has achieved higher precision compared with the traditional CNN model for handwriting recognition of ancient Yi.
Keywords: recognition, CNN, Yi character, divergence
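A minimal PyTorch sketch of a four-layer CNN classifier of the kind described above; the Alpha-Beta divergence penalty term is omitted, and the input size (64x64 glyph images) and class count are assumptions, not values from the paper:

```python
import torch
import torch.nn as nn

class YiCNN(nn.Module):
    def __init__(self, n_classes=1000):
        super().__init__()
        # Four convolutional layers, each followed by ReLU and pooling.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Two fully connected layers compress the features, as described.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, n_classes),   # softmax is applied via the loss
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = YiCNN()
logits = model(torch.randn(8, 1, 64, 64))   # a batch of 64x64 glyph images
print(logits.shape)                          # torch.Size([8, 1000])
```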
Procedia PDF Downloads 165
24314 Using Learning Apps in the Classroom
Authors: Janet C. Read
Abstract:
UClan set up a collaboration with Lingokids to assess the Lingokids learning app's impact on learning outcomes in classrooms in the UK for children aged 3 to 5 years. Data gathered during the controlled study with 69 children include attitudinal data, engagement, and learning scores. The data show that enjoyment while learning was higher among children using the game-based app than among children using other traditional methods. It is worth pointing out that, among older children, engagement when using the learning app was significantly higher than with traditional methods. According to the existing literature, there is a direct correlation between engagement, motivation, and learning. Therefore, this study provides relevant data points to conclude that the Lingokids learning app serves its purpose of encouraging learning through playful and interactive content. That being said, we believe that learning outcomes should be assessed with a wider range of methods in further studies. Likewise, it would be beneficial to assess the level of usability and playability of the app in order to evaluate the learning app from other angles.
Keywords: learning app, learning outcomes, rapid test activity, Smileyometer, early childhood education, innovative pedagogy
Procedia PDF Downloads 71
24313 Road Safety in Great Britain: An Exploratory Data Analysis
Authors: Jatin Kumar Choudhary, Naren Rayala, Abbas Eslami Kiasari, Fahimeh Jafari
Abstract:
Great Britain has one of the safest road networks in the world. However, the consequences of any death or serious injury are devastating for loved ones, as well as for those who help the severely injured. This paper aims to analyse Great Britain's road safety situation and show the response measures for areas where the total damage caused by accidents can be significantly and quickly reduced. In this paper, we do an exploratory data analysis using STATS19 data. For the past 30 years, the UK has had a good record in reducing fatalities. The UK ranked third based on the number of road deaths per million inhabitants. Around 165,000 accidents were reported in Great Britain in 2009, and the number decreased every year through 2019, when it was under 120,000. The government continues to scale back road deaths, empowering responsible road users by identifying and acting on the factors that make the roads less safe.
Keywords: road safety, data analysis, OpenStreetMap, feature expanding
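A sketch of the kind of exploratory pass described above; the file name and column names below are assumptions for illustration, not the official STATS19 schema:

```python
import pandas as pd

# Hypothetical extract of STATS19 accident records.
df = pd.read_csv("stats19_accidents.csv", parse_dates=["date"])

per_year = df.groupby(df["date"].dt.year).size()
print(per_year)                        # accidents per year, e.g. 2009-2019
print(per_year.pct_change().round(3))  # year-on-year change
print(df["accident_severity"].value_counts(normalize=True))
```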
Procedia PDF Downloads 140
24312 Intrusion Detection System Using Linear Discriminant Analysis
Authors: Zyad Elkhadir, Khalid Chougdali, Mohammed Benattou
Abstract:
Most of the existing intrusion detection systems work on quantitative network traffic data with many irrelevant and redundant features, which makes the detection process more time-consuming and inaccurate. Several feature extraction methods, such as linear discriminant analysis (LDA), have been proposed. However, LDA suffers from the small sample size (SSS) problem, which occurs when the number of training samples is small compared with the dimension of the samples. Hence, classical LDA cannot be applied directly to high-dimensional data such as network traffic data. In this paper, we propose two solutions to the SSS problem for LDA and apply them to a network IDS. The first method reduces the dimension of the original data using principal component analysis (PCA) and then applies LDA. In the second solution, we propose to use the pseudo-inverse to avoid the singularity of the within-class scatter matrix due to the SSS problem. After that, the KNN algorithm is used for the classification process. We have chosen two well-known datasets, KDDcup99 and NSL-KDD, for testing the proposed approaches. Results showed that the classification accuracy of the PCA+LDA method clearly outperforms that of the pseudo-inverse LDA method when we have large training data.
Keywords: LDA, pseudo-inverse, PCA, IDS, NSL-KDD, KDDcup99
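A minimal sketch of the first solution (PCA before LDA, followed by KNN); the synthetic data and the chosen dimensions are placeholders standing in for KDDcup99/NSL-KDD records:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 41))            # 41 features, as in KDD-style data
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # 0 = normal, 1 = attack (toy label)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
pipe = make_pipeline(
    PCA(n_components=20),                  # shrink dimension to dodge SSS
    LinearDiscriminantAnalysis(),          # then apply LDA safely
    KNeighborsClassifier(n_neighbors=5),   # final classification step
)
print("accuracy:", round(pipe.fit(Xtr, ytr).score(Xte, yte), 3))
```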
Procedia PDF Downloads 227
24311 Efficacy of Single-Dose Azithromycin Therapy for the Treatment of Chlamydia trachomatis in Patients Evaluated for Child Sexual Abuse in an Urban Health Center 2006-16
Authors: Trenton Hubbard, Kenneth Soyemi, Emily Siffermann
Abstract:
Introduction: According to the American Academy of Pediatrics (AAP), there are different weight-based recommendations for the treatment of Chlamydia trachomatis (CT) in patients who are being evaluated for sexual assault. Current AAP Red Book guidelines recommend that uncomplicated C. trachomatis anogenital infection in prepubertal patients weighing =<45 kg be treated with oral erythromycin 50 mg/kg/day QID for 14 days, with no alternative therapies, and that patients weighing =>45 kg receive azithromycin 1 g PO once. Our study objective was to determine the efficacy of single-dose azithromycin therapy for the treatment of Chlamydia trachomatis in patients weighing less than 50 kg who were evaluated for child sexual abuse in an urban setting. Methods: We conducted a retrospective chart review of historical medical records (paper and electronic) of patients weighing less than 50 kg who were evaluated for child sexual abuse, subsequently treated for C. trachomatis infection with azithromycin (20 mg/kg PO once, up to a maximum of 1 g), and received a Test of Cure (TOC) from 2006 to 2016. Qualitative variables were expressed as percentages. Quantitative variables were expressed as mean values (+/- standard deviation [SD]) if they followed a normal distribution or as median values (interquartile range [IQR]) if they did not. The Wilcoxon two-sample test was used to compare means of azithromycin dose, mg/kg, and TOC timing between treatment responders and non-responders. Results: We reviewed the records of 34 patients; the average age (SD) was 5.4 (2.0) years, 33 (97%) were treated for CT and 1 (3%) for both GC and CT, and 25 (74%) were females. Urine PCR was the most commonly used test at evaluation and as TOC, with 13 (38%) patients completing both tests. The average (SD) dose of azithromycin at treatment was 470 (136) mg, with an average (SD) weight-based dose of 20 (1.9) mg/kg for all patients. Median (IQR) timing for TOC testing was 19 (14-26) days. Of the 33 with complete data, 25 (74%) had a negative TOC. When compared with treatment non-responders (TOC failures), treatment responders received higher doses (average dose (SD) received 495 (139) vs 401 (110), P = 0.06), similar average (SD) weight-based dosing (20.8 (2.0) vs 19.7 (1.5), P = 0.15), and earlier average (SD) TOC test timing (18.8 (5.6) vs 32 (28.6), P = 0.02). Conclusion: Azithromycin dosing appears to be efficacious in the treatment of CT post sexual assault, as the majority of patients responded. Although treatment responders and non-responders received similar weight-based doses, there is a need for additional studies to understand variances and predictors of response.
Keywords: child sexual abuse, Chlamydia trachomatis infection, single-dose azithromycin, weight less than or equal to 45 kilograms
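A sketch of the Wilcoxon two-sample comparison used for responders versus non-responders; the dose values below are illustrative, not the chart-review data:

```python
import numpy as np
from scipy.stats import ranksums

responders = np.array([520, 480, 600, 450, 510, 470])       # mg doses (invented)
non_responders = np.array([390, 420, 360, 410])

stat, p = ranksums(responders, non_responders)
print(f"Wilcoxon rank-sum statistic={stat:.2f}, p={p:.3f}")
```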
Procedia PDF Downloads 294
24310 Studies of Rule Induction by STRIM from the Decision Table with Contaminated Attribute Values from Missing Data and Noise — in the Case of Critical Dataset Size —
Authors: Tetsuro Saeki, Yuichi Kato, Shoutarou Mizuno
Abstract:
STRIM (Statistical Test Rule Induction Method) has been proposed as a method to effectively induce if-then rules from the decision table, which is considered as a sample set obtained from the population of interest. Its usefulness has been confirmed by simulation experiments specifying rules in advance, and by comparison with conventional methods. However, scope for future development remains before STRIM can be applied to the analysis of real-world data sets. The first requirement is to determine the size of the dataset needed for inducing true rules, since finding statistically significant rules is the core of the method. The second is to examine the capacity of rule induction from datasets with contaminated attribute values created by missing data and noise, since real-world datasets usually contain such contaminated data. This paper examines the first problem theoretically, in connection with the rule length. The second problem is then examined in a simulation experiment, utilizing the critical size of dataset derived from the first step. The experimental results show that STRIM is highly robust in the analysis of datasets with contaminated attribute values, and hence is applicable to real-world data.
Keywords: rule induction, decision table, missing data, noise
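A toy sketch of the statistical-test idea at the core of such rule induction: a candidate if-then rule is kept only if its consequent occurs significantly more often than chance among the rows matching its antecedent. The decision table below is invented, and the binomial test is one simple choice of significance test:

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(4)
n_rows, n_classes = 5000, 6
C = rng.integers(1, n_classes + 1, size=n_rows)      # condition attribute
noise = rng.integers(1, n_classes + 1, size=n_rows)  # random decisions
# Plant the rule "if C=3 then D=2" so it holds roughly 80% of the time.
D = np.where((C == 3) & (rng.random(n_rows) < 0.8), 2, noise)

matches = D[C == 3]                                  # rows where "if C=3" holds
k = int((matches == 2).sum())                        # ... and "then D=2" holds
res = binomtest(k, len(matches), p=1 / n_classes, alternative="greater")
print(f"rule 'if C=3 then D=2': {k}/{len(matches)} hits, p={res.pvalue:.2e}")
```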
Procedia PDF Downloads 396
24309 Highly Stretchable, Intelligent and Conductive PEDOT/PU Nanofibers Based on Electrospinning and in situ Polymerization
Authors: Kun Qi, Yuman Zhou, Jianxin He
Abstract:
A facile fabrication strategy via electrospinning followed by in situ polymerization to fabricate a highly stretchable and conductive Poly(3,4-ethylenedioxythiophene)/Polyurethane (PEDOT/PU) nanofibrous membrane is reported. PU nanofibers were prepared by electrospinning, and then PEDOT was coated on the plasma-modified PU nanofiber surface via in situ polymerization to form flexible, conductive PEDOT/PU composite nanofibers. The results show that PEDOT is successfully synthesized on the surface of the PU nanofibers and that the PEDOT/PU composite nanofibers possess a skin-core structure. Furthermore, the experiments indicate that the optimal parameters of the polymerization process are as follows: the concentration of EDOT monomers is 50 mmol/L, the polymerization time is 24 h, and the temperature is 25 °C. The PEDOT/PU nanofibers exhibit excellent electrical conductivity (27.4 S/cm). In addition, a flexible sensor made from the conductive PEDOT/PU nanofibers shows a highly sensitive response to tensile strain and can also be used to detect finger motion. The results demonstrate the promising application of the as-obtained nanofibrous membrane in flexible wearable electronics.
Keywords: electrospinning, polyurethane, PEDOT, conductive nanofiber, flexible sensor
Procedia PDF Downloads 359
24308 Evidence Based Medicine: Going beyond Improving Physicians’ Viewpoints, Usage and Challenges Upcoming
Authors: Peyman Rezaei Hachesu, Vahideh Zareh Gavgani, Zahra Salahzadeh
Abstract:
The aims were to survey the attitudes, awareness, and practice of Evidence Based Medicine (EBM), and to determine the barriers that influence the application of EBM in the therapeutic process among clinical residents in Iran. We conducted a cross-sectional survey during September to December 2012 at the teaching hospitals of Tehran University of Medical Sciences among 79 clinical residents from different medical specialties. A valid and reliable questionnaire consisting of five sections and 27 statements was used in this research. We applied the Spearman and Mann-Whitney tests for correlation between variables. Findings showed that the knowledge of residents about EBM is low. Their attitude towards EBM was positive, but their knowledge and skills regarding evidence-based medical information resources were mostly limited to PubMed and Google Scholar. The main barrier was the lack of enough time for practicing EBM. There was no significant correlation between residency grade and familiarity with and use of electronic EBM resources (Spearman, P = 0.138). Integration of training approaches like journal clubs or workshops with clinical practice is suggested.
Keywords: evidence-based medicine, clinical residents, decision-making, attitude, questionnaire
Procedia PDF Downloads 376
24307 Machine Learning Strategies for Data Extraction from Unstructured Documents in Financial Services
Authors: Delphine Vendryes, Dushyanth Sekhar, Baojia Tong, Matthew Theisen, Chester Curme
Abstract:
Much of the data that inform the decisions of governments, corporations and individuals are harvested from unstructured documents. Data extraction is defined here as a process that turns non-machine-readable information into a machine-readable format that can be stored, for instance, in a database. In financial services, introducing more automation in data extraction pipelines is a major challenge. Information sought by financial data consumers is often buried within vast bodies of unstructured documents, which have historically required thorough manual extraction. Automated solutions provide faster access to non-machine-readable datasets, in a context where untimely information quickly becomes irrelevant. Data quality standards cannot be compromised, so automation requires high data integrity. This multifaceted task is broken down into smaller steps: ingestion, table parsing (detection and structure recognition), text analysis (entity detection and disambiguation), schema-based record extraction, user feedback incorporation. Selected intermediary steps are phrased as machine learning problems. Solutions leveraging cutting-edge approaches from the fields of computer vision (e.g. table detection) and natural language processing (e.g. entity detection and disambiguation) are proposed.
Keywords: computer vision, entity recognition, finance, information retrieval, machine learning, natural language processing
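A sketch of the entity-detection step in such a pipeline, using spaCy's small English model as an off-the-shelf example; the sentence and the record assembly are invented for illustration, and the model is assumed to be installed:

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes the model has been downloaded
text = "Acme Corp reported revenue of $1.2 billion for Q3 2023."
doc = nlp(text)
for ent in doc.ents:
    print(ent.text, ent.label_)      # e.g. ORG, MONEY, DATE

# A schema-based record could then be assembled from the labeled spans:
record = {ent.label_: ent.text for ent in doc.ents}
print(record)
```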
Procedia PDF Downloads 113
24306 Regression Approach for Optimal Purchase of Hosts Cluster in Fixed Fund for Hadoop Big Data Platform
Authors: Haitao Yang, Jianming Lv, Fei Xu, Xintong Wang, Yilin Huang, Lanting Xia, Xuewu Zhu
Abstract:
Given a fixed fund, purchasing fewer hosts of higher capability or, inversely, more hosts of lower capability is an unavoidable trade-off in practice when building a Hadoop big data platform. An exploratory study is presented for a Housing Big Data Platform project (HBDP), where typical big data computing consists of SQL queries with aggregate, join, and space-time condition selections executed upon massive data from more than 10 million housing units. In HBDP, an empirical formula was introduced to predict the performance of potential host clusters for the intended typical big data computing, and it was shaped via a regression approach. With this empirical formula, it is easy to suggest an optimal cluster configuration. The investigation was based on a typical Hadoop computing ecosystem, HDFS+Hive+Spark. A proper metric was introduced to measure the performance of Hadoop clusters in HBDP, which was tested and compared with its predicted counterpart on executing three kinds of typical SQL query tasks. Tests were conducted with respect to the factors of CPU benchmark, memory size, virtual host division, and the number of physical hosts in the cluster. The research has been applied to practical cluster procurement for housing big data computing.
Keywords: Hadoop platform planning, optimal cluster scheme at fixed fund, performance predicting formula, typical SQL query tasks
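A minimal sketch of shaping an empirical performance formula via regression; the configuration columns and timings below are invented, not HBDP measurements:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: CPU benchmark score, memory (GB), number of physical hosts.
configs = np.array([
    [120, 64, 4], [120, 128, 4], [200, 64, 8],
    [200, 128, 8], [150, 96, 6], [250, 128, 10],
], dtype=float)
query_seconds = np.array([310, 280, 170, 150, 220, 120], dtype=float)

reg = LinearRegression().fit(configs, query_seconds)
print("coefficients:", reg.coef_.round(2), "intercept:", round(reg.intercept_, 1))

# Predict a candidate purchase before committing the fixed fund:
print("predicted runtime:", reg.predict([[180, 96, 8]]).round(1), "s")
```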
Procedia PDF Downloads 232
24305 The Use of Artificial Intelligence in the Prevention of Micro and Macrovascular Complications in Type 1 Diabetic Patients in Low and Middle-Income Countries
Authors: Ebere Ellison Obisike, Justina N. Adalikwu-Obisike
Abstract:
Artificial intelligence (AI) is progressively transforming health and social care. With the rapid invention of various electronic devices, machine learning, and computing systems, the use of AI is traversing many health and social care practices. In this systematic review of journal and grey literature, this study explores how the applications of AI might promote the prevention of micro and macrovascular complications in type 1 diabetic patients. This review focuses on the use of a digitized blood glucose meter and the application of insulin pumps for the effective management of type 1 diabetes in low and middle-income countries. It is projected that the applications of AI may assist individuals with type 1 diabetes to monitor and control their blood glucose level and prevent the early onset of micro and macrovascular complications.
Keywords: artificial intelligence, blood glucose meter, insulin pump, low and middle-income countries, micro and macrovascular complications, type 1 diabetes
Procedia PDF Downloads 196
24304 Model Predictive Controller for Pasteurization Process
Authors: Tesfaye Alamirew Dessie
Abstract:
Our study focuses on developing a Model Predictive Controller (MPC) and evaluating it against a traditional PID controller for a pasteurization process. Utilizing system identification on the experimental data, the dynamics of the pasteurization process were calculated. Using best fit with data validation, residual, and stability analysis, the quality of several model architectures was evaluated. The validation data fit the auto-regressive with exogenous input (ARX322) model of the pasteurization process by roughly 80.37 percent. The ARX322 model structure was used to create the MPC and PID control techniques. After comparing controller performance based on settling time, overshoot percentage, and stability analysis, it was found that the MPC controller outperforms PID for those parameters.
Keywords: MPC, PID, ARX, pasteurization
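A sketch of ARX identification by least squares, the kind of step behind the paper's ARX322 structure; the data are simulated, and a lower ARX(2,2) order is used for brevity:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 500
u = rng.normal(size=N)                      # input (e.g. heater power)
y = np.zeros(N)                             # output (e.g. product temperature)
for t in range(2, N):                       # simulated "true" plant
    y[t] = 1.2 * y[t-1] - 0.4 * y[t-2] + 0.5 * u[t-1] + 0.1 * u[t-2] \
           + 0.05 * rng.normal()

# Regressor matrix built from past outputs and past inputs.
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print("estimated [a1 a2 b1 b2]:", theta.round(3))   # close to the true values
```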
Procedia PDF Downloads 163
24303 Point Estimation for the Type II Generalized Logistic Distribution Based on Progressively Censored Data
Authors: Rana Rimawi, Ayman Baklizi
Abstract:
Skewed distributions are important models that are frequently used in applications. Generalized distributions form a class of skewed distributions and gain widespread use in applications because of their flexibility in data analysis. More specifically, the Generalized Logistic Distribution, with its different types, has received considerable attention recently. In this study, based on progressively Type-II censored data, we will consider point estimation in the Type II Generalized Logistic Distribution (Type II GLD). We will develop several estimators for its unknown parameters, including maximum likelihood estimators (MLE), Bayes estimators and linear estimators (BLUE). The estimators will be compared using simulation based on the criteria of bias and mean square error (MSE). An illustrative example with a real data set will be given.
Keywords: point estimation, type II generalized logistic distribution, progressive censoring, maximum likelihood estimation
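A sketch of the maximum likelihood step; for brevity it fits a complete (uncensored) sample from the ordinary logistic distribution, standing in for the progressively censored Type II GLD:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import logistic

# Simulated data with known location and scale.
data = logistic.rvs(loc=2.0, scale=1.5, size=200, random_state=7)

def neg_loglik(params):
    mu, sigma = params
    if sigma <= 0:                      # keep the scale parameter valid
        return np.inf
    return -logistic.logpdf(data, loc=mu, scale=sigma).sum()

res = minimize(neg_loglik, x0=[0.0, 1.0], method="Nelder-Mead")
print("MLE (mu, sigma):", res.x.round(3))   # close to (2.0, 1.5)
```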
Procedia PDF Downloads 198
24302 Mobile Wireless Investigation Platform
Authors: Dimitar Karastoyanov, Todor Penchev
Abstract:
The paper presents research on a kind of autonomous mobile robot intended for work and adaptive perception in unknown and unstructured environments. The objective is robots dedicated to multi-sensory environment perception and exploration, such as taking measurements and samples, discovering and marking objects, as well as environment interactions: transportation, carrying equipment and objects in and out. On that ground, a classification of the different types of mobile robots is made in accordance with the way of locomotion (wheel- or chain-driven, walking, etc.), the drive mechanisms used, the kind of sensors, the end effectors, the area of application, etc. A modular system for the mechanical construction of the mobile robots is proposed. A special PLC based on the AtMega128 processor was developed for robot control. Electronic modules for wireless communication based on the Jennic processor, as well as the specific software, were developed. The methods, means and algorithms for adaptive environment behaviour and task realization are examined. The methods for group control of mobile robots and for detecting and handling suspicious objects are discussed as well.
Keywords: mobile robots, wireless communications, environment investigations, group control, suspicious objects
Procedia PDF Downloads 356
24301 Cybervetting and Online Privacy in Job Recruitment – Perspectives on the Current and Future Legislative Framework Within the EU
Authors: Nicole Christiansen, Hanne Marie Motzfeldt
Abstract:
In recent years, more and more HR professionals have been using cyber-vetting in job recruitment in an effort to find the perfect match for the company. These practices are growing rapidly, accessing a vast amount of data from social networks, some of which is privileged and protected information. Thus, there is a risk that the right to privacy is becoming a duty to manage your private data. This paper investigates to which degree a job applicant's fundamental rights are protected adequately in current and future legislation in the EU. This paper argues that current data protection regulations and forthcoming regulations on the use of AI ensure sufficient protection. However, even though the regulation on paper protects employees within the EU, the recruitment sector may not pay sufficient attention to the regulation, as it is not specifically targeting this area. Therefore, the lack of specific labor and employment regulation is a concern that the social partners should attend to.
Keywords: AI, cyber vetting, data protection, job recruitment, online privacy
Procedia PDF Downloads 86
24300 Sequential Pattern Mining from Data of Medical Record with Sequential Pattern Discovery Using Equivalent Classes (SPADE) Algorithm (A Case Study: Bolo Primary Health Care, Bima)
Authors: Rezky Rifaini, Raden Bagus Fajriya Hakim
Abstract:
This research was conducted at the Bolo Primary Health Care in Bima Regency. The purpose of the research is to find the association patterns formed in the medical record database of the Bolo Primary Health Care's patients. The data used are secondary data from the PHC's medical records database. Sequential pattern mining is the technique used for the analysis. Transaction data were generated from Patient_ID, Check_Date and diagnosis. Sequential Pattern Discovery using Equivalent classes (SPADE) is one of the algorithms in sequential pattern mining; this algorithm finds frequent sequences in transaction data using a vertical database and a sequence join process. The result of the SPADE algorithm is frequent sequences that are then used to form rules. This technique is used to find the association patterns between item combinations. Based on sequential association rule analysis with the SPADE algorithm, for a minimum support of 0.03 and a minimum confidence of 0.75, three sequential association patterns were obtained based on the sequence of Patient_ID, Check_Date and diagnosis data in the Bolo PHC.
Keywords: diagnosis, primary health care, medical record, data mining, sequential pattern mining, SPADE algorithm
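A toy illustration of SPADE's vertical layout: each diagnosis is mapped to the (patient, visit-position) pairs where it occurs, and two-item sequences are counted by joining these id-lists. The diagnoses and visit histories below are invented, not the PHC data:

```python
from collections import defaultdict

visits = {  # patient_id -> diagnoses ordered by check date
    "P1": ["ARI", "gastritis", "ARI"],
    "P2": ["ARI", "hypertension"],
    "P3": ["gastritis", "hypertension"],
    "P4": ["ARI", "hypertension"],
}

vertical = defaultdict(list)            # item -> [(patient, position)]
for pid, seq in visits.items():
    for pos, dx in enumerate(seq):
        vertical[dx].append((pid, pos))

def support(a, b):
    """Fraction of patients where diagnosis a is later followed by b."""
    pts = set()
    for pid, i in vertical[a]:
        if any(p == pid and j > i for p, j in vertical[b]):
            pts.add(pid)
    return len(pts) / len(visits)

print("support(ARI -> hypertension):", support("ARI", "hypertension"))  # 0.5
```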
Procedia PDF Downloads 401
24299 Estimation of Reservoirs Fracture Network Properties Using an Artificial Intelligence Technique
Authors: Reda Abdel Azim, Tariq Shehab
Abstract:
The main objective of this study is to develop a subsurface fracture map of naturally fractured reservoirs by overcoming the limitations associated with different data sources in characterising fracture properties. Some of these limitations are overcome by employing a nested neuro-stochastic technique to establish the inter-relationship between different data, such as conventional well logs, borehole images (FMI), core descriptions, seismic attributes, etc., and then characterise fracture properties in terms of fracture density and fractal dimension for each data source. Fracture density is an important property of a fracture network system, as it is a measure of the cumulative area of all the fractures in a unit volume of the system, and the fractal dimension is used to characterize self-similar objects such as fractures. At the wellbore locations, fracture density and fractal dimension can only be estimated for the limited sections where FMI data are available. Therefore, an artificial intelligence technique is applied to approximate these quantities at locations along the wellbore where the hard data are not available. It should be noted that artificial intelligence techniques have proven their effectiveness in this domain of applications.
Keywords: naturally fractured reservoirs, artificial intelligence, fracture intensity, fractal dimension
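A sketch of estimating one of the two target quantities, the fractal dimension, by box counting on a binary fracture-trace image; here random noise stands in for a mapped fracture network:

```python
import numpy as np

rng = np.random.default_rng(8)
img = rng.random((256, 256)) < 0.05      # True where a "fracture" pixel lies

def box_count(image, size):
    """Count boxes of the given size that contain at least one pixel."""
    h, w = image.shape
    count = 0
    for i in range(0, h, size):
        for j in range(0, w, size):
            if image[i:i+size, j:j+size].any():
                count += 1
    return count

sizes = np.array([2, 4, 8, 16, 32])
counts = np.array([box_count(img, s) for s in sizes])

# The slope of log(count) vs. log(1/size) estimates the fractal dimension.
D = np.polyfit(np.log(1 / sizes), np.log(counts), 1)[0]
print("estimated fractal dimension:", round(D, 2))
```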
Procedia PDF Downloads 255
24298 Governance, Risk Management, and Compliance Factors Influencing the Adoption of Cloud Computing in Australia
Authors: Tim Nedyalkov
Abstract:
A business decision to move to the cloud brings fundamental changes in how an organization develops and delivers its Information Technology solutions. The accelerated pace of digital transformation across businesses and government agencies increases the reliance on cloud-based services. Collecting, managing, and retaining large amounts of data in cloud environments makes information security and data privacy protection essential. It becomes even more important to understand what key factors drive successful cloud adoption following the commencement of the Privacy Amendment (Notifiable Data Breaches) (NDB) Act 2017 in Australia, as the regulatory changes impact many organizations and industries. This quantitative correlational research investigated the governance, risk management, and compliance factors contributing to cloud security success, and how these factors influence the adoption of cloud computing within an organizational context after the commencement of the NDB scheme. The results and findings demonstrated that corporate information security policies, data storage location, management understanding of data governance responsibilities, and regular compliance assessments are the factors influencing cloud computing adoption. The research has implications for organizations, future researchers, practitioners, policymakers, and cloud computing providers seeking to meet the rapidly changing regulatory and compliance requirements.
Keywords: cloud compliance, cloud security, data governance, privacy protection
Procedia PDF Downloads 116
24297 Simulations to Predict Solar Energy Potential by ERA5 Application at North Africa
Authors: U. Ali Rahoma, Nabil Esawy, Fawzia Ibrahim Moursy, A. H. Hassan, Samy A. Khalil, Ashraf S. Khamees
Abstract:
The design of any solar energy conversion system requires knowledge of solar radiation data obtained over a long period. Satellite data have been widely used to estimate solar energy where no ground observation of solar radiation is available, yet there are limitations on the temporal coverage of satellite data. Reanalysis is a “retrospective analysis” of atmospheric parameters generated by assimilating observation data from various sources, including ground observations, satellites, ships, and aircraft, with the output of NWP (Numerical Weather Prediction) models, to develop an exhaustive record of weather and climate parameters. The performance of the reanalysis dataset (ERA-5) for North Africa was evaluated against high-quality surface-measured data using statistical analysis. The distribution of global solar radiation (GSR) was estimated over six selected locations in North Africa during the ten years from 2011 to 2020. The root mean square error (RMSE), mean bias error (MBE) and mean absolute error (MAE) of the reanalysis solar radiation data range from 0.079 to 0.222, 0.0145 to 0.198, and 0.055 to 0.178, respectively. A seasonal statistical analysis was performed to study the seasonal variation in the performance of the dataset, which reveals significant variation of errors in different seasons. The performance of the dataset also changes with the temporal resolution of the data used for comparison: monthly mean values show better performance, but the accuracy of the data is compromised. The solar radiation data of ERA-5 are used for preliminary solar resource assessment and power estimation. The correlation coefficient (R2) varies from 0.93 to 0.99 for the different selected sites in North Africa in the present research. The goal of this research is to give a good representation of global solar radiation to help in solar energy applications in all fields, and this can be done by using gridded data from the European Centre for Medium-Range Weather Forecasts (ECMWF) and producing a new model to give good results.
Keywords: solar energy, solar radiation, ERA-5, potential energy
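A sketch of the validation statistics reported above, computed for a reanalysis series against ground measurements; the numbers below are placeholders, not ERA-5 or station data:

```python
import numpy as np

ground = np.array([5.1, 6.3, 7.0, 6.8, 5.9, 4.2])   # measured GSR (invented)
era5 = np.array([5.0, 6.5, 6.8, 7.1, 5.7, 4.5])     # reanalysis GSR (invented)

diff = era5 - ground
rmse = np.sqrt((diff ** 2).mean())                  # root mean square error
mbe = diff.mean()                                   # mean bias error
mae = np.abs(diff).mean()                           # mean absolute error
r2 = np.corrcoef(ground, era5)[0, 1] ** 2           # squared correlation
print(f"RMSE={rmse:.3f}  MBE={mbe:.3f}  MAE={mae:.3f}  R2={r2:.3f}")
```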
Procedia PDF Downloads 211
24296 Efficient Pre-Processing of Single-Cell Assay for Transposase Accessible Chromatin with High-Throughput Sequencing Data
Authors: Fan Gao, Lior Pachter
Abstract:
The primary tool currently used to pre-process 10X Chromium single-cell ATAC-seq data is Cell Ranger, which can take a very long time to run on standard datasets. To facilitate rapid pre-processing that enables reproducible workflows, we present a suite of tools called scATAK for pre-processing single-cell ATAC-seq data that is 15 to 18 times faster than Cell Ranger on mouse and human samples. Our tool can also calculate chromatin interaction potential matrices and generate open chromatin signal and interaction traces for cell groups. We used the scATAK tool to explore the chromatin regulatory landscape of a healthy adult human brain and unveil cell-type-specific features, and we show that it provides a convenient and computationally efficient approach for pre-processing single-cell ATAC-seq data.
Keywords: single-cell, ATAC-seq, bioinformatics, open chromatin landscape, chromatin interactome
Procedia PDF Downloads 155
24295 Meta Mask Correction for Nuclei Segmentation in Histopathological Image
Authors: Jiangbo Shi, Zeyu Gao, Chen Li
Abstract:
Nuclei segmentation is a fundamental task in digital pathology analysis and can be automated by deep learning-based methods. However, the development of such an automated method requires a large amount of data with precisely annotated masks, which are hard to obtain. Training with weakly labeled data is a popular solution for reducing the workload of annotation. In this paper, we propose a novel meta-learning-based nuclei segmentation method that follows the label correction paradigm to leverage data with noisy masks. Specifically, we design a fully convolutional meta-model that can correct noisy masks by using a small amount of clean meta-data. The corrected masks are then used to supervise the training of the segmentation model. Meanwhile, a bi-level optimization method is adopted to alternately update the parameters of the main segmentation model and the meta-model. Extensive experimental results on two nuclei segmentation datasets show that our method achieves the state-of-the-art result. In particular, in some noise scenarios, it even exceeds the performance of training on supervised data.
Keywords: deep learning, histopathological image, meta-learning, nuclei segmentation, weak annotations
Procedia PDF Downloads 140
24294 Characterization of the Worn Surfaces of Brake Discs and Friction Materials after Dynobench Tests
Authors: Ana Paula Gomes Nogueira, Pietro Tonolini, Andrea Bonfanti
Abstract:
Automotive braking systems must convert kinetic energy into thermal energy by friction. Nowadays, the disc brake is the most widespread system configuration on the automotive market, and its specific configuration provides very efficient heat dissipation. At the same time, both discs and pads wear out. Different wear mechanisms can act during braking, which makes understanding the phenomenon essential for the strategies to be applied when an increased lifetime of the components is required. In this study, a specific characterization approach was used to analyze the worn surfaces of commercial pad friction materials and their counterface cast iron discs after dynobench tests. The scanning electron microscope (SEM), confocal microscope, and focused ion beam (FIB) microscope were used as the main tools of the analysis, and they allowed imaging of the footprints of the different wear mechanisms present on the worn surfaces. Aspects such as temperature and the specific ingredients of the pad friction materials are discussed, since they play an important role in the wear mechanisms.
Keywords: wear mechanism, surface characterization, brake tests, friction materials, disc brake
Procedia PDF Downloads 53