Search results for: startup data analytics
24588 Exploring the Capabilities of Sentinel-1A and Sentinel-2A Data for Landslide Mapping
Authors: Ismayanti Magfirah, Sartohadi Junun, Samodra Guruh
Abstract:
Landslides are among the most frequent and devastating natural disasters in Indonesia. Many studies have been conducted on this phenomenon; however, landslide inventory mapping has received little attention. Natural conditions (dense forest cover) and limited human and economic resources are among the major obstacles to building a landslide inventory in Indonesia. Considering the importance of landslide inventory data for susceptibility, hazard, and risk analysis, it is essential to generate a landslide inventory from the available resources, and the first step is to identify landslide locations. The availability of Sentinel-1A and Sentinel-2A data offers new possibilities for land monitoring investigations: free access, high spatial resolution, and a short revisit time have made them some of the most widely used open-source data for landslide mapping. Sentinel-1A and Sentinel-2A data have been used broadly for landslide detection and land-use/land-cover mapping. This study aims to generate a landslide map by integrating Sentinel-1A and Sentinel-2A data using a change detection method. The result will be validated by field investigation to produce a preliminary landslide inventory of the study area.
Keywords: change detection method, landslide inventory mapping, Sentinel-1A, Sentinel-2A
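For readers unfamiliar with optical change detection, a minimal sketch of the idea follows, assuming pre- and post-event red/NIR reflectance bands are already available as NumPy arrays; the NDVI differencing and the 0.2 threshold are illustrative assumptions, not the authors' exact Sentinel workflow.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index from red/NIR reflectance."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def change_mask(pre_red, pre_nir, post_red, post_nir, threshold=0.2):
    """Flag pixels whose NDVI dropped sharply between two acquisitions.

    A sudden vegetation loss is a common optical proxy for fresh landslide
    scars; the threshold here is purely illustrative.
    """
    delta = ndvi(pre_red, pre_nir) - ndvi(post_red, post_nir)
    return delta > threshold

# Toy 3x3 scene where the center pixel loses most of its vegetation.
pre_red = np.full((3, 3), 0.1)
pre_nir = np.full((3, 3), 0.6)
post_red, post_nir = pre_red.copy(), pre_nir.copy()
post_nir[1, 1] = 0.15
print(change_mask(pre_red, pre_nir, post_red, post_nir))
```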
Procedia PDF Downloads 172
24587 A DEA Model in a Multi-Objective Optimization with Fuzzy Environment
Authors: Michael Gidey Gebru
Abstract:
Most DEA models operate in a static environment with input and output parameters chosen from deterministic data. However, due to the ambiguity brought on by shifting market conditions, input and output data are not always precisely gathered in real-world scenarios. Fuzzy numbers can be used to address this kind of ambiguity in input and output data. Therefore, this work aims to extend crisp DEA to DEA in a fuzzy environment. In this study, the input and output data are regarded as triangular fuzzy numbers. The DEA model with a fuzzy environment is then solved using a multi-objective method to gauge the efficiency of the Decision Making Units. Finally, the developed DEA model is illustrated with an application to real data from 50 educational institutions.
Keywords: efficiency, DEA, fuzzy, decision making units, higher education institutions
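The fuzzy multi-objective formulation itself cannot be reproduced from the abstract; as background, the sketch below solves the crisp input-oriented CCR DEA model (multiplier form) that a fuzzy extension would start from, using SciPy's linear programming. A triangular-fuzzy version could then, for example, solve this program at several alpha-cuts of the fuzzy inputs and outputs.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0 (multiplier form).

    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs); returns a score in (0, 1].
    Maximize u.y0 subject to v.x0 = 1 and u.y_j - v.x_j <= 0 for all j.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[j0], np.zeros(m)])          # minimize -u.y0
    A_ub = np.hstack([Y, -X])                          # u.y_j - v.x_j <= 0
    A_eq = np.concatenate([np.zeros(s), X[j0]])[None]  # v.x0 = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

# Toy data: 4 DMUs with 2 inputs and a single unit output.
X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [5.0, 6.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
for j in range(len(X)):
    print(f"DMU {j}: efficiency = {ccr_efficiency(X, Y, j):.3f}")
```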
Procedia PDF Downloads 53
24586 Data-Driven Decision Making: Justification of Not Leaving Class without It
Authors: Denise Hexom, Judith Menoher
Abstract:
Teachers and administrators across America are being asked to use data and hard evidence to inform practice as they begin the task of implementing the Common Core State Standards. Yet the courses they take in schools of education are not preparing teachers or principals to understand the data-driven decision making (DDDM) process or to utilize data in a more sophisticated fashion. DDDM has been around for quite some time; however, it has only recently become systematically and consistently applied in the field of education. This paper discusses the theoretical framework of DDDM; empirical evidence supporting the effectiveness of DDDM; a process one department in a school of education has used to implement DDDM; and recommendations for other schools of education attempting to implement DDDM in their decision-making processes and in their students’ coursework.
Keywords: data-driven decision making, institute of higher education, special education, continuous improvement
Procedia PDF Downloads 388
24585 Quantile Coherence Analysis: Application to Precipitation Data
Authors: Yaeji Lim, Hee-Seok Oh
Abstract:
Coherence analysis measures the linear time-invariant relationship between two data sets and has been studied in various fields such as signal processing, engineering, and medical science. However, classical coherence analysis tends to be sensitive to outliers and focuses only on the mean relationship. In this paper, we generalize the cross periodogram to a quantile cross periodogram, which provides a richer description of the inter-relationship between two data sets and is a generalized version of the Laplace cross periodogram. We derive its asymptotic distribution under long-range processes and compare it with ordinary coherence through numerical examples. We also present a real data example to confirm the usefulness of quantile coherence analysis.
Keywords: coherence, cross periodogram, spectrum, quantile
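The paper's estimator cannot be reproduced from the abstract alone; as a rough illustration of the idea, the sketch below computes a cross-periodogram of quantile-clipped (indicator) series, a simplified stand-in for the Laplace/quantile cross-periodogram.

```python
import numpy as np

def quantile_cross_periodogram(x, y, tau=0.5):
    """Cross-periodogram of tau-quantile indicator series (illustrative).

    Each series is clipped at its tau-quantile and centered, then the
    cross-periodogram of the indicator processes is taken via the FFT.
    """
    n = len(x)
    ix = (x <= np.quantile(x, tau)).astype(float) - tau
    iy = (y <= np.quantile(y, tau)).astype(float) - tau
    fx, fy = np.fft.rfft(ix), np.fft.rfft(iy)
    return np.fft.rfftfreq(n), fx * np.conj(fy) / n

rng = np.random.default_rng(0)
t = np.arange(512)
common = np.sin(2 * np.pi * 0.1 * t)      # shared cycle at frequency 0.1
x = common + rng.normal(size=512)
y = common + rng.normal(size=512)
freqs, cross = quantile_cross_periodogram(x, y, tau=0.5)
print("peak frequency:", freqs[np.argmax(np.abs(cross[1:])) + 1])
```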
Procedia PDF Downloads 392
24584 Conception of a Predictive Maintenance System for Forest Harvesters from Multiple Data Sources
Authors: Lazlo Fauth, Andreas Ligocki
Abstract:
For cost-effective use of harvesters, expensive repairs and unplanned downtimes must be reduced as far as possible. The predictive detection of failing systems and the calculation of intelligent service intervals, both necessary to avoid these factors, require in-depth knowledge of the machines' behavior. Such know-how requires permanent monitoring of the machine state from different technical perspectives. In this paper, three approaches are presented as they are currently pursued in the publicly funded project PreForst at Ostfalia University of Applied Sciences: the intelligent linking of workshop and service data, sensors on the harvester, and a dedicated online hydraulic oil condition monitoring system. Furthermore, the paper shows the potentials as well as the challenges of using these data in the conception of a predictive maintenance system.
Keywords: predictive maintenance, condition monitoring, forest harvesting, forest engineering, oil data, hydraulic data
Procedia PDF Downloads 147
24583 Sampled-Data Control for Fuel Cell Systems
Authors: H. Y. Jung, Ju H. Park, S. M. Lee
Abstract:
A sampled-data controller is presented for solid oxide fuel cell systems expressed by a sector-bounded nonlinear model. Such systems consist of a feedback connection between a linear dynamical system and a nonlinearity satisfying certain sector-type constraints. The sampled-data control scheme is also very useful since it can handle digital controllers, and increasing research effort has been devoted to sampled-data control systems with the development of modern high-speed computers. The proposed control law is obtained by solving a convex problem subject to several linear matrix inequalities. Simulation results are given to show the effectiveness of the proposed design method.
Keywords: sampled-data control, fuel cell, linear matrix inequalities, nonlinear control
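The paper's specific LMIs are not given in the abstract; the sketch below shows the computational step it relies on, namely feasibility of a basic Lyapunov-type LMI, using cvxpy with an SDP-capable solver (the test matrix is illustrative, not the fuel cell model).

```python
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # a stable toy system, not the SOFC model
n = A.shape[0]

# Find P > 0 with A'P + PA < 0; sampled-data designs stack several
# such matrix inequalities into one convex feasibility problem.
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("status:", prob.status)
print("P =\n", P.value)
```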
Procedia PDF Downloads 566
24582 How Western Donors Allocate Official Development Assistance: New Evidence From a Natural Language Processing Approach
Authors: Daniel Benson, Yundan Gong, Hannah Kirk
Abstract:
Advances in natural language processing techniques have increased data processing speeds and reduced the need for the cumbersome, manual processing often required when working with data from multilateral organizations for specific purposes. Using named entity recognition (NER) modeling and the OECD (Organisation for Economic Co-operation and Development) Creditor Reporting System database, we present the first geotagged dataset of OECD donor Official Development Assistance (ODA) projects on a global, subnational basis. The resulting data contain 52,086 ODA projects geocoded to subnational locations across 115 countries, worth a combined $87.9bn. This represents the first global OECD donor ODA project database with geocoded projects. We use these new data to revisit old questions of how ‘well’ donors allocate ODA to the developing world. This understanding is imperative for policymakers seeking to improve ODA effectiveness.
Keywords: international aid, geocoding, subnational data, natural language processing, machine learning
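The authors' pipeline is not published in the abstract; the sketch below shows the general NER-then-geocode shape using spaCy's pretrained English model, with a small hypothetical gazetteer standing in for a real subnational geocoding service.

```python
import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# Hypothetical gazetteer; a real pipeline would query a geocoder or a
# subnational gazetteer such as GeoNames.
GAZETTEER = {"Kampala": (0.3476, 32.5825), "Mombasa": (-4.0435, 39.6682)}

def geotag_project(description):
    """Extract place-name entities and resolve them to coordinates."""
    doc = nlp(description)
    places = [ent.text for ent in doc.ents if ent.label_ in ("GPE", "LOC")]
    return {p: GAZETTEER.get(p) for p in places}

text = ("Rehabilitation of water infrastructure in Kampala and "
        "port facilities in Mombasa.")
print(geotag_project(text))
```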
Procedia PDF Downloads 79
24581 Analysis of Pangasinan State University: Bayambang Students’ Concerns Through Social Media Analytics and Latent Dirichlet Allocation Topic Modelling Approach
Authors: Matthew John F. Sino Cruz, Sarah Jane M. Ferrer, Janice C. Francisco
Abstract:
The COVID-19 pandemic has affected more than 114 countries all over the world since it was declared a global health concern in 2020. Different sectors, including education, have shifted to remote/distance setups to follow the guidelines set to prevent the spread of the disease. One of the higher education institutions that shifted to a remote setup is Pangasinan State University (PSU). In order to continue providing quality instruction, PSU designed a Flexible Learning Model to keep serving its stakeholders amidst the pandemic. The model covers the redesign of instruction delivery in a remote setup and the technology needed to support these adjustments. The primary goal of this study is to determine the insights of PSU Bayambang students into the remote setup implemented during the pandemic and how they perceived the initiatives employed, in relation to their experiences of flexible learning. In this study, a topic modelling approach was implemented using Latent Dirichlet Allocation; the dataset was gathered through social media analytics. The results show that the most common concerns of the students include time and resource management, poor internet connection, and difficulty coping with the flexible learning modality. The findings of the study can serve as one of the bases for the administration to review and improve the policies and initiatives implemented during the pandemic in relation to remote service delivery. In addition, further studies can be conducted to determine the overall sentiment of other stakeholders toward the policies implemented at the University.
Keywords: COVID-19, topic modelling, students’ sentiment, flexible learning, Latent Dirichlet allocation
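A minimal sketch of the LDA step described above, using scikit-learn on a few invented stand-in posts (the real dataset and preprocessing are the study's own):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "internet connection keeps dropping during online class",
    "hard to manage my time with so many modules and deadlines",
    "no stable internet signal at home for synchronous class",
    "struggling to cope with the flexible learning modality",
    "deadlines pile up and time management is difficult",
]

vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```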
Procedia PDF Downloads 122
24580 Compressed Suffix Arrays to Self-Indexes Based on Partitioned Elias-Fano
Abstract:
A practical and simple self-indexing data structure, Partitioned Elias-Fano (PEF) Compressed Suffix Arrays (CSA), is built in linear time for the CSA based on PEF indexes. Moreover, the PEF-CSA is compared with two classical compressed indexing methods, the Ferragina and Manzini implementation (FMI) and Sad-CSA, on files of different types and sizes from the Pizza & Chili corpus. The PEF-CSA performs better on the existing data in terms of compression ratio and count and locate times, except for evenly distributed data such as protein data. The experiments show that the distribution of φ is more important to the compression ratio than the alphabet size: unevenly distributed φ data compress better, and the larger the hit count, the longer the count and locate times.
Keywords: compressed suffix array, self-indexing, partitioned Elias-Fano, PEF-CSA
Procedia PDF Downloads 253
24579 Data, Digital Identity and Antitrust Law: An Exploratory Study of Facebook’s Novi Digital Wallet
Authors: Wanjiku Karanja
Abstract:
Facebook has monopoly power in the social networking market. It has grown and entrenched its monopoly power through the capture of its users’ data value chains. However, antitrust law’s consumer welfare roots have prevented it from effectively addressing the role of data capture in Facebook’s market dominance. These regulatory blind spots are augmented in Facebook’s proposed Diem cryptocurrency project and its Novi digital wallet. Novi, which is Diem’s digital identity component, shall enable Facebook to collect an unprecedented volume of consumer data. Consequently, Novi has seismic implications for internet identity, as the network effects of Facebook’s large user base could establish it as the de facto internet identity layer. Moreover, the large tracts of data Facebook shall collect through Novi shall further entrench Facebook's market power. As such, the attendant lock-in effects of this project shall be very difficult to reverse. Urgent regulatory action is therefore required to prevent this expansion of Facebook’s data resources and monopoly power. This research thus highlights the importance of data capture to competition and market health in the social networking industry. It utilizes interviews with key experts to empirically interrogate the impact of Facebook’s data capture and control of its users’ data value chains on its market power. This inquiry is contextualized against Novi’s expansive effect on Facebook’s data value chains. It thus addresses the novel antitrust issues arising at the nexus of Facebook’s monopoly power and the privacy of its users’ data. It also explores the impact of platform design principles, specifically data portability and data interoperability, in mitigating Facebook’s anti-competitive practices. As such, this study finds that Facebook is a powerful monopoly that dominates the social media industry to the detriment of potential competitors. Facebook derives its power from its size, its annexation of the consumer data value chain, and its control of its users’ social graphs. Additionally, the platform design principles of data interoperability and data portability are not a panacea for restoring competition in the social networking market: their success depends on the establishment of robust technical standards and regulatory frameworks.
Keywords: antitrust law, data protection law, data portability, data interoperability, digital identity, Facebook
Procedia PDF Downloads 123
24578 Recommendations for Data Quality Filtering of Opportunistic Species Occurrence Data
Authors: Camille Van Eupen, Dirk Maes, Marc Herremans, Kristijn R. R. Swinnen, Ben Somers, Stijn Luca
Abstract:
In ecology, species distribution models are commonly implemented to study species-environment relationships. These models increasingly rely on opportunistic citizen science data when high-quality species records collected through standardized recording protocols are unavailable. While these opportunistic data are abundant, their uncertainty is usually high, e.g., due to observer effects or a lack of metadata. Data quality filtering is often used to reduce these types of uncertainty in an attempt to increase the value of studies relying on opportunistic data. However, filtering should not be performed blindly. In this study, recommendations are developed for the data quality filtering of opportunistic species occurrence data used as input for species distribution models. Using an extensive database of 5.7 million citizen science records from 255 species in Flanders, the impact on model performance was quantified by applying three data quality filters, and these results were linked to species traits. More specifically, presence records were filtered based on record attributes that provide information on the observation process or post-entry data validation, and changes in the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were analyzed using the Maxent algorithm with and without filtering. Controlling for sample size enabled us to study the combined impact of data quality filtering, i.e., the simultaneous impact of an increase in data quality and a decrease in sample size. Further, the variation among species in their response to data quality filtering was explored by clustering species based on four traits often related to data quality: commonness, popularity, difficulty, and body size. Findings show that model performance is affected by (i) the quality of the filtered data, (ii) the proportional reduction in sample size caused by filtering and the remaining absolute sample size, and (iii) a species ‘quality profile’ resulting from a species classification based on the four traits related to data quality. The findings resulted in recommendations on when and how to filter volunteer-generated, opportunistically collected data. This study confirms that correctly processed citizen science data can make a valuable contribution to ecological research and species conservation.
Keywords: citizen science, data quality filtering, species distribution models, trait profiles
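A toy illustration of the study's central comparison, model skill with and without a validation filter, follows; it uses synthetic data and logistic regression as a stand-in for Maxent, so the numbers only mimic the quality-versus-sample-size trade-off, not the paper's results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 4000
env = rng.normal(size=(n, 3))                    # environmental covariates
p = 1 / (1 + np.exp(-(1.5 * env[:, 0] - env[:, 1])))
y = rng.binomial(1, p)                           # true presence/absence
validated = rng.random(n) < 0.6                  # post-entry validation flag
# Unvalidated records get noisy labels, mimicking low-quality records.
y_obs = np.where(validated, y, rng.binomial(1, 0.5, size=n))

def auc_for(mask):
    Xtr, Xte, ytr, yte = train_test_split(env[mask], y_obs[mask],
                                          random_state=0)
    model = LogisticRegression().fit(Xtr, ytr)
    return roc_auc_score(yte, model.predict_proba(Xte)[:, 1])

print("unfiltered AUC:", round(auc_for(np.ones(n, bool)), 3))
print("filtered AUC:  ", round(auc_for(validated), 3))  # fewer, cleaner records
```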
Procedia PDF Downloads 204
24577 Data Quality Enhancement with String Length Distribution
Authors: Qi Xiu, Hiromu Hota, Yohsuke Ishii, Takuya Oda
Abstract:
Recently, the volume of collectable manufacturing data has been increasing rapidly. At the same time, large-scale recalls are becoming a serious social problem. Under such circumstances, there is a growing need to prevent such recalls through defect analysis, such as root cause analysis and anomaly detection, utilizing manufacturing data. However, the time needed to classify strings in manufacturing data by traditional methods is too long to meet the requirements of quick defect analysis. We therefore present the String Length Distribution Classification method (SLDC) to classify strings correctly in a short time. This method learns character features, especially string length distributions, from fields such as Product IDs and Machine IDs in BOMs and asset lists. By applying the proposal to strings in actual manufacturing data, we verified that string classification time can be reduced by 80%. As a result, we estimate that the requirement of quick defect analysis can be fulfilled.
Keywords: string classification, data quality, feature selection, probability distribution, string length
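The published SLDC presumably learns richer character features; the sketch below illustrates only the core idea, classifying a string by which field's empirical length distribution best explains it (the field names and ID formats are hypothetical).

```python
from collections import Counter, defaultdict

# Toy training data: field name -> example strings (stand-ins for
# Product IDs and Machine IDs from a BOM or asset list).
TRAIN = {
    "product_id": ["PRD-00142", "PRD-20981", "PRD-00007"],
    "machine_id": ["M12", "M07", "M99"],
}

def length_dist(strings):
    """Empirical string-length distribution with a tiny floor probability."""
    counts = Counter(len(s) for s in strings)
    total = sum(counts.values())
    return defaultdict(lambda: 1e-6,
                       {length: c / total for length, c in counts.items()})

MODELS = {field: length_dist(strs) for field, strs in TRAIN.items()}

def classify(s):
    """Assign the field whose length distribution best explains s."""
    return max(MODELS, key=lambda f: MODELS[f][len(s)])

print(classify("PRD-13370"))  # -> product_id
print(classify("M41"))        # -> machine_id
```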
Procedia PDF Downloads 319
24576 Temporally Coherent 3D Animation Reconstruction from RGB-D Video Data
Authors: Salam Khalifa, Naveed Ahmed
Abstract:
We present a new method to reconstruct a temporally coherent 3D animation from single- or multi-view RGB-D video data using unbiased feature point sampling. Given RGB-D video data in the form of a 3D point cloud sequence, our method first extracts feature points using both color and depth information. In the subsequent steps, these feature points are used to match two 3D point clouds in consecutive frames independently of their resolution. Our new motion-vector-based dynamic alignment method then fully reconstructs a spatio-temporally coherent 3D animation. We perform extensive quantitative validation using novel error functions to analyze the results. We show that, despite the limiting factors of temporal and spatial noise associated with RGB-D data, it is possible to extract temporal coherence and faithfully reconstruct a temporally coherent 3D animation from RGB-D video data.
Keywords: 3D video, 3D animation, RGB-D video, temporally coherent 3D animation
Procedia PDF Downloads 374
24575 Determining Abnormal Behaviors in UAV Robots for Trajectory Control in Teleoperation
Authors: Kiwon Yeom
Abstract:
Change points are abrupt variations in a data sequence. The detection of change points is useful in modeling, analyzing, and predicting time series in application areas such as robotics and teleoperation. In this paper, a change point is defined as a discontinuity in one of the trajectory's derivatives. This paper presents a reliable method for detecting discontinuities within three-dimensional trajectory data. The problem of determining one or more discontinuities is considered for regular and irregular trajectory data from teleoperation. We describe the geometric detection algorithm and illustrate the use of the method on real data examples.
Keywords: change point, discontinuity, teleoperation, abrupt variation
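As a rough sketch of derivative-based change point detection (not the paper's geometric algorithm), the code below flags samples of a 3D trajectory where the finite-difference velocity jumps abruptly; the robust threshold is an illustrative choice.

```python
import numpy as np

def derivative_change_points(traj, dt=1.0, k=6.0):
    """Flag samples where the first derivative of a 3D trajectory jumps.

    traj: (n, 3) positions. A sample is flagged when the change in
    finite-difference velocity exceeds a robust (MAD-based) threshold.
    """
    vel = np.diff(traj, axis=0) / dt                  # first derivative
    jump = np.linalg.norm(np.diff(vel, axis=0), axis=1)
    med = np.median(jump)
    mad = np.median(np.abs(jump - med)) + 1e-12
    return np.where(jump > med + k * 1.4826 * mad)[0] + 1

# Straight-line motion with one abrupt 90-degree turn at sample 50.
t = np.arange(100, dtype=float)
traj = np.zeros((100, 3))
traj[:, 0] = t                    # constant velocity along x
traj[50:, 1] = t[50:] - t[50]     # y-velocity switches on abruptly
print(derivative_change_points(traj))   # -> [50]
```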
Procedia PDF Downloads 168
24574 Multidimensional Item Response Theory Models for Practical Application in Large Tests Designed to Measure Multiple Constructs
Authors: Maria Fernanda Ordoñez Martinez, Alvaro Mauricio Montenegro
Abstract:
This work presents a statistical methodology for measuring and finding constructs in Latent Semantic Analysis. The approach uses the qualities of factor analysis for binary data, with interpretations drawn from Item Response Theory. More precisely, we propose first reducing dimensionality through a specific use of Principal Component Analysis on the linguistic data and then producing axes of groups from a clustering analysis of the semantic data. This approach allows the user to give meaning to the resulting clusters and to find the real latent structure present in the data. The methodology is applied to a set of real semantic data, with impressive results for coherence, speed, and precision.
Keywords: semantic analysis, factorial analysis, dimension reduction, penalized logistic regression
Procedia PDF Downloads 444
24573 Analysis of Production Forecasting in Unconventional Gas Resources Development Using Machine Learning and Data-Driven Approach
Authors: Dongkwon Han, Sangho Kim, Sunil Kwon
Abstract:
Unconventional gas resources have dramatically changed the future energy landscape. Unlike conventional gas resources, a key challenge in unconventional gas is that production forecasting requires advanced approaches due to the uncertainty and complexity of fluid flow. In this study, an artificial neural network (ANN) model integrating machine learning and a data-driven approach was developed to predict productivity in shale gas. A database of 129 wells from the Eagle Ford shale basin was used for training and testing the ANN model. Input data related to hydraulic fracturing, well completion, and shale gas productivity were selected, and the output is cumulative production. The performance of the ANN using all data sets, clustering, and variable importance (VI) models was compared using the mean absolute percentage error (MAPE). The MAPE values obtained were 44.22% using all data sets, 10.08% (cluster 1), 5.26% (cluster 2), 6.35% (cluster 3), 32.23% (ANN VI), and 23.19% (SVM VI), respectively. The results showed that the pre-trained ANN model provides more accurate results than the ANN model using all data sets.
Keywords: unconventional gas, artificial neural network, machine learning, clustering, variables importance
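A compact sketch of the modeling step, an ANN regressor evaluated by MAPE, is shown below on synthetic completion data; the feature set, value ranges, and network size are assumptions for illustration, not the Eagle Ford dataset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 129  # same well count as above; the features themselves are synthetic
# Stand-ins: lateral length, proppant mass, fluid volume, stage count.
X = rng.uniform([1000, 1e6, 1e4, 10], [3000, 8e6, 8e4, 40], size=(n, 4))
cum_prod = (0.002 * X[:, 0] + 1e-6 * X[:, 1] + 0.05 * X[:, 3]
            + rng.normal(0, 0.5, n))

Xtr, Xte, ytr, yte = train_test_split(X, cum_prod, random_state=0)
scaler = StandardScaler().fit(Xtr)
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(scaler.transform(Xtr), ytr)

pred = ann.predict(scaler.transform(Xte))
mape = np.mean(np.abs((yte - pred) / yte)) * 100
print(f"MAPE: {mape:.2f}%")
```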
Procedia PDF Downloads 196
24572 Dissimilarity-Based Coloring for Symbolic and Multivariate Data Visualization
Authors: K. Umbleja, M. Ichino, H. Yaguchi
Abstract:
In this paper, we propose a coloring method for multivariate data visualization using parallel coordinates, based on dissimilarity and tree structure information gathered during hierarchical clustering. The proposed method is an extension of proximity-based coloring, which suffers from a few undesired side effects when the hierarchical tree structure is not a balanced tree. We describe the algorithm for assigning colors based on dissimilarity information, show the application of the proposed method on three commonly used datasets, and compare the results with proximity-based coloring. We found our proposed method to be especially beneficial for symbolic data visualization, where many individual objects have already been aggregated into a single symbolic object.
Keywords: data visualization, dissimilarity-based coloring, proximity-based coloring, symbolic data
Procedia PDF Downloads 171
24571 The Impact of Data Science on Geography: A Review
Authors: Roberto Machado
Abstract:
We conducted a systematic review using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses methodology, analyzing 2,996 studies and synthesizing 41 of them to explore the evolution of data science and its integration into geography. By employing optimization algorithms, we accelerated the review process, significantly enhancing the efficiency and precision of literature selection. Our findings indicate that data science has developed over five decades, facing challenges such as the diversified integration of data and the need for advanced statistical and computational skills. In geography, the integration of data science underscores the importance of interdisciplinary collaboration and methodological innovation. Techniques like large-scale spatial data analysis and predictive algorithms show promise in natural disaster management and transportation route optimization, enabling faster and more effective responses. These advancements highlight the transformative potential of data science in geography, providing tools and methodologies to address complex spatial problems. The relevance of this study lies in the use of optimization algorithms in systematic reviews and the demonstrated need for deeper integration of data science into geography. Key contributions include identifying specific challenges in combining diverse spatial data and the necessity for advanced computational skills. Examples of connections between these two fields encompass significant improvements in natural disaster management and transportation efficiency, promoting more effective and sustainable environmental solutions with a positive societal impact.
Keywords: data science, geography, systematic review, optimization algorithms, supervised learning
Procedia PDF Downloads 34
24570 Developing Structured Sizing Systems for Manufacturing Ready-Made Garments of Indian Females Using Decision Tree-Based Data Mining
Authors: Hina Kausher, Sangita Srivastava
Abstract:
In India, there is a lack of a standard, systematic sizing approach for producing ready-made garments. Garment manufacturing companies use their own size tables, created by modifying international sizing charts for ready-made garments. The purpose of this study is to tabulate anthropometric data covering the variety of figure proportions in both height and girth. Data on 3,000 subjects were collected through an anthropometric survey of females aged 16 to 80 years from several states of India, in order to produce a sizing system suitable for clothing manufacture and retail. These data are used for the statistical analysis of body measurements, the formulation of sizing systems, and body measurement tables. Factor analysis is used to filter the control body dimensions from a large number of variables, and decision tree-based data mining is used to cluster the data. A standard, structured sizing system can facilitate pattern grading and garment production; moreover, it can improve buying ratios and upgrade size allocations for retail segments.
Keywords: anthropometric data, data mining, decision tree, garments manufacturing, sizing systems, ready-made garments
Procedia PDF Downloads 135
24569 A Framework on Data and Remote Sensing for Humanitarian Logistics
Authors: Vishnu Nagendra, Marten Van Der Veen, Stefania Giodini
Abstract:
Effective humanitarian logistics operations are a cornerstone of successful disaster relief. To be effective, however, they need to be demand-driven and supported by adequate data for prioritization; without such data, operations are carried out in an ad hoc manner and eventually become chaotic. The current availability of geospatial data helps in creating models for predictive damage and vulnerability assessment, which can be of great advantage to logisticians in understanding the nature and extent of disaster damage. This translates into actionable information on the demand for relief goods, the state of the transport infrastructure, and, subsequently, the priority areas for relief delivery. However, due to the unpredictable nature of disasters, the accuracy of these models needs improvement, which can be achieved using remote sensing data from UAVs (unmanned aerial vehicles) or satellite imagery; these, in turn, come with certain limitations. This research addresses the need for a framework to combine data from different sources to support humanitarian logistics operations and prediction models. The focus is on developing a workflow to combine data from satellites and UAVs after a disaster strikes. A three-step approach is followed: first, the data requirements for logistics activities are made explicit through semi-structured interviews with field logistics workers. Second, the limitations of current data collection tools are analyzed to develop workaround solutions, following a systems design approach. Third, the data requirements and the developed workaround solutions are fitted together into a coherent workflow. The outcome of this research will provide a new method for logisticians to obtain immediately accurate and reliable data to support data-driven decision making.
Keywords: unmanned aerial vehicles, damage prediction models, remote sensing, data driven decision making
Procedia PDF Downloads 380
24568 Machine Learning Techniques for COVID-19 Detection: A Comparative Analysis
Authors: Abeer A. Aljohani
Abstract:
The spread of the COVID-19 virus has been one of the most extreme pandemics across the globe. Also referred to as coronavirus, it is a contagious disease that continuously mutates into numerous variants; at the time of writing, the B.1.1.529 variant, labeled omicron, had been detected in South Africa. The huge spread of COVID-19 has affected many lives and placed exceptional pressure on healthcare systems worldwide, while everyday life and the global economy have been at stake. This research aims to predict COVID-19 disease in its initial stage to reduce the death count. Machine learning (ML) is nowadays used in almost every area, and the large number of COVID-19 cases has placed a huge burden on hospitals as well as health workers. To reduce this burden, this paper predicts COVID-19 disease based on the symptoms and medical history of the patient. This research presents a unique architecture for COVID-19 detection using ML techniques integrated with feature dimensionality reduction. The paper uses a standard UCI dataset comprising the symptoms of 5,434 patients and compares several supervised ML techniques with the presented architecture. The architecture utilizes a 10-fold cross-validation process for generalization and the principal component analysis (PCA) technique for feature reduction. Standard parameters are used to evaluate the proposed architecture, including F1-score, precision, accuracy, recall, receiver operating characteristic (ROC), and area under the curve (AUC). The results show that decision tree, random forest, and neural networks outperform all other state-of-the-art ML techniques. This result can help effectively in identifying COVID-19 infection cases.
Keywords: supervised machine learning, COVID-19 prediction, healthcare analytics, random forest, neural network
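A minimal sketch of the described architecture, combining standardization, PCA feature reduction, a supervised classifier, and 10-fold cross-validation, on a synthetic stand-in for the symptom dataset:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in sized like the abstract's dataset (5,434 patients).
X, y = make_classification(n_samples=5434, n_features=20, n_informative=8,
                           random_state=0)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=10),            # feature reduction
                      RandomForestClassifier(random_state=0))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"10-fold AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```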
Procedia PDF Downloads 94
24567 Facility Data Model as Integration and Interoperability Platform
Authors: Nikola Tomasevic, Marko Batic, Sanja Vranes
Abstract:
Emerging Semantic Web technologies can be seen as the next step in the evolution of intelligent facility management systems, particularly through the increased use of open-source and/or standardized concepts for data classification and semantic interpretation. To deliver such facility management systems, a comprehensive integration and interoperability platform in the form of a facility data model is a prerequisite. In this paper, one possible modelling approach to such an integrative facility data model, based on the ontology modelling concept, is presented. The complete ontology development process is described, starting from input data acquisition, through the definition of ontology concepts, to the population of those concepts. First, a core facility ontology was developed, representing the generic facility infrastructure and comprising the common facility concepts relevant from the facility management perspective. To develop the data model of a specific facility infrastructure, the core facility ontology was first extended and then populated. For the development of full-blown facility data models, Malpensa and Fiumicino airports in Italy, two major European air-traffic hubs, were chosen as a test-bed platform. Furthermore, the way these ontology models supported the integration and interoperability of the overall airport energy management system is analyzed as well.
Keywords: airport ontology, energy management, facility data model, ontology modeling
Procedia PDF Downloads 450
24566 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices
Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu
Abstract:
Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing complications of chronic kidney disease (CKD). This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD. These models will enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazard regression analyses were performed to determine the variables with high prognostic values for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory, laboratory, and metabolic indices data. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk for developing CKD. All the models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory-based data (such as age, sex, body mass index (BMI), and waist circumference), both models predict chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models have demonstrated the use of different categories of diagnostic data for CKD prediction, with or without laboratory data. The machine learning models are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data is difficult to obtain.
Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction
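To make the "non-laboratory model" concrete, here is a hedged sketch of a random forest trained only on age, sex, BMI, and waist circumference; the data generator and coefficients are invented for illustration and do not reflect the study's cohort.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 5000
age = rng.uniform(20, 80, n)
sex = rng.integers(0, 2, n)
bmi = rng.normal(25, 4, n)
waist = bmi * 2.8 + rng.normal(0, 5, n)
X = np.column_stack([age, sex, bmi, waist])   # non-laboratory features only
# Synthetic CKD label loosely driven by age and adiposity.
logit = 0.06 * (age - 50) + 0.08 * (bmi - 25) - 2.5
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr, ytr)
print("AUC (non-lab model):",
      round(roc_auc_score(yte, rf.predict_proba(Xte)[:, 1]), 3))
```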
Procedia PDF Downloads 106
24565 A Relational Data Base for Radiation Therapy
Authors: Raffaele Danilo Esposito, Domingo Planes Meseguer, Maria Del Pilar Dorado Rodriguez
Abstract:
As far as we know, no commercial solution is yet available that allows one to manage, openly and configurably according to user needs, the huge amount of data generated in a modern Radiation Oncology Department. Currently available information management systems are mainly focused on record-and-verify and clinical data, and only to a small extent on physical data, which results in a partial and limited use of the available information. In the present work, we describe the implementation at our department of a centralized information management system based on a web server. Our system manages both the information generated during patient planning and treatment and information of general interest for the whole department (i.e., treatment protocols, quality assurance protocols, etc.). Our objective is to be able to analyze all the available data in a simple and efficient way and thus obtain quantitative evaluations of our treatments, which would allow us to improve our workflow and protocols. To this end, we have implemented a relational database that allows us to use all the available information in a practical and efficient way. As always, we use only license-free software.
Keywords: information management system, radiation oncology, medical physics, free software
Procedia PDF Downloads 242
24564 A Study of Safety of Data Storage Devices of Graduate Students at Suan Sunandha Rajabhat University
Authors: Komol Phaisarn, Natcha Wattanaprapa
Abstract:
This is survey research with the objective of studying the safety of the data storage devices of graduate students of the 2013 academic year at Suan Sunandha Rajabhat University. Data were collected by a questionnaire on the safety of data storage devices according to the CIA (confidentiality, integrity, availability) principle. A sample of 81 was drawn from the population by purposive sampling. The results show that most graduate students of the 2013 academic year at Suan Sunandha Rajabhat University use handy drives (USB flash drives) to store their data, and that the safety level of the devices is good.
Keywords: security, safety, storage devices, graduate students
Procedia PDF Downloads 353
24563 Simulation of a Cost Model Response Requests for Replication in Data Grid Environment
Authors: Kaddi Mohammed, A. Benatiallah, D. Benatiallah
Abstract:
Data grid technology has brought a full emergence of new challenges, such as the heterogeneity, availability, and geographic distribution of resources, fast data access, latency minimization, and fault tolerance. Researchers interested in this technology address problems such as task scheduling, load balancing, and replication. The latter is an effective solution for achieving good performance in terms of data access, grid resource use, and data availability at reasonable cost. In a system with duplication, a coherence protocol is used to impose some degree of synchronization between the various copies and some order on updates. In this project, we present an approach for placing replicas so as to minimize the response cost of read and write requests, and we implement our model in a simulation environment. The placement techniques are based on a cost model that depends on several factors, such as bandwidth, data size, and storage nodes.
Keywords: response time, query, consistency, bandwidth, storage capacity, CERN
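The paper's full cost model is not given in the abstract; the sketch below shows a simplified instance of such a model, where reads are served by the cheapest replica and writes must propagate to every replica for coherence (node ids, bandwidths, and the latency term are illustrative assumptions).

```python
def transfer_cost(size_mb, bandwidth_mbps, latency_s=0.05):
    """Time in seconds to ship one request's data between two nodes."""
    return latency_s + (size_mb * 8) / bandwidth_mbps

def total_response_cost(placement, requests, bandwidth, data_size_mb):
    """Sum of response times for read/write requests under a placement.

    placement: set of node ids holding a replica.
    requests: list of (node_id, kind) with kind in {"read", "write"}.
    bandwidth: dict mapping (src, dst) -> Mbps.
    """
    cost = 0.0
    for node, kind in requests:
        costs = [transfer_cost(data_size_mb, bandwidth[(node, r)])
                 for r in placement]
        # Reads hit the nearest replica; writes update all replicas.
        cost += min(costs) if kind == "read" else sum(costs)
    return cost

bw = {(a, b): 100 for a in range(3) for b in range(3)}   # flat 100 Mbps grid
reqs = [(0, "read"), (1, "read"), (2, "write")]
print(total_response_cost({0, 2}, reqs, bw, data_size_mb=50))
```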
Procedia PDF Downloads 273
24562 Prompt Design for Code Generation in Data Analysis Using Large Language Models
Authors: Lu Song Ma Li Zhi
Abstract:
With the rapid advancement of artificial intelligence technology, large language models (LLMs) have become a milestone in the field of natural language processing, demonstrating remarkable capabilities in semantic understanding, intelligent question answering, and text generation. These models are gradually penetrating various industries, particularly showcasing significant application potential in the data analysis domain. However, retraining or fine-tuning these models requires substantial computational resources and ample downstream task datasets, which poses a significant challenge for many enterprises and research institutions. Without modifying the internal parameters of the large models, prompt engineering techniques can rapidly adapt these models to new domains. This paper proposes a prompt design strategy aimed at leveraging the capabilities of large language models to automate the generation of data analysis code. By carefully designing prompts, data analysis requirements can be described in natural language, which the large language model can then understand and convert into executable data analysis code, thereby greatly enhancing the efficiency and convenience of data analysis. This strategy not only lowers the threshold for using large models but also significantly improves the accuracy and efficiency of data analysis. Our approach includes requirements for the precision of natural language descriptions, coverage of diverse data analysis needs, and mechanisms for immediate feedback and adjustment. Experimental results show that with this prompt design strategy, large language models perform exceptionally well in multiple data analysis tasks, generating high-quality code and significantly shortening the data analysis cycle. This method provides an efficient and convenient tool for the data analysis field and demonstrates the enormous potential of large language models in practical applications.
Keywords: large language models, prompt design, data analysis, code generation
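A minimal sketch of the prompt-design idea follows: the analysis need is described in natural language and wrapped in a structured template before being sent to a model. The template fields and the `call_llm` stand-in are hypothetical, not the authors' exact design.

```python
# `call_llm` is a placeholder for whatever model API is actually used.
PROMPT_TEMPLATE = """You are a data analysis assistant.
Dataset schema: {schema}
Task (natural language): {task}
Requirements:
- Return only runnable Python (pandas) code, with no explanations.
- Read the data from {path}.
- Print the final result.
"""

def build_prompt(schema, task, path):
    """Fill the template; precise wording of schema and task drives quality."""
    return PROMPT_TEMPLATE.format(schema=schema, task=task, path=path)

def call_llm(prompt):
    raise NotImplementedError("replace with a real model client call")

prompt = build_prompt(
    schema="orders(order_id, region, amount, order_date)",
    task="Monthly total sales per region for 2023, sorted descending.",
    path="orders.csv",
)
print(prompt)
```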
Procedia PDF Downloads 43
24561 Comparison of Different Methods to Produce Fuzzy Tolerance Relations for Rainfall Data Classification in the Region of Central Greece
Authors: N. Samarinas, C. Evangelides, C. Vrekos
Abstract:
The aim of this paper is to compare three different methods for producing fuzzy tolerance relations for rainfall data classification: the correlation coefficient, cosine amplitude, and max-min methods. The data were obtained from seven rainfall stations in the region of central Greece and refer to 20-year time series of average monthly rainfall height. All three methods were used to express these data as a fuzzy relation, and each resulting fuzzy tolerance relation was then transformed into an equivalence relation by max-min composition. From the equivalence relation, the rainfall stations were categorized and classified according to the degree of confidence. The classification shows the similarities among the rainfall stations: stations with high similarity can be used interchangeably in water resource management scenarios, or to augment data from one station with data from another. Due to the complexity of the calculations, it is important to find out which of the methods is computationally simpler and needs fewer compositions to give reliable results.
Keywords: classification, fuzzy logic, tolerance relations, rainfall data
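For concreteness, the sketch below builds a fuzzy tolerance relation with the cosine amplitude method, turns it into an equivalence relation by repeated max-min composition, and classifies stations at a lambda-cut; the four-station rainfall matrix is invented, not the Greek station data.

```python
import numpy as np

def cosine_amplitude(X):
    """Fuzzy tolerance relation: r_ij = |x_i . x_j| / (||x_i|| ||x_j||)."""
    norms = np.linalg.norm(X, axis=1)
    R = np.abs(X @ X.T) / np.outer(norms, norms)
    np.fill_diagonal(R, 1.0)
    return R

def max_min_closure(R):
    """Compose R with itself (max-min) until transitive."""
    while True:
        S = np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)
        if np.allclose(S, R):
            return R
        R = S

def classify(R, alpha):
    """Group stations whose mutual similarity is at least alpha."""
    groups, seen = [], set()
    for i in range(len(R)):
        if i not in seen:
            members = set(np.where(R[i] >= alpha)[0])
            groups.append(sorted(members))
            seen |= members
    return groups

# Invented data: 4 stations x 5 monthly rainfall averages (mm).
X = np.array([[80, 60, 30, 20, 70],
              [78, 62, 28, 22, 69],
              [40, 90, 80, 60, 30],
              [42, 88, 82, 58, 33]], dtype=float)
R = max_min_closure(cosine_amplitude(X))
print(classify(R, alpha=0.99))   # -> [[0, 1], [2, 3]]
```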
Procedia PDF Downloads 315
24560 Customer Satisfaction and Effective HRM Policies: Customer and Employee Satisfaction
Authors: S. Anastasiou, C. Nathanailides
Abstract:
The purpose of this study is to examine the possible link between employee and customer satisfaction. The service provided by employees helps to build a good relationship with customers and can increase their loyalty. Published data on job satisfaction and customer service indicators were gathered from relevant published works, which included data from five different countries. The reviewed data indicate a significant correlation between indicators of customer and employee satisfaction in the banking sector; there was a significant correlation between the two parameters (Pearson correlation, R² = 0.52, P < 0.05). The reviewed data thus provide practical evidence linking these two parameters.
Keywords: job satisfaction, job performance, customer service, banks, human resources management
Procedia PDF Downloads 323
24559 Evaluation of Australian Open Banking Regulation: Balancing Customer Data Privacy and Innovation
Authors: Suman Podder
Abstract:
As Australian ‘Open Banking’ allows customers to share their financial data with accredited Third-Party Providers (‘TPPs’), it is necessary to evaluate whether the regulators have achieved the balance between protecting customer data privacy and promoting data-related innovation. Recognising the need to increase customers’ influence on their own data, and the benefits of data-related innovation, the Australian Government introduced the ‘Consumer Data Right’ (‘CDR’) to the banking sector through Open Banking regulation. Under Open Banking, TPPs can access customers’ banking data, which allows the TPPs to tailor their products and services to meet customer needs at a more competitive price. This facilitated access and use of customer data will promote innovation by providing opportunities for new products and business models to emerge and grow. However, the success of Open Banking depends on the willingness of customers to share their data, so the regulators have augmented the protection of data by introducing new privacy safeguards to instill confidence and trust in the system. The dilemma in policymaking is that, on the one hand, lenient data privacy laws will help the flow of information, but at the risk of individuals’ loss of privacy; on the other hand, stringent laws that adequately protect privacy may dissuade innovation. Using theoretical and doctrinal methods, this paper examines whether the privacy safeguards under Open Banking will add to the compliance burden of the participating financial institutions, resulting in the undesirable effect of stifling other policy objectives such as innovation. The contribution of this research is three-fold. In the emerging field of customer data sharing, this research is one of the few academic studies on the objectives and impact of Open Banking in the Australian context. Additionally, Open Banking is still in the early stages of implementation, so this research traces the evolution of Open Banking through policy debates regarding the desirability of customer data-sharing. Finally, the research focuses on customers’ data privacy, juxtaposing it with another important objective, the promotion of innovation, and also highlights the critical issues facing the data-sharing regime. This paper argues that while it is challenging to develop a regulatory framework for protecting data privacy without impeding innovation and jeopardising yet unknown opportunities, data privacy and innovation promote different aspects of customer welfare. This paper concludes that if the regulation is appropriately designed and implemented, the benefits of data-sharing will outweigh the cost of compliance with the CDR.
Keywords: consumer data right, innovation, open banking, privacy safeguards
Procedia PDF Downloads 141