Search results for: data acquisition (DAQ)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25431

24561 Data Monetisation by E-commerce Companies: A Need for a Regulatory Framework in India

Authors: Anushtha Saxena

Abstract:

This paper examines the process of data monetisation by e-commerce companies operating in India. Data monetisation is the collecting, storing, and analysing of consumers’ data in order to put the generated data to further use for profits and revenue. Data monetisation enables e-commerce companies to gain better business opportunities, offer innovative products and services, secure a competitive edge over others, and generate millions in revenue. This paper analyses the issues and challenges that arise from the process of data monetisation; some of the issues highlighted pertain to the right to privacy and the protection of e-commerce consumers’ data. At the same time, data monetisation cannot be prohibited, but it can be regulated and monitored through stringent laws and regulations. The right to privacy is a fundamental right guaranteed to the citizens of India through Article 21 of the Constitution of India, and the Supreme Court of India recognised it as such in the landmark judgment of Justice K.S. Puttaswamy (Retd) and Another v. Union of India. This paper highlights the legal issue of how e-commerce businesses violate individuals’ right to privacy by using the data they collect and store for economic gain. The researcher has focused mainly on e-commerce companies such as online shopping websites to analyse the legal issue of data monetisation. In the age of the Internet of Things and digital commerce, people have shifted to online shopping because it is convenient, easy, flexible, comfortable, and time-saving. At the same time, e-commerce companies store the data of their consumers and exploit it, either by selling it to third parties or by generating more data from the data already held. This violates individuals’ right to privacy, because consumers know little about what happens to the data they provide online; many times, data is collected without the consent of individuals at all. The data, whether structured or unstructured, is then used by analytics to monetise. Indian legislation such as the Information Technology Act, 2000 does not effectively protect e-consumers with respect to their data and how e-commerce businesses use it to monetise and generate revenue. The paper also examines the draft Data Protection Bill, 2021, pending in the Parliament of India, and how this Bill could make a substantial impact on data monetisation. Finally, the paper studies the European Union General Data Protection Regulation and how that legislation could inform the Indian scenario concerning e-commerce businesses and data monetisation.

Keywords: data monetization, e-commerce companies, regulatory framework, GDPR

Procedia PDF Downloads 112
24560 Experiments on Weakly-Supervised Learning on Imperfect Data

Authors: Yan Cheng, Yijun Shao, James Rudolph, Charlene R. Weir, Beth Sahlmann, Qing Zeng-Treitler

Abstract:

Supervised predictive models require labeled data for training purposes. Complete and accurate labeled data, i.e., a ‘gold standard’, is not always available, and imperfectly labeled data may need to serve as an alternative. An important question is whether the accuracy of the labeled data creates a performance ceiling for the trained model. In this study, we trained several models to recognize the presence of delirium in clinical documents using data with annotations that are not completely accurate (i.e., weakly-supervised learning). In the external evaluation, the support vector machine model with a linear kernel performed best, achieving an area under the curve of 89.3% and an accuracy of 88%, surpassing the 80% accuracy of the training sample. We then generated a set of simulated data and carried out a series of experiments which demonstrated that models trained on imperfect data can (but do not always) outperform the accuracy of the training data, e.g., the area under the curve for some models is higher than 80% when trained on data with an error rate of 40%. Our experiments also showed that the error resistance of linear modeling is associated with larger sample size, error type, and linearity of the data (all p-values < 0.001). In conclusion, this study sheds light on the usefulness of imperfect data in clinical research via weakly-supervised learning.
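
As a minimal sketch of the core experiment (not the authors' code; the data, model settings, and noise process below are illustrative assumptions), one can train a linear SVM on labels corrupted at a 40% error rate and evaluate it against the true labels:

```python
# Sketch: can a model trained on 60%-accurate labels beat 60% accuracy?
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 5000, 10
X = rng.normal(size=(n, p))
w = rng.normal(size=p)
y_true = (X @ w > 0).astype(int)            # linearly separable ground truth

# Corrupt 40% of the labels (random, class-independent noise)
flip = rng.random(n) < 0.40
y_noisy = np.where(flip, 1 - y_true, y_true)

X_tr, X_te, y_tr, _, _, y_te_true = train_test_split(
    X, y_noisy, y_true, test_size=0.3, random_state=0)

model = LinearSVC(C=1.0).fit(X_tr, y_tr)    # trained on the noisy labels
acc = model.score(X_te, y_te_true)          # evaluated against the gold standard
print(f"test accuracy vs. true labels: {acc:.3f} (label accuracy was 0.60)")
```

With enough samples and a truly linear boundary, the fitted model typically scores far above the 60% accuracy of its training labels, which is the paper's point about error resistance.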

Keywords: weakly-supervised learning, support vector machine, prediction, delirium, simulation

Procedia PDF Downloads 195
24559 Transforming Healthcare Data Privacy: Integrating Blockchain with Zero-Knowledge Proofs and Cryptographic Security

Authors: Kenneth Harper

Abstract:

Blockchain technology presents solutions for managing healthcare data, addressing critical challenges in privacy, integrity, and access. This paper explores how privacy-preserving technologies, such as zero-knowledge proofs (ZKPs) and homomorphic encryption (HE), enhance decentralized healthcare platforms by enabling secure computations and patient data protection. We examine the mathematical foundations of these methods, their practical applications, and how they meet the evolving demands of healthcare data security. Using real-world examples, this research highlights industry-leading implementations and offers a roadmap for future applications in secure, decentralized healthcare ecosystems.
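
To make the homomorphic-encryption ingredient concrete, here is a toy, self-contained Paillier implementation (an additively homomorphic scheme): a server can sum encrypted patient readings without ever seeing the plaintexts. The key size and primality test are illustrative only, not the paper's implementation, and not for production use.

```python
import math, random

def keygen(bits=256):
    def prime(b):
        while True:
            c = random.getrandbits(b) | (1 << (b - 1)) | 1
            if all(pow(a, c - 1, c) == 1 for a in (2, 3, 5, 7, 11, 13)):
                return c                       # toy Fermat primality test
    p, q = prime(bits), prime(bits)
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    # mu = (L(g^lam mod n^2))^-1 mod n, with g = n + 1 and L(x) = (x-1)//n
    mu = pow((pow(n + 1, lam, n * n) - 1) // n, -1, n)
    return (n,), (n, lam, mu)

def encrypt(pk, m):
    (n,) = pk
    r = random.randrange(1, n)                 # fresh randomness per ciphertext
    return (pow(n + 1, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(sk, c):
    n, lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

pk, sk = keygen()
a, b = encrypt(pk, 72), encrypt(pk, 68)        # e.g. two heart-rate readings
assert decrypt(sk, a * b % (pk[0] ** 2)) == 140  # E(x)*E(y) decrypts to x+y
```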

Keywords: blockchain, cryptography, data privacy, decentralized data management, differential privacy, healthcare, healthcare data security, homomorphic encryption, privacy-preserving technologies, secure computations, zero-knowledge proofs

Procedia PDF Downloads 12
24558 Wearable Heart Rate Sensor Based on Wireless System for Heart Health Monitoring

Authors: Murtadha Kareem, Oliver Faust

Abstract:

Wearable biosensor systems can be designed and developed for health monitoring, and there has been much interest in them in both the scientific and industrial communities since 2007. Fundamentally, the cost of healthcare has increased dramatically and the world population is ageing. This creates the need to harness technological improvements in small bio-sensing devices, wireless communication, microelectronics, and smart textiles, which has led to continuous development of wearable sensor-based systems. There is significant demand to monitor a patient's health status after the patient leaves the hospital, in his or her personal environment. To address this need, numerous system prototypes have recently been launched in the medical market. Their aim is to provide real-time feedback about the patient's health status, either to the patient or directly to the supervising medical centre, while being capable of notifying the patient of possible imminent health-threatening conditions. Furthermore, wearable health monitoring systems comprise new techniques for managing and monitoring chronic heart disease in elderly people. Wearable sensor systems for health monitoring include various types of miniature sensors, either wearable or implantable. Specifically, our proposed system is able to measure an essential physiological parameter, the heart rate signal, which can be transmitted via Bluetooth to a cloud server for storage, processing, analysis, and visualisation of the acquired data. The acquired measurements are connected through the Internet of Things to a central node, for instance an Android smartphone or tablet, which is used to visualise the collected information in an application or to transmit it to a medical centre.
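
A minimal sketch of the upload leg of such a pipeline (the endpoint URL and record layout are hypothetical; the described system relays readings over Bluetooth to a phone before the cloud upload):

```python
# Sketch: push one heart-rate sample per second to a cloud API.
import random, time
import requests

ENDPOINT = "https://cloud.example.com/api/heart_rate"   # hypothetical server

while True:
    sample = {"patient_id": "P0001",
              "bpm": random.randint(60, 100),   # stand-in for the sensor readout
              "ts": time.time()}
    requests.post(ENDPOINT, json=sample, timeout=5)
    time.sleep(1.0)                             # one reading per second
```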

Keywords: wearable sensor, heart rate, internet of things, chronic heart disease

Procedia PDF Downloads 160
24557 Operating Speed Models on Tangent Sections of Two-Lane Rural Roads

Authors: Dražen Cvitanić, Biljana Maljković

Abstract:

This paper presents models for predicting operating speeds on tangent sections of two-lane rural roads, developed on continuous speed data. The data come from 20 drivers of different ages and driving experience, driving their own cars along an 18 km section of a state road. The data were first used to determine maximum operating speeds on tangents and to compare them with speeds in the middle of tangents, i.e., the speed data used in most operating speed studies. Analysis of the continuous speed data indicated that spot speed data are not reliable indicators of the relevant speeds. After that, operating speed models for tangent sections were developed. There was no significant difference between models developed using speed data in the middle of tangent sections and models developed using maximum operating speeds on tangent sections. All developed models have a higher coefficient of determination than models developed on spot speed data. Thus, it can be concluded that the method of measurement has a more significant impact on the quality of an operating speed model than the location of measurement.
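
The model comparison can be illustrated with a small sketch (synthetic numbers, not the study data): fit the same linear model form to continuous-profile speeds and to noisier spot speeds, and compare the coefficients of determination.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200
tangent_length = rng.uniform(100, 1200, n)            # metres, illustrative
v85_continuous = 60 + 0.03 * tangent_length + rng.normal(0, 4, n)
v85_spot = v85_continuous + rng.normal(0, 8, n)       # extra measurement noise

X = tangent_length.reshape(-1, 1)
r2_cont = LinearRegression().fit(X, v85_continuous).score(X, v85_continuous)
r2_spot = LinearRegression().fit(X, v85_spot).score(X, v85_spot)
print(f"R² continuous data: {r2_cont:.2f}, R² spot data: {r2_spot:.2f}")
```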

Keywords: operating speed, continuous speed data, tangent sections, spot speed, consistency

Procedia PDF Downloads 451
24556 Analysis of Farmers' Involvement in Public and Private Extension Services in Southwestern Nigeria

Authors: S. O. Ayansina, R. A. Oyeyinka, K. K. Bolarinwa

Abstract:

There is an increasing demand for functional extension delivery services in Nigeria, with a view to meeting the food and fibre needs of the ever-growing human and animal populations. This study was designed to examine farmers' involvement in public and private extension services in southwestern Nigeria, specifically to explore farmers' participation in the two types of organizations involved and to evaluate the performance of personnel in these organizations. A multi-stage random sampling technique was used to select 30 respondents from each of the three selected organizations in Ogun, Osun, and Oyo states in southwestern Nigeria. Data were collected with an interview schedule and analyzed at both descriptive and inferential levels. Kruskal-Wallis one-way analysis of variance was used to test the differences between the participation of farmer beneficiaries under the public and private extension services and the level of benefit accruing to them from the extension organizations involved in the study. Results revealed that private extension organizations performed better and were preferred by the beneficiaries. The Kruskal-Wallis test of difference (χ² = 0.709) showed no significant difference between farmers' participation in the extension services of public and private organizations, but showed a significant difference (χ² = 12.074) in the benefits achieved by respondents in the two types of organizations. These benefits included increased quantity of crops produced, farm income, skill acquisition, and improved education in private extension organizations. Based on this result, it can be inferred that beneficiaries generally preferred private extension organizations because of their effectiveness and vibrancy in programme administration. A general overhauling, and possibly privatization, of public extension is therefore recommended in order to cater for the teeming population of farmers demanding efficient and functional extension services to better their lot in the production, processing, and marketing of agricultural produce.
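
For readers unfamiliar with the test, a small illustration of a Kruskal-Wallis comparison with made-up participation scores (not the survey data):

```python
from scipy import stats

public_scores = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]    # hypothetical ratings
private_scores = [4, 3, 5, 4, 3, 4, 5, 3, 4, 4]

h, p = stats.kruskal(public_scores, private_scores)
print(f"H = {h:.3f}, p = {p:.3f}")    # compare p against alpha = 0.05
```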

Keywords: public and private involvement, extension services, farmers' participation

Procedia PDF Downloads 400
24555 A Neural Network Based Clustering Approach for Imputing Multivariate Values in Big Data

Authors: S. Nickolas, Shobha K.

Abstract:

The treatment of incomplete data is an important step in data pre-processing. Missing values create a noisy environment in all applications and are an unavoidable problem in big data management and analysis. Numerous techniques, such as discarding rows with missing values, mean imputation, expectation maximization, neural networks with evolutionary algorithms or optimized techniques, and hot-deck imputation, have been introduced by researchers for handling missing data. Among these, imputation techniques play a positive role in filling in missing values when it is necessary to use all records in the data rather than discard records with missing values. In this paper, we propose a novel artificial neural network based clustering algorithm, Adaptive Resonance Theory-2 (ART2), for imputation of missing values in mixed-attribute data sets. ART2 can recognize learned models quickly and adapt to new objects rapidly. It carries out model-based clustering by using competitive learning and a self-stabilizing mechanism in dynamic environments without supervision. The proposed approach not only imputes the missing values but also provides information about handling outliers.
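
A simplified, numeric-only sketch of the idea follows (real ART2 handles mixed attributes and has a richer matching and update rule than this): rows are assigned to a cluster whose prototype lies within a vigilance threshold, new clusters are created otherwise, and missing entries are filled from the winning prototype.

```python
import numpy as np

def art_like_impute(X, vigilance=1.0, lr=0.5):
    X = X.astype(float).copy()
    col_means = np.nanmean(X, axis=0)        # fallback for brand-new clusters
    prototypes = []
    for row in X:
        obs = ~np.isnan(row)
        # distance to each prototype on the observed attributes only
        dists = [np.linalg.norm(row[obs] - p[obs]) for p in prototypes]
        if dists and min(dists) < vigilance:
            k = int(np.argmin(dists))
            prototypes[k][obs] += lr * (row[obs] - prototypes[k][obs])
        else:
            k = len(prototypes)
            prototypes.append(np.where(obs, row, col_means))
        row[~obs] = prototypes[k][~obs]      # impute from the winning prototype
    return X

data = np.array([[1.0, 2.0, 3.0],
                 [1.1, 1.9, np.nan],
                 [8.0, 8.1, 9.0],
                 [7.9, np.nan, 9.2]])
print(art_like_impute(data, vigilance=2.0))
```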

Keywords: ART2, data imputation, clustering, missing data, neural network, pre-processing

Procedia PDF Downloads 272
24554 The Effect That the Data Assimilation of Qinghai-Tibet Plateau Has on a Precipitation Forecast

Authors: Ruixia Liu

Abstract:

The Qinghai-Tibet Plateau has an important influence on precipitation in the regions downstream of it. Remote sensing data has its own advantages, and a numerical prediction model that assimilates remote sensing data performs better than one that does not. We obtained assimilated MHS, terrestrial, and sounding data from GSI, introduced the result into WRF, and obtained forecasts of relative humidity and precipitation. By comparing the 1 h, 6 h, 12 h, and 24 h results, we found that assimilating MHS, terrestrial, and sounding data made the forecast of precipitation amount, area, and centre more accurate. Analysis of the differences in the initial field showed that data assimilation over the Qinghai-Tibet Plateau influences the downstream forecast through its effect on initial temperature and relative humidity.

Keywords: Qinghai-Tibet Plateau, precipitation, data assimilation, GSI

Procedia PDF Downloads 230
24553 Positive Affect, Negative Affect, Organizational and Motivational Factor on the Acceptance of Big Data Technologies

Authors: Sook Ching Yee, Angela Siew Hoong Lee

Abstract:

Big data technologies have become a trend to exploit business opportunities and provide valuable business insights through the analysis of big data. However, there are still many organizations that have yet to adopt big data technologies, especially small and medium enterprises (SMEs). This study uses the technology acceptance model (TAM), looking at several constructs in the TAM together with additional constructs: positive affect, negative affect, organizational factor, and motivational factor. The conceptual model proposed in the study will be tested on the relationship and influence of positive affect, negative affect, organizational factor, and motivational factor on the intention to use big data technologies. The study is empirical, using a survey to collect data.

Keywords: big data technologies, motivational factor, negative affect, organizational factor, positive affect, technology acceptance model (TAM)

Procedia PDF Downloads 356
24552 Big Data Analysis with Rhipe

Authors: Byung Ho Jung, Ji Eun Shin, Dong Hoon Lim

Abstract:

Rhipe, which integrates the R and Hadoop environments, makes it possible to process and analyze massive amounts of data in a distributed processing environment. In this paper, we implemented multiple regression analysis using Rhipe on real data of various sizes. Experimental results comparing the performance of Rhipe with the stats and biglm packages (available on bigmemory) showed that Rhipe was faster than the other packages, owing to parallel processing in which the number of map tasks grows with the size of the data. We also compared the computing speeds of the pseudo-distributed and fully-distributed modes for configuring a Hadoop cluster. The results showed that fully-distributed mode was faster than pseudo-distributed mode, and that its computing speed increased with the number of data nodes.
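
The parallel-regression idea translates directly into map-reduce terms: each mapper computes the partial sums X'X and X'y for its data chunk, and a single reduce solves the normal equations. A language-neutral Python sketch of that pattern (the paper's own implementation is in R/Rhipe):

```python
import numpy as np

def mapper(chunk_X, chunk_y):
    # partial sufficient statistics for one data chunk
    return chunk_X.T @ chunk_X, chunk_X.T @ chunk_y

def reducer(parts):
    XtX = sum(p[0] for p in parts)
    Xty = sum(p[1] for p in parts)
    return np.linalg.solve(XtX, Xty)         # OLS coefficients

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(10_000), rng.normal(size=(10_000, 3))])
beta_true = np.array([1.0, 2.0, -3.0, 0.5])
y = X @ beta_true + rng.normal(size=10_000)

parts = [mapper(Xc, yc) for Xc, yc in zip(np.array_split(X, 8),
                                          np.array_split(y, 8))]
print(reducer(parts))   # matches a single-machine fit, up to rounding
```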

Keywords: big data, Hadoop, parallel regression analysis, R, Rhipe

Procedia PDF Downloads 494
24551 Security in Resource-Constrained Networks: Lightweight Encryption for Z-MAC

Authors: Mona Almansoori, Ahmed Mustafa, Ahmad Elshamy

Abstract:

A wireless sensor network is formed by a combination of nodes that systematically transmit data to their base stations. This transmitted data can easily be compromised, given the limited processing power of the nodes and the need for data consistency, and there is an ongoing discussion of how to secure data transfer in real time. This paper presents a mechanism to securely transmit data over a chain of sensor nodes, without compromising the throughput of the network, by utilizing the battery resources available in the sensor node. Our methodology takes advantage of the Z-MAC protocol for its efficiency, and provides a unique key through a sharing mechanism that uses neighbour nodes' MAC addresses. We present a lightweight data-integrity layer embedded in the Z-MAC protocol, and show that our protocol performs better than Z-MAC when different attack scenarios are introduced.
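
A hedged sketch of the two ingredients described above (the network secret and tag length are placeholders; the details of the actual Z-MAC-embedded scheme are the authors'): derive a pairwise key from the two neighbours' MAC addresses plus a pre-deployed secret, and attach a lightweight integrity tag to each frame.

```python
import hmac, hashlib

NETWORK_SECRET = b"pre-deployed-secret"       # hypothetical pre-shared value

def pairwise_key(mac_a: bytes, mac_b: bytes) -> bytes:
    # Sort so both neighbours derive the same key regardless of direction.
    material = b"".join(sorted([mac_a, mac_b]))
    return hmac.new(NETWORK_SECRET, material, hashlib.sha256).digest()

def tag_frame(key: bytes, payload: bytes) -> bytes:
    return payload + hmac.new(key, payload, hashlib.sha256).digest()[:8]

def verify_frame(key: bytes, frame: bytes) -> bool:
    payload, tag = frame[:-8], frame[-8:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()[:8]
    return hmac.compare_digest(tag, expected)

k = pairwise_key(b"\x00\x1a\x2b\x3c\x4d\x5e", b"\x00\x1a\x2b\x3c\x4d\x5f")
frame = tag_frame(k, b"sensor reading: 23.4C")
assert verify_frame(k, frame)                 # tampering would fail this check
```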

Keywords: hybrid MAC protocol, data integrity, lightweight encryption, neighbor based key sharing, sensor node data processing, Z-MAC

Procedia PDF Downloads 139
24550 The Effect of Costus igneus Extract on Learning and Memory in Normal and Diabetic Rats

Authors: Shalini Adiga, Shashikant Chetty, Jisha, Shobha Kamath

Abstract:

Background: Moderate impairment of learning and memory has been observed in both type 1 and type 2 diabetes mellitus in humans and experimental animals. Changes in glucose utilization and the oxidative stress that occur in diabetes are considered the main reasons for cognitive dysfunction. Objective: Costus igneus (CI), which is known to possess hypoglycemic activity, was evaluated in this study for its effect on learning and memory in normal and diabetic rats. Methods: Wistar rats were divided into control, CI-alcoholic-extract-treated normal (250 and 500 mg/kg), diabetic control, and CI-treated diabetic groups. CI treatment was continued for 4 weeks. For induction of diabetes, a single dose of streptozotocin was injected (30 mg/kg i.p.). Entrance latency and time spent in the dark room during acquisition, and at 24 and 48 h after an aversive shock, in a passive avoidance model were used as an index of learning and memory. Glutathione and malondialdehyde levels in the brain and blood glucose were measured. Data were analysed using ANOVA. Results: During the three trials in the exploration test, the diabetic control rats exhibited no significant change in entrance latency or in the total time spent in the dark compartment. During retention testing, the entrance latency of the treated diabetic groups was two times lower at 24 h and three times lower at 48 h after the aversive stimulus compared to untreated diabetic rats. The normal drug-treated rats showed behaviour similar to the saline control. Treatment with CI significantly reduced the raised blood sugar and MDA levels of diabetic rats. Conclusion: Costus igneus prevented cognitive dysfunction in diabetic rats, which can be attributed to its antioxidant and antihyperglycemic activities.

Keywords: Costus igneus, diabetes, learning and memory, cognitive dysfunction

Procedia PDF Downloads 346
24549 Survival Data with Incomplete Missing Categorical Covariates

Authors: Madaki Umar Yusuf, Mohd Rizam B. Abubakar

Abstract:

Survival censored data with incomplete covariate information are a common occurrence in many studies in which the outcome is survival time. When the missing covariates are categorical, a useful technique for obtaining parameter estimates is the EM algorithm by the method of weights. The survival outcome is modelled within the class of generalized linear models, and this method requires the estimation of the parameters of the distribution of the covariates. In this paper, we apply the approach to clinical trial data with five covariates, four of which have missing values, in the presence of censoring.
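
The method of weights can be sketched in miniature; in the toy below, a logistic outcome model stands in for the survival model, and the missing covariate is binary (all data and settings are illustrative assumptions). Records with a missing covariate are expanded into one pseudo-record per category, weighted by the posterior probability of that category; the weighted fit and the weights are then iterated to convergence.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000
x_true = rng.binomial(1, 0.4, n)                  # binary covariate
y = rng.binomial(1, 1.0 / (1.0 + np.exp(1.0 - 2.0 * x_true)))
x = np.where(rng.random(n) < 0.3, -1, x_true)     # -1 marks "missing" (MAR)
miss = x == -1

pi = 0.5                                          # initial P(x = 1)
model = LogisticRegression().fit([[0], [1]], [0, 1])   # crude initial fit

for _ in range(25):
    # E-step: posterior P(x = 1 | y) for subjects with x missing.
    p_y1 = model.predict_proba([[0], [1]])[:, 1]  # P(y=1|x=0), P(y=1|x=1)
    lik = np.where(y[miss, None] == 1, p_y1, 1 - p_y1)   # shape (n_miss, 2)
    post = lik * np.array([1 - pi, pi])
    w1 = post[:, 1] / post.sum(axis=1)            # weight of pseudo-record x=1

    # M-step: weighted fit on observed records plus the two pseudo-records.
    X_fit = np.concatenate([x[~miss], np.zeros(miss.sum()), np.ones(miss.sum())])
    y_fit = np.concatenate([y[~miss], y[miss], y[miss]])
    w_fit = np.concatenate([np.ones((~miss).sum()), 1 - w1, w1])
    model.fit(X_fit.reshape(-1, 1), y_fit, sample_weight=w_fit)
    pi = (x[~miss].sum() + w1.sum()) / n          # update covariate distribution

print("estimated P(x=1):", round(pi, 3))          # recovers ~0.4
print("logit coefficients:", model.intercept_, model.coef_)
```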

Keywords: EM algorithm, incomplete categorical covariates, ignorable missing data, missing at random (MAR), Weibull distribution

Procedia PDF Downloads 400
24548 A Study of Blockchain Oracles

Authors: Abdeljalil Beniiche

Abstract:

The limitation of smart contracts is that they cannot access external data that might be required to control the execution of business logic. Oracles can be used to provide external data to smart contracts. An oracle is an interface that delivers data from sources outside the blockchain to a smart contract to consume, and oracles can deliver different types of data depending on the industry and requirements. In this paper, we study and describe the widely used blockchain oracles. We then elaborate on their potential role, technical architecture, and design patterns. Finally, we discuss the human oracle and its key role in solving the truth problem by reaching a consensus about a certain inquiry or task.

Keywords: blockchain, oracles, oracles design, human oracles

Procedia PDF Downloads 131
24547 Factors Affecting the Academic Performance of In-Service Students in Science Education

Authors: Foster Chilufya

Abstract:

This study sought to determine factors that affect the academic performance of mature-age students in Science Education at the University of Zambia. It was guided by Maslow's hierarchy of needs, a theory that relates achievement motivation to academic performance. A descriptive research design was used, and both qualitative and quantitative methods were employed to collect data from 88 respondents, selected through simple random and purposive sampling procedures. Concerning the factors that motivate mature-age students to choose Science Education programs, the following were cited: the need for self-actualization, acquisition of new knowledge, encouragement from friends and family members, good performance at high school and diploma level, love for the sciences, prestige, and the desire to be promoted at work. As regards the factors affecting the academic performance of mature-age students, both negative and positive factors were identified. These included demographic factors such as age and gender; psychological characteristics such as motivation and preparedness to learn, self-set goals, self-esteem, ability, confidence, and persistence; prior academic performance at high school and college level; social factors; institutional factors; and the outcomes of the learning process. To address the factors that negatively affect the academic performance of mature-age students, the following measures were identified: encouraging group discussions, encouraging an interactive learning process, providing a conducive learning environment, reviewing the Science Education curriculum, and providing adequate learning materials. Based on these findings, it is recommended that the School of Education introduce a program in Science Education specifically for students training to be teachers of science, and additionally introduce majors in Physics Education, Biology Education, Chemistry Education, and Mathematics Education relevant to what is taught in high schools.

Keywords: academic, performance, in-service, science

Procedia PDF Downloads 309
24546 Developing Digital Skills in Museum Professionals through Digital Education: International Good Practices and Effective Learning Experiences

Authors: Antonella Poce, Deborah Seid Howes, Maria Rosaria Re, Mara Valente

Abstract:

Creative Industries education contexts, and Museum Education in particular, generally place a low emphasis on the use of new digital technologies and on the development of digital abilities and transversal skills. The spread of the Covid-19 pandemic has underlined the importance of these abilities and skills in cultural heritage education contexts: by gaining digital skills, museum professionals can improve their career opportunities, with access to new distribution markets through internet access and e-commerce, new entrepreneurial tools, and new forms of digital expression in their work. The use of web, mobile, social, and analytical tools is becoming more and more essential in the heritage field, and in museums in particular, to face the challenges posed by the current worldwide health emergency. Recent studies highlight the need for stronger partnerships between the cultural and creative sectors, social partners, and education and training providers in order to provide these sectors with the combination of skills needed for creative entrepreneurship in a rapidly changing environment. Given the above conditions, the paper presents different examples of digital learning experiences carried out in Italian and US contexts with the aim of promoting digital skills in museum professionals. In particular, a quali-quantitative research study was conducted on two international postgraduate courses, “Advanced Studies in Museum Education” (2 years) and “Museum Education” (1 year), in order to identify the educational effectiveness of the online learning strategies used (e.g., OBL, digital storytelling, peer evaluation) for the development of digital skills and the acquisition of specific content. More than 50 museum professionals participating in the two pathways took part in the learning activities, providing evaluation data useful for research purposes.

Keywords: digital skills, museum professionals, technology, education

Procedia PDF Downloads 172
24545 Multi Data Management Systems in a Cluster Randomized Trial in Poor Resource Setting: The Pneumococcal Vaccine Schedules Trial

Authors: Abdoullah Nyassi, Golam Sarwar, Sarra Baldeh, Mamadou S. K. Jallow, Bai Lamin Dondeh, Isaac Osei, Grant A. Mackenzie

Abstract:

A randomized controlled trial is the "gold standard" for evaluating the efficacy of an intervention. Large-scale, cluster-randomized trials, however, are expensive and difficult to conduct. To guarantee the validity and generalizability of findings, high-quality, dependable, and accurate data management systems are necessary; robust data management is crucial for optimizing and validating the quality, accuracy, and dependability of trial data. There is a scarcity of literature on the difficulties of data collection in clinical trials in low-resource settings, which may raise concerns. Effective data management systems and implementation goals should be part of trial procedures, and publicizing the creative clinical data management techniques used in clinical trials should boost public confidence in a study's conclusions and encourage replication. This report details the development and deployment of multiple data management systems and methodologies in the ongoing pneumococcal vaccine schedules trial in rural Gambia. We implemented six different data management, synchronization, and reporting systems using Microsoft Access, REDCap, SQL, Visual Basic, Ruby, and ASP.NET, and developed data synchronization tools to integrate data from these systems into a central server for the reporting systems. Clinician, laboratory, and field data validation systems and methodologies are the main topics of this report. Our process development efforts across all domains were driven by the complexity of the research data: real-time collection, online reporting, data synchronization, and methods for cleaning and verifying data. We effectively used multiple data management systems, demonstrating the value of creative approaches in enhancing the consistency, accuracy, and reporting of trial data in a poor-resource setting.
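
The synchronization step can be illustrated with a minimal last-write-wins merge (the record layout below is hypothetical; the trial's actual tools span Access, REDCap, SQL, and ASP.NET):

```python
from datetime import datetime

def sync_to_central(central: dict, *sources: dict) -> dict:
    for source in sources:
        for pid, record in source.items():
            held = central.get(pid)
            if held is None or record["modified"] > held["modified"]:
                central[pid] = record          # last write wins
    return central

clinic = {"P001": {"hb": 11.2, "modified": datetime(2024, 5, 1, 9, 0)}}
lab = {"P001": {"hb": 11.4, "modified": datetime(2024, 5, 1, 12, 30)}}
central = sync_to_central({}, clinic, lab)
print(central["P001"]["hb"])                   # 11.4, the later lab value
```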

Keywords: data management, data collection, data cleaning, cluster-randomized trial

Procedia PDF Downloads 16
24544 Study on the Stages of Knowledge Flow in Central Libraries of Tehran Universities by the Pattern of American Productivity & Quality Center

Authors: Amir Reza Asnafi, Ehsan Tajabadi, Mohsen Hajizeinolabedini

Abstract:

The purpose of this study is to examine knowledge flow in the central libraries of Tehran universities using the American Productivity & Quality Center (APQC) framework. The present study is an applied, descriptive survey in terms of its purpose and methodology, and the APQC framework was used for data collection. The study population consists of managers and supervisors of the central libraries' departments of the public universities of Tehran belonging to the Ministry of Science, Research and Technology. These include the central libraries of Al-Zahra University, Amirkabir, Tarbiat Modares, Tehran, Khajeh Nasir Toosi University of Technology, Shahed, Sharif, Shahid Beheshti, Allameh Tabataba'i University, and the Iran University of Science and Technology. Due to the limited number of members of the community, sampling was not performed and a census was conducted instead. The study showed that, across the seven dimensions of knowledge flow in APQC, these libraries are far from the desirable level, and that many activities in the field of knowledge flow are needed to reach the ideal point; suggestions to this end are made in this study. A one-sample t-test showed that these libraries are at a poor level in the aspects of knowledge acquisition, review, sharing, and access, and at a medium level in the dimensions of knowledge creation, identification, and use. A MANOVA (multivariate analysis of variance) showed no significant difference between these libraries in the dimensions of knowledge flow, i.e., the status of knowledge flow is at the same level across the libraries, except for the knowledge creation aspect, which differs slightly.

Keywords: knowledge flow, knowledge management, APQC, Tehran’s academic university libraries

Procedia PDF Downloads 153
24543 Detecting Critical Thinking Skills in Written Text Analysis: The Use of Artificial Intelligence in Text Analysis vs. ChatGPT

Authors: Lucilla Crosta, Anthony Edwards

Abstract:

Companies and the marketplace nowadays struggle to find employees with skills adequate to the anticipated growth of their businesses. At least half of workers will need to undertake some form of up-skilling in the next five years in order to remain aligned with the demands of the market. To meet these challenges, there is a clear need to explore the potential uses of AI (artificial intelligence) based tools in assessing the transversal skills (critical thinking, communication, and soft skills of different types) of workers and adult students, while empowering them to develop those same skills in a reliable, trustworthy way. Companies seek workers with key transversal skills that can make the difference between workers now and in the future. Critical thinking, however, seems to be one of the most important of these skills, bringing unexplored ideas and company growth in business contexts. What employers have been reporting for years now is that this skill is lacking in the majority of workers and adult students, and this is particularly visible through their writing. This paper investigates how critical thinking and communication skills are currently developed in higher education environments at the postgraduate level through the use of AI tools. It analyses the use of a branch of AI, namely machine learning with big data, and of neural network analysis. It also examines the acquisition of these skills through AI tools and the effects this has on employability. The paper draws on researchers and studies in higher education at both the national (Italy and UK) and international level. The issues associated with the development and use of one specific AI tool, Edulai, are examined in detail. Finally, comparisons are made between such tools and the more recent phenomenon of ChatGPT, and their strengths and drawbacks are analysed.
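
As a generic illustration of the underlying approach (not Edulai's model, whose internals are not described here), a supervised text classifier can be trained on assessor-labelled writing samples to flag markers of critical thinking; the snippets and labels below are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [  # hypothetical assessor-labelled snippets
    "The evidence contradicts the claim because the sample was biased.",
    "I really liked the article and found it interesting.",
    "An alternative explanation is that the effect is confounded by age.",
    "The author says many things about the topic.",
]
labels = [1, 0, 1, 0]   # 1 = shows critical thinking, 0 = does not

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["However, the data do not support this conclusion."]))
```

A real system would of course need a far larger labelled corpus and careful validation against human assessors.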

Keywords: critical thinking, artificial intelligence, higher education, soft skills, ChatGPT

Procedia PDF Downloads 103
24542 Finding Bicluster on Gene Expression Data of Lymphoma Based on Singular Value Decomposition and Hierarchical Clustering

Authors: Alhadi Bustaman, Soeganda Formalidin, Titin Siswantining

Abstract:

DNA microarray technology is used to analyze thousands of gene expression measurements simultaneously and is a very important tool for drug development and testing, function annotation, and cancer diagnosis. Various clustering methods have been used for analyzing gene expression data. However, when analyzing very large and heterogeneous collections of gene expression data, conventional clustering methods often cannot produce a satisfactory solution. Biclustering algorithms have been used as an alternative approach to identifying structures in gene expression data. In this paper, we introduce a transform technique based on singular value decomposition to obtain a normalized matrix of gene expression data, followed by the Mixed-Clustering algorithm and the Lift algorithm, inspired by the node-deletion and node-addition phases proposed by Cheng and Church, based on agglomerative hierarchical clustering (AHC). An experimental study on standard datasets demonstrated the effectiveness of the algorithm on gene expression data.
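
A compact sketch of the SVD-based preprocessing followed by an AHC step (the Mixed-Clustering and Lift algorithms themselves are omitted; data and settings below are illustrative):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
expr = rng.normal(size=(100, 40))            # 100 genes x 40 conditions
expr[:50, :20] += 3.0                        # planted bicluster

U, s, Vt = np.linalg.svd(expr, full_matrices=False)
rank = 2
normalized = U[:, :rank] * s[:rank]          # gene coordinates in the
                                             # top-2 singular-vector space
Z = linkage(normalized, method="average")    # agglomerative (AHC) step
gene_groups = fcluster(Z, t=2, criterion="maxclust")
print(np.bincount(gene_groups))              # two groups of ~50 genes each
```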

Keywords: agglomerative hierarchical clustering (AHC), biclustering, gene expression data, lymphoma, singular value decomposition (SVD)

Procedia PDF Downloads 273
24541 An Efficient Traceability Mechanism in the Audited Cloud Data Storage

Authors: Ramya P, Lino Abraham Varghese, S. Bose

Abstract:

With cloud storage services, data can be stored in the cloud and shared across multiple users. Unexpected hardware or software failures and human errors can easily cause data stored in the cloud to be lost or corrupted, affecting its integrity. Mechanisms have been designed to allow both data owners and public verifiers to efficiently audit cloud data integrity without retrieving the entire data set from the cloud server. However, public auditing of the integrity of shared data with the existing mechanisms will unavoidably reveal confidential information, such as the identity of the signer, to public verifiers. Here, a privacy-preserving mechanism is proposed to support public auditing of shared data stored in the cloud. It uses group signatures to compute the verification metadata needed to audit the correctness of shared data. The identity of the signer of each block in the shared data is kept confidential from public verifiers, who can verify shared data integrity without retrieving the entire file; on demand, the signer of each block is revealed to the owner alone. The group private key is generated once by the owner in a static group, whereas in a dynamic group the group private key changes when users are revoked from the group. When users leave the group, the blocks they have already signed are re-signed by the cloud service provider instead of the owner, which is handled by an efficient proxy re-signature scheme.
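
A much-simplified sketch of block-level auditing (HMAC tags stand in here for the group signatures of the actual scheme, which additionally hide the signer's identity): the verifier checks a sample of blocks without downloading the whole file.

```python
import hmac, hashlib, random

KEY = b"group-key-placeholder"   # hypothetical shared verification key
BLOCK = 4096

def sign_blocks(data: bytes):
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return blocks, [hmac.new(KEY, b, hashlib.sha256).digest() for b in blocks]

def audit(blocks, tags, sample_size=3) -> bool:
    for i in random.sample(range(len(blocks)), sample_size):
        expected = hmac.new(KEY, blocks[i], hashlib.sha256).digest()
        if not hmac.compare_digest(tags[i], expected):
            return False         # block i was lost or corrupted
    return True

blocks, tags = sign_blocks(b"shared cloud file contents " * 2000)
blocks[5] = b"tampered"          # simulate corruption at the cloud server
print("audit passed:", audit(blocks, tags, sample_size=len(blocks)))
```

Sampling fewer blocks trades detection probability for bandwidth, which is the whole point of auditing without retrieving the entire file.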

Keywords: data integrity, dynamic group, group signature, public auditing

Procedia PDF Downloads 387
24540 Profit Comparison of Fisheries in East Aceh Regency, Aceh Province

Authors: Mawardati Mawardati

Abstract:

This research on traditional milkfish and shrimp pond cultivation was carried out from March to May 2018 in East Aceh District. The study aims to analyze the difference in profit between traditional milkfish cultivation and shrimp farming in East Aceh District, Aceh Province. The analytical methods used are acquisition analysis and the independent-sample t-test. The results showed a significant difference between milkfish farming and shrimp farming in East Aceh District, Aceh Province: the average profit from shrimp farming is higher than that from milkfish farming. Demand for shrimp, particularly export demand, exceeds supply, so the price of shrimp remains far higher than the price of milkfish.

Keywords: comparative, profit, shrimp, milkfish

Procedia PDF Downloads 150
24539 Site Investigation in Karst Terrain Using Multichannel Analysis of Surface Waves and Electrical Resistivity Tomography

Authors: Nathainail Bashir, Neil Anderson

Abstract:

The objective of this study was to investigate the current state of practice with regard to karst detection methods and to recommend the best method and pattern of arrays to acquire the desired results. Proper site investigation in karst-prone regions is extremely valuable in determining the location of possible voids. Two geophysical techniques were employed: multichannel analysis of surface waves (MASW) and electrical resistivity tomography (ERT). The MASW data were acquired at each test location using different array lengths and different array orientations (to increase the probability of acquiring interpretable data in karst terrain). The ERT data were acquired using a dipole-dipole array consisting of 168 electrodes. The MASW data were interpreted (re: estimated depth to the physical top of rock) and used to constrain and verify the interpretation of the ERT data. The ERT data indicate that poorer-quality MASW data were acquired in areas with significant local variation in the depth to the top of rock.

Keywords: dipole-dipole, ERT, karst terrains, MASW

Procedia PDF Downloads 314
24538 Data Science in Military Decision-Making: A Semi-Systematic Literature Review

Authors: H. W. Meerveld, R. H. A. Lindelauf

Abstract:

In contemporary warfare, data science is crucial for the military in achieving information superiority. Yet, to the authors’ knowledge, no extensive literature survey on data science in military decision-making has been conducted so far. In this study, 156 peer-reviewed articles were analysed through an integrative, semi-systematic literature review to gain an overview of the topic. The study examined to what extent the literature focuses on the opportunities or the risks of data science in military decision-making, differentiated per level of war (i.e., the strategic, operational, and tactical levels). A relatively large focus on the risks of data science was observed in the social science literature, implying that political and military policymakers are disproportionately influenced by a pessimistic view of the application of data science in the military domain. The perceived risks of data science are, however, hardly addressed in the formal science literature. This means that the concerns about the military application of data science are not addressed to the audience that can actually develop and enhance data science models and algorithms. Cross-disciplinary research on both the opportunities and the risks of military data science can address the observed research gaps. Considering the levels of war, relatively little attention to the operational level, compared to the other two levels, was observed, suggesting a research gap with reference to military operational data science. Opportunities for military data science mostly arise at the tactical level; on the contrary, studies examining strategic issues mostly emphasise the risks of military data science. Consequently, domain-specific requirements for military strategic data science applications are hardly expressed, and the lack of such applications may ultimately lead to suboptimal strategic decisions in today's warfare.

Keywords: data science, decision-making, information superiority, literature review, military

Procedia PDF Downloads 157
24537 Legal Regulation of Personal Information Data Transmission Risk Assessment: A Case Study of the EU’s DPIA

Authors: Cai Qianyi

Abstract:

In the midst of the global digital revolution, cross-border data flows pose security threats that call China's existing legislative framework for protecting personal information into question. As a preliminary procedure for risk analysis and prevention, the risk assessment of personal data transmission lacks detailed supporting guidelines. Existing provisions reveal unclear responsibilities for network operators and weakened rights for data subjects; furthermore, the regulatory system's weak operability and a lack of industry self-regulation heighten data transmission hazards. This paper compares the regulatory pathways for data transmission risks in China and Europe from the perspective of legal framework and content. It draws on the EU's Data Protection Impact Assessment (DPIA) guidelines to empower multiple stakeholders, including data processors, controllers, and subjects, while also defining obligations. In conclusion, this paper aims to address China's digital security shortcomings by developing a more mature regulatory framework and industry self-regulation mechanisms, resulting in a win-win situation for personal data protection and the development of the digital economy.

Keywords: personal information data transmission, risk assessment, DPIA, internet service provider

Procedia PDF Downloads 55
24536 Wavelets' Contribution to Textual Data Analysis

Authors: Habiba Ben Abdessalem

Abstract:

The emergence of giant sets of textual data has encouraged researchers to invest in this field. The purpose of textual data analysis methods is to facilitate access to such data by providing various graphic visualizations. Applying these methods requires a corpus pretreatment step, whose standards are set according to the objective of the problem studied. This step determines the list of forms contained in the contingency table, keeping only those that carry information. It may, however, lead to noisy contingency tables, hence the use of a wavelet denoising function. The validity of the proposed approach is tested on a text database covering economic and political events in Tunisia over a well-defined period.
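
The denoising step might look as follows with PyWavelets (the wavelet family, decomposition level, and threshold here are illustrative choices, not the paper's settings): soft-threshold the detail coefficients of the contingency table and reconstruct.

```python
import numpy as np
import pywt

rng = np.random.default_rng(5)
table = rng.poisson(5.0, size=(64, 64)).astype(float)  # toy form/document counts

coeffs = pywt.wavedec2(table, "db2", level=2)
threshold = 1.0
denoised_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(d, threshold, mode="soft") for d in level)
    for level in coeffs[1:]
]
denoised = pywt.waverec2(denoised_coeffs, "db2")
print(denoised.shape)    # same grid, with small (noisy) coefficients shrunk
```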

Keywords: textual data, wavelet, denoising, contingency table

Procedia PDF Downloads 275
24535 Customer Churn Analysis in Telecommunication Industry Using Data Mining Approach

Authors: Burcu Oralhan, Zeki Oralhan, Nilsun Sariyer, Kumru Uyar

Abstract:

Data mining has become more and more important and has found a wide range of applications in recent years. Data mining is the process of finding hidden and unknown patterns in big data, and one of its applied fields is customer relationship management. Understanding the relationships between products and customers is crucial for every business. Customer relationship management is an approach that focuses on customer relationship development, retention, and increased customer satisfaction. In this study, we apply data mining methods to customer relationship management in the telecommunication industry. The study aims to determine the profile of customers who are likely to leave the system, and to develop marketing strategies and customized campaigns for them. The data are clustered, and classification techniques are applied to identify the churners. As a result of this study, we obtain knowledge from the international telecommunication industry and contribute to the understanding and development of this subject within customer relationship management.
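
A bare-bones sketch of such a churn-modelling pipeline (the feature names and churn rule below are hypothetical; the study's own variables are not listed in the abstract):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 3000
X = np.column_stack([
    rng.uniform(0, 60, n),        # tenure in months
    rng.poisson(3, n),            # support calls in the last quarter
    rng.uniform(0, 500, n),       # monthly usage in minutes
])
# Synthetic rule: new customers with many support calls tend to churn.
churn = (X[:, 1] > 4) & (X[:, 0] < 12) | (rng.random(n) < 0.05)

X_tr, X_te, y_tr, y_te = train_test_split(X, churn, test_size=0.25,
                                          random_state=0)
model = RandomForestClassifier(n_estimators=200).fit(X_tr, y_tr)
print("holdout accuracy:", round(model.score(X_te, y_te), 3))
at_risk = model.predict_proba(X_te)[:, 1] > 0.5   # customers to target
```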

Keywords: customer churn analysis, customer relationship management, data mining, telecommunication industry

Procedia PDF Downloads 311
24534 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis

Authors: N. R. N. Idris, S. Baharom

Abstract:

A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels. In this situation, both the IPD and AD should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the level of data on overall meta-analysis estimates based on IPD only, AD only, and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean-square error (RMSE), and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as it significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not provide a significant difference in the accuracy of the estimates. Additionally, combining IPD and AD moderates the bias of the treatment-effect estimates, as the IPD tends to overestimate the treatment effects, while the AD tends to produce underestimates. These results may provide some guidance in deciding whether a significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
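
A toy version of the pooling step (illustrative parameters, not the paper's simulation design): studies contributing IPD have their effects and variances re-estimated from raw patient data, AD studies enter with their reported summaries, and both are pooled with inverse-variance weights.

```python
import numpy as np

rng = np.random.default_rng(7)
true_effect, n_studies, n_per_arm = 0.5, 10, 80

estimates, variances = [], []
for k in range(n_studies):
    treat = rng.normal(true_effect, 1.0, n_per_arm)
    ctrl = rng.normal(0.0, 1.0, n_per_arm)
    est = treat.mean() - ctrl.mean()
    if k < 4:   # IPD available: variance re-estimated from raw patient data
        var = treat.var(ddof=1) / n_per_arm + ctrl.var(ddof=1) / n_per_arm
    else:       # AD only: take the published variance as reported
        var = 2.0 / n_per_arm
    estimates.append(est)
    variances.append(var)

w = 1.0 / np.array(variances)
pooled = np.sum(w * np.array(estimates)) / w.sum()
print(f"pooled effect (MD): {pooled:.3f}  (true value {true_effect})")
```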

Keywords: aggregate data, combined-level data, individual patient data, meta-analysis

Procedia PDF Downloads 371
24533 Analyzing On-Line Process Data for Industrial Production Quality Control

Authors: Hyun-Woo Cho

Abstract:

Monitoring of industrial production quality has to be implemented to give early warning of unusual operating conditions, and identification of their assignable causes is necessary for quality control purposes. Many multivariate statistical techniques have been applied to such tasks and shown to be quite effective tools. This work presents a process data-based monitoring scheme for production processes. For more reliable results, additional noise filtering and preprocessing steps are considered; these may enhance performance by eliminating unwanted variation in the data. The performance evaluation is executed using data sets from test processes, and the proposed method is shown to provide reliable quality control results in the example, making it more effective for quality monitoring. For practical implementation of the method, an on-line data system must be available to gather historical and on-line data. Large amounts of data are now collected on-line in most processes, so implementation of the current scheme is feasible and does not place additional burdens on users.
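
One common realization of such a multivariate monitoring scheme is a Hotelling T² control chart (the abstract does not name its exact method, so this is a generic illustration): estimate the in-control mean and covariance, then flag new samples whose T² statistic exceeds an empirical control limit.

```python
import numpy as np

rng = np.random.default_rng(8)
train = rng.normal(size=(500, 6))                # in-control process data
mean = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))

def t2(sample):
    d = sample - mean
    return float(d @ cov_inv @ d)                # Hotelling T² statistic

limit = np.percentile([t2(row) for row in train], 99)  # empirical 99% limit

normal_reading = rng.normal(size=6)
fault_reading = np.array([5.0, 0, 0, 0, 0, 0])   # one sensor shifted ~5 sd
print("normal flagged:", t2(normal_reading) > limit)   # usually False
print("fault flagged:", t2(fault_reading) > limit)     # True: T2 approx 25
```

The per-variable contributions to T² for a flagged sample can then be inspected to help identify the assignable cause.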

Keywords: detection, filtering, monitoring, process data

Procedia PDF Downloads 552
24532 A Review of Travel Data Collection Methods

Authors: Muhammad Awais Shafique, Eiji Hato

Abstract:

Household trip data are of crucial importance for managing present transportation infrastructure as well as for planning and designing future facilities. They also provide the basis for new policies implemented under transportation demand management. The methods used for household trip data collection have changed over time, starting with conventional face-to-face or paper-and-pencil interviews and reaching the recent approach of employing smartphones. This study summarizes the step-wise evolution of travel data collection methods, providing a comprehensive review of the topic for readers interested in the changing trends in the data collection field.

Keywords: computer, smartphone, telephone, travel survey

Procedia PDF Downloads 307