Search results for: data privacy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25434

24564 A Model Architecture Transformation with Approach by Modeling: From UML to Multidimensional Schemas of Data Warehouses

Authors: Ouzayr Rabhi, Ibtissam Arrassen

Abstract:

To provide a complete analysis of the organization and to support decision-making, leaders need relevant data; Data Warehouses (DW) are designed to meet such needs. However, designing a DW is not trivial, and there is no formal method for deriving a multidimensional schema from heterogeneous databases. In this article, we present a model-driven approach to the design of data warehouses. We describe a multidimensional meta-model and specify a set of transformations starting from a Unified Modeling Language (UML) metamodel. In this approach, the UML metamodel and the multidimensional one are both considered platform-independent models (PIM). The first meta-model is mapped into the second through transformation rules expressed in the Query/View/Transformation (QVT) language. The proposal is validated by applying our approach to generate a multidimensional schema for a Balanced Scorecard (BSC) DW. We are particularly interested in the BSC perspectives, which are closely linked to the vision and strategies of an organization.
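
Since the QVT rules themselves are not shown in the abstract, the following Python sketch only illustrates the kind of rule such a transformation applies: a toy UML class model (all names hypothetical) is mapped to a star schema by treating classes rich in numeric attributes as facts and their associated classes as dimensions. This is an illustrative analogue, not the authors' QVT mapping.

```python
# Toy UML class model (names hypothetical), transformed into a star schema
# by one simplified rule: numeric attributes become fact measures, and
# associated non-fact classes become dimension tables.
uml_model = {
    "Sale":     {"attributes": {"amount": "Numeric", "quantity": "Numeric"},
                 "associations": ["Customer", "Product"]},
    "Customer": {"attributes": {"name": "String", "city": "String"},
                 "associations": []},
    "Product":  {"attributes": {"label": "String", "category": "String"},
                 "associations": []},
}

def uml_to_star_schema(model):
    """Map classes with numeric attributes to fact tables whose measures
    are those attributes and whose dimensions are the associated classes."""
    schema = {"facts": {}, "dimensions": {}}
    for cls, spec in model.items():
        numeric = [a for a, t in spec["attributes"].items() if t == "Numeric"]
        if numeric:  # rule: numeric attributes become measures
            schema["facts"][cls] = {"measures": numeric,
                                    "dimensions": spec["associations"]}
        else:        # rule: remaining classes become dimension tables
            schema["dimensions"][cls] = {"levels": list(spec["attributes"])}
    return schema

print(uml_to_star_schema(uml_model))
```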

Keywords: data warehouse, meta-model, model-driven architecture, transformation, UML

Procedia PDF Downloads 161
24563 Secured Embedding of Patient’s Confidential Data in Electrocardiogram Using Chaotic Maps

Authors: Butta Singh

Abstract:

This paper presents a chaotic-map-based approach for the secure embedding of a patient's confidential data in an electrocardiogram (ECG) signal. The chaotic map generates predefined locations through the use of selective control parameters. The sample value difference method then hides the confidential data in ECG sample pairs at these predefined locations. Evaluation of the proposed method on all 48 records of the MIT-BIH arrhythmia ECG database demonstrates that the embedding does not alter the diagnostic features of the cover ECG. The imperceptibility of the secret data in the stego-ECG is evident from various statistical and clinical performance measures. The statistical metrics comprise the Percentage Root Mean Square Difference (PRD) and Peak Signal-to-Noise Ratio (PSNR). Further, a comparative analysis between the proposed method and existing approaches was performed; the results clearly demonstrate the superiority of the proposed method.
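
Two building blocks of this approach, key-dependent location generation with a chaotic (logistic) map and the PRD/PSNR distortion metrics, can be sketched as follows; the toy perturbation at the end merely stands in for the paper's sample-value-difference embedding, which is not reproduced here.

```python
import numpy as np

def logistic_map_locations(n_samples, n_locations, x0=0.7, r=3.99):
    """Generate pseudo-random, key-dependent embedding indices with the
    logistic map x_{k+1} = r * x_k * (1 - x_k); (x0, r) act as the key."""
    x, seen, locations = x0, set(), []
    while len(locations) < n_locations:
        x = r * x * (1 - x)
        idx = int(x * n_samples)
        if idx not in seen:        # keep locations distinct
            seen.add(idx)
            locations.append(idx)
    return locations

def prd(cover, stego):
    """Percentage Root-mean-square Difference between cover and stego ECG."""
    return 100 * np.sqrt(np.sum((cover - stego) ** 2) / np.sum(cover ** 2))

def psnr(cover, stego):
    """Peak Signal-to-Noise Ratio in dB."""
    mse = np.mean((cover - stego) ** 2)
    return 10 * np.log10(np.max(np.abs(cover)) ** 2 / mse)

ecg = np.sin(np.linspace(0, 20 * np.pi, 5000))     # stand-in for an ECG record
stego = ecg.copy()
for loc in logistic_map_locations(len(ecg), 64):
    stego[loc] += 1e-3                             # toy embedding perturbation
print(f"PRD = {prd(ecg, stego):.4f}%, PSNR = {psnr(ecg, stego):.1f} dB")
```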

Keywords: chaotic maps, ECG steganography, data embedding, electrocardiogram

Procedia PDF Downloads 198
24562 Detection of Efficient Enterprises via Data Envelopment Analysis

Authors: S. Turkan

Abstract:

In this paper, data on Turkey’s Top 500 Industrial Enterprises in 2014 were analyzed using data envelopment analysis (DEA). DEA is used to detect efficient decision-making units, such as universities, hospitals, and schools, on the basis of their inputs and outputs. The decision-making units in this study are enterprises. To detect efficient enterprises, selected financial ratios are used as inputs and outputs; for this purpose, financial indicators related to enterprise productivity are considered. Efficient enterprises with weighted foreign-owned capital are identified via the super-efficiency model. According to the results, Mercedes-Benz is the most efficient enterprise with weighted foreign-owned capital in Turkey.
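
A minimal sketch of the input-oriented CCR super-efficiency model, solved as one linear program per enterprise with SciPy; the financial-ratio data here are placeholders, not the study's inputs and outputs.

```python
import numpy as np
from scipy.optimize import linprog

def super_efficiency(X, Y):
    """Input-oriented CCR super-efficiency: each DMU is scored against a
    frontier built from all *other* DMUs (envelopment form, one LP each).
    X: inputs (n_dmu x m), Y: outputs (n_dmu x s)."""
    n = X.shape[0]
    scores = []
    for o in range(n):
        others = [j for j in range(n) if j != o]
        Xo, Yo = X[others].T, Y[others].T          # m x (n-1), s x (n-1)
        # variables: [theta, lambda_1..lambda_{n-1}]; minimize theta
        c = np.r_[1.0, np.zeros(n - 1)]
        # inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[o][:, None], Xo])
        # outputs: -sum_j lambda_j * y_rj <= -y_ro
        A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Yo])
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                      bounds=[(0, None)] * n)
        scores.append(res.fun if res.success else np.nan)
    return np.array(scores)

# toy data: 4 enterprises, 2 financial-ratio inputs, 1 output
X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0], [5.0, 4.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
print(super_efficiency(X, Y))   # scores > 1 indicate super-efficient DMUs
```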

Keywords: data envelopment analysis, super efficiency, logistic regression, financial ratios

Procedia PDF Downloads 326
24561 Intelligent Process Data Mining for the Monitoring of Fault-Free Operation of Industrial Processes

Authors: Hyun-Woo Cho

Abstract:

Real-time fault monitoring and diagnosis of large-scale production processes is necessary for operating industrial processes safely and efficiently while maintaining good final product quality. Unusual and abnormal events may have a serious impact on the process, such as malfunctions or breakdowns. This work utilizes process measurement data obtained on-line for the safe and fault-free operation of industrial processes. To this end, the proposed intelligent process data monitoring framework was evaluated on a simulated process. The monitoring scheme extracts the fault pattern in a reduced space for reliable data representation. Moreover, this work compares linear and nonlinear techniques for the monitoring task. The nonlinear technique produced more reliable monitoring results and outperformed the linear methods. The adoption of the qualitative monitoring model helps reduce the sensitivity of the fault pattern to noise.
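
The abstract does not name the exact techniques, but a common linear instance of such a reduced-space monitoring scheme is PCA with a Hotelling's T² control limit, sketched below under that assumption; a nonlinear variant would substitute, e.g., kernel PCA for the projection step.

```python
import numpy as np
from sklearn.decomposition import PCA

# Project data into a reduced space with PCA, then flag samples whose
# Hotelling's T^2 statistic exceeds an empirical control limit.
rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 10))                 # in-control training data
faulty = normal[:50] + np.r_[3.0, np.zeros(9)]      # shift in the first variable

pca = PCA(n_components=3).fit(normal)

def t2(model, X):
    """Hotelling's T^2 in the reduced space: squared scores scaled by
    the variance each principal component explains."""
    scores = model.transform(X)
    return np.sum(scores ** 2 / model.explained_variance_, axis=1)

limit = np.percentile(t2(pca, normal), 99)          # 99% control limit
alarms = t2(pca, faulty) > limit
print(f"fault detection rate: {alarms.mean():.2%}")
```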

Keywords: process data, data mining, process operation, real-time monitoring

Procedia PDF Downloads 640
24560 Statistically Accurate Synthetic Data Generation for Enhanced Traffic Predictive Modeling Using Generative Adversarial Networks and Long Short-Term Memory

Authors: Srinivas Peri, Siva Abhishek Sirivella, Tejaswini Kallakuri, Uzair Ahmad

Abstract:

Effective traffic management and infrastructure planning are crucial for the development of smart cities and intelligent transportation systems. This study addresses the challenge of data scarcity by generating realistic synthetic traffic data using the PeMS-Bay dataset, improving the accuracy and reliability of predictive modeling. Advanced synthetic data generation techniques, including TimeGAN, GaussianCopula, and PAR Synthesizer, are employed to produce synthetic data that replicates the statistical and structural characteristics of real-world traffic. Future integration of Spatial-Temporal Generative Adversarial Networks (ST-GAN) is planned to capture both spatial and temporal correlations, further improving data quality and realism. The performance of each synthetic data generation model is evaluated against real-world data to identify the best models for accurately replicating traffic patterns. Long Short-Term Memory (LSTM) networks are utilized to model and predict complex temporal dependencies within traffic patterns. This comprehensive approach aims to pinpoint areas with low vehicle counts, uncover underlying traffic issues, and inform targeted infrastructure interventions. By combining GAN-based synthetic data generation with LSTM-based traffic modeling, this study supports data-driven decision-making that enhances urban mobility, safety, and the overall efficiency of city planning initiatives.
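
A minimal sketch of the LSTM forecasting stage described above (the window length and layer sizes are illustrative, not the study's configuration); the GAN stage would supply additional synthetic windows in the same format.

```python
import torch
import torch.nn as nn

class TrafficLSTM(nn.Module):
    """Given a window of past traffic counts, predict the next time step."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict the next count

model = TrafficLSTM()
x = torch.randn(8, 12, 1)                 # 8 windows of 12 past observations
loss = nn.MSELoss()(model(x), torch.randn(8, 1))
loss.backward()                            # one illustrative training step
print(float(loss))
```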

Keywords: GAN, long short-term memory, synthetic data generation, traffic management

Procedia PDF Downloads 29
24559 A Machine Learning Approach for the Leakage Classification in the Hydraulic Final Test

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

The widespread use of machine learning applications in production is significantly accelerated by improved computing power and increasing data availability. Predictive quality enables the assurance of product quality by using machine learning models as a basis for decisions on test results. The use of real Bosch production data based on geometric gauge blocks from machining, mating data from assembly and hydraulic measurement data from final testing of directional valves is a promising approach to classifying the quality characteristics of workpieces.
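
A hedged sketch of the predictive-quality idea: train a supervised classifier on workpiece features and decide pass/leak from it. The synthetic features and model choice below are placeholders, not the Bosch setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder features stand in for gauge-block, mating, and hydraulic
# measurements; leaks are modeled as the rare class.
X, y = make_classification(n_samples=1000, n_features=12, weights=[0.9],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(f"holdout accuracy: {clf.score(X_te, y_te):.3f}")
```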

Keywords: machine learning, classification, predictive quality, hydraulics, supervised learning

Procedia PDF Downloads 214
24558 Analysis of Cyber Activities of Potential Business Customers Using Neo4j Graph Databases

Authors: Suglo Tohari Luri

Abstract:

Data analysis is an important aspect of business performance. With the application of artificial intelligence within databases, selecting a suitable database engine for an application design is also crucial for business data analysis. The application of business intelligence (BI) software to graph databases such as Neo4j has proved highly effective for customer data analysis. Yet a major concern is that not all business organizations have BI software to apply to customer data analysis, and those that do often lack personnel with the expertise to use it effectively with the Neo4j database. The purpose of this research is to demonstrate how Neo4j program code alone can be applied to analyze customer visits to an e-commerce website. Because the Neo4j database engine is optimized for handling and managing data relationships, with the capability of building high-performance, scalable systems for connected data nodes, business owners who advertise their products on websites backed by Neo4j can determine the number of visitors and which products are visited at routine intervals, supporting the necessary decision making. It also helps identify the best customer segments for specific goods, so that more emphasis can be placed on advertising them on the said websites.
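
The kind of Cypher analysis described can be issued from Python with the official Neo4j driver; the node labels, relationship type, and connection details below are hypothetical, not the study's data model.

```python
from neo4j import GraphDatabase

# Hypothetical graph: (:Customer)-[:VISITED]->(:Product); connection
# details are placeholders for a local Neo4j instance.
driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

VISITS_PER_PRODUCT = """
MATCH (c:Customer)-[v:VISITED]->(p:Product)
RETURN p.name AS product, count(v) AS visits
ORDER BY visits DESC
"""

with driver.session() as session:
    for record in session.run(VISITS_PER_PRODUCT):
        print(record["product"], record["visits"])
driver.close()
```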

Keywords: data, engine, intelligence, customer, neo4j, database

Procedia PDF Downloads 194
24557 Decision Making System for Clinical Datasets

Authors: P. Bharathiraja

Abstract:

Computer-aided decision making systems are used to enhance the diagnosis and prognosis of diseases and to assist clinicians and junior doctors in clinical decision making. Medical data used for decision making should be definite and consistent. Data mining and soft computing techniques are used for cleaning the data and for incorporating human reasoning into decision making systems. Fuzzy rule-based inference can be used for classification in order to incorporate human reasoning into the decision making process. In this work, missing values are imputed using the mean or mode of the attribute. The data are normalized using min-max normalization to improve the design and efficiency of the fuzzy inference system. The fuzzy inference system is used to handle the uncertainties that exist in the medical data. Equal-width partitioning is used to partition the attribute values into appropriate fuzzy intervals. Fuzzy rules are generated using a class-based associative rule mining algorithm. The system is trained and tested using the heart disease data set from the University of California at Irvine (UCI) Machine Learning Repository. The data were split into training and testing sets using a hold-out approach. From the experimental results, it can be inferred that classification using the fuzzy inference system performs better than trivial IF-THEN rule-based classification approaches. Furthermore, it is observed that the use of fuzzy logic and the fuzzy inference mechanism handles uncertainty and resembles human decision making. The system can be used in the absence of a clinical expert to assist junior doctors and clinicians in clinical decision making.
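
The preprocessing steps named above (mean imputation, min-max normalization, equal-width partitioning) can be sketched as follows; the cholesterol values are a toy stand-in for a UCI heart disease attribute.

```python
import numpy as np

def min_max(x):
    """Min-max normalization to [0, 1], applied before fuzzification."""
    return (x - x.min()) / (x.max() - x.min())

def equal_width_intervals(x, k=3):
    """Partition the normalized range into k equal-width fuzzy intervals,
    returning the interval edges (e.g., low/medium/high for k=3)."""
    return np.linspace(x.min(), x.max(), k + 1)

# illustrative attribute with a missing value, imputed with the mean
chol = np.array([233.0, 286.0, np.nan, 199.0, 250.0])
chol = np.where(np.isnan(chol), np.nanmean(chol), chol)
z = min_max(chol)
print(z, equal_width_intervals(z, k=3))
```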

Keywords: decision making, data mining, normalization, fuzzy rule, classification

Procedia PDF Downloads 518
24556 Estimating Bridge Deterioration for Small Data Sets Using Regression and Markov Models

Authors: Yina F. Muñoz, Alexander Paz, Hanns De La Fuente-Mella, Joaquin V. Fariña, Guilherme M. Sales

Abstract:

The primary approaches for estimating bridge deterioration use Markov-chain models and regression analysis. Traditional Markov models have problems estimating the required transition probabilities when a small sample size is used. Often, reliable bridge data have not been collected over long periods; thus, large data sets may not be available. This study presents an important change to the traditional approach by using the Small Data Method to estimate transition probabilities. The results illustrate that the Small Data Method and the traditional approach provide similar estimates; however, the former provides results that are more conservative. That is, the Small Data Method provided slightly lower bridge condition ratings than the traditional approach. Considering that bridges are critical infrastructure, the Small Data Method, which uses more information and provides more conservative estimates, may be more appropriate when the available sample size is small. In addition, regression analysis was used to calculate bridge deterioration. Condition ratings were determined for bridge groups, and the best regression model was selected for each group. The results were very similar to those obtained using Markov chains; however, more data are desirable for better results.
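
A minimal sketch of the Markov-chain side of the approach: estimate a transition matrix from observed year-to-year condition-rating pairs and propagate a condition distribution. With few observations the rows are poorly estimated, which is precisely the small-sample problem the Small Data Method addresses; the rating sequence here is illustrative.

```python
import numpy as np

ratings = [9, 9, 8, 8, 8, 7, 7, 6]          # one bridge's yearly ratings
states = sorted(set(ratings), reverse=True)  # e.g. [9, 8, 7, 6]
idx = {s: i for i, s in enumerate(states)}

counts = np.zeros((len(states), len(states)))
for a, b in zip(ratings, ratings[1:]):       # count observed transitions
    counts[idx[a], idx[b]] += 1
counts[idx[states[-1]], idx[states[-1]]] += 1  # treat worst state as absorbing
P = counts / counts.sum(axis=1, keepdims=True)  # row-normalize to probabilities

dist = np.eye(len(states))[0]                # start in the best condition
print(dist @ np.linalg.matrix_power(P, 5))   # distribution after 5 years
```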

Keywords: concrete bridges, deterioration, Markov chains, probability matrix

Procedia PDF Downloads 337
24555 Tailoring Workspaces for Generation Z: Harmonizing Teamwork, Privacy, and Connectivity

Authors: Maayan Nakash

Abstract:

The modern workplace is undergoing a revolution, with Generation Z (Gen-Z) at the forefront of this transformative shift. However, empirical investigations specifically targeting the workplace preferences of this generation remain limited. Through direct examination of their tendencies via a survey approach, this study offers vital insights for aligning organizational policies and practices. The results presented in this paper are part of a comprehensive study that explored Gen-Z's viewpoints on various aspects of the employment market that are likely to decisively influence the design of future work environments. Data were collected via an online survey distributed among a cohort of 461 individuals from Gen-Z, born between the mid-1990s and 2010, consisting of 241 males (52.28%) and 220 females (47.72%). Responses were gauged using Likert-scale statements that probed preferences for teamwork versus individual work, virtual versus in-person interactions, and open versus private workspaces. Descriptive statistics and inferential analyses were conducted to pinpoint key patterns. We discovered that a high proportion of respondents (81.99%, n=378) exhibited a preference for teamwork over individual work. Correspondingly, the data indicate strong support for the recognition of team-based tasks as a tool contributing to personal and professional development. In terms of communication, the majority of respondents (61.38%) either disagreed (n=154) or only slightly agreed (n=129) with exclusive reliance on virtual interactions with their organizational peers. This finding underscores that despite technological progress, digital natives place significant value on physical interaction and non-mediated communication. Moreover, respondents also value a quiet and private work environment, clearly preferring it over open and shared workspaces. Considering that Gen-Z does not necessarily experience high levels of stress within social frameworks in the workplace, this can be attributed to a desire for a space that allows focused engagement with work tasks. A one-sample chi-square test was performed on the observed distribution of respondents' reactions to each examined statement. The results showed statistically significant deviations from a uniform distribution (p<.001), indicating that the response patterns did not occur by chance and that there were meaningful tendencies in the participants' responses. The findings expand the theoretical knowledge base on human resources in the dynamics of a multi-generational workforce, illuminating the values, approaches, and expectations of Gen-Z. Practically, the results may lead organizations to equip themselves with tools to create policies tailored to Gen-Z in the context of workspaces and social needs, which could foster a fertile environment and aid in attracting and retaining young talent. Future studies might investigate potential mitigating factors, such as cultural influences or individual personality traits, which could further clarify the nuances in Gen-Z's work-style preferences. Longitudinal studies tracking changes in these preferences as the generation matures may also yield valuable insights. Ultimately, as the landscape of the workforce continues to evolve, ongoing investigation into the unique characteristics and aspirations of emerging generations remains essential for nurturing harmonious, productive, and future-ready organizational environments.
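
The one-sample chi-square test reported above can be reproduced in outline with SciPy; the two-category teamwork split used here is a simplification of the full Likert-scale distribution, which the study tested per statement.

```python
from scipy.stats import chisquare

# Observed teamwork-preference split (378 of 461) tested against a
# uniform null; chisquare defaults to equal expected frequencies.
observed = [378, 461 - 378]
result = chisquare(observed)
print(result.statistic, result.pvalue)
```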

Keywords: workplace, future of work, generation Z, digital natives, human resources management

Procedia PDF Downloads 53
24554 Validation of Visibility Data from Road Weather Information Systems by Comparing Three Data Resources: Case Study in Ohio

Authors: Fan Ye

Abstract:

Adverse weather conditions, particularly those with low visibility, are critical to driving tasks. However, the direct relationship between visibility distance and traffic flow/roadway safety is uncertain due to the limited availability of visibility data. The recent growth in deployments of Road Weather Information Systems (RWIS) makes segment-specific visibility information available, which can be integrated with other Intelligent Transportation Systems, such as automated warning systems and variable speed limits, to improve mobility and safety. Before applying RWIS visibility measurements in traffic studies and operations, it is critical to validate the data. Therefore, an attempt was made in this paper to examine the validity and viability of RWIS visibility data by comparing visibility measurements among RWIS, airport weather stations, and the weather information recorded by police in crash reports, based on Ohio data. The results indicated that RWIS visibility measurements were significantly different from airport visibility data in Ohio, but no conclusion regarding the reliability of RWIS visibility could be drawn, given the absence of verified ground truth in the comparisons. It is suggested that more objective methods are needed to validate RWIS visibility measurements, such as continuous in-field measurements of various weather events using calibrated visibility sensors.

Keywords: RWIS, visibility distance, low visibility, adverse weather

Procedia PDF Downloads 252
24553 Design and Simulation of All Optical Fiber to the Home Network

Authors: Rahul Malhotra

Abstract:

Fiber-based access networks can deliver performance that supports the increasing demand for high-speed connections. One of the new technologies that has emerged in recent years is the Passive Optical Network (PON). This paper demonstrates the simultaneous delivery of triple-play services (data, voice, and video). A comparative investigation of the suitability of various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases due to the increase in bit error rate (BER).

Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT

Procedia PDF Downloads 558
24552 Troubleshooting Petroleum Equipment Using Wireless Sensors Based on a Bayesian Algorithm

Authors: Vahid Bayrami Rad

Abstract:

In this research, common methods and techniques have been investigated with a focus on intelligent fault-finding and monitoring systems in the oil industry. Remote and intelligent control methods are considered a necessity for implementing various operations in the oil industry, and benefiting from the knowledge extracted from the vast data generated, with the help of data mining algorithms, is a viable way to speed up monitoring and troubleshooting in today's big oil companies. Therefore, by comparing data mining algorithms and examining their efficiency, structure, and behavior under different conditions, the proposed Bayesian algorithm, which uses data clustering together with analysis and evaluation via a colored Petri net, provides an applicable and dynamic model in terms of reliability and response time. By using this method, it is possible to achieve a dynamic and consistent model of the remote control system, prevent the occurrence of leakage in oil pipelines and refineries, and reduce costs as well as human and financial errors. The statistical data obtained from the evaluation process show an increase in reliability, availability, and speed for the proposed method compared to previous methods.

Keywords: wireless sensors, petroleum equipment troubleshooting, Bayesian algorithm, colored Petri net, RapidMiner, data mining, reliability

Procedia PDF Downloads 67
24551 The Posthuman Condition and a Translational Ethics of Entanglement

Authors: Shabnam Naderi

Abstract:

Traditional understandings of ethics considered translators, translations, technologies, and other agents as separate, and prioritized human agents; in fact, ethics was equated with morality. This disengaged understanding of ethics is superseded by an ethics of relation/entanglement in posthuman philosophy. According to this ethics of entanglement, human and nonhuman agents are in constant ‘intra-action’. The human is not separate from nature, from technology, or from other nonhuman entities, and an ethics of translation in this regard cannot be separated from technology and ecology or be defined merely within the realm of human-human encounter. As such, a posthuman ethics offers opportunities for change and responds to the changing nature of reality; it is negotiable and reveals itself as a moment-by-moment practice (i.e., as temporally emergent and beyond determinacy and permanence). Far from linguistic, cultural, or individual concerns, posthuman translational ethics discusses how formerly rigid norms and laws are challenged in a process ontology that emphasizes activity and activation and considers ethics as surfacing in activity, not as a predefined set of rules and values. In this sense, traditional ethical principles like faithfulness, accuracy, and representation are superseded by principles of privacy, sustainability, multiplicity, and decentralization. The present conceptual study, drawing on Ferrando’s philosophical posthumanism (as a post-humanism, a post-dualism, and a post-anthropocentrism), Deleuze-Guattarian philosophy of immanence, and Barad’s physics-philosophy, strives to destabilize traditional understandings of translation ethics and bring into the picture an ethics that has loose ends and revolves around multiplicity and decentralization.

Keywords: ethics of entanglement, post-anthropocentrism, post-dualism, post-humanism, translation

Procedia PDF Downloads 77
24550 Wage Differentiation Patterns of Households Revisited for Turkey in Same Industry Employment: A Pseudo-Panel Approach

Authors: Yasin Kutuk, Bengi Yanik Ilhan

Abstract:

Previous studies investigate wage differentiation among regions in Turkey between couples who work in the same industry and those who work in different industries, using models appropriate for cross-sectional data. However, since no panel data are available for this investigation in Turkey, pseudo panels built from the repeated cross-section data sets of the Household Labor Force Surveys 2004-2014 are employed, opening a new way to examine wage differentiation patterns. For this purpose, household heads are separated into groups with respect to their household composition. Group membership, defined by fixed characteristics such as age group, education, gender, and NUTS1 region (12 regions), is assumed to be fixed over time, so the average behavior of each group can be tracked over time just as in panel data. Estimates using pseudo-panel data are consistent with estimates using genuine panel data on individuals, provided the samples are representative of a population with fixed composition and characteristics. Controlling for socioeconomic factors, wage differentiation of household income is found to be affected by the social, cultural, and economic changes that followed the global economic crisis that emerged in the US. The analysis also reveals whether wage differentiation is changing among birth cohorts.
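
Pseudo-panel construction reduces to grouping repeated cross-sections into cohort-year cells and averaging; a minimal pandas sketch follows (the column names and synthetic survey values are hypothetical, not the HLFS variables).

```python
import numpy as np
import pandas as pd

# Synthetic repeated cross-sections: each row is one surveyed household
# head; cohorts are defined by fixed characteristics.
rng = np.random.default_rng(0)
surveys = pd.DataFrame({
    "year":      rng.choice(np.arange(2004, 2015), size=5000),
    "age_group": rng.choice(["25-34", "35-44", "45-54"], size=5000),
    "region":    rng.choice([f"TR{i}" for i in range(1, 13)], size=5000),
    "log_wage":  rng.normal(3.0, 0.5, size=5000),
})

# Collapse to cohort-year cell means, which are then tracked over time
# like panel units.
pseudo_panel = (surveys
                .groupby(["age_group", "region", "year"], as_index=False)
                ["log_wage"].mean())
print(pseudo_panel.head())
```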

Keywords: wage income, same industry, pseudo panel, panel data econometrics

Procedia PDF Downloads 399
24549 A New Approach for Improving Accuracy of Multi Label Stream Data

Authors: Kunal Shah, Swati Patel

Abstract:

Many real-world problems involve data that can be considered as multi-label data streams. Efficient methods exist for multi-label classification in non-streaming scenarios. However, learning in evolving streaming scenarios is more challenging, as the learner must be able to adapt to change using limited time and memory. Classification is used to predict the class of an unseen instance as accurately as possible. Multi-label classification is a variant of single-label classification in which a set of labels is associated with a single instance. Multi-label classification is used by modern applications such as text classification, functional genomics, image classification, and music categorization. This paper introduces the task of multi-label classification, methods for multi-label classification, and evaluation measures for multi-label classification. A comparative analysis of multi-label classification methods was also carried out, first on the basis of theoretical study and then through simulation on various data sets.
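
Binary relevance, the simplest of the methods referenced in the keywords, trains one independent binary classifier per label; a non-streaming sketch is below (streaming variants would swap in incremental learners updated per instance).

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

# MultiOutputClassifier fits one binary LogisticRegression per label,
# i.e., the binary relevance decomposition.
X, Y = make_multilabel_classification(n_samples=300, n_labels=3,
                                      random_state=0)
br = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(br.predict(X[:2]))        # one 0/1 prediction per label
```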

Keywords: binary relevance, concept drift, data stream mining, MLSC, multiple window with buffer

Procedia PDF Downloads 586
24548 Secure Cryptographic Operations on SIM Card for Mobile Financial Services

Authors: Kerem Ok, Serafettin Senturk, Serdar Aktas, Cem Cevikbas

Abstract:

Mobile technology is very popular nowadays, and it provides a digital world where users can experience many value-added services. Service providers are eager to offer diverse value-added services to users, such as digital identity, mobile financial services, and so on. In this context, the security of data storage on smartphones and the security of communication between the smartphone and the service provider are critical for the success of these services. In order to provide the required security functions, the SIM card is one acceptable alternative. Since SIM cards include a Secure Element, they are able to store sensitive data, create cryptographically secure keys, and encrypt and decrypt data. In this paper, we design and implement a SIM and smartphone framework that uses a SIM card for secure key generation, key storage, data encryption, data decryption, and digital signing for mobile financial services. Our framework shows that the SIM card can be used as a controlled Secure Element to provide the required security functions for popular e-services such as mobile financial services.
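
A desktop sketch of the cryptographic operations the framework delegates to the SIM: on a real card these run inside the Secure Element and the private key never leaves it; the Python `cryptography` package here only illustrates the operations themselves (key generation, signing, verification), not the SIM applet interface.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Key generation and signing: on a real SIM these happen on-card.
private_key = ec.generate_private_key(ec.SECP256R1())
message = b"transfer 100 EUR to account X"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# The service provider verifies with the public key; verify() raises
# InvalidSignature if the check fails.
private_key.public_key().verify(signature, message,
                                ec.ECDSA(hashes.SHA256()))
print("signature verified")
```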

Keywords: SIM card, mobile financial services, cryptography, secure data storage

Procedia PDF Downloads 312
24547 Synthetic Data-Driven Prediction Using GANs and LSTMs for Smart Traffic Management

Authors: Srinivas Peri, Siva Abhishek Sirivella, Tejaswini Kallakuri, Uzair Ahmad

Abstract:

Smart cities and intelligent transportation systems rely heavily on effective traffic management and infrastructure planning. This research tackles the data scarcity challenge by generating realistic synthetic traffic data from the PeMS-Bay dataset, enhancing predictive modeling accuracy and reliability. Advanced techniques like TimeGAN and GaussianCopula are utilized to create synthetic data that mimics the statistical and structural characteristics of real-world traffic. The future integration of Spatial-Temporal Generative Adversarial Networks (ST-GAN) is anticipated to capture both spatial and temporal correlations, further improving data quality and realism. Each synthetic data generation model's performance is evaluated against real-world data to identify the most effective models for accurately replicating traffic patterns. Long Short-Term Memory (LSTM) networks are employed to model and predict complex temporal dependencies within traffic patterns. This holistic approach aims to identify areas with low vehicle counts, reveal underlying traffic issues, and guide targeted infrastructure interventions. By combining GAN-based synthetic data generation with LSTM-based traffic modeling, this study facilitates data-driven decision-making that improves urban mobility, safety, and the overall efficiency of city planning initiatives.

Keywords: GAN, long short-term memory (LSTM), synthetic data generation, traffic management

Procedia PDF Downloads 15
24546 Machine Learning Facing Behavioral Noise Problem in an Imbalanced Data Using One Side Behavioral Noise Reduction: Application to a Fraud Detection

Authors: Salma El Hajjami, Jamal Malki, Alain Bouju, Mohammed Berrada

Abstract:

With the expansion of machine learning and data mining in the context of Big Data analytics, a common problem that affects data is class imbalance. It refers to an imbalanced distribution of instances belonging to each class. This problem is present in many real-world applications such as fraud detection, network intrusion detection, and medical diagnostics. In these cases, data instances labeled negatively are significantly more numerous than the instances labeled positively. When this difference is too large, the learning system may face difficulty tackling the problem, since it is initially designed to work in relatively balanced class distribution scenarios. Another important problem, which usually accompanies imbalanced data, is the overlap of instances between the two classes, commonly referred to as noise or overlapping data. In this article, we propose an approach called One Side Behavioral Noise Reduction (OSBNR). This approach presents a way to deal with the problem of class imbalance in the presence of a high noise level. OSBNR is based on two steps. First, a cluster analysis is applied to group similar instances from the minority class into several behavior clusters. Second, we select and eliminate the instances of the majority class, considered behavioral noise, which overlap with the behavior clusters of the minority class. The results of experiments carried out on a representative public dataset confirm that the proposed approach is efficient for the treatment of class imbalance in the presence of noise.
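
A hedged sketch of the two OSBNR steps, clustering the minority class and discarding overlapping majority instances; the cluster count and the radius-based overlap rule are illustrative choices, not necessarily the paper's exact ones.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification

# Imbalanced data with label noise to create class overlap.
X, y = make_classification(n_samples=2000, weights=[0.95], flip_y=0.05,
                           random_state=0)
minority, majority = X[y == 1], X[y == 0]

# Step 1: group minority instances into behavior clusters.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(minority)
radii = np.array([np.max(np.linalg.norm(minority[km.labels_ == c]
                                        - km.cluster_centers_[c], axis=1))
                  for c in range(km.n_clusters)])

# Step 2: drop majority instances falling inside any cluster's radius,
# treating them as behavioral noise.
dist = np.linalg.norm(majority[:, None, :] - km.cluster_centers_[None],
                      axis=2)
noise = (dist <= radii).any(axis=1)
X_clean = np.vstack([majority[~noise], minority])
print(f"removed {noise.sum()} overlapping majority instances")
```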

Keywords: machine learning, imbalanced data, data mining, big data

Procedia PDF Downloads 132
24545 Automatic Detection of Traffic Stop Locations Using GPS Data

Authors: Areej Salaymeh, Loren Schwiebert, Stephen Remias, Jonathan Waddell

Abstract:

Extracting information from new data sources has emerged as a crucial task in many traffic planning processes, such as identifying traffic patterns, route planning, traffic forecasting, and locating infrastructure improvements. Given the advanced technologies used to collect Global Positioning System (GPS) data from dedicated GPS devices, GPS-equipped phones, and navigation tools, intelligent data analysis methodologies are necessary to mine this raw data. In this research, an automatic detection framework is proposed to help identify and classify the locations of stopped GPS waypoints into two main categories: signalized intersections or highway congestion. The Delaunay triangulation is used to perform this assessment in the clustering phase. While most existing clustering algorithms require assumptions about the data distribution, the effectiveness of the Delaunay triangulation lies in triangulating geographical data points without such assumptions. Our proposed method starts by cleaning noise from the data and normalizing it. Next, the framework identifies stoppage points by calculating the traveled distance. The last step is to use clustering to form groups of waypoints for signalized traffic and highway congestion. A binary classifier, which uses the length of a cluster, is then applied to distinguish highway congestion from signalized stop points. The proposed framework identifies stop positions and congestion points correctly in around 99.2% of trials, showing that it is possible, using limited GPS data, to distinguish the two categories with high accuracy.
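
Delaunay-based clustering of stopped waypoints can be sketched as: triangulate the points, prune edges longer than a distance threshold, and read clusters off as connected components. The threshold and the synthetic stop groups below are illustrative.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(c, 0.01, size=(30, 2))
                 for c in [(0, 0), (1, 1), (2, 0)]])   # three stop groups

tri = Delaunay(pts)
edges = set()
for simplex in tri.simplices:                # collect unique triangle edges
    for a, b in [(0, 1), (1, 2), (0, 2)]:
        edges.add(tuple(sorted((simplex[a], simplex[b]))))

# prune long edges, then clusters = connected components of what remains
i, j = zip(*[e for e in edges
             if np.linalg.norm(pts[e[0]] - pts[e[1]]) < 0.1])
graph = coo_matrix((np.ones(len(i)), (i, j)), shape=(len(pts),) * 2)
n_clusters, labels = connected_components(graph, directed=False)
print(f"{n_clusters} stop clusters found")
```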

Keywords: Delaunay triangulation, clustering, intelligent transportation systems, GPS data

Procedia PDF Downloads 276
24544 Audit and Assurance Program for AI-Based Technologies

Authors: Beatrice Arthur

Abstract:

The rapid development of artificial intelligence (AI) has transformed various industries, enabling faster and more accurate decision-making processes. However, with these advancements come increased risks, including data privacy issues, systemic biases, and challenges related to transparency and accountability. As AI technologies become more integrated into business processes, there is a growing need for comprehensive auditing and assurance frameworks to manage these risks and ensure ethical use. This paper provides a literature review on AI auditing and assurance programs, highlighting the importance of adapting traditional audit methodologies to the complexities of AI-driven systems. Objective: The objective of this review is to explore current AI audit practices and their role in mitigating risks, ensuring accountability, and fostering trust in AI systems. The study aims to provide a structured framework for developing audit programs tailored to AI technologies while also investigating how AI impacts governance, risk management, and regulatory compliance in various sectors. Methodology: This research synthesizes findings from academic publications and industry reports from 2014 to 2024, focusing on the intersection of AI technologies and IT assurance practices. The study employs a qualitative review of existing audit methodologies and frameworks, particularly the COBIT 2019 framework, to understand how audit processes can be aligned with AI governance and compliance standards. The review also considers real-time auditing as an emerging necessity for influencing AI system design during early development stages. Outcomes: Preliminary findings indicate that while AI auditing is still in its infancy, it is rapidly gaining traction as both a risk management strategy and a potential driver of business innovation. Auditors are increasingly being called upon to develop controls that address the ethical and operational risks posed by AI systems. The study highlights the need for continuous monitoring and adaptable audit techniques to handle the dynamic nature of AI technologies. Future Directions: Future research will explore the development of AI-specific audit tools and real-time auditing capabilities that can keep pace with evolving technologies. There is also a need for cross-industry collaboration to establish universal standards for AI auditing, particularly in high-risk sectors like healthcare and finance. Further work will involve engaging with industry practitioners and policymakers to refine the proposed governance and audit frameworks. Funding/Support Acknowledgements: This research is supported by the Information Systems Assurance Management Program at Concordia University of Edmonton.

Keywords: AI auditing, assurance, risk management, governance, COBIT 2019, transparency, accountability, machine learning, compliance

Procedia PDF Downloads 26
24543 DLtrace: Toward Understanding and Testing Deep Learning Information Flow in Deep Learning-Based Android Apps

Authors: Jie Zhang, Qianyu Guo, Tieyi Zhang, Zhiyong Feng, Xiaohong Li

Abstract:

With the widespread popularity of mobile devices and the development of artificial intelligence (AI), deep learning (DL) has been extensively applied in Android apps. Compared with traditional Android apps (traditional apps), deep-learning-based Android apps (DL-based apps) need to use more third-party application programming interfaces (APIs) to complete complex DL inference tasks. However, existing methods (e.g., FlowDroid) for detecting sensitive information leakage in Android apps cannot be directly applied to DL-based apps, as they have difficulty detecting third-party APIs. To solve this problem, we design DLtrace, a new static information flow analysis tool that can effectively recognize third-party APIs. With our proposed trace and detection algorithms, DLtrace can also efficiently detect privacy leaks caused by sensitive APIs in DL-based apps. Moreover, using DLtrace, we summarize the non-sequential characteristics of DL inference tasks in DL-based apps and the specific functionalities provided by DL models for such apps. We propose two formal definitions to deal with the common polymorphism and anonymous inner-class problems in Android static analyzers. We conducted an empirical assessment with DLtrace on 208 popular DL-based apps in the wild and found that 26.0% of the apps suffered from sensitive information leakage. Furthermore, DLtrace has more robust performance than FlowDroid in detecting and identifying third-party APIs. The experimental results demonstrate that DLtrace extends FlowDroid in understanding DL-based apps and detecting security issues therein.

Keywords: mobile computing, deep learning apps, sensitive information, static analysis

Procedia PDF Downloads 180
24542 Analysis of Sediment Distribution around Karang Sela Coral Reef Using Multibeam Backscatter

Authors: Razak Zakariya, Fazliana Mustajap, Lenny Sharinee Sakai

Abstract:

A sediment map is quite important in the marine environment; the sediment itself contains a wealth of information that can be used for other research. This study was conducted using a Reson T20 multibeam echo sounder on 15 August 2020 at Karang Sela (a coral reef area) at Pulau Bidong. The study aims to identify the sediment types around the coral reef by using bathymetry and backscatter data. Sediment samples in the study area were collected as ground-truthing data to verify the classification of the seabed. A dry sieving method with a sieve shaker was used to analyze the sediment samples. PDS 2000 software was used for data acquisition, and Qimera QPS version 2.4.5 was used for processing the bathymetry data, while FMGT QPS version 7.10 processed the backscatter data. The backscatter data were then analyzed using the maximum likelihood classification tool in ArcGIS version 10.8. The results identified three types of sediment around the reef: very coarse sand, coarse sand, and medium sand.

Keywords: sediment type, MBES echo sounder, backscatter, ArcGIS

Procedia PDF Downloads 87
24541 A Named Data Networking Stack for Contiki-NG-OS

Authors: Sedat Bilgili, Alper K. Demir

Abstract:

The Internet has become dominant, with continuing growth in home, medical, health, smart-city, and industrial automation applications. The Internet of Things (IoT) is an emerging technology that enables such applications in our lives. Moreover, Named Data Networking (NDN) is emerging as a Future Internet architecture that fits the communication needs of IoT networks. The aim of this study is to provide an NDN protocol stack implementation running on the Contiki operating system (OS). Contiki is an OS developed for constrained IoT devices. In this study, an NDN protocol stack that works on top of the IEEE 802.15.4 link and physical layers has been developed and presented.

Keywords: internet of things (IoT), named-data, named data networking (NDN), operating system

Procedia PDF Downloads 172
24540 Investigating Data Normalization Techniques in Swarm Intelligence Forecasting for Energy Commodity Spot Price

Authors: Yuhanis Yusof, Zuriani Mustaffa, Siti Sakira Kamaruddin

Abstract:

Data mining is a fundamental technique for identifying patterns in large data sets. The extracted facts and patterns contribute to various domains such as marketing, forecasting, and medicine. Prior to mining, data are consolidated so that the resulting mining process may be more efficient. This study investigates the effect of different data normalization techniques, namely min-max, Z-score, and decimal scaling, on swarm-based forecasting models. The recent swarm intelligence algorithms employed include the Grey Wolf Optimizer (GWO) and Artificial Bee Colony (ABC). Forecasting models are then developed to predict the daily spot prices of crude oil and gasoline. Results showed that GWO works better with the Z-score normalization technique, while ABC produces better accuracy with min-max. Nevertheless, GWO is superior to ABC, as its model generates the highest accuracy for both crude oil and gasoline prices. This result indicates that GWO is a promising competitor in the family of swarm intelligence algorithms.
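
The three normalization techniques under comparison are defined as follows; the spot prices are toy values for illustration.

```python
import numpy as np

def min_max(x, lo=0.0, hi=1.0):
    """Min-max: rescale values to the range [lo, hi]."""
    return lo + (x - x.min()) * (hi - lo) / (x.max() - x.min())

def z_score(x):
    """Z-score: zero mean, unit standard deviation."""
    return (x - x.mean()) / x.std()

def decimal_scaling(x):
    """Decimal scaling: divide by 10^j so all values fall in (-1, 1)."""
    j = np.ceil(np.log10(np.max(np.abs(x))))
    return x / 10 ** j

prices = np.array([92.1, 88.4, 101.7, 97.3, 110.2])   # toy spot prices
for f in (min_max, z_score, decimal_scaling):
    print(f.__name__, np.round(f(prices), 3))
```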

Keywords: artificial bee colony, data normalization, forecasting, Grey Wolf optimizer

Procedia PDF Downloads 478
24539 Collision Theory Based Sentiment Detection Using Discourse Analysis in Hadoop

Authors: Anuta Mukherjee, Saswati Mukherjee

Abstract:

Data is growing every day. Social networking sites such as Twitter are becoming an integral part of our daily lives, contributing to a large increase in the growth of data. Twitter is a rich source for sentiment detection and mining in particular, since people often express honest opinions through tweets. However, although sentiment analysis is a well-researched topic for text, analysis of Twitter data poses additional challenges, since tweets are unstructured, full of abbreviations, and lack strict grammatical correctness. We have employed collision theory to achieve sentiment analysis of Twitter data, and have incorporated discourse analysis into the collision-theory-based model to detect sentiment from tweets accurately. We have also used the retweet field to assign weights to certain tweets and obtain the overall weight of a topic provided in the form of a query. Hadoop has been exploited for speed. Our experiments show effective results.

Keywords: sentiment analysis, twitter, collision theory, discourse analysis

Procedia PDF Downloads 535
24538 Advances in Mathematical Sciences: Unveiling the Power of Data Analytics

Authors: Zahid Ullah, Atlas Khan

Abstract:

The rapid advancements in data collection, storage, and processing capabilities have led to an explosion of data in various domains. In this era of big data, mathematical sciences play a crucial role in uncovering valuable insights and driving informed decision-making through data analytics. The purpose of this abstract is to present the latest advances in mathematical sciences and their application in harnessing the power of data analytics. This abstract highlights the interdisciplinary nature of data analytics, showcasing how mathematics intersects with statistics, computer science, and other related fields to develop cutting-edge methodologies. It explores key mathematical techniques such as optimization, mathematical modeling, network analysis, and computational algorithms that underpin effective data analysis and interpretation. The abstract emphasizes the role of mathematical sciences in addressing real-world challenges across different sectors, including finance, healthcare, engineering, social sciences, and beyond. It showcases how mathematical models and statistical methods extract meaningful insights from complex datasets, facilitating evidence-based decision-making and driving innovation. Furthermore, the abstract emphasizes the importance of collaboration and knowledge exchange among researchers, practitioners, and industry professionals. It recognizes the value of interdisciplinary collaborations and the need to bridge the gap between academia and industry to ensure the practical application of mathematical advancements in data analytics. The abstract highlights the significance of ongoing research in mathematical sciences and its impact on data analytics. It emphasizes the need for continued exploration and innovation in mathematical methodologies to tackle emerging challenges in the era of big data and digital transformation. In summary, this abstract sheds light on the advances in mathematical sciences and their pivotal role in unveiling the power of data analytics. It calls for interdisciplinary collaboration, knowledge exchange, and ongoing research to further unlock the potential of mathematical methodologies in addressing complex problems and driving data-driven decision-making in various domains.

Keywords: mathematical sciences, data analytics, advances, unveiling

Procedia PDF Downloads 95
24537 A Formal Approach for Instructional Design Integrated with Data Visualization for Learning Analytics

Authors: Douglas A. Menezes, Isabel D. Nunes, Ulrich Schiel

Abstract:

Most Virtual Learning Environments do not provide support mechanisms for the integrated planning, construction, and follow-up of an Instructional Design supported by Learning Analytics results. The present work presents an authoring tool responsible for constructing the structure of an Instructional Design (ID) such that the data are not altered during the execution of the course. The visual interface presents the critical situations in this ID, serving as a support tool for course follow-up and possible improvements, which can be made during its execution or in the planning of a new edition of the course. The model for the ID is based on High-Level Petri Nets, and the visualization forms are determined by the specific kind of data generated by an e-course: a population of students generating sequentially dependent data.

Keywords: educational data visualization, high-level petri nets, instructional design, learning analytics

Procedia PDF Downloads 244
24536 Analysis of Users’ Behavior on Book Loan Log Based on Association Rule Mining

Authors: Kanyarat Bussaban, Kunyanuth Kularbphettong

Abstract:

This research aims to create a model for analyzing student behavior in using library resources, based on data mining techniques, in the case of Suan Sunandha Rajabhat University. The model was created with association rules using the Apriori algorithm. The analysis found 14 rules; when the rules were tested with the testing data set, the classification accuracy was 79.24 percent and the MSE was 22.91. The results showed that the user-behavior model built with the association rule technique can be used to manage the library resources.
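
The Apriori step can be reproduced in outline with a common third-party implementation (mlxtend); the toy loan log, its subject-area columns, and the support/confidence thresholds below are illustrative, not the study's data or settings.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# One row per borrower, one boolean column per subject area borrowed.
loans = pd.DataFrame({
    "mathematics": [1, 1, 0, 1, 0, 1],
    "statistics":  [1, 1, 0, 1, 0, 0],
    "computing":   [0, 1, 1, 1, 1, 0],
}).astype(bool)

frequent = apriori(loans, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```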

Keywords: behavior, data mining technique, apriori algorithm, knowledge discovery

Procedia PDF Downloads 405
24535 Exploration of RFID in Healthcare: A Data Mining Approach

Authors: Shilpa Balan

Abstract:

Radio Frequency Identification, popularly known as RFID, is used to automatically identify and track tags attached to items. This study focuses on the application of RFID in healthcare. The adoption of RFID in healthcare is crucial to patient safety and inventory management. Data from RFID tags are used to identify the locations of patients and inventory in real time. Medical errors are thought to be a prominent cause of loss of life and injury, and the major advantage of RFID application in the healthcare industry is the reduction of medical errors. The healthcare industry has generated huge amounts of data; by discovering patterns and trends within the data, big data analytics can help improve patient care and lower healthcare costs. The growing number of research publications leading to innovations in RFID applications shows the importance of this technology. This study explores the current state of RFID research in healthcare using a text mining approach; no prior study has examined this using a data mining approach. Related articles on RFID were collected from healthcare journals and news articles published between 2000 and 2015. Significant keywords on the topic of focus were identified and analyzed using open-source data analytics software such as RapidMiner. These analytical tools help extract pertinent information from massive volumes of data. The main benefits of adopting RFID technology in healthcare include tracking medicines and equipment, upholding patient safety, and improving security. The real-time tracking features of RFID allow for enhanced supply chain management. By using big data productively, healthcare organizations can gain significant benefits; big data analytics in healthcare enables improved decisions by extracting insights from large volumes of data.

Keywords: RFID, data mining, data analysis, healthcare

Procedia PDF Downloads 235