Search results for: data protection act
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26081

25061 Big Data Analysis with Rhipe

Authors: Byung Ho Jung, Ji Eun Shin, Dong Hoon Lim

Abstract:

Rhipe, which integrates R with the Hadoop environment, makes it possible to process and analyze massive amounts of data in a distributed processing environment. In this paper, we implemented multiple regression analysis using Rhipe on actual data of various sizes. Experimental results comparing the performance of our Rhipe implementation with the stats and biglm packages available on bigmemory showed that Rhipe was faster than the other packages, owing to parallel processing that increases the number of map tasks as the data size grows. We also compared the computing speeds of the pseudo-distributed and fully-distributed modes for configuring a Hadoop cluster. The results showed that the fully-distributed mode was faster than the pseudo-distributed mode, and that its computing speed increased with the number of data nodes.
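
As a rough illustration of the underlying idea (not the authors' Rhipe code), the sketch below, in Python, computes multiple regression by accumulating the normal-equation sufficient statistics X'X and X'y per data partition and summing them, which is exactly the reduction a Hadoop map/reduce job performs; the partitioning and data are illustrative assumptions.

# Sketch: distributed multiple regression via partial sums of X'X and X'y.
# Each "partition" stands in for one Hadoop map task; the reduce step just sums.
import numpy as np

def map_task(X_part, y_part):
    # Emit the sufficient statistics for this chunk of rows.
    return X_part.T @ X_part, X_part.T @ y_part

def reduce_tasks(partials):
    XtX = sum(p[0] for p in partials)
    Xty = sum(p[1] for p in partials)
    return np.linalg.solve(XtX, Xty)   # least-squares coefficients

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(10_000), rng.normal(size=(10_000, 3))])
y = X @ np.array([1.0, 2.0, -0.5, 0.3]) + rng.normal(size=10_000)

partials = [map_task(Xc, yc) for Xc, yc in zip(np.array_split(X, 8),
                                               np.array_split(y, 8))]
print(reduce_tasks(partials))          # ~ [1.0, 2.0, -0.5, 0.3]

In the Rhipe setting the same reduction is expressed as R code executed inside the Hadoop map and reduce tasks, which is why adding map tasks speeds up the fit as the data grow.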

Keywords: big data, Hadoop, Parallel regression analysis, R, Rhipe

Procedia PDF Downloads 487
25060 Security in Resource Constraints Network Light Weight Encryption for Z-MAC

Authors: Mona Almansoori, Ahmed Mustafa, Ahmad Elshamy

Abstract:

A wireless sensor network is formed by a combination of nodes that systematically transmit data to their base stations. Given the nodes' limited processing power and the need to keep the data consistent, this transmitted data can easily be compromised, so secure data transfer in real time remains an open discussion. This paper presents a mechanism to transmit data securely over a chain of sensor nodes, without compromising the throughput of the network, by utilizing the battery resources available in each sensor node. Our methodology takes advantage of the efficiency of the Z-MAC protocol and provides a unique key through a sharing mechanism based on the neighbor node's MAC address. We present a lightweight data integrity layer embedded in the Z-MAC protocol and show that our protocol performs better than Z-MAC under different attack scenarios.
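
The abstract describes deriving a per-link key from a neighbor node's MAC address and adding a lightweight integrity layer to Z-MAC. A minimal, hypothetical sketch of that idea (not the authors' protocol) is shown below; the pre-shared network secret and the truncated HMAC tag are assumptions for illustration.

# Sketch: derive a per-link key from the two nodes' MAC addresses plus a
# pre-shared network secret, then tag frames with a truncated HMAC.
import hmac, hashlib

NETWORK_SECRET = b"example-preshared-secret"   # assumption: provisioned at deployment

def link_key(own_mac: str, neighbor_mac: str) -> bytes:
    material = "|".join(sorted([own_mac, neighbor_mac])).encode()
    return hmac.new(NETWORK_SECRET, material, hashlib.sha256).digest()

def tag_frame(key: bytes, payload: bytes, tag_len: int = 8) -> bytes:
    # Truncated tag keeps the per-frame overhead small for constrained nodes.
    return hmac.new(key, payload, hashlib.sha256).digest()[:tag_len]

def verify_frame(key: bytes, payload: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(tag_frame(key, payload, len(tag)), tag)

k = link_key("00:11:22:33:44:55", "66:77:88:99:aa:bb")
frame = b"temp=21.4;node=7"
t = tag_frame(k, frame)
print(verify_frame(k, frame, t))        # True
print(verify_frame(k, frame + b"!", t)) # False: tampering detected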

Keywords: hybrid MAC protocol, data integrity, lightweight encryption, neighbor-based key sharing, sensor node data processing, Z-MAC

Procedia PDF Downloads 128
25059 Survival Data with Incomplete Missing Categorical Covariates

Authors: Madaki Umar Yusuf, Mohd Rizam B. Abubakar

Abstract:

Censored survival data with incomplete covariate information are a common occurrence in many studies in which the outcome is survival time. When the missing covariates are categorical, a useful technique for obtaining parameter estimates is the EM algorithm by the method of weights. The survival outcome for the class of generalized linear models is applied, and this method requires estimating the parameters of the distribution of the covariates. In this paper, we consider clinical trial data with five covariates, four of which have missing values, in the presence of censoring.
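
As a rough sketch of the EM-by-weights idea for one missing binary covariate under a Weibull survival model (a simplified stand-in for the paper's setting, with synthetic data and assumed missing-at-random covariates):

# Sketch: EM "method of weights" for a Weibull survival model with one binary
# covariate that is missing for some subjects (assumed missing at random).
import numpy as np
from scipy.optimize import minimize

def loglik(theta, t, d, x):
    b0, b1, logk = theta
    k = np.exp(logk)
    lam = np.exp(b0 + b1 * x)                       # Weibull scale per subject
    z = (t / lam) ** k
    return d * (np.log(k) - np.log(lam) + (k - 1) * (np.log(t) - np.log(lam))) - z

def em_weights(t, d, x, n_iter=30):
    obs = ~np.isnan(x)
    theta, pi = np.array([0.0, 0.0, 0.0]), 0.5
    for _ in range(n_iter):
        # E-step: for missing x, weight the two possible categories.
        w1 = pi * np.exp(loglik(theta, t, d, 1.0))
        w0 = (1 - pi) * np.exp(loglik(theta, t, d, 0.0))
        p1 = np.where(obs, x, w1 / (w1 + w0))       # P(x = 1 | data, theta)
        # M-step: weighted Weibull fit over both completed categories.
        def neg_q(th):
            return -(np.sum(p1 * loglik(th, t, d, 1.0)) +
                     np.sum((1 - p1) * loglik(th, t, d, 0.0)))
        theta = minimize(neg_q, theta, method="Nelder-Mead").x
        pi = p1.mean()                              # update covariate distribution
    return theta, pi

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 300).astype(float)
t = rng.weibull(1.5, 300) * np.exp(0.5 + 0.8 * x)
d = (t < 3.0).astype(float)
t = np.minimum(t, 3.0)                              # administrative censoring
x[rng.random(300) < 0.3] = np.nan                   # 30% missing covariate values
print(em_weights(t, d, x))                          # ~ (0.5, 0.8, log 1.5), pi ~ 0.5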

Keywords: EM algorithm, incomplete categorical covariates, ignorable missing data, missing at random (MAR), Weibull Distribution

Procedia PDF Downloads 385
25058 Reinforcing The Nagoya Protocol through a Coherent Global Intellectual Property Framework: Effective Protection for Traditional Knowledge Associated with Genetic Resources in Biodiverse African States

Authors: Oluwatobiloba Moody

Abstract:

On October 12, 2014, the Nagoya Protocol, negotiated by Parties to the Convention on Biological Diversity (CBD), entered into force. The Protocol was negotiated to implement the third objective of the CBD, which relates to the fair and equitable sharing of benefits arising from the utilization of genetic resources (GRs). The Protocol aims to ‘protect’ GRs and traditional knowledge (TK) associated with GRs from ‘biopiracy’, through the establishment of a binding international regime on access and benefit sharing (ABS). In reflecting on the question of ‘effectiveness’ in the Protocol’s implementation, this paper argues that the underlying problem of ‘biopiracy’, which the Protocol seeks to address, is one which goes beyond the ABS regime. It rather thrives due to indispensable factors emanating from the global intellectual property (IP) regime. It contends that biopiracy therefore constitutes an international problem of ‘borders’ as much as of ‘regimes’ and, therefore, while the implementation of the Protocol may effectively address the ‘trans-border’ issues which have hitherto troubled African provider countries in their establishment of regulatory mechanisms, it remains unable to address the ‘trans-regime’ issues related to the eradication of biopiracy, especially those issues which involve the IP regime. This is due to the glaring incoherence between the Nagoya Protocol’s implementation and the existing global IP system. In arriving at conclusions, the paper examines the ongoing related discussions within the IP regime, specifically those within the WIPO Intergovernmental Committee on Intellectual Property and Genetic Resources, Traditional Knowledge and Folklore (IGC) and the WTO TRIPS Council. It concludes that the Protocol’s effectiveness in protecting TK associated with GRs is conditional on the attainment of outcomes, within the ongoing negotiations of the IP regime, which could be implemented in a coherent manner with the Nagoya Protocol. It proposes specific ways to achieve this coherence. Three main methodological steps have been incorporated in the paper’s development. First, a review of data accumulated over a two-year period arising from the coordination of six important negotiating sessions of the WIPO Intergovernmental Committee on Intellectual Property and Genetic Resources, Traditional Knowledge and Folklore. In this respect, the research benefits from reflections on the political, institutional and substantive nuances which have coloured the IP negotiations and which provide both the context and subtext to emerging texts. Second, a desktop review of the history, nature and significance of the Nagoya Protocol, using relevant primary and secondary literature from international and national sources. Third, a comparative analysis of selected biopiracy cases is undertaken for the purpose of establishing the inseparability of the IP regime and the ABS regime in the conceptualization and development of solutions to biopiracy. A comparative analysis of select African regulatory mechanisms (Kenya, South Africa, and Ethiopia, as well as the ARIPO Swakopmund Protocol) for the protection of TK is also undertaken.

Keywords: biopiracy, intellectual property, Nagoya protocol, traditional knowledge

Procedia PDF Downloads 418
25057 A Study of Blockchain Oracles

Authors: Abdeljalil Beniiche

Abstract:

A limitation of smart contracts is that they cannot access external data that might be required to control the execution of business logic. Oracles can be used to provide external data to smart contracts. An oracle is an interface that delivers data from external sources outside the blockchain for a smart contract to consume. Oracles can deliver different types of data depending on the industry and requirements. In this paper, we study and describe the widely used blockchain oracles. We then elaborate on their potential role, technical architecture, and design patterns. Finally, we discuss the human oracle and its key role in solving the truth problem by reaching a consensus about a given inquiry or task.
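
As a rough illustration of the request–response oracle pattern the paper surveys (not tied to any specific oracle network), the sketch below simulates a contract that records a data request and an off-chain oracle node that fetches a value and calls back; all names and the data source are illustrative assumptions.

# Sketch: minimal request/callback oracle pattern, simulated off-chain in Python.
from dataclasses import dataclass, field

@dataclass
class SmartContract:
    requests: list = field(default_factory=list)    # stands in for emitted events
    answers: dict = field(default_factory=dict)

    def request_data(self, query: str) -> int:
        req_id = len(self.requests)
        self.requests.append(query)                 # on-chain: emit an event/log
        return req_id

    def fulfill(self, req_id: int, value, oracle_id: str):
        # on-chain: verify the caller is an authorized oracle before storing
        self.answers[req_id] = (value, oracle_id)

class OracleNode:
    def __init__(self, oracle_id: str, data_source):
        self.oracle_id, self.data_source = oracle_id, data_source

    def poll_and_respond(self, contract: SmartContract):
        for req_id, query in enumerate(contract.requests):
            if req_id not in contract.answers:
                value = self.data_source(query)     # e.g. an HTTP API call off-chain
                contract.fulfill(req_id, value, self.oracle_id)

contract = SmartContract()
rid = contract.request_data("ETH/USD spot price")
OracleNode("node-1", lambda q: 3150.42).poll_and_respond(contract)
print(contract.answers[rid])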

Keywords: blockchain, oracles, oracles design, human oracles

Procedia PDF Downloads 102
25056 Exploring Forest Biomass Changes in Romania in the Last Three Decades

Authors: Remus Pravalie, Georgeta Bandoc

Abstract:

Forests are crucial for humanity and biodiversity, through the various ecosystem services and functions they provide all over the world. Forest ecosystems are vital in Romania as well, through their various benefits, known as provisioning (food, wood, or fresh water), regulating (water purification, soil protection, carbon sequestration or control of climate change, floods, and other hazards), cultural (aesthetic, spiritual, inspirational, recreational or educational benefits) and supporting (primary production, nutrient cycling, and soil formation processes, with direct or indirect importance for human well-being) ecosystem services. These ecological benefits are of great importance in Romania, especially given the fact that forests cover extensive areas countrywide, i.e. ~6.5 million ha or ~27.5% of the national territory. However, the diversity and functionality of these ecosystem services fundamentally depend on certain key attributes of forests, such as biomass, which has so far not been studied nationally in terms of potential changes due to climate change and other driving forces. This study investigates, for the first time, changes in forest biomass in Romania in recent decades, based on a high volume of satellite data (Landsat images at high spatial resolutions), downloaded from the Google Earth Engine platform and processed (using specialized software and methods) across Romanian forestland boundaries from 1987 to 2018. A complex climate database was also investigated across Romanian forests over the same 32-year period, in order to detect potential similarities and statistical relationships between the dynamics of biomass and climate data. The results obtained indicated considerable changes in forest biomass in Romania in recent decades, largely triggered by the climate change that affected the country after 1987. Findings on the complex pattern of recent forest changes in Romania, which will be presented in detail in this study, can be useful to national policymakers in the fields of forestry, climate, and sustainable development.
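
The workflow described (a multi-decadal Landsat-derived biomass record compared against a climate series) comes down to per-unit trend and correlation tests; a minimal sketch of that step on synthetic arrays, not the authors' Google Earth Engine processing chain, is shown below.

# Sketch: linear trend of an annual biomass proxy (1987-2018) and its
# correlation with a climate series, for one forest unit, on synthetic data.
import numpy as np
from scipy import stats

years = np.arange(1987, 2019)                      # 32 annual observations
rng = np.random.default_rng(7)
biomass = 120 + 0.4 * (years - 1987) + rng.normal(0, 3, years.size)   # t/ha proxy
temperature = 9.0 + 0.03 * (years - 1987) + rng.normal(0, 0.5, years.size)

trend = stats.linregress(years, biomass)           # slope = change per year
r, p = stats.pearsonr(biomass, temperature)        # biomass-climate association
print(f"slope={trend.slope:.3f} t/ha/yr (p={trend.pvalue:.3f}), "
      f"r={r:.2f} (p={p:.3f})")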

Keywords: forests, biomass, climate change, trends, Romania

Procedia PDF Downloads 139
25055 Pro-Environmental Behavioral Intention of Mountain Hikers to the Theory of Planned Behavior

Authors: Mohammad Ehsani, Iman Zarei, Soudabeh Moazemigoudarzi

Abstract:

The aim of this study is to explain the pro-environmental behavioral intention of mountain hikers using the theory of planned behavior. According to many researchers, nature-based recreation activities play a significant role in the tourism industry and have provided myriad opportunities for the protection of natural areas. It is essential to investigate individuals' behavior during such activities to avoid further damage to precious and dwindling natural resources. This study develops a robust model that provides a comprehensive understanding of the formation of pro-environmental behavioral intentions among climbers of Mount Damavand National Park in Iran. To this end, we combined the theory of planned behavior (TPB), value-belief-norm theory (VBN), and a hierarchical model of leisure constraints to predict individuals’ pro-environmental hiking behavior during outdoor recreation. Structural equation modeling was used to test the theoretical framework. A sample of 787 climbers was analyzed. Among the theory of planned behavior variables, perceived behavioral control showed the strongest association with behavioral intention (β = .57). This relationship indicates that if people feel they can have fewer negative impacts on natural resources while hiking, it will result in more environmentally acceptable behavior. Subjective norms had a moderate positive impact on behavioral intention, indicating the influence of other people on the individual's behavior. Attitude had a small positive effect on intention. Ecological worldview positively influenced attitude and personal belief. Personal belief (awareness of consequences and ascribed responsibility) showed a positive association with the TPB variables. Although the data showed a high average score for awareness of consequences (mean = 4.219 out of 5), evidence from Mount Damavand shows that there are many environmental issues that need addressing (e.g., vast amounts of garbage). National park managers need to make sure that their solutions raise awareness of pro-environmental behavior (PEB). Findings also showed a negative relationship between constraints and all TPB predictors. Providing proper restrooms and parking spaces in campgrounds, strategies for limiting visitor capacity, and solutions for removing waste from high altitudes would help decrease the negative impact of structural constraints. In order to address intrapersonal constraints, managers should provide opportunities to interest individuals in environmental activities, such as environmental celebrations or documentaries about environmental issues. Moreover, promoting a culture of environmental protection in the Mount Damavand area would reduce interpersonal constraints. Overall, the proposed model improved the explanatory power of the TPB by predicting 64.7% of the variance in intention, compared to the original TPB, which accounted for 63.8%.
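
For readers who want to see how a path estimate such as the reported PBC → intention β ≈ .57 is obtained, the sketch below shows the simplest version, standardized multiple regression of intention on the three TPB predictors, on synthetic scores; the study itself used full structural equation modeling, which this does not replicate.

# Sketch: standardized (beta) coefficients for intention ~ attitude + norms + PBC.
import numpy as np

rng = np.random.default_rng(42)
n = 787                                            # sample size reported in the abstract
attitude = rng.normal(size=n)
norms = rng.normal(size=n)
pbc = rng.normal(size=n)
intention = 0.1 * attitude + 0.3 * norms + 0.57 * pbc + rng.normal(0, 0.6, n)

def zscore(v):
    return (v - v.mean()) / v.std()

X = np.column_stack([np.ones(n), zscore(attitude), zscore(norms), zscore(pbc)])
betas, *_ = np.linalg.lstsq(X, zscore(intention), rcond=None)
print(dict(zip(["intercept", "attitude", "norms", "pbc"], betas.round(2))))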

Keywords: theory of planned behavior, pro-environmental behavior, national park, constraints

Procedia PDF Downloads 82
25054 Finding Bicluster on Gene Expression Data of Lymphoma Based on Singular Value Decomposition and Hierarchical Clustering

Authors: Alhadi Bustaman, Soeganda Formalidin, Titin Siswantining

Abstract:

DNA microarray technology is used to analyze thousands of gene expression profiles simultaneously, a very important task for drug development and testing, function annotation, and cancer diagnosis. Various clustering methods have been used for analyzing gene expression data. However, when analyzing very large and heterogeneous collections of gene expression data, conventional clustering methods often cannot produce a satisfactory solution. Biclustering algorithms have been used as an alternative approach to identifying structures in gene expression data. In this paper, we introduce a transform technique based on singular value decomposition to obtain a normalized matrix of gene expression data, followed by the Mixed-Clustering algorithm and the Lift algorithm, inspired by the node-deletion and node-addition phases proposed by Cheng and Church, based on Agglomerative Hierarchical Clustering (AHC). An experimental study on standard datasets demonstrated the effectiveness of the algorithm on gene expression data.
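
A compact sketch of the pipeline the abstract outlines, SVD-based normalization of the expression matrix followed by agglomerative hierarchical clustering of genes and of samples, whose crossed groups form candidate biclusters; this is a simplification on synthetic data, not the Mixed-Clustering/Lift implementation itself.

# Sketch: SVD-denoised expression matrix, then AHC on rows (genes) and columns
# (samples); each (gene-cluster, sample-cluster) pair is a candidate bicluster.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
expr = rng.normal(size=(200, 40))                  # genes x samples (synthetic)
expr[:50, :10] += 3.0                              # plant one bicluster

U, s, Vt = np.linalg.svd(expr, full_matrices=False)
k = 5                                              # keep the dominant structure
denoised = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

gene_labels = fcluster(linkage(denoised, method="ward"), t=4, criterion="maxclust")
sample_labels = fcluster(linkage(denoised.T, method="ward"), t=3, criterion="maxclust")

# Report the candidate bicluster with the highest mean expression.
best = max(((g, c) for g in set(gene_labels) for c in set(sample_labels)),
           key=lambda gc: denoised[np.ix_(gene_labels == gc[0],
                                          sample_labels == gc[1])].mean())
print("candidate bicluster (gene cluster, sample cluster):", best)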

Keywords: agglomerative hierarchical clustering (AHC), biclustering, gene expression data, lymphoma, singular value decomposition (SVD)

Procedia PDF Downloads 260
25053 An Efficient Traceability Mechanism in the Audited Cloud Data Storage

Authors: Ramya P, Lino Abraham Varghese, S. Bose

Abstract:

With cloud storage services, data can be stored in the cloud and shared across multiple users. However, unexpected hardware/software failures and human errors can easily cause the data stored in the cloud to be lost or corrupted, which affects the integrity of data in the cloud. Some mechanisms have been designed to allow both data owners and public verifiers to efficiently audit cloud data integrity without retrieving the entire data set from the cloud server. However, public auditing of the integrity of shared data with the existing mechanisms will unavoidably reveal confidential information, such as the identity of the person, to public verifiers. Here a privacy-preserving mechanism is proposed to support public auditing of shared data stored in the cloud. It uses group signatures to compute the verification metadata needed to audit the correctness of shared data. The identity of the signer of each block in the shared data is kept confidential from public verifiers, who can easily verify shared data integrity without retrieving the entire file; on demand, the signer of each block is revealed to the owner alone. The group private key is generated once by the owner in a static group, whereas in a dynamic group the group private key changes when users are revoked from the group. When users leave the group, the blocks they have already signed are re-signed by the cloud service provider instead of the owner, which is handled by an efficient proxy re-signature scheme.
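
Group signatures require dedicated cryptographic libraries, so the sketch below only illustrates the surrounding audit flow: the verifier challenges a random sample of block tags instead of downloading the whole file. Per-block HMAC tags with a shared key stand in for the group signatures of the actual scheme and, unlike group signatures, do not hide the signer's identity; the key and block layout are assumptions for illustration.

# Sketch: challenge-response auditing of randomly sampled blocks.
# HMAC tags are a stand-in; the paper's scheme uses group signatures instead.
import hmac, hashlib, random

TAG_KEY = b"shared-auditing-key"                   # assumption for the sketch

def make_tags(blocks):
    return [hmac.new(TAG_KEY, b, hashlib.sha256).digest() for b in blocks]

def audit(cloud_blocks, stored_tags, sample_size=4):
    challenge = random.sample(range(len(cloud_blocks)), sample_size)
    for i in challenge:                            # verifier never fetches the full file
        expected = hmac.new(TAG_KEY, cloud_blocks[i], hashlib.sha256).digest()
        if not hmac.compare_digest(expected, stored_tags[i]):
            return False, i
    return True, None

blocks = [f"shared data block {i}".encode() for i in range(16)]
tags = make_tags(blocks)
print(audit(blocks, tags))                         # (True, None)
blocks[5] = b"corrupted"                           # simulate data corruption
print(audit(blocks, tags, sample_size=16))         # (False, 5)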

Keywords: data integrity, dynamic group, group signature, public auditing

Procedia PDF Downloads 374
25052 Quantification of Leachate Potential of the Quezon City Controlled Dumping Facility Using Help Model

Authors: Paul Kenneth D. Luzon, Maria Antonia N. Tanchuling

Abstract:

The Quezon City Controlled Dumping Facility, also known as Payatas, produces leachate that can contaminate the soil and water environment in the area. The goal of this study is to quantify the leachate produced by the QCCDF using the Hydrologic Evaluation of Landfill Performance (HELP) model. Results could be used as input for groundwater contaminant transport studies. The HELP model is based on a simple water budget and is an essential “model requirement” used by the US Environmental Protection Agency (EPA). The annual waste profile of the QCCDF was calculated. Based on topographical maps and an estimation of settlement due to overburden pressure and degradation, a total of about 10 million m^3 of waste is contained in the landfill. The inputs necessary for the HELP model are weather data, soil properties, and the landfill design. Results showed that from 1988 to 2011, an average of 50% of the total precipitation percolated through the bottom layer. Validation of the results is still needed because of the assumptions made in the study. Decreasing the porosity of the top soil cover proved the best mitigation for minimizing the percolation rate. This study concludes that there is a need for a better leachate management system at the QCCDF.
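
The HELP model itself is a dedicated US EPA program, but the water-budget idea it rests on can be shown in a few lines: percolation is what remains of precipitation after runoff, evapotranspiration, and the change in moisture storage. The figures below are placeholders, not the study's inputs.

# Sketch: annual water budget in the spirit of HELP (all values in mm/year).
# percolation = precipitation - runoff - evapotranspiration - change in storage
precipitation = 2400.0          # placeholder annual rainfall
runoff = 600.0                  # depends on cover slope and soil
evapotranspiration = 950.0
delta_storage = 40.0            # moisture retained in the waste/cover layers

percolation = precipitation - runoff - evapotranspiration - delta_storage
print(f"percolation: {percolation:.0f} mm/yr "
      f"({100 * percolation / precipitation:.0f}% of precipitation)")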

Keywords: help model, landfill, payatas trash slide, quezon city controlled dumping facility

Procedia PDF Downloads 274
25051 Assessing the Impact of Electronic Payment Systems on the Service Delivery of Banks: Case of Nigeria

Authors: Idris Lawal

Abstract:

The most recent development in the Nigerian payment system is the venture into the “electronic payment system”. An electronic payment system is simply a payment or monetary transaction made over the internet or a network of computers. This study was carried out in order to assess how the electronic payment system has impacted banks' service delivery, to examine the efficiency of the electronic payment system in Nigeria, and to determine the level of customer satisfaction as a direct result of the deployment of electronic payment systems. The study was conducted using a structured questionnaire distributed to 50 bank officials and customers of Access Bank plc. Chi-square (χ²) was adopted for the purpose of data analysis. The results of the study showed that the development of the electronic payment system offers great benefits to bank customers, including improved services, reduced turnaround time, ease of banking transactions, and significant cost savings. The study recommends that customer protection laws be properly put in place to safeguard the interests of end users of e-payment instruments, and that the banking industry and government show strong commitment and effort to educate the populace on the benefits of patronizing the e-payment system in order to facilitate economic development.
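
The chi-square (χ²) analysis of the questionnaire responses can be reproduced in outline as a test of independence on a contingency table; the counts below are invented for illustration only, not the study's data.

# Sketch: chi-square test of independence between e-payment use and
# reported satisfaction (invented counts, 50 respondents).
from scipy.stats import chi2_contingency

#                satisfied  not satisfied
table = [[22, 6],            # uses e-payment regularly
         [9, 13]]            # rarely uses e-payment

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")    # p < 0.05 -> reject independence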

Keywords: electronic payment system, service delivery, bank, Nigeria

Procedia PDF Downloads 262
25050 Rodriguez Diego, Del Valle Martin, Hargreaves Matias, Riveros Jose Luis

Authors: Nathainail Bashir, Neil Anderson

Abstract:

The objective of this study was to investigate the current state of practice with regard to karst detection methods and to recommend the best method and array pattern to acquire the desired results. Proper site investigation in karst-prone regions is extremely valuable in determining the location of possible voids. Two geophysical techniques were employed: multichannel analysis of surface waves (MASW) and electrical resistivity tomography (ERT). The MASW data were acquired at each test location using different array lengths and different array orientations (to increase the probability of obtaining interpretable data in karst terrain). The ERT data were acquired using a dipole-dipole array consisting of 168 electrodes. The MASW data were interpreted (re: estimated depth to the physical top of rock) and used to constrain and verify the interpretation of the ERT data. The ERT data indicate that poorer-quality MASW data were acquired in areas where there was significant local variation in the depth to the top of rock.

Keywords: dipole-dipole, ERT, Karst terrains, MASW

Procedia PDF Downloads 298
25049 Data Science in Military Decision-Making: A Semi-Systematic Literature Review

Authors: H. W. Meerveld, R. H. A. Lindelauf

Abstract:

In contemporary warfare, data science is crucial for the military in achieving information superiority. Yet, to the authors’ knowledge, no extensive literature survey on data science in military decision-making has been conducted so far. In this study, 156 peer-reviewed articles were analysed through an integrative, semi-systematic literature review to gain an overview of the topic. The study examined to what extent literature is focussed on the opportunities or risks of data science in military decision-making, differentiated per level of war (i.e. strategic, operational, and tactical level). A relatively large focus on the risks of data science was observed in social science literature, implying that political and military policymakers are disproportionally influenced by a pessimistic view on the application of data science in the military domain. The perceived risks of data science are, however, hardly addressed in formal science literature. This means that the concerns on the military application of data science are not addressed to the audience that can actually develop and enhance data science models and algorithms. Cross-disciplinary research on both the opportunities and risks of military data science can address the observed research gaps. Considering the levels of war, relatively low attention for the operational level compared to the other two levels was observed, suggesting a research gap with reference to military operational data science. Opportunities for military data science mostly arise at the tactical level. On the contrary, studies examining strategic issues mostly emphasise the risks of military data science. Consequently, domain-specific requirements for military strategic data science applications are hardly expressed. Lacking such applications may ultimately lead to a suboptimal strategic decision in today’s warfare.

Keywords: data science, decision-making, information superiority, literature review, military

Procedia PDF Downloads 143
25048 The Turkish Anti-Nuclear Platform: A Counter-Hegemonic Struggle

Authors: Sevgi Balkan-Sahin

Abstract:

The Justice and Development Party (AKP) government has included nuclear power as a major component of Turkey’s new energy strategy by promoting it as the only alternative for Turkey to diversify energy resources, trigger economic growth, and boost competitiveness of the country. The effective promotion of such a framing has created a hegemonic discourse around nuclear energy in Turkey. However, fiercely opposing the nuclear initiative of the government, the Turkish anti-nuclear platform (ANP) composed of more than 50 civil society groups has challenged the hegemonic discourse of the AKP government by presenting nuclear energy as dangerous for human health, human rights, and the protection of environment. Based on an engagement between Gramscian perspective and Laclau and Mouffe’s discourse theory, this paper considers the discourses of the Turkish anti-nuclear platform and its associated activities as a counter-hegemonic strategy to change the ‘common sense’ on nuclear energy in Turkey. Analyzing the data from interviews with the representatives of the anti-nuclear platform coupled with primary sources, such as Parliamentary Records and official statements by civil society organizations, the paper highlights how the anti-nuclear platform exercises power through counter-hegemonic discourses in terms of the delegitimization of nuclear energy in Turkey.

Keywords: counter-hegemony, discourse, nuclear energy, Turkey

Procedia PDF Downloads 330
25047 Unveiling the Realities of Marrying Too Young: Evidence from Child Brides in Sub-Saharan Africa and Infant Mortality Implications

Authors: Emmanuel Olamijuwon

Abstract:

Despite laws against child marriage, a violation of child rights, the practice remains widespread in sub-Saharan Africa and globally, partly because of persistent poverty, gender inequality, protection concerns, and the need to reinforce family ties. Using pooled data from the recent Demographic and Health Surveys of 20 sub-Saharan African countries, with a regionally representative sample of 36,943 girls under 18 years, this study explores the prevalence, pattern, and infant mortality implications of this marriage type while also examining its regional variations. Indications from the study are that child marriage is still very common in the region, with prevalence above one-tenth in the West, Central, and Southern Africa regions, except in the East African region, where only about 7% of children under 18 were already married. Preliminary findings also suggest that about one in ten infant deaths were to child brides, many of whom were residing in poor households and rural areas, were unemployed, and had less than secondary education. Based on these findings, it is important that the governments of African countries address these critical issues through policies aimed at increasing the enrollment of girl children in schools, as many of these girls are unlikely to bring economic benefit to the region if the observed pattern continues.

Keywords: child marriage, infant mortality, Africa, child brides

Procedia PDF Downloads 235
25046 Wavelets Contribution on Textual Data Analysis

Authors: Habiba Ben Abdessalem

Abstract:

The emergence of giant sets of textual data has encouraged researchers to invest in this field. The purpose of textual data analysis methods is to facilitate access to such data by providing various graphic visualizations. Applying these methods requires a corpus pretreatment step, whose standards are set according to the objective of the problem studied. This step determines the list of forms contained in the contingency table by keeping only those that carry information. This step may, however, lead to noisy contingency tables, hence the use of a wavelet denoising function. The validity of the proposed approach is tested on a text database covering economic and political events in Tunisia over a well-defined period.
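
Assuming the PyWavelets package (pywt) is available, the sketch below shows the spirit of the denoising step: a 2-D wavelet transform of a lexical contingency table with soft thresholding of the detail coefficients. The wavelet, level, and threshold are illustrative choices, not the paper's settings.

# Sketch: soft-threshold wavelet denoising of a (forms x documents) contingency
# table, assuming the PyWavelets package is installed.
import numpy as np
import pywt

rng = np.random.default_rng(5)
table = rng.poisson(4, size=(64, 32)).astype(float)    # synthetic word-count table

coeffs = pywt.wavedec2(table, "haar", level=2)
threshold = 2.0                                        # tuning choice, not a rule
denoised_coeffs = [coeffs[0]] + [
    tuple(pywt.threshold(c, threshold, mode="soft") for c in detail)
    for detail in coeffs[1:]
]
denoised = pywt.waverec2(denoised_coeffs, "haar")
print(table.std(), denoised.std())                     # high-frequency noise is reduced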

Keywords: textual data, wavelet, denoising, contingency table

Procedia PDF Downloads 264
25045 Legal Provisions on Child Pornography in Bangladesh: A Comparative Study on South Asian Landscape

Authors: Monira Nazmi Jahan, Nusrat Jahan Nishat

Abstract:

'Child pornography' is a sex crime involving illegal images and videos of a minor over the Internet, and it has become a social concern with the increasing commission of this crime. The major objective of this paper is to identify and examine the laws relating to child pornography in Bangladesh and to compare them with those of other South Asian countries. In Bangladesh, child pornography is prosecuted under provisions of the 'Digital Security Act, 2018', where it is defined as involving a child in areas of child sexuality or in sexuality, and whoever commits the crime will be punished with 10 years' imprisonment or a fine of 10 lac taka. In India, the crime is dealt with under 'The Protection of Children from Sexual Offences Act, 2012' (POCSO), where the offences are classified separately, with punishments ranging from three years' imprisonment to rigorous life imprisonment, and the offender shall also be liable to a fine. In the Maldives, there is the 'Special Provisions Act to Deal with Child Sex Abuse Offenders, Act number 12/2009'. Under this act, a person is guilty if they intentionally run child prostitution, involve a child in the creation of pornography, or display a child's sexual organs in pornography, and shall be punished with between 20 and 25 years of imprisonment. Nepal prosecutes this crime through the 'Act Relating to Children, 2018', and conviction for using a child in prostitution or sexual services carries imprisonment of up to fifteen years and a fine of up to one hundred fifty thousand rupees. In Pakistan, child pornography is prosecuted under the 'Pakistan Penal Code Child Abuse Amendment Act, 2016', which provides that one is guilty of this offence if he involves a child, with or without consent, in such activities, and it provides punishment of two to seven years of imprisonment or a fine of two hundred thousand to seven hundred thousand rupees. In Bhutan, child pornography is not explicitly addressed under the municipal laws. The Penal Code of Bhutan penalizes all kinds of pornography, including child pornography, under the provisions on computer pornography, and the offence is a misdemeanor. Child pornography is also prohibited under the 'Child Care and Protection Act'. In Sri Lanka, 'The Penal Code' de facto criminalizes child pornography and provides a penalty of two to ten years' imprisonment, and the offender may also be liable to a fine. The most shocking scenario exists in Afghanistan: there is no specific law for the protection of children from pornography, even though this serious crime is present there. This paper is based on a qualitative research method; that is, the primary sources are laws, and the secondary sources are journal articles and newspapers. The conclusion that can be drawn is that, except for Afghanistan, all other South Asian countries have laws for controlling this crime, but these laws still have loopholes. India has the most amended provisions. Nepal has no provision for a fine, and Bhutan does not mention any specific punishment. Bangladesh, compared to these countries, has a good piece of legislation; however, it also has room to broaden its laws for controlling child pornography.

Keywords: child abuse, child pornography, life imprisonment, penal code, South Asian countries

Procedia PDF Downloads 204
25044 Customer Churn Analysis in Telecommunication Industry Using Data Mining Approach

Authors: Burcu Oralhan, Zeki Oralhan, Nilsun Sariyer, Kumru Uyar

Abstract:

Data mining has become more and more important and has found a wide range of applications in recent years. Data mining is the process of finding hidden and unknown patterns in big data. One of the applied fields of data mining is Customer Relationship Management. Understanding the relationships between products and customers is crucial for every business. Customer Relationship Management is an approach focused on customer relationship development, retention, and increased customer satisfaction. In this study, we applied data mining methods to the customer relationship management side of the telecommunication industry. This study aims to determine the profile of customers who are likely to leave the system and to develop marketing strategies and customized campaigns for them. Data are clustered, and classification techniques are applied to determine the churners. As a result of this study, we will obtain knowledge from the international telecommunication industry and contribute to the understanding and development of this subject in Customer Relationship Management.
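
A minimal sketch of the churn-modelling step on synthetic customer records using scikit-learn; the features, labels, and model choice are stand-ins and do not reproduce the study's industry data or method.

# Sketch: decision-tree churn classifier on synthetic customer features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(11)
n = 2000
X = np.column_stack([
    rng.integers(1, 72, n),            # tenure in months
    rng.normal(55, 20, n),             # monthly charges
    rng.integers(0, 6, n),             # service complaints
])
churn = ((X[:, 0] < 12) & (X[:, 2] > 2) | (rng.random(n) < 0.05)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, churn, test_size=0.3, random_state=0)
model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))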

Keywords: customer churn analysis, customer relationship management, data mining, telecommunication industry

Procedia PDF Downloads 294
25043 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis

Authors: N. R. N. Idris, S. Baharom

Abstract:

A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels. In this situation, both the IPD and AD should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the levels of data on the overall meta-analysis estimates based on IPD-only, AD-only, and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean-square error (RMSE), and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as it significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not provide significant differences in terms of the accuracy of the estimates. Additionally, combining the IPD and AD has a moderating effect on the bias of the treatment effect estimates, as the IPD tends to overestimate the treatment effects, while the AD has the tendency to produce underestimated effect estimates. These results may provide some guidance in deciding whether significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
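
A minimal numerical sketch of the mixed-data (MD) idea: IPD studies are first reduced to study-level effect estimates, then pooled with the AD studies by inverse-variance weighting. The numbers are simulated and only fixed-effect pooling is shown, not the simulation design of the study.

# Sketch: pool IPD and AD studies with a fixed-effect inverse-variance estimator.
import numpy as np

rng = np.random.default_rng(2)

def ipd_to_effect(y_treat, y_ctrl):
    # Reduce raw patient data to a mean difference and its variance.
    diff = y_treat.mean() - y_ctrl.mean()
    var = y_treat.var(ddof=1) / y_treat.size + y_ctrl.var(ddof=1) / y_ctrl.size
    return diff, var

# Two IPD studies (raw patient outcomes) ...
ipd = [ipd_to_effect(rng.normal(1.0, 2, 80), rng.normal(0.0, 2, 80)) for _ in range(2)]
# ... and three AD studies reported only as (effect, variance).
ad = [(0.9, 0.10), (1.2, 0.08), (0.8, 0.15)]

effects, variances = map(np.array, zip(*(ipd + ad)))
weights = 1.0 / variances
pooled = np.sum(weights * effects) / weights.sum()
se = np.sqrt(1.0 / weights.sum())
print(f"pooled effect = {pooled:.2f} (SE {se:.2f})")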

Keywords: aggregate data, combined-level data, individual patient data, meta-analysis

Procedia PDF Downloads 357
25042 Analyzing On-Line Process Data for Industrial Production Quality Control

Authors: Hyun-Woo Cho

Abstract:

The monitoring of industrial production quality has to be implemented to provide early warning of unusual operating conditions. Furthermore, identification of their assignable causes is necessary for quality control purposes. For such tasks many multivariate statistical techniques have been applied and shown to be quite effective tools. This work presents a process data-based monitoring scheme for production processes. For more reliable results, additional steps of noise filtering and preprocessing are considered. These may lead to enhanced performance by eliminating unwanted variation in the data. The performance evaluation is executed using data sets from test processes. The proposed method is shown to provide reliable quality control results and thus is more effective for quality monitoring in the example. For practical implementation of the method, an on-line data system must be available to gather historical and on-line data. Recently, large amounts of data have been collected on-line in most processes, so implementation of the current scheme is feasible and does not place additional burdens on users.
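
The combination described, noise filtering plus a multivariate statistical model of normal operation, is commonly realized as PCA monitoring with Hotelling's T² and SPE (Q) charts. The sketch below shows that generic recipe on synthetic data; it is an assumed stand-in, not the paper's specific scheme.

# Sketch: PCA-based process monitoring with a simple moving-average filter,
# Hotelling's T2 on the retained scores and SPE (Q) on the residuals.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
normal = rng.normal(size=(500, 6))                    # in-control training data
test = rng.normal(size=(100, 6)); test[60:] += 3.0    # fault from sample 60 on

def smooth(X, w=5):                                   # simple noise filtering
    kernel = np.ones(w) / w
    return np.apply_along_axis(lambda col: np.convolve(col, kernel, "same"), 0, X)

mu, sd = normal.mean(0), normal.std(0)
pca = PCA(n_components=3).fit(smooth((normal - mu) / sd))

def statistics(X):
    Z = smooth((X - mu) / sd)
    scores = pca.transform(Z)
    t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)     # Hotelling's T2
    spe = np.sum((Z - pca.inverse_transform(scores))**2, axis=1) # residual (Q)
    return t2, spe

t2_lim, spe_lim = (np.percentile(s, 99) for s in statistics(normal))
t2, spe = statistics(test)
alarms = np.where((t2 > t2_lim) | (spe > spe_lim))[0]
print("first alarm at test sample:", alarms[0] if alarms.size else None)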

Keywords: detection, filtering, monitoring, process data

Procedia PDF Downloads 540
25041 A Review of Travel Data Collection Methods

Authors: Muhammad Awais Shafique, Eiji Hato

Abstract:

Household trip data is of crucial importance for managing present transportation infrastructure as well as for planning and designing future facilities. It also provides the basis for new policies implemented under Transportation Demand Management. The methods used for household trip data collection have changed with the passage of time, starting with conventional face-to-face or paper-and-pencil interviews and reaching the recent approach of employing smartphones. This study summarizes the step-wise evolution of travel data collection methods. It provides a comprehensive review of the topic for readers interested in the changing trends in the data collection field.

Keywords: computer, smartphone, telephone, travel survey

Procedia PDF Downloads 294
25040 A Business-to-Business Collaboration System That Promotes Data Utilization While Encrypting Information on the Blockchain

Authors: Hiroaki Nasu, Ryota Miyamoto, Yuta Kodera, Yasuyuki Nogami

Abstract:

To promote Industry 4.0, Society 5.0, and so on, it is important to connect and share data so that every member can trust it. Blockchain (BC) technology is currently attracting attention as the most advanced tool and has been used in the financial field, among others. However, data collaboration using BC has not progressed sufficiently among companies in the manufacturing supply chain that handle sensitive data such as product quality, manufacturing conditions, etc. There are two main reasons why data utilization is not sufficiently advanced in the industrial supply chain. The first reason is that manufacturing information is top secret and a source for companies to generate profits; it is difficult to disclose data even between companies with transactions in the supply chain. In blockchain mechanisms such as Bitcoin that use PKI (Public Key Infrastructure), the plaintext must be shared between companies in order to confirm the identity of the company that has sent the data. The second reason is that the merits (scenarios) of data collaboration between companies are not clearly specified in the industrial supply chain. For these problems, this paper proposes a Business-to-Business (B2B) collaboration system using homomorphic encryption and BC techniques. Using the proposed system, each company in the supply chain can exchange confidential information as encrypted data and utilize the data for its own business. In addition, this paper considers a scenario focusing on quality data, which has been difficult to share because it is top secret. In this scenario, we show an implementation scheme and the benefit of concrete data collaboration by proposing a comparison protocol that can capture changes in quality while hiding the numerical values of the quality data.
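
Assuming the python-paillier package (phe) as a stand-in for the homomorphic encryption used in the paper, the sketch below illustrates the comparison idea: a supplier encrypts two quality measurements, the partner computes the encrypted difference without seeing the values, and only the key holder learns whether quality changed and in which direction. The package choice, key handling, and quality scores are assumptions, not the authors' protocol.

# Sketch: additively homomorphic comparison of two quality values using the
# python-paillier package (phe), assumed to be installed.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Supplier encrypts last month's and this month's quality score (kept secret).
q_before = public_key.encrypt(973)        # e.g. defect-free parts per 1000
q_after = public_key.encrypt(981)

# Partner (or the blockchain layer) works only on ciphertexts:
encrypted_change = q_after - q_before     # valid because Paillier is additive

# Only the key holder decrypts, and can share just the sign of the change.
change = private_key.decrypt(encrypted_change)
print("quality improved" if change > 0 else "quality degraded or unchanged")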

Keywords: business to business data collaboration, industrial supply chain, blockchain, homomorphic encryption

Procedia PDF Downloads 113
25039 Multivariate Assessment of Mathematics Test Scores of Students in Qatar

Authors: Ali Rashash Alzahrani, Elizabeth Stojanovski

Abstract:

Data on various aspects of education are collected at the institutional and government level regularly. In Australia, for example, students at various levels of schooling undertake examinations in numeracy and literacy as part of NAPLAN testing, enabling longitudinal assessment of such data as well as comparisons between schools and states within Australia. Another source of educational data collected internationally is via the PISA study which collects data from several countries when students are approximately 15 years of age and enables comparisons in the performance of science, mathematics and English between countries as well as ranking of countries based on performance in these standardised tests. As well as student and school outcomes based on the tests taken as part of the PISA study, there is a wealth of other data collected in the study including parental demographics data and data related to teaching strategies used by educators. Overall, an abundance of educational data is available which has the potential to be used to help improve educational attainment and teaching of content in order to improve learning outcomes. A multivariate assessment of such data enables multiple variables to be considered simultaneously and will be used in the present study to help develop profiles of students based on performance in mathematics using data obtained from the PISA study.
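
A minimal sketch of the profiling step described, standardizing mathematics-related scores and clustering students with k-means, on synthetic values rather than the PISA microdata; the number of profiles and sub-scores are illustrative assumptions.

# Sketch: k-means profiles of students from standardized mathematics sub-scores.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
scores = np.vstack([                       # three synthetic performance groups
    rng.normal([380, 390, 370], 25, (150, 3)),
    rng.normal([470, 460, 480], 25, (150, 3)),
    rng.normal([560, 570, 555], 25, (150, 3)),
])

X = StandardScaler().fit_transform(scores)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for label in range(3):
    print(f"profile {label}: mean scores {scores[kmeans.labels_ == label].mean(0).round(0)}")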

Keywords: cluster analysis, education, mathematics, profiles

Procedia PDF Downloads 107
25038 Determination of Authorship of the Works Created by the Artificial Intelligence

Authors: Vladimir Sharapaev

Abstract:

This paper seeks to address the question of the authorship of copyrighted works created solely by the artificial intelligence or with the use thereof, and proposes possible interpretational or legislative solutions to the problems arising from the plurality of the persons potentially involved in the ultimate creation of the work and division of tasks among such persons. Being based on the commonly accepted assumption that a copyrighted work can only be created by a natural person, the paper does not deal with the issues regarding the creativity of the artificial intelligence per se (or the lack thereof), and instead focuses on the distribution of the intellectual property rights potentially belonging to the creators of the artificial intelligence and/or the creators of the content used for the formation of the copyrighted work. Moreover, the technical development and rapid improvement of the AI-based programmes, which tend to be reaching even greater independence on a human being, give rise to the question whether the initial creators of the artificial intelligence can be entitled to the intellectual property rights to the works created by such AI at all. As the juridical practice of some European courts and legal doctrine tends to incline to the latter opinion, indicating that the works created by the AI may not at all enjoy copyright protection, the questions of authorships appear to be causing great concerns among the investors in the development of the relevant technology. Although the technology companies dispose with further instruments of protection of their investments, the risk of the works in question not being copyrighted caused by the inconsistency of the case law and a certain research gap constitutes a highly important issue. In order to assess the possible interpretations, the author adopted a doctrinal and analytical approach to the research, systematically analysing the European and Czech copyright laws and case law in some EU jurisdictions. This study aims to contribute to greater legal certainty regarding the issues of the authorship of the AI-created works and define possible clues for further research.

Keywords: artificial intelligence, copyright, authorship, copyrighted work, intellectual property

Procedia PDF Downloads 107
25037 Dataset Quality Index: Development of Composite Indicator Based on Standard Data Quality Indicators

Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros

Abstract:

Nowadays, poor data quality is considered one of the major costs for a data project. A data project with data quality awareness devotes a large share of its time to data quality processes, while a data project without data quality awareness negatively impacts financial resources, efficiency, productivity, and credibility. One of the processes that takes a long time is defining the expectations and measurements of data quality, because the expectations differ according to the purpose of each data project. This is especially true for big data projects, which may involve many datasets and stakeholders and therefore take a long time to discuss and define quality expectations and measurements. Therefore, this study aimed at developing meaningful indicators that describe the overall data quality of each dataset for quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that can describe the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were taken to develop the Dataset Quality Index (SDQI). First, we defined standard data quality expectations. Second, we found indicators that can be measured directly on the data within the datasets. Third, the indicators were aggregated into dimensions using factor analysis. Next, the indicators and dimensions were weighted by the effort required for the data preparation process and by usability. Finally, the dimensions were aggregated into the composite indicator. The results of these analyses showed that: (1) the developed indicators and measurements comprise ten useful indicators; (2) based on statistical characteristics, the ten indicators can be reduced to four dimensions; and (3) the developed composite indicator, the SDQI, can describe the overall quality of each dataset and can separate datasets into three levels: Good Quality, Acceptable Quality, and Poor Quality. In conclusion, the SDQI provides an overall description of data quality within datasets and a meaningful composition. We can use the SDQI to assess all data in a data project, for effort estimation, and for prioritization. The SDQI also works well with agile methods, where it can be used for assessment in the first sprint. After passing the initial evaluation, more specific data quality indicators can be added in the next sprint.
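
A simplified sketch of the aggregation path, indicator matrix → statistical dimensions → weighted composite score banded into three quality levels; the ten real indicators, the factor-analysis step, and the effort-based weights of the study are replaced here by synthetic stand-ins and a plain PCA.

# Sketch: aggregate standardized quality indicators into dimensions with PCA,
# then into one composite score banded into three quality levels.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)
indicators = rng.uniform(0, 1, size=(500, 10))        # 500 datasets x 10 indicators

Z = StandardScaler().fit_transform(indicators)
pca = PCA(n_components=4).fit(Z)                      # 10 indicators -> 4 dimensions
dimensions = pca.transform(Z)

weights = np.array([0.4, 0.3, 0.2, 0.1])              # placeholder effort/usability weights
composite = dimensions @ weights

levels = np.digitize(composite, np.quantile(composite, [1/3, 2/3]))
names = np.array(["Poor Quality", "Acceptable Quality", "Good Quality"])
print(dict(zip(*np.unique(names[levels], return_counts=True))))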

Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis

Procedia PDF Downloads 119
25036 Predictive Analysis for Big Data: Extension of Classification and Regression Trees Algorithm

Authors: Ameur Abdelkader, Abed Bouarfa Hafida

Abstract:

Since its inception, predictive analysis has revolutionized the IT industry through its robustness and decision-making facilities. It involves the application of a set of data processing techniques and algorithms in order to create predictive models. Its principle is based on finding relationships between explanatory variables and the predicted variables. Past occurrences are exploited to predict and to derive the unknown outcome. With the advent of big data, many studies have suggested the use of predictive analytics in order to process and analyze big data. Nevertheless, they have been curbed by the limits of classical methods of predictive analysis in the case of large amounts of data. In fact, because of their volume, their nature (semi-structured or unstructured), and their variety, it is impossible to analyze big data efficiently via classical methods of predictive analysis. The authors attribute this weakness to the fact that predictive analysis algorithms do not allow the parallelization and distribution of calculation. In this paper, we propose to extend the predictive analysis algorithm Classification And Regression Trees (CART) in order to adapt it for big data analysis. The major changes to this algorithm are presented, and then a version of the extended algorithm is defined in order to make it applicable to huge quantities of data.
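
The key step that makes CART parallelizable is that split-quality statistics for each candidate threshold can be computed per data partition and then summed, exactly like a map/reduce job. The sketch below shows that idea for one Gini-based split on a single feature; it is a generic illustration, not the authors' extended algorithm.

# Sketch: distributed search for the best binary split of one feature.
# Each partition contributes class counts per threshold; counts are then summed.
import numpy as np

def partition_counts(x_part, y_part, thresholds, n_classes=2):
    # counts[t, side, c] = number of class-c samples left/right of thresholds[t]
    counts = np.zeros((len(thresholds), 2, n_classes))
    for t, thr in enumerate(thresholds):
        left = x_part <= thr
        for c in range(n_classes):
            counts[t, 0, c] = np.sum(left & (y_part == c))
            counts[t, 1, c] = np.sum(~left & (y_part == c))
    return counts

def gini(counts_side):                       # counts_side: (..., n_classes)
    n = counts_side.sum(-1, keepdims=True)
    p = np.divide(counts_side, n, out=np.zeros_like(counts_side), where=n > 0)
    return 1.0 - (p**2).sum(-1)

rng = np.random.default_rng(6)
x = rng.uniform(0, 10, 100_000)
y = (x > 6.3).astype(int) ^ (rng.random(x.size) < 0.05)   # noisy threshold at 6.3
thresholds = np.linspace(0.5, 9.5, 19)

# "Map": per-partition counts; "Reduce": elementwise sum of the count tensors.
total = sum(partition_counts(xp, yp, thresholds)
            for xp, yp in zip(np.array_split(x, 8), np.array_split(y, 8)))
sizes = total.sum(-1)                                     # samples per side
weighted_gini = (sizes * gini(total)).sum(-1) / sizes.sum(-1)
print("best split at x <=", thresholds[np.argmin(weighted_gini)])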

Keywords: predictive analysis, big data, predictive analysis algorithms, CART algorithm

Procedia PDF Downloads 126
25035 Canopy Temperature Acquired from Daytime and Nighttime Aerial Data as an Indicator of Trees’ Health Status

Authors: Agata Zakrzewska, Dominik Kopeć, Adrian Ochtyra

Abstract:

The growing number of new cameras, sensors, and research methods allows for a broader application of thermal data in remote sensing vegetation studies. The aim of this research was to check whether it is possible to use thermal infrared data with a spectral range of 3.6-4.9 μm, obtained during the day and the night, to assess the health condition of selected species of deciduous trees in an urban environment. For this purpose, research was carried out in the city center of Warsaw (Poland) in 2020. During the airborne data acquisition, thermal data, laser scanning, and orthophoto map images were collected. Synchronously with the airborne data, ground reference data were obtained for 617 trees of the studied species (Acer platanoides, Acer pseudoplatanus, Aesculus hippocastanum, Tilia cordata, and Tilia × euchlora) in different health condition states. The results were as follows: (i) healthy trees are cooler than trees in poor condition and dying trees in both the daytime and nighttime data; (ii) the mean difference in canopy temperature between healthy and dying trees was 1.06 °C in the nighttime data and 3.28 °C in the daytime data; (iii) condition classes differed significantly on both daytime and nighttime thermal data, but only on the daytime data did all condition classes differ statistically significantly from each other. In conclusion, aerial thermal data, especially data obtained during the day, which differentiate condition classes better than data obtained at night, can be considered an alternative to hyperspectral data for assessing the health condition of trees in an urban environment. A method based on the fusion of thermal infrared and laser scanning data could be a quick and efficient solution for identifying trees in poor health that should be visually checked in the field.
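
The reported contrasts (e.g. a 3.28 °C daytime difference between healthy and dying crowns) come down to comparing mean canopy temperatures across condition classes separately for day and night. A minimal sketch of such a comparison on synthetic values, not the study's measurements:

# Sketch: compare mean canopy temperature of healthy vs dying trees,
# daytime and nighttime, on synthetic values (degrees Celsius).
import numpy as np
from scipy import stats

rng = np.random.default_rng(12)
healthy_day = rng.normal(27.0, 1.2, 300)
dying_day = rng.normal(30.3, 1.4, 60)
healthy_night = rng.normal(16.0, 0.8, 300)
dying_night = rng.normal(17.1, 0.9, 60)

for label, a, b in [("day", healthy_day, dying_day),
                    ("night", healthy_night, dying_night)]:
    t, p = stats.ttest_ind(a, b, equal_var=False)   # Welch's t-test
    print(f"{label}: mean difference {b.mean() - a.mean():.2f} C, p={p:.2g}")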

Keywords: middle wave infrared, thermal imagery, tree discoloration, urban trees

Procedia PDF Downloads 98
25034 "Good" Discretion Among Private Sector Street Level Bureaucrats

Authors: Anna K. Wood, Terri Friedline

Abstract:

In April and May 2020, the private banking industry approved over 1.7 million emergency small business loans, totaling over $650 billion in federal relief funds as part of the Paycheck Protection Program (PPP). Since the program’s rollout, the extensive evidence of discriminatory lending and misuse of funds has been revealed by investigative journalism and academic studies. This study is based on 41 interviews with frontline banking industry professionals conducted during the days and weeks of the PPP rollout, presenting a real-time narrative of the program rollout through the eyes of those in the role of a street-level bureaucrat. We present two themes from this data about the conditions under which these frontline workers experienced the PPP: Exigent Timelines and Defaulting to Existing Workplace Norms and Practices. We analyze these themes using literature on street-level organizations, bureaucratic discretion, and the differences between public and private sector logic. The results of this study present new directions for theorizing sector-level differences in street-level bureaucratic discretion in the context of mixed-sector collaboration on public service delivery, particularly under conditions of crisis and urgency.

Keywords: street level bureaucracy, social policy, bureaucratic discretion, public private partnerships

Procedia PDF Downloads 86
25033 Hierarchical Clustering Algorithms in Data Mining

Authors: Z. Abdullah, A. R. Hamdan

Abstract:

Clustering is a process of grouping objects and data into clusters so that data objects in the same cluster are similar to each other. Clustering is one of the areas of data mining, and clustering algorithms can be classified into partition-based, hierarchical, density-based, and grid-based methods. In this paper, we survey and review four major hierarchical clustering algorithms: CURE, ROCK, CHAMELEON, and BIRCH. The obtained state of the art of these algorithms will help in eliminating their current problems, as well as in deriving more robust and scalable clustering algorithms.
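
The four algorithms reviewed are specialized hierarchical methods, but the basic agglomerative machinery they extend can be run in a few lines, for example with SciPy's linkage and fcluster; the data and parameters below are illustrative only.

# Sketch: plain agglomerative clustering (the family CURE, ROCK, CHAMELEON and
# BIRCH extend) using SciPy; three well-separated synthetic groups.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(10)
data = np.vstack([rng.normal(center, 0.4, (50, 2))
                  for center in ([0, 0], [4, 4], [0, 5])])

Z = linkage(data, method="average")               # agglomerative merge tree
labels = fcluster(Z, t=3, criterion="maxclust")   # cut the tree into 3 clusters
print(np.bincount(labels)[1:])                    # sizes of the recovered clusters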

Keywords: clustering, unsupervised learning, algorithms, hierarchical

Procedia PDF Downloads 861
25032 End to End Monitoring in Oracle Fusion Middleware for Data Verification

Authors: Syed Kashif Ali, Usman Javaid, Abdullah Chohan

Abstract:

In large enterprises, multiple departments use different sorts of information systems and databases according to their needs. These systems are independent and heterogeneous in nature, and sharing information/data between them is not an easy task. The use of middleware technologies has made data sharing between systems very easy. However, monitoring the exchange of data/information between source and target systems for verification purposes is often complex or impossible for the maintenance department due to security/access privileges on the target and source systems. In this paper, we present our experience with an end-to-end data monitoring approach at the middleware level, implemented in Oracle BPEL, for data verification without the help of any monitoring tool.
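
The Oracle BPEL implementation itself is environment-specific, so the sketch below only illustrates the underlying verification idea in plain Python: the middleware keeps a lightweight log of record digests on send and on delivery and reconciles the two sides, without needing access privileges on either end system. All record contents and field names are hypothetical.

# Sketch: middleware-level reconciliation of records exchanged between a source
# and a target system, using per-record digests captured at each end.
import hashlib, json

def digest(record: dict) -> str:
    # Canonical JSON so the same record hashes identically on both sides.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

sent_log = {r["id"]: digest(r) for r in [
    {"id": 1, "amount": 120.5}, {"id": 2, "amount": 80.0}, {"id": 3, "amount": 42.0}]}
delivered_log = {r["id"]: digest(r) for r in [
    {"id": 1, "amount": 120.5}, {"id": 3, "amount": 41.0}]}   # id 2 lost, id 3 altered

missing = sent_log.keys() - delivered_log.keys()
mismatched = [i for i in sent_log.keys() & delivered_log.keys()
              if sent_log[i] != delivered_log[i]]
print("missing at target:", sorted(missing))      # [2]
print("altered in transit:", mismatched)          # [3]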

Keywords: service level agreement, SOA, BPEL, oracle fusion middleware, web service monitoring

Procedia PDF Downloads 462