Search results for: heterogeneous massive data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25280

24440 Security in Resource Constraints Network Light Weight Encryption for Z-MAC

Authors: Mona Almansoori, Ahmed Mustafa, Ahmad Elshamy

Abstract:

Wireless sensor networks are formed by a combination of nodes that systematically transmit data to their base stations. Given the limited processing power of these nodes and the need to keep their data consistent, this transmission can easily be compromised, and securing data transfer in real time is an ongoing concern. This paper presents a mechanism to securely transmit data over a chain of sensor nodes without compromising network throughput, by utilizing the battery resources available in each sensor node. Our methodology takes advantage of the efficiency of the Z-MAC protocol and provides a unique key through a sharing mechanism that uses the neighbor node's MAC address. We embed a lightweight data integrity layer in the Z-MAC protocol and show that the resulting protocol performs better than Z-MAC under different attack scenarios.
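
For illustration only, here is a minimal sketch of one way such neighbor-based key sharing could work, assuming each pair of neighboring nodes derives a symmetric link key by hashing their (sorted) MAC addresses together with a pre-shared network secret; the function names and the secret-distribution step are hypothetical, not the authors' protocol:

```python
import hmac
import hashlib

def derive_link_key(own_mac: str, neighbor_mac: str, network_secret: bytes) -> bytes:
    """Derive a symmetric per-link key from the two MAC addresses.

    Sorting the addresses makes the key identical regardless of
    which side of the link computes it.
    """
    material = "|".join(sorted([own_mac, neighbor_mac])).encode()
    return hmac.new(network_secret, material, hashlib.sha256).digest()

# Example: both nodes compute the same 256-bit key independently.
secret = b"pre-shared network secret"  # hypothetical deployment secret
k1 = derive_link_key("00:1A:2B:3C:4D:5E", "00:1A:2B:3C:4D:5F", secret)
k2 = derive_link_key("00:1A:2B:3C:4D:5F", "00:1A:2B:3C:4D:5E", secret)
assert k1 == k2
```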

Keywords: hybrid MAC protocol, data integrity, lightweight encryption, neighbor-based key sharing, sensor node data processing, Z-MAC

Procedia PDF Downloads 131
24439 Survival Data with Incomplete Missing Categorical Covariates

Authors: Madaki Umar Yusuf, Mohd Rizam B. Abubakar

Abstract:

Survival censored data with incomplete covariate data are a common occurrence in many studies in which the outcome is survival time. When the missing covariates are categorical, a useful technique for obtaining parameter estimates is the EM algorithm by the method of weights. The method is applied to survival outcomes in the class of generalized linear models and requires estimation of the parameters of the distribution of the covariates. In this paper, we illustrate the approach on clinical trial data with five covariates, four of which have some missing values, in data that are heavily censored.
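
For orientation, the EM by the method of weights (as commonly formulated for missing categorical covariates) replaces each incomplete observation by weighted copies, one per possible covariate value. A generic sketch of the two steps, in notation assumed here rather than taken from the abstract:

```latex
% E-step: weight for each possible value z of the missing categorical covariate
w_{iz}^{(t)} = P\left(z_i = z \mid y_i, x_i, \theta^{(t)}\right)
             = \frac{f\left(y_i \mid x_i, z, \beta^{(t)}\right)\, p\left(z \mid \alpha^{(t)}\right)}
                    {\sum_{z'} f\left(y_i \mid x_i, z', \beta^{(t)}\right)\, p\left(z' \mid \alpha^{(t)}\right)}

% M-step: maximize the weighted complete-data log-likelihood
Q\left(\theta \mid \theta^{(t)}\right) = \sum_{i}\sum_{z} w_{iz}^{(t)}
    \left[\log f\left(y_i \mid x_i, z, \beta\right) + \log p\left(z \mid \alpha\right)\right]
```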

Keywords: EM algorithm, incomplete categorical covariates, ignorable missing data, missing at random (MAR), Weibull distribution

Procedia PDF Downloads 386
24438 Role of Institutional Quality as a Key Determinant of FDI Flows in Developing Asian Economies

Authors: Bikash Ranjan Mishra, Lopamudra D. Satpathy

Abstract:

In the wake of the phenomenal surge in international business over the last decade or more, both developed and developing economies around the world are competing intensely to attract FDI flows. While the developed countries have marched ahead in the race, the developing countries, especially the Asian economies, have followed at a rapid pace. Most previous studies have analysed the role of institutional quality in promoting FDI flows to developing countries, but very few have taken an integrated approach that examines the combined impact of institutional quality, globalization pattern, and domestic financial development on FDI flows. In this context, the paper contributes to the literature in two important ways. First, composite indices of institutional quality and domestic financial development are constructed for the Asian countries, in contrast to earlier studies that resort to a single variable to indicate institutional quality or domestic financial development. Second, the impact of these variables on FDI flows is investigated through their interaction with geographical region. The study uses panel data covering 1996 to 2012 for twenty Asian developing countries from the eastern, south-eastern, southern and western regions of Asia, with an emphasis on the quality of institutions. Control of corruption, rule of law, regulatory quality, government effectiveness, political stability, and voice and accountability are used as indicators of institutional quality. Besides these, the study takes into account domestic credit to the public and private sectors and in stock markets as domestic financial indicators. In the model specification, a factor analysis is first performed to reduce the large set of determinants, which are highly correlated with each other, to a manageable size. Afterwards, a reduced version of the model is estimated with the extracted factors, in the form of indices, as independent variables along with a set of control variables. It is found that the institutional quality index and the globalization index exert a significant effect on FDI inflows of the host countries; in contrast, the domestic financial index does not seem to play a significant role. Finally, some robustness tests are performed to make sure that the results are not sensitive to temporal and spatial unobserved heterogeneity. From a policy prescription point of view, one general inference can be drawn: the governments of these developing countries should strengthen their domestic institutions, both financial and non-financial. In addition, welfare policies should also target rapid globalization. If the financial and non-financial institutions of these developing countries become sound and the countries grow more globalized in the economic, social and political domains, they can attract larger FDI inflows that will subsequently advance these economies.

Keywords: Asian developing economies, FDI, institutional quality, panel data

Procedia PDF Downloads 295
24437 The Behavior of Masonry Wall Constructed Using Biaxial Interlocking Concrete Block, Solid Concrete Block and Cement Sand Brick Subjected to the Compressive Load

Authors: Fauziah Aziz, Mohd. Fadzil Arshad, Hazrina Mansor, Sedat Kömürcü

Abstract:

Masonry is an anisotropic and heterogeneous material owing to the different components present within the assembly process. Normally, the mortar plays a significant role in the compressive behavior of traditional masonry structures. The biaxial interlocking concrete block is a masonry unit built around an interlocking concept. This masonry unit can improve the quality of the construction process, reduce labor costs, reduce the need for highly skilled workmanship, and speed up construction time. Interlocking concrete block masonry units currently on the market are typically designed to interlock along either the x- or the y-axis only, are shorter in length, and have low compressive strength. The biaxial interlocking concrete block introduced in this research is a dry-stack concept that differs from the normal interlocking concrete blocks available on the market in its length and in the geometry of its groove and tongue. This material can be used as a non-load-bearing or load-bearing wall, depending on the application of the masonry, but there is a lack of technical data produced on it so far. This paper presents findings on the compressive resistance of the biaxial interlocking concrete block masonry wall compared to other traditional masonry walls. Two series of biaxial interlocking concrete block masonry walls, M1 and M2, and a series of solid concrete block and cement sand brick walls, M3 and M4, were tested for compressive resistance. M1 is a masonry wall of hollow biaxial interlocking concrete blocks; M2 is the corresponding grouted masonry wall; M3 is a solid concrete block masonry wall; and M4 is a cement sand brick masonry wall. All the samples were tested under static compressive load. The results show that M2 has higher compressive resistance than M1, M3, and M4, indicating that the compressive strength of the concrete masonry units plays a significant role in the capacity of the masonry wall.

Keywords: interlocking concrete block, compressive resistance, concrete masonry unit, masonry

Procedia PDF Downloads 154
24436 A Study of Blockchain Oracles

Authors: Abdeljalil Beniiche

Abstract:

The limitation of smart contracts is that they cannot access external data that might be required to control the execution of business logic. Oracles can be used to provide external data to smart contracts. An oracle is an interface that delivers data from sources outside the blockchain to a smart contract for consumption. Oracles can deliver different types of data depending on the industry and requirements. In this paper, we study and describe the widely used blockchain oracles. We then elaborate on their potential roles, technical architectures, and design patterns. Finally, we discuss the human oracle and its key role in solving the truth problem by reaching consensus about a given inquiry and its tasks.

Keywords: blockchain, oracles, oracles design, human oracles

Procedia PDF Downloads 104
24435 Unraveling the Political Complexities of the Textile and Clothing Waste Ecosystem: A Case Study on Melbourne Metropolitan Civic Waste Management Practices

Authors: Yasaman Samie

Abstract:

The ever-increasing rate of textile and clothing (T&C) waste generation and common ineffective waste management practices have long been a challenge for civic waste management. This challenge stems not only from the complexity of T&C material components but also from the heterogeneous nature of the T&C waste management sector and the disconnection between its stakeholders. To date, there is little research that investigates the importance of a governmental structure and its role in T&C waste management practices and decision-making. This paper reflects on the impacts and involvement of governments, acts, and legislation on the effectiveness of T&C waste management practices carried out by multiple players in a city context. In doing so, this study first develops a methodical framework for holistically analyzing a city's T&C waste ecosystem. Central to this framework are six dimensions: social, environmental, economic, political, cultural, and educational, as well as the connections between these dimensions, such as socio-political and cultural-political. Second, it delves into the political dimension and its interconnections with varying aspects of T&C waste. In this manner, this case study takes metropolitan Melbourne as its case and draws on the social theory of Actor-Network Theory and the principles of supply chain design and planning. Data were collected through two rounds of semi-structured interviews with 18 key players in the T&C waste ecosystem (including charities, city councils, private sector providers, and producers), mainly within metropolitan Melbourne but also in other Australian and European cities. The research findings expand on the role of the politics of waste in facilitating a proactive approach to T&C waste management in cities. That is achieved through a revised definition of T&C waste and its characteristics, a discussion of the varying perceptions of value in waste, a prioritization of waste types in civic waste management practices, and an account of how all these aspects should be reflected in the acts and legislation in place.

Keywords: civic waste management, multi-stakeholder ecosystem, textile and clothing waste, waste and governments

Procedia PDF Downloads 97
24434 Credit Cooperatives: A Factor for Improving the Sustainable Management of Private Forests

Authors: Todor Nickolov Stoyanov

Abstract:

Cooperatives are present in all countries and in almost all sectors, including agriculture, forestry, food, finance, health, marketing, insurance, and credit. Strong cooperatives are able to overcome many of the difficulties faced by private owners. Cooperatives follow seven principles, including the 'Concern for Community' principle, which enables cooperatives to work for the sustainable development of the community. Members of cooperatives may use different systems for generating year-round employment and receiving sustainable income by performing different forestry activities. Various methods were used during the preparation of the report, including literature reviews, statistics, secondary data, and expert interviews. Members of cooperatives benefit from the increased efficiency of the various products, from the overall yield of the harvest, and ultimately from the better profit achieved through cooperative efforts. Cooperatives also engage in other types of activities that provide additional opportunities for cooperative income. There are many heterogeneous activities in the production and service sectors of the forest cooperatives under consideration. Some cooperatives operate dairies, distilleries, woodworking enterprises, tourist homes, hotels and motels, shops, ski slopes, sheep breeding, etc. Through the revenue generated by these activities, cooperatives have the opportunity to carry out various environmental and protective activities - recreation, water protection, protection of endangered and endemic species, etc. - which cannot be achieved in small-scale forests, where management is not sustainable. The conclusions reflect the results of the analysis. Cooperative management of forests and forest lands gives higher incomes to individual owners. Management of forests and forest lands through cooperatives helps to carry out different environmental and protective activities. Cooperative forest management provides additional means of subsistence to the owners of poor forest lands. Cooperative management of forests and forest lands supports owners in implementing forest management plans and applying sustainable management to these territories.

Keywords: cooperative, forestry, forest owners, principles of cooperation

Procedia PDF Downloads 222
24433 Deep Learning Framework for Predicting Bus Travel Times with Multiple Bus Routes: A Single-Step Multi-Station Forecasting Approach

Authors: Muhammad Ahnaf Zahin, Yaw Adu-Gyamfi

Abstract:

Bus transit is a crucial component of transportation networks, especially in urban areas. Any intelligent transportation system must have accurate real-time information on bus travel times, since it minimizes waiting times for passengers at different stations along a route, improves service reliability, and significantly optimizes travel patterns. Bus agencies must enhance the quality of their information services to serve their passengers better and draw in more travelers, since people waiting at bus stops are frequently anxious about when the bus will arrive at their starting point and when it will reach their destination. To address this issue, different models for predicting bus travel times have been developed recently, but most of them focus on smaller road networks because of their relatively subpar performance on vast networks in high-density urban areas. This paper develops a deep learning-based architecture using a single-step, multi-station forecasting approach to predict average bus travel times for numerous routes, stops, and trips on a large-scale network, using heterogeneous bus transit data collected from the GTFS database. Data were gathered from multiple bus routes in Saint Louis, Missouri, over one week. In this study, a Gated Recurrent Unit (GRU) neural network was used to predict the mean vehicle travel times for different hours of the day for multiple stations along multiple routes. The historical time steps and prediction horizon were set to 5 and 1, respectively, meaning that five hours of historical average travel time data were used to predict the average travel time for the following hour. Spatial and temporal information and the historical average travel times were captured from the dataset as model input parameters. The station distances and sequence numbers were used as adjacency matrices for the spatial inputs, and the time of day (hour) was considered for the temporal inputs. Other inputs, including volatility information such as the standard deviation and variance of journey durations, were also included in the model to make it more robust. The model's performance was evaluated using the mean absolute percentage error (MAPE). The observed prediction errors for various routes, trips, and stations remained consistent throughout the day. The results showed that the developed model could predict travel times more accurately during peak traffic hours, with a MAPE of around 14%, and performed less accurately during the latter part of the day. In the context of a complicated transportation network in a high-density urban area, the model showed its applicability for real-time travel time prediction in public transportation and ensured the high quality of its predictions.
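
A minimal sketch of the single-step forecasting setup described above (5 historical time steps in, 1 step out) is given below in PyTorch, together with the MAPE metric; the layer sizes, feature count, and framework choice are illustrative assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn

class TravelTimeGRU(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # one-step-ahead average travel time

    def forward(self, x):                  # x: (batch, 5, n_features)
        _, h = self.gru(x)                 # h: (1, batch, hidden)
        return self.head(h[-1])            # (batch, 1)

def mape(y_true, y_pred):
    """Mean absolute percentage error, the evaluation metric used above."""
    return (torch.abs((y_true - y_pred) / y_true)).mean() * 100

# Example: 32 station/route samples, 5 past hours, 8 hypothetical input
# features (travel time, hour of day, station distance, variance, etc.).
model = TravelTimeGRU(n_features=8)
x = torch.randn(32, 5, 8)
y = torch.rand(32, 1) + 1.0
print(mape(y, model(x)))
```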

Keywords: gated recurrent unit, mean absolute percentage error, single-step forecasting, travel time prediction

Procedia PDF Downloads 57
24432 Engineering Method to Measure the Impact Sound Improvement with Floor Coverings

Authors: Katarzyna Baruch, Agata Szelag, Jaroslaw Rubacha, Bartlomiej Chojnacki, Tadeusz Kamisinski

Abstract:

The methodology used to measure the reduction of transmitted impact sound by floor coverings situated on a massive floor is described in ISO 10140-3:2010. To carry out such tests, a standardised reverberation room separated by a standard floor from a second measuring room is required. The need for a special laboratory results in high cost and low accessibility of this measurement. The authors propose their own engineering method to measure the impact sound improvement provided by floor coverings. This method does not require standard rooms or a standard floor. This paper describes the measurement procedure of the proposed engineering method. Verification tests were then performed. Validation of the proposed method was based on an analytical model, a Statistical Energy Analysis (SEA) model, and empirical measurements. The results obtained were compared with corresponding ones from ISO 10140-3:2010 measurements. The study confirmed the usefulness of the engineering method.
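
For reference, the quantity determined in such tests is commonly expressed per frequency band as the difference between the normalized impact sound pressure level of the bare reference floor and that of the floor with the covering installed (a standard formulation; the symbols here are not taken from the paper):

```latex
\Delta L = L_{n,0} - L_{n}
```

where $L_{n,0}$ is the normalized impact sound pressure level of the bare floor and $L_{n}$ the level with the floor covering in place.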

Keywords: building acoustics, impact noise, impact sound insulation, impact sound transmission, reduction of impact sound

Procedia PDF Downloads 312
24431 An Efficient Traceability Mechanism in the Audited Cloud Data Storage

Authors: Ramya P, Lino Abraham Varghese, S. Bose

Abstract:

With cloud storage services, data can be stored in the cloud and shared across multiple users. Unexpected hardware/software failures and human errors can cause data stored in the cloud to be lost or corrupted easily, affecting its integrity. Some mechanisms have been designed to allow both data owners and public verifiers to efficiently audit cloud data integrity without retrieving the entire data set from the cloud server. However, public auditing of the integrity of shared data with the existing mechanisms will unavoidably reveal confidential information, such as the identity of the signer, to public verifiers. Here, a privacy-preserving mechanism is proposed to support public auditing of shared data stored in the cloud. It uses group signatures to compute the verification metadata needed to audit the correctness of shared data. The identity of the signer of each block in the shared data is kept confidential from public verifiers, who can easily verify shared data integrity without retrieving the entire file; on demand, the signer of each block is revealed to the owner alone. The group private key is generated once by the owner in the static group, whereas in the dynamic group the group private key changes when users are revoked from the group. When users leave the group, the blocks they have already signed are re-signed by the cloud service provider instead of the owner, which is handled efficiently by a proxy re-signature scheme.

Keywords: data integrity, dynamic group, group signature, public auditing

Procedia PDF Downloads 375
24429 New Heterogeneous α-Diimine Nickel (II)/MWCNT Catalysts for Ethylene Polymerization

Authors: Sasan Talebnezhad, Saeed Pourmahdian, Naghi Assali

Abstract:

Homogeneous α-diimine nickel (II) catalyst complexes, with and without amino functionality at the para-aryl position, were synthesized. These complexes were immobilized on carboxyl-, hydroxyl-, and acyl chloride-functionalized multi-walled carbon nanotubes to form five novel heterogeneous α-diiminonickel catalysts. Immobilization was performed by covalent or electrostatic bonding via a methylaluminoxane (MAO) linker or an amide linkage. Both the nature of the α-diimine ligands and the kind of interaction between the anchored catalyst complexes and the multi-walled carbon nanotube surface influenced the catalytic performance, microstructure, and morphology of the obtained polyethylenes. The catalyst prepared by amide bonding showed the lowest relative weight loss in thermogravimetric analysis and the highest activities, up to 5863 g PE mmol⁻¹Ni·hr⁻¹. This catalyst produced polyethylene with dense botryoidal morphology.

Keywords: α-diimine nickel (II) complexes, immobilization, multi-walled carbon nanotubes, ethylene polymerization

Procedia PDF Downloads 483
24428 Rodriguez Diego, Del Valle Martin, Hargreaves Matias, Riveros Jose Luis

Authors: Nathainail Bashir, Neil Anderson

Abstract:

The objective of this study was to investigate the current state of practice with regard to karst detection methods and to recommend the best method and pattern of arrays to acquire the desired results. Proper site investigation in karst-prone regions is extremely valuable in determining the location of possible voids. Two geophysical techniques were employed: multichannel analysis of surface waves (MASW) and electrical resistivity tomography (ERT). The MASW data were acquired at each test location using different array lengths and different array orientations (to increase the probability of getting interpretable data in karst terrain). The ERT data were acquired using a dipole-dipole array consisting of 168 electrodes. The MASW data were interpreted (re: estimated depth to the physical top of rock) and used to constrain and verify the interpretation of the ERT data. The ERT data indicate that poorer-quality MASW data were acquired in areas with significant local variation in the depth to the top of rock.

Keywords: dipole-dipole, ERT, Karst terrains, MASW

Procedia PDF Downloads 299
24427 Data Science in Military Decision-Making: A Semi-Systematic Literature Review

Authors: H. W. Meerveld, R. H. A. Lindelauf

Abstract:

In contemporary warfare, data science is crucial for the military in achieving information superiority. Yet, to the authors' knowledge, no extensive literature survey on data science in military decision-making has been conducted so far. In this study, 156 peer-reviewed articles were analysed through an integrative, semi-systematic literature review to gain an overview of the topic. The study examined to what extent the literature focuses on the opportunities or risks of data science in military decision-making, differentiated per level of war (i.e., the strategic, operational, and tactical levels). A relatively large focus on the risks of data science was observed in the social science literature, implying that political and military policymakers are disproportionately influenced by a pessimistic view of the application of data science in the military domain. The perceived risks of data science are, however, hardly addressed in the formal science literature. This means that the concerns about the military application of data science are not addressed to the audience that can actually develop and enhance data science models and algorithms. Cross-disciplinary research on both the opportunities and risks of military data science can address the observed research gaps. Considering the levels of war, relatively little attention was paid to the operational level compared to the other two levels, suggesting a research gap with reference to military operational data science. Opportunities for military data science mostly arise at the tactical level. By contrast, studies examining strategic issues mostly emphasise the risks of military data science. Consequently, domain-specific requirements for military strategic data science applications are hardly expressed. Lacking such applications may ultimately lead to suboptimal strategic decisions in today's warfare.

Keywords: data science, decision-making, information superiority, literature review, military

Procedia PDF Downloads 145
24426 Legal Regulation of Personal Information Data Transmission Risk Assessment: A Case Study of the EU’s DPIA

Authors: Cai Qianyi

Abstract:

In the midst of the global digital revolution, the flow of data poses security threats that call China's existing legislative framework for protecting personal information into question. As a preliminary procedure for risk analysis and prevention, the risk assessment of personal data transmission lacks detailed guidelines for support. Existing provisions reveal unclear responsibilities for network operators and weakened rights for data subjects. Furthermore, the regulatory system's weak operability and a lack of industry self-regulation heighten data transmission hazards. This paper compares the regulatory pathways for data transmission risks between China and Europe from the perspectives of legal framework and content. It draws on the EU's Data Protection Impact Assessment (DPIA) guidelines to empower multiple stakeholders, including data processors, controllers, and subjects, while also defining their obligations. In conclusion, this paper intends to address China's digital security shortcomings by developing a more mature regulatory framework and industry self-regulation mechanisms, resulting in a win-win for personal data protection and the development of the digital economy.

Keywords: personal information data transmission, risk assessment, DPIA, internet service provider

Procedia PDF Downloads 37
24425 Wavelets Contribution on Textual Data Analysis

Authors: Habiba Ben Abdessalem

Abstract:

The emergence of giant sets of textual data has encouraged researchers to invest in this field. The purpose of textual data analysis methods is to facilitate access to such data by providing various graphic visualizations. Applying these methods requires a corpus pretreatment step, whose standards are set according to the objective of the problem studied. This step determines the list of forms contained in the contingency table by keeping only the information carriers. This step may, however, lead to noisy contingency tables, hence the use of a wavelet denoising function. The validity of the proposed approach is tested on a text database covering economic and political events in Tunisia over a well-defined period.
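
As an illustration of the denoising step, here is a minimal sketch with PyWavelets that applies soft thresholding to the detail coefficients of a (forms × documents) contingency table; the wavelet, decomposition level, and universal threshold are assumptions, not the paper's settings:

```python
import numpy as np
import pywt

def denoise_contingency_table(table: np.ndarray, wavelet: str = "haar", level: int = 2) -> np.ndarray:
    """Soft-threshold the 2-D wavelet detail coefficients of a contingency table."""
    coeffs = pywt.wavedec2(table, wavelet, level=level)
    # Noise scale estimated from the finest diagonal details (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(table.size))   # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="soft") for d in details)
        for details in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

rng = np.random.default_rng(0)
noisy = rng.poisson(5, size=(64, 64)).astype(float)   # toy forms-by-documents counts
clean = denoise_contingency_table(noisy)
```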

Keywords: textual data, wavelet, denoising, contingency table

Procedia PDF Downloads 264
24424 Numerical Analysis of the Response of Thin Flexible Membranes to Free Surface Water Flow

Authors: Mahtab Makaremi Masouleh, Günter Wozniak

Abstract:

This work is part of a major research project concerning the design of a light, temporarily installable textile flood control structure. The motivation for this work is the great need for light structures to protect coastal areas from the detrimental effects of rapid water runoff. The prime objective of the study is the numerical analysis of the interaction between free surface water flow and slender, pliable structures, which plays a key role in the safety performance of the intended system. First, the behavior of a down-scaled membrane is examined under hydrostatic pressure using the Abaqus explicit solver, part of the finite element based, commercially available SIMULIA software. Then the procedure to achieve a stable and convergent solution for strongly coupled media, including fluids and structures, is explained. A partitioned strategy is imposed so that both structures and fluids are discretized and solved with appropriate formulations and solvers. In this regard, the finite element method is again selected to analyze the structural domain, while computational fluid dynamics algorithms are employed for the flow domain by means of the commercial package Star-CCM+. The SIMULIA co-simulation engine and an implicit coupling algorithm, communication tools available in Star-CCM+, enable powerful transmission of data between the two applied codes. This approach is discussed for two different cases and compared with available experimental records. In one case, the down-scaled membrane interacts with open channel flow, where the flow velocity increases with time. The second case illustrates how the full-scale flexible flood barrier behaves when a massive piece of flotsam is accelerated towards it.

Keywords: finite element formulation, finite volume algorithm, fluid-structure interaction, light pliable structure, VOF multiphase model

Procedia PDF Downloads 171
24423 Customer Churn Analysis in Telecommunication Industry Using Data Mining Approach

Authors: Burcu Oralhan, Zeki Oralhan, Nilsun Sariyer, Kumru Uyar

Abstract:

Data mining has become more and more important, with a wide range of applications in recent years. Data mining is the process of finding hidden and unknown patterns in big data. One of the applied fields of data mining is Customer Relationship Management (CRM). Understanding the relationships between products and customers is crucial for every business. CRM is an approach focused on developing and retaining customer relationships and increasing customer satisfaction. In this study, we applied data mining methods to the customer relationship management side of telecommunication. This study aims to determine the profile of customers who are likely to leave the system, and to develop marketing strategies and customized campaigns for them. The data are clustered, and classification techniques are applied to identify the churners. As a result of this study, we obtain knowledge from the international telecommunication industry and contribute to the understanding and development of this subject in Customer Relationship Management.
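
As a hedged illustration of the kind of churn classification described (the abstract does not specify the features or algorithm used), here is a small scikit-learn sketch with made-up feature names:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical customer features: tenure (months), monthly bill,
# dropped-call rate, support tickets. Label: 1 = churned.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4))
y = (X[:, 2] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```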

Keywords: customer churn analysis, customer relationship management, data mining, telecommunication industry

Procedia PDF Downloads 296
24422 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis

Authors: N. R. N. Idris, S. Baharom

Abstract:

A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels. In this situation, both the IPD and AD should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the level of data on overall meta-analysis estimates based on IPD only, AD only, and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean square error (RMSE), and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as it significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not provide significant differences in the accuracy of the estimates. Additionally, combining the IPD and AD moderates the bias of the treatment effect estimates, as the IPD tends to overestimate the treatment effects while the AD tends to produce underestimated effect estimates. These results may provide some guidance in deciding whether a significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
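
For reference, the two accuracy measures named above are standard simulation metrics; for estimates $\hat{\theta}_k$ of a true effect $\theta$ over $K$ simulated replicates, they are usually written as follows (notation assumed here, not taken from the paper):

```latex
\mathrm{PRB} = 100 \times \frac{\frac{1}{K}\sum_{k=1}^{K}\hat{\theta}_k - \theta}{\theta},
\qquad
\mathrm{RMSE} = \sqrt{\frac{1}{K}\sum_{k=1}^{K}\left(\hat{\theta}_k - \theta\right)^{2}}
```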

Keywords: aggregate data, combined-level data, individual patient data, meta-analysis

Procedia PDF Downloads 359
24421 Analyzing On-Line Process Data for Industrial Production Quality Control

Authors: Hyun-Woo Cho

Abstract:

The monitoring of industrial production quality has to be implemented to give early warning of unusual operating conditions. Furthermore, identification of their assignable causes is necessary for quality control purposes. For such tasks, many multivariate statistical techniques have been applied and shown to be quite effective. This work presents a process data-based monitoring scheme for production processes. For more reliable results, additional steps of noise filtering and preprocessing are considered, which may enhance performance by eliminating unwanted variation in the data. The performance evaluation is executed using data sets from test processes. The proposed method is shown to provide reliable quality control results and is thus more effective for quality monitoring in the example. For practical implementation of the method, an on-line data system must be available to gather historical and on-line data. Recently, large amounts of data have been collected on-line in most processes, so implementation of the current scheme is feasible and does not place additional burdens on users.
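
A minimal sketch of one common variant of such a scheme (the abstract does not name its exact filter or monitoring statistic): Savitzky-Golay smoothing as the noise filter, followed by a PCA model and the Hotelling T² statistic for new samples. All settings here are assumptions:

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
train = rng.normal(size=(500, 10))                 # historical in-control process data
train = savgol_filter(train, window_length=11, polyorder=3, axis=0)  # noise filtering

scaler = StandardScaler().fit(train)
pca = PCA(n_components=3).fit(scaler.transform(train))

def hotelling_t2(sample: np.ndarray) -> float:
    """Hotelling T^2 of one observation in the PCA score space."""
    scores = pca.transform(scaler.transform(sample.reshape(1, -1)))[0]
    return float(np.sum(scores**2 / pca.explained_variance_))

print(hotelling_t2(rng.normal(size=10)))           # compare against a control limit
```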

Keywords: detection, filtering, monitoring, process data

Procedia PDF Downloads 542
24420 A Review of Travel Data Collection Methods

Authors: Muhammad Awais Shafique, Eiji Hato

Abstract:

Household trip data is of crucial importance for managing present transportation infrastructure as well as for planning and designing future facilities. It also provides the basis for new policies implemented under Transportation Demand Management. The methods used for household trip data collection have changed over time, starting with conventional face-to-face or paper-and-pencil interviews and arriving at the recent approach of employing smartphones. This study summarizes the step-wise evolution of travel data collection methods and provides a comprehensive review of the topic for readers interested in the changing trends in the data collection field.

Keywords: computer, smartphone, telephone, travel survey

Procedia PDF Downloads 295
24419 A Business-to-Business Collaboration System That Promotes Data Utilization While Encrypting Information on the Blockchain

Authors: Hiroaki Nasu, Ryota Miyamoto, Yuta Kodera, Yasuyuki Nogami

Abstract:

To promote initiatives such as Industry 4.0 and Society 5.0, it is important to connect and share data in a way that every member can trust. Blockchain (BC) technology is currently attracting attention as the most advanced tool for this and has been used in the financial field, among others. However, data collaboration using BC has not progressed sufficiently among companies in the manufacturing supply chain, which handle sensitive data such as product quality and manufacturing conditions. There are two main reasons why data utilization is not sufficiently advanced in the industrial supply chain. The first is that manufacturing information is top secret and a source of profit for companies; it is difficult to disclose data even between companies with transactions in the supply chain. In blockchain mechanisms such as Bitcoin that use PKI (Public Key Infrastructure), in order to confirm the identity of the company that sent the data, the plaintext must be shared between the companies. The second reason is that the merits (scenarios) of data collaboration between companies are not concretely specified in the industrial supply chain. For these problems, this paper proposes a Business-to-Business (B2B) collaboration system using homomorphic encryption and BC techniques. Using the proposed system, each company in the supply chain can exchange confidential information as encrypted data and utilize the data for its own business. In addition, this paper considers a scenario focusing on quality data, which have been difficult to share because they are top secret. In this scenario, we show an implementation scheme and the benefit of concrete data collaboration by proposing a comparison protocol that can grasp changes in quality while hiding the numerical values of the quality data.
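
To make the general idea concrete, here is a toy sketch with the Paillier additively homomorphic scheme (via the python-paillier package, `phe`). This illustrates the mechanism, not the authors' protocol: an untrusted party computes an encrypted difference of two quality readings without seeing either value, and only the key holder learns the change:

```python
from phe import paillier

# Key holder (e.g., the data-owning company) generates the key pair.
public_key, private_key = paillier.generate_paillier_keypair()

# Suppliers encrypt their quality readings before sharing them.
enc_q_before = public_key.encrypt(93.2)   # hypothetical quality score, lot 1
enc_q_after = public_key.encrypt(91.7)    # hypothetical quality score, lot 2

# Any party can compute the encrypted change without learning the values
# (Paillier supports addition of ciphertexts and scalar multiplication).
enc_delta = enc_q_after + (enc_q_before * -1)   # homomorphic subtraction

# Only the key holder decrypts, learning the change but not the raw scores
# it did not already hold.
print(private_key.decrypt(enc_delta))     # -> approximately -1.5
```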

Keywords: business to business data collaboration, industrial supply chain, blockchain, homomorphic encryption

Procedia PDF Downloads 114
24418 Multivariate Assessment of Mathematics Test Scores of Students in Qatar

Authors: Ali Rashash Alzahrani, Elizabeth Stojanovski

Abstract:

Data on various aspects of education are collected regularly at the institutional and government levels. In Australia, for example, students at various levels of schooling undertake examinations in numeracy and literacy as part of NAPLAN testing, enabling longitudinal assessment of such data as well as comparisons between schools and states within Australia. Another source of educational data collected internationally is the PISA study, which collects data from several countries when students are approximately 15 years of age and enables comparisons of performance in science, mathematics and English between countries, as well as rankings of countries based on performance in these standardised tests. As well as the student and school outcomes based on the tests taken as part of the PISA study, a wealth of other data is collected, including parental demographic data and data related to the teaching strategies used by educators. Overall, an abundance of educational data is available that has the potential to be used to help improve educational attainment and the teaching of content in order to improve learning outcomes. A multivariate assessment of such data enables multiple variables to be considered simultaneously and will be used in the present study to help develop profiles of students based on performance in mathematics, using data obtained from the PISA study.
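
A brief sketch of the profile-building step, using k-means clustering on standardized PISA-style scores; the number of clusters and the variables are illustrative assumptions, not the study's design:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# Hypothetical columns: overall maths score and two maths sub-domain scores.
scores = rng.normal(loc=500, scale=90, size=(2000, 3))

X = StandardScaler().fit_transform(scores)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Profile = per-cluster mean of the original variables.
for k in range(4):
    print(k, scores[kmeans.labels_ == k].mean(axis=0).round(1))
```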

Keywords: cluster analysis, education, mathematics, profiles

Procedia PDF Downloads 109
24417 A Preliminary Study of Urban Resident Space Redundancy in the Context of Rapid Urbanization: Based on Urban Research of Hongkou District of Shanghai

Authors: Ziwei Chen, Yujiang Gao

Abstract:

Rapid urbanization has caused massive physical space in Chinese cities to fall into a state of duplication and dislocation through rapid development, forming many everyday spaces that cannot be standardized, typed, or identified, such as illegal construction. This phenomenon is known as urban spatial redundancy and is often excluded from mainstream architectural discussions because of its derogatory labels of being 'remaining' and 'excessive'. In recent years, some practicing architects have begun to pay attention to this phenomenon and have tried to tap the value behind it. In this context, the author takes the redundancy of resident space as the research object and, based on an urban survey of redundant living space in Hongkou District of Shanghai, explores what it can teach us about urban architectural renewal and innovative residential area models. On this basis, it shows that the changes accumulated in the long-term use of a building can be fed back into the goals set before design, which is an important aspect and significance of the existence of architecture.

Keywords: rapid urbanization, living space redundancy, architectural renewal, residential area model

Procedia PDF Downloads 119
24416 Dataset Quality Index: Development of Composite Indicator Based on Standard Data Quality Indicators

Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros

Abstract:

Nowadays, poor data quality is considered one of the major costs of a data project. A data project with data quality awareness devotes considerable time to data quality processes, while a data project without such awareness suffers negative impacts on financial resources, efficiency, productivity, and credibility. One of the processes that takes a long time is defining the expectations and measurements of data quality, because expectations differ according to the purpose of each data project. In particular, a big data project may involve many datasets and stakeholders, requiring a long time to discuss and define quality expectations and measurements. Therefore, this study aimed at developing meaningful indicators that describe the overall data quality of each dataset for quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that can describe the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were taken to develop the Dataset Quality Index (SDQI). First, we defined standard data quality expectations. Second, we identified indicators that can be measured directly from the data within the datasets. Third, the indicators were aggregated into dimensions using factor analysis. Next, the indicators and dimensions were weighted by the effort required for the data preparation process and by usability. Finally, the dimensions were aggregated into the composite indicator. The results of these analyses showed that: (1) ten useful indicators and measurements were developed; (2) in developing the data quality dimensions based on statistical characteristics, the ten indicators could be reduced to four dimensions; and (3) the developed composite indicator, the SDQI, can describe the overall quality of each dataset and separate datasets into three levels: Good Quality, Acceptable Quality, and Poor Quality. In conclusion, the SDQI provides an overall description of data quality within datasets and a meaningful composition. The SDQI can be used to assess all data in a data project, for effort estimation, and for prioritization. It also works well with agile methods, for example by using the SDQI for assessment in the first sprint; after passing the initial evaluation, more specific data quality indicators can be added in the next sprint.
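
The aggregation pipeline described above (indicators, then factor-analysis dimensions, then a weighted composite) can be sketched as follows. The counts follow the abstract (ten indicators, four dimensions, three levels); the weights and everything else are assumptions:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
indicators = rng.random(size=(500, 10))          # 10 quality indicators per dataset

X = StandardScaler().fit_transform(indicators)
fa = FactorAnalysis(n_components=4, random_state=0)
dimensions = fa.fit_transform(X)                 # 4 latent quality dimensions

weights = np.array([0.4, 0.3, 0.2, 0.1])         # hypothetical effort/usability weights
sdqi = dimensions @ weights                      # composite index per dataset

# Cut the composite score into three quality levels, as in the abstract.
levels = np.digitize(sdqi, np.quantile(sdqi, [1/3, 2/3]))
print(dict(zip(["Poor", "Acceptable", "Good"], np.bincount(levels))))
```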

Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis

Procedia PDF Downloads 121
24415 Predictive Analysis for Big Data: Extension of Classification and Regression Trees Algorithm

Authors: Ameur Abdelkader, Abed Bouarfa Hafida

Abstract:

Since its inception, predictive analysis has revolutionized the IT industry through its robustness and decision-making facilities. It involves the application of a set of data processing techniques and algorithms in order to create predictive models. Its principle is based on finding relationships between explanatory variables and predicted variables: past occurrences are exploited to predict and derive the unknown outcome. With the advent of big data, many studies have suggested the use of predictive analytics to process and analyze big data. Nevertheless, they have been curbed by the limits of classical methods of predictive analysis when dealing with large amounts of data. In fact, because of their volume, their nature (semi-structured or unstructured), and their variety, it is impossible to analyze big data efficiently via classical methods of predictive analysis. The authors attribute this weakness to the fact that predictive analysis algorithms do not allow the parallelization and distribution of computation. In this paper, we propose to extend the predictive analysis algorithm Classification And Regression Trees (CART) in order to adapt it for big data analysis. The major changes to this algorithm are presented, and a version of the extended algorithm is then defined to make it applicable to huge quantities of data.
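
The core idea behind parallelizing CART is that the statistics needed to score a split (class counts on each side) are additive across data partitions, so they can be computed map-reduce style. A small sketch for one numeric feature and the Gini criterion, illustrating the principle rather than the authors' algorithm:

```python
import numpy as np

def partition_counts(x, y, threshold, n_classes=2):
    """Map step: class counts left/right of the split, for one data partition."""
    left = np.bincount(y[x <= threshold], minlength=n_classes)
    right = np.bincount(y[x > threshold], minlength=n_classes)
    return left, right

def gini(counts):
    n = counts.sum()
    return 1.0 - ((counts / n) ** 2).sum() if n else 0.0

def split_impurity(partials):
    """Reduce step: aggregate partition counts, then score the split."""
    left = sum(p[0] for p in partials)
    right = sum(p[1] for p in partials)
    n = left.sum() + right.sum()
    return (left.sum() / n) * gini(left) + (right.sum() / n) * gini(right)

rng = np.random.default_rng(5)
x, y = rng.random(10_000), rng.integers(0, 2, 10_000)
# Each chunk could live on a different worker/node; results combine additively.
chunks = [(x[i::4], y[i::4]) for i in range(4)]
partials = [partition_counts(cx, cy, threshold=0.5) for cx, cy in chunks]
print(split_impurity(partials))
```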

Keywords: predictive analysis, big data, predictive analysis algorithms, CART algorithm

Procedia PDF Downloads 128
24414 Forecasting Stock Indexes Using Bayesian Additive Regression Tree

Authors: Darren Zou

Abstract:

Forecasting the stock market is a very challenging task. Various economic indicators such as GDP, exchange rates, interest rates, and unemployment have a substantial impact on the stock market. Time series models are the traditional methods used to predict stock market changes. In this paper, a machine learning method, Bayesian Additive Regression Trees (BART), is used to predict stock market indexes based on multiple economic indicators. BART can model heterogeneous treatment effects and thereby works well when models are misspecified. It also has the capability to handle non-linear main effects and multi-way interactions without much input from financial analysts. In this research, BART is proposed to provide reliable predictions of day-to-day stock market activity. Comparing the analysis results from BART with those of time series methods shows that BART performs well and has better prediction capability than the traditional methods.
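
For reference, BART models the response as a sum of m regression trees with regularizing priors, in the standard notation of Chipman, George and McCulloch:

```latex
y_i = \sum_{j=1}^{m} g\left(x_i;\, T_j, M_j\right) + \varepsilon_i,
\qquad \varepsilon_i \sim \mathcal{N}\left(0, \sigma^{2}\right)
```

where $T_j$ is the $j$-th tree structure and $M_j$ the set of its leaf values.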

Keywords: BART, Bayesian, predict, stock

Procedia PDF Downloads 108
24413 Canopy Temperature Acquired from Daytime and Nighttime Aerial Data as an Indicator of Trees’ Health Status

Authors: Agata Zakrzewska, Dominik Kopeć, Adrian Ochtyra

Abstract:

The growing number of new cameras, sensors, and research methods allows for broader application of thermal data in remote sensing vegetation studies. The aim of this research was to check whether thermal infrared data with a spectral range of 3.6-4.9 μm, obtained during the day and at night, can be used to assess the health condition of selected species of deciduous trees in an urban environment. For this purpose, research was carried out in the city center of Warsaw (Poland) in 2020. During the airborne data acquisition, thermal data, laser scanning data, and orthophoto images were collected. Synchronously with the airborne data, ground reference data were obtained for 617 studied trees of five species (Acer platanoides, Acer pseudoplatanus, Aesculus hippocastanum, Tilia cordata, and Tilia × euchlora) in different states of health. The results were as follows: (i) healthy trees are cooler than trees in poor condition or dying in both the daytime and nighttime data; (ii) the difference in mean canopy temperature between healthy and dying trees was 1.06°C in the nighttime data and 3.28°C in the daytime data; (iii) condition classes differed significantly in both daytime and nighttime thermal data, but only in the daytime data did all condition classes differ statistically significantly from each other. In conclusion, aerial thermal data, especially data obtained during the day, which differentiate condition classes better than data obtained at night, can be considered an alternative to hyperspectral data for assessing the health condition of trees in an urban environment. A method based on the fusion of thermal infrared and laser scanning data could be a quick and efficient solution for identifying trees in poor health that should be visually checked in the field.

Keywords: mid-wave infrared, thermal imagery, tree discoloration, urban trees

Procedia PDF Downloads 100
24412 Harmonization of Financial Information Systems in Latin America in Light of International Public Sector Accounting Standards Using the Herfindahl-Hirschman Index

Authors: Laura Sour

Abstract:

Government accounting is an essential instrument of transparency and accountability in public administration, connecting internal management with the implementation of policies and their evaluation by third parties through the construction of indicators on the cost of government. Several countries have adopted the International Public Sector Accounting Standards (IPSAS) as part of their modernization strategies. This document evaluates the quantity and harmonization of the financial information published in the financial statements of 12 Latin American countries based on what is established in IPSAS 1, 2, and 17. For this, seven types of financial statements published during the period from 2015 to 2019 are analyzed. Based on this information, it is possible to describe the evolution of government financial publication and to carry out a detailed analysis of the items that have been most transparent in these countries. Finally, the level of harmonization of the financial statements is studied using the Herfindahl-Hirschman index (HHI) to determine the degree of comparability of the information. To date, the results indicate that the public sector has increased the quantity and harmonization of the financial information published during the study period, but in a heterogeneous way. From the data collected, the financial statement published with the greatest frequency and quantity is the income statement (classification of expenses by nature). On the other hand, the most complete reports were published by Costa Rica (2017 to 2019) and Mexico (2016 to 2018), periods during which these countries complied with 92.9 percent of the items analyzed. Although 2017 and 2018 are the years in which the most financial statements were reported, it is important to mention that Mexico is the country that published the most financial information throughout the entire study period. The use of the HHI is expected to provide accurate information on the quality with which countries have adopted IPSAS within their government accounting systems, promoting transparency and accountability in the continent.
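
The Herfindahl-Hirschman index itself is simply the sum of squared shares; in harmonization studies it is typically applied with s_i as the share of countries (or statements) using accounting treatment i for a given item, so that a value of 1 indicates full comparability. A quick sketch (the formula is standard; the application detail is an assumption about this paper):

```python
def herfindahl_hirschman(shares):
    """HHI = sum of squared shares; 1.0 means every country uses the same treatment."""
    total = sum(shares)
    return sum((s / total) ** 2 for s in shares)

# Hypothetical item: 8 countries use treatment A, 3 use B, 1 uses C.
print(round(herfindahl_hirschman([8, 3, 1]), 3))   # -> 0.514, moderately harmonized
```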

Keywords: accounting and auditing, government policy and regulation, harmonization, public sector accounting and audits, IPSAS

Procedia PDF Downloads 73
24411 An Examination of Low Engagement in a Group-Based ACT Intervention for Chronic Pain Management: Highlighting the Need for User-Attainment Focused Digitalised Interventions

Authors: Orestis Kasinopoulos, Maria Karekla, Vasilis Vasiliou, Evangelos Karademas

Abstract:

Acceptance and Commitment Therapy (ACT) is an empirically supported intervention for treating chronic pain patients, yet its effectiveness for some chronic conditions, or when adapted to other languages, has not been explored. An ACT group intervention was designed to explore the effectiveness of treating a heterogeneous sample of Greek-speaking chronic pain patients, with the aim of increasing quality of life, acceptance of pain, and functionality. Sixty-nine patients were assessed and randomly assigned to an ACT or control group (relaxation techniques) for eight 90-minute sessions. Results are currently being analysed, and follow-ups (6 and 12 months) are being completed. The low adherence and high attrition rates observed in the study, however, point toward future modified interventions. Such modifications may include web-based and smartphone interventions, which offer benefits for implementation with chronic pain patients.

Keywords: chronic pain, ACT, internet-delivered, digitalised intervention, adherence, attrition

Procedia PDF Downloads 349