Search results for: spatial and temporal data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26821

24751 Survival Data with Incomplete Missing Categorical Covariates

Authors: Madaki Umar Yusuf, Mohd Rizam B. Abubakar

Abstract:

Censored survival data with incomplete covariate information are a common occurrence in studies in which the outcome is survival time. When the missing covariates are categorical, a useful technique for obtaining parameter estimates is the EM algorithm by the method of weights. The survival outcome is modelled within the class of generalized linear models, and the method requires estimating the parameters of the distribution of the covariates. In this paper, we apply the approach to clinical trial data with five covariates, four of which have missing values, in otherwise fully censored data.
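
The EM "method of weights" named in the abstract can be sketched in miniature: each observation with a missing categorical covariate is expanded into one weighted pseudo-observation per candidate category, and the weights are updated from the current parameter estimates. The toy data, the exponential hazard, and the parameter names below are illustrative assumptions, not the paper's Weibull formulation.

```python
# Toy EM-by-weights for an exponential survival model with one binary
# covariate z that is missing for some subjects.
import math

# (time, event indicator, covariate z in {0, 1} or None if missing)
data = [(5, 1, 0), (6, 1, 0), (7, 1, 0), (8, 1, 0),
        (1, 1, 1), (1, 1, 1), (2, 1, 1), (2, 1, 1),
        (1.1, 1, None), (6.1, 1, None)]

def density(t, d, lam):
    # exponential likelihood contribution: lam^d * exp(-lam * t)
    return (lam ** d) * math.exp(-lam * t)

def em_method_of_weights(data, iters=100):
    lam = [0.5, 0.5]   # hazard for each covariate level
    pi = 0.5           # P(z = 1), the covariate distribution parameter
    for _ in range(iters):
        # E-step: one weighted pseudo-observation per candidate category
        pseudo = []    # entries (t, d, z, weight)
        for t, d, z in data:
            if z is not None:
                pseudo.append((t, d, z, 1.0))
            else:
                w0 = density(t, d, lam[0]) * (1 - pi)
                w1 = density(t, d, lam[1]) * pi
                tot = w0 + w1
                pseudo.append((t, d, 0, w0 / tot))
                pseudo.append((t, d, 1, w1 / tot))
        # M-step: weighted maximum-likelihood updates
        for z in (0, 1):
            ev = sum(w * d for t, d, zz, w in pseudo if zz == z)
            ex = sum(w * t for t, d, zz, w in pseudo if zz == z)
            lam[z] = ev / ex
        pi = sum(w for t, d, zz, w in pseudo if zz == 1) / len(data)
    return lam, pi
```

With these data, the short survival times in group 1 yield a higher estimated hazard for that group, and the two unobserved covariates are apportioned between groups by their posterior weights.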

Keywords: EM algorithm, incomplete categorical covariates, ignorable missing data, missing at random (MAR), Weibull Distribution

Procedia PDF Downloads 405
24750 A Study of Blockchain Oracles

Authors: Abdeljalil Beniiche

Abstract:

A limitation of smart contracts is that they cannot access external data that may be required to control the execution of business logic. Oracles can be used to provide this external data: an oracle is an interface that delivers data from sources outside the blockchain to a smart contract for consumption. Oracles can deliver different types of data depending on the industry and requirements. In this paper, we study and describe the most widely used blockchain oracles. We then elaborate on their potential roles, technical architectures, and design patterns. Finally, we discuss the human oracle and its key role in solving the truth problem by reaching consensus about a given inquiry or task.
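
The oracle pattern described above can be sketched as follows: the contract itself never reaches off-chain, so an oracle fetches the external value, attests to it, and pushes it into the contract through a callback. All class and field names here are hypothetical, and the hash-based attestation is a stand-in for the signing schemes real oracle services use.

```python
# Minimal sketch of the oracle pattern for feeding off-chain data
# into a smart contract.
import hashlib

class PriceContract:
    """Stands in for on-chain logic that consumes oracle data."""
    def __init__(self, trusted_oracle_id):
        self.trusted_oracle_id = trusted_oracle_id
        self.price = None

    def oracle_callback(self, oracle_id, value, attestation):
        if oracle_id != self.trusted_oracle_id:
            raise PermissionError("untrusted oracle")
        expected = hashlib.sha256(f"{oracle_id}:{value}".encode()).hexdigest()
        if attestation != expected:
            raise ValueError("bad attestation")
        self.price = value          # business logic would run here

class Oracle:
    def __init__(self, oracle_id, fetch_fn):
        self.oracle_id = oracle_id
        self.fetch_fn = fetch_fn    # off-chain data source

    def deliver(self, contract):
        value = self.fetch_fn()
        attestation = hashlib.sha256(
            f"{self.oracle_id}:{value}".encode()).hexdigest()
        contract.oracle_callback(self.oracle_id, value, attestation)

oracle = Oracle("oracle-1", fetch_fn=lambda: 101.5)   # stubbed price feed
contract = PriceContract("oracle-1")
oracle.deliver(contract)
```

A human oracle fits the same interface: `fetch_fn` would be replaced by a consensus over answers submitted by people.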

Keywords: blockchain, oracles, oracles design, human oracles

Procedia PDF Downloads 135
24749 Investigating Early Markers of Alzheimer’s Disease Using a Combination of Cognitive Tests and MRI to Probe Changes in Hippocampal Anatomy and Functionality

Authors: Netasha Shaikh, Bryony Wood, Demitra Tsivos, Michael Knight, Risto Kauppinen, Elizabeth Coulthard

Abstract:

Background: Effective treatment of dementia will require early diagnosis, before significant brain damage has accumulated. Memory loss is an early symptom of Alzheimer’s disease (AD). The hippocampus, a brain area critical for memory, degenerates early in the course of AD. The hippocampus comprises several subfields. In contrast to healthy aging where CA3 and dentate gyrus are the hippocampal subfields with most prominent atrophy, in AD the CA1 and subiculum are thought to be affected early. Conventional clinical structural neuroimaging is not sufficiently sensitive to identify preferential atrophy in individual subfields. Here, we will explore the sensitivity of new magnetic resonance imaging (MRI) sequences designed to interrogate medial temporal regions as an early marker of Alzheimer’s. As it is likely a combination of tests may predict early Alzheimer’s disease (AD) better than any single test, we look at the potential efficacy of such imaging alone and in combination with standard and novel cognitive tasks of hippocampal dependent memory. Methods: 20 patients with mild cognitive impairment (MCI), 20 with mild-moderate AD and 20 age-matched healthy elderly controls (HC) are being recruited to undergo 3T MRI (with sequences designed to allow volumetric analysis of hippocampal subfields) and a battery of cognitive tasks (including Paired Associative Learning from CANTAB, Hopkins Verbal Learning Test and a novel hippocampal-dependent abstract word memory task). AD participants and healthy controls are being tested just once whereas patients with MCI will be tested twice a year apart. We will compare subfield size between groups and correlate subfield size with cognitive performance on our tasks. In the MCI group, we will explore the relationship between subfield volume, cognitive test performance and deterioration in clinical condition over a year. 
Results: Preliminary data (currently on 16 participants: 2 AD; 4 MCI; 9 HC) have revealed subfield size differences between subject groups. Patients with AD perform less accurately on tasks of hippocampal-dependent memory, and MCI patients' performance and reaction times also differ from those of healthy controls. With further testing, we hope to delineate how subfield-specific atrophy corresponds to changes in cognitive function, and to characterise how this progresses over the time course of the disease. Conclusion: Novel MRI sequences, such as those entering routine clinical use, can be used to delineate hippocampal subfields in patients with and without dementia. Preliminary data suggest that such subfield analysis, perhaps in combination with cognitive tasks, may provide an early marker of AD.

Keywords: Alzheimer's disease, dementia, memory, cognition, hippocampus

Procedia PDF Downloads 573
24748 Modeling Karachi Dengue Outbreak and Exploration of Climate Structure

Authors: Syed Afrozuddin Ahmed, Junaid Saghir Siddiqi, Sabah Quaiser

Abstract:

Various studies have reported that global warming destabilizes climate, with serious impacts on the physical environment and public health. The increasing incidence of dengue fever is now a priority health issue and a growing health burden in Pakistan. This study investigates whether the spatial pattern of the environment contributes to the emergence or increasing rate of dengue fever incidence affecting the population and its health. The climatic (environmental) data and the dengue fever (DF) data were processed by coding, editing, tabulating, recoding, and restructuring, and different statistical methods, techniques, and procedures were then applied for the evaluation. Five climatic variables were studied: precipitation (P), maximum temperature (Mx), minimum temperature (Mn), humidity (H), and wind speed (W), collected for 1980-2012. Dengue cases in Karachi from 2010 to 2012 were reported on a weekly basis. Principal component analysis was applied to explore the climatic variables, and the climatic structure, that may influence the increase or decrease in the number of dengue fever cases in Karachi. PC1, for all periods, represents the general atmospheric condition; PC2, for the dengue period, is a contrast between precipitation and wind speed; PC3 is the weighted difference between maximum temperature and wind speed; and PC4, for the dengue period, is a contrast between maximum temperature and wind speed. Negative binomial and Poisson regression models were used to relate dengue fever incidence to the climatic variables and the principal component scores. Relative humidity is estimated to increase the chances of dengue occurrence by 1.71%, maximum temperature by 19.48%, and minimum temperature by 11.51%, while wind speed decreases the chances of weekly dengue fever occurrence by 7.41%.
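
The PCA step described above (scores and variance shares extracted from the standardized climate variables) can be sketched with a singular value decomposition. The small weekly matrix below is invented for illustration, with columns in the order P, Mx, Mn, H, W; the study itself used station data for 1980-2012.

```python
# Principal-component scores of standardized climate variables via SVD.
import numpy as np

X = np.array([[10., 34., 22., 60., 12.],
              [ 2., 36., 24., 55., 10.],
              [ 0., 38., 26., 50.,  9.],
              [25., 30., 21., 75., 14.],
              [ 5., 35., 23., 58., 11.],
              [15., 32., 22., 66., 13.]])

Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # standardize columns
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
scores = Xs @ Vt.T                 # PC scores, one column per component
explained = S**2 / np.sum(S**2)    # share of variance per component
```

The `scores` columns are what would then enter the negative binomial or Poisson regression as predictors, and the rows of `Vt` (the loadings) are what get interpreted as "general atmospheric condition", "precipitation versus wind speed", and so on.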

Keywords: principal component analysis, dengue fever, negative binomial regression model, Poisson regression model

Procedia PDF Downloads 445
24747 The Correlation between Eye Movements, Attentional Shifting, and Driving Simulator Performance among Adolescents with Attention Deficit Hyperactivity Disorder

Authors: Navah Z. Ratzon, Anat Keren, Shlomit Y. Greenberg

Abstract:

Car accidents are a problem worldwide. Adolescents' involvement in car accidents is higher than that of the overall driving population. Researchers estimate the risk of accidents among adolescents with symptoms of attention-deficit/hyperactivity disorder (ADHD) to be 1.2 to 4 times higher than that of their peers. Individuals with ADHD exhibit unique patterns of eye movements and attentional shifts that play an important role in driving. In addition, deficiencies in cognitive and executive functions among adolescents with ADHD are likely to put them at greater risk for car accidents. Fifteen adolescents with ADHD and 17 matched controls participated in the study. Individuals from both groups attended local public schools and did not have a driver's license. Participants' mean age was 16.1 (SD=.23). As part of the experiment, they all completed a driving simulation session while their eye movements were monitored. Data were recorded by an eye tracker: the entire driving session was recorded, registering the tester's exact gaze position directly on the screen. Eye movements and simulator data were analyzed using Matlab (Mathworks, USA). Participants' cognitive and metacognitive abilities were evaluated as well. No correlation was found between saccade properties, regions of interest, and simulator performance in either group, although participants with ADHD allocated more visual scan time (25%, SD = .13%) to a smaller segment of the dashboard area, whereas controls scanned the monitor more evenly (15%, SD = .05%). The visual scan pattern found among participants with ADHD indicates a distinct pattern of engagement-disengagement of spatial attention compared to that of non-ADHD participants, as well as lower attention flexibility, which likely affects driving. Additionally, the lower the scores on the cognitive tests, the worse the driving performance was.
None of the participants had prior driving experience, yet participants with ADHD distinctly demonstrated difficulties in scanning their surroundings, which may impair driving. This stresses the need to consider intervention programs, before driving lessons begin, to help adolescents with ADHD acquire proper driving habits, avoid typical driving errors, and achieve safer driving.

Keywords: ADHD, attentional shifting, driving simulator, eye movements

Procedia PDF Downloads 329
24746 Multi Data Management Systems in a Cluster Randomized Trial in Poor Resource Setting: The Pneumococcal Vaccine Schedules Trial

Authors: Abdoullah Nyassi, Golam Sarwar, Sarra Baldeh, Mamadou S. K. Jallow, Bai Lamin Dondeh, Isaac Osei, Grant A. Mackenzie

Abstract:

A randomized controlled trial is the "gold standard" for evaluating the efficacy of an intervention. Large-scale, cluster-randomized trials, however, are expensive and difficult to conduct. To guarantee the validity and generalizability of findings, high-quality, dependable, and accurate data management systems are necessary. Robust data management systems are crucial for optimizing and validating the quality, accuracy, and dependability of trial data. There is little published literature on the difficulties of data collection in clinical trials in low-resource settings. Effective data management systems and implementation goals should be part of trial procedures, and publicizing the innovative clinical data management techniques used in clinical trials should boost public confidence in study conclusions and encourage replication. This report details the development and deployment of multiple data management systems and methodologies in the ongoing pneumococcal vaccine schedules trial in rural Gambia. We implemented six different data management, synchronization, and reporting systems using Microsoft Access, REDCap, SQL, Visual Basic, Ruby, and ASP.NET. Data synchronization tools were also developed to integrate data from these systems into a central server for reporting. Clinician, laboratory, and field data validation systems and methodologies are the main topics of this report. Our process development efforts across all domains were driven by the complexity of research data collected in real time, online reporting, data synchronization, and procedures for cleaning and verifying data. We effectively used multiple data management systems, demonstrating the value of creative approaches in enhancing the consistency, accuracy, and reporting of trial data in a poor-resource setting.
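
The data validation step mentioned above (cleaning and verifying records before they reach the central server) can be sketched as a rule set applied per record. The field names, ranges, and cross-field check below are hypothetical examples, not the trial's actual schema.

```python
# Sketch of field-data validation rules run before synchronization.
from datetime import date

def validate_record(rec):
    errors = []
    # range checks
    if not (0 <= rec.get("age_months", -1) <= 60):
        errors.append("age_months out of 0-60 range")
    if rec.get("temperature_c") is not None and \
            not (30.0 <= rec["temperature_c"] <= 43.0):
        errors.append("implausible temperature")
    # cross-field consistency
    if rec["visit_date"] < rec["enrolment_date"]:
        errors.append("visit precedes enrolment")
    return errors

good = {"age_months": 12, "temperature_c": 37.2,
        "enrolment_date": date(2021, 1, 4), "visit_date": date(2021, 3, 1)}
bad = {"age_months": 80, "temperature_c": 20.0,
       "enrolment_date": date(2021, 5, 1), "visit_date": date(2021, 3, 1)}
```

In a multi-system setup, running the same rule set on each source system and again after synchronization helps catch discrepancies introduced in transit.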

Keywords: data management, data collection, data cleaning, cluster-randomized trial

Procedia PDF Downloads 27
24745 Geospatial Multi-Criteria Evaluation to Predict Landslide Hazard Potential in the Catchment of Lake Naivasha, Kenya

Authors: Abdel Rahman Khider Hassan

Abstract:

This paper describes a multi-criteria geospatial model for prediction of landslide hazard zonation (LHZ) for the Lake Naivasha catchment (Kenya), based on spatial analysis of integrated datasets of location-intrinsic parameters (slope stability factors) and external landslide-triggering factors (natural and man-made). The intrinsic dataset included lithology, slope geometry (inclination, aspect, elevation, and curvature), and land use/land cover. The triggering factors included rainfall as the climatic factor, in addition to the destructive effects reflected by the proximity of roads and the drainage network to areas susceptible to landslides. No published study on landslides has been obtained for this area. Digital datasets of the above spatial parameters were therefore acquired, stored, manipulated, and analyzed in a Geographical Information System (GIS) using a multi-criteria grid overlay technique (in an ArcGIS 10.2.2 environment). The landslide hazard zonation is deduced by applying weights based on the relative contribution of each parameter to slope instability; finally, the weighted parameter grids were overlaid to generate a map of the potential landslide hazard zonation (LHZ) for the lake catchment. Of the total 3200 km² surface of the lake catchment, most of the region (78.7%; 2518.4 km²) is susceptible to moderate landslide hazard, whilst about 13% (416 km²) falls under high hazard. Only 1.0% (32 km²) of the catchment displays very high landslide hazard, and the remaining area (7.3%; 233.6 km²) displays a low probability of landslide hazard. This result confirms the importance of steep slope angles, lithology, vegetation land cover, and slope orientation (aspect) as the major determining factors of slope failure.
The information provided by the produced map of landslide hazard zonation (LHZ) could lay the basis for decision making as well as mitigation and applications in avoiding potential losses caused by landslides in the Lake Naivasha catchment in the Kenya Highlands.
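
The weighted grid-overlay step at the heart of the model can be sketched as follows: each factor raster holds a susceptibility score, the weights reflect the relative contribution of each factor to slope instability, and class breaks on the weighted sum yield the hazard zonation. The scores, weights, and breaks below are illustrative, not the study's calibrated values.

```python
# Sketch of multi-criteria weighted overlay for landslide hazard zonation.
import numpy as np

# tiny 2x2 rasters, each cell scored 1 (low) to 4 (high susceptibility)
slope     = np.array([[1, 3], [4, 2]], dtype=float)
lithology = np.array([[2, 2], [4, 1]], dtype=float)
rainfall  = np.array([[1, 3], [4, 3]], dtype=float)

weights = {"slope": 0.5, "lithology": 0.3, "rainfall": 0.2}  # sum to 1
lhz_index = (weights["slope"] * slope
             + weights["lithology"] * lithology
             + weights["rainfall"] * rainfall)

# classes: 0 low, 1 moderate, 2 high, 3 very high
hazard_class = np.digitize(lhz_index, bins=[1.5, 2.5, 3.5])
```

In the GIS workflow the same arithmetic runs over full catchment rasters, and the class areas (low, moderate, high, very high) are then tallied as in the percentages reported above.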

Keywords: decision making, geospatial, landslide, multi-criteria, Naivasha

Procedia PDF Downloads 206
24744 Implementation and Design of Fuzzy Controller for High Performance Dc-Dc Boost Converters

Authors: A. Mansouri, F. Krim

Abstract:

This paper discusses the design and implementation of both linear PI and fuzzy controllers for DC-DC boost converters. The design of the PI controllers is based on the temporal response of the closed-loop converter, while the fuzzy controller design is based on heuristic knowledge of boost converters. Linear controller implementation is quite straightforward, relying on mathematical models, while fuzzy controller implementation employs artificial intelligence techniques. The two controllers are compared with respect to design. Experimental results show that the proposed fuzzy control system is robust against input voltage and load resistance changes, as well as during start-up transients. The results indicate that the fuzzy controller achieves the best control performance: faster transient response, good steady-state response, and stability and accuracy under different operating conditions. The fuzzy controller is therefore more suitable for controlling boost converters.
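
A heuristic fuzzy controller of the kind described can be sketched as a small rule base on the normalized voltage error and its change, with singleton outputs and weighted-average defuzzification. The memberships, rule table, and scaling below are illustrative assumptions, not the paper's tuned design.

```python
# Minimal Mamdani-style fuzzy step for a boost converter's duty cycle.
def clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

def memberships(x):
    # triangular sets over a signal normalized to [-1, 1]
    return {"N": clamp(-x), "Z": clamp(1.0 - abs(x)), "P": clamp(x)}

# rule table: (error label, change-of-error label) -> output singleton
RULES = {("N", "N"): -1.0, ("N", "Z"): -0.7, ("N", "P"): -0.3,
         ("Z", "N"): -0.3, ("Z", "Z"):  0.0, ("Z", "P"):  0.3,
         ("P", "N"):  0.3, ("P", "Z"):  0.7, ("P", "P"):  1.0}

def fuzzy_duty_step(error, derror):
    """Normalized duty-cycle increment in [-1, 1]."""
    mu_e, mu_de = memberships(error), memberships(derror)
    num = den = 0.0
    for (e_lbl, de_lbl), out in RULES.items():
        w = mu_e[e_lbl] * mu_de[de_lbl]   # product inference
        num += w * out
        den += w
    return num / den if den else 0.0      # weighted-average defuzzification
```

Unlike the PI controller, no converter model appears anywhere: the rule base encodes the heuristic "push the duty cycle in the direction that reduces the voltage error, harder when the error is large and growing".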

Keywords: boost DC-DC converter, fuzzy, PI controllers, power electronics and control system

Procedia PDF Downloads 475
24743 Finding Bicluster on Gene Expression Data of Lymphoma Based on Singular Value Decomposition and Hierarchical Clustering

Authors: Alhadi Bustaman, Soeganda Formalidin, Titin Siswantining

Abstract:

DNA microarray technology is used to analyze thousands of gene expression values simultaneously, a task that is very important for drug development and testing, function annotation, and cancer diagnosis. Various clustering methods have been used for analyzing gene expression data. However, when analyzing very large and heterogeneous collections of gene expression data, conventional clustering methods often cannot produce a satisfactory solution. Biclustering algorithms have been used as an alternative approach to identifying structures in gene expression data. In this paper, we introduce a transformation technique based on singular value decomposition to identify a normalized matrix of gene expression data, followed by the Mixed-Clustering algorithm and the Lift algorithm, inspired by the node-deletion and node-addition phases proposed by Cheng and Church, based on Agglomerative Hierarchical Clustering (AHC). An experimental study on standard datasets demonstrated the effectiveness of the algorithm on gene expression data.
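
The SVD step underlying such biclustering can be sketched as follows: the leading singular vector pair highlights a coherent gene-by-condition submatrix. A planted constant block stands in for real lymphoma expression data, and the downstream pipeline (Mixed-Clustering, Lift, AHC) is not reproduced here.

```python
# SVD-based detection of a planted bicluster in a toy expression matrix.
import numpy as np

X = np.zeros((8, 6))        # rows = genes, columns = conditions
X[2:5, 1:4] = 5.0           # planted bicluster: genes 2-4, conditions 1-3

U, S, Vt = np.linalg.svd(X, full_matrices=False)
# entries of the leading singular vectors are large exactly where the
# block lives (absolute value handles the SVD sign ambiguity)
bic_rows = np.where(np.abs(U[:, 0]) > 0.3)[0]
bic_cols = np.where(np.abs(Vt[0, :]) > 0.3)[0]
```

On noisy data the thresholding would be replaced by the iterative node-deletion/addition refinement of Cheng and Church, but the singular vectors still provide the starting structure.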

Keywords: agglomerative hierarchical clustering (AHC), biclustering, gene expression data, lymphoma, singular value decomposition (SVD)

Procedia PDF Downloads 278
24742 An Efficient Traceability Mechanism in the Audited Cloud Data Storage

Authors: Ramya P, Lino Abraham Varghese, S. Bose

Abstract:

With cloud storage services, data can be stored in the cloud and shared across multiple users. Unexpected hardware/software failures and human errors can easily cause data stored in the cloud to be lost or corrupted, affecting its integrity. Mechanisms have been designed that allow both data owners and public verifiers to efficiently audit cloud data integrity without retrieving the entire data set from the cloud server. However, public auditing of the integrity of shared data with existing mechanisms will unavoidably reveal confidential information, such as the identity of the signer, to public verifiers. Here, a privacy-preserving mechanism is proposed to support public auditing of shared data stored in the cloud. It uses group signatures to compute the verification metadata needed to audit the correctness of shared data. The identity of the signer of each block of shared data is kept confidential from public verifiers, who can easily verify the integrity of shared data without retrieving the entire file; on demand, the signer of each block is revealed to the owner alone. The group private key is generated once by the owner in the static group, whereas in the dynamic group the group private key changes when users revoke their membership. When users leave the group, the blocks they had signed are re-signed by the cloud service provider instead of the owner, which is handled by an efficient proxy re-signature scheme.
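
The "audit without retrieving the entire file" idea can be illustrated with a toy spot-check: the verifier challenges randomly chosen block indices and compares the returned blocks against digests published by the owner. This models only the sampling aspect; the paper's group signatures and proxy re-signatures, which additionally hide signer identity, are deliberately not modelled here.

```python
# Toy challenge-response integrity audit over stored blocks.
import hashlib
import random

blocks = [f"block-{i}-payload".encode() for i in range(100)]
digests = [hashlib.sha256(b).hexdigest() for b in blocks]  # held by verifier

def cloud_read(i):
    # stands in for the cloud server answering a challenge
    return blocks[i]

def audit(read_fn, digests, k=10, seed=7):
    rng = random.Random(seed)
    challenged = rng.sample(range(len(digests)), k)
    return all(hashlib.sha256(read_fn(i)).hexdigest() == digests[i]
               for i in challenged)

ok_before = audit(cloud_read, digests)        # intact storage passes
blocks[3] = b"corrupted"                      # simulate corruption
ok_after = audit(cloud_read, digests, k=100)  # exhaustive challenge fails
```

In the real scheme the per-block digests are replaced by group-signature-based tags, so the verifier learns that some group member signed each block but not which one.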

Keywords: data integrity, dynamic group, group signature, public auditing

Procedia PDF Downloads 392
24741 Securing Health Monitoring in Internet of Things with Blockchain-Based Proxy Re-Encryption

Authors: Jerlin George, R. Chitra

Abstract:

Devices with sensors that monitor temperature, heart rate, and other vital signs and link to the internet, known as the Internet of Things (IoT), have transformed the way we manage health. By providing real-time health data, these sensors improve diagnostics and treatment outcomes. Security and privacy matter when IoT is applied in healthcare, and cyberattacks on centralized database systems are a further problem. To address these challenges, the study uses blockchain technology coupled with proxy re-encryption to secure health data. The ThingSpeak IoT cloud analyzes the collected data and turns it into blockchain transactions, which are safely kept on the DriveHQ cloud. Transparency and data integrity are ensured by the blockchain, and secure data sharing among authorized users is made possible by proxy re-encryption. The result is a health monitoring system that preserves the accuracy and confidentiality of data while reducing the security risks of IoT-driven healthcare applications.

Keywords: internet of things, healthcare, sensors, electronic health records, blockchain, proxy re-encryption, data privacy, data security

Procedia PDF Downloads 16
24740 Gravity and Geodetic Control of Geodynamic Activity near Aswan Lake, Egypt

Authors: Anwar H. Radwan, Jan Mrlina, El-Sayed A. Issawy, Ali Rayan, Salah M. Mahmoud

Abstract:

Geodynamic investigations in the Aswan Lake region were started after the M=5.5 earthquake in 1981, triggered by the lake water fluctuations. Besides establishing the seismological networks, geodetic observations focused on the Kalabsha and Sayal fault zones were also started. It was found that the Kalabsha fault is an active dextral strike-slip fault with a normal component indicating uplift on its southern side. However, the annual velocity rates in both components do not exceed 2 mm/y and therefore do not represent extremely active faulting. We also launched gravity monitoring in 1997 and performed another two campaigns in 2000 and 2002. The observed non-tidal temporal gravity changes indicate flood-water infiltration into the porous Nubian sandstone rather than a tectonic stress effect. The station nearest to the lake exhibited a positive gravity change of about 60 μGal over the 1997-2002 period.

Keywords: gravity monitoring, surface movements, Lake Aswan, groundwater change

Procedia PDF Downloads 502
24739 Rodriguez Diego, Del Valle Martin, Hargreaves Matias, Riveros Jose Luis

Authors: Nathainail Bashir, Neil Anderson

Abstract:

The objective of this study was to investigate the current state of practice with regard to karst detection methods and to recommend the best method and pattern of arrays to acquire the desired results. Proper site investigation in karst-prone regions is extremely valuable in determining the location of possible voids. Two geophysical techniques were employed: multichannel analysis of surface waves (MASW) and electrical resistivity tomography (ERT). The MASW data were acquired at each test location using different array lengths and different array orientations (to increase the probability of obtaining interpretable data in karst terrain). The ERT data were acquired using a dipole-dipole array consisting of 168 electrodes. The MASW data were interpreted (re: estimated depth to the physical top of rock) and used to constrain and verify the interpretation of the ERT data. The ERT data indicate that poorer-quality MASW data were acquired in areas with significant local variation in the depth to top of rock.

Keywords: dipole-dipole, ERT, Karst terrains, MASW

Procedia PDF Downloads 315
24738 Data Science in Military Decision-Making: A Semi-Systematic Literature Review

Authors: H. W. Meerveld, R. H. A. Lindelauf

Abstract:

In contemporary warfare, data science is crucial for the military in achieving information superiority. Yet, to the authors' knowledge, no extensive literature survey on data science in military decision-making has been conducted so far. In this study, 156 peer-reviewed articles were analysed through an integrative, semi-systematic literature review to gain an overview of the topic. The study examined to what extent the literature focuses on the opportunities or risks of data science in military decision-making, differentiated per level of war (i.e. the strategic, operational, and tactical levels). A relatively large focus on the risks of data science was observed in the social science literature, implying that political and military policymakers are disproportionately influenced by a pessimistic view of the application of data science in the military domain. The perceived risks of data science are, however, hardly addressed in the formal science literature. This means that concerns about the military application of data science are not addressed to the audience that can actually develop and enhance data science models and algorithms. Cross-disciplinary research on both the opportunities and risks of military data science can address the observed research gaps. Considering the levels of war, relatively little attention to the operational level compared to the other two levels was observed, suggesting a research gap with reference to military operational data science. Opportunities for military data science mostly arise at the tactical level. On the contrary, studies examining strategic issues mostly emphasise the risks of military data science. Consequently, domain-specific requirements for military strategic data science applications are hardly expressed. The lack of such applications may ultimately lead to suboptimal strategic decisions in today's warfare.

Keywords: data science, decision-making, information superiority, literature review, military

Procedia PDF Downloads 167
24737 All for One, or One for All: A Comparative Evaluation of the Role of Social Relations in Explaining Individual versus Group Participation in the Radicalization and Terrorism Process amongst Far-Right Actors in the United States

Authors: Jack Wippell

Abstract:

This paper explores the role of social relations in explaining far-right actors' decisions over whether to travel down the process of radicalization and terrorism alone or in a group. To this end, a joint comparative method of difference and agreement is applied to four case studies across key temporal points on the radicalization and terrorism pathway to generate a theory of why individuals decide to radicalize and/or act alone or within a group. While social relations are identified as playing some moderate role in explaining why certain individuals radicalize alone versus within a group, this paper argues they play a significant role in affecting decisions over whether to act alone versus in a group. These findings suggest new avenues of distinction and understanding of far-right actors' processes of radicalization and terrorism, in turn, offering potential new avenues for counter-terrorism policy and important areas for future research.

Keywords: far-right, lone-actor, terrorism, United States, radicalization

Procedia PDF Downloads 90
24736 Legal Regulation of Personal Information Data Transmission Risk Assessment: A Case Study of the EU’s DPIA

Authors: Cai Qianyi

Abstract:

In the midst of the global digital revolution, the flow of data poses security threats that call China's existing legislative framework for protecting personal information into question. As a preliminary procedure for risk analysis and prevention, the risk assessment of personal data transmission lacks detailed supporting guidelines. Existing provisions reveal unclear responsibilities for network operators and weakened rights for data subjects. Furthermore, the regulatory system's weak operability and a lack of industry self-regulation heighten data transmission hazards. This paper compares the regulatory pathways for data transmission risks in China and Europe from the perspectives of legal framework and content. It draws on the EU's Data Protection Impact Assessment (DPIA) guidelines to empower multiple stakeholders, including data processors, controllers, and subjects, while also defining obligations. In conclusion, this paper intends to address China's digital security shortcomings by developing a more mature regulatory framework and industry self-regulation mechanisms, resulting in a win-win situation for personal data protection and the development of the digital economy.

Keywords: personal information data transmission, risk assessment, DPIA, internet service provider

Procedia PDF Downloads 61
24735 Wavelets Contribution on Textual Data Analysis

Authors: Habiba Ben Abdessalem

Abstract:

The emergence of giant sets of textual data has encouraged researchers to invest in this field. The purpose of textual data analysis methods is to facilitate access to such data by providing various graphic visualizations. Applying these methods requires a corpus pretreatment step, whose standards are set according to the objective of the problem studied. This step determines the list of forms contained in the contingency table by keeping only the information carriers. The step may, however, lead to noisy contingency tables, hence the use of a wavelet denoising function. The validity of the proposed approach is tested on a text database covering economic and political events in Tunisia over a well-defined period.
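
The wavelet denoising of a contingency table can be sketched with a one-level 2D Haar transform: the detail coefficients are soft-thresholded while the approximation (LL) quadrant is kept. The Haar basis and the threshold choice are illustrative assumptions, since the paper does not specify its wavelet family.

```python
# One-level 2D Haar soft-threshold denoising of a (form x document)
# contingency table with even dimensions.
import numpy as np

def haar2(X):                       # forward: rows, then columns
    a = (X[:, ::2] + X[:, 1::2]) / np.sqrt(2)
    d = (X[:, ::2] - X[:, 1::2]) / np.sqrt(2)
    Xr = np.hstack([a, d])
    a2 = (Xr[::2, :] + Xr[1::2, :]) / np.sqrt(2)
    d2 = (Xr[::2, :] - Xr[1::2, :]) / np.sqrt(2)
    return np.vstack([a2, d2])

def ihaar2(C):                      # inverse: columns, then rows
    h, w = C.shape
    Xr = np.empty_like(C)
    Xr[::2, :] = (C[:h // 2, :] + C[h // 2:, :]) / np.sqrt(2)
    Xr[1::2, :] = (C[:h // 2, :] - C[h // 2:, :]) / np.sqrt(2)
    X = np.empty_like(C)
    X[:, ::2] = (Xr[:, :w // 2] + Xr[:, w // 2:]) / np.sqrt(2)
    X[:, 1::2] = (Xr[:, :w // 2] - Xr[:, w // 2:]) / np.sqrt(2)
    return X

def denoise(table, t):
    C = haar2(table.astype(float))
    h, w = C.shape
    keep = C[:h // 2, :w // 2].copy()               # LL approximation
    C = np.sign(C) * np.maximum(np.abs(C) - t, 0)   # soft threshold
    C[:h // 2, :w // 2] = keep
    return ihaar2(C)

table = np.array([[12., 3., 0., 1.],
                  [10., 4., 1., 0.],
                  [ 2., 9., 8., 7.],
                  [ 1., 8., 9., 6.]])
```

With threshold 0 the round trip is exact; raising the threshold progressively flattens small, presumably noisy, cell fluctuations while keeping the table's broad structure.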

Keywords: textual data, wavelet, denoising, contingency table

Procedia PDF Downloads 277
24734 Hydrological Characterization of a Watershed for Streamflow Prediction

Authors: Oseni Taiwo Amoo, Bloodless Dzwairo

Abstract:

In this paper, we extend the versatility and usefulness of GIS as a methodology for hydrologic characteristics analysis (HCA) of a river basin. The Gurara River basin, located in north-central Nigeria, is presented as the study area. This on-going research uses a spatial Digital Elevation Model (DEM) and Arc Hydro tools to take inventory of the basin characteristics in order to predict the effect of water abstraction on the streamflow regime. One of the main concerns of hydrological modelling is the quantification of runoff from rainstorm events. In practice, the Soil Conservation Service curve number (SCS-CN) method and the conventional procedure called the rational method are still generally used. These traditional lumped hydrological models convert statistical properties of rainfall in a river basin to observed runoff and hydrographs, but give little or no spatially distributed information on rainfall and basin physical characteristics. This paper therefore synthesizes morphometric parameters in generating runoff. The expected results on basin characteristics, such as the size, area, shape, and slope of the watershed and the stream distribution network, could be useful in estimating streamflow discharge. Water resources managers and irrigation farmers could use the tool to determine the net return from scarce water resources where past records of land and climate data are sparse.
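
The SCS curve-number relation mentioned above has a standard closed form (depths in inches, initial abstraction taken as 0.2 of the potential retention). The sketch below is the textbook formulation only, since the abstract does not list curve numbers for the Gurara basin.

```python
# Standard SCS curve-number direct-runoff relation.
def scs_runoff(P, CN):
    """Direct runoff depth Q (inches) from storm rainfall P for curve number CN."""
    S = 1000.0 / CN - 10.0          # potential maximum retention
    Ia = 0.2 * S                    # initial abstraction
    if P <= Ia:
        return 0.0                  # all rainfall absorbed, no runoff
    return (P - Ia) ** 2 / (P - Ia + S)
```

For example, a 3-inch storm on a CN = 80 watershed yields S = 2.5, Ia = 0.5, and Q = 2.5² / 5 = 1.25 inches; at CN = 100 (impervious) all rainfall becomes runoff.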

Keywords: hydrological characteristic, stream flow, runoff discharge, land and climate

Procedia PDF Downloads 341
24733 Customer Churn Analysis in Telecommunication Industry Using Data Mining Approach

Authors: Burcu Oralhan, Zeki Oralhan, Nilsun Sariyer, Kumru Uyar

Abstract:

Data mining has become more and more important in recent years, with a wide range of applications. Data mining is the process of finding hidden and unknown patterns in big data. One of the applied fields of data mining is customer relationship management (CRM). Understanding the relationships between products and customers is crucial for every business. CRM is an approach that focuses on developing customer relationships, improving retention, and increasing customer satisfaction. In this study, we applied data mining methods to the customer relationship management side of telecommunications. The study aims to determine the profile of customers likely to leave the system and to develop marketing strategies and customized campaigns for those customers. The data are clustered, and classification techniques are applied to identify the churners. As a result of this study, we obtain knowledge from the international telecommunications industry and contribute to the understanding and development of this subject within customer relationship management.
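
The clustering step can be sketched on toy customer features (monthly minutes, complaint count). The two-cluster k-means below is a generic stand-in, since the abstract does not name the exact algorithms, and real CRM data would have many more attributes.

```python
# Two-cluster k-means separating churn-prone from loyal-looking profiles.
import numpy as np

# columns: monthly call minutes, complaints filed
X = np.array([[400., 0.], [410., 1.], [390., 0.], [420., 1.],   # loyal-looking
              [ 30., 6.], [ 20., 7.], [ 40., 8.], [ 25., 6.]])  # churn-looking

def kmeans(X, iters=20):
    centers = np.array([X[0], X[-1]])    # deterministic initialization
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign each customer to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centers as cluster means
        centers = np.array([X[labels == k].mean(axis=0) for k in (0, 1)])
    return labels, centers

labels, centers = kmeans(X)
```

In a full pipeline, a classifier trained against these cluster (or known-churn) labels would then score new customers, and the high-risk segment would be targeted with retention campaigns.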

Keywords: customer churn analysis, customer relationship management, data mining, telecommunication industry

Procedia PDF Downloads 316
24732 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis

Authors: N. R. N. Idris, S. Baharom

Abstract:

A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels. In this situation, both the IPD and AD should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the levels of data on overall meta-analysis estimates based on IPD only, AD only, and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean square error (RMSE), and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as it significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not provide significant differences in the accuracy of the estimates. Additionally, combining the IPD and AD moderates the bias of the estimated treatment effects, as the IPD tends to overestimate the treatment effects while the AD tends to produce underestimated effect estimates. These results may provide some guidance in deciding whether significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
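
Once each study, whether analysed at the IPD or AD level, has been reduced to an effect estimate and standard error, the pooling step is a fixed-effect inverse-variance combination. This is the generic formulation, shown for illustration, not the paper's simulation code.

```python
# Fixed-effect inverse-variance pooling of study-level effect estimates.
import math

def pool(estimates):
    """estimates: list of (effect, standard_error) pairs from any mix
    of IPD-derived and AD-level studies."""
    w = [1.0 / se ** 2 for _, se in estimates]              # weights
    theta = sum(wi * eff
                for wi, (eff, _) in zip(w, estimates)) / sum(w)
    se = math.sqrt(1.0 / sum(w))                            # pooled SE
    return theta, se
```

With equal standard errors the pooled effect is simply the mean of the study effects; unequal standard errors shift it toward the more precise studies, which is why adding IPD (typically more precise per study) improves the overall estimate.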

Keywords: aggregate data, combined-level data, individual patient data, meta-analysis

Procedia PDF Downloads 375
24731 Climatic and Environmental Factors Affecting Human Comfort Evaluation: Case Study of Shiraz Iran

Authors: Hamid Yazdani, Fatemeh Abbasi

Abstract:

Understanding natural potentials forms the basis for the prevailing context of human activities, environmental planning, and land use. In this regard, regional characteristics and the spatial distribution of the dominant climatic elements play a determining role in shaping human behavior and the environment. Today, human bioclimatic studies underpin planning for urban development, settlement, architecture, tourism, and related fields. In this study, the presence or absence of climatic comfort in Shiraz was examined using the Baker and Terjung models and indices, the effective temperature index, and the Tourism Climate Index (TCI), based on 39 years of data, in order to establish the best time for environmental and tourism activities in the city. The results showed that the indices used are able to distinguish the comfortable and uncomfortable periods in Shiraz and, despite minor differences, provide a relatively homogeneous picture of the city's climatic comfort. The analysis showed considerable variation over the year: conditions fall outside the bioclimatic comfort zone during the heat of summer and the cold of winter, while during the transitions from cold to warm in spring (April) and from warm to cold in autumn (November) the climate is close to human comfort. Overall, spring, with its uniquely comfortable conditions, is the best season for environmental and tourism activities in Shiraz.
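
As an illustration of the kind of index computed in such studies, the sketch below uses the Missenard form of the effective-temperature index, one common formulation of the "effective temperature" measure; this is not necessarily the authors' exact formulation, and the monthly inputs and comfort bands are illustrative assumptions, not Shiraz data.

```python
# Illustrative sketch: Missenard effective temperature and a coarse
# comfort banding. Inputs are hypothetical monthly means, not Shiraz data.
def effective_temperature(t_air_c, rel_humidity_pct):
    """Missenard effective temperature (deg C) from air temperature and RH."""
    return t_air_c - 0.4 * (t_air_c - 10.0) * (1.0 - rel_humidity_pct / 100.0)

def comfort_class(et):
    """Coarse comfort banding; these thresholds are illustrative only."""
    if et < 15:
        return "cool stress"
    if et <= 25:
        return "comfortable"
    return "heat stress"

monthly = {"January": (8.0, 70), "April": (18.0, 50), "July": (32.0, 25)}
for month, (t, rh) in monthly.items():
    et = effective_temperature(t, rh)
    print(month, round(et, 1), comfort_class(et))
```

With these hypothetical inputs the transitional month lands in the comfortable band while midsummer falls into heat stress, mirroring the seasonal pattern the abstract reports.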

Keywords: bioclimatic comfort, Terjung index, Baker index, effective temperature, tourism climate index (TCI)

Procedia PDF Downloads 348
24730 Analyzing On-Line Process Data for Industrial Production Quality Control

Authors: Hyun-Woo Cho

Abstract:

The monitoring of industrial production quality has to be implemented to give early warning of unusual operating conditions. Furthermore, identification of their assignable causes is necessary for quality control purposes. For such tasks, many multivariate statistical techniques have been applied and shown to be quite effective tools. This work presents a process data-based monitoring scheme for production processes. For more reliable results, additional steps of noise filtering and preprocessing are considered; these may enhance performance by eliminating unwanted variation in the data. The performance evaluation is executed using data sets from test processes. The proposed method is shown to provide reliable quality control results and is thus more effective for quality monitoring in the example. For practical implementation of the method, an on-line data system must be available to gather historical and on-line data. Large amounts of data are now collected on-line in most processes, so implementation of the current scheme is feasible and does not place additional burdens on users.
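
The filter-then-monitor idea can be sketched with a deliberately simplified univariate stand-in for the multivariate schemes referred to above: smooth the on-line data with an EWMA filter to suppress noise, then flag samples outside 3-sigma limits fitted on in-control historical data. The data and smoothing constant are hypothetical.

```python
# Minimal sketch: EWMA noise filtering followed by 3-sigma control limits.
from statistics import mean, stdev

def ewma(series, lam=0.3):
    """Exponentially weighted moving average: z_t = lam*x_t + (1-lam)*z_(t-1)."""
    z, out = series[0], []
    for x in series:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return out

def control_limits(in_control, k=3.0):
    """Mean +/- k standard deviations of the filtered in-control data."""
    m, s = mean(in_control), stdev(in_control)
    return m - k * s, m + k * s

historical = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]   # in-control data
lo, hi = control_limits(ewma(historical))
online = [10.0, 10.1, 11.5, 12.4, 12.9]                      # drifting process
alarms = [i for i, z in enumerate(ewma(online)) if not lo <= z <= hi]
```

The drifting samples trip the alarm while the on-target ones pass, which is the early-warning behavior the scheme targets; a production version would use multivariate statistics (e.g. a T² chart) over many process variables.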

Keywords: detection, filtering, monitoring, process data

Procedia PDF Downloads 559
24729 Wildlife Habitat Corridor Mapping in Urban Environments: A GIS-Based Approach Using Preliminary Category Weightings

Authors: Stefan Peters, Phillip Roetman

Abstract:

The global loss of biodiversity is threatening the benefits nature provides to human populations; it has become an even more pressing issue than climate change and requires immediate attention. While there have been successful global agreements for environmental protection, such as the Montreal Protocol, these are rare, and we cannot rely on them solely. Thus, it is crucial to take national and local actions to support biodiversity. Australia is one of the 17 countries in the world with a high level of biodiversity, and its cities are vital habitats for endangered species, with more of them found in urban areas than in non-urban ones. However, the protection of biodiversity in metropolitan Adelaide has been inadequate, with over 130 species disappearing since European colonization in 1836. In this research project, we conceptualized, developed, and implemented a framework for wildlife Habitat Hotspot and Habitat Corridor modelling in an urban context using geographic data and GIS modelling and analysis. We used detailed topographic and other geographic data provided by a local council, including spatial and attributive properties of trees, parcels, water features, vegetated areas, roads, verges, traffic, and census data. Weighted factors considered in our raster-based Habitat Hotspot model include parcel size, parcel shape, population density, canopy cover, habitat quality, and proximity to habitats and water features. Weighted factors considered in our raster-based Habitat Corridor model include habitat potential (resulting from the Habitat Hotspot model), verge size, road hierarchy, road widths, human density, and presence of remnant indigenous vegetation species. We developed a GIS model, using Python scripting and ArcGIS Pro ModelBuilder, to establish an automated, reproducible, and adjustable geoprocessing workflow, adaptable to any study area of interest. Our habitat hotspot and corridor modelling framework allows existing habitat hotspots and wildlife habitat corridors to be determined and mapped. Our research was applied to the case study of Burnside, a local council area in Adelaide, Australia, which encompasses an area of 30 km2. We applied end-user, expertise-based category weightings to refine our models and optimize the use of our habitat map outputs for informing local strategic decision-making.
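
The core of such a raster model is a weighted overlay: each factor raster is normalised to [0, 1] and combined cell-wise with the category weights. The sketch below illustrates that step with two toy factor rasters; the rasters and weights are hypothetical, not the Burnside values.

```python
# Hedged sketch of the weighted-overlay step behind a raster Habitat
# Hotspot model. Rasters and weights are illustrative.
def normalise(raster):
    """Rescale a 2-D raster to [0, 1] (min-max normalisation)."""
    lo = min(min(row) for row in raster)
    hi = max(max(row) for row in raster)
    span = (hi - lo) or 1.0
    return [[(v - lo) / span for v in row] for row in raster]

def weighted_overlay(factors, weights):
    """Cell-wise weighted sum of equally sized 2-D factor rasters."""
    rows, cols = len(factors[0]), len(factors[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for raster, w in zip(map(normalise, factors), weights):
        for r in range(rows):
            for c in range(cols):
                out[r][c] += w * raster[r][c]
    return out

canopy_cover = [[10, 80], [40, 95]]        # percent canopy per cell
parcel_size = [[200, 900], [450, 1200]]    # square metres per cell
hotspot_score = weighted_overlay([canopy_cover, parcel_size], weights=[0.6, 0.4])
# Cells with the highest scores are candidate habitat hotspots.
```

In the actual workflow this arithmetic would run over council-scale rasters inside the ArcGIS Pro geoprocessing model rather than on Python lists.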

Keywords: biodiversity, GIS modeling, habitat hotspot, wildlife corridor

Procedia PDF Downloads 115
24728 A Review of Travel Data Collection Methods

Authors: Muhammad Awais Shafique, Eiji Hato

Abstract:

Household trip data are of crucial importance for managing present transportation infrastructure as well as for planning and designing future facilities. They also provide the basis for new policies implemented under Transportation Demand Management. The methods used for household trip data collection have changed with the passage of time, starting with conventional face-to-face or paper-and-pencil interviews and reaching the recent approach of employing smartphones. This study summarizes the step-wise evolution of travel data collection methods. It provides a comprehensive review of the topic for readers interested in the changing trends in the data collection field.

Keywords: computer, smartphone, telephone, travel survey

Procedia PDF Downloads 313
24727 A Business-to-Business Collaboration System That Promotes Data Utilization While Encrypting Information on the Blockchain

Authors: Hiroaki Nasu, Ryota Miyamoto, Yuta Kodera, Yasuyuki Nogami

Abstract:

To promote initiatives such as Industry 4.0 and Society 5.0, it is important to connect and share data so that every member can trust them. Blockchain (BC) technology is currently attracting attention as the most advanced tool and has been used in the financial field, among others. However, data collaboration using BC has not progressed sufficiently among companies in the manufacturing supply chain, which handle sensitive data such as product quality and manufacturing conditions. There are two main reasons why data utilization is not sufficiently advanced in the industrial supply chain. The first is that manufacturing information is top secret and a source of profit for companies; it is difficult to disclose data even between companies that transact with each other in the supply chain. In blockchain mechanisms such as Bitcoin that use PKI (Public Key Infrastructure), the plaintext must be shared between companies in order to confirm the identity of the company that sent the data. The second is that the merits (scenarios) of data collaboration between companies are not concretely specified in the industrial supply chain. To address these problems, this paper proposes a business-to-business (B2B) collaboration system using homomorphic encryption and BC techniques. Using the proposed system, each company in the supply chain can exchange confidential information as encrypted data and utilize the data for its own business. In addition, this paper considers a scenario focusing on quality data, which have been difficult to share because they are top secret. For this scenario, we show an implementation scheme and the benefit of concrete data collaboration by proposing a comparison protocol that can track changes in quality while hiding the numerical values of the quality data.
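
To illustrate the mechanism, not the authors' actual protocol, the sketch below implements a toy Paillier cryptosystem: an additively homomorphic scheme in which multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a partner can combine encrypted quality readings (here, form a masked difference) without seeing the values. The key sizes are toy-sized and insecure; a real deployment would use a vetted library and a full comparison protocol.

```python
# Toy Paillier cryptosystem (insecure key sizes, illustration only).
import math
import random

def keygen(p=10007, q=10009):
    """Tiny Paillier keypair; g = n + 1 keeps decryption simple."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    return (n,), (n, lam)

def encrypt(pub, m):
    (n,) = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    n, lam = priv
    l = (pow(c, lam, n * n) - 1) // n        # L(x) = (x - 1) / n
    return (l * pow(lam, -1, n)) % n

pub, priv = keygen()
n = pub[0]
# Company A encrypts its quality reading; Company B multiplies in an
# encryption of (n - b), i.e. of -b modulo n, so only the difference
# (here its sign and magnitude) is revealed on decryption, not b itself.
a, b = 87, 82
c_diff = (encrypt(pub, a) * encrypt(pub, (n - b) % n)) % (n * n)
diff = decrypt(priv, c_diff)
diff = diff - n if diff > n // 2 else diff   # map back to a signed value
```

In practice the decrypting party would additionally blind the difference with a random mask so that only the comparison result, not the exact gap, leaks.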

Keywords: business to business data collaboration, industrial supply chain, blockchain, homomorphic encryption

Procedia PDF Downloads 136
24726 Multivariate Assessment of Mathematics Test Scores of Students in Qatar

Authors: Ali Rashash Alzahrani, Elizabeth Stojanovski

Abstract:

Data on various aspects of education are collected at the institutional and government level regularly. In Australia, for example, students at various levels of schooling undertake examinations in numeracy and literacy as part of NAPLAN testing, enabling longitudinal assessment of such data as well as comparisons between schools and states within Australia. Another source of educational data collected internationally is via the PISA study which collects data from several countries when students are approximately 15 years of age and enables comparisons in the performance of science, mathematics and English between countries as well as ranking of countries based on performance in these standardised tests. As well as student and school outcomes based on the tests taken as part of the PISA study, there is a wealth of other data collected in the study including parental demographics data and data related to teaching strategies used by educators. Overall, an abundance of educational data is available which has the potential to be used to help improve educational attainment and teaching of content in order to improve learning outcomes. A multivariate assessment of such data enables multiple variables to be considered simultaneously and will be used in the present study to help develop profiles of students based on performance in mathematics using data obtained from the PISA study.
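
The profiling approach described above can be sketched with a small k-means clustering of standardized scores; the student scores, the two subjects, and the choice of k are illustrative assumptions, not PISA data.

```python
# Hedged sketch: k-means clustering of (hypothetical) standardized
# mathematics and science scores into student performance profiles.
import random

def standardize(column):
    """Z-score a list of values so subjects are on a comparable scale."""
    m = sum(column) / len(column)
    s = (sum((x - m) ** 2 for x in column) / len(column)) ** 0.5
    return [(x - m) / s for x in column]

def kmeans(points, k, iters=50, seed=1):
    """Lloyd's algorithm; returns a cluster label per point."""
    centroids = random.Random(seed).sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):      # assign to nearest centroid
            labels[i] = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
        for c in range(k):                  # recompute centroids
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(d) / len(members) for d in zip(*members))
    return labels

maths = [620, 640, 390, 410, 630, 405]      # hypothetical test scores
science = [600, 615, 380, 395, 640, 370]
students = list(zip(standardize(maths), standardize(science)))
profiles = kmeans(students, k=2)    # e.g. high-performing vs struggling profile
```

On real PISA data one would cluster many more variables (including parental demographics and teaching-strategy measures) and choose k with a validity criterion rather than fixing it in advance.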

Keywords: cluster analysis, education, mathematics, profiles

Procedia PDF Downloads 126
24725 Geotechnical Distress Evaluation of a Damaged Structure

Authors: Zulfiqar Ali, Umar Saleem, Muhammad Junaid, Rizwan Tahir

Abstract:

Gulzar Mahal is a heritage site located in the city of Bahawalpur, Pakistan. The site is degrading, with cracks appearing on the walls, roofs, and floors around the building due to differential settlement. To preserve the integrity of the structure, a geotechnical distress evaluation was carried out to identify the causal factors and recommend remedial measures. The research involved characterization of the problematic soil and analysis of the observed distress with respect to the geotechnical properties. Conventional laboratory and field tests were used in conjunction with unconventional techniques such as Electrical Resistivity Tomography (ERT) and finite element analysis (FEA). The temporal, geophysical, and geotechnical evaluations concluded that the foundation soil has over time been subjected to variations in land use, poor drainage patterns, overloading, and fluctuations in the groundwater table, all contributing to the differential settlements that manifest as visible shear cracks across the length and breadth of the building.

Keywords: differential settlement, distress evaluation, finite element analysis, Gulzar Mahal

Procedia PDF Downloads 127
24724 Evaluation of Ceres Wheat and Rice Model for Climatic Conditions in Haryana, India

Authors: Mamta Rana, K. K. Singh, Nisha Kumari

Abstract:

Simulation models, with their interacting soil-weather-plant-atmosphere systems, are important tools for assessing crops under changing climate conditions. The CERES-Wheat and CERES-Rice models (v4.6, DSSAT) were calibrated and evaluated for Haryana, India, one of the major wheat- and rice-producing states. The simulation runs were made under irrigated conditions with three N-P-K fertilizer application doses to estimate crop yield and other growth parameters, along with the phenological development of the crop. The genetic coefficients were derived by iteratively manipulating the relevant coefficients that characterize the phenological processes of the wheat and rice crops until the best match was obtained between the simulated and observed anthesis, physiological maturity, and final grain yield. The model was validated by plotting the simulated LAI against remote-sensing-derived LAI; LAI products from remote sensing provide spatially explicit, timely, and accurate assessments of the crop. For validating the yield and yield components, the percentage error between the observed and simulated data was calculated. The analysis shows that the model can be used to simulate crop yield and yield components for the wheat and rice cultivars under different management practices. During validation, the error percentage was less than 10%, indicating the utility of the calibrated model for climate risk assessment in the selected region.
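
The validation arithmetic described above amounts to a percent-error check against a 10% acceptance threshold; the sketch below shows that computation with illustrative yield values, not the Haryana trial data.

```python
# Hedged sketch of the percent-error validation step with a 10% threshold.
def percent_error(observed, simulated):
    """Absolute percentage error of the simulated value vs the observed one."""
    return abs(observed - simulated) / observed * 100.0

observed_yield = {"wheat": 4500, "rice": 3800}     # kg/ha, hypothetical
simulated_yield = {"wheat": 4280, "rice": 3975}    # model output, hypothetical
errors = {crop: percent_error(observed_yield[crop], simulated_yield[crop])
          for crop in observed_yield}
acceptable = all(e < 10.0 for e in errors.values())  # the 10% criterion
```

With these illustrative numbers both crops fall under the 10% criterion, the same acceptance logic the study applies to its calibrated model.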

Keywords: simulation model, CERES-wheat and rice model, crop yield, genetic coefficient

Procedia PDF Downloads 305
24723 Dataset Quality Index: Development of a Composite Indicator Based on Standard Data Quality Indicators

Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros

Abstract:

Nowadays, poor data quality is considered one of the major costs of a data project. A data project with data quality awareness devotes a substantial share of its time to data quality processes, while a data project without such awareness suffers negative impacts on financial resources, efficiency, productivity, and credibility. One of the processes that takes a long time is defining the expectations and measurements of data quality, because the expectations differ according to the purpose of each data project. This is especially true for big data projects, which may involve many datasets and stakeholders and therefore require a long time to discuss and define quality expectations and measurements. Therefore, this study aimed at developing meaningful indicators that describe the overall data quality of each dataset and allow quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that can describe the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were taken to develop the Dataset Quality Index (SDQI). First, we defined standard data quality expectations. Second, we identified indicators that can be measured directly against the data within datasets. Third, the indicators were aggregated into dimensions using factor analysis. Next, the indicators and dimensions were weighted by the effort required for the data preparation process and by usability. Finally, the dimensions were aggregated into the composite indicator. The results of these analyses showed that: (1) ten useful indicators and measurements were developed; (2) in developing the data quality dimensions based on statistical characteristics, we found that the ten indicators could be reduced to four dimensions; and (3) for the developed composite indicator, we found that the SDQI can describe the overall quality of each dataset and can separate datasets into three levels: Good Quality, Acceptable Quality, and Poor Quality. In conclusion, the SDQI provides an overall description of the data quality within datasets and a meaningful composition. The SDQI can be used to assess all data in a data project, for effort estimation, and for prioritization. It also works well with agile methods, by using the SDQI for assessment in the first sprint; after passing the initial evaluation, more specific data quality indicators can be added in subsequent sprints.
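
The aggregation idea behind such a composite indicator can be sketched as a two-level weighted roll-up: indicator scores aggregate into dimensions, dimensions aggregate into one composite, and the composite maps to the three quality levels. All dimension names, weights, and cut-offs below are illustrative assumptions, not the SDQI's actual values.

```python
# Hedged sketch of a two-level composite-indicator roll-up with three
# quality bands. Names, weights, and thresholds are illustrative.
def weighted_mean(scores, weights):
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Indicator scores in [0, 1] for one dataset, grouped by dimension,
# each with indicator weights: {dimension: (scores, weights)}.
dimensions = {
    "completeness": ([0.95, 0.90], [0.6, 0.4]),
    "validity":     ([0.80, 0.85], [0.5, 0.5]),
    "consistency":  ([0.70],       [1.0]),
}
dimension_scores = {d: weighted_mean(s, w) for d, (s, w) in dimensions.items()}
sdqi = weighted_mean(list(dimension_scores.values()), [0.4, 0.35, 0.25])

def quality_level(score):
    """Map the composite score to the three bands (illustrative cut-offs)."""
    if score >= 0.85:
        return "Good Quality"
    if score >= 0.6:
        return "Acceptable Quality"
    return "Poor Quality"
```

In the study itself the indicator-to-dimension grouping comes from factor analysis and the weights from preparation effort and usability, rather than being fixed by hand as here.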

Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis

Procedia PDF Downloads 139
24722 Predictive Analysis for Big Data: Extension of Classification and Regression Trees Algorithm

Authors: Ameur Abdelkader, Abed Bouarfa Hafida

Abstract:

Since its inception, predictive analysis has revolutionized the IT industry through its robustness and decision-making facilities. It involves the application of a set of data processing techniques and algorithms in order to create predictive models. Its principle is based on finding relationships between the explanatory variables and the predicted variables: past occurrences are exploited to predict and derive the unknown outcome. With the advent of big data, many studies have suggested the use of predictive analytics to process and analyze big data. Nevertheless, they have been curbed by the limits of classical predictive analysis methods in the case of large amounts of data. In fact, because of its volume, its nature (semi-structured or unstructured), and its variety, it is impossible to analyze big data efficiently via classical methods of predictive analysis. The authors attribute this weakness to the fact that predictive analysis algorithms do not allow the parallelization and distribution of computation. In this paper, we propose to extend the predictive analysis algorithm Classification and Regression Trees (CART) in order to adapt it for big data analysis. The major changes to this algorithm are presented, and a version of the extended algorithm is then defined so as to make it applicable to huge quantities of data.
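
One standard way to make CART's split search distributable, shown here as an illustration rather than as the authors' algorithm, is a map-reduce decomposition: each data chunk independently builds class-count histograms per candidate threshold, the histograms merge by addition, and Gini impurity is computed only on the merged counts. The map step could then run on separate workers.

```python
# Hedged illustration of distributable split-finding for a CART-style tree.
from collections import Counter

def chunk_histograms(chunk, thresholds):
    """Per-threshold (left, right) class counts for one data chunk."""
    hists = {t: (Counter(), Counter()) for t in thresholds}
    for x, y in chunk:
        for t in thresholds:
            hists[t][0 if x <= t else 1][y] += 1
    return hists

def merge(h1, h2):
    """Combine two partial histogram sets by adding the class counts."""
    return {t: (h1[t][0] + h2[t][0], h1[t][1] + h2[t][1]) for t in h1}

def gini_from_counts(counts):
    n = sum(counts.values())
    return 1.0 - sum((c / n) ** 2 for c in counts.values()) if n else 0.0

def best_threshold(merged):
    """Pick the threshold with the lowest weighted Gini impurity."""
    def score(t):
        left, right = merged[t]
        n = sum(left.values()) + sum(right.values())
        return (sum(left.values()) * gini_from_counts(left)
                + sum(right.values()) * gini_from_counts(right)) / n
    return min(merged, key=score)

thresholds = [2.0, 4.0, 6.0]
chunks = [[(1.0, "a"), (3.0, "a")], [(5.0, "b"), (7.0, "b")]]  # two partitions
partials = [chunk_histograms(c, thresholds) for c in chunks]   # "map" step
merged = merge(partials[0], partials[1])                       # "reduce" step
split = best_threshold(merged)
```

Because only small count histograms cross worker boundaries, this decomposition sidesteps the sequential split search that limits classical CART on large data.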

Keywords: predictive analysis, big data, predictive analysis algorithms, CART algorithm

Procedia PDF Downloads 142