Search results for: panel data

24330 Machine Learning Data Architecture

Authors: Neerav Kumar, Naumaan Nayyar, Sharath Kashyap

Abstract:

Most companies are seeing increased adoption of machine learning (ML) applications across internal and external-facing use cases. ML applications serve their output in either batch or real-time patterns. A complete batch ML pipeline architecture comprises data sourcing, feature engineering, model training, model deployment, and vending of model output into a data store for downstream applications. Due to unclear role expectations, we have observed that scientists specializing in building and optimizing models invest significant effort in building the other components of the architecture, which we do not believe is the best use of scientists’ bandwidth. We propose a system architecture, built from AWS services, that brings industry best practices to managing the workflow and simplifies model deployment and end-to-end data integration for an ML application. This narrows the scope of scientists’ work to model building and refinement, while specialized data engineers take over deployment, pipeline orchestration, data quality, the data permission system, etc. The pipeline infrastructure is built and deployed as code (using Terraform, CDK, CloudFormation, etc.), which makes it easy to replicate and/or extend the architecture to other models used in an organization.
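
A minimal sketch of the stage separation the abstract describes (all names are hypothetical, not the authors' actual components). Each stage is an isolated unit that an orchestrator such as AWS Step Functions or Airflow could schedule, so scientists only own `train_model`:

```python
# Hypothetical illustration of the pipeline stage separation; names are
# assumptions, not the paper's actual AWS components.
from dataclasses import dataclass

@dataclass
class PipelineContext:
    raw_data_uri: str        # e.g., an S3 prefix holding source data
    feature_table_uri: str   # where engineered features are written
    model_artifact_uri: str  # where the trained model is stored
    output_store_uri: str    # data store vended to downstream apps

def source_data(ctx: PipelineContext) -> None:
    """Data-engineering step: pull raw data into the pipeline."""
    ...

def engineer_features(ctx: PipelineContext) -> None:
    """Data-engineering step: transform raw data into model-ready features."""
    ...

def train_model(ctx: PipelineContext) -> None:
    """Science-owned step: fit and refine the model on the feature table."""
    ...

def vend_output(ctx: PipelineContext) -> None:
    """Data-engineering step: batch-score and write to the output store."""
    ...

def run_batch_pipeline(ctx: PipelineContext) -> None:
    # The orchestration order mirrors the architecture in the abstract.
    for stage in (source_data, engineer_features, train_model, vend_output):
        stage(ctx)
```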

Keywords: data pipeline, machine learning, AWS, architecture, batch machine learning

Procedia PDF Downloads 59
24329 A Comparison of Image Data Representations for Local Stereo Matching

Authors: André Smith, Amr Abdel-Dayem

Abstract:

The stereo matching problem, though it has been studied for several decades, remains an active area of research. Its goal is to find correspondences between elements in a pair of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advancements in this field have led to experiments with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While at its core the cost is based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine how well each representation reduces the cost of the correct correspondence relative to other possible matches.
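
For illustration, a minimal window-based matching cost of the kind such comparisons evaluate, shown here for grayscale sum of absolute differences (colour representations would aggregate the cost over channels; border handling is omitted for brevity):

```python
# Sketch of a local stereo matching cost; not the paper's specific choice.
import numpy as np

def sad_cost(left: np.ndarray, right: np.ndarray,
             row: int, col: int, disparity: int, half_win: int = 2) -> float:
    """Sum of absolute differences between a window in the left image and
    the window shifted by `disparity` in the right image."""
    r0, r1 = row - half_win, row + half_win + 1
    c0, c1 = col - half_win, col + half_win + 1
    patch_l = left[r0:r1, c0:c1].astype(np.float64)
    patch_r = right[r0:r1, c0 - disparity:c1 - disparity].astype(np.float64)
    return float(np.abs(patch_l - patch_r).sum())

def best_disparity(left, right, row, col, max_disp=32):
    # Winner-takes-all: the correct match should have the lowest cost.
    costs = [sad_cost(left, right, row, col, d) for d in range(max_disp)]
    return int(np.argmin(costs))
```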

Keywords: colour data, local stereo matching, stereo correspondence, disparity map

Procedia PDF Downloads 366
24328 Business-Intelligence Mining of Large Decentralized Multimedia Datasets with a Distributed Multi-Agent System

Authors: Karima Qayumi, Alex Norta

Abstract:

The rapid generation of high-volume and highly varied data from the application of new technologies poses challenges for generating business intelligence. Most organizations and business owners need to extract data from multiple sources and apply analytical methods to develop their business. The resulting decentralized data-management environment therefore relies on a distributed computing paradigm. While data are stored in highly distributed systems, implementing distributed data-mining techniques is a challenge: the aim is to gather knowledge from every domain and from all the datasets stemming from distributed resources. As agent technologies offer significant contributions to managing the complexity of distributed systems, we consider them for next-generation data-mining processes. To demonstrate agent-based business-intelligence operations, we use agent-oriented modeling techniques to develop a new artifact for mining massive datasets.

Keywords: agent-oriented modeling (AOM), business intelligence model (BIM), distributed data mining (DDM), multi-agent system (MAS)

Procedia PDF Downloads 424
24327 Timing and Noise Data Mining Algorithm and Software Tool in Very Large Scale Integration (VLSI) Design

Authors: Qing K. Zhu

Abstract:

Very Large Scale Integration (VLSI) design has become very complex due to the continuous integration of millions of gates on one chip, following Moore’s law. Designers encounter numerous report files during design iterations with timing and noise analysis tools. This paper presents our work using data-mining techniques combined with HTML tables to extract and represent critical timing and noise data. When applying this data-mining tool in real applications, running speed is important; based on performance-testing results, the software employs table look-up techniques in its implementation to achieve reasonable running speed. We also added several advanced features for its application in an industrial chip design.
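
A hypothetical sketch of the report-mining idea: extract the worst timing paths from a plain-text report and render them as an HTML table. The report format, regex, and field names are assumptions for illustration, not the tool's actual input or output:

```python
# Illustrative timing-report miner; format and fields are assumed.
import re

ROW_RE = re.compile(r"^\s*(\S+)\s+->\s+(\S+)\s+slack\s+(-?\d+\.\d+)")

def mine_timing_report(path: str, top_n: int = 10) -> str:
    rows = []
    with open(path) as f:
        for line in f:
            m = ROW_RE.match(line)
            if m:
                rows.append((m.group(1), m.group(2), float(m.group(3))))
    rows.sort(key=lambda r: r[2])          # most negative slack first
    cells = "".join(
        f"<tr><td>{s}</td><td>{e}</td><td>{slack:.3f}</td></tr>"
        for s, e, slack in rows[:top_n]
    )
    return ("<table><tr><th>Start</th><th>End</th><th>Slack (ns)</th></tr>"
            + cells + "</table>")
```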

Keywords: VLSI design, data mining, big data, HTML forms, web, VLSI, EDA, timing, noise

Procedia PDF Downloads 251
24326 Introduction of Electronic Health Records to Improve Data Quality in Emergency Department Operations

Authors: Anuruddha Jagoda, Samiddhi Samarakoon, Anil Jasinghe

Abstract:

In its simplest form, data quality can be defined as 'fitness for use'; it is a multi-dimensional concept. Emergency departments (EDs) require information to treat patients and, at the same time, are the primary source of information regarding accidents, injuries, emergencies, etc. The ED is also the starting point of various patient registries, databases, and surveillance systems. This interventional study was carried out to improve data quality at the ED of the National Hospital of Sri Lanka (NHSL), the premier trauma care centre in Sri Lanka, by introducing an eHealth solution. The study consisted of three components. First, a research study was conducted to assess the quality of data on five selected dimensions of data quality, namely accuracy, completeness, timeliness, legibility, and reliability. The intervention was to develop and deploy an electronic emergency department information system (eEDIS). Post-intervention assessment confirmed that all five dimensions of data quality had improved, with the most significant improvements in the accuracy and timeliness dimensions.

Keywords: electronic health records, electronic emergency department information system, emergency department, data quality

Procedia PDF Downloads 267
24325 Data Presentation of Lane-Changing Events Trajectories Using HighD Dataset

Authors: Basma Khelfa, Antoine Tordeux, Ibrahima Ba

Abstract:

We present a descriptive analysis of lane-changing events on multi-lane roads. The data come from the Highway Drone Dataset (HighD), which contains microscopic vehicle trajectories recorded on highways. This paper describes and analyses the role of the different parameters and their significance. Using the HighD data, we aim to find the most frequent reasons that motivate drivers to change lanes. We used the programming language R to process these data. We analyze the involvement and relationships of the variables describing the ego vehicle and the four vehicles surrounding it, i.e., distance, speed difference, time gap, and acceleration, according to the class of the vehicle (car or truck) and the maneuver it undertakes (overtaking or falling back).
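
A sketch of extracting lane-change events and the surrounding-vehicle variables from a HighD-style tracks file (shown in Python rather than the R used in the paper). The column names follow the HighD documentation (frame, id, laneId, xVelocity, thw, dhw) but should be treated as assumptions here, and the file name is hypothetical:

```python
# Illustrative lane-change extraction from HighD-style trajectory data.
import pandas as pd

tracks = pd.read_csv("01_tracks.csv")          # hypothetical file name

# A lane change is a frame at which a vehicle's laneId differs from the
# previous frame for the same vehicle id.
tracks = tracks.sort_values(["id", "frame"])
tracks["lane_prev"] = tracks.groupby("id")["laneId"].shift(1)
events = tracks[tracks["lane_prev"].notna()
                & (tracks["laneId"] != tracks["lane_prev"])]

# Variables analysed per event: time gap, distance headway, and speed.
summary = events[["id", "frame", "thw", "dhw", "xVelocity"]]
print(summary.describe())
```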

Keywords: autonomous driving, physical traffic model, prediction model, statistical learning process

Procedia PDF Downloads 250
24324 Evaluation of Golden Beam Data for the Commissioning of 6 and 18 MV Photons Beams in Varian Linear Accelerator

Authors: Shoukat Ali, Abdul Qadir Jandga, Amjad Hussain

Abstract:

Objective: The main purpose of this study is to compare the percent depth dose (PDD) and in-plane and cross-plane profiles of Varian golden beam data with measured data for 6 and 18 MV photons, for the commissioning of the Eclipse treatment planning system. Introduction: Commissioning of a treatment planning system requires extensive acquisition of beam data for the clinical use of linear accelerators. Accurate dose delivery requires entering the PDDs, profiles, and dose-rate tables for open and wedged fields into the treatment planning system, enabling it to calculate MUs and dose distributions. Varian offers a generic set of beam data as reference data; however, it is not recommended for clinical use. In this study, we compared the generic beam data with measured beam data to evaluate whether the generic data are reliable enough for clinical purposes. Methods and Material: PDDs and profiles of open and wedged fields for different field sizes and at different depths were measured as per Varian’s algorithm commissioning guideline. The measurements were performed with a PTW 3D scanning water phantom with a Semiflex ion chamber and MEPHYSTO software. The online available Varian golden beam data were compared with the measured data to evaluate their accuracy for commissioning the Eclipse treatment planning system. Results: The deviation between the measured and golden beam data was at most 2%. In PDDs, the deviation increases at deeper depths; similarly, profiles show increasing deviation at large field sizes and greater depths. Conclusion: The study shows that the percentage deviation between measured and golden beam data is within the acceptable tolerance and can therefore be used for the commissioning process; however, verification of a small subset of acquired data against the golden beam data should be mandatory before clinical use.
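
A minimal sketch of the comparison metric implied by the abstract: point-by-point percentage deviation between measured and golden-beam PDD curves sampled at the same depths. The array contents are illustrative placeholders, not the paper's measurements:

```python
# Illustrative PDD deviation check against the 2% tolerance reported above.
import numpy as np

depths       = np.array([1.5, 5.0, 10.0, 20.0, 30.0])         # cm
pdd_measured = np.array([100.0, 86.5, 67.0, 38.2, 21.9])      # placeholder
pdd_golden   = np.array([100.0, 86.9, 67.8, 38.9, 22.3])      # placeholder

deviation = 100.0 * (pdd_measured - pdd_golden) / pdd_golden
for d, dev in zip(depths, deviation):
    print(f"depth {d:5.1f} cm: deviation {dev:+.2f}%")

print("within 2% tolerance:", bool(np.all(np.abs(deviation) <= 2.0)))
```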

Keywords: percent depth dose, flatness, symmetry, golden beam data

Procedia PDF Downloads 483
24323 Variable-Fidelity Surrogate Modelling with Kriging

Authors: Selvakumar Ulaganathan, Ivo Couckuyt, Francesco Ferranti, Tom Dhaene, Eric Laermans

Abstract:

Variable-fidelity surrogate modelling offers an efficient way to approximate function data available in multiple degrees of accuracy, each with varying computational cost. In this paper, a Kriging-based variable-fidelity surrogate modelling approach is introduced to approximate such deterministic data. Initially, individual Kriging surrogate models, enhanced with gradient data of different degrees of accuracy, are constructed. These gradient-enhanced Kriging surrogate models are then strategically coupled using a recursive CoKriging formulation to provide an accurate surrogate model for the highest-fidelity data. While, intuitively, gradient data are useful for enhancing the accuracy of surrogate models, the primary motivation behind this work is to investigate whether it is also worthwhile to incorporate gradient data of varying degrees of accuracy.
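
A compact sketch of the recursive CoKriging idea under simplifying assumptions: fit a surrogate on low-fidelity data, then model the high-fidelity response as a scaled low-fidelity prediction plus a discrepancy term. Gradient enhancement, which the paper adds, is omitted here because scikit-learn's Gaussian processes (standing in for Kriging) do not ingest gradient observations; the test functions and scaling estimate are illustrative:

```python
# Two-fidelity recursive surrogate sketch; not the paper's implementation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f_lo(x):  # cheap, biased model (illustrative)
    return 0.5 * np.sin(8 * x).ravel() + 0.5 * x.ravel()

def f_hi(x):  # expensive, accurate model (illustrative)
    return np.sin(8 * x).ravel() + x.ravel()

X_lo = np.linspace(0, 1, 21).reshape(-1, 1)   # many cheap samples
X_hi = np.linspace(0, 1, 6).reshape(-1, 1)    # few expensive samples

gp_lo = GaussianProcessRegressor(RBF(0.1)).fit(X_lo, f_lo(X_lo))

# Scaling factor rho by least squares, then a GP on the discrepancy.
mu_lo_at_hi = gp_lo.predict(X_hi)
y_hi = f_hi(X_hi)
rho = float(mu_lo_at_hi @ y_hi / (mu_lo_at_hi @ mu_lo_at_hi))
gp_delta = GaussianProcessRegressor(RBF(0.1)).fit(X_hi, y_hi - rho * mu_lo_at_hi)

def predict_hi(X):
    """High-fidelity surrogate: scaled low-fidelity model plus discrepancy."""
    return rho * gp_lo.predict(X) + gp_delta.predict(X)
```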

Keywords: Kriging, CoKriging, surrogate modelling, variable-fidelity modelling, gradients

Procedia PDF Downloads 549
24322 Robust Barcode Detection with Synthetic-to-Real Data Augmentation

Authors: Xiaoyan Dai, Hsieh Yisan

Abstract:

Barcode processing of captured images is a major challenge, as different shooting conditions can result in very different barcode appearances. This paper proposes deep-learning-based barcode detection using synthetic-to-real data augmentation. We first augment the barcodes themselves; we then augment images containing the barcodes to generate a large variety of data close to actual shooting environments. Comparisons with previous works and evaluations on our original data show that this approach achieves state-of-the-art performance on various real images. In addition, the system uses hybrid resolution for barcode “scan” and is applicable to real-time applications.
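
An illustrative synthetic-to-real augmentation of the kind described: perturbing a clean synthetic barcode image with blur, brightness changes, and rotation so training data resembles real shooting conditions. The parameter ranges and file name are assumptions, not the paper's recipe:

```python
# Sketch of photometric/geometric augmentation for synthetic barcodes.
import random
from PIL import Image, ImageEnhance, ImageFilter

def augment(img: Image.Image) -> Image.Image:
    # Simulated focus problems.
    img = img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.0, 2.0)))
    # Simulated lighting variation.
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.6, 1.4))
    # Simulated camera orientation; expand keeps the whole barcode visible.
    return img.rotate(random.uniform(-15, 15), expand=True, fillcolor="white")

clean = Image.open("synthetic_barcode.png").convert("RGB")  # hypothetical file
samples = [augment(clean) for _ in range(8)]
```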

Keywords: barcode detection, data augmentation, deep learning, image-based processing

Procedia PDF Downloads 159
24321 Analysis of Delivery of Quad Play Services

Authors: Rahul Malhotra, Anurag Sharma

Abstract:

Fiber-based access networks can deliver performance that supports the increasing demand for high-speed connections. One of the technologies that has emerged in recent years is the passive optical network (PON). This paper demonstrates the simultaneous delivery of triple-play services (data, voice, and video). A comparative investigation of the suitability of various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases due to the increase in bit error rate.

Keywords: FTTH, quad play, play service, access networks, data rate

Procedia PDF Downloads 403
24320 Classification of Manufacturing Data for Efficient Processing on an Edge-Cloud Network

Authors: Onyedikachi Ulelu, Andrew P. Longstaff, Simon Fletcher, Simon Parkinson

Abstract:

The widespread interest in 'Industry 4.0' or 'digital manufacturing' has led to significant research requiring the acquisition of data from sensors, instruments, and machine signals. In-depth research then identifies methods for analysing the massive amounts of data generated before and during manufacture to solve a particular problem. The ultimate goal is for industrial Internet of Things (IIoT) data to be processed automatically to assist with either visualisation or autonomous system decision-making. However, the collection and processing of data in an industrial environment come with a cost, and little research has been undertaken on how to optimally specify what data to capture, transmit, process, and store at various levels of an edge-cloud network. The first step in this specification is to categorise IIoT data for efficient and effective use. This paper proposes the attributes and classification required for taking manufacturing digital data from various sources and determining the most suitable location for processing on the edge-cloud network. The proposed classification framework minimises overhead in terms of network bandwidth/cost and processing time of machine-tool data via efficient decisions on which dataset should be processed at the 'edge' and what should be sent to a remote server (cloud). A fast-and-frugal heuristic method is implemented for this decision-making. The framework is tested using case studies from industrial machine tools for machine productivity and maintenance.
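
A toy fast-and-frugal decision rule of the kind such a framework implements: cues are checked one at a time in a fixed order, and each cue can trigger an exit. The cue names and thresholds are illustrative assumptions, not the paper's fitted tree:

```python
# Illustrative fast-and-frugal edge/cloud routing heuristic.
def route_dataset(size_mb: float, latency_critical: bool,
                  needs_history: bool) -> str:
    """Decide where a machine-tool dataset should be processed."""
    if latency_critical:      # cue 1: hard real-time need -> exit to edge
        return "edge"
    if needs_history:         # cue 2: long-horizon analytics -> exit to cloud
        return "cloud"
    if size_mb > 100:         # cue 3: heavy payload, no urgency -> cloud
        return "cloud"
    return "edge"             # default: process locally, save bandwidth

print(route_dataset(5, latency_critical=True, needs_history=False))    # edge
print(route_dataset(500, latency_critical=False, needs_history=False)) # cloud
```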

Keywords: data classification, decision making, edge computing, industrial IoT, industry 4.0

Procedia PDF Downloads 174
24319 Denoising Transient Electromagnetic Data

Authors: Lingerew Nebere Kassie, Ping-Yu Chang, Hsin-Hua Huang, Chaw-Son Chen

Abstract:

Transient electromagnetic (TEM) data play a crucial role in hydrogeological and environmental applications, providing valuable insights into geological structures and resistivity variations. However, the presence of noise often hinders the interpretation and reliability of these data. Our study addresses this issue by utilizing a FASTSNAP system for the TEM survey, which operates in different modes (low, medium, and high) with continuous adjustments to discretization, gain, and current. We employ a denoising approach that processes the raw data obtained from each acquisition mode to improve signal quality and data reliability. We apply a signal-averaging technique for each mode, increasing the signal-to-noise ratio, and additionally utilize the wavelet transform to suppress noise further while preserving the integrity of the underlying signals. This approach significantly improves data quality, notably suppressing severe noise at late times. The resulting denoised data exhibit a substantially improved signal-to-noise ratio, leading to increased accuracy in parameter estimation. By effectively denoising TEM data, our study contributes to a more reliable interpretation and analysis of underground structures. Moreover, the proposed denoising approach can be seamlessly integrated into existing ground-based TEM data-processing workflows, facilitating the extraction of meaningful information from noisy measurements and enhancing the overall quality and reliability of the acquired data.
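
A sketch of the two-step denoising described: stack repeated transients to raise the signal-to-noise ratio, then soft-threshold wavelet coefficients. The wavelet choice, decomposition level, and threshold rule are assumptions, not the study's exact settings:

```python
# Illustrative signal averaging + wavelet denoising for TEM transients.
import numpy as np
import pywt

def denoise(transients: np.ndarray, wavelet: str = "db4",
            level: int = 4) -> np.ndarray:
    """transients: shape (n_repeats, n_samples) from one acquisition mode."""
    stacked = transients.mean(axis=0)            # signal averaging
    coeffs = pywt.wavedec(stacked, wavelet, level=level)
    # Noise level estimated from the finest-scale coefficients,
    # with a universal threshold (an assumed, standard choice).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(stacked.size))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:stacked.size]
```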

Keywords: data quality, signal averaging, transient electromagnetic, wavelet transform

Procedia PDF Downloads 80
24318 Attribute Analysis of Quick Response Code Payment Users Using Discriminant Non-negative Matrix Factorization

Authors: Hironori Karachi, Haruka Yamashita

Abstract:

Quick response (QR) code payment systems have recently become popular. Many companies have introduced QR code payment services, and these services compete with each other to increase their number of users. To increase the number of users, we should grasp how the demographic information, usage information, and value of users differ between services. In this study, we analyze real-world data provided by Nomura Research Institute, including demographic data of users and information on their usage of two services, LINE Pay and PayPay. Non-negative matrix factorization (NMF) is widely used for analyzing such data and interpreting their features; however, the target data suffer from missing values. We use EM-algorithm NMF (EMNMF) to complete the unknown values and thereby understand the features of the data presented in matrix form. Moreover, for comparing the NMF analyses of two matrices, discriminant NMF (DNMF) shows the differences in user features between the two matrices. In this study, we combine EMNMF and DNMF to analyze the target data. As the interpretation, we show the differences in user features between LINE Pay and PayPay.
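
A minimal sketch of NMF with missing entries handled EM-style: unknown cells are masked out of the standard multiplicative updates, so the reconstruction W·H fills them in. This illustrates the core idea behind completing unknown values before a DNMF-type comparison; the rank, iteration count, and toy matrix are assumptions:

```python
# Masked (EM-style) NMF via multiplicative updates; illustrative only.
import numpy as np

def masked_nmf(V, mask, rank=2, iters=500, eps=1e-9):
    """V: data matrix (NaN where missing); mask: 1 observed, 0 missing."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    V = np.nan_to_num(V)
    for _ in range(iters):
        WH = W @ H
        H *= (W.T @ (mask * V)) / (W.T @ (mask * WH) + eps)
        WH = W @ H
        W *= ((mask * V) @ H.T) / ((mask * WH) @ H.T + eps)
    return W, H  # W @ H estimates the missing cells

# Usage: a users x (demographics + usage) matrix with NaNs for missing data.
V = np.array([[1.0, 0.2, np.nan], [0.8, np.nan, 0.3], [0.1, 0.9, 0.7]])
mask = (~np.isnan(V)).astype(float)
W, H = masked_nmf(V, mask)
print(np.round(W @ H, 2))
```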

Keywords: data science, non-negative matrix factorization, missing data, quality of services

Procedia PDF Downloads 126
24317 Globalisation, Growth and Sustainability in Sub-Saharan Africa

Authors: Ourvashi Bissoon

Abstract:

Sub-Saharan Africa, in addition to being resource-rich, is increasingly seen as having huge growth potential and, as a result, is increasingly attracting MNEs to its soil. To empirically assess the effectiveness of GDP in tracking sustainable resource use and the role played by MNEs in Sub-Saharan Africa, a panel data analysis is undertaken for 32 countries over thirty-five years. The time horizon spans 1980-2014, covering the period from before the publication of the pioneering Brundtland report on sustainable development to date. Multinationals’ presence is proxied by the level of FDI stocks. The empirical investigation first focuses on the impact of trade openness and MNE presence on the traditional measure of economic growth, namely the GDP growth rate; then on the genuine savings (GS) rate, a measure of weak sustainability developed by the World Bank that assumes substitutability between different forms of capital; and finally on the adjusted net national income (aNNI), a measure of green growth that accounts for the depletion of natural resources. For countries with significant exhaustible natural resources and an important foreign investor presence, the aNNI can be a better indicator of economic performance than GDP growth (World Bank, 2010). The issues of potential endogeneity and reverse causality are also addressed, in addition to robustness tests. The findings indicate that FDI and openness contribute significantly and positively to the GDP growth of the countries in the sample; however, there is a threshold level of institutional quality below which FDI has a negative impact on growth. When the GS rate is substituted for the GDP growth rate, a natural resource curse becomes evident: the rents generated from the exploitation of natural resources are not being re-invested into other forms of capital, namely human and physical capital. FDI and trade patterns may be setting the economies in the sample on an unsustainable path of resource depletion. The resource curse is confirmed when utilising the aNNI as well, implying that the GDP growth measure may not be reliable for capturing sustainable development.
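
A sketch of the kind of panel estimation such a study runs, using the `linearmodels` package. The file, variable names, and specification are illustrative; the study's actual models (GS rate and aNNI as outcomes, endogeneity controls, institutional-quality interactions) are richer:

```python
# Illustrative fixed-effects panel regression; variables are assumed.
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("ssa_panel.csv")            # hypothetical file
df = df.set_index(["country", "year"])       # 32 countries, 1980-2014

# Growth regressed on FDI stock and trade openness with country
# (entity) and year (time) fixed effects.
model = PanelOLS.from_formula(
    "gdp_growth ~ fdi_stock + openness + EntityEffects + TimeEffects",
    data=df,
)
print(model.fit(cov_type="clustered", cluster_entity=True))
```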

Keywords: FDI, sustainable development, genuine savings, sub-Saharan Africa

Procedia PDF Downloads 211
24316 Developing Guidelines for Public Health Nurse Data Management and Use in Public Health Emergencies

Authors: Margaret S. Wright

Abstract:

Background/Significance: During many recent public health emergencies and disasters, public health nursing data have been missing or delayed, potentially impacting decision-making and response. Data used as evidence for decision-making in response, planning, and mitigation have been erratic and slow to arrive, decreasing the ability to respond. Methodology: Applying best practices in data management and data use in public health settings, guided by the concepts outlined in 'Disaster Standards of Care' models, leads to recommendations for a model of best practices in data management and use by public health nurses in public health disasters/emergencies. As the 'patient' in public health disasters/emergencies is the community (local, regional, or national), guidelines for patient documentation are incorporated in the recommendations. Findings: Using this model, public health nurses could better plan how to prepare for, respond to, and mitigate disasters in their communities, and better participate in decision-making in all three phases, bringing public health nursing data to the discussion as part of the evidence base for decision-making.

Keywords: data management, decision making, disaster planning documentation, public health nursing

Procedia PDF Downloads 216
24315 An Embarrassingly Simple Semi-supervised Approach to Increase Recall in Online Shopping Domain to Match Structured Data with Unstructured Data

Authors: Sachin Nagargoje

Abstract:

Completely labeled data is often difficult to obtain in practice, and even when one manages to obtain it, its quality is always in question. In the shopping vertical, the input data are offers, supplied by advertisers with or without good-quality information. In this paper, the author investigates the possibility of using a very simple semi-supervised learning approach to increase the recall of unhealthy offers (those with badly written offer titles or partial product details) in the shopping vertical domain. The semi-supervised learning method improved recall in the smartphone category by 30% in A/B testing on 10% of traffic and increased the year-over-year (YoY) number of impressions per month by 33% in production. This also produced a significant increase in revenue, which cannot be publicly disclosed.
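
The paper does not publish its exact procedure, so the following is a generic self-training baseline in the same "embarrassingly simple" spirit: a classifier trained on the few labelled offers pseudo-labels the unlabelled ones it is confident about. The offer titles, labels, and threshold are toy assumptions; scikit-learn marks unlabelled samples with -1:

```python
# Generic self-training sketch, not the paper's specific method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

titles = ["iphone 12 64gb black", "phone good buy now!!!",
          "samsung galaxy s21 128gb", "best offer cheap phone"]
labels = [0, 1, -1, -1]     # 0 = healthy, 1 = unhealthy, -1 = unlabelled

X = TfidfVectorizer().fit_transform(titles)
clf = SelfTrainingClassifier(LogisticRegression(), threshold=0.75)
clf.fit(X, labels)          # confident predictions become pseudo-labels
print(clf.predict(X))
```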

Keywords: semi-supervised learning, clustering, recall, coverage

Procedia PDF Downloads 116
24314 Genodata: The Human Genome Variation Using BigData

Authors: Surabhi Maiti, Prajakta Tamhankar, Prachi Uttam Mehta

Abstract:

Since the completion of the Human Genome Project, there has been an unparalleled escalation in the sequencing of genomic data. This project was a major leap forward in medical research, especially in genomics, and it won accolades by using a concept called big data, which had earlier been used extensively to create business value. Big data involves data sets that are generally terabytes, petabytes, or exabytes in size, whereas such data were traditionally used and managed with spreadsheets and RDBMSs. The voluminous data made processing tedious and time-consuming, and hence a stronger framework, Hadoop, was introduced into the genetic sciences to make data processing faster and more efficient. This paper focuses on using SPARK, which is gaining momentum with the advancement of big data technologies. Cloud storage is an effective medium for storing the large data sets generated by genetic research and the result sets produced by SPARK analysis.
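
An illustrative PySpark job over variant data of the kind the paper keeps in cloud storage: counting variants per chromosome from a VCF-derived table. The bucket path and column names are assumptions:

```python
# Sketch of a Spark aggregation over genomic variant records.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("genodata").getOrCreate()

variants = (spark.read
            .option("header", True)
            .csv("s3a://genome-bucket/variants.csv"))   # hypothetical path

(variants
 .groupBy("chromosome")                   # assumed column name
 .agg(F.count("*").alias("n_variants"))
 .orderBy(F.desc("n_variants"))
 .show())
```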

Keywords: human genome project, Bigdata, genomic data, SPARK, cloud storage, Hadoop

Procedia PDF Downloads 253
24313 Ontology for a Voice Transcription of OpenStreetMap Data: The Case of Space Apprehension by Visually Impaired Persons

Authors: Said Boularouk, Didier Josselin, Eitan Altman

Abstract:

In this paper, we present a vocal ontology of OpenStreetMap data for the apprehension of space by visually impaired people. The platform, based on produsage, gives data producers freedom to choose the descriptors of geocoded locations. Unfortunately, this freedom, also called folksonomy, complicates subsequent searches of the data. We address this issue with a simple but usable method to extract data from OSM databases and deliver them to visually impaired people using text-to-speech (TTS) technology. We focus on helping people with visual disabilities plan their itinerary and comprehend a map by querying a computer and getting information about the surrounding environment in a mono-modal human-computer dialogue.
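
A sketch of such a mono-modal pipeline: query OpenStreetMap via the public Overpass API for named amenities around a point and speak the result with a TTS engine. The coordinates, radius, and the choice of pyttsx3 are illustrative assumptions, not necessarily the paper's stack:

```python
# Illustrative OSM-to-speech pipeline.
import requests
import pyttsx3

query = """
[out:json];
node(around:200, 43.9493, 4.8055)["amenity"];
out;
"""
resp = requests.post("https://overpass-api.de/api/interpreter",
                     data={"data": query}, timeout=30)
names = [el["tags"]["name"] for el in resp.json()["elements"]
         if "name" in el.get("tags", {})]

engine = pyttsx3.init()
engine.say("Around you: " + ", ".join(names[:5]))
engine.runAndWait()
```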

Keywords: TTS, ontology, open street map, visually impaired

Procedia PDF Downloads 293
24312 Design and Development of a Platform for Analyzing Spatio-Temporal Data from Wireless Sensor Networks

Authors: Walid Fantazi

Abstract:

The development of sensor technology (such as microelectromechanical systems (MEMS), wireless communications, embedded systems, distributed processing, and wireless sensor applications) has contributed to a broad range of WSN applications capable of collecting large amounts of spatiotemporal data in real time. These systems require real-time processing to manage storage and query the data as they arrive. To cover these needs, we propose in this paper a snapshot spatiotemporal data model based on object-oriented concepts. This model allows data to be stored while reducing redundancy, which makes it easier to execute spatiotemporal queries and saves analysis time. Further, to ensure the robustness of the system and eliminate congestion in main memory, we propose an in-RAM spatiotemporal indexing technique called Captree*. As a result, we offer an RIA (Rich Internet Application)-based SOA application architecture that allows remote monitoring and control.

Keywords: WSN, indexing data, SOA, RIA, geographic information system

Procedia PDF Downloads 248
24311 Prediction of Marine Ecosystem Changes Based on the Integrated Analysis of Multivariate Data Sets

Authors: Prozorkevitch D., Mishurov A., Sokolov K., Karsakov L., Pestrikova L.

Abstract:

The current body of knowledge about the marine environment and the dynamics of marine ecosystems includes a huge amount of heterogeneous data collected over decades, generally spanning a wide range of hydrological, biological, and fishery data. Marine researchers collect these data and analyze how and why the ecosystem has changed from past to present. Based on these historical records and the linkages between the processes, it is possible to predict future changes. Multivariate analysis of trends and their interconnections in the marine ecosystem may be used as an instrument for predicting further ecosystem evolution. This wide range of information about the components of the marine ecosystem, covering more than 50 years, needs to be used to investigate how such data arrays can help to predict the future.

Keywords: Barents Sea ecosystem, abiotic, biotic, data sets, trends, prediction

Procedia PDF Downloads 112
24310 Optical Fiber Data Throughput in a Quantum Communication System

Authors: Arash Kosari, Ali Araghi

Abstract:

A mathematical model of an optical-fiber communication channel is developed, resulting in an expression for the throughput and loss of the corresponding link. The data are assumed to be transmitted using separate photons with different polarizations. The derived model shows the dependence of data throughput on the length of the channel and on the depolarization factor. It is observed that absorption of photons affects the throughput more strongly than depolarization does. In addition, the probability of depolarization and the absorption of radiated photons are obtained.
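
The paper's exact expression is not reproduced here; the following sketch only illustrates the qualitative claim with a standard attenuation model: photon survival decays exponentially with fibre length, and surviving photons are discarded with a depolarization probability p. The loss coefficient and p are assumed values:

```python
# Illustrative single-photon throughput model (assumed, not the paper's).
def throughput(rate_in_hz, length_km, loss_db_per_km=0.2, p_depol=0.05):
    survival = 10 ** (-loss_db_per_km * length_km / 10)  # fibre attenuation
    return rate_in_hz * survival * (1 - p_depol)

for L in (10, 50, 100):
    print(f"{L:4d} km: {throughput(1e6, L):,.0f} photons/s")
# At long lengths the exponential absorption term dominates the linear
# depolarization factor, consistent with the abstract's observation.
```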

Keywords: absorption, data throughput, depolarization, optical fiber

Procedia PDF Downloads 284
24309 Metabolic Syndrome among Some Originates of Mbo Ethnic Group Living in Yaounde-Cameroon

Authors: Mandob Enyegue Damaris, Oko Ndjollo Viviane

Abstract:

The prevalence of metabolic syndrome is increasing throughout the world, and its etiology depends on different factors, such as ethnic group. This study aimed to evaluate metabolic syndrome among people of the Mbo ethnic group living in Yaounde, Cameroon. The study was conducted on one hundred and thirty-two people (40 men and 92 women) aged between 18 and 60 years who were referred to the Andre Fouda Medical Foundation in Yaounde. Metabolic syndrome was diagnosed using the Adult Treatment Panel III (ATP-III) 2001 guidelines. Mean age, fasting blood glucose, triglyceride levels, and total cholesterol levels were significantly (P<0.05) higher in women with metabolic syndrome. High blood pressure (56.80%), high fasting glucose (20.45%), and high waist circumference (10.60%) were the most frequent characteristics in comparison to the other metabolic components. The overall prevalence of MetS was 4.55%, and it was higher in women (3.03%) than in men (1.52%). The prevalence of MetS is low among people of the Mbo ethnic group in Yaounde, and high blood pressure is the most common abnormality.

Keywords: individual components, metabolic syndrome, Mbo ethnic group, Yaounde-Cameroon

Procedia PDF Downloads 774
24308 Event Driven Dynamic Clustering and Data Aggregation in Wireless Sensor Network

Authors: Ashok V. Sutagundar, Sunilkumar S. Manvi

Abstract:

Energy, delay, and bandwidth are the prime concerns in wireless sensor networks (WSNs); optimizing energy usage and utilizing bandwidth efficiently are therefore important issues. Event-triggered data aggregation facilitates such optimization for the event-affected area of a WSN. Reliable delivery of critical information to the sink node is another major challenge. To tackle these issues, we propose an event-driven dynamic clustering and data aggregation scheme for WSNs that enhances the lifetime of the network by minimizing redundant data transmission. The proposed scheme operates as follows: (1) whenever an event is triggered, the event-triggered node selects the cluster head; (2) the cluster head gathers data from sensor nodes within the cluster; (3) the cluster head identifies and classifies the events in the collected data using a Bayesian classifier; (4) the data are aggregated using a statistical method; (5) the cluster head discovers paths to the sink node using residual energy, path distance, and bandwidth; (6) if the aggregated data are critical, the cluster head sends them over multiple paths for reliable communication; (7) otherwise, the aggregated data are transmitted towards the sink node over the single path with the most bandwidth and residual energy. The performance of the scheme is validated for various WSN scenarios to evaluate the effectiveness of the proposed approach in terms of aggregation time, cluster formation time, and energy consumed for aggregation.
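
A toy sketch of steps (1), (2), (4), (5), and (7) of the scheme: electing a cluster head by residual energy, aggregating readings statistically, and scoring forwarding paths. The fields, weights, and values are illustrative assumptions, not the paper's parameters:

```python
# Illustrative event-driven clustering/aggregation steps.
import statistics

nodes = [
    {"id": 1, "energy": 0.9, "reading": 41.2},
    {"id": 2, "energy": 0.4, "reading": 40.8},
    {"id": 3, "energy": 0.7, "reading": 55.0},   # possible event outlier
]
paths = [
    {"to_sink": "A", "energy": 0.8, "bandwidth": 250, "distance": 120},
    {"to_sink": "B", "energy": 0.6, "bandwidth": 500, "distance": 300},
]

head = max(nodes, key=lambda n: n["energy"])            # (1) cluster head
readings = [n["reading"] for n in nodes]                # (2) gather data
aggregate = {"mean": statistics.mean(readings),         # (4) statistical
             "max": max(readings)}                      #     aggregation

def path_score(p):                                      # (5) path discovery
    return (0.4 * p["energy"]
            + 0.4 * p["bandwidth"] / 500
            - 0.2 * p["distance"] / 300)

best = max(paths, key=path_score)                       # (7) single path
print(head["id"], aggregate, best["to_sink"])
```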

Keywords: wireless sensor network, dynamic clustering, data aggregation, wireless communication

Procedia PDF Downloads 445
24307 Offshore Outsourcing: Global Data Privacy Controls and International Compliance Issues

Authors: Michelle J. Miller

Abstract:

In recent years, two emerging issues impacting the global employment and business markets have arisen that the legal community must review more closely: offshore outsourcing and data privacy. These two issues intersect because employment opportunities are shifting due to offshore outsourcing, and in some states, like the United States, anti-outsourcing legislation has been passed or proposed to retain jobs within the country. In addition, a global employer's legal requirements to protect the privacy of data extend to employees and third-party service providers, including services outsourced to offshore locations. For this reason, this paper reviews the intersection of these two issues with a specific focus on data privacy.

Keywords: outsourcing, data privacy, international compliance, multinational corporations

Procedia PDF Downloads 406
24306 Weighted Data Replication Strategy for Data Grid Considering Economic Approach

Authors: N. Mansouri, A. Asadi

Abstract:

A data grid is a geographically distributed environment that deals with data-intensive applications in scientific and enterprise computing. Data replication is a common method used to achieve efficient and fault-tolerant data access in grids. In this paper, a dynamic data replication strategy called Enhanced Latest Access Largest Weight (ELALW), an enhanced version of the Latest Access Largest Weight strategy, is proposed. Replication must be used wisely because the storage capacity of each grid site is limited; it is therefore important to design an effective strategy for the replica replacement task. ELALW replaces replicas based on the expected number of future requests, the size of the replica, and the number of copies of the file. It also improves access latency by selecting the best replica when several sites hold replicas. The proposed replica selection chooses the best replica location from among the many replicas based on response time, which is determined by considering the data transfer time, the storage access latency, the replica requests waiting in the storage queue, and the distance between nodes. Simulation results using OptorSim show that our replication strategy achieves better overall performance than other strategies in terms of job execution time, effective network usage, and storage resource usage.
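
A sketch of the replica-selection criterion described above: score each site holding a replica by an estimated response time built from the listed terms and pick the minimum. The exact weighting and per-hop cost are illustrative assumptions:

```python
# Illustrative response-time-based replica selection.
def response_time(site):
    transfer = site["file_mb"] / site["bandwidth_mbps"]     # data transfer
    return (transfer
            + site["storage_latency_s"]                     # storage access
            + site["queued_requests"] * site["service_s"]   # queue wait
            + site["hops"] * 0.005)                         # distance term

replica_sites = [
    {"name": "site-a", "file_mb": 800, "bandwidth_mbps": 100,
     "storage_latency_s": 0.2, "queued_requests": 4,
     "service_s": 0.5, "hops": 2},
    {"name": "site-b", "file_mb": 800, "bandwidth_mbps": 40,
     "storage_latency_s": 0.1, "queued_requests": 0,
     "service_s": 0.5, "hops": 5},
]
best = min(replica_sites, key=response_time)
print(best["name"], round(response_time(best), 2), "s")
```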

Keywords: data grid, data replication, simulation, replica selection, replica placement

Procedia PDF Downloads 258
24305 Evaluation of Satellite and Radar Rainfall Product over Seyhan Plain

Authors: Kazım Kaba, Erdem Erdi, M. Akif Erdoğan, H. Mustafa Kandırmaz

Abstract:

Rainfall is a crucial data source for very different disciplines, such as agriculture, hydrology, and climate, so the rain rate should be well known, both spatially and temporally, for any area. Traditionally, rainfall has been measured with rain gauges at meteorological ground stations. Nowadays, rainfall products are also derived from radar and satellite images with temporal and spatial continuity. In this study, we investigated the accuracy of these rainfall data against rain-gauge data. For this purpose, we used the Adana-Hatay radar hourly total precipitation product (RN1) and the Meteosat convective rainfall rate (CRR) product over the Seyhan plain. We calculated daily rainfall values from the RN1 and CRR hourly precipitation products and used data from rainy days at four stations located within range of the radar from October 2013 to November 2015. The correlation between the rain-gauge data and the two raster rainfall products was found to be low.
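
A minimal sketch of the validation step: correlating daily gauge totals with the co-located radar (RN1) and satellite (CRR) daily totals. The file and column names are assumptions:

```python
# Illustrative gauge-vs-product correlation check.
import pandas as pd

df = pd.read_csv("seyhan_daily_rainfall.csv")     # hypothetical file
# assumed columns: date, station, gauge_mm, rn1_mm, crr_mm

for product in ("rn1_mm", "crr_mm"):
    r = df["gauge_mm"].corr(df[product])          # Pearson r
    print(f"gauge vs {product}: r = {r:.2f}")
```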

Keywords: meteosat, radar, rainfall, rain-gauge, Turkey

Procedia PDF Downloads 321
24304 Spatial Data Mining by Decision Trees

Authors: Sihem Oujdi, Hafida Belbachir

Abstract:

Existing data-mining methods cannot be applied directly to spatial data, because spatial data require the consideration of spatial specificities such as spatial relationships. This paper focuses on classification with decision trees, one of the data-mining techniques. We propose an extension of the C4.5 algorithm for spatial data based on two different approaches: join materialization and querying the different tables on the fly. Similar works have been done on these two main approaches; the first, join materialization, favors processing time at the expense of memory space, whereas the second, querying the different tables on the fly, favors memory space at the expense of processing time. The modified C4.5 algorithm requires three input tables: a target table, a neighbor table, and a spatial join index that contains the possible spatial relationships between the objects in the target table and those in the neighbor table. The proposed algorithms are applied to spatial data in the accidentology domain. A detailed comparative study of our approach with other works on classification by spatial decision trees is presented.
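
An illustrative sketch of the join-materialization variant: the spatial relationship between target objects (accidents) and neighbor objects (roads) is materialized with a spatial join, and the joined attributes feed a decision tree. Here scikit-learn's CART stands in for the modified C4.5, and the file and column names are assumptions:

```python
# Sketch of join materialization + decision-tree classification.
import geopandas as gpd
from sklearn.tree import DecisionTreeClassifier

accidents = gpd.read_file("accidents.shp")        # target table (assumed)
roads = gpd.read_file("roads.shp")                # neighbor table (assumed)

# Materialize the spatial relationship: nearest road for each accident.
joined = gpd.sjoin_nearest(accidents, roads, how="left")

X = joined[["speed_limit", "lanes", "curvature"]]  # assumed attributes
y = joined["severity"]                             # assumed class label
tree = DecisionTreeClassifier(max_depth=5).fit(X, y)
```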

Keywords: C4.5 algorithm, decision trees, S-CART, spatial data mining

Procedia PDF Downloads 609
24303 Trafficking of Women and Children and Solutions to Combat It: The Case of Nigeria

Authors: Olatokunbo Yakeem

Abstract:

Human trafficking is a crime involving gross violations of human rights. Trafficking in persons is a severe socio-economic dilemma with national and international dimensions. Human trafficking, or modern-day slavery, emanated from slavery and has existed since before the 6th century. Today, no country is exempt from the dehumanization of human beings, and as a result, trafficking has become an international issue. The United Nations (UN) presented an international protocol to fight human trafficking worldwide, which produced the international definition of human trafficking. The protocol aims to prevent, suppress, and punish trafficking in persons, especially women and children, and it links trafficking to transnational organised crime rather than migration. Over a hundred and fifty countries worldwide have enacted criminal and penal code trafficking legislation derived from the UN trafficking protocol. Sex trafficking is the most common form of exploitation of women and children; other forms of this crime involve exploiting vulnerable victims through forced labour, child involvement in warfare, domestic servitude, debt bondage, and organ removal for transplantation. Trafficking of women and children into sexual exploitation represents a higher share of human trafficking than any other type of exploitation. Trafficking of women and children can happen either internally or across borders, and it affects all kinds of people regardless of race, social class, culture, religion, or education level; however, it is predominantly a gender-based issue against females. Furthermore, human trafficking can lead to life-threatening infections, mental disorders, lifetime trauma, and even the victim's death. The significance of this study is to explore how the root causes of the trafficking of women and children in Nigeria revolve around poverty, the entrusting of children to relatives and friends, corruption, globalization, weak legislation, and ignorance, and to establish how national, regional, and international organisations are using the 3Ps (Protection, Prevention, and Prosecution) to tackle human trafficking. The methodological approach for this study will be a qualitative paradigm; the rationale behind this selection is that a qualitative method will identify the phenomenon and interpret the findings comprehensively. Data collection will take the form of semi-structured in-depth interviews by telephone and email, and the researcher will use descriptive thematic analysis with complete coding to analyse the data. In summary, this study aims to recommend that the Nigerian federal government include human trafficking as a subject in the educational curriculum, as an early intervention to prevent children from being coerced by criminal gangs. The research also aims to identify the root causes of the trafficking of women and children, to examine the effectiveness of the strategies in place to eradicate human trafficking globally, and to investigate how anti-trafficking bodies such as law enforcement and NGOs collaborate to tackle the upsurge in human trafficking.

Keywords: children, Nigeria, trafficking, women

Procedia PDF Downloads 180
24302 Clinical and Molecular Characterization of Ichthyosis at King Abdulaziz Medical City, Riyadh KSA

Authors: Reema K. AlEssa, Sahar Alshomer, Abdullah Alfaleh, Sultan ALkhenaizan, Mohammed Albalwi

Abstract:

Ichthyosis is a disorder of abnormal keratinization characterized by excessive scaling; it comprises more than twenty subtypes that vary in severity, mode of inheritance, and the genes involved. There are insufficient data in the literature about the epidemiology and characteristics of ichthyosis locally. Our aim is to identify its histopathological features and genetic profile. Method: This is an observational retrospective case series conducted in March 2020 that included all patients diagnosed with ichthyosis, confirmed by histological and molecular findings, over the last 20 years at King Abdulaziz Medical City (KAMC), Riyadh, Saudi Arabia. Molecular analysis was performed by testing genomic DNA and checking genetic variations using the AmpliSeq panel. All disease-causing variants were checked against the HGMD, ClinVar, Genome Aggregation Database (gnomAD), and Exome Aggregation Consortium (ExAC) databases. Result: A total of 60 cases of ichthyosis were identified, with a mean age of 13 ± 9.2 years and an almost equal distribution between female (29, 48%) and male (31, 52%) patients. The majority were Saudis (94%). More than half of the patients presented with generalized scaling (33, 55%), followed by dryness and coarse skin (19, 31.6%) and hyperlinearity (5, 8.3%). Family history and history of consanguinity were seen in 26 (43.3%) and 13 (22%) cases, respectively, and a history of collodion baby was found in 6 (10%) cases. The most frequent genes were ALOX12B, ALOXE3, CERS3, CYP4F22, DOLK, FLG2, GJB2, PNPLA1, SLC27A4, SPINK5, STS, SUMF1, TGM1, TGM5, and VPS33B. Variations were detected most frequently in CYP4F22 (16 cases, 26.6%), followed by ALOXE3 (6, 10%) and STS (6, 10%), then TGM1 (5, 8.3%) and ALOX12B (5, 8.3%). Molecular genetic analysis identified 23 different genetic variations in the ichthyosis genes, of which 13 were novel mutations. Homozygous mutations were detected in the majority of cases (54, 90%), and only 1 case was heterozygous. A few cases (4, 6.6%) had an unknown type of ichthyosis with a negative genetic result. Conclusion: Thirteen novel mutations were discovered, and about half of the ichthyosis patients had a positive history of consanguinity.

Keywords: ichthyosis, genetic profile, molecular characterization, congenital ichthyosis

Procedia PDF Downloads 191
24301 Measurement of Influence of the COVID-19 Pandemic on Efficiency of Japan’s Railway Companies

Authors: Hideaki Endo, Mika Goto

Abstract:

The global outbreak of the COVID-19 pandemic has seriously affected railway businesses. The number of railway passengers decreased due to the decline in the number of commuters and business travelers avoiding crowded trains and a sharp drop in inbound tourists visiting Japan. This has affected not only railway businesses but also related businesses, including hotels, leisure businesses, and retail businesses at station buildings. In 2021, the companies were divided into profitable and loss-making companies; this division suggests that railway companies, particularly loss-making ones, needed to reduce operational inefficiency. To measure the impact of COVID-19 and discuss the sustainable management strategies of railway companies, we examine the cost inefficiency of Japanese listed railway companies by applying stochastic frontier analysis (SFA) to their operational and financial data. First, we employ the stochastic frontier cost function approach to measure inefficiency. The cost frontier function is formulated as a Cobb–Douglas type, and we estimate its parameters and the inefficiency terms. This study uses panel data comprising 26 Japanese listed railway companies from 2005 to 2020. This period includes several events that deteriorated the business environment, such as the financial crisis of 2007-2008 and the Great East Japan Earthquake of 2011, and we compare their impacts with that of the COVID-19 pandemic after 2020. Second, we identify the characteristics of the best-practice railway companies and examine the drivers of cost inefficiency. Third, we analyze the factors influencing cost inefficiency by comparing the profiles of the top 10 railway companies and the others before and during the pandemic. Finally, we examine the relationship between cost inefficiency and the implementation of efficiency measures at each railway company. We obtained the following four findings. First, most Japanese railway companies showed their lowest cost inefficiency (most efficient) in 2014 and their highest in 2020 (least efficient), during the COVID-19 pandemic; the second-worst year was 2009, reflecting the financial crisis. We did not observe a significant impact of the 2011 Great East Japan Earthquake, because, except for JR-EAST, no company in the sample operated in the area affected by the earthquake. Second, the best-practice railway companies are KEIO and TOKYU; the main reason for their good performance is that both operate in and near the densely populated Tokyo metropolitan area. Third, we found that non-best-practice companies had a larger decrease in passenger-kilometers than best-practice companies, indicating that passengers made fewer long-distance trips because they refrained from inter-prefectural travel during the pandemic. Finally, we found that companies implementing more efficiency-improvement measures had higher cost efficiency and effectively used their customer databases through proactive DX investments in marketing and asset management.
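
For reference, a standard Cobb–Douglas stochastic frontier cost specification of the kind the study estimates (variable names are illustrative; the study's exact regressors are not reproduced here):

```latex
% Total cost C for firm i in year t, with outputs/input prices x_k and a
% composed error: symmetric noise v and one-sided cost inefficiency u >= 0.
\ln C_{it} = \beta_0 + \sum_{k} \beta_k \ln x_{k,it} + v_{it} + u_{it},
\qquad v_{it} \sim N(0, \sigma_v^2), \quad u_{it} \sim N^{+}(0, \sigma_u^2)
```

Here a firm-year's estimated cost inefficiency is the one-sided component u, so "most efficient" years correspond to the smallest expected u given the residual.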

Keywords: COVID-19 pandemic, stochastic frontier analysis, railway sector, cost efficiency

Procedia PDF Downloads 67