Search results for: data sensitivity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26510

25400 Evaluation of the Radiolabelled 68Ga-DOTATOC Complex in Adenocarcinoma Breast Cancer

Authors: S. Zolghadri, M. Naderi, H. Yousefnia, B. Alirzapour, A. R. Jalilian, A. Ramazani

Abstract:

Nowadays, 68Ga-DOTATOC is recognized as a potential agent for the detection of neuroendocrine tumours, having shown higher sensitivity than 111In-octreotide. The aim of this study was to evaluate the effectiveness of this new agent in the diagnosis of adenocarcinoma breast cancer. 68Ga-DOTATOC was prepared with a radiochemical purity higher than 98% and a specific activity of 39.6 TBq/mmol. 37 MBq of the complex was injected intravenously into BALB/c mice bearing adenocarcinoma breast cancer. PET/CT images acquired at 30, 60, and 90 min post-injection demonstrated significant accumulation at the tumour sites. Considerable activity was also observed in the kidney and bladder, the main routes of excretion. Overall, the results showed that 68Ga-DOTATOC can be considered a suitable complex for the diagnosis of adenocarcinoma breast cancer using PET.

Keywords: adenocarcinoma breast cancer, 68Ga, octreotide, imaging

Procedia PDF Downloads 341
25399 A Comparison of Image Data Representations for Local Stereo Matching

Authors: André Smith, Amr Abdel-Dayem

Abstract:

The stereo matching problem, though present for several decades, continues to be an active area of research. The goal of this research is to find correspondences between elements found in a set of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advancements in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While the cost is, at its core, based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine how well each representation reduces the cost of the correct correspondence relative to other possible matches.
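
As an illustration of the kind of cost function under comparison, the following minimal sketch (our illustration, not the authors' code) computes a sum-of-absolute-differences (SAD) matching cost over a window; the same function works for grayscale or RGB arrays, so the two representations can be compared directly. The window size and disparity range are arbitrary choices.

```python
import numpy as np

def sad_cost(left, right, x, y, d, win=3):
    """Sum-of-absolute-differences cost for matching pixel (y, x) in the
    left image against (y, x - d) in the right image, over a square window.
    Works for both grayscale (H, W) and colour (H, W, 3) arrays."""
    h = win // 2
    patch_l = left[y - h:y + h + 1, x - h:x + h + 1].astype(np.float64)
    patch_r = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(np.float64)
    return np.abs(patch_l - patch_r).sum()

def best_disparity(left, right, x, y, d_max=16, win=3):
    """Pick the disparity with the lowest cost; a good representation should
    give the correct match a clearly lower cost than its competitors."""
    costs = [sad_cost(left, right, x, y, d, win) for d in range(d_max)]
    return int(np.argmin(costs)), costs
```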

Keywords: colour data, local stereo matching, stereo correspondence, disparity map

Procedia PDF Downloads 370
25398 The Beneficial Effects of Inhibition of Hepatic Adaptor Protein Phosphotyrosine Interacting with PH Domain and Leucine Zipper 2 on Glucose and Cholesterol Homeostasis

Authors: Xi Chen, King-Yip Cheng

Abstract:

Hypercholesterolemia, characterized by high low-density lipoprotein cholesterol (LDL-C), raises cardiovascular events in patients with type 2 diabetes (T2D). Although several drugs, such as statins and PCSK9 inhibitors, are available for the treatment of hypercholesterolemia, they exert detrimental effects on glucose metabolism and hence increase the risk of T2D. On the other hand, the drugs used to treat T2D have minimal effect on improving the lipid profile. Therefore, there is an urgent need to develop treatments that can simultaneously improve glucose and lipid homeostasis. Adaptor protein phosphotyrosine interacting with PH domain and leucine zipper 2 (APPL2) causes insulin resistance in the liver and skeletal muscle by inhibiting insulin and adiponectin actions in animal models. Single-nucleotide polymorphisms in the APPL2 gene have been associated with LDL-C, non-alcoholic fatty liver disease, and coronary artery disease in humans. The aim of this project is to investigate whether an APPL2 antisense oligonucleotide (ASO) can alleviate diet-induced T2D and hypercholesterolemia. A high-fat diet (HFD) was used to induce obesity and insulin resistance in mice. GalNAc-conjugated APPL2 ASO (GalNAc-APPL2-ASO) was used to selectively silence hepatic APPL2 expression in C57BL/6J mice. Glucose, lipid, and energy metabolism were monitored. Immunoblotting and quantitative PCR analysis showed that GalNAc-APPL2-ASO treatment reduced APPL2 expression selectively in the liver but not in other tissues, such as adipose tissue, kidney, muscle, and heart. Glucose tolerance and insulin sensitivity tests revealed that GalNAc-APPL2-ASO progressively improved glucose tolerance and insulin sensitivity. Blood chemistry analysis revealed that mice treated with GalNAc-APPL2-ASO had significantly lower circulating levels of total cholesterol and LDL cholesterol, whereas circulating levels of high-density lipoprotein (HDL) cholesterol, triglyceride, and free fatty acid did not differ between mice treated with GalNAc-APPL2-ASO and GalNAc-Control-ASO. GalNAc-APPL2-ASO treatment had no obvious effect on food intake, body weight, or liver injury markers, supporting its tolerability and safety. We showed that selectively silencing hepatic APPL2 alleviated insulin resistance and hypercholesterolemia and improved energy metabolism in a diet-induced obese mouse model, indicating APPL2 as a therapeutic target for metabolic diseases.

Keywords: APPL2, antisense oligonucleotide, hypercholesterolemia, type 2 diabetes

Procedia PDF Downloads 67
25397 Business-Intelligence Mining of Large Decentralized Multimedia Datasets with a Distributed Multi-Agent System

Authors: Karima Qayumi, Alex Norta

Abstract:

The rapid generation of high-volume and highly varied data from new technologies poses challenges for the generation of business intelligence. Most organizations and business owners need to extract data from multiple sources and apply analytical methods for the purpose of developing their business. Decentralized data-management environments therefore rely on a distributed computing paradigm. While data are stored in highly distributed systems, implementing distributed data-mining techniques is a challenge. The aim of these techniques is to gather knowledge from every domain and from all datasets stemming from distributed resources. As agent technologies offer significant contributions to managing the complexity of distributed systems, we consider them for next-generation data-mining processes. To demonstrate agent-based business-intelligence operations, we use agent-oriented modeling techniques to develop a new artifact for mining massive datasets.
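
A minimal sketch of the agent-based idea (our illustration, not the authors' artifact): each agent mines item counts from its own local data partition, and a coordinator merges the partial results into global knowledge. All class names and the sample data are hypothetical.

```python
from collections import Counter

class MinerAgent:
    """An agent that mines frequent items from its own local dataset."""
    def __init__(self, name, local_records):
        self.name = name
        self.local_records = local_records  # list of transactions (lists of items)

    def mine(self):
        # Local data-mining step: count item occurrences in this partition only.
        counts = Counter()
        for record in self.local_records:
            counts.update(record)
        return counts

class CoordinatorAgent:
    """Aggregates partial results from distributed miner agents."""
    def combine(self, partial_counts, min_support=2):
        total = Counter()
        for counts in partial_counts:
            total.update(counts)
        return {item: n for item, n in total.items() if n >= min_support}

# Hypothetical decentralized datasets, one per site.
agents = [
    MinerAgent("site-A", [["tv", "hdmi"], ["tv", "mount"]]),
    MinerAgent("site-B", [["tv", "hdmi"], ["radio"]]),
]
frequent = CoordinatorAgent().combine([a.mine() for a in agents])
print(frequent)  # {'tv': 3, 'hdmi': 2}
```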

Keywords: agent-oriented modeling (AOM), business intelligence model (BIM), distributed data mining (DDM), multi-agent system (MAS)

Procedia PDF Downloads 432
25396 Timing and Noise Data Mining Algorithm and Software Tool in Very Large Scale Integration (VLSI) Design

Authors: Qing K. Zhu

Abstract:

Very Large Scale Integration (VLSI) design has become very complex due to the continuous integration of millions of gates on one chip, following Moore’s law. Designers encounter numerous report files during design iterations with timing and noise analysis tools. This paper presents our work using data-mining techniques combined with HTML tables to extract and represent critical timing/noise data. Running speed matters when such a data-mining tool is applied in real applications, so the software employs table look-up techniques in the programming; performance testing confirmed reasonable running speed. We also added several advanced features for the application in one industrial chip design.
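
To give a flavour of such a tool (this is our sketch, not the paper's software), the snippet below parses hypothetical timing-report lines into a dictionary for fast table look-up and renders the timing-critical paths as an HTML table.

```python
# Hypothetical report format: "path_name  slack_ns  noise_mv"
RAW_REPORT = """\
clk_to_q    -0.12   35
addr_bus     0.45   12
data_bus    -0.03   48
"""

def parse_report(text):
    # Table look-up structure: path name -> (slack, noise), O(1) access.
    table = {}
    for line in text.strip().splitlines():
        name, slack, noise = line.split()
        table[name] = (float(slack), float(noise))
    return table

def to_html(table, slack_limit=0.0):
    rows = ["<table border='1'>",
            "<tr><th>Path</th><th>Slack (ns)</th><th>Noise (mV)</th></tr>"]
    for name, (slack, noise) in sorted(table.items(), key=lambda kv: kv[1][0]):
        if slack < slack_limit:  # report only timing-critical paths
            rows.append(f"<tr><td>{name}</td><td>{slack}</td><td>{noise}</td></tr>")
    rows.append("</table>")
    return "\n".join(rows)

print(to_html(parse_report(RAW_REPORT)))
```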

Keywords: VLSI design, data mining, big data, HTML forms, web, EDA, timing, noise

Procedia PDF Downloads 254
25395 Introduction of Electronic Health Records to Improve Data Quality in Emergency Department Operations

Authors: Anuruddha Jagoda, Samiddhi Samarakoon, Anil Jasinghe

Abstract:

In its simplest form, data quality can be defined as 'fitness for use', and it is a concept with multiple dimensions. Emergency Departments (EDs) require information to treat patients and, on the other hand, the ED is the primary source of information regarding accidents, injuries, emergencies, etc. It is also the starting point of various patient registries, databases, and surveillance systems. This interventional study was carried out to improve data quality at the ED of the National Hospital of Sri Lanka (NHSL), the premier trauma care centre in Sri Lanka, by introducing an e-health solution. The study consisted of three components. First, a research study assessed the quality of data in relation to five selected dimensions of data quality: accuracy, completeness, timeliness, legibility, and reliability. Second, the intervention was to develop and deploy an electronic emergency department information system (eEDIS). Third, post-intervention assessment confirmed that all five dimensions of data quality had improved, with the most significant improvements in the accuracy and timeliness dimensions.

Keywords: electronic health records, electronic emergency department information system, emergency department, data quality

Procedia PDF Downloads 274
25394 Data Presentation of Lane-Changing Event Trajectories Using the HighD Dataset

Authors: Basma Khelfa, Antoine Tordeux, Ibrahima Ba

Abstract:

We present a descriptive analysis of lane-changing events on multi-lane roads. The data come from the Highway Drone Dataset (HighD), which provides microscopic vehicle trajectories on highways. This paper describes and analyses the role of the different parameters and their significance. Using the HighD data, we aim to find the most frequent reasons that motivate drivers to change lanes. We used the programming language R to process the data. We analyse the involvement and relationships of the variables describing the ego vehicle and the four vehicles surrounding it, i.e., distance, speed difference, time gap, and acceleration, according to the class of the vehicle (car or truck) and the manoeuvre it undertook (overtaking or falling back).
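
The abstract's processing was done in R; a minimal Python equivalent of the feature-extraction step might look like the sketch below. The column names (frame, id, x, xVelocity, precedingId) follow HighD's convention, but the file path and the derived-feature definitions are our assumptions.

```python
import pandas as pd

# Hypothetical path to a HighD tracks file.
tracks = pd.read_csv("01_tracks.csv")

# Join each vehicle with its preceding vehicle in the same frame.
lead = tracks[["frame", "id", "x", "xVelocity"]].rename(
    columns={"id": "precedingId", "x": "lead_x", "xVelocity": "lead_v"})
pairs = tracks.merge(lead, on=["frame", "precedingId"], how="inner")

# Candidate lane-change predictors: gap distance, speed difference, time gap.
pairs["gap"] = (pairs["lead_x"] - pairs["x"]).abs()
pairs["dv"] = pairs["xVelocity"] - pairs["lead_v"]
pairs["time_gap"] = pairs["gap"] / pairs["xVelocity"].abs().clip(lower=0.1)

print(pairs[["gap", "dv", "time_gap"]].describe())
```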

Keywords: autonomous driving, physical traffic model, prediction model, statistical learning process

Procedia PDF Downloads 261
25393 A Spectrophotometric Method for the Determination of Folic Acid - A Vitamin B9 in Pharmaceutical Dosage Samples

Authors: Chand Pasha, Yasser Turki Alharbi, Krasamira Stancheva

Abstract:

A simple spectrophotometric method for the determination of folic acid in pharmaceutical dosage samples was developed. The method is based on the diazotization of thiourea with sodium nitrite in acidic medium, which yields a diazonium compound; this is then coupled with folic acid in basic medium to yield a yellow azo dye. Beer–Lambert behaviour is observed in the range 0.5–16.2 μg mL⁻¹ at a maximum wavelength of 416 nm. The molar absorptivity, Sandell's sensitivity, linear regression equation, detection limit, and quantitation limit were found to be 5.695×10⁴ L mol⁻¹ cm⁻¹, 7.752×10⁻³ μg cm⁻², y = 0.092x - 0.018, 0.687 μg mL⁻¹, and 2.083 μg mL⁻¹, respectively. The method successfully determined folic acid in pharmaceutical formulations.
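
With the reported calibration line y = 0.092x - 0.018 (absorbance y versus concentration x in μg mL⁻¹), the concentration of an unknown sample follows by inverting the line, as in this short sketch; the sample absorbance value is made up for illustration.

```python
SLOPE, INTERCEPT = 0.092, -0.018  # reported regression: A = 0.092*C - 0.018

def concentration_from_absorbance(a):
    """Invert the calibration line: C = (A - intercept) / slope."""
    c = (a - INTERCEPT) / SLOPE
    if not 0.5 <= c <= 16.2:  # outside the reported Beer-Lambert linear range
        raise ValueError(f"{c:.2f} ug/mL is outside the validated range")
    return c

print(f"{concentration_from_absorbance(0.350):.2f} ug/mL")  # -> 4.00 ug/mL
```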

Keywords: folic acid determination, spectrophotometry, diazotization, thiourea, pharmaceutical dosage samples

Procedia PDF Downloads 76
25392 Evaluation of Golden Beam Data for the Commissioning of 6 and 18 MV Photon Beams in a Varian Linear Accelerator

Authors: Shoukat Ali, Abdul Qadir Jandga, Amjad Hussain

Abstract:

Objective: The main purpose of this study is to compare the percent depth doses (PDDs) and in-plane and cross-plane profiles of Varian golden beam data to measured data for 6 and 18 MV photons, for the commissioning of the Eclipse treatment planning system. Introduction: Commissioning of a treatment planning system requires extensive acquisition of beam data for the clinical use of linear accelerators. Accurate dose delivery requires entering the PDDs, profiles, and dose-rate tables for open and wedged fields into the treatment planning system, enabling it to calculate MUs and dose distributions. Varian offers a generic set of beam data as reference data but does not recommend it for clinical use. In this study, we compared the generic beam data with measured beam data to evaluate whether the generic data are reliable enough for clinical purposes. Methods and Material: PDDs and profiles of open and wedged fields for different field sizes and at different depths were measured as per Varian’s algorithm commissioning guideline. The measurements were performed with a PTW 3D scanning water phantom with a Semiflex ion chamber and MEPHYSTO software. The Varian golden beam data available online were compared with the measured data to evaluate whether the golden beam data are accurate enough to be used for commissioning the Eclipse treatment planning system. Results: The deviation between measured and golden beam data was at most 2%. For the PDDs, the deviation increases at deeper depths. Similarly, the profiles show increasing deviation at large field sizes and increasing depths. Conclusion: The study shows that the percentage deviation between measured and golden beam data is within the acceptable tolerance, and the golden beam data can therefore be used for the commissioning process; however, verification of a small subset of acquired data against the golden beam data should be mandatory before clinical use.
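
A minimal sketch of the comparison step (our illustration, not the clinic's QA code): interpolate the golden-beam PDD onto the measured depth grid and report the percentage deviation against the study's 2% tolerance. The sample arrays are invented.

```python
import numpy as np

# Hypothetical measured and golden-beam PDD curves (depth in cm, dose in %).
depth_meas = np.array([1.5, 5.0, 10.0, 20.0])
pdd_meas   = np.array([100.0, 86.2, 67.0, 38.5])
depth_gold = np.array([1.5, 5.0, 10.0, 20.0])
pdd_gold   = np.array([100.0, 86.8, 67.9, 39.2])

# Resample golden data onto the measured depth grid, then compare point by point.
pdd_gold_i = np.interp(depth_meas, depth_gold, pdd_gold)
deviation = 100.0 * (pdd_meas - pdd_gold_i) / pdd_gold_i

for d, dev in zip(depth_meas, deviation):
    flag = "OK" if abs(dev) <= 2.0 else "INVESTIGATE"  # 2% tolerance from the study
    print(f"depth {d:5.1f} cm: deviation {dev:+.2f}%  {flag}")
```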

Keywords: percent depth dose, flatness, symmetry, golden beam data

Procedia PDF Downloads 489
25391 Variable-Fidelity Surrogate Modelling with Kriging

Authors: Selvakumar Ulaganathan, Ivo Couckuyt, Francesco Ferranti, Tom Dhaene, Eric Laermans

Abstract:

Variable-fidelity surrogate modelling offers an efficient way to approximate function data available at multiple degrees of accuracy, each with a different computational cost. In this paper, a Kriging-based variable-fidelity surrogate modelling approach is introduced to approximate such deterministic data. Initially, individual Kriging surrogate models, enhanced with gradient data of different degrees of accuracy, are constructed. These gradient-enhanced Kriging surrogate models are then strategically coupled using a recursive CoKriging formulation to provide an accurate surrogate model for the highest-fidelity data. While, intuitively, gradient data are useful for enhancing the accuracy of surrogate models, the primary motivation behind this work is to investigate whether it is also worthwhile to incorporate gradient data of varying degrees of accuracy.
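
A heavily simplified sketch of the recursive co-kriging idea (using scikit-learn Gaussian processes as the Kriging engine, and without the gradient enhancement described in the paper): fit a model to the cheap low-fidelity data, then model the high-fidelity response as a scaled low-fidelity prediction plus a discrepancy GP. The test functions and sample sizes are arbitrary.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical cheap (low-fidelity) and expensive (high-fidelity) functions.
f_lo = lambda x: np.sin(8 * x)                  # biased, cheap approximation
f_hi = lambda x: np.sin(8 * x) + 0.2 * x        # accurate, expensive truth

x_lo = np.linspace(0, 1, 25)[:, None]           # plenty of cheap samples
x_hi = np.linspace(0, 1, 6)[:, None]            # few expensive samples

gp_lo = GaussianProcessRegressor(RBF(0.1)).fit(x_lo, f_lo(x_lo.ravel()))

# Recursive step: regress high-fidelity data on the low-fidelity prediction.
mu_lo_at_hi = gp_lo.predict(x_hi)
y_hi = f_hi(x_hi.ravel())
rho = np.polyfit(mu_lo_at_hi, y_hi, 1)[0]       # scaling factor between fidelities
gp_delta = GaussianProcessRegressor(RBF(0.1)).fit(x_hi, y_hi - rho * mu_lo_at_hi)

x_test = np.linspace(0, 1, 5)[:, None]
pred = rho * gp_lo.predict(x_test) + gp_delta.predict(x_test)
print(np.round(pred - f_hi(x_test.ravel()), 3))  # small errors despite few hi-fi points
```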

Keywords: Kriging, CoKriging, surrogate modelling, variable-fidelity modelling, gradients

Procedia PDF Downloads 558
25390 Robust Barcode Detection with Synthetic-to-Real Data Augmentation

Authors: Xiaoyan Dai, Hsieh Yisan

Abstract:

Barcode processing of captured images is a major challenge, as different shooting conditions can result in different barcode appearances. This paper proposes deep learning-based barcode detection using synthetic-to-real data augmentation. We first augment the barcodes themselves; we then augment the images containing the barcodes, generating a large variety of data that is close to actual shooting environments. Comparisons with previous works and evaluations on our original data show that this approach achieves state-of-the-art performance on various real images. In addition, the system uses hybrid resolution for the barcode “scan” and is applicable to real-time applications.
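
A minimal sketch of the synthetic-to-real augmentation idea using Pillow (our illustration; the paper's actual pipeline and parameters are not published here): take a clean synthetic barcode image and randomize the conditions a camera would introduce.

```python
import random
from PIL import Image, ImageEnhance, ImageFilter

def augment(barcode_img: Image.Image) -> Image.Image:
    """Randomly perturb a clean synthetic barcode to mimic real shooting conditions."""
    img = barcode_img.convert("RGB")
    # Geometric variation: small random rotation, as from a hand-held camera.
    img = img.rotate(random.uniform(-15, 15), expand=True, fillcolor=(255, 255, 255))
    # Optical variation: defocus blur.
    img = img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0, 2)))
    # Lighting variation: brightness and contrast jitter.
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.6, 1.4))
    img = ImageEnhance.Contrast(img).enhance(random.uniform(0.6, 1.4))
    return img

# Usage: generate many training variants from one synthetic source image.
# base = Image.open("synthetic_barcode.png")
# variants = [augment(base) for _ in range(100)]
```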

Keywords: barcode detection, data augmentation, deep learning, image-based processing

Procedia PDF Downloads 168
25389 Mathematical Modeling of District Cooling Systems

Authors: Dana Alghool, Tarek ElMekkawy, Mohamed Haouari, Adel Elomari

Abstract:

District cooling systems have recently captured the attention of many researchers due to the enormous benefits they offer in comparison with traditional cooling technologies, and they are considered a major component of urban cities due to the significant reduction in energy consumption they enable. This paper aims to find the optimal design and operation of district cooling systems by developing a mixed-integer linear programming model that minimizes the annual total system cost while satisfying the end-user cooling demand. The proposed model was tested with different cooling demand scenarios; only the results of the very-high-demand scenario are presented in this paper. A sensitivity analysis on different parameters of the model was also performed.
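
To give a flavour of such a formulation, here is a toy mixed-integer linear program in PuLP (our sketch, far smaller than the paper's model): choose which chillers to install and how much cooling each delivers, so that annualized capital plus operating cost is minimized while demand is met. All numbers are invented.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, LpStatus

demand = 900.0  # peak cooling demand (kW), hypothetical
chillers = {    # name: (capacity kW, annualized capital cost, operating cost per kW)
    "small":  (400, 20000, 55),
    "medium": (700, 32000, 50),
    "large":  (1200, 50000, 48),
}

prob = LpProblem("district_cooling_design", LpMinimize)
build = {c: LpVariable(f"build_{c}", cat=LpBinary) for c in chillers}
load = {c: LpVariable(f"load_{c}", lowBound=0) for c in chillers}

# Objective: annual total cost = capital (if built) + operating (per kW delivered).
prob += lpSum(chillers[c][1] * build[c] + chillers[c][2] * load[c] for c in chillers)

# A chiller can only deliver load if built, and never above its capacity.
for c, (cap, _, _) in chillers.items():
    prob += load[c] <= cap * build[c]

# End-user cooling demand must be satisfied.
prob += lpSum(load.values()) >= demand

prob.solve()
print(LpStatus[prob.status], {c: load[c].value() for c in chillers})
```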

Keywords: annual cooling demand, compression chiller, mathematical modeling, district cooling systems, optimization

Procedia PDF Downloads 201
25388 Comprehensive Analysis of the RNA m5C Regulator ALYREF as a Suppressive Factor of Anti-Tumor Immunity and a Potential Tumor Prognostic Marker in Pan-Cancer

Authors: Yujie Yuan, Yiyang Fan, Hong Fan

Abstract:

Objective: The RNA methylation recognition protein Aly/REF export factor (ALYREF) is a "reader" protein that recognizes m5C and has been reported to be involved in several biological processes, including cancer initiation and progression. 5-methylcytosine (m5C) is a conserved and prevalent RNA modification in all species, and accumulating evidence suggests a role for it in promoting tumorigenesis. ALYREF has been reported to mediate the nuclear export of mRNA with m5C modification and to regulate biological effects in cancer cells. However, the systematic regulatory pathways of ALYREF in cancer tissues have not yet been clarified. Methods: The expression level of ALYREF in pan-cancer and corresponding normal tissues was compared using data acquired from The Cancer Genome Atlas (TCGA). The University of Alabama at Birmingham Cancer Data Analysis Portal (UALCAN) was used to analyze the relationship between ALYREF and clinicopathological features. The relationship between ALYREF expression and prognosis in pan-cancer, and the genes correlated with ALYREF, were determined using the Gene Expression Profiling Interactive Analysis (GEPIA) database. Immune-related genes were obtained from TISIDB (an integrated repository portal for tumor-immune system interactions). Immune-related analyses were conducted using ESTIMATE (Estimation of STromal and Immune cells in MAlignant Tumor tissues using Expression data) and TIMER. Results: Based on the TCGA data, ALYREF shows a markedly higher expression level in various types of cancer compared with the relevant normal tissues, excluding thyroid carcinoma and kidney chromophobe. Immunohistochemical images from The Human Protein Atlas show that ALYREF can be detected in the cytoplasm and membrane but is mainly located in the nucleus. In addition, higher expression of ALYREF in tumor tissue predicts a poor prognosis in the majority of cancers. Based on these results, cancers with higher ALYREF expression than normal tissue and a significant correlation between ALYREF and prognosis were selected for further analysis. Using TISIDB, we found that a portion of the ALYREF co-expressed genes with high Pearson correlation coefficients (PCC), such as BIRC5, H2AFZ, CCDC137, TK1, and PPM1G, are involved in anti-tumor immunity or affect resistance or sensitivity to T cell-mediated killing. Furthermore, the results from GEPIA show a significant correlation between ALYREF and PD-L1, and the expression level of ALYREF correlates negatively with the ESTIMATE score. Conclusion: The present study indicates that ALYREF plays a vital and universal role in cancer initiation and progression across cancers by regulating mitotic progression, DNA synthesis, metabolic processes, and RNA processing. The correlation between ALYREF and PD-L1 implies that ALYREF may affect the therapeutic effect of tumor immunotherapy, and further evidence suggests that ALYREF may play an important role in tumor immunomodulation. The correlation between ALYREF and immune cell infiltration levels indicates that ALYREF could be a potential therapeutic target. Exploring the regulatory mechanism of ALYREF in tumor tissues may reveal reasons for the poor efficacy of immunotherapy and offer further directions for tumor treatment.

Keywords: ALYREF, pan-cancer, immunotherapy, PD-L1

Procedia PDF Downloads 71
25387 Measurement of Acoustic Loss in Nano-Layered Coating Developed for Thermal Noise Reduction

Authors: E. Cesarini, M. Lorenzini, R. Cardarelli, S. Chao, E. Coccia, V. Fafone, Y. Minenkow, I. Nardecchia, I. M. Pinto, A. Rocchi, V. Sequino, C. Taranto

Abstract:

Structural relaxation processes in optical coatings represent a fundamental limit to the sensitivity of gravitational wave detectors, MEMS, optical metrology, and entangled-state experiments. To address this problem, many research lines are now active, in particular the characterization of new materials and novel solutions to be employed as coatings in future gravitational wave detectors. Nano-layered coating deposition is among the most promising techniques. We report on measurements of the acoustic loss of nm-layered composites (TiO2/SiO2), performed with the GeNS nodal suspension, compared with the sputtered λ/4 thin films employed today.

Keywords: mechanical measurement, nanomaterials, optical coating, thermal noise

Procedia PDF Downloads 423
25386 Analysis of Delivery of Quad Play Services

Authors: Rahul Malhotra, Anurag Sharma

Abstract:

Fiber-based access networks can deliver performance that supports the increasing demand for high-speed connections. One of the technologies that has emerged in recent years is the Passive Optical Network (PON). This paper demonstrates the simultaneous delivery of triple-play services (data, voice, and video). A comparative investigation of the suitability of various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases due to the increase in bit error rate.

Keywords: FTTH, quad play, play service, access networks, data rate

Procedia PDF Downloads 414
25385 Isogeometric Topology Optimization in Cracked Structures Design

Authors: Dongkyu Lee, Thanh Banh Thien, Soomi Shin

Abstract:

In the present study, isogeometric topology optimization is proposed for cracked structures, using Solid Isotropic Material with Penalization (SIMP) as the design model. Design density variables defined in the design variable space are used to approximate the element analysis density via bivariate B-spline basis functions. The mathematical formulation of the topology optimization problem, which minimizes structural compliance, is an alternating active-phase algorithm with a Gauss-Seidel version as the optimization model of the optimality criteria. Stiffness and adjoint sensitivity formulations linked to the strain energy of the cracked structure are proposed in terms of the design density variables. Numerical examples demonstrate the interaction of topology optimization with the design of structures containing cracks.
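
For readers unfamiliar with SIMP-style optimality-criteria updates, the following minimal density-update sketch (adapted from the structure of the well-known 99-line topology optimization code, not from the authors' isogeometric implementation) shows the bisection on the Lagrange multiplier that enforces the volume constraint.

```python
import numpy as np

def oc_update(x, dc, volfrac, move=0.2, xmin=1e-3):
    """Optimality-criteria update of SIMP design densities.
    x: current densities, dc: compliance sensitivities (dc <= 0),
    volfrac: target volume fraction, enforced via bisection on the multiplier."""
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-4:
        lmid = 0.5 * (l1 + l2)
        # Heuristic fixed-point step, clipped by move limits and box bounds.
        xnew = x * np.sqrt(np.maximum(-dc, 0) / lmid)
        xnew = np.clip(xnew, np.maximum(x - move, xmin), np.minimum(x + move, 1.0))
        if xnew.mean() > volfrac:   # too much material -> raise the multiplier
            l1 = lmid
        else:
            l2 = lmid
    return xnew

# Toy usage with made-up sensitivities on a 4-element design.
x = np.full(4, 0.5)
dc = np.array([-8.0, -2.0, -1.0, -0.5])
print(oc_update(x, dc, volfrac=0.5))  # material migrates to high-sensitivity elements
```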

Keywords: topology optimization, isogeometric, NURBS, design

Procedia PDF Downloads 492
25384 Psychological Sense of School Membership and Coping Ability as Predictors of Multidimensional Life Satisfaction among School Children

Authors: Mary Banke Iyabo Omoniyi

Abstract:

Children in developing countries face complex social, economic, political, and environmental contexts that create a wide range of challenges for them to surmount as they journey through school from childhood to adolescence. Many of these children have little or no personal resources and social support to confront these challenges. This study employed a descriptive survey research design to investigate psychological sense of school membership and coping skills as they relate to the multidimensional life satisfaction of school children. The sample consists of 835 school children aged 7-11 years who were randomly selected from twenty schools in Ondo State, Nigeria. The instrument for data collection was a questionnaire consisting of four sections: A, B, C, and D. Section A contained items on the children's bio-data (age, school, father's and mother's educational qualifications). Section B is the Multidimensional Children Life Satisfaction Questionnaire (MCLSQ), a 20-item Likert-type scale with responses ranging from Never = 1 to Almost always = 4, designed to profile children's satisfaction with important domains (school, family, and friends). Section C is the Psychological Sense of School Membership Questionnaire (PSSMQ), with 18 items and responses ranging from Not at all true = 1 to Completely true = 5. Section D is the Self-Report Coping Questionnaire (SRCQ), with 16 items and responses ranging from Never = 1 to Always = 5. The instrument has a test-retest reliability coefficient of r = 0.87, while the sectional reliabilities for the MCLSQ, PSSMQ, and SRCQ are 0.86, 0.92, and 0.89, respectively. The results indicated that self-reported coping skill was significantly correlated with multidimensional life satisfaction (r = 0.592; p < 0.05). However, the correlation between multidimensional life satisfaction and psychological sense of school membership was not significant (r = 0.038; p > 0.05). The regression analysis indicated that the contributions of mother's education and father's education to the children's psychological sense of school membership were R = 0.923 (adjusted R² = 0.440) and R = 0.730 (adjusted R² = 0.446), respectively. The results also indicate that the contribution of gender to psychological sense of school membership was R = 0.782 (adjusted R² = 0.478) for males and R = 0.998 (adjusted R² = 0.932) for females. In conclusion, mother's educational qualification was found to contribute more than father's to children's psychological sense of membership and multidimensional life satisfaction, and girls were found to have a greater sense of belonging to the school setting than boys. The counselling implications and recommendations, among others, were geared towards positive emotional gender sensitivity with regard to males. Education stakeholders are also encouraged to make the school environment more conducive and gender-friendly.

Keywords: multidimensional life satisfaction, psychological sense of school, coping skills, counselling implications

Procedia PDF Downloads 310
25383 Classification of Manufacturing Data for Efficient Processing on an Edge-Cloud Network

Authors: Onyedikachi Ulelu, Andrew P. Longstaff, Simon Fletcher, Simon Parkinson

Abstract:

The widespread interest in 'Industry 4.0' or 'digital manufacturing' has led to significant research requiring the acquisition of data from sensors, instruments, and machine signals. In-depth research then identifies methods of analysing the massive amounts of data generated before and during manufacture to solve a particular problem. The ultimate goal is for industrial Internet of Things (IIoT) data to be processed automatically to assist with either visualisation or autonomous system decision-making. However, the collection and processing of data in an industrial environment come at a cost, and little research has been undertaken on how to specify optimally what data to capture, transmit, process, and store at the various levels of an edge-cloud network. The first step in this specification is to categorise IIoT data for efficient and effective use. This paper proposes the attributes and classification required to take manufacturing digital data from various sources and determine the most suitable location for processing on the edge-cloud network. The proposed classification framework minimises overhead in terms of network bandwidth/cost and processing time of machine tool data via efficient decisions on which datasets should be processed at the 'edge' and which should be sent to a remote server (cloud). A fast-and-frugal heuristic method is implemented for this decision-making. The framework is tested using case studies from industrial machine tools for machine productivity and maintenance.
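
A fast-and-frugal tree asks a few cheap questions in a fixed order, each with one immediate exit. A minimal sketch of how such a heuristic might route a dataset to edge or cloud follows; the attributes and thresholds are ours, chosen only to illustrate the decision structure, not the paper's validated framework.

```python
from dataclasses import dataclass

@dataclass
class DatasetProfile:
    latency_critical: bool   # is the result needed for real-time machine control?
    size_mb: float           # payload size to transmit
    needs_history: bool      # does analysis require long-term archived data?

def route(d: DatasetProfile) -> str:
    """Fast-and-frugal tree: ordered cues, each with an immediate exit."""
    if d.latency_critical:
        return "edge"        # exit 1: round-trip to the cloud is too slow
    if d.size_mb > 50:
        return "edge"        # exit 2: transmission cost/bandwidth prohibitive
    if d.needs_history:
        return "cloud"       # exit 3: archive and cross-machine context live remotely
    return "edge"            # default: process locally, send summaries upward

print(route(DatasetProfile(False, 120.0, True)))   # -> edge (too large to ship)
print(route(DatasetProfile(False, 5.0, True)))     # -> cloud
```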

Keywords: data classification, decision making, edge computing, industrial IoT, industry 4.0

Procedia PDF Downloads 180
25382 Denoising Transient Electromagnetic Data

Authors: Lingerew Nebere Kassie, Ping-Yu Chang, Hsin-Hua Huang, Chaw-Son Chen

Abstract:

Transient electromagnetic (TEM) data play a crucial role in hydrogeological and environmental applications, providing valuable insights into geological structures and resistivity variations. However, the presence of noise often hinders the interpretation and reliability of these data. Our study addresses this issue by using the FASTSNAP system for the TEM survey, which operates in different modes (low, medium, and high) with continuous adjustments to discretization, gain, and current. We employ a denoising approach that processes the raw data obtained from each acquisition mode to improve signal quality and enhance data reliability. We use a signal-averaging technique for each mode, increasing the signal-to-noise ratio, and we additionally use the wavelet transform to suppress noise further while preserving the integrity of the underlying signals. This approach significantly improves data quality, notably suppressing severe noise at late times. The resulting denoised data exhibit a substantially improved signal-to-noise ratio, leading to increased accuracy in parameter estimation. By effectively denoising TEM data, our study contributes to more reliable interpretation and analysis of underground structures. Moreover, the proposed denoising approach can be seamlessly integrated into existing ground-based TEM data processing workflows, facilitating the extraction of meaningful information from noisy measurements and enhancing the overall quality and reliability of the acquired data.
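
A minimal sketch of the two denoising steps described above, using NumPy for stacking/averaging repeated transients and PyWavelets for soft-threshold wavelet denoising (our illustration; the survey's actual parameters and wavelet choice are not specified here):

```python
import numpy as np
import pywt

def denoise_tem(stack, wavelet="db4", level=4):
    """stack: 2-D array (n_repeats, n_samples) of repeated TEM decay curves."""
    # Step 1: signal averaging across repeats raises SNR by ~sqrt(n_repeats).
    avg = stack.mean(axis=0)

    # Step 2: wavelet shrinkage to suppress residual noise, esp. at late times.
    coeffs = pywt.wavedec(avg, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate (MAD)
    thresh = sigma * np.sqrt(2 * np.log(len(avg)))       # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(avg)]

# Hypothetical data: an exponential decay buried in noise, 64 repeats.
t = np.linspace(0, 1, 512)
stack = np.exp(-6 * t) + 0.3 * np.random.randn(64, 512)
clean = denoise_tem(stack)
```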

Keywords: data quality, signal averaging, transient electromagnetic, wavelet transform

Procedia PDF Downloads 85
25381 Attribute Analysis of Quick Response Code Payment Users Using Discriminant Non-negative Matrix Factorization

Authors: Hironori Karachi, Haruka Yamashita

Abstract:

Recently, quick response (QR) code payment systems have become popular. Many companies have introduced QR code payment services, and the services compete with each other to increase their number of users. To increase the number of users, we should grasp the differences in the demographic information, usage information, and value of users between services. In this study, we analyze real-world data provided by the Nomura Research Institute, including demographic data and usage information for two services: LINE Pay and PayPay. Non-negative Matrix Factorization (NMF) is widely used for analyzing and interpreting such matrix-shaped data; however, the target data contain missing values. EM-algorithm NMF (EMNMF) is used to complete the unknown values and understand the features of the given data. Moreover, to compare the results of NMF analyses of two matrices, Discriminant NMF (DNMF) shows the differences in user features between the two matrices. In this study, we combine EMNMF and DNMF and analyze the target data. As the interpretation, we show the differences in user features between LINE Pay and PayPay.
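
The missing-data handling can be sketched with masked multiplicative updates, an EM-flavoured variant we write purely for illustration (the authors' EMNMF and DNMF details go beyond this): unobserved entries simply receive zero weight in the factorization.

```python
import numpy as np

def masked_nmf(V, mask, rank=3, iters=500, eps=1e-9):
    """Factorize V ~= W @ H using only observed entries (mask == 1).
    Multiplicative updates weighted by the mask ignore missing cells."""
    n, m = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    Vz = np.where(mask, V, 0.0)            # zero-fill missing entries
    for _ in range(iters):
        WH = W @ H
        H *= (W.T @ Vz) / (W.T @ (mask * WH) + eps)
        WH = W @ H
        W *= (Vz @ H.T) / ((mask * WH) @ H.T + eps)
    return W, H

# Hypothetical users-by-features usage matrix with ~30% missing values.
rng = np.random.default_rng(1)
V = rng.random((20, 8))
mask = (rng.random(V.shape) > 0.3).astype(float)
W, H = masked_nmf(V, mask)
print(np.abs((W @ H - V) * mask).mean())   # reconstruction error on observed cells
```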

Keywords: data science, non-negative matrix factorization, missing data, quality of services

Procedia PDF Downloads 131
25380 Developing Guidelines for Public Health Nurse Data Management and Use in Public Health Emergencies

Authors: Margaret S. Wright

Abstract:

Background/Significance: During many recent public health emergencies/disasters, public health nursing data have been missing or delayed, potentially impacting decision-making and response. Data used as evidence for decision-making in response, planning, and mitigation have been erratic and slow to arrive, decreasing the ability to respond. Methodology: Applying best practices in data management and data use in public health settings, guided by the concepts outlined in 'Disaster Standards of Care' models, leads to recommendations for a model of best practices in data management and use by public health nurses in public health disasters/emergencies. As the 'patient' in public health disasters/emergencies is the community (local, regional, or national), guidelines for patient documentation are incorporated into the recommendations. Findings: Using this model, public health nurses could better plan how to prepare for, respond to, and mitigate disasters in their communities, and better participate in decision-making in all three phases, bringing public health nursing data to the discussion as part of the evidence base for decision-making.

Keywords: data management, decision making, disaster planning documentation, public health nursing

Procedia PDF Downloads 221
25379 A New Prediction Model for Soil Compression Index

Authors: D. Mohammadzadeh S., J. Bolouri Bazaz

Abstract:

This paper presents a new prediction model for the compression index of fine-grained soils using the multi-gene genetic programming (MGGP) technique. The proposed model relates the soil compression index to its liquid limit, plastic limit, and void ratio. Several laboratory test results for fine-grained soils were used to develop the model. Various criteria were considered to check the validity of the model, and parametric and sensitivity analyses were performed and discussed. The MGGP method was found to be very effective for predicting the soil compression index. A comparative study further demonstrated the superiority of the MGGP model over existing soft-computing models and traditional empirical equations.
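
One of the "traditional empirical equations" such models are typically benchmarked against is the classic Terzaghi and Peck correlation for normally consolidated clays, Cc = 0.009(LL - 10), which uses the liquid limit alone; a trivial sketch (our example, not the paper's MGGP model):

```python
def cc_terzaghi_peck(liquid_limit_percent: float) -> float:
    """Classic single-variable correlation for normally consolidated clays:
    Cc = 0.009 * (LL - 10). MGGP-type models add plastic limit and void ratio."""
    return 0.009 * (liquid_limit_percent - 10.0)

for ll in (30, 50, 80):
    print(f"LL = {ll}%:  Cc ~ {cc_terzaghi_peck(ll):.3f}")
```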

Keywords: prediction model, soil compression index, multi-gene genetic programming, MGGP

Procedia PDF Downloads 375
25378 Coordinated Voltage Control in a Radial Distribution System

Authors: Shivarudraswamy, Anubhav Shrivastava, Lakshya Bhat

Abstract:

Distributed generation has become a major area of interest in recent years. Distributed generation can serve a large number of loads on a power line and hence offers better efficiency than conventional methods. However, it has certain drawbacks, the increase in voltage being the major one. This paper addresses voltage control at the buses of an IEEE 30-bus system by regulating reactive power. For the analysis, suitable locations for placing distributed generators (DGs) are identified through load flow analysis, by observing where the voltage profile dips. MATLAB programming is used to regulate the voltage at all buses to within ±5% of the base value even after the introduction of DGs. Three methods for voltage regulation are discussed, and a sensitivity-based analysis is then carried out to determine the priority among the methods listed in the paper.

Keywords: distributed generators, distributed system, reactive power, voltage control

Procedia PDF Downloads 500
25377 Highly Sensitive, Low-Cost Oxygen Gas Sensor Based on ZnO Nanoparticles

Authors: Xin Chang, Daping Chu

Abstract:

Oxygen gas sensing technology has progressed since the last century and has been used extensively in a wide range of applications, such as controlling the combustion process by sensing the oxygen level in automobile exhaust gas to ensure the catalytic converter is in good working condition. Similar sensors are also used in industrial boilers to make the combustion process economical and environmentally friendly. Different gas sensing mechanisms have been developed: ceramic-based potentiometric equilibrium sensors and semiconductor-based sensors relying on oxygen absorption. In this work, we present a highly sensitive, low-cost oxygen gas sensor based on zinc oxide nanoparticles (average particle size of 35 nm) dispersed in ethanol. The sensor is able to measure pressures from 10³ mbar down to 10⁻⁵ mbar with a sensitivity of more than 10² mA/bar. The sensor can also be reset (erased) by heating.

Keywords: nanoparticles, oxygen, sensor, ZnO

Procedia PDF Downloads 137
25376 An Embarrassingly Simple Semi-supervised Approach to Increase Recall in Online Shopping Domain to Match Structured Data with Unstructured Data

Authors: Sachin Nagargoje

Abstract:

Complete labeled data is often difficult to obtain in practical scenarios, and even when one manages to obtain data, its quality is always in question. In the shopping vertical, offers are the input data, provided by advertisers with or without good-quality information. In this paper, the author investigates the possibility of using a very simple semi-supervised learning approach to increase the recall of unhealthy offers (those with badly written offer titles or partial product details) in the shopping vertical. The semi-supervised method improved recall in the smartphone category by 30% in A/B testing on 10% of traffic and increased the year-over-year (YoY) number of impressions per month by 33% in production. It also led to a significant increase in revenue, which cannot be publicly disclosed.
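
An "embarrassingly simple" semi-supervised recipe of this kind is self-training with pseudo-labels. A generic sketch with scikit-learn follows (our reconstruction of the idea, not the paper's production system); the feature matrices are random placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, confidence=0.9, rounds=5):
    """Iteratively pseudo-label confident unlabeled offers and retrain.
    Aimed at recall: each round adds positives the seed model missed."""
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        proba = clf.predict_proba(pool)
        conf = proba.max(axis=1)
        sure = conf >= confidence                # keep only high-confidence guesses
        if not sure.any():
            break
        X = np.vstack([X, pool[sure]])
        y = np.concatenate([y, proba[sure].argmax(axis=1)])
        pool = pool[~sure]                       # shrink the unlabeled pool
    return LogisticRegression(max_iter=1000).fit(X, y)

# Hypothetical offer feature vectors: few labeled, many unlabeled.
rng = np.random.default_rng(0)
X_lab, y_lab = rng.random((40, 5)), rng.integers(0, 2, 40)
model = self_train(X_lab, y_lab, rng.random((400, 5)))
```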

Keywords: semi-supervised learning, clustering, recall, coverage

Procedia PDF Downloads 122
25375 Genodata: The Human Genome Variation Using BigData

Authors: Surabhi Maiti, Prajakta Tamhankar, Prachi Uttam Mehta

Abstract:

Since the completion of the Human Genome Project, there has been an unparalleled escalation in the sequencing of genomic data. The project was a major vault forward in medical research, especially in genomics, and it earned accolades by using the concept of big data, which had earlier been used extensively to generate value for business. Big data involves data sets, typically files of terabytes, petabytes, or exabytes in size, that were traditionally managed using spreadsheets and RDBMSs. The volume of data made this process tedious and time-consuming, and hence a stronger framework, Hadoop, was introduced in the genetic sciences to make data processing faster and more efficient. This paper focuses on using Spark, which is gaining momentum with the advancement of big data technologies. Cloud storage is an effective medium for storing the large data sets generated by genetic research and the result sets produced by Spark analysis.
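
A minimal PySpark sketch in the spirit of the paper (our illustration): read variant records from cloud storage and count them per chromosome. The bucket path and the VCF-like schema are assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("genodata-variants").getOrCreate()

# Hypothetical VCF-like text data in cloud storage; '#' lines are headers.
lines = spark.read.text("gs://genodata-bucket/variants/sample.vcf")
records = lines.filter(~F.col("value").startswith("#"))

# Column 1 of a VCF row is the chromosome; count variants per chromosome.
variants = records.select(F.split("value", "\t").getItem(0).alias("chrom"))
counts = variants.groupBy("chrom").count().orderBy(F.desc("count"))

counts.show()
spark.stop()
```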

Keywords: human genome project, Bigdata, genomic data, SPARK, cloud storage, Hadoop

Procedia PDF Downloads 259
25374 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses

Authors: Neil Bar, Andrew Heweston

Abstract:

Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation, and open pit mine designs. However, design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a 'reasonable' PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to calculate PF automatically. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, the shear strength of geologic structure, and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation, and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated 'approximately' or with allowances for some variability rather than 'exactly'. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e., as exact values), while others have probabilistic inputs based on the user's discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods and yield markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
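
The Monte-Carlo route to PF can be shown in a few lines: sample the uncertain strength parameters, compute a factor of safety for each realization, and take PF as the fraction of realizations with FS < 1. The planar-failure FS expression and the distributions below are textbook simplifications for illustration, not the case study's model.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Uncertain inputs (hypothetical distributions for a planar sliding block).
cohesion = rng.normal(25.0, 5.0, N)          # kPa
phi = np.radians(rng.normal(33.0, 3.0, N))   # friction angle
weight, slope = 500.0, np.radians(45.0)      # block weight (kN/m), slope angle

# Simple planar limit-equilibrium FS: resisting / driving forces.
area = 10.0                                   # sliding surface area per metre (m^2)
resisting = cohesion * area + weight * np.cos(slope) * np.tan(phi)
driving = weight * np.sin(slope)
fs = resisting / driving

pf = np.mean(fs < 1.0)                        # PF = fraction of realizations failing
print(f"mean FS = {fs.mean():.2f}, PF = {pf:.2%}")
```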

Keywords: probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability

Procedia PDF Downloads 208
25373 Ontology for a Voice Transcription of OpenStreetMap Data: The Case of Space Apprehension by Visually Impaired Persons

Authors: Said Boularouk, Didier Josselin, Eitan Altman

Abstract:

In this paper, we present a vocal ontology of OpenStreetMap data for the apprehension of space by visually impaired people. The platform, based on produsage, gives data producers the freedom to choose the descriptors of geocoded locations. Unfortunately, this freedom, also called folksonomy, complicates subsequent searches of the data. We try to solve this issue with a simple but usable method for extracting data from OSM databases and delivering them to visually impaired people using text-to-speech (TTS) technology. We focus on how to help people with visual disabilities plan their itinerary, comprehend a map by querying the computer, and get information about the surrounding environment in a mono-modal human-computer dialogue.
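
A bare-bones illustration of the pipeline (ours, not the authors' ontology-driven system): fetch nearby named points of interest from OpenStreetMap via the Overpass API and speak them with an off-the-shelf TTS engine. The coordinates and radius are placeholders.

```python
import requests
import pyttsx3

def nearby_amenities(lat, lon, radius_m=300):
    """Query OpenStreetMap's Overpass API for named amenities around a point."""
    query = f"""
    [out:json];
    node(around:{radius_m},{lat},{lon})[amenity][name];
    out;
    """
    resp = requests.post("https://overpass-api.de/api/interpreter",
                         data={"data": query}, timeout=30)
    resp.raise_for_status()
    return [el["tags"]["name"] for el in resp.json()["elements"]]

def speak(sentences):
    engine = pyttsx3.init()
    for s in sentences:
        engine.say(s)
    engine.runAndWait()

names = nearby_amenities(48.858, 2.294)       # placeholder coordinates
speak([f"Around you there is {n}." for n in names[:5]])
```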

Keywords: TTS, ontology, OpenStreetMap, visually impaired

Procedia PDF Downloads 295
25372 Design of a Compact Herriott Cell for Heat Flux Measurement Applications

Authors: R. G. Ramírez-Chavarría, C. Sánchez-Pérez, V. Argueta-Díaz

Abstract:

In this paper, we present the design of an optical device based on a Herriott multi-pass cell fabricated on a small acrylic slab for heat flux measurements, using the deflection of a laser beam propagating inside the cell. The beam deflection is produced by the heat flux conducted into the acrylic slab, which creates a gradient in the refractive index. The use of a long-path cell as the sensitive element of this measurement device gives the possibility of high sensitivity within a small device. We present the optical design as well as experimental results that validate the device's operating principle.

Keywords: heat flux, Herriott cell, optical beam deflection, thermal conductivity

Procedia PDF Downloads 656
25371 Design and Development of a Platform for Analyzing Spatio-Temporal Data from Wireless Sensor Networks

Authors: Walid Fantazi

Abstract:

The development of sensor technology (such as microelectromechanical systems (MEMS), wireless communications, embedded systems, distributed processing, and wireless sensor applications) has contributed to a broad range of WSN applications capable of collecting large amounts of spatiotemporal data in real time. These systems require real-time processing to manage storage and to query the data as they arrive. To cover these needs, we propose in this paper a snapshot spatiotemporal data model based on object-oriented concepts. This model enables efficient storage and reduces data redundancy, which makes it easier to execute spatiotemporal queries and saves analysis time. Further, to ensure the robustness of the system and to eliminate congestion in main memory, we propose an in-RAM spatiotemporal indexing technique called Captree*. As a result, we offer an RIA (Rich Internet Application)-based SOA application architecture which allows remote monitoring and control.

Keywords: WSN, indexing data, SOA, RIA, geographic information system

Procedia PDF Downloads 253