Search results for: data standardization
24945 Outlier Detection in Stock Market Data Using Tukey Method and Wavelet Transform
Authors: Sadam Alwadi
Abstract:
Outliers are a problem that frequently arises in the data observation and recording process, making data imputation an essential task. This work uses the methods described in prior work to detect outlier values in a collection of stock market data. In order to implement the detection and find solutions that may be helpful for investors, real closing-price data were obtained from the Amman Stock Exchange (ASE). The Tukey and Maximum Overlapping Discrete Wavelet Transform (MODWT) methods are used to detect and impute the outlier values.
Keywords: outlier values, imputation, stock market data, detecting, estimation
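As a rough illustration of the Tukey step, the sketch below applies the classic Tukey (IQR) fence rule to a price series and imputes flagged points with the median of the remaining data; the fence multiplier k = 1.5 and the median-based imputation are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

def tukey_outliers(prices, k=1.5):
    """Flag points outside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(prices, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return (prices < lower) | (prices > upper)

prices = np.array([10.1, 10.3, 10.2, 25.0, 10.4, 10.2, 0.5, 10.3])
mask = tukey_outliers(prices)
# A simple imputation: replace flagged values with the median of the clean points
imputed = np.where(mask, np.median(prices[~mask]), prices)
```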
Procedia PDF Downloads 81
24944 PEINS: A Generic Compression Scheme Using Probabilistic Encoding and Irrational Number Storage
Authors: P. Jayashree, S. Rajkumar
Abstract:
With social networks and smart devices generating a multitude of data, effective data management is the need of the hour for networks and cloud applications. Some applications need effective storage, while others need effective communication over networks, and data reduction comes as a handy solution to meet both requirements. Most data compression techniques are based on data statistics and may result in either lossy or lossless data reduction. Though lossy reduction produces better compression ratios than lossless methods, many applications require data accuracy and minute details to be preserved. A variety of data compression algorithms exist in the literature for different forms of data, such as text, image, and multimedia data. In the proposed work, a generic progressive compression algorithm based on probabilistic encoding, called PEINS, is presented as an enhancement over the irrational-number storage coding technique to address the storage issues of increasing data volumes in a cost-effective way; it also offers data security as a secondary outcome to some extent. The proposed work demonstrates cost effectiveness in terms of a better compression ratio with no deterioration in compression time.
Keywords: compression ratio, generic compression, irrational number storage, probabilistic encoding
Procedia PDF Downloads 294
24943 IoT Device Cost-Effective Storage Architecture and Real-Time Data Analysis/Data Privacy Framework
Authors: Femi Elegbeleye, Omobayo Esan, Muienge Mbodila, Patrick Bowe
Abstract:
This paper focuses on a cost-effective storage architecture using a fog and cloud data storage gateway and presents the design of a framework for a data privacy model and for real-time data analytics using machine learning methods. The paper begins with the system analysis, the system architecture and its component design, and the overall system operations. The results obtained on the data privacy model show that combining two or more privacy models yields stronger protection of the data, and that a fog storage gateway has several advantages over traditional cloud storage: our results show that fog storage reduces latency/delay, bandwidth consumption, and energy usage compared with cloud storage, and will therefore help to lessen excessive cost. The paper dwells mainly on the system descriptions; the researchers focus on the research design and the framework design for the data privacy model, data storage, and real-time analytics. The paper also presents the major system components and their framework specifications. Lastly, the overall research system architecture is shown, together with its structure and interrelationships.
Keywords: IoT, fog, cloud, data analysis, data privacy
Procedia PDF Downloads 99
24942 Comparison of Selected Pier-Scour Equations for Wide Piers Using Field Data
Authors: Nordila Ahmad, Thamer Mohammad, Bruce W. Melville, Zuliziana Suif
Abstract:
Current methods for predicting local scour at wide bridge piers were developed on the basis of laboratory studies, and very few scour predictions have been tested against field data. A laboratory wide-pier scour equation from previous findings is compared here with field data. A wide range of field data was used, consisting of both live-bed and clear-water scour. A method for assessing the quality of the data was developed and applied to the data set. Three other wide-pier scour equations from the literature were used to compare the performance of each predictive method, and the best-performing scour equation was analyzed statistically. Comparisons of computed and observed scour depths indicate that the equation from the previous publication produced the smallest discrepancy ratio and RMSE value when compared with the large body of laboratory and field data.
Keywords: field data, local scour, scour equation, wide piers
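For concreteness, a minimal sketch of the two goodness-of-fit measures named above is given below. Note that the discrepancy ratio has several definitions in the scour literature (the mean of log10(computed/observed) is used here as one common choice), and the depth values are invented for illustration.

```python
import numpy as np

def rmse(observed, computed):
    """Root-mean-square error between observed and computed scour depths."""
    return np.sqrt(np.mean((np.asarray(observed) - np.asarray(computed)) ** 2))

def discrepancy_ratio(observed, computed):
    """Mean log10(computed/observed); 0 means perfect agreement on average."""
    return np.mean(np.log10(np.asarray(computed) / np.asarray(observed)))

observed = [1.2, 0.8, 2.1, 1.5]   # field scour depths (m), illustrative
computed = [1.4, 0.7, 2.4, 1.6]   # depths predicted by a scour equation (m)
print(rmse(observed, computed), discrepancy_ratio(observed, computed))
```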
Procedia PDF Downloads 413
24941 The Maximum Throughput Analysis of UAV Datalink 802.11b Protocol
Authors: Inkyu Kim, SangMan Moon
Abstract:
The IEEE 802.11b protocol provides data rates of up to 11 Mbps, whereas the aerospace industry seeks higher-data-rate COTS data link systems for UAVs. The total maximum throughput (TMT) and delay time have been studied by many researchers in past years. This paper provides the theoretical data throughput performance of a UAV formation-flight data link using existing 802.11b performance theory. We operate the UAV formation flight with more than 30 quadcopters using the 802.11b protocol. We predict that the number of UAVs in a formation flight is bounded by the performance limitations of the data link protocol.
Keywords: UAV datalink, UAV formation flight datalink, UAV WLAN datalink application, UAV IEEE 802.11b datalink application
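As a rough sketch of why the usable throughput sits well below the 11 Mbps PHY rate, the calculation below accounts for the fixed per-frame overheads (DIFS, mean backoff, PLCP preamble, SIFS, ACK). All parameter values are illustrative textbook figures for 802.11b DSSS with a long preamble and an ideal single-sender channel, not numbers from the paper.

```python
# Illustrative 802.11b DSSS parameters (long preamble, no RTS/CTS); assumptions only.
SLOT, SIFS, DIFS = 20e-6, 10e-6, 50e-6
PREAMBLE = 192e-6                     # PLCP preamble + header at 1 Mbps
RATE = 11e6                           # payload data rate (bps)
ACK_TIME = PREAMBLE + 14 * 8 / 2e6    # ACK frame at an assumed 2 Mbps basic rate
MAC_OVERHEAD = 28 * 8                 # MAC header + FCS (bits)
AVG_BACKOFF = (31 / 2) * SLOT         # mean initial backoff, CWmin = 31

def tmt(payload_bytes):
    """Theoretical maximum throughput for one sender on an ideal channel."""
    data_time = PREAMBLE + (MAC_OVERHEAD + payload_bytes * 8) / RATE
    cycle = DIFS + AVG_BACKOFF + data_time + SIFS + ACK_TIME
    return payload_bytes * 8 / cycle

print(tmt(1500) / 1e6, "Mbps")   # roughly 6 Mbps, well below the 11 Mbps PHY rate
```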
Procedia PDF Downloads 392
24940 Methods for Distinction of Cattle Using Supervised Learning
Authors: Radoslav Židek, Veronika Šidlová, Radovan Kasarda, Birgit Fuerst-Waltl
Abstract:
Machine learning represents a set of topics dealing with the creation and evaluation of algorithms that facilitate pattern recognition, classification, and prediction based on models derived from existing data. The data can present identification patterns, which are used to classify observations into groups. The result of the analysis is a pattern that can be used to identify a data set without needing the input data used to create that pattern. An important requirement in this process is careful data preparation, validation of the model used, and its suitable interpretation. For breeders, it is important to know the origin of animals from the point of view of genetic diversity. In cases of missing pedigree information, other methods can be used for the traceability of an animal's origin. Genetic diversity recorded in genetic data holds relatively useful information for identifying animals originating from individual countries. We conclude that the application of data mining to molecular genetic data using supervised learning is an appropriate tool for hypothesis testing and for identifying an individual.
Keywords: genetic data, Pinzgau cattle, supervised learning, machine learning
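A minimal sketch of this kind of supervised origin classification is shown below. The SNP genotype matrix, the two country labels, and the random-forest choice are all illustrative assumptions (the abstract does not specify the classifier), and the synthetic data will score near chance.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical SNP genotypes (0/1/2 allele counts) for 120 animals at 50 markers
X = rng.integers(0, 3, size=(120, 50))
y = rng.choice(["Austria", "Slovakia"], size=120)   # country-of-origin labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))   # held-out accuracy for origin identification
```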
Procedia PDF Downloads 550
24939 Router 1X3 - RTL Design and Verification
Authors: Nidhi Gopal
Abstract:
Routing is the process of moving a packet of data from source to destination; it enables messages to pass from one computer to another and eventually reach the target machine. A router is a networking device that forwards data packets between computer networks. It is connected to two or more data lines from different networks (as opposed to a network switch, which connects data lines from one single network). This paper emphasizes the study of the router device and its top-level architecture, and shows how the various sub-modules of the router, i.e., register, FIFO, FSM, and synchronizer, are synthesized, simulated, and finally connected to the top module.
Keywords: data packets, networking, router, routing
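In verification flows, RTL sub-modules are often checked against simple behavioral reference models. The sketch below is one such hypothetical reference model for the FIFO sub-module, of the kind used as a scoreboard in a Python-based testbench; the depth, flag names, and error behavior are assumptions, not taken from the paper.

```python
from collections import deque

class FifoModel:
    """Behavioral reference model of a depth-limited synchronous FIFO."""
    def __init__(self, depth):
        self.depth = depth
        self.q = deque()

    @property
    def full(self):
        return len(self.q) == self.depth

    @property
    def empty(self):
        return len(self.q) == 0

    def write(self, word):
        if self.full:                       # mirrors the RTL 'full' flag
            raise OverflowError("write while full")
        self.q.append(word)

    def read(self):
        if self.empty:                      # mirrors the RTL 'empty' flag
            raise IndexError("read while empty")
        return self.q.popleft()
```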
Procedia PDF Downloads 812
24938 Noise Reduction in Web Data: A Learning Approach Based on Dynamic User Interests
Authors: Julius Onyancha, Valentina Plekhanova
Abstract:
One of the significant issues facing web users is the amount of noise in web data, which hinders the process of finding useful information related to their dynamic interests. Current research works consider noise to be any data that do not form part of the main web page and propose noise web data reduction tools that mainly focus on eliminating noise related to the content and layout of web data. This paper argues that not all data that form part of the main web page are of interest to a user, and not all noise data are actually noise to a given user. Therefore, learning the noise web data allocated to user requests ensures not only a reduction of the noisiness level in a web user profile but also a decrease in the loss of useful information, and hence improves the quality of the web user profile. A Noise Web Data Learning (NWDL) tool/algorithm capable of learning noise web data in a web user profile is proposed. The proposed work considers the elimination of noise data in relation to dynamic user interests. In order to validate the performance of the proposed work, an experimental design setup is presented. The results obtained are compared with current algorithms applied in the noise web data reduction process. The experimental results show that the proposed work considers the dynamic change of user interest prior to the elimination of noise data. The proposed work contributes towards improving the quality of a web user profile by reducing the amount of useful information eliminated as noise.
Keywords: web log data, web user profile, user interest, noise web data learning, machine learning
Procedia PDF Downloads 265
24937 Data Mining and Knowledge Management Application to Enhance Business Operations: An Exploratory Study
Authors: Zeba Mahmood
Abstract:
Modern business organizations are adopting technological advancements to achieve a competitive edge and satisfy their consumers. Developments in the field of information technology systems have changed the way business is conducted today. Business operations today rely more on the data they obtain, and this data is continuously increasing in volume. Data stored in different locations is difficult to find and use without the effective implementation of data mining and knowledge management techniques. Organizations that smartly identify, obtain, and then convert data into useful formats for decision making and operational improvements create additional value for their customers and enhance their operational capabilities. Marketing and customer relationship departments of firms use data mining techniques to make relevant decisions; this paper emphasizes the identification of the different data mining and knowledge management techniques that are applied in different business industries. The challenges and issues of executing these techniques are also discussed and critically analyzed.
Keywords: knowledge, knowledge management, knowledge discovery in databases, business, operational, information, data mining
Procedia PDF Downloads 538
24936 Indexing and Incremental Approach Using Map Reduce Bipartite Graph (MRBG) for Mining Evolving Big Data
Authors: Adarsh Shroff
Abstract:
Big data is a collection of data sets so large and complex that they become difficult to process using database management tools. Operations like search, analysis, and visualization are performed on big data using data mining, the process of extracting patterns or knowledge from large data sets. In recent years, it has been observed that the results of data mining applications become stale and obsolete over time. Incremental processing is a promising approach to refreshing mining results: it utilizes previously saved states to avoid the expense of re-computation from scratch. This project uses i2MapReduce, an incremental processing extension to MapReduce, the most widely used framework for mining big data. i2MapReduce performs key-value pair level incremental processing rather than task-level re-computation, supports not only one-step computation but also the more sophisticated iterative computation widely used in data mining applications, and incorporates a set of novel techniques to reduce the I/O overhead of accessing preserved fine-grain computation states. To evaluate the mining results, i2MapReduce is assessed using a one-step algorithm and three iterative algorithms with diverse computation characteristics.
Keywords: big data, map reduce, incremental processing, iterative computation
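The contrast between task-level re-computation and key-value pair level incremental processing can be pictured with a toy word-count job; the sketch below is a conceptual illustration only and does not use the i2MapReduce API.

```python
from collections import Counter

def initial_count(docs):
    """One-step MapReduce-style computation: word counts over all documents."""
    counts = Counter()
    for doc in docs.values():
        counts.update(doc.split())
    return counts

def incremental_update(counts, old_doc, new_doc):
    """Key-value pair level incremental processing: re-process only the delta."""
    counts.subtract(old_doc.split())   # retract the old document's contribution
    counts.update(new_doc.split())     # apply the new document's contribution
    return counts + Counter()          # '+ Counter()' drops non-positive counts

docs = {"d1": "big data mining", "d2": "incremental data processing"}
counts = initial_count(docs)
counts = incremental_update(counts, docs["d2"], "incremental big data processing")
```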
Procedia PDF Downloads 350
24935 Analyzing Large Scale Recurrent Event Data with a Divide-And-Conquer Approach
Authors: Jerry Q. Cheng
Abstract:
Currently, the analysis of large-scale recurrent event data faces many challenges, such as memory limitations and unscalable computing time. In this research, a divide-and-conquer method using parametric frailty models is proposed. Specifically, the data are randomly divided into many subsets, and the maximum likelihood estimator is obtained from each individual data set. A weighted method is then proposed to combine these individual estimators into the final estimator. It is shown that this divide-and-conquer estimator is asymptotically equivalent to the estimator based on the full data. Simulation studies are conducted to demonstrate the performance of the proposed method. The approach is applied to a large real dataset of repeated heart failure hospitalizations.
Keywords: big data analytics, divide-and-conquer, recurrent event data, statistical computing
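The general divide-estimate-combine pattern can be sketched as below; a simple exponential-scale MLE with inverse-variance weights stands in for the paper's parametric frailty models, so the model and weighting details are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=1_000_000)  # synthetic event gap times

# Split into K subsets, estimate on each, then combine with inverse-variance weights.
K = 10
estimates, weights = [], []
for subset in np.array_split(rng.permutation(data), K):
    mle = subset.mean()                        # MLE of the exponential scale
    var = subset.var(ddof=1) / len(subset)     # estimated variance of that MLE
    estimates.append(mle)
    weights.append(1.0 / var)

combined = np.average(estimates, weights=weights)  # divide-and-conquer estimator
```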
Procedia PDF Downloads 165
24934 Improving Sample Analysis and Interpretation Using QIAGEN's Latest Investigator STR Multiplex PCR Assays with a Novel Quality Sensor
Authors: Daniel Mueller, Melanie Breitbach, Stefan Cornelius, Sarah Pakulla-Dickel, Margaretha Koenig, Anke Prochnow, Mario Scherer
Abstract:
The European STR standard set (ESS) of loci, as well as the new expanded CODIS core loci set recommended by the CODIS Core Loci Working Group, has led to greater standardization and harmonization of STR analysis across borders. Various multiplex PCR assays have since been developed for the analysis of these 17 ESS or 23 CODIS-expansion STR markers, all of which meet high technical demands. However, forensic analysts are often faced with difficult STR results and the questions they raise. Why are no peaks visible in the electropherogram? Did the PCR fail? Was the DNA concentration too low? QIAGEN's newest Investigator STR kits contain a novel Quality Sensor (QS) that acts as an internal performance control and gives useful information for evaluating the amplification efficiency of the PCR. QS indicates whether the reaction has worked in general and furthermore allows discrimination between the presence of inhibitors and DNA degradation as the cause of the typical ski-slope effect observed in STR profiles of such challenging samples. This information can be used to choose the most appropriate rework strategy. Based on the latest PCR chemistry, called FRM 2.0, QIAGEN now provides the next technological generation for STR analysis: the Investigator ESSplex SE QS and Investigator 24plex QS Kits. The new PCR chemistry ensures robust and fast PCR amplification with improved inhibitor resistance and easy handling for manual or automated setup. The short cycling time of 60 min reduces the duration of the total PCR analysis, making whole-workflow analysis in one day feasible. To facilitate the interpretation of STR results, a smart primer design was applied for the best possible marker distribution, the highest concordance rates, and robust gender typing.
Keywords: PCR, QIAGEN, quality sensor, STR
Procedia PDF Downloads 495
24933 Adoption of Big Data by Global Chemical Industries
Authors: Ashiff Khan, A. Seetharaman, Abhijit Dasgupta
Abstract:
The new era of big data (BD) is influencing chemical industries tremendously, providing several opportunities to reshape the way they operate and helping them shift towards intelligent manufacturing. Given the availability of free software and the large amount of real-time data generated and stored in process plants, chemical industries are still in the early stages of big data adoption. The industry is just starting to realize the importance of the large amount of data it owns for making the right decisions and supporting its strategies. This article explores the professional competencies and data science capabilities that influence BD in chemical industries and can help them move towards intelligent manufacturing quickly and reliably. The article utilizes a literature review and identifies potential applications in the chemical industry for moving from conventional methods to a data-driven approach. The scope of this document is limited to the adoption of BD in chemical industries and the variables identified in the article. To achieve this objective, government, academia, and industry must work together to overcome all present and future challenges.
Keywords: chemical engineering, big data analytics, industrial revolution, professional competence, data science
Procedia PDF Downloads 85
24932 Secure Multiparty Computations for Privacy Preserving Classifiers
Authors: M. Sumana, K. S. Hareesha
Abstract:
Secure computations are essential when performing privacy-preserving data mining. Distributed privacy-preserving data mining involves two or more sites that cannot pool their data to a third party due to laws protecting the individual. Hence, in order to model the private data without compromising privacy or losing information, secure multiparty computations are used. Secure computations of the product, mean, variance, dot product, and sigmoid function using the additive and multiplicative homomorphic properties are discussed. The computations are performed on vertically partitioned data, with a single site holding the class value.
Keywords: homomorphic property, secure product, secure mean and variance, secure dot product, vertically partitioned data
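To make the flavor of a secure multiparty computation concrete, the sketch below computes a sum and mean across three sites with additive secret sharing. This is a simpler primitive than the homomorphic-property protocols the abstract discusses, and all names and values are illustrative.

```python
import secrets

Q = 2**61 - 1  # large prime modulus; individual shares look uniformly random

def share(value, n_parties):
    """Split an integer into n additive shares that sum to value mod Q."""
    shares = [secrets.randbelow(Q) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

# Three sites each hold a private value; no single site learns the others'.
private_values = [42, 17, 99]
all_shares = [share(v, 3) for v in private_values]

# Each site sums the shares it received; only the partial sums are revealed.
partials = [sum(col) % Q for col in zip(*all_shares)]
secure_sum = sum(partials) % Q
secure_mean = secure_sum / len(private_values)
print(secure_sum, secure_mean)   # 158 and 52.67, with no value disclosed
```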
Procedia PDF Downloads 412
24931 Analyzing the Shearing-Layer Concept Applied to Urban Green System
Authors: S. Pushkar, O. Verbitsky
Abstract:
Currently, green rating systems are mainly utilized for correctly sizing mechanical and electrical systems, which have short lifetime expectancies. In these systems, passive solar and bio-climatic architecture, which have long lifetime expectancies, are neglected. Urban rating systems consider buildings and services, in addition to neighborhoods and public transportation, as integral parts of the built environment. The main goal of this study was to develop a more consistent point-allocation system for urban building standards by using six different lifetime shearing layers: Site, Structure, Skin, Services, Space, and Stuff, each reflecting distinct environmental damages. This shearing-layer concept was applied to internationally well-known rating systems: Leadership in Energy and Environmental Design (LEED) for Neighborhood Development, the BRE Environmental Assessment Method (BREEAM) for Communities, and the Comprehensive Assessment System for Building Environmental Efficiency (CASBEE) for Urban Development. The results showed that LEED for Neighborhood Development and BREEAM for Communities focus on long-lifetime-expectancy building designs, whereas CASBEE for Urban Development gives equal importance to the Building and Service layers. Moreover, although this rating system was applied using a building-scale assessment, 'Urban Area + Buildings' focuses on a short-lifetime-expectancy system design, neglecting to improve the architectural design by considering bio-climatic and passive solar aspects.
Keywords: green rating system, urban community, sustainable design, standardization, shearing-layer concept, passive solar architecture
Procedia PDF Downloads 579
24930 Cross Project Software Fault Prediction at Design Phase
Authors: Pradeep Singh, Shrish Verma
Abstract:
Software fault prediction models are created using source code, processed metrics from the same or a previous version of the code, and related fault data. Some companies do not store or keep track of all the artifacts required for software fault prediction. To construct fault prediction models for such companies, training data from other projects can be one potential solution. The earlier we predict a fault, the less it costs to correct. The training data consist of metrics data and related fault data at the function/module level. This paper investigates fault prediction at an early stage using cross-project data, focusing on design metrics. In this study, an empirical analysis is carried out to validate design metrics for cross-project fault prediction. The machine learning technique used for evaluation is Naïve Bayes. The design-phase metrics of other projects can serve as an initial guideline for projects where no previous fault data are available. We analyze seven data sets from the NASA Metrics Data Program, which offer design as well as code metrics. Overall, the results of cross-project learning are comparable to learning on within-company data.
Keywords: software metrics, fault prediction, cross project, within project
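The cross-project setup itself is easy to sketch: train a Naïve Bayes classifier on one project's design metrics and fault labels, then predict for a project with no fault history. The feature values below are synthetic placeholders, not NASA MDP data.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
# Source project: design metrics (e.g., fan-in/fan-out, complexity) + fault labels
X_source = rng.normal(size=(200, 4))
y_source = rng.integers(0, 2, 200)        # 1 = module was faulty
# Target project: design metrics only; no fault history available
X_target = rng.normal(size=(50, 4))

model = GaussianNB().fit(X_source, y_source)   # train on the other project
predicted_faulty = model.predict(X_target)     # predictions for the new project
```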
Procedia PDF Downloads 344
24929 Comparing Emotion Recognition from Voice and Facial Data Using Time Invariant Features
Authors: Vesna Kirandziska, Nevena Ackovska, Ana Madevska Bogdanova
Abstract:
Emotion recognition is a challenging problem, still open from the perspective of both intelligent systems and psychology. In this paper, both voice features and facial features are used to build an emotion recognition system. Support Vector Machine classifiers are built using raw data from video recordings. The results obtained for emotion recognition are given, and a discussion of the validity and the expressiveness of different emotions is presented. A comparison is made between the classifiers built from facial data only, from voice data only, and from the combination of both. The need for a better combination of the information from facial expressions and voice data is argued.
Keywords: emotion recognition, facial recognition, signal processing, machine learning
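One simple way to realize the "combination of both" condition is early fusion: concatenate the two feature vectors and train a single SVM. The sketch below uses synthetic stand-ins for the voice and facial features, since the paper's exact feature sets are not given.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
voice = rng.normal(size=(100, 13))    # e.g., MFCC-style voice features (assumed)
face = rng.normal(size=(100, 20))     # e.g., facial landmark features (assumed)
labels = rng.integers(0, 4, 100)      # four emotion classes, synthetic

fused = np.hstack([voice, face])      # early fusion: concatenate feature vectors
clf = SVC(kernel="rbf").fit(fused, labels)
print(clf.predict(fused[:5]))         # classifier using both modalities at once
```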
Procedia PDF Downloads 315
24928 Cryptosystems in Asymmetric Cryptography for Securing Data on Cloud at Various Critical Levels
Authors: Sartaj Singh, Amar Singh, Ashok Sharma, Sandeep Kaur
Abstract:
With the threats emerging in the digital world, we need to work continuously on security in all its aspects, from hardware to software as well as data modelling. The rise in social media activity and the hunger for data among various entities lead to cybercrime and more attacks on the privacy and security of persons. Cryptography has always been employed to prevent access to important data through many mechanisms. Symmetric-key and asymmetric-key cryptography have been used to keep data secret at rest as well as in transmission. Various cryptosystems have evolved over time to make data more secure. In this research article, we study various cryptosystems in asymmetric cryptography and their applications and usefulness, with much emphasis given to elliptic curve cryptography and the algebraic mathematics it involves.
Keywords: cryptography, symmetric key cryptography, asymmetric key cryptography
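The group operation that elliptic curve cryptography builds on can be shown on a deliberately tiny, insecure textbook curve. The sketch below implements point addition and scalar multiplication over GF(17) and uses them for a Diffie-Hellman-style key agreement; real deployments use standardized curves and vetted libraries, not hand-rolled arithmetic.

```python
# Toy curve y^2 = x^3 + 2x + 2 over GF(17), generator G = (5, 1) of order 19.
P, A = 17, 2
G = (5, 1)
O = None  # point at infinity (group identity)

def add(p1, p2):
    """Elliptic-curve point addition (handles identity and doubling)."""
    if p1 is O: return p2
    if p2 is O: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return O
    if p1 == p2:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def mul(k, p):
    """Scalar multiplication by double-and-add."""
    r = O
    while k:
        if k & 1:
            r = add(r, p)
        p = add(p, p)
        k >>= 1
    return r

# ECDH sketch: each side picks a secret scalar; the shared secrets match.
a, b = 7, 11
assert mul(a, mul(b, G)) == mul(b, mul(a, G))
```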
Procedia PDF Downloads 124
24927 Standardization of Propagation Techniques in Selected Native Plants of Kuwait
Authors: Laila Almulla, Narayana Bhat, Majda Suleiman, Sheena Jacob
Abstract:
Biodiversity conservation has become one of the challenging priorities in combating species extinction for many countries, including the State of Kuwait. Since native plants are better adapted to the local environment, can endure long spells of drought, withstand high soil salinity levels, and lend a more natural character to landscape projects, their use will both conserve natural resources and produce sustainable greenery. When native plants are properly blended with naturalized exotic ornamental plants in a landscape, they can improve social and cultural benefits. Screening of exotic and native plants in Kuwait during the past two decades has led to the selection of some very promising plants. Continued evaluation of additional native and exotic plants is essential to increase the diversity of plant resources for greenery projects. Therefore, an effort was made to evaluate further native plants for their suitability for greenery applications. In the present study, various treatments were used to mass-multiply selected plants from seed and to secure maximum germination. Seeds were subjected to nine treatments, and each treatment was replicated five times with ten seeds per treatment unit. After treatment, the seeds of Zygophyllum qatarense were incubated at 30 °C, with three lights for 12 h, at 40% humidity, whereas the seeds of Haloxylon salicornicum were incubated at 22 °C, with continuous light, at 40% humidity. Soaking in 250 ppm GA3 resulted in the highest germination percentage (20%) in Zygophyllum qatarense, and soaking in 500 ppm GA3 resulted in 6% germination in Haloxylon salicornicum. Germination of viable seeds is influenced by various external and internal factors; a seed must not be in a state of dormancy, and the environmental requirements for germination of that seed must be met before germination can occur.
Keywords: landscape, native plants, revegetation, seed germination
Procedia PDF Downloads 526
24926 Data Recording for Remote Monitoring of Autonomous Vehicles
Authors: Rong-Terng Juang
Abstract:
Autonomous vehicles offer the possibility of significant benefits to social welfare. However, fully automated cars are unlikely to appear in the near future. To speed the adoption of self-driving technologies, many governments worldwide are passing laws requiring data recorders for the testing of autonomous vehicles. Currently, a self-driving vehicle (e.g., a shuttle bus) has to be monitored from a remote control center. When an autonomous vehicle encounters an unexpected driving environment, such as road construction or an obstruction, it should request assistance from a remote operator. However, large amounts of data, including images, radar and lidar data, etc., have to be transmitted from the vehicle to the remote center. Therefore, this paper proposes a data compression method for in-vehicle networks for the remote monitoring of autonomous vehicles. Firstly, the time-series data are rearranged into a multi-dimensional signal space. Upon arrival, for controller area network (CAN) traffic, the new data are mapped onto a time-data two-dimensional space associated with the specific CAN identity. Secondly, the data are sampled based on differential sampling. Finally, the whole data set is encoded using existing algorithms such as Huffman, arithmetic, and codebook encoding methods. To evaluate system performance, the proposed method was deployed on an in-house-built autonomous vehicle. The testing results show that the amount of data can be reduced to as little as 1/7 of the raw data.
Keywords: autonomous vehicle, data compression, remote monitoring, controller area networks (CAN), Lidar
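The differential-sampling-plus-entropy-coding idea can be sketched on a synthetic CAN-like signal: take successive differences first, then entropy-code the now highly repetitive deltas. Here zlib's DEFLATE, which uses Huffman coding internally, stands in for the paper's encoder, and the signal itself is invented.

```python
import zlib
import numpy as np

# Synthetic CAN-like signal: slowly varying 16-bit sensor samples
rng = np.random.default_rng(3)
signal = np.cumsum(rng.integers(-2, 3, 10_000)).astype(np.int16)

raw = signal.tobytes()
deltas = np.diff(signal, prepend=np.int16(0)).astype(np.int16)  # differential step
compressed = zlib.compress(deltas.tobytes(), level=9)           # entropy-coding step

print(len(raw) / len(compressed))  # compression ratio; deltas compress far better
```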
Procedia PDF Downloads 163
24925 Multimedia Data Fusion for Event Detection in Twitter by Using Dempster-Shafer Evidence Theory
Authors: Samar M. Alqhtani, Suhuai Luo, Brian Regan
Abstract:
Data fusion technology can be the best way to extract useful information from multiple sources of data, and it has been widely applied in various applications. This paper presents a data fusion approach over multimedia data for event detection in Twitter using Dempster-Shafer evidence theory. The methodology applies a mining algorithm to detect the event. Two types of data enter the fusion. The first is features extracted from text using the bag-of-words method, calculated via term frequency-inverse document frequency (TF-IDF). The second is visual features extracted by applying the scale-invariant feature transform (SIFT). The Dempster-Shafer theory of evidence is applied in order to fuse the information from these two sources. Our experiments indicate that, compared to approaches using an individual data source, the proposed data fusion approach can increase the prediction accuracy of event detection. The experimental results show that the proposed method achieved a high accuracy of 0.97, compared with 0.93 using text only and 0.86 using images only.
Keywords: data fusion, Dempster-Shafer theory, data mining, event detection
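Dempster's rule of combination, the core fusion step, is compact enough to show directly. The two mass functions below, one for the text evidence and one for the image evidence, are invented numbers for illustration.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over frozenset focal elements (Dempster's rule)."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2       # mass assigned to disjoint hypotheses
    k = 1.0 - conflict                    # normalization factor
    return {s: w / k for s, w in combined.items()}

E, N = frozenset({"event"}), frozenset({"no_event"})
theta = E | N                             # the frame of discernment
m_text = {E: 0.7, N: 0.1, theta: 0.2}     # evidence from TF-IDF text features
m_image = {E: 0.6, N: 0.2, theta: 0.2}    # evidence from SIFT image features
print(dempster_combine(m_text, m_image))  # event mass rises to 0.85 after fusion
```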
Procedia PDF Downloads 410
24924 Legal Issues of Collecting and Processing Big Health Data in the Light of European Regulation 679/2016
Authors: Ioannis Iglezakis, Theodoros D. Trokanas, Panagiota Kiortsi
Abstract:
This paper aims to explore the major legal issues arising from the collection and processing of Big Health Data in the light of the new European secondary legislation for the protection of the personal data of natural persons, placing emphasis on the General Data Protection Regulation 679/2016. Whether Big Health Data can be characterised as 'personal data' or not is really the crux of the matter. The legal ambiguity is compounded by the fact that, even though the processing of Big Health Data is premised on the de-identification of the data subject, the possibility of combining Big Health Data with other data circulating freely on the web or in other data files cannot be excluded. Another key point is that the application of some provisions of the GDPR to Big Health Data may both absolve the data controller of his legal obligations and deprive the data subject of his rights (e.g., the right to be informed), ultimately undermining the fundamental right to the protection of the personal data of natural persons. Moreover, data subjects' rights (e.g., the right not to be subject to a decision based solely on automated processing) are heavily impacted by the use of AI, algorithms, and technologies that reclaim health data for further use, producing sometimes ambiguous results that have a substantial impact on individuals. On the other hand, as the COVID-19 pandemic has revealed, Big Data analytics can offer crucial sources of information. In this respect, this paper identifies and systematises the legal provisions concerned, offering interpretative solutions that tackle dangers to data subjects' rights while embracing the opportunities that Big Health Data has to offer. In addition, particular attention is paid to the scope of 'consent' as a legal basis for the collection and processing of Big Health Data, as the application of data analytics to Big Health Data signals the construction of new data and subject profiles. Finally, the paper addresses the knotty problem of role assignment (i.e., distinguishing between controller and processor, and between joint controllers and joint processors) in an era of extensive Big Health Data sharing. The findings are the fruit of a current research project conducted by a three-member research team at the Faculty of Law of the Aristotle University of Thessaloniki and funded by the Greek Ministry of Education and Religious Affairs.
Keywords: big health data, data subject rights, GDPR, pandemic
Procedia PDF Downloads 129
24923 Adaptive Data Approximations Codec (ADAC) for AI/ML-Based Cyber-Physical Systems
Authors: Yong-Kyu Jung
Abstract:
The fast growth of information technology has led to demands to access and process data. CPSs depend heavily on the timing of hardware/software operations and on communication over the network (i.e., real-time/parallel operations in CPSs, e.g., autonomous vehicles). Data processing is an important means of confronting data management issues by reducing the gap between technological growth on the one hand and data complexity and channel bandwidth on the other. An adaptive perpetual data approximation method is introduced to manage the actual entropy of the digital spectrum. An ADAC, implemented as an accelerator and/or as apps for servers and smart connected devices, adaptively rescales digital content (by 62.8% on average), along with data processing/access time, energy, and encryption/decryption overheads in AI/ML applications (e.g., facial ID/recognition).
Keywords: adaptive codec, AI, ML, HPC, cyber-physical, cybersecurity
Procedia PDF Downloads 78
24922 Real-Time Visualization Using GPU-Accelerated Filtering of LiDAR Data
Authors: Sašo Pečnik, Borut Žalik
Abstract:
This paper presents a technique for the real-time visualization and filtering of classified LiDAR point clouds. The visualization is capable of displaying filtered information organized into layers by the classification attribute saved within the LiDAR data sets. We explain the data structure and data management used, which enable the real-time presentation of layered LiDAR data. Real-time visualization is achieved with LOD optimization based on the distance from the observer, without loss of quality. The filtering process is done in two steps, executed entirely on the GPU and implemented using programmable shaders.
Keywords: filtering, graphics, level-of-details, LiDAR, real-time visualization
Procedia PDF Downloads 308
24921 Isolation of Clitorin and Manghaslin from Carica papaya L. Leaves by CPC and Its Quantitative Analysis by QNMR
Authors: Norazlan Mohmad Misnan, Maizatul Hasyima Omar, Mohd Isa Wasiman
Abstract:
Papaya (Carica papaya L., Caricaceae) is a tree cultivated mainly for its fruit in many tropical regions, including Australia, Brazil, China, Hawaii, and Malaysia. Besides the fruit, its leaves, seeds, and latex have also been used traditionally for treating diseases and are reported to possess anti-cancer and anti-malarial properties. Its leaves have been reported to contain various chemical compounds, such as alkaloids, flavonoids, and phenolics, with clitorin and manghaslin among the major flavonoids present. Thus, the aim of this study is to quantify the purity of these isolated compounds (clitorin and manghaslin) using quantitative nuclear magnetic resonance (qNMR) analysis. Only fresh C. papaya leaves were used in the juice extraction procedure; the extract was subsequently freeze-dried to obtain a dark green powdered form prior to centrifugal partition chromatography (CPC) separation. The CPC experiments were performed using a two-phase solvent system comprising ethyl acetate/butanol/water (1:4:5, v/v/v). The upper organic phase was used as the stationary phase, and the lower aqueous phase was employed as the mobile phase. Ten fractions were obtained after a one-hour run. Fractions 6 and 8 were identified as clitorin (m/z 739.21 [M-H]-) and manghaslin (m/z 755.21 [M-H]-), respectively, based on LC-MS data and full NMR analysis (1H NMR, 13C NMR, HMBC, and HSQC). The 1H-qNMR measurements were carried out using a 400 MHz NMR spectrometer (JEOL ECS 400 MHz, Japan), with deuterated methanol as the solvent. Quantification was performed using the AQARI method (Accurate Quantitative NMR) with deuterated 1,4-bis(trimethylsilyl)benzene (BTMSB) as the internal reference substance. The AQARI protocol covers not only the NMR measurement but also the sample preparation, providing higher precision and accuracy than other qNMR methods. The 90° pulse length and the T1 relaxation times for the compounds and BTMSB were determined prior to quantification to give the best signal-to-noise ratio. Regions containing the two downfield signals from the aromatic part (6.00-6.89 ppm) and the singlet signal (18H) arising from BTMSB (0.63-1.05 ppm) were selected for integration. The purities of clitorin and manghaslin were calculated to be 52.22% and 43.36%, respectively; further purification is needed to increase them. This work demonstrates the use of qNMR for the quality control and standardization of plant extracts, which can be applied to the NMR fingerprinting of other plant-based products with good reproducibility, including cases where commercial standards are not readily available.
Keywords: Carica papaya, clitorin, manghaslin, quantitative Nuclear Magnetic Resonance, Centrifugal Partition Chromatography
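The internal-standard relation behind a 1H-qNMR purity figure is worth making explicit. The function below encodes the standard equation; the numbers in the example call are hypothetical placeholders (molar masses approximate, and none of the integrals or masses are from this study).

```python
def qnmr_purity(I_a, I_std, N_a, N_std, M_a, M_std, m_a, m_std, P_std):
    """Internal-standard 1H-qNMR purity relation:
    P_a = (I_a / I_std) * (N_std / N_a) * (M_a / M_std) * (m_std / m_a) * P_std
    I: integral area, N: protons behind the integral, M: molar mass (g/mol),
    m: weighed mass (mg), P: purity (fraction); a = analyte, std = standard."""
    return (I_a / I_std) * (N_std / N_a) * (M_a / M_std) * (m_std / m_a) * P_std

# All numbers below are hypothetical placeholders, not values from the study.
purity = qnmr_purity(I_a=0.035, I_std=1.00, N_a=2, N_std=18,
                     M_a=740.7, M_std=222.5, m_a=10.0, m_std=5.0, P_std=0.999)
print(f"{purity:.1%}")   # about 52%, in the range the abstract reports
```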
Procedia PDF Downloads 496
24920 Estimating Destinations of Bus Passengers Using Smart Card Data
Authors: Hasik Lee, Seung-Young Kho
Abstract:
Nowadays, automatic fare collection (AFC) systems are widely used in many countries. However, the smart card data of many cities do not contain alighting information, which is necessary to build OD matrices. Therefore, in order to utilize smart card data, the destinations of passengers should be estimated. In this paper, kernel density estimation was used to forecast the probabilities of the alighting stations of bus passengers; the method was applied to smart card data from Seoul, Korea, which contains both boarding and alighting information, and was validated against the actual data. In some cases, the stochastic method was more accurate than the deterministic method. It is therefore sufficiently accurate to be used to build OD matrices.
Keywords: destination estimation, Kernel density estimation, smart card data, validation
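A minimal sketch of the idea, with all numbers invented: build a kernel density over the locations where a passenger alighted in the past, then score the candidate stops of the current trip.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical training data: distances along a route (km) where a card holder
# previously alighted, used to build a personal alighting-location density.
past_alightings = np.array([3.1, 3.4, 2.9, 7.8, 3.2, 3.3, 8.0])
kde = gaussian_kde(past_alightings)

# Candidate downstream stops for the current trip; pick the most probable one.
candidate_stops = np.array([1.0, 3.0, 5.0, 8.0])
probs = kde(candidate_stops)
best = candidate_stops[np.argmax(probs / probs.sum())]
print(best)   # the stop with the highest estimated alighting probability
```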
Procedia PDF Downloads 352
24919 Evaluated Nuclear Data Based Photon Induced Nuclear Reaction Model of GEANT4
Authors: Jae Won Shin
Abstract:
We develop an evaluated-nuclear-data-based photonuclear reaction model for GEANT4 to simulate photon-induced neutron production more accurately. The evaluated photonuclear data libraries from ENDF/B-VII.1 are taken as input. Incident photon energies up to 140 MeV, the threshold energy for pion production, are considered. To check the validity of the data-based model, we calculate the photoneutron production cross-sections and yields and compare them with experimental data. The results obtained from the developed model are found to be in good agreement with the experimental data for (γ,xn) reactions.
Keywords: ENDF/B-VII.1, GEANT4, photoneutron, photonuclear reaction
Procedia PDF Downloads 274
24918 Optimizing Communications Overhead in Heterogeneous Distributed Data Streams
Authors: Rashi Bhalla, Russel Pears, M. Asif Naeem
Abstract:
In this 'information explosion era', analyzing data, a critical commodity, by mining knowledge from vertically distributed data streams incurs a huge communication cost. However, efforts to decrease communication in the distributed environment have an adverse influence on classification accuracy; a research challenge therefore lies in maintaining a balance between transmission cost and accuracy. This paper proposes a method based on Bayesian inference to reduce the communication volume in a heterogeneous distributed environment while retaining prediction accuracy. Our experimental evaluation reveals that a significant reduction in communication can be achieved across a diverse range of dataset types.
Keywords: big data, bayesian inference, distributed data stream mining, heterogeneous-distributed data
Procedia PDF Downloads 161
24917 A Paradigm Shift in the Cost of Illness of Type 2 Diabetes Mellitus over a Decade in South India: A Prevalence Based Study
Authors: Usha S. Adiga, Sachidanada Adiga
Abstract:
Introduction: Diabetes mellitus (DM) is one of the most common non-communicable diseases and imposes a large economic burden on the global health-care system. Cost-of-illness studies in India have assessed the health-care cost of DM but have certain limitations due to a lack of standardization of the methods used, improper documentation of data, lack of follow-up, etc. The objective of the study was to estimate the cost of illness of uncomplicated versus complicated type 2 diabetes mellitus in Coastal Karnataka, India. The study also aimed to determine the trend in the cost of illness of the disease over a decade. Methodology: A prevalence-based, bottom-up study was carried out in two tertiary care hospitals located in Coastal Karnataka after ethical approval. Direct medical costs (annual laboratory costs, pharmacy costs, consultation charges, hospital bed charges, and surgical/intervention costs) of 238 and 340 diabetic patients, respectively, from the two hospitals were obtained from the medical record sections. Patients were divided into six groups: uncomplicated diabetes, diabetic retinopathy (DR), nephropathy (DN), neuropathy (DNeu), diabetic foot (DF), and ischemic heart disease (IHD). The costs incurred in 2008 and 2017 in these groups were compared to study the trend in the cost of illness. The Kruskal-Wallis test followed by Dunn's test was used to compare median costs between the groups, and Spearman's correlation test was used for correlation studies. Results: Uncomplicated patients had significantly lower costs (p < 0.0001) than the other groups. Patients with IHD had the highest medical expenses (p < 0.0001), followed by DN and DF (p < 0.0001). Annual medical costs were 1.8, 2.76, 2.77, 1.76, and 4.34 times higher in retinopathy, nephropathy, diabetic foot, neuropathy, and IHD patients, respectively, compared to the costs incurred by uncomplicated diabetics. Other costs showed a similar rising pattern. A positive correlation was observed between the costs incurred and the duration of diabetes, and a negative correlation between glycemic status and cost. The cost incurred in the management of DM in 2017 was 1.4 to 2.7 times higher than that in 2008. Conclusion: It is evident from the study that the economic burden due to diabetes mellitus is substantial. It poses a significant financial burden on the health-care system, individuals, and society as a whole. There is a need for strategies to achieve optimal glycemic control and to operationalize regular, early screening for complications so as to reduce the burden of the disease.
Keywords: COI, diabetes mellitus, bottom-up approach, economics
Procedia PDF Downloads 116
24916 Data Privacy: Stakeholders' Conflicts in Medical Internet of Things
Authors: Benny Sand, Yotam Lurie, Shlomo Mark
Abstract:
Medical Internet of Things (MIoT), AI, and data privacy are linked forever in a Gordian knot. This paper explores the conflicts of interest between stakeholders regarding data privacy in the MIoT arena. When patients are hospitalized at home, MIoT can play a significant role in improving the health of large parts of the population by providing medical teams with tools for collecting data, monitoring patients' health parameters, and even enabling remote treatment. While the amount of data handled by MIoT devices grows exponentially, different stakeholders have conflicting understandings of and concerns about this data. The findings of the research indicate that medical teams are not concerned about the violation of the data privacy rights of patients in in-home healthcare, while patients are more troubled and, in many cases, unaware that their data is being used without their consent. MIoT technology is in its early phases, and hence a mixed qualitative and quantitative research approach will be used, including case studies and questionnaires, in order to explore this issue and provide alternative solutions.
Keywords: MIoT, data privacy, stakeholders, home healthcare, information privacy, AI
Procedia PDF Downloads 102