Search results for: measured data.
7431 Physiological Action of Anthraquinone-Containing Preparations
Authors: Dmitry Yu. Korulkin, Raissa A. Muzychkina, Evgenii N. Kojaev
Abstract:
This review presents generalized data on the biological activity of anthraquinone-containing plants and of preparations based on them. Data from traditional medicine and the results of bioscreening and clinical studies of these preparations are analyzed.
Keywords: Anthraquinones, physiologically active substances, phytopreparation, Ramon.
7430 Modified Data Mining Approach for Defective Diagnosis in Hard Disk Drive Industry
Authors: S. Soommat, S. Patamatamkul, T. Prempridi, M. Sritulyachot, P. Ineure, S. Yimman
Abstract:
Currently, as the slider process in the hard disk drive industry becomes more complex, defective diagnosis for yield improvement becomes more complicated and time-consuming. Manufacturing data analysis with a data mining approach is widely used for solving this problem. An existing mining approach combining K-Means clustering, the machine-oriented Kruskal-Wallis test and the multivariate chart has been applied to defective diagnosis, but it is still a semiautomatic diagnosis system. This article aims to modify the algorithm to support automatic decisions within the existing approach. Based on the research framework, the new approach can perform automatic diagnosis and help engineers find the defective factors about 50% faster than the existing approach.
Keywords: Slider process, Defective diagnosis, Data mining.
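As an aside for readers unfamiliar with the building blocks named in this abstract, the following minimal Python sketch (not the authors' code) combines K-Means clustering with the Kruskal-Wallis test to flag a machine-level defective factor; the dataset, column names and thresholds are invented for illustration only.

# Illustrative sketch: cluster part-level measurements with K-Means, then rank the
# "machine" factor with the Kruskal-Wallis test. All data below are synthetic.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from scipy.stats import kruskal

rng = np.random.default_rng(0)
# Hypothetical manufacturing data: one row per slider, a measured parameter and
# the machine that processed it.
df = pd.DataFrame({
    "measurement": rng.normal(0, 1, 300),
    "machine": rng.choice(["M1", "M2", "M3"], 300),
})
df.loc[df["machine"] == "M3", "measurement"] += 1.5   # inject a machine-level shift

# Step 1: K-Means separates "good" and "suspect" populations of the measurement.
df["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(df[["measurement"]])

# Step 2: Kruskal-Wallis tests whether the measurement distribution differs across
# machines; a small p-value flags the machine dimension as a likely defective factor.
groups = [g["measurement"].values for _, g in df.groupby("machine")]
stat, p = kruskal(*groups)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.3g}")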
7429 Authentication and Data Hiding Using a Reversible ROI-based Watermarking Scheme for DICOM Images
Authors: Osamah M. Al-Qershi, Khoo Bee Ee
Abstract:
In recent years, image watermarking has become an important research area in data security, confidentiality and image integrity. Many watermarking techniques have been proposed for medical images. However, medical images, unlike most images, require extreme care when embedding additional data within them, because the additional information must not affect image quality and readability. Also, medical records, electronic or not, are bound by medical secrecy and must therefore be kept confidential. To fulfill those requirements, this paper presents a lossless watermarking scheme for DICOM images. The proposed fragile scheme combines two reversible techniques based on difference expansion for hiding the patient's data and for protecting the region of interest (ROI), with tamper detection and recovery capability. The patient's data are embedded into the ROI, while recovery data are embedded into the region of non-interest (RONI). The experimental results show that the original image can be extracted exactly from the watermarked one when no tampering has occurred. If the ROI is tampered with, the tampered area can be localized and recovered with a high-quality version of the original area.
Keywords: DICOM, reversible, ROI-based, watermarking.
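For readers unfamiliar with difference expansion, the reversible primitive this abstract builds on, here is a minimal Python sketch assuming Tian-style expansion on a single pixel pair; overflow handling and the paper's ROI/RONI and tamper-recovery logic are omitted.

# Minimal sketch of reversible difference-expansion embedding on one pixel pair.
def de_embed(x: int, y: int, bit: int):
    l = (x + y) // 2          # integer average (kept invariant)
    h = x - y                 # difference
    h2 = 2 * h + bit          # expand the difference and append the payload bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2: int, y2: int):
    h2 = x2 - y2
    bit = h2 & 1              # recover the payload bit
    h = h2 // 2               # restore the original difference
    l = (x2 + y2) // 2
    return l + (h + 1) // 2, l - h // 2, bit

x2, y2 = de_embed(120, 118, 1)
assert de_extract(x2, y2) == (120, 118, 1)   # exact recovery, hence "reversible"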
7428 Continual Learning Using Data Generation for Hyperspectral Remote Sensing Scene Classification
Authors: Samiah Alammari, Nassim Ammour
Abstract:
When a massive number of tasks is provided successively to a deep learning process, good model performance requires preserving the data of previous tasks so the model can be retrained for each upcoming classification. Otherwise, the model performs poorly due to the catastrophic forgetting phenomenon. To overcome this shortcoming, we developed a successful continual learning deep model for the classification of remote sensing hyperspectral image regions. The proposed neural network architecture encapsulates two trainable subnetworks. The first module adapts its weights by minimizing the discrimination error between the land-cover classes during new task learning, and the second module tries to learn how to replicate the data of the previous tasks by discovering the latent data structure of the new task dataset. We conduct experiments on the Indian Pines hyperspectral image (HSI) dataset. The results confirm the capability of the proposed method.
Keywords: Continual learning, data reconstruction, remote sensing, hyperspectral image segmentation.
7427 Stego Machine – Video Steganography using Modified LSB Algorithm
Authors: Mritha Ramalingam
Abstract:
Computer technology and the Internet have made a breakthrough in data communication. This has opened a whole new way of implementing steganography to ensure secure data transfer. Steganography is the fine art of hiding information: hiding the message in a carrier file enables deniability of the existence of any message at all. This paper designs a stego machine to develop a steganographic application that hides text data in a computer video file and retrieves the hidden information. This is achieved by embedding the text file in the video file in such a way that the video does not lose its functionality, using the Least Significant Bit (LSB) modification method. This method applies imperceptible modifications. The proposed method strives for high security, relying on an eavesdropper's inability to detect the hidden information.
Keywords: Data hiding, LSB, Stego machine, Video Steganography.
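To make the LSB idea concrete, the short Python sketch below (not the paper's stego machine) hides the bits of a text message in the least significant bit of successive carrier bytes, such as raw frame samples; capacity checks and container-format handling are deliberately minimal.

# Illustrative LSB embed/extract on a byte array standing in for one video frame.
def embed_lsb(carrier: bytearray, message: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(bits) <= len(carrier), "carrier too small"
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit        # overwrite only the least significant bit
    return out

def extract_lsb(carrier: bytes, n_bytes: int) -> bytes:
    bits = [carrier[i] & 1 for i in range(n_bytes * 8)]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[k * 8:(k + 1) * 8]))
        for k in range(n_bytes)
    )

frame = bytearray(range(256))                  # stand-in for one video frame
stego = embed_lsb(frame, b"hi")
assert extract_lsb(stego, 2) == b"hi"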
7426 Data Projects for “Social Good”: Challenges and Opportunities
Authors: Mikel Niño, Roberto V. Zicari, Todor Ivanov, Kim Hee, Naveed Mushtaq, Marten Rosselli, Concha Sánchez-Ocaña, Karsten Tolle, José Miguel Blanco, Arantza Illarramendi, Jörg Besier, Harry Underwood
Abstract:
One of the application fields for data analysis techniques and technologies gaining momentum is the area of social good or “common good”, covering cases related to humanitarian crises, global health care, or ecology and environmental issues, among others. The promotion of data-driven projects in this field aims at increasing the efficacy and efficiency of social initiatives, improving the way these actions help humanity in general and people in need in particular. This application field, however, poses its own barriers and challenges when developing data-driven projects, lagging behind other scenarios. These challenges derive from aspects such as the scope and scale of the social issue to solve, cultural and political barriers, the skills of the main stakeholders and the technological resources available, the motivation to be engaged in such projects, and the ethical and legal issues related to sensitive data. This paper analyzes the application of data projects in the field of social good, reviewing its current state and noteworthy initiatives, and presenting a framework covering the key aspects to analyze in such projects. The goal is to provide guidelines to understand the main challenges and opportunities for this type of data project, as well as to identify the main differences compared to “classical” data projects in general. A case study is presented on the initial steps and stakeholder analysis of a data project for the inclusion of refugees in the city of Frankfurt, Germany, in order to empirically confront the framework with a real example.
Keywords: Data-driven projects, humanitarian operations, personal and sensitive data, social good, stakeholder analysis.
7425 Measuring E-Learning Effectiveness Using a Three-Way Comparison
Authors: Matthew Montebello
Abstract:
Within an academic setting, e-learning effectiveness has notoriously been measured by comparing the e-learning medium to the traditional face-to-face teaching methodology. In this paper, a simple yet innovative comparison methodology is introduced, whereby the effectiveness of next-generation e-learning systems is assessed in contrast not only to the face-to-face mode but also to the classical e-learning modality. Ethical and logistical issues are also discussed, as this three-way approach to comparing teaching methodologies was applied and documented in a real empirical study within a higher education institution.
Keywords: E-learning effectiveness, higher education, teaching modality comparison.
7424 A Prediction Model for Dynamic Responses of Building from Earthquake Based on Evolutionary Learning
Authors: Kyu Jin Kim, Byung Kwan Oh, Hyo Seon Park
Abstract:
Structural health monitoring based on seismic responses has been performed to prevent seismic damage. Structural seismic damage to a building is caused by instantaneous stress concentration, which is related to the dynamic characteristics of the earthquake. Meanwhile, seismic response analysis to estimate the dynamic responses of a building demands significantly high computational cost. To prevent the failure of structural members given the characteristics of the earthquake, and to avoid the high computational cost of seismic response analysis, this paper presents an artificial neural network (ANN) based prediction model for the dynamic responses of a building over a specific time length. Using the measured dynamic responses, the input and output nodes of the ANN are formed according to the specific time length and adopted for training. The model implements an evolutionary radial basis function neural network (ERBFNN), in which a radial basis function network (RBFN) is integrated with an evolutionary optimization algorithm to find the RBF variables. The effectiveness of the proposed model is verified through an analytical study applying responses from dynamic analysis of a multi-degree-of-freedom system as training data for the ERBFNN.
Keywords: Structural health monitoring, dynamic response, artificial neural network, radial basis function network, genetic algorithm.
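A rough Python sketch of the ERBFNN idea follows, under stated simplifying assumptions: RBF centers are taken directly from the training inputs, an elementary evolutionary search tunes a single shared RBF width, and the output weights are solved by least squares. This is illustrative only, not the authors' implementation, and the data are synthetic.

# Toy ERBFNN-style fit: evolve the RBF width, solve the linear output weights.
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(0, 1, 50)[:, None]            # stand-in for a measured response history
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.normal(size=50)

centers = X[::5]                               # 10 centers spread over the inputs

def rbf_design(X, width):
    d2 = (X - centers.T) ** 2                  # squared distances to each center (1D inputs)
    return np.exp(-d2 / (2 * width ** 2))

def fitness(width):
    Phi = rbf_design(X, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return -np.mean((Phi @ w - y) ** 2)        # negative MSE (higher is better)

# Tiny (mu+lambda)-style evolutionary loop over the width parameter.
pop = rng.uniform(0.01, 1.0, size=20)
for _ in range(30):
    children = np.clip(pop + 0.05 * rng.normal(size=pop.size), 1e-3, None)
    both = np.concatenate([pop, children])
    pop = both[np.argsort([-fitness(w) for w in both])][:20]

best = pop[0]
print(f"best RBF width = {best:.3f}, MSE = {-fitness(best):.4f}")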
7423 A Multi-Feature Deep Learning Algorithm for Urban Traffic Classification with Limited Labeled Data
Authors: Rohan Putatunda, Aryya Gangopadhyay
Abstract:
Acoustic sensors, if embedded in smart street lights, can help capture activities (car honking, sirens, events, traffic, etc.) in cities. The acoustic data from such scenarios are complex due to multiple audio streams originating from different events, and when these are decomposed into independent signals, the retrieved data volume is too small to train deep neural networks. In this paper, we therefore address two challenges: a) separating the mixed signals, and b) developing an efficient acoustic classifier under data paucity. To address these challenges, we propose a supervised deep learning architecture in which the initially captured mixed acoustic data are analyzed with the Fast Fourier Transform (FFT), the noise is filtered from the signal, and the result is decomposed into independent signals by fast independent component analysis (Fast ICA). To address the challenge of data paucity, we propose a multi-feature deep neural network whose high performance is reflected in our experiments when compared to a conventional convolutional neural network (CNN) and a multi-layer perceptron (MLP).
Keywords: FFT, ICA, vehicle classification, multi-feature DNN, CNN, MLP.
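The following hedged Python sketch illustrates the source-separation front end described above (not the authors' pipeline): two synthetic street-like signals are mixed and FastICA recovers independent components from the mixture. Signal names and mixing values are made up.

# Toy blind source separation with FastICA.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 8000)
siren = np.sign(np.sin(2 * np.pi * 6 * t)) * np.sin(2 * np.pi * 440 * t)  # toy siren
honk = np.sin(2 * np.pi * 300 * t)                                         # toy horn
S = np.c_[siren, honk]

A = np.array([[0.7, 0.3], [0.4, 0.6]])       # unknown mixing matrix
X = S @ A.T                                   # two microphones, two mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                  # estimated independent sources
print("recovered sources shape:", S_hat.shape)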
7422 An Educational Data Mining System for Advising Higher Education Students
Authors: Heba Mohammed Nagy, Walid Mohamed Aly, Osama Fathy Hegazy
Abstract:
Educational data mining is a specific data mining field applied to data originating from educational environments; it relies on different approaches to discover hidden knowledge from the available data. Among these approaches are machine learning techniques, which are used to build a system that acquires learning from previous data. Machine learning can be applied to solve different regression, classification, clustering and optimization problems.
In our research, we propose a “Student Advisory Framework” that utilizes classification and clustering to build an intelligent system. This system can be used to advise a first-year university student on an education track in which he/she is likely to succeed, aiming to decrease the high rate of academic failure among these students. A real case study at the Cairo Higher Institute for Engineering, Computer Science and Management is presented, using a real dataset collected from 2000 to 2012. The dataset has two main components: a pre-higher-education dataset and a first-year course results dataset. Results have proved the efficiency of the suggested framework.
Keywords: Classification, Clustering, Educational Data Mining (EDM), Machine Learning.
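As a simple illustration of the classification step in such an advisory framework (not the authors' system), the Python sketch below trains a decision tree on hypothetical pre-higher-education scores to suggest a track; feature names, scores and track labels are invented.

# Toy decision-tree "advisor": predict a likely-success track from prior scores.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features: [math score, language score, science score]
X_train = [[90, 60, 85], [85, 55, 90], [60, 92, 58], [55, 88, 62], [70, 70, 72], [65, 75, 68]]
y_train = ["engineering", "engineering", "management", "management", "computer science", "computer science"]

advisor = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(advisor.predict([[88, 58, 80]]))   # e.g. suggests the engineering track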
7421 Auto Classification for Search Intelligence
Authors: Lilac A. E. Al-Safadi
Abstract:
This paper proposes an auto-classification algorithm for Web pages using data mining techniques. We consider the problem of discovering association rules between terms in a set of Web pages belonging to a category in a search engine database, and present an auto-classification algorithm for solving this problem that is fundamentally based on the Apriori algorithm. The proposed technique has two phases. The first phase is a training phase, where human experts determine the categories of different Web pages and the supervised data mining algorithm combines these categories with appropriately weighted index terms according to the highest-support rules among the most frequent words. The second phase is the categorization phase, where a web crawler crawls through the World Wide Web to build a database categorized according to the result of the data mining approach. This database contains URLs and their categories.
Keywords: Information Processing on the Web, Data Mining, Document Classification.
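A small illustrative Apriori pass in Python (not the paper's system) is shown below: it finds frequent term pairs among toy "documents" of a category and derives term-to-term rules with support and confidence; the documents and thresholds are made up.

# Minimal frequent-pair mining and rule derivation, Apriori style.
from itertools import combinations
from collections import Counter

docs = [
    {"python", "tutorial", "code"},
    {"python", "code", "snippet"},
    {"python", "tutorial", "beginner"},
    {"travel", "guide"},
]
min_support, min_conf = 2, 0.6

item_counts = Counter(term for d in docs for term in d)
frequent_items = {t for t, c in item_counts.items() if c >= min_support}

pair_counts = Counter(
    pair
    for d in docs
    for pair in combinations(sorted(frequent_items & d), 2)
)
for (a, b), c in pair_counts.items():
    if c >= min_support:
        conf = c / item_counts[a]
        if conf >= min_conf:
            print(f"{a} -> {b}  (support={c}, confidence={conf:.2f})")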
7420 Coalescence of Insulin and Triglyceride/High Density Lipoprotein Cholesterol Ratio for the Derivation of a Laboratory Index to Predict Metabolic Syndrome in Morbid Obese Children
Authors: Orkide Donma, Mustafa M. Donma
Abstract:
Morbid obesity is a health-threatening condition, particularly in children. Generally, it leads to the development of metabolic syndrome (MetS), characterized by central obesity, elevated fasting blood glucose (FBG), triglyceride (TRG) and blood pressure values, and suppressed high density lipoprotein cholesterol (HDL-C) levels. However, some ambiguities exist in the diagnosis of MetS in children below 10 years of age. Therefore, clinicians need surrogate markers for the laboratory assessment of pediatric MetS. The aim of this study is to develop an index that will be more helpful in evaluating the further risks detected in morbid obese (MO) children. A total of 235 children participated in this study: children with normal body mass index (N-BMI) and children with varying degrees of obesity, namely overweight (OW), obese (OB), MO, as well as MetS. The study was approved by the Institutional Ethical Committee, and informed consent forms were obtained from the parents of the children. Obesity states of the children were classified using BMI percentiles adjusted for age and sex; for this purpose, tabulated data prepared by WHO were used. MetS criteria were defined. Systolic and diastolic blood pressure values were measured. Parameters related to glucose and lipid metabolism were determined: FBG, insulin (INS), HDL-C and TRG concentrations. The Diagnostic Obesity Notation Model Assessment Laboratory (DONMALAB) Index [ln TRG/HDL-C*INS] was introduced. Commonly used insulin resistance (IR) indices such as the Homeostatic Model Assessment for IR (HOMA-IR), as well as ratios such as TRG/HDL-C, TRG/HDL-C*INS, HDL-C/TRG*INS, TRG/HDL-C*INS/FBG and the log and ln versions of these ratios, were calculated. Results were interpreted using the statistical package program SPSS Version 16.0 for Windows, and the data were evaluated using appropriate statistical tests. The threshold for statistical significance was defined as 0.05. 35 N, 20 OW, 47 OB, 97 MO children and 36 children with MetS were investigated. Mean ± SD values of TRG/HDL-C were 1.27 ± 0.69, 1.86 ± 1.08, 2.15 ± 1.22, 2.48 ± 2.35 and 4.61 ± 3.92 for N, OW, OB, MO and MetS children, respectively. Corresponding values for the DONMALAB index were 2.17 ± 1.07, 3.01 ± 0.94, 3.41 ± 0.93, 3.43 ± 1.08 and 4.32 ± 1.00. The TRG/HDL-C ratio differed significantly between the N and MetS groups. On the other hand, the DONMALAB index exhibited statistically significant differences between the N group and all other groups except the OW group. This index was capable of discriminating MO children from those with MetS, with statistically significant elevations in MO children with MetS (p < 0.05). Multiple parameters are commonly used during the assessment of MetS. Upon evaluation of the values obtained for the N, OW, OB and MO groups and for MO children with MetS, the [ln TRG/HDL-C*INS] value was unique in discriminating children with MetS.
Keywords: Children, index, laboratory, metabolic syndrome, obesity.
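A minimal Python sketch of the two quantities central to this abstract, the TRG/HDL-C ratio and the DONMALAB index ln(TRG/HDL-C × INS), is given below; the units and sample values are assumptions for illustration, not taken from the study.

# Compute the TRG/HDL-C ratio and the DONMALAB index from laboratory values.
import math

def trg_hdl_ratio(trg: float, hdl_c: float) -> float:
    return trg / hdl_c

def donmalab_index(trg: float, hdl_c: float, insulin: float) -> float:
    return math.log(trg / hdl_c * insulin)     # natural logarithm, the "ln" in the text

print(trg_hdl_ratio(150.0, 40.0))                    # 3.75
print(round(donmalab_index(150.0, 40.0, 12.0), 2))   # ln(3.75 * 12) ≈ 3.81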
7419 Evaluation of Wavelet Filters for Image Compression
Authors: G. Sadashivappa, K. V. S. AnandaBabu
Abstract:
The aim of this paper is to characterize a larger set of wavelet functions for implementation in a still image compression system using the SPIHT algorithm. The paper discusses important features of the wavelet functions and filters used in sub-band coding to convert an image into wavelet coefficients in MATLAB. Image quality is measured objectively using the peak signal-to-noise ratio (PSNR) and its variation with bit rate (bpp). The effect of different parameters is studied for different wavelet functions. Our results provide a good reference for application designers of wavelet-based coders.
Keywords: Wavelet, image compression, sub-band, SPIHT, PSNR.
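The objective quality metric used in this abstract, PSNR, can be computed as in the following small Python sketch on toy 8-bit arrays (illustrative only; the paper's experiments were carried out in MATLAB).

# Peak signal-to-noise ratio between an original and a reconstructed image.
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(img, noisy):.2f} dB")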
7418 Retrieval of Relevant Visual Data in Selected Machine Vision Tasks: Examples of Hardware-based and Software-based Solutions
Authors: Andrzej Śluzek
Abstract:
To illustrate the diversity of methods used to extract relevant visual data (where the concept of relevance can be defined differently for different applications), the paper discusses three groups of such methods. They have been selected from a range of alternatives to highlight how hardware and software tools can be used in a complementary way to achieve various functionalities for different specifications of “relevant data”. First, the principles of gated imaging are presented (where relevance is determined by range). The second methodology is intended for intelligent intrusion detection, while the last one is used for content-based image matching and retrieval. All methods have been developed within projects supervised by the author.
Keywords: Relevant visual data, gated imaging, intrusion detection, image matching.
7417 Physicochemical Characterizations of Marine and River Sediments in the North of France
Authors: Abriak Nor Edine, Zentar Rachid, Achour Raouf, Tran Ngoc Thanh
Abstract:
This work was undertaken to develop a methodology to enhance the management of dredged marine and river sediments in the North of France. The main objective of this study is to determine the main characteristics of these sediments. To this end, the physical, mineralogical and chemical properties of both types of sediments are measured. Moreover, their potential impacts on the environment are assessed through leaching tests. Based on the obtained results, the potential for their use in road engineering is discussed.
Keywords: Marine sediments, River sediments, Physicochemical characterizations, Environmental characterizations.
7416 SOA-Based Mobile Application for Crime Control in Thailand
Authors: Jintana Khemprasit, Vatcharaporn Esichaikul
Abstract:
Crime is a major societal problem for most of the world's nations. Consequently, the police need to develop new methods to improve their efficiency in dealing with ever-increasing crime rates. Two of the common difficulties that the police face in crime control are crime investigation and the provision of crime information to the general public to help people protect themselves. Crime control in police operations involves the use of spatial data, crime data and related crime data from different organizations (depending on the nature of the analysis to be made). These types of data are collected from several heterogeneous sources in different formats and from different platforms, resulting in a lack of standardization. Moreover, there is no standard framework for crime data collection, integration and dissemination through mobile devices. An investigation into the current situation in crime control was carried out to identify the needs to resolve these issues. This paper proposes and investigates the use of service-oriented architecture (SOA) and mobile spatial information services in crime control. SOA plays an important role in crime control as an appropriate way to support data exchange and model sharing from heterogeneous sources. Crime control also needs to facilitate mobile spatial information services in order to exchange, receive, share and release location-based information to mobile users anytime and anywhere.
Keywords: Crime Control, Geographic Information System (GIS), Mobile GIS, Service Oriented Architecture (SOA).
7415 Multidimensional and Data Mining Analysis for Property Investment Risk Analysis
Authors: Nur Atiqah Rochin Demong, Jie Lu, Farookh Khadeer Hussain
Abstract:
Property investment in the real estate industry carries high risk due to its high cost and the uncertainty factors that affect the decisions made. The analytic hierarchy process has existed for some time; it refers to an expert's opinion to measure the uncertainty of the risk factors for risk analysis. Therefore, different levels of experts' experience will create different opinions and lead to conflict among the experts in the field. The objective of this paper is to propose a new technique to measure the uncertainty of the risk factors based on a multidimensional data model and data mining techniques as a deterministic approach. The proposed technique consists of a basic framework which includes four modules: user, technology, end-user access tools and applications. The property investment risk analysis is defined as a micro-level analysis, as the features of the property are considered in the analysis in this paper.
Keywords: Uncertainty factors, data mining, multidimensional data model, risk analysis.
7414 Computational Aspects of Regression Analysis of Interval Data
Authors: Michal Cerny
Abstract:
We consider linear regression models where both the input data (the values of the independent variables) and the output data (the observations of the dependent variable) are interval-censored. We introduce a possibilistic generalization of the least squares estimator, the so-called OLS-set for the interval model. This set captures the impact on the OLS estimator of the loss of information caused by interval censoring, and provides a tool for quantifying this effect. We study complexity-theoretic properties of the OLS-set. We also deal with restricted versions of the general interval linear regression model, in particular the crisp input – interval output model. We give an argument that natural descriptions of the OLS-set in the crisp input – interval output model cannot be computed in polynomial time. We then derive easily computable approximations for the OLS-set which can be used instead of the exact description. We illustrate the approach with an example.
Keywords: Linear regression, interval-censored data, computational complexity.
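The following Python sketch gives a hedged illustration of the OLS-set idea for the crisp input / interval output model: since the outputs are only known to lie in intervals, every admissible output vector yields a different OLS estimate, and sampling the box gives a crude inner approximation of the OLS-set. The paper derives proper approximations; this is only a sketch with invented data.

# Sample admissible output vectors and collect the resulting OLS estimates.
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(5), np.arange(5.0)])      # crisp design matrix
y_lo = np.array([0.8, 1.7, 2.9, 3.6, 4.9])             # interval-censored outputs
y_hi = np.array([1.2, 2.3, 3.1, 4.4, 5.1])

estimates = []
for _ in range(2000):
    y = rng.uniform(y_lo, y_hi)                        # pick one admissible output vector
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # ordinary least squares
    estimates.append(beta)

estimates = np.array(estimates)
print("sampled OLS-set bounding box per coefficient:")
print(np.c_[estimates.min(axis=0), estimates.max(axis=0)])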
7413 Time-Derivative Estimation of Noisy Movie Data using Adaptive Control Theory
Authors: Soon-Hyun Park, Takami Matsuo
Abstract:
This paper presents an adaptive differentiator of sequential data based on adaptive control theory. The algorithm is applied to detect moving objects by estimating the temporal gradient of sequential data at a specified pixel. We adopt two nonlinear intensity functions to reduce the influence of noise. The derivatives of the nonlinear intensity functions are estimated by an adaptive observer with a σ-modification update law.
Keywords: Adaptive estimation, parameter adjustment law, motion detection, temporal gradient, differential filter.
7412 A Test to Express Diagnostic Cohesion of Football Team
Authors: Alexandra O. Savinkina
Abstract:
We propose to assess the cohesion of a football team by its subject-goal and subject-value unity, following A.V. Petrovsky's theory. Goal unity was measured by the degree of agreement on priority targets among the players in a team. Value unity was estimated by the coincidence of ideas about a perfect football player. On the basis of a provisional diagnosis of six teams, we compiled the lists of goals and values. The tests were piloted on 35 football teams. The results allowed us not only to compare the cohesion of the different teams quantitatively, but also to identify subgroups within a team.
Keywords: Cohesion, football, psychodiagnostic, soccer, sports team, value-orientation unity.
7411 Transformation of the Business Model in an Occupational Health Care Company Embedded in an Emerging Personal Data Ecosystem: A Case Study in Finland
Authors: Tero Huhtala, Minna Pikkarainen, Saila Saraniemi
Abstract:
Information technology has long been used as an enabler of the exchange of goods and services. Services are evolving from generic to personalized, and the reverse use of customer data has been discussed in both academia and industry for the past few years. This article presents the results of an empirical case study in the area of preventive health care services. The primary data were gathered in workshops in which future personal data-based services were conceptualized by analyzing future scenarios from a business perspective. The aim of this study is to understand business model transformation in emerging personal data ecosystems. The work was done as a case study in the context of occupational healthcare. The results have implications for theory and practice, indicating that adopting personal data management principles requires a transformation of the business model which, if successfully managed, may provide access to more resources, the potential to offer better value, and additional customer channels. These advantages correlate with the broadening of the business ecosystem. Expanding the scope of this study to include more actors would improve the validity of the research. The results draw from existing literature and are based on findings from a case study and the economic properties of the healthcare industry in Finland.
Keywords: Ecosystem, business model, personal data, preventive healthcare.
7410 Selective Mutation for Genetic Algorithms
Authors: Sung Hoon Jung
Abstract:
In this paper, we propose a selective mutation method for improving the performance of genetic algorithms. In selective mutation, individuals are first ranked, and then one additional bit is mutated in a part of each individual's string, where the part is selected according to the individual's rank. This selective mutation helps genetic algorithms approach the global optimum quickly and escape local optima quickly, which increases the performance of genetic algorithms. We measured the effects of selective mutation on four function optimization problems. Extensive experiments showed that selective mutation can significantly enhance the performance of genetic algorithms.
Keywords: Genetic algorithm, selective mutation, function optimization.
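An illustrative Python sketch of the selective-mutation idea follows, under the assumption of binary strings and a rank-based choice of the segment in which the extra bit is flipped; the exact rank-to-segment mapping used by the author may differ.

# Toy selective mutation: rank individuals, flip one extra bit in a rank-chosen segment.
import random

random.seed(0)

def fitness(bits):                    # toy objective: maximize the number of ones
    return sum(bits)

def selective_mutation(population):
    # Rank individuals from worst to best, then flip one extra bit in a segment
    # chosen by rank (assumed mapping: worse individuals mutate earlier segments).
    ranked = sorted(population, key=fitness)
    n_bits = len(ranked[0])
    n_segments = 4
    seg_len = n_bits // n_segments
    mutated = []
    for rank, ind in enumerate(ranked):
        segment = min(rank * n_segments // len(ranked), n_segments - 1)
        pos = segment * seg_len + random.randrange(seg_len)
        child = ind[:]
        child[pos] ^= 1               # flip exactly one bit in the selected segment
        mutated.append(child)
    return mutated

pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(8)]
pop = selective_mutation(pop)
print(max(fitness(ind) for ind in pop))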
7409 An Application of the Data Mining Methods with Decision Rule
Authors: Xun Ge, Jianhua Gong
Abstract:
Rankings of the output of China's main agricultural commodities in the world for 1978, 1980, 1990, 2000, 2006, 2007 and 2008 have been released in the United Nations FAO Database. Unfortunately, the ranking of the output of Chinese cotton lint in the world for 2008 is missing. This paper uses sequential data mining methods with decision rules to fill this gap. This new data mining method will help to further improve the United Nations FAO Database.
Keywords: Ranking, output of the main agricultural commodity, gross domestic product, decision table, information system, data mining, decision rule
7408 LiDAR Based Real Time Multiple Vehicle Detection and Tracking
Authors: Zhongzhen Luo, Saeid Habibi, Martin v. Mohrenschildt
Abstract:
Self-driving vehicles require a high level of situational awareness in order to maneuver safely when driving in real-world conditions. This paper presents a LiDAR-based real-time perception system that is able to process raw sensor data for multiple-target detection and tracking in a dynamic environment. The proposed algorithm is nonparametric and deterministic; that is, no assumptions or prior knowledge about the input data and no initializations are required. Additionally, the proposed method works directly on the three-dimensional data generated by the LiDAR without sacrificing the rich information contained in the 3D domain. Moreover, a fast and efficient real-time clustering algorithm based on radially bounded nearest neighbors (RBNN) is applied. The Hungarian algorithm and adaptive Kalman filtering are used for data association and tracking. The proposed algorithm is able to run in real time with an average run time of 70 ms per frame.
Keywords: LiDAR, real-time system, clustering, tracking, data association.
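The clustering step named in this abstract can be sketched as follows in Python: radially bounded nearest-neighbor (RBNN) clustering on a toy 3D point cloud, merging points closer than a radius with a union-find structure. The radius and data are illustrative, not from the paper.

# RBNN-style clustering: points within the radius end up in the same cluster.
import numpy as np
from scipy.spatial import cKDTree

def rbnn_cluster(points: np.ndarray, radius: float) -> np.ndarray:
    parent = np.arange(len(points))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path compression
            i = parent[i]
        return i

    tree = cKDTree(points)
    for i, j in tree.query_pairs(radius):      # all point pairs within the radius
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj                    # union the two clusters
    return np.array([find(i) for i in range(len(points))])

rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(c, 0.2, (50, 3)) for c in ([0, 0, 0], [5, 0, 0])])
labels = rbnn_cluster(cloud, radius=0.8)
print("clusters found:", len(set(labels)))    # expected: 2 well-separated objects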
7407 Water End-Use Classification with Contemporaneous Water-Energy Data and Deep Learning Network
Authors: Khoi A. Nguyen, Rodney A. Stewart, Hong Zhang
Abstract:
‘Water-related energy’ is energy use which is directly or indirectly influenced by changes to water use. Informatics, applying a range of mathematical, statistical and rule-based approaches, can be used to reveal important information on demand from the available data provided at second, minute or hourly intervals. This study aims to combine these two concepts to improve on the current water end-use disaggregation problem by applying a wide range of advanced pattern recognition techniques to analyse the concurrent high-resolution water-energy consumption data. The obtained results show that the recognition accuracies of all end uses increased significantly, especially for mechanised categories, including the clothes washer, dishwasher and evaporative air cooler, where over 95% of events were correctly classified.
Keywords: Deep learning network, smart metering, water end use, water-energy data.
7406 Watermark Bit Rate in Diverse Signal Domains
Authors: Nedeljko Cvejic, Tapio Sepp
Abstract:
A study of the obtainable watermark data rate for information hiding algorithms is presented in this paper. As the perceptual entropy for wideband monophonic audio signals is in the range of four to five bits per sample, a significant amount of additional information can be inserted into the signal without causing any perceptual distortion. Experimental results showed that transform-domain watermark embedding considerably outperforms watermark embedding in the time domain, and that signal decompositions with a high transform coding gain, like the wavelet transform, are the most suitable for high-data-rate information hiding.
Keywords: Digital watermarking, information hiding, audio watermarking, watermark data rate.
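As a back-of-the-envelope illustration, a per-sample bit budget for hidden data translates into a watermark data rate once the audio sampling frequency is fixed; the Python lines below show the unit conversion with illustrative budget values (they are not figures from the paper).

# Convert a per-sample embedding budget into a data rate at 44.1 kHz.
sample_rate_hz = 44_100
for bits_per_sample in (1, 2, 4):
    rate_kbps = bits_per_sample * sample_rate_hz / 1000
    print(f"{bits_per_sample} bit(s)/sample -> {rate_kbps:.1f} kbit/s of hidden data")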
7405 Concurrent Access to Complex Entities
Authors: Cosmin Rablou
Abstract:
In this paper we present a way of controlling the concurrent access to data in a distributed application using the Pessimistic Offline Lock design pattern. In our case, the application processes a complex entity, which contains other entities (objects) in a hierarchical structure. It will be shown how the complex entity and the contained entities must be locked in order to control the concurrent access to data.
Keywords: Object-oriented programming, Pessimistic Lock, Design pattern, Concurrent access to data, Processing complex entities.
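A minimal Python sketch of a Pessimistic Offline Lock manager in the spirit of Fowler's pattern (not the paper's implementation) is shown below: a session must acquire exclusive locks on a complex entity and its children before editing them; the entity identifiers are hypothetical.

# Toy lock manager: lock a complex entity together with its contained entities.
import threading

class LockManager:
    def __init__(self):
        self._owners = {}                  # entity id -> session id
        self._guard = threading.Lock()

    def acquire(self, entity_ids, session):
        with self._guard:
            if any(self._owners.get(e, session) != session for e in entity_ids):
                return False               # some entity is locked by another session
            for e in entity_ids:
                self._owners[e] = session
            return True

    def release(self, session):
        with self._guard:
            self._owners = {e: s for e, s in self._owners.items() if s != session}

locks = LockManager()
order_with_children = ["order:42", "orderline:42/1", "orderline:42/2"]
assert locks.acquire(order_with_children, "session-A")
assert not locks.acquire(["orderline:42/1"], "session-B")   # blocked by session-A
locks.release("session-A")
assert locks.acquire(["orderline:42/1"], "session-B")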
7404 Sparse Coding Based Classification of Electrocardiography Signals Using Data-Driven Complete Dictionary Learning
Authors: Fuad Noman, Sh-Hussain Salleh, Chee-Ming Ting, Hadri Hussain, Syed Rasul
Abstract:
In this paper, a data-driven dictionary approach is proposed for the automatic detection and classification of cardiovascular abnormalities. The electrocardiography (ECG) signal is represented by trained complete dictionaries that contain prototypes, or atoms, to avoid the limitations of pre-defined dictionaries. The data-driven trained dictionaries simply take the ECG signal as input, rather than extracting features, to study the set of parameters that yield the most descriptive dictionary. The approach inherently learns the complicated morphological changes in the ECG waveform, which are then used to improve the classification. The classification performance was evaluated with ECG data under two different preprocessing environments. In the first category, the QT database is baseline-drift corrected and a notch filter removes the 60 Hz power line noise. In the second category, the data are further filtered using a fast moving-average smoother. The experimental results on the QT database confirm that our proposed algorithm achieves a classification accuracy of 92%.
Keywords: Electrocardiogram, dictionary learning, sparse coding, classification.
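A hedged Python sketch of the data-driven dictionary step (not the authors' system) follows: a dictionary is learned from raw signal segments and a segment is represented by its sparse code, which could then feed a classifier. The signals here are synthetic stand-ins, not ECG data.

# Learn a dictionary from raw segments, then compute sparse codes with OMP.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 64)
# Synthetic "beats": noisy copies of two prototype waveforms.
protos = [np.sin(2 * np.pi * 3 * t), np.exp(-((t - 0.5) ** 2) / 0.01)]
X = np.array([p + 0.05 * rng.normal(size=t.size) for p in protos for _ in range(30)])

dico = DictionaryLearning(n_components=8, transform_algorithm="omp",
                          transform_n_nonzero_coefs=3, random_state=0)
codes = dico.fit(X).transform(X)                      # sparse codes over the learned atoms
print("dictionary shape:", dico.components_.shape)    # (8 atoms, 64 samples)
print("max nonzeros per code:", int((codes != 0).sum(axis=1).max()))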
7403 A Remote Sensing Approach to Calculate Population Using Roads Network Data in Lebanon
Authors: Kamel Allaw, Jocelyne Adjizian Gerard, Makram Chehayeb, Nada Badaro Saliba
Abstract:
In developing countries such as Lebanon, demographic data are hardly available due to the absence of a mechanized population system. The aim of this study is to evaluate, using only remote sensing data, the correlations between the population count and the characteristics of the road network (length of primary roads, length of secondary roads, total length of roads, density and percentage of roads, and the number of intersections). In order to find the influence of the different factors on the demographic data, we studied the degree of correlation between each factor and the population count. The results of this study show a strong correlation between the population count and both the road density and the number of intersections.
Keywords: Population, road network, statistical correlations, remote sensing.
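A toy Python illustration of the correlation analysis described in this abstract is given below: Pearson correlations between population counts and road-network characteristics. The numbers are invented; only the method is shown.

# Pearson correlation between population and two road-network factors.
import numpy as np
from scipy.stats import pearsonr

population = np.array([1200, 5400, 8300, 15300, 22100, 30500])
road_density = np.array([0.8, 2.1, 3.0, 5.2, 7.4, 9.9])        # km of road per km^2
intersections = np.array([15, 60, 95, 170, 240, 335])

for name, x in [("road density", road_density), ("intersections", intersections)]:
    r, p = pearsonr(population, x)
    print(f"population vs {name}: r = {r:.3f}, p = {p:.3g}")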
7402 Risk-Management by Numerical Pattern Analysis in Data-Mining
Authors: M. Kargar, R. Mirmiran, F. Fartash, T. Saderi
Abstract:
In this paper, a new method is suggested for risk management using numerical patterns in data mining. These patterns are designed using probability rules in decision trees, and care is taken to ensure that they are valid, novel, useful and understandable. Considering a set of functions, the system reaches a good pattern or better objectives. The patterns are analyzed through the produced matrices and some results are pointed out. Using the suggested method, the direction of the functionality route in a system can be controlled and the best planning for specific objectives can be done.
Keywords: Analysis, Data-mining, Pattern, Risk Management.