Search results for: graph database
554 The First Integral Approach in Stability Problem of Large Scale Nonlinear Dynamical Systems
Authors: M. Kidouche, H. Habbi, M. Zelmat, S. Grouni
Abstract:
In analyzing large scale nonlinear dynamical systems, it is often desirable to treat the overall system as a collection of interconnected subsystems. Solution properties of the large scale system are then deduced from the solution properties of the individual subsystems and the nature of the interconnections. In this paper a new approach is proposed for the stability analysis of large scale systems, which is based upon the concept of vector Lyapunov functions and decomposition methods. The present results make use of graph theoretic decomposition techniques in which the overall system is partitioned into a hierarchy of strongly connected components. We then show that, under very reasonable assumptions, the overall system is stable once the strongly connected subsystems are stable. Finally, an example is given to illustrate the constructive methodology proposed.
Keywords: Comparison principle, First integral, Large scale system, Lyapunov stability.
553 Multi-threshold Approach for License Plate Recognition System
Authors: Siti Norul Huda Sheikh Abdullah, Farshid Pirahan Siah, Nor Hanisah Haji Zainal Abidin, Shahnorbanun Sahran
Abstract:
The objective of this paper is to propose an adaptive multi-threshold technique for image segmentation, specifically for object detection. Due to the different types of license plates in use, the requirements of an automatic LPR system differ for each country. The proposed technique is applied to a Malaysian LPR application. It is based on a Multi-Layer Perceptron trained by back-propagation. The proposed adaptive threshold is introduced to find the optimum threshold values. The technique relies on the peak value in the graph of the number of objects versus a specific range of threshold values. The proposed approach has improved the overall performance compared to current optimal threshold techniques. Further improvement of this method is in progress to accommodate real-time system specifications.
Keywords: Multi-threshold approach, license plate recognition system.
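The threshold-selection idea above, taking the peak of the object-count-versus-threshold curve, can be sketched in a few lines. The following Python fragment is a minimal illustration, not the authors' implementation: it counts connected components with scipy.ndimage at each candidate threshold and returns the threshold at the peak; the synthetic image and the threshold range are assumptions.

```python
import numpy as np
from scipy import ndimage

def object_count_curve(gray, thresholds):
    """Count connected components in the binarized image at each threshold."""
    counts = []
    for t in thresholds:
        binary = gray > t
        _, num_objects = ndimage.label(binary)  # label() returns (labels, count)
        counts.append(num_objects)
    return np.array(counts)

def peak_threshold(gray, thresholds):
    """Pick the threshold where the number of detected objects peaks."""
    counts = object_count_curve(gray, thresholds)
    return thresholds[int(np.argmax(counts))], counts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for a plate image: bright characters on a noisy background.
    image = rng.normal(loc=80, scale=5, size=(60, 180))
    image[20:40, 30:40] += 100   # a fake "character" blob
    image[20:40, 60:70] += 100
    best_t, curve = peak_threshold(image, thresholds=np.arange(100, 200, 5))
    print("threshold at object-count peak:", best_t)
```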
552 Combination of Different Classifiers for Cardiac Arrhythmia Recognition
Authors: M. R. Homaeinezhad, E. Tavakkoli, M. Habibi, S. A. Atyabi, A. Ghaffari
Abstract:
This paper describes a new supervised fusion (hybrid) electrocardiogram (ECG) classification solution consisting of a new QRS complex geometrical feature extraction as well as a new version of the learning vector quantization (LVQ) classification algorithm aimed at overcoming the stability-plasticity dilemma. Toward this objective, after detection and delineation of the major events of the ECG signal via an appropriate algorithm, each QRS region and its corresponding discrete wavelet transform (DWT) are treated as virtual images and each of them is divided into eight polar sectors. Then, the curve length of each excerpted segment is calculated and used as an element of the feature space. To increase the robustness of the proposed classification algorithm against noise, artifacts and arrhythmic outliers, a fusion structure consisting of five different classifiers, namely a Support Vector Machine (SVM), Modified Learning Vector Quantization (MLVQ) and three Multi-Layer Perceptron-Back Propagation (MLP-BP) neural networks with different topologies, was designed and implemented. The new proposed algorithm was applied to all 48 MIT-BIH Arrhythmia Database records (within-record analysis) and the discrimination power of the classifier in isolating the different beat types of each record was assessed; as a result, an average accuracy of Acc = 98.51% was obtained. The proposed method was also applied to six arrhythmia classes (Normal, LBBB, RBBB, PVC, APB, PB) belonging to 20 different records of the aforementioned database (between-record analysis) and an average accuracy of Acc = 95.6% was achieved. To evaluate the performance quality of the new proposed hybrid learning machine, the obtained results were compared with similar peer-reviewed studies in this area.
Keywords: Feature Extraction, Curve Length Method, Support Vector Machine, Learning Vector Quantization, Multi-Layer Perceptron, Fusion (Hybrid) Classification, Arrhythmia Classification, Supervised Learning Machine.
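The curve-length feature described above can be made concrete with a short sketch. The Python fragment below is only an illustration under assumptions (a toy QRS-like bump, a chosen virtual-image centre, eight sectors); the ECG delineation, DWT stage, and the actual classifiers are omitted.

```python
import numpy as np

def curve_length(x, y):
    """Length of the polyline through the points (x_i, y_i)."""
    return float(np.sum(np.hypot(np.diff(x), np.diff(y))))

def sector_curve_lengths(t, signal, center_t, center_v, n_sectors=8):
    """Split samples into polar sectors around a centre point and
    return the curve length of the segment falling in each sector."""
    angles = np.arctan2(signal - center_v, t - center_t)
    bins = np.floor((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int)
    bins = np.clip(bins, 0, n_sectors - 1)
    features = np.zeros(n_sectors)
    for s in range(n_sectors):
        idx = np.where(bins == s)[0]
        if idx.size > 1:
            features[s] = curve_length(t[idx], signal[idx])
    return features

if __name__ == "__main__":
    t = np.linspace(-0.1, 0.1, 200)                # 200 ms around a beat
    qrs = np.exp(-(t / 0.02) ** 2)                 # toy QRS-like bump
    print(sector_curve_lengths(t, qrs, center_t=0.0, center_v=0.5))
```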
551 The Current State of Human Gait Simulator Development
Authors: V. Musalimov, I. Stepanov, Y. Monahov, A. Safonov
Abstract:
This report examines the current state of human gait simulator development based on the human hip joint model. This unit will create a database of human gait types, useful for setting up and calibrating Mechano devices, as well as for creating new rehabilitation systems, exoskeletons and walking robots. The system offers many options for configuring dimensions and stiffness, while maintaining relative simplicity.
Keywords: Hip joint, human gait, physiotherapy, simulation.
550 Two-Stage Approach for Solving the Multi-Objective Optimization Problem on Combinatorial Configurations
Authors: Liudmyla Koliechkina, Olena Dvirna
Abstract:
The statement of the multi-objective optimization problem on combinatorial configurations is formulated, and an approach to its solution is proposed. The problem is of interest as a combinatorial optimization problem with many criteria that models many applied tasks. The approach to solving the multi-objective optimization problem on combinatorial configurations consists of two stages: the first reduces the multi-objective problem to a single criterion based on existing multi-objective optimization methods, and the second solves the resulting single-criterion combinatorial optimization problem directly by the horizontal combinatorial method. This approach provides the optimal solution to the multi-objective optimization problem on combinatorial configurations, taking into account additional restrictions, in a finite number of steps.
Keywords: Discrete set, linear combinatorial optimization, multi-objective optimization, multipermutation, Pareto solutions, partial permutation set, permutation, structural graph.
549 Use XML Format like a Model of Data Backup
Authors: Souleymane Oumtanaga, Kadjo Tanon Lambert, Koné Tiémoman, Tety Pierre, Dowa N’sreke Florent
Abstract:
Nowadays, new data backup formats keep appearing, raising concerns about their accessibility and longevity. XML is one of the most promising formats for guaranteeing the integrity of data. This article illustrates one of the things that can be done with XML. Indeed, XML will help to create a data backup model. The main task consists in developing a Java application able to convert the information of a database into XML format and restore it later.
Keywords: Backup, Proprietary format, parser, syntactic tree.
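The backup task described above (dump database rows to XML, restore them later) can be sketched briefly. The authors implement it in Java; the fragment below is a hedged Python stand-in using sqlite3 and xml.etree, with a hypothetical contacts table, purely to illustrate the round trip.

```python
import sqlite3
import xml.etree.ElementTree as ET

def backup_table_to_xml(conn, table, path):
    """Dump every row of a table into a simple <row><col>value</col></row> layout."""
    cur = conn.execute(f"SELECT * FROM {table}")
    columns = [d[0] for d in cur.description]
    root = ET.Element("backup", attrib={"table": table})
    for row in cur:
        row_el = ET.SubElement(root, "row")
        for col, value in zip(columns, row):
            ET.SubElement(row_el, col).text = "" if value is None else str(value)
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

def restore_table_from_xml(conn, path):
    """Parse the XML backup (syntactic tree) and reinsert the rows."""
    root = ET.parse(path).getroot()
    table = root.get("table")
    for row_el in root.findall("row"):
        cols = [el.tag for el in row_el]
        values = [el.text for el in row_el]
        placeholders = ", ".join("?" for _ in cols)
        conn.execute(f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({placeholders})", values)
    conn.commit()

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE contacts (name TEXT, email TEXT)")
    db.execute("INSERT INTO contacts VALUES ('Ada', 'ada@example.org')")
    backup_table_to_xml(db, "contacts", "contacts_backup.xml")
    db.execute("DELETE FROM contacts")
    restore_table_from_xml(db, "contacts_backup.xml")
    print(db.execute("SELECT * FROM contacts").fetchall())
```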
548 Theoretical Isotope Generator: An Alternative towards Isotope Pattern Calculator
Authors: K. Massila, R. D. Stein, S. M. Suhaizan, A. A. Azlianor
Abstract:
A number of mass spectrometry applications are already available as web-based and Windows-based systems to calculate the isotope pattern and display the mass spectrum for a specific molecular formula, besides providing other necessary information. These applications were evaluated and compared with our new alternative application, called Theoretical Isotope Generator (TIG), in terms of functionality and features, to show that the new application works better and performs well. TIG provides more features than the others, including functionality such as drawing, normalizing and zooming the generated graph, and conveys the molecular information in a number of formats by providing the details of the calculation and the molecules. Thus, chemists, students, lecturers and researchers anywhere could use TIG to obtain information on molecules and their relative intensities.
Keywords: Isotope pattern calculator, mass number, mass spectrum, relative intensity.
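A common way to compute an isotope pattern, repeatedly convolving the isotope distributions of the constituent elements, is sketched below. This is an illustrative assumption about the general technique, not TIG's actual data or algorithm; the small isotope table and the pruning threshold are made up for the example.

```python
from collections import defaultdict

# Minimal illustrative isotope table: (nominal mass number, natural abundance).
ISOTOPES = {
    "C": [(12, 0.9893), (13, 0.0107)],
    "H": [(1, 0.999885), (2, 0.000115)],
    "O": [(16, 0.99757), (17, 0.00038), (18, 0.00205)],
}

def convolve(dist_a, dist_b, prune=1e-8):
    """Combine two {mass: probability} distributions."""
    out = defaultdict(float)
    for m1, p1 in dist_a.items():
        for m2, p2 in dist_b.items():
            out[m1 + m2] += p1 * p2
    return {m: p for m, p in out.items() if p > prune}

def isotope_pattern(formula_counts):
    """formula_counts: e.g. {'C': 6, 'H': 12, 'O': 6} for glucose."""
    pattern = {0: 1.0}
    for element, count in formula_counts.items():
        element_dist = dict(ISOTOPES[element])
        for _ in range(count):
            pattern = convolve(pattern, element_dist)
    # Normalize to relative intensity with the base peak at 100.
    base = max(pattern.values())
    return sorted((m, 100.0 * p / base) for m, p in pattern.items())

if __name__ == "__main__":
    for mass, rel_int in isotope_pattern({"C": 6, "H": 12, "O": 6}):
        print(f"m = {mass}  relative intensity = {rel_int:.3f}")
```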
547 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing
Authors: Yehjune Heo
Abstract:
As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the types of loss functions and optimizers. The types of CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, which include Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. By using a subset of the LivDet 2017 database, we validate our approach and compare generalization power. It is important to note that the same subset of LivDet is used across all training and testing for each model. This way, we can compare the performance, in terms of generalization, on unseen data across all the different models. The best CNN (AlexNet) with the appropriate loss function and optimizer results in more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports each model's accuracy together with its parameter count and mean average error rate, to find the model that consumes the least memory and computation time for training and testing. Although AlexNet is less complex than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to our final model.
Keywords: Anti-spoofing, CNN, fingerprint recognition, loss function, optimizer.
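The experimental grid described above, training the same CNN under different loss-function and optimizer combinations, has a simple shape that the following PyTorch sketch illustrates. The tiny network, random stand-in tensors, and the reduced set of losses (cross-entropy plus a hinge-style multi-margin loss) are assumptions; they are not the authors' models, data, or hyperparameters.

```python
import torch
import torch.nn as nn

def tiny_cnn(num_classes=2):
    """A deliberately small stand-in for AlexNet/VGGNet/ResNet."""
    return nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(8 * 16 * 16, num_classes),
    )

def train_once(model, loss_fn, optimizer, x, y, epochs=3):
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    return float(loss)

if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(32, 1, 32, 32)          # fake "fingerprint patches"
    y = torch.randint(0, 2, (32,))          # live (0) vs spoof (1) labels

    losses = {"cross_entropy": nn.CrossEntropyLoss(),
              "multi_margin": nn.MultiMarginLoss()}   # hinge-style stand-in
    optimizers = {"adam": torch.optim.Adam, "sgd": torch.optim.SGD,
                  "rmsprop": torch.optim.RMSprop}

    for loss_name, loss_fn in losses.items():
        for opt_name, opt_cls in optimizers.items():
            model = tiny_cnn()
            opt = opt_cls(model.parameters(), lr=1e-3)
            final = train_once(model, loss_fn, opt, x, y)
            print(f"{loss_name:13s} + {opt_name:7s} -> final loss {final:.4f}")
```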
546 An Improved Method to Compute Sparse Graphs for Traveling Salesman Problem
Authors: Y. Wang
Abstract:
The traveling salesman problem (TSP) is an NP-hard problem in combinatorial optimization. Research shows that algorithms for the TSP on sparse graphs have shorter computation times than those working on complete graphs. We present an improved iterative algorithm that computes sparse graphs for the TSP from frequency graphs computed with frequency quadrilaterals. The iterative algorithm is enhanced by adjusting two of its parameters. The computation time of the algorithm is O(C·Nmax·n²), where C is the number of iterations, Nmax is the maximum number of frequency quadrilaterals containing each edge, and n is the scale of the TSP. The experimental results show that the computed sparse graphs generally have fewer than 5n edges for most of these Euclidean instances. Moreover, the maximum and minimum vertex degrees in the sparse graphs do not differ much. Thus, the computation time of methods that solve the TSP on these sparse graphs is greatly reduced.
Keywords: Frequency quadrilateral, iterative algorithm, sparse graph, traveling salesman problem.
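The pruning step implied above can be sketched once edge scores are available. The fragment below is only an illustration: the paper's frequency-quadrilateral computation is omitted and a distance-based stand-in score is used instead, so the code shows the shape of keeping a few best edges per vertex to obtain roughly 5n edges, not the actual method.

```python
import itertools
import numpy as np

def prune_to_sparse_graph(points, edge_score, edges_per_vertex=5):
    """Keep, for every vertex, its `edges_per_vertex` best edges by score.

    `edge_score(i, j)` plays the role of the frequency computed from
    frequency quadrilaterals; here it can be any stand-in function.
    """
    n = len(points)
    kept = set()
    for i in range(n):
        scored = sorted((edge_score(i, j), j) for j in range(n) if j != i)
        for _, j in scored[-edges_per_vertex:]:      # best-scoring neighbours
            kept.add((min(i, j), max(i, j)))
    return kept

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = rng.random((40, 2))                        # random Euclidean instance
    # Stand-in score: closer vertices get higher scores (not the paper's frequencies).
    score = lambda i, j: -np.linalg.norm(pts[i] - pts[j])
    sparse = prune_to_sparse_graph(pts, score)
    complete_edges = len(list(itertools.combinations(range(len(pts)), 2)))
    print(f"{len(sparse)} edges kept for n = {len(pts)} (complete graph has {complete_edges})")
```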
545 The Vertex and Edge Irregular Total Labeling of an Amalgamation of Two Isomorphic Cycles
Authors: Nurdin
Abstract:
Suppose G(V, E) is a graph. A function f : V ∪ E → {1, 2, 3, ..., k} is called a total edge (vertex) irregular k-labelling of G if any two different edges (vertices) have distinct weights. The total edge (vertex) irregularity strength of G, denoted by tes(G) (tvs(G)), is the smallest positive integer k such that G has a total edge (vertex) irregular k-labelling. In this paper, we determine the total edge (vertex) irregularity strength of an amalgamation of two isomorphic cycles. The total edge irregularity strength and the total vertex irregularity strength of two isomorphic cycles on n vertices are ⌈(2n+2)/3⌉ and ⌈2n/3⌉ for n ≥ 3, respectively.
Keywords: Amalgamation of graphs, irregular labelling, irregularity strength.
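Since the quoted values are closed-form ceiling expressions, a two-line script suffices to tabulate them; the sketch below simply evaluates tes = ⌈(2n+2)/3⌉ and tvs = ⌈2n/3⌉ for small n, as stated in the abstract.

```python
import math

def tes(n: int) -> int:
    """Total edge irregularity strength from the abstract, valid for n >= 3."""
    return math.ceil((2 * n + 2) / 3)

def tvs(n: int) -> int:
    """Total vertex irregularity strength from the abstract, valid for n >= 3."""
    return math.ceil(2 * n / 3)

if __name__ == "__main__":
    for n in range(3, 11):
        print(f"n = {n:2d}   tes = {tes(n)}   tvs = {tvs(n)}")
```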
544 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite
Authors: F. Lazzeri, I. Reiter
Abstract:
Energy production optimization has traditionally been very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there are a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R and on web services built and deployed with different components of Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that enables users to easily build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools to analyze data and share insights. Our results show that Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful to predict hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R and different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile and ARIMA) will be presented, and results and performance metrics discussed.
Keywords: Time series, feature engineering methods for forecasting, energy demand forecasting, Azure Machine Learning.
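The forecasting pipeline above pairs hourly consumption with weather features and boosted trees. As a rough stand-in (the paper works in R on Azure Machine Learning), the sketch below fits scikit-learn's GradientBoostingRegressor to synthetic hourly data with temperature, humidity, wind, and dew-point columns; the data generator and feature set are assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
hours = np.arange(24 * 90)                          # ~3 months of hourly data

# Synthetic weather and load with a daily cycle plus temperature sensitivity.
temperature = 15 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)
humidity = 60 + rng.normal(0, 5, hours.size)
wind = np.abs(rng.normal(3, 1, hours.size))
dew_point = temperature - (100 - humidity) / 5
load = 50 + 0.8 * temperature + 5 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

X = np.column_stack([hours % 24, temperature, humidity, wind, dew_point])
split = int(0.8 * len(hours))                       # hold out the last weeks

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X[:split], load[:split])
pred = model.predict(X[split:])
print("MAE on held-out hours:", round(mean_absolute_error(load[split:], pred), 3))
```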
543 Scattering Operator and Spectral Clustering for Ultrasound Images: Application on Deep Venous Thrombi
Authors: Thibaud Berthomier, Ali Mansour, Luc Bressollette, Frédéric Le Roy, Dominique Mottier, Léo Fréchier, Barthélémy Hermenault
Abstract:
Deep Venous Thrombosis (DVT) occurs when a thrombus is formed within a deep vein (most often in the legs). This disease can be deadly if a part or the whole of the thrombus reaches the lung and causes a Pulmonary Embolism (PE). This disorder, often asymptomatic, has multifactorial causes: immobilization, surgery, pregnancy, age, cancers, and genetic variations. Our project aims to relate the thrombus epidemiology (origins, patient predispositions, PE) to its structure using ultrasound images. Ultrasonography and elastography data were collected using a Toshiba Aplio 500 scanner at Brest Hospital. This manuscript compares two classification approaches: spectral clustering and the scattering operator. The former is based on graph and matrix theory, while the latter cascades wavelet convolutions with nonlinear modulus and averaging operators.
Keywords: Deep venous thrombosis, ultrasonography, elastography, scattering operator, wavelet, spectral clustering.
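Of the two approaches compared above, the spectral-clustering side is easy to sketch on synthetic feature vectors; scikit-learn's SpectralClustering, used below, builds an affinity graph over the samples and partitions it via the graph Laplacian. The fabricated descriptors are a stand-in for the image-derived features used in the paper.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(7)
# Two synthetic groups of "thrombus descriptors" (stand-ins for image features).
group_a = rng.normal(loc=0.0, scale=0.3, size=(30, 5))
group_b = rng.normal(loc=2.0, scale=0.3, size=(30, 5))
features = np.vstack([group_a, group_b])

# Spectral clustering: affinity graph + eigenvectors of the graph Laplacian.
labels = SpectralClustering(n_clusters=2, affinity="rbf", random_state=0).fit_predict(features)
print("cluster sizes:", np.bincount(labels))
```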
542 Evaluation of Risk Attributes Driven by Periodically Changing System Functionality
Authors: Dariusz Dymek, Leszek Kotulski
Abstract:
Modeling of distributed systems allows us to represent their whole functionality. A working system instance rarely fulfils the whole functionality represented by the model; usually, some parts of this functionality need to be accessible only periodically. A reporting system based on the data warehouse concept seems to be an intuitive example of a system in which some functionality is required only from time to time. Analyzing the enterprise risk associated with periodical changes of system functionality, we should consider not only the inaccessibility of components (objects) but also of their functions (methods), and the impact of such a situation on system functionality from the business point of view. In this paper, we suggest that risk attributes should be estimated from the risk attributes specified at the requirements level (use cases in the UML model), based on information about the structure of the model (presented at other levels of the UML model). We argue that it is desirable to consider the influence of periodical changes in requirements on enterprise risk estimation. Finally, a proposal of such a solution based on the UML system model is presented.
Keywords: Risk assessment, software maintenance, UML, graph grammars.
541 A Mark-Up Approach to Add Value
Authors: Ivaylo I. Atanasov, Evelina N. Pencheva
Abstract:
This paper presents a mark-up approach to service creation in Next Generation Networks. The approach allows deriving added value from network functions exposed by Parlay/OSA (Open Service Access) interfaces. With OSA interfaces, service logic scripts may be executed on both call-related and call-unrelated events. To illustrate the approach, XML-based language constructs for data and method definitions, flow control, time measurement and supervision, and database access are given, and an example OSA application is considered.
Keywords: Service creation, mark-up approach.
540 A Comparison of Image Data Representations for Local Stereo Matching
Authors: André Smith, Amr Abdel-Dayem
Abstract:
The stereo matching problem, while having been present for several decades, continues to be an active area of research. The goal of this research is to find correspondences between elements found in a set of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advancements in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While, at its core, the cost is based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine how effective these data representations are at reducing the cost for the correct correspondence relative to other possible matches.
Keywords: Colour data, local stereo matching, stereo correspondence, disparity map.
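A concrete example of such a cost is the window sum of absolute differences (SAD) used in the sketch below; swapping in another representation (RGB, gradients, census, and so on) would only change the arrays handed to the cost. The synthetic image pair and window size are assumptions for illustration.

```python
import numpy as np

def sad_disparity(left, right, max_disp=8, half_win=2):
    """Winner-takes-all disparity map from a window SAD cost (grayscale images)."""
    h, w = left.shape
    disparity = np.zeros((h, w), dtype=int)
    for y in range(half_win, h - half_win):
        for x in range(half_win + max_disp, w - half_win):
            patch_l = left[y - half_win:y + half_win + 1, x - half_win:x + half_win + 1]
            costs = []
            for d in range(max_disp + 1):
                patch_r = right[y - half_win:y + half_win + 1,
                                x - d - half_win:x - d + half_win + 1]
                costs.append(np.abs(patch_l - patch_r).sum())   # the SAD cost
            disparity[y, x] = int(np.argmin(costs))
    return disparity

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    right = rng.random((40, 60))
    left = np.roll(right, 4, axis=1)          # left image shifted by a known disparity
    d = sad_disparity(left, right)
    print("median estimated disparity:", int(np.median(d[5:-5, 15:-5])))
```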
539 Understanding Health Behavior Using Social Network Analysis
Authors: Namrata Mishra
Abstract:
The health of a person plays a vital role in the collective health of his community and hence the well-being of the society as a whole. But in today's fast-paced, technology-driven world, health issues are increasingly being associated with human behaviors, that is, with lifestyle. Social networks have a tremendous impact on the health behavior of individuals. Many researchers have used social network analysis to understand human behavior as it implicates their social and economic environments. It would be interesting to use a similar analysis to understand human behaviors that have health implications. This paper focuses on concepts of behavioral analyses that have health implications using social network analysis and provides possible algorithmic approaches. The results of these approaches can be used by governing authorities to roll out health plans and benefits and to take preventive measures, while pharmaceutical companies can target specific markets and health insurance companies can better model their insurance plans.
Keywords: Health behaviors, social network analysis, directed graph, breadth first search.
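The keywords above point to a directed graph traversed by breadth-first search. The sketch below is a generic BFS over a small hypothetical 'influences' graph, showing how the set of people reachable from one individual, with hop counts, might be enumerated; the graph and its interpretation are illustrative only.

```python
from collections import deque

# Hypothetical directed "influences the health behavior of" graph.
influences = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": ["dave", "erin"],
    "dave": [],
    "erin": ["frank"],
    "frank": [],
}

def reachable_by_bfs(graph, start):
    """Breadth-first search: everyone `start` can reach, with hop counts."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        person = queue.popleft()
        for neighbour in graph.get(person, []):
            if neighbour not in seen:
                seen[neighbour] = seen[person] + 1
                queue.append(neighbour)
    return seen

if __name__ == "__main__":
    for person, hops in reachable_by_bfs(influences, "alice").items():
        print(f"{person}: {hops} hop(s) from alice")
```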
538 A Comparative Study of Page Ranking Algorithms for Information Retrieval
Authors: Ashutosh Kumar Singh, Ravi Kumar P
Abstract:
This paper gives an introduction to Web mining, then describes Web structure mining in detail, and explores the data structure used by the Web. It also explores different page ranking algorithms and compares those algorithms as used for information retrieval. The basics of Web mining and the Web mining categories are explained. Different PageRank-based algorithms, namely PageRank (PR), WPR (Weighted PageRank), HITS (Hyperlink-Induced Topic Search), DistanceRank and DirichletRank, are discussed and compared. Page ranks are calculated for the PageRank and Weighted PageRank algorithms for a given hyperlink structure. A simulation program is developed for the PageRank algorithm because PageRank is the only ranking algorithm implemented in the search engine (Google). The outputs are shown in table and chart format.
Keywords: Web Mining, Web Structure, Web Graph, Link Analysis, PageRank, Weighted PageRank, HITS, DistanceRank, DirichletRank.
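The PageRank calculation mentioned above is commonly done by power iteration, as in the sketch below. The four-page link graph and the damping factor of 0.85 are illustrative assumptions, not the example used in the paper.

```python
import numpy as np

def pagerank(adjacency, damping=0.85, tol=1e-10, max_iter=200):
    """Power-iteration PageRank. `adjacency[i][j] = 1` if page i links to page j."""
    A = np.asarray(adjacency, dtype=float)
    n = A.shape[0]
    out_degree = A.sum(axis=1)
    # Column-stochastic transition matrix; dangling pages link to everyone.
    M = np.where(out_degree[:, None] > 0, A / np.maximum(out_degree[:, None], 1), 1.0 / n).T
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = (1 - damping) / n + damping * M @ rank
        if np.abs(new_rank - rank).sum() < tol:
            break
        rank = new_rank
    return rank

if __name__ == "__main__":
    # Hypothetical 4-page web: 0 -> 1,2 ; 1 -> 2 ; 2 -> 0 ; 3 -> 2
    links = [[0, 1, 1, 0],
             [0, 0, 1, 0],
             [1, 0, 0, 0],
             [0, 0, 1, 0]]
    print(np.round(pagerank(links), 4))
```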
537 A Visualized Framework for Representing Uncertain and Incomplete Temporal Knowledge
Authors: Yue Wang, Jixin Ma, Brian Knight
Abstract:
This paper presents a visualized computer-aided case tool for non-experts, called Visual Time, for representing and reasoning about incomplete and uncertain temporal information. It is both expressive and versatile, allowing logical conjunctions and disjunctions of both absolute and relative temporal relations, such as "Before", "Meets", "Overlaps", "Starts", "During", and "Finishes". In terms of a visualized framework, Visual Time provides a user-friendly environment for describing scenarios with rich temporal structure in natural language, which can be formatted as structured temporal phrases and modeled in terms of Temporal Relationship Diagrams (TRD). A TRD can be automatically and visually transformed into a corresponding Time Graph, supported by an automatic consistency checker that derives a verdict confirming whether a given scenario is temporally consistent or inconsistent.
Keywords: Time Visualization, Uncertainty, Incompleteness, Consistency Checking.
536 A Conceptual Query-Driven Design Framework for Data Warehouse
Authors: Resmi Nair, Campbell Wilson, Bala Srinivasan
Abstract:
A data warehouse is a dedicated database used for querying and reporting. Queries in this environment show special characteristics such as multidimensionality and aggregation. Exploiting the nature of these queries, in this paper we propose a query-driven design framework. The proposed framework is general and allows a designer to generate a schema based on a set of queries.
Keywords: Conceptual schema, data warehouse, queries, requirements.
535 Applications of Genetic Programming in Data Mining
Authors: Saleh Mesbah Elkaffas, Ahmed A. Toony
Abstract:
This paper details the application of a genetic programming framework for induction of useful classification rules from a database of income statements, balance sheets, and cash flow statements for North American public companies. Potentially interesting classification rules are discovered. Anomalies in the discovery process merit further investigation of the application of genetic programming to the dataset for the problem domain.
Keywords: Genetic programming, data mining classification rule.
534 Combining Similarity and Dissimilarity Measurements for the Development of QSAR Models Applied to the Prediction of Antiobesity Activity of Drugs
Authors: Irene Luque Ruiz, Manuel Urbano Cuadrado, Miguel Ángel Gómez-Nieto
Abstract:
In this paper we study different similarity-based approaches for the development of QSAR models devoted to the prediction of the activity of antiobesity drugs. Classical similarity approaches are compared with dissimilarity models based on the calculation of Euclidean distances between the nonisomorphic fragments extracted in the matching process. Combining the classical similarity and dissimilarity approaches into a new similarity measure, Approximate Similarity, was also studied, and better results were obtained. The application of the proposed method to the development of quantitative structure-activity relationships (QSAR) has provided reliable tools for predicting the inhibitory activity of drugs. Acceptable results were obtained for the models presented here.
Keywords: Graph similarity, nonisomorphic dissimilarity, approximate similarity, drug activity prediction.
533 A Convolutional Neural Network-Based Vehicle Theft Detection, Location, and Reporting System
Authors: Michael Moeti, Khuliso Sigama, Thapelo Samuel Matlala
Abstract:
One of the principal challenges that the world is confronted with is insecurity. The crime rate is increasing exponentially, and protecting our physical assets, especially in the motorist sector, is becoming impossible through our own strength alone. The need to develop technological solutions that detect and report theft without any human interference is inevitable. This is critical, especially for vehicle owners, to ensure theft detection and speedy identification towards recovery efforts in cases where a vehicle is missing or an attempted theft is taking place. The vehicle theft detection system uses a Convolutional Neural Network (CNN) to recognize the driver's face captured using an installed mobile phone device. The location identification function uses a Global Positioning System (GPS) to determine the real-time location of the vehicle. Upon identification of the location, Global System for Mobile Communications (GSM) technology is used to report or notify the vehicle owner about the whereabouts of the vehicle. The installed mobile app was implemented using Python, as it is undoubtedly the best choice for machine learning: it allows easy access to machine learning algorithms through its widely developed library ecosystem. The graphical user interface was developed using Java, as it is better suited for mobile development. Google's online database (Firebase) was used as the means of storage for the application. The system integration test was performed using a simple percentage analysis. Sixty vehicle owners participated in this study as a sample, and questionnaires were used in order to establish the acceptability of the system developed. The results indicate the efficiency of the proposed system; consequently, the paper proposes that the system can effectively monitor a vehicle at any given place, even if it is driven outside its normal jurisdiction. Moreover, the system can be used as a database to detect, locate and report missing vehicles to different security agencies.
Keywords: Convolutional Neural Network, CNN, location identification, tracking, GPS, GSM.
532 Stroke Extraction and Approximation with Interpolating Lagrange Curves
Authors: Bence Kővári, Zsolt Kertész
Abstract:
This paper proposes a stroke extraction method for use in off-line signature verification. After giving a brief overview of current ongoing research, an algorithm is introduced for detecting and following strokes in static images of signatures. Problems such as the handling of junctions and variations in line width and line intensity are discussed in detail. Results are validated both by using an existing on-line signature database and by employing image registration methods.
Keywords: Stroke extraction, spline fitting, off-line signature verification, image registration.
531 Wavelet Compression of ECG Signals Using SPIHT Algorithm
Authors: Mohammad Pooyan, Ali Taheri, Morteza Moazami-Goudarzi, Iman Saboori
Abstract:
In this paper we present a novel approach for wavelet compression of electrocardiogram (ECG) signals based on the set partitioning in hierarchical trees (SPIHT) coding algorithm. The SPIHT algorithm has achieved prominent success in image compression. Here we use a modified version of SPIHT for one-dimensional signals. We applied the wavelet transform with the SPIHT coding algorithm to different records of the MIT-BIH database. The results show the high efficiency of this method in ECG compression.
Keywords: ECG compression, wavelet, SPIHT.
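SPIHT itself is too long for a short example, so the sketch below shows only the surrounding wavelet stage with PyWavelets, plus naive coefficient thresholding as a stand-in for the coder, to make the energy-compaction idea concrete. The synthetic signal and the retained-coefficient fraction are assumptions.

```python
import numpy as np
import pywt

def wavelet_compress(signal, wavelet="db4", level=5, keep_fraction=0.05):
    """Decompose, zero all but the largest coefficients, reconstruct.

    The thresholding here is only a stand-in for the SPIHT bit-plane coder,
    which orders and transmits coefficients far more efficiently.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    flat = np.concatenate(coeffs)
    threshold = np.quantile(np.abs(flat), 1 - keep_fraction)
    pruned = [np.where(np.abs(c) >= threshold, c, 0.0) for c in coeffs]
    return pywt.waverec(pruned, wavelet)

if __name__ == "__main__":
    t = np.linspace(0, 2, 1024)
    ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 15 * t) ** 9
    rec = wavelet_compress(ecg_like)[: ecg_like.size]
    prd = 100 * np.linalg.norm(ecg_like - rec) / np.linalg.norm(ecg_like)
    print(f"percent RMS difference after keeping 5% of coefficients: {prd:.2f}%")
```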
530 Tree-on-DAG for Data Aggregation in Sensor Networks
Authors: Prakash G L, Thejaswini M, S H Manjula, K R Venugopal, L M Patnaik
Abstract:
Computing and maintaining network structures for efficient data aggregation incurs high overhead for dynamic events where the set of nodes sensing an event changes with time. Moreover, structured approaches are sensitive to the waiting time that nodes use to wait for packets from their children before forwarding the packet to the sink. An optimal routing and data aggregation scheme for wireless sensor networks is proposed in this paper. We propose Tree on DAG (ToD), a semi-structured approach that uses dynamic forwarding on an implicitly constructed structure composed of multiple shortest-path trees to support network scalability. The key principle behind ToD is that adjacent nodes in a graph will have low stretch in one of these trees, thus resulting in early aggregation of packets. Based on simulations of a 2,000-node Mica2-based network, we conclude that efficient aggregation in large-scale networks can be achieved by our semi-structured approach.
Keywords: Aggregation, Packet Merging, Query Processing.
529 Enhancing the Error-Correcting Performance of LDPC Codes through an Efficient Use of Decoding Iterations
Authors: Insah Bhurtah, P. Clarel Catherine, K. M. Sunjiv Soyjaudah
Abstract:
The decoding of Low-Density Parity-Check (LDPC) codes is performed over a redundant structure known as the bipartite graph, meaning that the full set of bit nodes is not absolutely necessary for decoder convergence. In 2008, Soyjaudah and Catherine designed a recovery algorithm for LDPC codes based on this assumption and showed that their codes outperformed conventional LDPC codes in error-correcting performance. In this work, the use of the recovery algorithm is further explored to test the performance of LDPC codes as the number of iterations is progressively increased. For experiments conducted with small block lengths of up to 800 bits and numbers of iterations of up to 2000, the results interestingly demonstrate that, contrary to conventional wisdom, the error-correcting performance keeps increasing with the number of iterations.
Keywords: Error-correcting codes, information theory, low-density parity-check codes, sum-product algorithm.
528 Does Bio-Demographic Diversity Influence Team Innovation through Participation Safety Climate and Team Reflexivity?
Authors: Maznah Abdullah, Mohammed Quaddus
Abstract:
Bio-demographic diversity, which refers to the age and gender of members in a team, has frequently been identified as influencing team innovation directly. As the theories expanded, bio-demographic diversity was suggested to influence team innovation via a psychosocial trait and an interaction process. This study examines those suggestions, in which the psychosocial trait and the interaction process were operationalized as 'participation safety climate' and 'team reflexivity' respectively. The role of team reflexivity as a mediator between participation safety climate and team innovation was also assessed. Due to the small number of teams involved in the study, data were analyzed using PLS-Graph. While the results show that only gender is significantly related to the participation safety climate, which in turn influences team reflexivity and team innovation, there is no statistical evidence that team reflexivity mediates the impact of participation safety climate on team innovation.
Keywords: Bio-demographic diversity, participation safety climate, team innovation, team reflexivity.
527 A Formatting Method for Transforming XML Data into HTML
Authors: Zhe JIN, Motomichi TOYAMA
Abstract:
In this paper, we propose a fixed formatting method for PPX (Pretty Printer for XML). PPX is a query language for XML databases with extensive formatting capability that produces HTML as the result of a query. The fixed formatting method completely specifies the combination of variables and layout specification operators within the layout expression of the GENERATE clause of PPX. In the experiment, a quick comparison shows that PPX requires far less description than XSLT or XQuery programs doing the same tasks.
Keywords: PPX, XML, HTML, XSLT, XQuery, fixed formatting method.
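The underlying task, pulling elements out of an XML document and emitting an HTML layout, can be mirrored in plain Python as a point of comparison. The sketch below is not PPX; the sample XML and the table layout are hypothetical, and the code is deliberately more verbose than the single declarative GENERATE expression the abstract argues for.

```python
import xml.etree.ElementTree as ET
from html import escape

SAMPLE_XML = """
<books>
  <book><title>Database Systems</title><year>2001</year></book>
  <book><title>XML in Practice</title><year>2004</year></book>
</books>
"""

def xml_to_html_table(xml_text, row_xpath, columns):
    """Render selected XML elements as an HTML table (a hand-written
    stand-in for what a PPX GENERATE clause would specify declaratively)."""
    root = ET.fromstring(xml_text)
    rows = ["<tr>" + "".join(f"<th>{escape(c)}</th>" for c in columns) + "</tr>"]
    for el in root.findall(row_xpath):
        cells = "".join(f"<td>{escape(el.findtext(c, default=''))}</td>" for c in columns)
        rows.append(f"<tr>{cells}</tr>")
    return "<table>\n" + "\n".join(rows) + "\n</table>"

if __name__ == "__main__":
    print(xml_to_html_table(SAMPLE_XML, "book", ["title", "year"]))
```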
526 Over-Height Vehicle Detection in Low Headroom Roads Using Digital Video Processing
Authors: Vahid Khorramshahi, Alireza Behrad, Neeraj K. Kanhere
Abstract:
In this paper we present a new method for over-height vehicle detection in low-headroom streets and highways using digital video processing. The accuracy, the lower price compared to existing detectors such as laser radars, and the capability of providing extra information such as speed and height measurements make this method more reliable and efficient. In this algorithm, features are selected and tracked using the KLT algorithm. A blob extraction algorithm is also applied using background estimation and subtraction. Then the world coordinates of features that are inside the blobs are estimated using a novel calibration method. Once the heights of the features are calculated, we apply a threshold to select over-height features and eliminate the others. The over-height features are segmented using association criteria and grouped using an undirected graph. Then they are tracked through sequential frames. The obtained groups correspond to over-height vehicles in the scene.
Keywords: Feature extraction, over-height vehicle detection, traffic monitoring, vehicle tracking.
525 A Relationship Extraction Method from Literary Fiction Considering Korean Linguistic Features
Authors: Hee-Jeong Ahn, Kee-Won Kim, Seung-Hoon Kim
Abstract:
Knowledge of the relationships between characters can help readers to understand the overall story or plot of a literary fiction. In this paper, we present a method for extracting specific relationships between characters from Korean literary fiction. Generally, methods for extracting relationships between characters in text are statistical or computational methods based on the sentence distance between characters, without considering Korean linguistic features. Furthermore, it is difficult for them to extract directed relationships, such as one-sided love, because they consider only the weight of a relationship, not its direction. Therefore, in order to identify specific relationships between characters, we propose a statistical method that considers linguistic features, such as syntactic patterns and speech verbs in Korean. The result of our method is represented as a weighted directed graph of the relationships between the characters. Furthermore, we expect that the proposed method could be applied to relationship analysis between characters in other content, such as movies or TV dramas.
Keywords: Data mining, Korean linguistic feature, literary fiction, relationship extraction.