Search results for: informative theoretic similarity metrics

1417 Evaluation of Video Quality Metrics and Performance Comparison on Contents Taken from Most Commonly Used Devices

Authors: Pratik Dhabal Deo, Manoj P.

Abstract:

With the increasing number of social media users, the amount of available video content has also grown significantly. Smartphone use is at its peak, and many people now use their smartphones as their main photography and recording devices. There have been many developments in the field of Video Quality Assessment (VQA), and metrics such as VMAF and SSIM are considered among the best performing, but these metrics are predominantly evaluated on professionally produced content shot with professional tools and lighting. No study has specifically examined how the metrics perform on content captured by ordinary users on commonly available devices. Existing datasets contain huge numbers of videos from different high-end devices, which makes it difficult to isolate metric performance on content from the most used devices, even when they include footage taken in poor lighting on lower-end devices. Content recorded on such devices spans a wide spectrum and is subject to many distortions. In this paper, we present an analysis of objective full-reference VQA metrics on content taken only from the most used devices. To carry out this research, we created a custom dataset of 90 videos captured on three commonly used devices: an Android smartphone, an iOS smartphone, and a DSLR. Starting from four reference videos per device, the six most common types of distortion that users face were applied, in addition to the already existing H.264 compression, each at three levels of degradation. The five most popular VQA metrics were evaluated on this dataset, and the highest and lowest values of each metric on each distortion were recorded. Blur turned out to be the artifact on which most metrics performed poorly. To understand the results better, the amount of blur in the dataset was quantified, and an additional evaluation was carried out using the HEVC codec, the successor to H.264, on the camera that proved to be the sharpest of the three devices. The results show that the metrics tend to become more accurate as resolution increases; VQM is the best performing metric, with very few inconsistencies and inaccurate results under H.264 compression, whereas SSIM and VMAF perform significantly better when HEVC compression is applied.
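
Full-reference metrics such as SSIM score each distorted frame against the corresponding reference frame. As a minimal sketch of that idea (not the evaluation pipeline used in the paper; the file names and libraries are illustrative assumptions), the following Python snippet computes the mean per-frame SSIM of a distorted video against its reference using OpenCV and scikit-image:

```python
# Minimal per-frame SSIM sketch (illustrative only; file names are placeholders).
import cv2
from skimage.metrics import structural_similarity as ssim

def mean_ssim(reference_path, distorted_path):
    ref, dist = cv2.VideoCapture(reference_path), cv2.VideoCapture(distorted_path)
    scores = []
    while True:
        ok_r, frame_r = ref.read()
        ok_d, frame_d = dist.read()
        if not (ok_r and ok_d):
            break                      # stop at the end of the shorter stream
        gray_r = cv2.cvtColor(frame_r, cv2.COLOR_BGR2GRAY)
        gray_d = cv2.cvtColor(frame_d, cv2.COLOR_BGR2GRAY)
        scores.append(ssim(gray_r, gray_d, data_range=255))
    ref.release(); dist.release()
    return sum(scores) / len(scores) if scores else float("nan")

# Hypothetical usage:
# print(mean_ssim("reference.mp4", "distorted_h264_blur_level2.mp4"))
```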

Keywords: distortion, metrics, performance, resolution, video quality assessment

Procedia PDF Downloads 182
1416 A Novel Unconditionally Secure and Lightweight Bipartite Key Agreement Protocol

Authors: Jun Liu

Abstract:

This paper introduces a new bipartite key agreement (2PKA) protocol that is unconditionally secure and lightweight. The unconditional security stems from the known impossibility of distinguishing a particular solution from all possible solutions of an underdetermined system of equations. This indistinguishability prevents an adversary from inferring the common secret key even with access to an unlimited amount of computing capability. The new 2PKA protocol is also lightweight because the calculation of the common secret key uses only simple modular arithmetic. This information-theoretic 2PKA scheme provides the desired features of Key Confirmation (KC), Session Key (SK) security, Known-Key (KK) security, protection of individual privacy, and a uniformly distributed common key under a prime modulus.
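
The security argument rests on a counting fact: an underdetermined system over a prime modulus has many equally consistent solutions, so an observer cannot single out the one actually used. The toy Python sketch below illustrates only this general principle, not the proposed protocol; all values are made up. It shows that a single observed equation a·x + b·y ≡ c (mod p) is consistent with exactly p candidate secret pairs (x, y).

```python
# Toy illustration of the indistinguishability principle (not the proposed 2PKA protocol).
# For a prime p and public a, b (b nonzero), every observed c is consistent with
# exactly p secret pairs (x, y): pick any x, then y = (c - a*x) * b^(-1) mod p.
p = 101          # small prime modulus, for illustration only
a, b = 17, 29    # hypothetical public coefficients
x, y = 42, 73    # hypothetical secret values
c = (a * x + b * y) % p                       # the only quantity an eavesdropper sees

candidates = [(xx, ((c - a * xx) * pow(b, -1, p)) % p) for xx in range(p)]
assert (x, y) in candidates and len(candidates) == p
print(len(candidates), "candidate secret pairs are consistent with c =", c)
```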

Keywords: bipartite key agreement, information-theoretic cryptography, perfect security, lightweight

Procedia PDF Downloads 33
1415 Fuzzy Set Approach to Study Appositives and Its Impact Due to Positional Alterations

Authors: E. Mike Dison, T. Pathinathan

Abstract:

Computing with Words (CWW) and Possibilistic Relational Universal Fuzzy (PRUF) are two concepts widely used to represent and measure vaguely defined natural-language phenomena. In this paper, we study how positional alteration of phrases affects and modifies the impact of a natural-language proposition. We observe the gradations in the sensitivity or feeling of a statement caused by these positional alterations, and we derive a classification of the resulting modifications in word meaning. The results are presented with reference to set-theoretic interpretations.

Keywords: appositive, computing with words, possibilistic relational universal fuzzy (PRUF), semantic sentiment analysis, set-theoretic interpretations

Procedia PDF Downloads 130
1414 Empirical Study of Partitions Similarity Measures

Authors: Abdelkrim Alfalah, Lahcen Ouarbya, John Howroyd

Abstract:

This paper investigates and compares the performance of four existing distance and similarity measures between partitions: the Rand Index (RI), the Adjusted Rand Index (ARI), the Variation of Information (VI), and the Normalised Variation of Information (NVI). The work examines the ability of these partition measures to capture three predefined intuitions: the variation within randomly generated partitions, the sensitivity to small perturbations, and the independence from dataset scale. The Adjusted Rand Index is shown to perform well overall with regard to these three intuitions.
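
For reference, the measures compared in the paper can be computed from two label vectors as in the Python sketch below (a minimal illustration, not the paper's experimental setup). RI and ARI come from scikit-learn, while VI is derived from entropies and mutual information; normalising VI by the joint entropy is one common convention and is an assumption here.

```python
# Sketch: Rand Index, Adjusted Rand Index, Variation of Information, and a
# normalised VI for two clusterings given as label vectors (illustrative data).
from sklearn.metrics import rand_score, adjusted_rand_score, mutual_info_score

def variation_of_information(a, b, normalised=False):
    h_a = mutual_info_score(a, a)          # H(A) = I(A; A), in nats
    h_b = mutual_info_score(b, b)
    i_ab = mutual_info_score(a, b)
    vi = h_a + h_b - 2.0 * i_ab            # VI(A, B) = H(A) + H(B) - 2 I(A; B)
    if normalised:
        joint = h_a + h_b - i_ab           # H(A, B); one common normalisation
        return vi / joint if joint > 0 else 0.0
    return vi

a = [0, 0, 1, 1, 2, 2]
b = [0, 0, 1, 2, 2, 2]
print("RI :", rand_score(a, b))
print("ARI:", adjusted_rand_score(a, b))
print("VI :", variation_of_information(a, b))
print("NVI:", variation_of_information(a, b, normalised=True))
```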

Keywords: clustering, comparing partitions, similarity measure, partition distance, partition metric, similarity between partitions, clustering comparison

Procedia PDF Downloads 153
1413 Understanding Cyber Kill Chains: Optimal Allocation of Monitoring Resources Using Cooperative Game Theory

Authors: Roy. H. A. Lindelauf

Abstract:

Cyberattacks are complex processes consisting of multiple interwoven tasks conducted by a set of agents. Interdiction of and defense against such attacks often rely on cyber kill chain (CKC) models. A CKC is a framework that tries to capture the actions taken by a cyber attacker. There is a growing body of literature on CKCs; most of this work either (a) describes the CKC with respect to one or more specific cyberattacks or (b) discusses the tools and technologies used by the attacker at each stage of the CKC. Defenders, facing scarce resources, have to decide where to allocate those resources given the CKC and partial knowledge of the tools and techniques attackers use. In this presentation, CKCs are analyzed through the lens of covert projects, i.e., interrelated tasks that have to be conducted by agents (human and/or computer) with the aim of going undetected. Various aspects of covert project models have been studied extensively in the operations research and game theory domains; consider, for instance, resource-limited interdiction actions that maximally delay the completion time of a weapons project. This presentation investigates both cooperative and non-cooperative game-theoretic covert project models and elucidates their relation to CKC modelling. To view a CKC as a covert project, each step in the CKC is broken down into tasks, and each player is capable of executing a subset of those tasks; task interdependencies are represented by a schedule. Using multi-glove cooperative games, it is shown how a defender can optimize the allocation of his scarce resources (what, where, and how to monitor) against an attacker scheduling a CKC. The study presents and compares several cooperative game-theoretic solution concepts as metrics for assigning resources to the monitoring of agents.
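
Glove games are a standard cooperative-game example. Since the abstract does not give the authors' exact formulation, the Python sketch below only illustrates how a solution concept such as the Shapley value can be computed by brute force and read as a relative monitoring weight per agent; the player roles are made up.

```python
# Brute-force Shapley values for a small glove game (illustrative only).
from itertools import permutations
from math import factorial

left, right = {0, 1}, {2}            # hypothetical agents holding left/right gloves
players = sorted(left | right)

def value(coalition):
    # A coalition's worth is the number of matched left/right glove pairs it can form.
    return min(len(coalition & left), len(coalition & right))

def shapley(players, value):
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:                       # marginal contribution of p in this order
            phi[p] += value(coalition | {p}) - value(coalition)
            coalition.add(p)
    n_orders = factorial(len(players))
    return {p: v / n_orders for p, v in phi.items()}

print(shapley(players, value))   # the scarce right-glove holder gets the largest weight
```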

Keywords: cyber defense, cyber kill chain, game theory, information warfare techniques

Procedia PDF Downloads 118
1412 Software Component Identification from Its Object-Oriented Code: Graph Metrics Based Approach

Authors: Manel Brichni, Abdelhak-Djamel Seriai

Abstract:

Software systems are increasingly complex, and an abstract view of a system can simplify its development. We therefore propose a method that decomposes a system into subsystems while reducing their coupling; these subsystems represent components. Starting from an existing object-oriented system, the main idea of our approach is to model all entities of the object-oriented source code as graphs. Such a model is easy to handle, so restructuring algorithms based on graph metrics can be applied. The particularity of our approach is that it integrates, in addition to standard metrics such as coupling and cohesion, graph metrics that give more precision during component identification. To this end, we build on the ROMANTIC approach, which recovers a component-based software architecture from an object-oriented system.
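
The abstract does not spell out which graph metrics are used, so the Python sketch below only illustrates the general idea: classes become nodes, dependencies become edges, and each candidate component is scored by a simple cohesion/coupling measure. The class names, edges, and scoring formulas are illustrative assumptions, not the paper's metrics.

```python
# Illustrative sketch: score a candidate decomposition of a class dependency graph.
import networkx as nx

g = nx.Graph()
g.add_edges_from([("A", "B"), ("B", "C"), ("A", "C"),   # candidate component 1
                  ("D", "E"), ("E", "F"),               # candidate component 2
                  ("C", "D")])                          # inter-component dependency

components = [{"A", "B", "C"}, {"D", "E", "F"}]

def cohesion(graph, comp):
    # internal edges divided by the maximum possible number of internal edges
    internal = graph.subgraph(comp).number_of_edges()
    possible = len(comp) * (len(comp) - 1) / 2
    return internal / possible if possible else 0.0

def coupling(graph, comp):
    # number of edges crossing the component boundary
    return sum(1 for u, v in graph.edges() if (u in comp) != (v in comp))

for comp in components:
    print(sorted(comp), "cohesion:", round(cohesion(g, comp), 2),
          "coupling:", coupling(g, comp))
```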

Keywords: software reengineering, software component and interfaces, metrics, graphs

Procedia PDF Downloads 476
1411 A Similarity Measure for Classification and Clustering in Image Based Medical and Text Based Banking Applications

Authors: K. P. Sandesh, M. H. Suman

Abstract:

Text processing plays an important role in information retrieval, data mining, and web search, and measuring the similarity between documents is an important operation in this field. In this project, a new similarity measure is proposed. To compute the similarity between two documents with respect to a feature, the proposed measure takes three cases into account: (1) the feature appears in both documents; (2) the feature appears in only one document; and (3) the feature appears in neither document. The proposed measure is extended to gauge the similarity between two sets of documents. The effectiveness of our measure is evaluated on several real-world datasets for text classification and clustering problems, particularly in the banking and health sectors. The results show that the performance obtained with the proposed measure is better than that achieved by the other measures.
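
The abstract describes the structure of the measure but not its formula, so the Python sketch below is only a generic skeleton of a feature-wise similarity that scores the three cases differently; the per-case scores and the example vocabulary are illustrative assumptions, not the proposed measure.

```python
# Generic three-case, feature-wise document similarity skeleton (illustrative only).
from collections import Counter

def three_case_similarity(doc_a, doc_b, vocabulary):
    tf_a, tf_b = Counter(doc_a.split()), Counter(doc_b.split())
    total = 0.0
    for term in vocabulary:
        in_a, in_b = term in tf_a, term in tf_b
        if in_a and in_b:        # case 1: feature appears in both documents
            total += 1.0 - abs(tf_a[term] - tf_b[term]) / (tf_a[term] + tf_b[term])
        elif in_a or in_b:       # case 2: feature appears in only one document
            total -= 0.5
        # case 3: feature appears in neither document -> contributes nothing here
    return total / len(vocabulary)

vocab = {"loan", "credit", "patient", "scan"}
print(three_case_similarity("loan credit credit", "loan credit patient", vocab))
```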

Keywords: document classification, document clustering, entropy, accuracy, classifiers, clustering algorithms

Procedia PDF Downloads 485
1410 Tool for Determining the Similarity between Two Web Applications

Authors: Doru Anastasiu Popescu, Raducanu Dragos Ionut

Abstract:

This paper presents a tool that measures the similarity between two websites, where each website is composed only of webpages written in HTML. The tool uses three ways of calculating the similarity between two websites, based on previously published results: the first compares all the webpages within a website, the second compares a webpage with all the pages within the second website, and the third compares two webpages. The Java programming language and technologies such as Spring, Jsoup, and log4j were used to implement the tool.
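
The authors' tool is written in Java with Jsoup. Purely as an illustration of one way two HTML pages can be compared (not the published method), the Python sketch below extracts each page's tag sequence with the standard-library parser and scores the overlap with difflib.

```python
# Illustrative page-to-page comparison by HTML tag sequence (not the authors' Java/Jsoup tool).
from difflib import SequenceMatcher
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

def tag_sequence(html):
    collector = TagCollector()
    collector.feed(html)
    return collector.tags

page_a = "<html><body><h1>Title</h1><p>Text</p></body></html>"
page_b = "<html><body><h1>Other</h1><p>Text</p><p>More</p></body></html>"
ratio = SequenceMatcher(None, tag_sequence(page_a), tag_sequence(page_b)).ratio()
print("tag-sequence similarity:", round(ratio, 2))
```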

Keywords: Java, Jsoup, HTML, Spring

Procedia PDF Downloads 356
1409 Impact of Similarity Ratings on Human Judgement

Authors: Ian A. McCulloh, Madelaine Zinser, Jesse Patsolic, Michael Ramos

Abstract:

Recommender systems are a common artificial intelligence (AI) application. For any given input, a search system will return a rank-ordered list of similar items. As users review the returned items, they must decide when to halt the search and either revise their search terms or conclude that their requirement is novel, with no similar items in the database. We present a statistically designed experiment that investigates the impact of similarity ratings on the human judgement that a search item is novel and the decision to halt the search. A total of 450 participants were recruited from Amazon Mechanical Turk to render judgements across 12 decision tasks. We find that the inclusion of ratings increases the human perception that items are novel. Percent similarity improves novelty discernment compared with star-rated similarity or the absence of a rating. Ratings also reduce the time to decide and improve decision confidence. This suggests that the inclusion of similarity ratings can aid human decision-makers in knowledge search tasks.

Keywords: ratings, rankings, crowdsourcing, empirical studies, user studies, similarity measures, human-centered computing, novelty in information retrieval

Procedia PDF Downloads 92
1408 Text Similarity in Vector Space Models: A Comparative Study

Authors: Omid Shahmirzadi, Adam Lugowski, Kenneth Younge

Abstract:

Automatic measurement of semantic text similarity is an important task in natural language processing. In this paper, we evaluate the performance of different vector space models on this task. We address the real-world problem of modeling patent-to-patent similarity and compare TFIDF (and related extensions), topic models (e.g., latent semantic indexing), and neural models (e.g., paragraph vectors). Contrary to expectations, the added computational cost of text embedding methods is justified only when (1) the target text is condensed and (2) the similarity comparison is trivial. TFIDF performs surprisingly well in other cases, in particular for longer and more technical texts and for making finer-grained distinctions between nearest neighbors. Unexpectedly, extensions to the TFIDF method, such as adding noun phrases or calculating term weights incrementally, were not helpful in our context.
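
As a point of reference for the TFIDF baseline discussed above, the Python sketch below computes pairwise cosine similarities between a few toy documents with scikit-learn; the texts are placeholders, not patent data.

```python
# Minimal TF-IDF cosine-similarity baseline (illustrative texts, not patent data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "A method for transmitting data over a wireless channel.",
    "Apparatus for wireless data transmission between devices.",
    "A chemical composition for coating metal surfaces.",
]
tfidf = TfidfVectorizer(stop_words="english")
matrix = tfidf.fit_transform(docs)
print(cosine_similarity(matrix))   # pairwise document similarity matrix
```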

Keywords: big data, patent, text embedding, text similarity, vector space model

Procedia PDF Downloads 144
1407 Failure Analysis and Verification Using an Integrated Method for Automotive Electric/Electronic Systems

Authors: Lei Chen, Jian Jiao, Tingdi Zhao

Abstract:

Failures of automotive electric/electronic systems, which are universally considered safety-critical and software-intensive, may cause catastrophic accidents. Analyzing and verifying failures in such systems is a major challenge as system complexity increases. Model checking is often employed for formal verification, ensuring that the system model conforms to specified safety properties; the system-level effects of failures are established, and their effects on system behavior are observed through the formal verification. A hazard analysis technique called Systems-Theoretic Process Analysis is capable of identifying design flaws that may lead to potential hazards, including software and system design errors and unsafe interactions among multiple system components. This paper provides a concept for using model checking integrated with Systems-Theoretic Process Analysis to perform failure analysis and verification of automotive electric/electronic systems. As a result, safety requirements are optimized, and failure propagation paths are found. Finally, an automotive electric/electronic system case study is used to verify the effectiveness and practicability of the method.

Keywords: failure analysis and verification, model checking, system-theoretic process analysis, automotive electric/electronic system

Procedia PDF Downloads 95
1406 Static vs. Stream Mining Trajectories Similarity Measures

Authors: Musaab Riyadh, Norwati Mustapha, Dina Riyadh

Abstract:

Trajectory similarity can be defined as the cost of transforming one trajectory into another according to a given similarity method. It is at the core of numerous mining tasks such as clustering, classification, and indexing. Various approaches have been suggested to measure similarity based on the geometric and dynamic properties of trajectories, the overlap between trajectory segments, and the area confined between entire trajectories. In this article, these approaches are evaluated in terms of computational cost, memory usage, accuracy, and the amount of data that must be available in advance, in order to determine their suitability for stream-mining applications. The evaluation shows that, because data are generated at high speed, stream-mining applications favor similarity methods that have low computational cost and memory usage, require only a single scan over the data, and are free of heavy mathematical complexity.
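
As a point of reference for the low-cost, single-scan end of the spectrum (not one of the surveyed methods), the Python sketch below computes a deliberately simple trajectory distance: the mean point-wise Euclidean distance between two aligned, equal-length trajectories.

```python
# A cheap, single-pass trajectory distance (illustrative; real stream-mining measures
# must also cope with unequal lengths and differing sampling rates).
from math import hypot

def mean_pointwise_distance(traj_a, traj_b):
    assert len(traj_a) == len(traj_b), "toy measure assumes aligned, equal-length trajectories"
    return sum(hypot(xa - xb, ya - yb)
               for (xa, ya), (xb, yb) in zip(traj_a, traj_b)) / len(traj_a)

a = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]
b = [(0.0, 0.2), (1.1, 0.4), (2.2, 1.3)]
print(mean_pointwise_distance(a, b))
```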

Keywords: global distance measure, local distance measure, semantic trajectory, spatial dimension, stream data mining

Procedia PDF Downloads 373
1405 Discovering the Dimension of Abstractness: Structure-Based Model that Learns New Categories and Categorizes on Different Levels of Abstraction

Authors: Georgi I. Petkov, Ivan I. Vankov, Yolina A. Petrova

Abstract:

A structure-based model of category learning and categorization at different levels of abstraction is presented. The model compares different structures and expresses their similarity implicitly in the form of mappings. Based on this similarity, the model either categorizes new targets as members of categories it already has or creates new categories. The novelty of the model lies in its use of two threshold parameters to evaluate structural correspondence: if the similarity between two structures exceeds the higher threshold, a new subordinate category is created; if the similarity exceeds only the lower threshold, the model creates a new category at a higher level of abstraction.
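
The two-threshold rule can be written down directly; the Python sketch below does so with made-up threshold values, and the behavior below the lower threshold is an assumption since the abstract does not specify it.

```python
# Sketch of the two-threshold categorization rule (threshold values are illustrative).
HIGH, LOW = 0.8, 0.4   # hypothetical thresholds on structural similarity

def categorize(similarity):
    if similarity >= HIGH:
        return "create a new subordinate category"
    if similarity >= LOW:
        return "create a new category at a higher level of abstraction"
    return "treat the target as unrelated (assumed fallback, not stated in the abstract)"

for s in (0.9, 0.6, 0.2):
    print(s, "->", categorize(s))
```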

Keywords: analogy-making, categorization, learning of categories, abstraction, hierarchical structure

Procedia PDF Downloads 161
1404 Graph Similarity: Algebraic Model and Its Application to Nonuniform Signal Processing

Authors: Nileshkumar Vishnav, Aditya Tatu

Abstract:

A recent approach that represents graph signals and graph filters as polynomials is useful for graph signal processing. In this approach, the adjacency matrix plays the pivotal role, instead of the more common graph Laplacian. In this work, we follow the adjacency-matrix-based approach and the corresponding algebraic signal model. We further expand the theory and introduce the concept of similarity of two graphs. Graph similarity is useful in that key properties (such as the filter response and the algebra related to the graph) transfer from one graph to another. We demonstrate potential applications of the relation between two similar graphs, such as nonuniform filter design, DTMF detection, and signal reconstruction.
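
In the adjacency-matrix formulation, a graph filter is a polynomial h(A) applied to a graph signal. The Python sketch below shows this for a small cycle graph with made-up coefficients and checks that a relabelled copy of the graph is isospectral, a simple case of two graphs sharing spectral properties.

```python
# Polynomial graph filter h(A) = h0*I + h1*A + h2*A^2 applied to a graph signal
# (graph, signal, and coefficients are illustrative).
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # adjacency matrix of a 4-cycle
x = np.array([1.0, 0.0, 0.0, 0.0])          # an impulse graph signal
h = [0.5, 0.3, 0.2]                          # hypothetical filter coefficients

H = sum(c * np.linalg.matrix_power(A, k) for k, c in enumerate(h))
print("filtered signal:", H @ x)

# A relabelled copy of the graph is isospectral: same adjacency spectrum.
perm = [1, 2, 3, 0]
B = A[np.ix_(perm, perm)]
print("isospectral:", np.allclose(np.linalg.eigvalsh(A), np.linalg.eigvalsh(B)))
```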

Keywords: graph signal processing, algebraic signal processing, graph similarity, isospectral graphs, nonuniform signal processing

Procedia PDF Downloads 322
1403 Exploration of Various Metrics for Partitioning of Cellular Automata Units for Efficient Reconfiguration of Field Programmable Gate Arrays (FPGAs)

Authors: Peter Tabatt, Christian Siemers

Abstract:

Using FPGA devices to improve the behavior of time-critical parts of embedded systems has been a proven concept for years. With reconfigurable FPGA devices, the logical blocks can be partitioned and grouped into static and dynamic parts, and the dynamic parts can be reloaded on demand at runtime. This work uses cellular automata, constructed through compilation from (partially restricted) ANSI-C sources, to determine the suitability of various metrics for optimal partitioning. Significant metrics in this case include, for example, the area a partition occupies on the FPGA device, the pass count of loop constructs, and the communication characteristics towards other partitions. With successful partitioning, it is possible to use smaller FPGA devices for the same requirements as non-reconfigurable FPGA devices or, vice versa, to run larger programs on the same FPGAs.

Keywords: reconfigurable FPGA, cellular automata, partitioning, metrics, parallel computing

Procedia PDF Downloads 245
1402 Clustering of Association Rules of ISIS & Al-Qaeda Based on Similarity Measures

Authors: Tamanna Goyal, Divya Bansal, Sanjeev Sofat

Abstract:

For world-threatening terrorist attacks, early detection, distinction, and prediction are effective diagnostic techniques, and many data mining and statistical approaches exist to ensure functionally accurate and precise analysis of terrorism data. The computational extraction of derived patterns is a non-trivial task that involves domain-specific discovery by means of sophisticated algorithm design and analysis. This paper proposes an approach that extracts useful attributes from available datasets of terrorist attacks, applies a feature selection technique based on statistical impurity measures, and then applies clustering techniques on the basis of similarity measures. The associative dependencies between attacks are analyzed on the basis of the degree of participation of attributes in the rules. To compute the similarity among the discovered rules, a weighted similarity measure is applied; finally, the rules are grouped using hierarchical clustering. We have applied the technique to an open-source dataset to determine its usability and efficiency, and a literature search was also conducted to support the efficiency and accuracy of our results.
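
The last step, grouping rules by pairwise similarity, can be illustrated with standard tooling. The Python sketch below hierarchically clusters four hypothetical rules from a made-up similarity matrix; the matrix, linkage method, and cut threshold are all assumptions, not the paper's weighted measure.

```python
# Sketch: hierarchical clustering of items (e.g., association rules) from a
# pairwise similarity matrix (all values are illustrative).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

similarity = np.array([[1.0, 0.9, 0.2, 0.1],
                       [0.9, 1.0, 0.3, 0.2],
                       [0.2, 0.3, 1.0, 0.8],
                       [0.1, 0.2, 0.8, 1.0]])
distance = 1.0 - similarity                  # convert similarity to distance
np.fill_diagonal(distance, 0.0)
condensed = squareform(distance, checks=False)
tree = linkage(condensed, method="average")  # average-link hierarchical clustering
labels = fcluster(tree, t=0.5, criterion="distance")
print(labels)                                # e.g., rules 1-2 and 3-4 form two groups
```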

Keywords: association rules, clustering, similarity measure, statistical approaches

Procedia PDF Downloads 294
1401 A Similarity/Dissimilarity Measure to Biological Sequence Alignment

Authors: Muhammad A. Khan, Waseem Shahzad

Abstract:

Protein sequences are analyzed in order to discover their structural and ancestral relationships. Sequence similarity indicates similar protein structure and function and enables homology detection. Biological sequences composed of amino acid residues or nucleotides provide significant information through sequence alignment. In this paper, we present a new similarity/dissimilarity measure for sequence alignment based on the primary structure of a protein. The approach finds the distance between two given sequences using a novel sequence alignment algorithm and a mathematical model. The algorithm runs in O(n²) time. A distance matrix is generated to construct a phylogenetic tree of different species. The new similarity/dissimilarity measure outperforms other existing methods.
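
For orientation, the classical Needleman-Wunsch global alignment also runs in O(n²) time; the Python sketch below computes its alignment score for two short sequences. This is the standard textbook algorithm, not the paper's novel measure, and the scoring parameters are arbitrary.

```python
# Classical Needleman-Wunsch global alignment score, an O(n^2) baseline
# (not the paper's proposed measure; scoring parameters are arbitrary).
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap                 # gaps along the first sequence
    for j in range(1, m + 1):
        score[0][j] = j * gap                 # gaps along the second sequence
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

print(needleman_wunsch("HEAGAWGHEE", "PAWHEAE"))
```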

Keywords: alignment, distance, homology, mathematical model, phylogenetic tree

Procedia PDF Downloads 155
1400 Back to Basics: Redefining Quality Measurement for Hybrid Software Development Organizations

Authors: Satya Pradhan, Venky Nanniyur

Abstract:

As the software industry transitions from a license-based model to a subscription-based Software-as-a-Service (SaaS) model, many software development groups are using a hybrid development model that incorporates Agile and Waterfall methodologies in different parts of the organization. The traditional metrics used for measuring software quality in Waterfall or Agile paradigms do not apply to this new hybrid methodology. In addition, to respond to higher quality demands from customers and to gain a competitive advantage in the market, many companies are starting to prioritize quality as a strategic differentiator. As a result, quality metrics are included in decision-making activities all the way up to the executive level, including board of directors reviews. This paper presents key challenges associated with measuring software quality in organizations using the hybrid development model. We introduce a framework called Prevention-Inspection-Evaluation-Removal (PIER) to provide a comprehensive metric definition for hybrid organizations. The framework includes quality measurements, quality enforcement, and quality decision points at different organizational levels and project milestones. The metrics framework defined in this paper is being used for all Cisco Systems products deployed on customer premises. We present several field metrics for one product portfolio (enterprise networking) to show the effectiveness of the proposed measurement system. As the results show, this metrics framework has significantly improved in-process defect management as well as field quality.

Keywords: quality management system, quality metrics framework, quality metrics, agile, waterfall, hybrid development system

Procedia PDF Downloads 145
1399 3D Objects Indexing Using Spherical Harmonic for Optimum Measurement Similarity

Authors: S. Hellam, Y. Oulahrir, F. El Mounchid, A. Sadiq, S. Mbarki

Abstract:

In this paper, we propose a method for three-dimensional (3D) model indexing based on a new descriptor defined using spherical harmonics. The purpose of the method is to minimize the processing time over the database of object models and the time needed to search for objects similar to a query object. We first define the new descriptor using a new division of the 3D object within a sphere, and then define a new distance that is used to search for similar objects in the database.

Keywords: 3D indexation, spherical harmonic, similarity of 3D objects, measurement similarity

Procedia PDF Downloads 403
1398 Analytical Similarity Assessment of Bevacizumab Biosimilar Candidate MB02 Using Multiple State-of-the-Art Assays

Authors: Marie-Elise Beydon, Daniel Sacristan, Isabel Ruppen

Abstract:

MB02 (Alymsys®) is a candidate biosimilar to bevacizumab, developed against the reference product (RP) Avastin® sourced from both the European Union (EU) and the United States (US). MB02 has been extensively characterized against Avastin® at the physicochemical and biological levels using sensitive, orthogonal, state-of-the-art analytical methods. MB02 has been demonstrated to be similar to the RP with regard to its primary and higher-order structure and its post- and co-translational profiles, such as glycosylation, charge, and size variants. Specific focus was put on the characterization of Fab-related activities, such as binding to VEGF-A 165, which directly reflect the bevacizumab mechanism of action. Fc-related functionality was also investigated, including binding to FcRn, which is indicative of antibody half-life. The data generated during the analytical similarity assessment demonstrate the high analytical similarity of MB02 to its RP.

Keywords: analytical similarity, bevacizumab, biosimilar, MB02

Procedia PDF Downloads 246
1397 Problems Occurring in the Process of Audit by Taking into Consideration their Theoretic Aspects against the Background of Reforms Conducted in a Country: The Example of Georgia

Authors: Levan Sabauri

Abstract:

The purpose of this article is to examine the meaning of theoretical aspects of auditing in the context of solving specific audit problems. The aim of an audit is the auditor's assessment of financial statements, i.e., whether they have been prepared according to the basic requirements for current financial statements. The examination of concrete examples clearly shows the problems that arise in an audit and, in many cases, the contradictions caused by discrepancies between matters regulated by legislation and reality. An important part of this work is the analysis of the reforms in business accounting, reporting, and auditing in Georgia and their comparison with EU countries. Attention is concentrated on the analysis of specific problems of auditing practice, and ways of solving them that take the theoretical aspects of auditing into consideration are proposed.

Keywords: audit, auditor, auditors’ ethic code, auditor’s risk, financial statement, objectivity

Procedia PDF Downloads 331
1396 A Word-to-Vector Formulation for Word Representation

Authors: Sandra Rizkallah, Amir F. Atiya

Abstract:

This work presents a novel word-to-vector representation based on embedding words onto a sphere, whereby the dot product of the corresponding vectors represents the similarity between any two words. Embedding the vectors on a sphere enables us to take into consideration not only synonymy but also antonymy between words, because the representation is well suited to handling the polar nature of words. For example, a word and its antonym can be represented as a vector and its negative. Moreover, we have managed to extract an adequate vocabulary. The results obtained show that the proposed approach can capture the essence of the language and can be generalized to estimate a correct similarity for any new pair of words.
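
The sphere-plus-negation idea can be illustrated in a few lines. In the Python sketch below, the unit vectors are made up for illustration rather than produced by the proposed method, and an antonym is simply the negated vector, so its dot product with the original word is -1.

```python
# Toy illustration of spherical word vectors where an antonym is the negated vector
# (the vectors are invented for illustration, not produced by the proposed method).
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

embeddings = {
    "good": unit(np.array([0.9, 0.3, 0.1])),
    "great": unit(np.array([0.8, 0.4, 0.2])),
}
embeddings["bad"] = -embeddings["good"]        # antonym represented as the negative

def similarity(w1, w2):
    # dot product of unit vectors, so the result lies in [-1, 1]
    return float(np.dot(embeddings[w1], embeddings[w2]))

print(similarity("good", "great"))   # close to +1 (synonym-like)
print(similarity("good", "bad"))     # exactly -1 (antonym)
```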

Keywords: natural language processing, word to vector, text similarity, text mining

Procedia PDF Downloads 242
1395 Behaviour of Non-local Correlations and Quantum Information Theoretic Measures in Frustrated Molecular Wheels

Authors: Amit Tribedi

Abstract:

Genuine quantumness present in quantum systems is the resource for implementing quantum information and computation protocols that can outperform their classical counterparts. These quantumness measures encompass non-local ones, known as quantum entanglement (QE), and quantum information theoretic (QIT) ones, e.g., quantum discord (QD). In this paper, some well-known measures of QE and QD are studied in wheel-like frustrated molecular magnetic systems. One of the systems has already been synthesized using coordination chemistry, and the other is hypothetical; in both, the dominant interaction is the spin-spin exchange interaction. Exact analytical methods and exact numerical diagonalization have been used. The behaviour of the correlations and the QIT measures reveals some counter-intuitive, non-trivial features, such as the non-monotonicity of quantum correlations with temperature and the persistence of multipartite entanglement over bipartite entanglement. Being operational, the measures can be used to realize the resource of quantumness in experiments.

Keywords: 0D Magnets, discord, entanglement, frustration

Procedia PDF Downloads 202
1394 Quantifying Spatiotemporal Patterns of Past and Future Urbanization Trends in El Paso, Texas and Their Impact on Electricity Consumption

Authors: Joanne Moyer

Abstract:

El Paso, Texas, is a southwestern border city that has experienced continuous growth over the last 15 years. Analyzing urban growth trends and patterns using data from the National Land Cover Database (NLCD) and landscape metrics provides a quantitative description of this growth. Past urban growth provided the basis for predicting El Paso's 2031 land use with the CA-Markov model. As a consequence of growth, demand for resources increases. Panel data analysis is then used to examine the relation between landscape metrics and electricity consumption. The findings indicate that past growth was concentrated in three districts of the City of El Paso. The landscape metrics suggest that, as the city has grown, fragmentation has decreased; in contrast, the landscape metrics for the projected 2031 land use indicate possible fragmentation within one of these districts. The panel data suggest that electricity consumption and the mean patch area landscape metric are positively correlated. The study helps local decision makers make informed policy and urban planning decisions to ensure a sustainable future community.

Keywords: landscape metrics, CA-Markov, El Paso, Texas, panel data

Procedia PDF Downloads 112
1393 A Multimodal Approach to Improve the Performance of Biometric System

Authors: Chander Kant, Arun Kumar

Abstract:

Biometric systems automatically recognize an individual based on his/her physiological and behavioral characteristics. Some traits, such as weight, age, and height, may not provide reliable user recognition because of their common and temporary nature; these are called soft biometric traits. Although soft biometric traits lack the permanence to uniquely and reliably identify an individual, they provide useful evidence about the user's identity and may improve system performance. In this paper, we propose an approach for integrating soft biometrics with fingerprint and face traits to improve the performance of a personal authentication system. The proposed approach combines these three modalities in an architecture of three different sensors to elevate system performance. We also demonstrate the efficiency of the proposed system in terms of FAR (False Acceptance Ratio) and total response time with the help of the MUBI (Multimodal Biometrics Integration) software.

Keywords: FAR, minutiae point, multimodal biometrics, primary biometric, soft biometric

Procedia PDF Downloads 317
1392 Bayes Estimation of Parameters of Binomial Type Rayleigh Class Software Reliability Growth Model using Non-informative Priors

Authors: Rajesh Singh, Kailash Kale

Abstract:

In this paper, the occurrence of software failures is modeled as a binomial-type process, and the failure intensity is characterized by a one-parameter Rayleigh-class Software Reliability Growth Model (SRGM). The proposed SRGM is a mathematical function of two parameters: the total number of failures, η₀, and the scale parameter, η₁. It is assumed that very little or no information is available about these parameters; considering non-informative priors for both, the Bayes estimators of η₀ and η₁ are obtained under the squared-error loss function. The proposed Bayes estimators are compared with their corresponding maximum likelihood estimators on the basis of risk efficiencies obtained by the Monte Carlo simulation technique. It is concluded that both proposed Bayes estimators, of the total number of failures and of the scale parameter, perform well for a proper choice of execution time.

Keywords: binomial process, non-informative prior, maximum likelihood estimator (MLE), rayleigh class, software reliability growth model (SRGM)

Procedia PDF Downloads 364
1391 Normalized Compression Distance Based Scene Alteration Analysis of a Video

Authors: Lakshay Kharbanda, Aabhas Chauhan

Abstract:

This paper presents an application of the Normalized Compression Distance (NCD) to detect notable scene alterations occurring in videos. Several research groups have developed methods for image classification using NCD, a computable approximation to the Normalized Information Distance (NID), by studying the degree of similarity between images. The timeframes in which significant aberrations between the frames of a video occur are identified by obtaining a threshold NCD value, computed using two compressors, LZMA and BZIP2, and scene alterations are defined using Pixel Difference Percentage metrics.
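
The NCD itself is a short formula over compressed lengths: NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)). The Python sketch below evaluates it with the two compressors named in the abstract; the byte strings stand in for serialized video frames and are purely illustrative.

```python
# Minimal NCD sketch using the LZMA and BZIP2 compressors (frame bytes are placeholders).
import bz2
import lzma

def ncd(x: bytes, y: bytes, compress=lzma.compress) -> float:
    cx, cy, cxy = len(compress(x)), len(compress(y)), len(compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

frame_a = b"\x10" * 5000 + b"\x20" * 5000      # hypothetical frame data
frame_b = b"\x10" * 5000 + b"\x21" * 5000      # a very similar frame
frame_c = bytes(range(256)) * 40               # a very different frame

print("similar frames :", round(ncd(frame_a, frame_b), 3))
print("altered scene  :", round(ncd(frame_a, frame_c, compress=bz2.compress), 3))
```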

Keywords: image compression, Kolmogorov complexity, normalized compression distance, root mean square error

Procedia PDF Downloads 305
1390 Framework to Quantify Customer Experience

Authors: Anant Sharma, Ashwin Rajan

Abstract:

Customer experience is measured today by defining a set of metrics and KPIs, setting up thresholds, and defining triggers across those thresholds. While this is an effective way of measuring against a Key Performance Indicator (referred to as a KPI in the rest of the paper), the approach cannot capture the various nuances that make up the overall customer experience. Customers consume a product or service at various levels, which is reflected not only in metrics like Customer Satisfaction or Net Promoter Score but also in other measurements such as recurring revenue, frequency of service usage, e-learning, and depth of usage. Here we explore an alternative method of measuring customer experience by flipping the traditional view: rather than rolling customers up to a metric, we roll up metrics to hierarchies and then measure customer experience. This method allows any team to quantify customer experience across multiple touchpoints in a customer's journey. We make use of various data sources that contain information for metrics like CXSAT, NPS, renewals, and depth of service usage collected across a customer lifecycle. These data can be mined systematically to find linkages between different data points such as geographies, business groups, products, and time. Additional views can be generated by blending synthetic contexts into the data to show trends and top/bottom reports. We have created a framework that allows us to measure customer experience using the above logic.

Keywords: analytics, customers experience, BI, business operations, KPIs, metrics

Procedia PDF Downloads 45
1389 Information Disclosure And Financial Sentiment Index Using a Machine Learning Approach

Authors: Alev Atak

Abstract:

In this paper, we aim to create a financial sentiment index by investigating companies' voluntary information disclosures. We retrieve structured content from BIST 100 companies' financial reports for the period 1998-2018 and extract relevant financial information for sentiment analysis through Natural Language Processing. We measure strategy-related disclosures and their cross-sectional variation and classify report content into generic sections using synonym lists divided into four main categories according to their liquidity risk profile, risk positions, intra-annual information, and exposure to risk. We use Word Error Rate and Cosine Similarity for comparing and measuring text similarity and deviation across sets of texts. In addition to performing text extraction, we provide a range of text analysis options, such as readability metrics, word counts using pre-determined lists (e.g., forward-looking, uncertainty, tone), and comparison with a reference corpus (at the word, part-of-speech, and semantic levels). We thereby create an analytical tool and a financial dictionary that depict the importance of granular financial disclosure for investors to correctly identify risk-taking behavior and hence make the aggregated effects traceable.

Keywords: financial sentiment, machine learning, information disclosure, risk

Procedia PDF Downloads 71
1388 Achieving Success in NPD Projects

Authors: Ankush Agrawal, Nadia Bhuiyan

Abstract:

The new product development (NPD) literature emphasizes the importance of introducing new products to the market for continuing business success. New products are responsible for employment, economic growth, technological progress, and high standards of living. Therefore, the study of NPD and the processes through which new products emerge is important. The goal of our research is to propose a framework of critical success factors, metrics, and tools and techniques for implementing metrics at each stage of the NPD process. An extensive literature review was undertaken to investigate decades of studies on NPD success and how it can be achieved. These studies were scanned for factors common to firms whose new products succeeded on the market. The paper summarizes NPD success factors, suggests metrics that should be used to measure these factors, and proposes tools and techniques to make use of these metrics. This was done for each stage of the NPD process and brought together in a framework that the authors propose should be followed for complex NPD projects. While many studies have been conducted on critical success factors for NPD, they tend to be fragmented and to focus on only one or a few phases of the NPD process.

Keywords: new product development, performance, critical success factors, framework

Procedia PDF Downloads 376