Search results for: Database index
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1714

1714 Fast Database Indexing for Large Protein Sequence Collections Using Parallel N-Gram Transformation Algorithm

Authors: Jehad A. H. Hammad, Nur'Aini binti Abdul Rashid

Abstract:

With the rapid development of the life sciences and the flood of genomic information, the need for faster and more scalable searching methods has become urgent. One of the approaches that has been investigated is indexing. Indexing methods fall into three categories: length-based index algorithms, transformation-based algorithms, and mixed-technique-based algorithms. In this research, we focus on the transformation-based methods. We embed the N-gram method into the transformation-based method to build an inverted index table. We then apply parallel methods to speed up index building and to reduce the overall retrieval time when querying the genomic database. Our experiments show that the N-gram transformation algorithm is an economical solution: it saves both time and space. The results show that the index is smaller than the dataset when the N-gram size is 5 or 6. The results of the parallel N-gram transformation algorithm indicate that the use of parallel programming with large datasets is promising and can be improved further.
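
As an illustration of the n-gram transformation into an inverted index described above, here is a minimal sketch in Python; the sample sequences, the choice of N, and the use of multiprocessing.Pool are illustrative assumptions, not the authors' parallel algorithm.

```python
# Minimal n-gram inverted index over protein sequences (illustrative sketch).
from collections import defaultdict
from multiprocessing import Pool

N = 5  # n-gram length

def ngrams_of(record):
    seq_id, sequence = record
    return seq_id, {sequence[i:i + N] for i in range(len(sequence) - N + 1)}

def build_index(records, workers=2):
    index = defaultdict(set)               # n-gram -> ids of sequences containing it
    with Pool(workers) as pool:
        for seq_id, grams in pool.map(ngrams_of, records):
            for gram in grams:
                index[gram].add(seq_id)
    return index

if __name__ == "__main__":
    db = [("P1", "MKVLAAGICFS"), ("P2", "MKVLWAALLVT")]   # toy "database"
    idx = build_index(db)
    print(idx.get("MKVLA", set()))         # sequences containing this 5-gram
```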

Keywords: Biological sequence, Database index, N-gram indexing, Parallel computing, Sequence retrieval.

Downloads: 2079
1713 MONPAR - A Page Replacement Algorithm for a Spatiotemporal Database

Authors: U. Kalay, O. Kalıpsız

Abstract:

For a spatiotemporal database management system, the I/O cost of queries and other operations is an important performance criterion. In order to optimize this cost, intense research on designing robust index structures has been carried out over the past decade. Beyond these major considerations, other design issues deserve attention because of their direct impact on I/O cost; in particular, an efficient buffer management strategy plays a key role in reducing redundant disk accesses. In this paper, we propose an efficient buffer strategy for a spatiotemporal database index structure, specifically one indexing objects moving over a road network. The proposed strategy, MONPAR, is based on the spatiotemporal nature of the data and on the organization of the index structure. For the experimental evaluation, we set up a simulation environment that counts the number of disk accesses while executing a number of spatiotemporal range queries over the index. We repeated the simulations with query sets of different distributions, such as uniform and skewed query distributions. Based on a comparison of our strategy with well-known page-replacement techniques, such as LRU-based and priority-based buffers, we conclude that MONPAR behaves better than its competitors for small and medium-sized buffers under all of the query distributions used.
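
A minimal sketch of the kind of baseline the comparison relies on: an LRU page buffer that counts simulated disk accesses. MONPAR itself is not reproduced here; the page identifiers and the counter are illustrative assumptions.

```python
# LRU page buffer used as a baseline comparator (illustrative sketch).
from collections import OrderedDict

class LRUBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()          # page_id -> page data
        self.disk_reads = 0

    def get(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id) # mark as most recently used
            return self.pages[page_id]
        self.disk_reads += 1                # simulated disk access
        self.pages[page_id] = f"data-for-{page_id}"
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)  # evict the least recently used page
        return self.pages[page_id]

buf = LRUBuffer(capacity=3)
for pid in [1, 2, 3, 1, 4, 2]:
    buf.get(pid)
print(buf.disk_reads)                       # number of simulated disk accesses
```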

Keywords: Buffer Management, Spatiotemporal databases.

Downloads: 1436
1712 Mining Sequential Patterns Using I-PrefixSpan

Authors: Dhany Saputra, Dayang R. A. Rambli, Oi Mean Foong

Abstract:

In this paper, we propose an improvement of the pattern-growth-based PrefixSpan algorithm, called I-PrefixSpan. The general idea of I-PrefixSpan is to use a suitable data structure for the Seq-Tree framework and a separator database to reduce execution time and memory usage. Thus, with I-PrefixSpan no in-memory database is stored after the index set is constructed. The experimental results show that, using Java 2, this method improves the speed of PrefixSpan by up to almost two orders of magnitude and reduces memory usage by more than one order of magnitude.
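
For orientation, a compact pattern-growth recursion in the PrefixSpan style over sequences of single items is sketched below; the toy sequence database and minimum support are assumptions, and the I-PrefixSpan data structures themselves are not reproduced.

```python
# PrefixSpan-style pattern growth over sequences of single items (sketch).
def prefixspan(sequences, min_support):
    patterns = []

    def grow(prefix, projected):
        counts = {}
        for seq in projected:               # count items in the projected database
            for item in set(seq):
                counts[item] = counts.get(item, 0) + 1
        for item, support in counts.items():
            if support < min_support:
                continue
            new_prefix = prefix + [item]
            patterns.append((new_prefix, support))
            new_projected = []              # project on the first occurrence of item
            for seq in projected:
                if item in seq:
                    suffix = seq[seq.index(item) + 1:]
                    if suffix:
                        new_projected.append(suffix)
            grow(new_prefix, new_projected)

    grow([], sequences)
    return patterns

db = [["a", "b", "c"], ["a", "c"], ["b", "c"], ["a", "b", "c", "c"]]
for pattern, support in prefixspan(db, min_support=2):
    print(pattern, support)
```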

Keywords: ArrayList, ArrayIntList, minimum support, sequence database, sequential patterns.

Downloads: 1526
1711 Palmprint Recognition by Wavelet Transform with Competitive Index and PCA

Authors: Deepti Tamrakar, Pritee Khanna

Abstract:

This manuscript presents palmprint recognition that combines different texture-extraction approaches with high accuracy. The Region of Interest (ROI) is decomposed into different frequency-time sub-bands by a wavelet transform of up to two levels, and only the level-two approximation image is selected; it is known as the Approximate Image ROI (AIROI). The AIROI carries information about the principal lines of the palm. The Competitive Index is used as the palmprint feature: six Gabor filters of different orientations are convolved with the palmprint image to extract orientation information, and a winner-take-all strategy selects the dominant orientation for each pixel, which is known as the Competitive Index. PCA is then applied to select highly uncorrelated Competitive Index features, to reduce the dimensionality of the feature vector, and to project the features onto the eigenspace. The similarity of two palmprints is measured by the Euclidean distance metric. The algorithm is tested on the Hong Kong PolyU palmprint database. AIROIs obtained with different wavelet filter families are also tested with the Competitive Index and PCA. The AIROI of the db7 wavelet filter achieves an Equal Error Rate (EER) of 0.0152% and a Genuine Acceptance Rate (GAR) of 99.67% on the Hong Kong PolyU palm database.
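
The winner-take-all step that produces the Competitive Index, together with the Euclidean matching, can be sketched as follows; the Gabor responses are assumed to be precomputed (one per orientation), and the wavelet decomposition and PCA stages are omitted.

```python
# Competitive (winner-take-all) code and Euclidean matching (illustrative sketch).
import numpy as np

def competitive_index(responses):
    # responses: array of shape (6, H, W), one Gabor response per orientation
    return np.argmax(responses, axis=0)     # dominant orientation per pixel

def euclidean_distance(code_a, code_b):
    return float(np.linalg.norm(code_a.astype(float).ravel()
                                - code_b.astype(float).ravel()))

rng = np.random.default_rng(0)
resp_query = rng.standard_normal((6, 32, 32))    # stand-ins for real filter outputs
resp_gallery = rng.standard_normal((6, 32, 32))
print(euclidean_distance(competitive_index(resp_query),
                         competitive_index(resp_gallery)))
```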

Keywords: DWT, EER, Euclidean Distance, Gabor filter, PCA, ROI.

Downloads: 1695
1710 Information Retrieval: A Comparative Study of Textual Indexing Using an Object-Oriented Database (db4o) and the Inverted File

Authors: Mohammed Erritali

Abstract:

The growth in the volume of text data such as books and articles in libraries over the centuries has made it necessary to establish effective mechanisms for locating them. Early techniques such as abstracting, indexing and the use of classification categories marked the birth of a new field of research called "Information Retrieval". Information Retrieval (IR) can be defined as the task of defining models and systems whose purpose is to facilitate access to a set of documents in electronic form (a corpus) and to allow users to find the documents relevant to them, that is, the content that matches their information needs. Most information retrieval models index a corpus with a specific data structure called the "inverted file" or "inverted index". This inverted file collects information on all terms in the corpus documents, specifying the identifiers of the documents that contain each term, the frequency of each term in those documents, the positions of its occurrences, and so on. In this paper, we use an object-oriented database (db4o) instead of the inverted file; that is, instead of searching for a term in the inverted file, we search for it in the db4o database. The purpose of this work is a comparative study to see whether object-oriented databases can compete with the inverted index in terms of access speed and resource consumption on a large volume of data.
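
The inverted file described above can be sketched in a few lines; the toy corpus is an assumption, and db4o itself is not involved here.

```python
# Minimal inverted file: term -> documents, frequencies and positions (sketch).
from collections import defaultdict

def build_inverted_file(corpus):
    index = defaultdict(dict)               # term -> {doc_id: [positions]}
    for doc_id, text in corpus.items():
        for position, term in enumerate(text.lower().split()):
            index[term].setdefault(doc_id, []).append(position)
    return index

corpus = {
    "d1": "information retrieval with an inverted file",
    "d2": "an object oriented database stores objects",
}
index = build_inverted_file(corpus)
for doc_id, positions in index["an"].items():
    print(doc_id, "frequency:", len(positions), "positions:", positions)
```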

Keywords: Information Retrieval, indexing, object-oriented database (db4o), inverted file.

Downloads: 1695
1709 Testing Database of Information System using Conceptual Modeling

Authors: Bogdan Walek, Cyril Klimes

Abstract:

This paper focuses on testing the database of an existing information system. At the beginning, we describe the basic problems of implemented databases, such as data redundancy, poor design of the database's logical structure, or inappropriate data types in the columns of database tables. These problems are often the result of an incorrect understanding of the primary requirements for the database of an information system. We then propose an algorithm to compare the conceptual model created from vague requirements for a database with a conceptual model reconstructed from the implemented database. The algorithm also suggests steps leading to the optimization of the implemented database. The proposed algorithm is verified by an implemented prototype. The paper also describes a fuzzy system that works with the vague requirements for the database of an information system, a procedure for creating a conceptual model from vague requirements, and an algorithm for reconstructing a conceptual model from an implemented database.

Keywords: testing, database, relational database, information system, conceptual model, fuzzy, uncertain information, database testing, reconstruction, requirements, optimization

Downloads: 1400
1708 Hippocratic Database: A Privacy-Aware Database

Authors: Norjihan Abdul Ghani, Zailani Mohd Sidek

Abstract:

Nowadays, organizations and businesses have several motivating factors to protect an individual's privacy. Confidentiality refers to the way information is shared with third parties. It always concerns private information, especially personal information that usually needs to be kept private. Because of the importance of privacy concerns today, we need to design database systems that take privacy into account. Agrawal et al. introduced the Hippocratic Database, which we also refer to here as a privacy-aware database. This paper explains how the Hippocratic Database can become a future trend for web-based applications, enhancing their level of privacy and trustworthiness among Internet users.
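
In the spirit of purpose-based disclosure in a privacy-aware database, a toy check might look as follows; the policy table and customer record are illustrative assumptions, not Agrawal et al.'s architecture.

```python
# Toy purpose-based disclosure check (illustrative sketch).
policy = {
    "email":   {"order-confirmation"},
    "address": {"shipping"},
    "name":    {"order-confirmation", "shipping"},
}

def disclose(record, purpose):
    # Release only the attributes whose policy lists the stated purpose.
    return {attr: value for attr, value in record.items()
            if purpose in policy.get(attr, set())}

customer = {"name": "A. Customer", "email": "a@example.com", "address": "1 Main St"}
print(disclose(customer, "shipping"))       # only name and address are released
```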

Keywords: Hippocratic database, privacy, privacy-aware.

Downloads: 1785
1707 A Formal Implementation of Database Security

Authors: Yun Bai

Abstract:

This paper investigates the implementation of a security mechanism in an object-oriented database system. Formal methods play an essential role in computer security due to their powerful expressiveness and concise syntax and semantics. In this paper, both the specification and the implementation issues of a database security environment are considered, and database security is achieved through the development of an efficient implementation of the specification without compromising its originality and expressiveness.

Keywords: database security, authorization policy, logic-based specification

Downloads: 1677
1706 A Methodology for Data Migration between Different Database Management Systems

Authors: Bogdan Walek, Cyril Klimes

Abstract:

The area of data migration is currently very topical. Current tools for data migration between relational databases have several disadvantages, which are presented in this paper. We propose a methodology for migrating database tables and their data between various types of relational database management systems (RDBMS). The proposed methodology includes an expert system whose knowledge base is composed of IF-THEN rules and which, based on the input data, suggests appropriate data types for the columns of database tables. The proposed tool, which contains the expert system, also makes it possible to optimize the data types in the target RDBMS database tables based on the processed data of the source RDBMS database tables. The proposed expert system is demonstrated on the data migration of a selected database from the source RDBMS to the target RDBMS.
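
The IF-THEN flavour of such a knowledge base can be sketched as an ordered rule list; the rules below (a MySQL-to-PostgreSQL direction) are illustrative assumptions, not the expert system's actual knowledge base.

```python
# Rule-based column type mapping (illustrative IF-THEN sketch).
RULES = [
    # (condition on the source column, target data type)
    (lambda c: c["type"] == "TINYINT" and c.get("length") == 1,      "BOOLEAN"),
    (lambda c: c["type"] == "INT",                                   "INTEGER"),
    (lambda c: c["type"] == "DATETIME",                              "TIMESTAMP"),
    (lambda c: c["type"] == "VARCHAR" and c.get("length", 0) > 255,  "TEXT"),
    (lambda c: c["type"] == "VARCHAR",                               "VARCHAR({length})"),
]

def map_column(column):
    for condition, target in RULES:
        if condition(column):
            return target.format(**column)
    return column["type"]                    # no rule fired: keep the source type

print(map_column({"name": "active", "type": "TINYINT", "length": 1}))   # BOOLEAN
print(map_column({"name": "title", "type": "VARCHAR", "length": 120}))  # VARCHAR(120)
```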

Keywords: Expert system, fuzzy, data migration, database, relational database, data type, relational database management system.

Downloads: 3405
1705 XML Data Management in Compressed Relational Database

Authors: Hongzhi Wang, Jianzhong Li, Hong Gao

Abstract:

XML is an important standard for data exchange and representation. Since relational databases are mature systems, using them to support XML data may bring some advantages. However, storing XML in a relational database introduces obvious redundancy that wastes disk space, bandwidth, and disk I/O when querying XML data. For efficient storage and querying of XML, it is necessary to use compressed XML data in the relational database. In this paper, a compressed relational database technology supporting XML data is presented. The original relational storage structure is well suited to XPath query processing, and the compression method preserves this property. Besides traditional relational database techniques, additional query-processing technologies for compressed relations and for the special structure of XML are presented. Technologies for XQuery processing in a compressed relational database are also presented.

Keywords: XML, compression, query processing

Downloads: 1750
1704 Database Modelling Using WSML in the Specification of a Banking Application

Authors: Omid Sharifi, Member, ACM, Zeki Bayram, Member, ACM

Abstract:

We demonstrate through a sample application, e-banking, that the ontology component of the Web Service Modelling Language can be used as a very powerful object-oriented database design language with logic capabilities. Its conceptual syntax allows the definition of class hierarchies, and its logic syntax allows the definition of constraints in the database. Relations, which are available for modelling relationships among three or more concepts, can be connected to logical expressions, allowing the implicit specification of database content. Using a reasoning tool, logic queries can also be made against the database in simulation mode.

Keywords: Semantic web, ontology, E-banking, database, WSML, WSMO, E-R diagram.

Downloads: 1888
1703 Data Migration between Document-Oriented and Relational Databases

Authors: Bogdan Walek, Cyril Klimes

Abstract:

Current tools for data migration between document-oriented and relational databases have several disadvantages. We propose a new approach for data migration between document-oriented and relational databases. During data migration, the relational schema of the target (relational) database is automatically created from a collection of XML documents. The proposed approach is verified on data migration between the document-oriented database IBM Lotus Notes/Domino and a relational database implemented in the relational database management system (RDBMS) MySQL.
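
Deriving a candidate relational schema from XML documents can be sketched as collecting the distinct child elements of each record; the sample documents and table name are assumptions, and the Lotus Notes/Domino export step is omitted.

```python
# Candidate relational schema from a collection of XML documents (sketch).
import xml.etree.ElementTree as ET

documents = [
    "<note><author>J. Doe</author><subject>Offer</subject><body>...</body></note>",
    "<note><author>A. Roe</author><subject>Memo</subject><date>2014-01-02</date></note>",
]

columns = []
for doc in documents:
    for child in ET.fromstring(doc):        # each distinct child tag becomes a column
        if child.tag not in columns:
            columns.append(child.tag)

ddl = "CREATE TABLE note ({});".format(", ".join(f"{col} TEXT" for col in columns))
print(ddl)   # CREATE TABLE note (author TEXT, subject TEXT, body TEXT, date TEXT);
```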

Keywords: data migration, database, document-oriented database, XML, relational schema

Downloads: 3468
1702 Auto Classification for Search Intelligence

Authors: Lilac A. E. Al-Safadi

Abstract:

This paper proposes an auto-classification algorithm for Web pages using data mining techniques. We consider the problem of discovering association rules between terms in a set of Web pages belonging to a category in a search engine database, and we present an auto-classification algorithm for solving this problem that is fundamentally based on the Apriori algorithm. The proposed technique has two phases. The first is a training phase in which human experts determine the categories of different Web pages, and the supervised data mining algorithm combines these categories with appropriately weighted index terms according to the highest-support rules among the most frequent words. The second is the categorization phase, in which a web crawler crawls the World Wide Web to build a database categorized according to the results of the data mining approach. This database contains URLs and their categories.
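
The support counting behind such Apriori-style rules can be sketched for term pairs as follows; the toy "pages" and the minimum support are illustrative assumptions.

```python
# Frequent term pairs and simple rules via support counting (sketch).
from itertools import combinations
from collections import Counter

pages = [
    {"database", "index", "query"},
    {"database", "query", "sql"},
    {"index", "query", "sql"},
    {"database", "index", "query", "sql"},
]
min_support = 3

item_counts = Counter(term for page in pages for term in page)
frequent = {t for t, c in item_counts.items() if c >= min_support}

pair_counts = Counter()
for page in pages:
    for pair in combinations(sorted(page & frequent), 2):
        pair_counts[pair] += 1

for (a, b), support in pair_counts.items():
    if support >= min_support:
        confidence = support / item_counts[a]      # rule a -> b
        print(f"{a} -> {b}  support={support}  confidence={confidence:.2f}")
```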

Keywords: Information Processing on the Web, Data Mining, Document Classification.

Downloads: 1582
1701 A System to Integrate and Manipulate Protein Database Using BioPerl and XML

Authors: Zurinahni Zainol, Rosalina Abdul Salam, Rosni Abdullah, Nur'Aini, Wahidah Husain

Abstract:

The size, complexity and number of databases used for protein information have caused bioinformatics to lag behind in adapting to the need to handle this distributed information. Integrating all the information from different databases into one database is a challenging problem. Our main research goal is to develop a tool that can be used to access and manipulate protein information from different databases. In our approach, we integrated different databases, such as Swiss-Prot, PDB, InterPro, and EMBL, and transformed these databases from flat-file format into relational form using XML and BioPerl. As a result, we show that this tool can search protein information of different sizes stored in the relational database and that results can be retrieved faster than from a flat-file database. A web-based user interface is provided to allow users to access or search for protein information in the local database.

Keywords: Protein sequence database, relational database, integrated database.

Downloads: 1397
1700 Scholar Index for Research Performance Evaluation Using Multiple Criteria Decision Making Analysis

Authors: C. Ardil

Abstract:

This paper presents an objective, quantitative methodology for evaluating an individual's scholarly research output using multiple criteria decision analysis. A multiple criteria decision making analysis (MCDMA) methodological process is adopted to build a multiple criteria evaluation model. The scholar index summarizes a researcher's productivity and the scholarly impact of his or her publications in a single number (s is the number of publications with at least s citations each), a cumulative research citation index; it is included in citation databases to cover the multidimensional complexity of scholarly research performance and to support objective evaluations. The scholar index, one of the publication activity indexes, is considered the most appropriate scientometric indicator, as it smooths over many drawbacks of assessing scholarly output by merely counting publications (quantity) and citations (quality). Hence, this study includes a set of indicator-based scholar indexes to be used for evaluating scholarly researchers. The Google Scholar open science database was used to assess and discuss the scholarly productivity and impact of researchers. Based on the experiment of computing the scholar index and its derivative indexes for a set of researchers on an open research database platform, quantitative methods of assessing scholarly research output were successfully used to rank researchers. The proposed methodology covers the ranking, the selection of the data on which the scholarly research performance evaluation is based, the analysis of the data, and the presentation of the multiple criteria analysis results.
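
The scholar index as defined above (the largest s such that s publications have at least s citations each) reduces to a short computation; the citation counts below are invented for illustration.

```python
# Scholar (h-type) index from a list of citation counts (sketch).
def scholar_index(citations):
    counts = sorted(citations, reverse=True)
    s = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            s = rank                         # at least `rank` papers with >= rank citations
        else:
            break
    return s

print(scholar_index([25, 8, 5, 4, 3, 1]))    # 4: four papers with at least 4 citations
```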

Keywords: Multiple Criteria Decision Making Analysis, MCDMA, Research Performance Evaluation, Scholar Index, h index, Science Citation Index, Science Efficiency, Cumulative Citation Index, Sciencemetrics

Downloads: 413
1699 The Extremal Graph with the Largest Merrifield-Simmons Index of (n, n + 2)-graphs

Authors: M. S. Haghighat, A. Dolati, M. Tabari, E. Mohseni

Abstract:

The Merrifield-Simmons index of a graph G is defined as the total number of its independent sets. An (n, n + 2)-graph is a connected simple graph with n vertices and n + 2 edges. In this paper we characterize the (n, n + 2)-graph with the largest Merrifield-Simmons index. We show that its Merrifield-Simmons index, i.e. the upper bound of the Merrifield-Simmons index over the (n, n + 2)-graphs, is 9 × 2^(n−5) + 1 for n ≥ 5.
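
To illustrate the index definition itself (not the extremal characterization), independent sets can be counted by brute force for small graphs; the example graph, a path on four vertices, is an assumption.

```python
# Merrifield-Simmons index by brute-force independent-set counting (sketch).
from itertools import combinations

def merrifield_simmons(vertices, edges):
    count = 0
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            chosen = set(subset)
            if all(not (u in chosen and v in chosen) for u, v in edges):
                count += 1                   # subset is an independent set
    return count

# Path on 4 vertices: the count equals the Fibonacci number 8.
print(merrifield_simmons([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]))
```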

Keywords: Merrifield-Simmons index, (n, n+2)-graph.

Downloads: 1188
1698 A Middleware System between WEB and Database Servers

Authors: Mohammad H. Abu-Arqoub, Ihab S. Serhed, Waheeb A. Abu-Dawwas, Rashid M. Al-Azzeh

Abstract:

This paper aims at improving web server performance by establishing a middleware layer between the web and database servers that minimizes the overload on the database server. The middleware system has been developed as a service mainly to improve performance. It manages connection accesses in a way that reduces the overload on the database server. In addition to connection management, the system acts as an object-oriented model for the best utilization of operating system resources. A web developer can use this service broker to improve web server performance.
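
The connection management such a middleware layer performs can be sketched as a fixed-size pool; SQLite stands in for the real database server here, and the pool size is an assumption.

```python
# Minimal fixed-size connection pool (illustrative sketch).
import sqlite3
from queue import Queue

class ConnectionPool:
    def __init__(self, database, size):
        self._pool = Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(sqlite3.connect(database, check_same_thread=False))

    def acquire(self):
        return self._pool.get()              # blocks if every connection is in use

    def release(self, connection):
        self._pool.put(connection)

pool = ConnectionPool(":memory:", size=2)
conn = pool.acquire()
print(conn.execute("SELECT 1").fetchone())
pool.release(conn)
```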

Keywords: Database server, Improve performance, Middleware, Web server.

Downloads: 2364
1697 Programming Language Extension Using Structured Query Language for Database Access

Authors: Chapman Eze Nnadozie

Abstract:

Relational databases constitute a vital tool for the effective management and administration of both personal and organizational data. Data access ranges from single-user database management software to more complex distributed server systems. This paper appraises the use of a programming language extension, structured query language (SQL), to establish links to a relational database (Microsoft Access 2013) from the Visual C++ 9 programming environment. The methodology involves the creation of tables to form a database using Microsoft Access 2013, which is Object Linking and Embedding (OLE) database compliant. SQL commands are used to query the tables in the database for easy extraction of the expected records inside the Visual C++ environment. The findings of this paper reveal that records can easily be accessed and manipulated to filter exactly what the user wants, such as retrieving records matching specified criteria, updating records, and deleting some or all of the records in a table.

Keywords: Data access, database, database management system, OLE, programming language, records, relational database, software, SQL, table.

Downloads: 675
1696 A Logic Approach to Database Dynamic Updating

Authors: Daniel Stamate

Abstract:

We introduce a logic-based framework for database updating under constraints. In our framework, the constraints are represented as an instantiated extended logic program. When performing an update, database consistency may be violated. We provide an approach to maintaining database consistency and study the conditions under which the maintenance process is deterministic. We show that the complexity of the computations and decision problems presented in our framework is in each case polynomial time.

Keywords: Databases, knowledge bases, constraints, updates, minimal change, consistency.

Downloads: 1322
1695 Database Placement on Large-Scale Systems

Authors: Cherif Haddad, Faouzi Ben Charrada

Abstract:

Large-scale systems such as Grids offer infrastructures for both data distribution and parallel processing. The use of Grid infrastructures is a more recent issue that is already impacting the Distributed Database Management System industry. In DBMS, distributed query processing has emerged as a fundamental technique for ensuring high performance in distributed databases. Database placement is particularly important in large-scale systems because it reduces communication costs and improves resource usage. In this paper, we propose a dynamic database placement policy that depends on query patterns and Grid sites capabilities. We evaluate the performance of the proposed database placement policy using simulations. The obtained results show that dynamic database placement can significantly improve the performance of distributed query processing.

Keywords: Large-scale systems, Grid environment, Distributed Databases, Distributed query processing, Database placement

Downloads: 1448
1694 Estimation of the Drought Index Based on the Climatic Projections of Precipitation of the Uruguay River Basin

Authors: José Leandro Melgar Néris, Claudinéia Brazil, Luciane Teresa Salvi, Isabel Cristina Damin

Abstract:

The impact of climate change is not recent; a major variable in the hydrological cycle is the sequence and scarcity of precipitation that characterizes a drought, which has a significant impact on the socioeconomic, agricultural and environmental spheres. This study aims to characterize and quantify, based on climatic projections of precipitation, the rainy and dry events in the region of the Uruguay River Basin, through the Standardized Precipitation Index (SPI). The database comes from the Coupled Model Intercomparison Project, Phase 5 (CMIP5), which provides climate prediction models organized according to the Representative Concentration Pathways (RCPs). Compared with the climatological normals of the Uruguay River Basin, the precipitation projections indicate an increase in seasonal precipitation for all proposed scenarios, with a low climatic trend. Based on the data of this research, this article can be used to support further research, and the responsible bodies can use it as a basis for mitigation measures in other hydrographic basins.
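
A basic SPI computation (fit a gamma distribution to accumulated precipitation, then map the cumulative probabilities to standard-normal quantiles) can be sketched as follows; the synthetic precipitation values are assumptions, and refinements such as handling zero-precipitation months are omitted.

```python
# Standardized Precipitation Index from accumulated precipitation (sketch).
import numpy as np
from scipy import stats

def spi(precip):
    precip = np.asarray(precip, dtype=float)
    shape, loc, scale = stats.gamma.fit(precip, floc=0)   # gamma fit, location fixed at 0
    cdf = stats.gamma.cdf(precip, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)               # SPI: standard-normal transform of the CDF

monthly_precip = [80.0, 120.0, 60.0, 150.0, 95.0, 40.0, 110.0, 130.0]
print(np.round(spi(monthly_precip), 2))
```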

Keywords: Drought index, climatic projections, precipitation of the Uruguay River Basin, Standardized Precipitation Index.

Downloads: 511
1693 A Fast Adaptive Content-based Retrieval System of Satellite Images Database using Relevance Feedback

Authors: Hanan Mahmoud Ezzat Mahmoud, Alaa Abd El Fatah Hefnawy

Abstract:

In this paper, we present a system for content-based retrieval from a large database of classified satellite images, based on the user's relevance feedback (RF). In our proposed system, we divide each satellite image scene into small subimages, which are stored in the database. A modified radial basis function neural network plays an important role in clustering the subimages of the database according to the Euclidean distance between the query feature vector and the feature vectors of the other subimages. The advantage of using the RF technique in such queries is demonstrated by analyzing the database retrieval results.

Keywords: content-based image retrieval, large image database, RBF neural network, relevance feedback

Downloads: 1430
1692 Comparison of the H-Index of Researchers of Google Scholar and Scopus

Authors: Adian Fatchur Rochim, Abdul Muis, Riri Fitri Sari

Abstract:

The h-index has been widely used as a performance indicator for researchers around the world, especially in Indonesia. The government uses Scopus and Google Scholar as indexing references in providing recognition and appreciation. However, those two indexing services yield different h-index values. For that reason, this paper evaluates the difference between the h-indexes from those services. Researchers indexed by Webometrics are used as the reference data in this paper. Currently, Webometrics only uses the h-index from Google Scholar. This paper observed and compared the corresponding researchers' data from Scopus to obtain their h-index scores. Subsequently, some researchers with large differences in score were examined in more detail with respect to their papers' publishers. This paper shows that the h-index of researchers in Google Scholar is approximately 2.45 times their Scopus h-index. Most of the difference is due to the existence of uncertified publishers, which are considered by Google Scholar but not by Scopus.

Keywords: Google Scholar, H-index, Scopus.

Downloads: 2076
1691 A Survey on Life Science Database Citation Frequency in Scientific Literatures

Authors: Hendry Muljadi, Jiro Araki, Satoru Miyazaki, Asao Fujiyama

Abstract:

There are many databases covering various fields of the life sciences available online. To find well-used databases, a survey was carried out to measure life science database citation frequency in the scientific literature. The survey measures how many scientific articles available in the PubMed Central archive cite a specific life science database. This paper presents and discusses the results of the survey.

Keywords: Life science, database, metadatabase, PubMed Central.

Downloads: 1375
1690 An Integrated Biotechnology Database of the National Agricultural Information Center in Korea

Authors: Chang Kug Kim, Dong Suk Park, Young Joo Seol, Jang Ho Hahn

Abstract:

The National Agricultural Biotechnology Information Center (NABIC) plays a leading role in providing the biotechnology information database for agricultural plants in Korea. Since 2002, we have concentrated on the functional genomics of major crops, building an integrated biotechnology database for agro-biotech information that focuses on the bioinformatics of major agricultural resources such as rice, Chinese cabbage, and microorganisms. In the NABIC, the integrated biotechnology database provides useful information through a user-friendly web interface that allows analysis of genome infrastructure, multiple plants, microbial resources, and living modified organisms.

Keywords: biotechnology, database, genome information

Downloads: 2504
1689 Data Migration Methodology from Relational to NoSQL Databases

Authors: Mohamed Hanine, Abdesadik Bendarag, Omar Boutkhoum

Abstract:

Currently, the field of data migration is very topical. As the number of applications has grown rapidly, the ever-increasing volume of collected data has driven an architectural migration from relational database management systems (RDBMS) to NoSQL (Not Only SQL) databases. This relatively recent technology has become important in the field of database management. The main aim of this paper is to present a methodology for data migration from an RDBMS to a NoSQL database. To illustrate this methodology, we implement a software prototype using MySQL as the RDBMS and MongoDB as the NoSQL database. Although this is hard engineering work, our results show that the proposed methodology can successfully accomplish the goal of this study.
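
The row-to-document direction of such a migration can be sketched as below, with sqlite3 standing in for MySQL and pymongo for MongoDB; the connection string, database file and table are illustrative assumptions.

```python
# Relational rows to MongoDB documents (illustrative sketch).
import sqlite3
from pymongo import MongoClient

source = sqlite3.connect("source.db")        # assumed source database file
source.row_factory = sqlite3.Row             # rows behave like dicts

target = MongoClient("mongodb://localhost:27017")["migrated"]   # assumed local MongoDB

rows = source.execute("SELECT * FROM customers").fetchall()     # assumed table
documents = [dict(row) for row in rows]      # one document per relational row
if documents:
    target["customers"].insert_many(documents)
print(f"migrated {len(documents)} rows")
```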

Keywords: Data Migration, MySQL, RDBMS, NoSQL, MongoDB.

Downloads: 4304
1688 A New Color Image Database for Benchmarking of Automatic Face Detection and Human Skin Segmentation Techniques

Authors: Abdallah S. Abdallah, Mohamad Abou El-Nasr, A. Lynn Abbott

Abstract:

This paper presents a new color face image database for benchmarking automatic face detection algorithms and human skin segmentation techniques. It is named the VT-AAST image database and is divided into four parts. Part one is a set of 286 color photographs that include a total of 1027 faces in the original format produced by our digital cameras, offering a wide range of differences in orientation, pose, environment, illumination, facial expression and race. Part two contains the same set in a different file format. The third part is a set of corresponding image files that contain human colored-skin regions resulting from a manual segmentation procedure. The fourth part of the database has the same regions converted into grayscale. The database is available online for noncommercial use. In this paper, the database development, organization, and format, as well as the information needed for benchmarking algorithms, are described in detail.

Keywords: Image database, color image analysis, face detection, skin segmentation.

Downloads: 2545
1687 The Assessment of Reforms in Different Countries by Social-Economic Development Integral Index

Authors: Samson Davoyan, Tatevik Sahakyan

Abstract:

The purpose of this report is to suggest a new methodology for assessing the comparative efficiency of the reforms made in different countries by means of an integral index. We have highlighted the reforms made in the post-crisis period in 21 former socialist countries. The integral index describes the level of social-economic development. It consists of six indexes: the Global Competitiveness Index, Doing Business, the Corruption Perceptions Index, the Index of Economic Freedom, the Human Development Index, and the Democracy Index, which are reported by different international organizations. With the help of our methodology, we first summarized the above-mentioned six indexes and obtained one general index; moreover, the new method enables us to assess the comparative efficiency of the reforms made in different countries by analyzing them. The purpose is to reveal the opportunities and threats of social-economic reforms in different directions.

Keywords: Assessment, comparative, effectiveness, reforms

Downloads: 1378
1686 The Strange Relationship between Literacy and Well-Being: The Results of an International Survey with Special Focus on Italy

Authors: Federica Cornali

Abstract:

Does education matter to the quality of our life? The results of extensive studies offer an affirmative answer to this question: high education levels are positively associated with higher income, more highly qualified professions, a lower risk of unemployment, better physical health and also, it is said, more happiness. However, exploring these relationships is far from straightforward. Aside from educational credentials, what properties distinguish functionally literate individuals? How can their personal level of satisfaction be measured? What are the social mechanisms whereby education affects well-being? Using a literacy index and several measures of well-being developed by secondary analysis of the Adult Literacy and Life Skills Survey database, this investigation examined the relationship between literacy skills and subjective well-being in several OECD (Organisation for Economic Co-operation and Development) countries. Special attention was devoted to Italy, and in particular to two regions representing the territorial differences in this country: Piedmont and Campania.

Keywords: Cultural Divide, Literacy Index, Life Satisfaction, Subjective Well-being Index

Downloads: 3263
1685 Implementing a Database from a Requirement Specification

Authors: M. Omer, D. Wilson

Abstract:

Creating a database schema is essentially a manual process. From a requirement specification, the information contained within it has to be analyzed and reduced into a set of tables, attributes and relationships. This is a time-consuming process that has to go through several stages before an acceptable database schema is achieved. The purpose of this paper is to implement a Natural Language Processing (NLP) based tool to produce a relational database from a requirement specification. Stanford CoreNLP version 3.3.1 and the Java programming language were used to implement the proposed model. The outcome of this study indicates that a first draft of a relational database schema can be extracted from a requirement specification by using NLP tools and techniques with minimum user intervention. Therefore, this method is a step forward in finding a solution that requires little or no user intervention.

Keywords: Information Extraction, Natural Language Processing, Relation Extraction.

Downloads: 2163