Search results for: Multimedia database
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 885


735 Increasing Profitability Supported by Innovative Methods and Designing Monitoring Software in Condition-Based Maintenance: A Case Study

Authors: Nasrin Farajiparvar

Abstract:

In the present article, a new method has been developed to enhance the application of equipment monitoring, which in turn improves the economic impact of condition-based maintenance in an automobile parts manufacturing factory. This study also describes how effective software with a simple database can be utilized to achieve cost-effective improvements in maintenance performance. The most important results of this project are: 1. a 63% reduction in direct and indirect maintenance costs; 2. the creation of a proper database for analysing failures; 3. the creation of a method to control system performance and extend it to similar systems; 4. the design of software to analyse the database and thereby build the technical knowledge needed to handle unusual system conditions. Moreover, the results of this study show that the concept and philosophy of maintenance are not well understood in most Iranian industries; thus, more investment is strongly required to improve maintenance conditions.

Keywords: Condition-based maintenance, Economic savings, Iran industries, Machine life prediction software.

734 Improved Weighted Matching for Speaker Recognition

Authors: Ozan Mut, Mehmet Göktürk

Abstract:

Matching algorithms are of significant importance in speaker recognition. As a last step in speaker recognition, feature vectors of the unknown utterance are compared to the feature vectors of the modeled speakers, and a similarity score is found for every model in the speaker database. Depending on the type of speaker recognition, these scores are used to determine the author of unknown speech samples. For speaker verification, the similarity score is tested against a predefined threshold and either an acceptance or a rejection result is obtained. In the case of speaker identification, the result depends on whether the identification is open set or closed set. In closed-set identification, the model that yields the best similarity score is accepted. In open-set identification, the best score is tested against a threshold, so there is one more possible output, namely that the speaker is not one of the registered speakers in the existing database. This paper focuses on closed-set speaker identification using a modified version of a well-known matching algorithm. The results of the new matching algorithm indicate better performance on the YOHO international speaker recognition database.

Keywords: Automatic Speaker Recognition, Voice Recognition, Pattern Recognition, Digital Audio Signal Processing.
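As a rough illustration of the decision step described in the abstract (not the authors' modified matching algorithm), the following Python sketch turns similarity scores against a speaker database into verification, closed-set and open-set identification results; the scores and threshold are hypothetical.

```python
# Minimal sketch of the decision logic described above (hypothetical scores/threshold).

def verify(score, threshold):
    """Speaker verification: accept or reject against a predefined threshold."""
    return "accept" if score >= threshold else "reject"

def closed_set_identify(scores):
    """Return the speaker whose model yields the best similarity score."""
    return max(scores, key=scores.get)

def open_set_identify(scores, threshold):
    """Return the best-scoring speaker only if the score passes the threshold,
    otherwise report that the speaker is not registered in the database."""
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "unknown speaker"

if __name__ == "__main__":
    scores = {"alice": 0.82, "bob": 0.64, "carol": 0.71}   # similarity per modeled speaker
    print(verify(scores["alice"], 0.75))          # -> accept
    print(closed_set_identify(scores))            # -> alice
    print(open_set_identify(scores, 0.9))         # -> unknown speaker
```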

733 A New Spectral-based Approach to Query-by-Humming for MP3 Songs Database

Authors: Leon Fu, Xiangyang Xue

Abstract:

In this paper, we propose a new approach to query-by-humming, focusing on an MP3 song database. Since melody representation is much more difficult for MP3 songs than for symbolic performance data, we choose to extract feature descriptors from the vocal part of the songs. Our approach is based on signal filtering, sub-band spectral processing, MDCT coefficient analysis and peak energy detection, ignoring the background music as much as possible. Finally, we apply a dual dynamic programming algorithm for feature similarity matching. Experiments show its online performance in precision and efficiency.

Keywords: DP, MDCT, MP3, QBH.
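The abstract relies on dynamic programming for feature similarity matching; as a hedged stand-in, the sketch below uses ordinary dynamic time warping (a single DP pass, not the paper's dual-DP variant) to score the similarity of two feature contours, with invented values.

```python
# Dynamic-programming similarity between two 1-D feature sequences (e.g. pitch or
# peak-energy contours). This is plain DTW, shown only to illustrate the idea.

def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

if __name__ == "__main__":
    query = [60, 62, 64, 64, 62]          # hummed melody contour (illustrative)
    candidate = [60, 62, 62, 64, 64, 62]  # contour extracted from an MP3 song
    print(dtw_distance(query, candidate))
```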

732 Information Retrieval: A Comparative Study of Textual Indexing Using an Oriented Object Database (db4o) and the Inverted File

Authors: Mohammed Erritali

Abstract:

The growth in the volume of text data such as books and articles in libraries over the centuries has made it necessary to establish effective mechanisms for locating them. Early techniques such as abstracting, indexing and the use of classification categories marked the birth of a new field of research called "Information Retrieval". Information Retrieval (IR) can be defined as the task of defining models and systems whose purpose is to facilitate access to a set of documents in electronic form (a corpus) so that a user can find those relevant to him, that is to say, the content that matches the user's information needs. Most information retrieval models use a specific data structure to index a corpus, called the "inverted file" or "inverted index". This inverted file collects information on all terms in the corpus documents, specifying the identifiers of the documents that contain each term, the frequency of the term in the documents of the corpus, the positions of its occurrences, and so on. In this paper we use an object-oriented database (db4o) instead of the inverted file; that is to say, instead of searching for a term in the inverted file, we search for it in the db4o database. The purpose of this work is to carry out a comparative study to see whether object-oriented databases can compete with the inverted index in terms of access speed and resource consumption on a large volume of data.

Keywords: Information Retrieval, indexation, oriented object database (db4o), inverted file.
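To make the inverted-file structure described above concrete, here is a minimal Python sketch that records, for each term, the document identifiers, per-document frequency and occurrence positions; the field names are illustrative, not taken from the paper.

```python
from collections import defaultdict

def build_inverted_file(corpus):
    """corpus: dict mapping doc_id -> text. Returns term -> postings, where each
    posting holds the document id, the term frequency and the word positions."""
    index = defaultdict(dict)
    for doc_id, text in corpus.items():
        for pos, term in enumerate(text.lower().split()):
            entry = index[term].setdefault(doc_id, {"freq": 0, "positions": []})
            entry["freq"] += 1
            entry["positions"].append(pos)
    return index

if __name__ == "__main__":
    docs = {1: "information retrieval with an inverted file",
            2: "an object oriented database stores objects"}
    idx = build_inverted_file(docs)
    print(idx["an"])   # {1: {'freq': 1, 'positions': [3]}, 2: {'freq': 1, 'positions': [0]}}
```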

731 Visualization and Indexing of Spectral Databases

Authors: Tibor Kulcsar, Gabor Sarossy, Gabor Bereznai, Robert Auer, Janos Abonyi

Abstract:

On-line (near-infrared) spectroscopy is widely used to support the operation of complex process systems. Information extracted from a spectral database can be used to estimate unmeasured product properties and monitor the operation of the process. These techniques are based on looking for similar spectra with nearest-neighbour algorithms and distance-based search methods. Searching for nearest neighbours in the spectral space is an NP-hard problem; the computational complexity increases with the number of points in the discrete spectrum and the number of samples in the database. To reduce the calculation time, some kind of indexing can be used. The main idea presented in this paper is to combine indexing and visualization techniques to reduce the computational requirements of estimation algorithms by providing a two-dimensional index that can also be used to visualize the structure of the spectral database. This 2D visualization of the spectral database not only supports the application of distance- and similarity-based techniques but also enables the use of advanced clustering and prediction algorithms based on the Delaunay tessellation of the mapped spectral space. This means the prediction does not have to use the high-dimensional space but can be based on the mapped space as well. The results illustrate that the proposed method is able to segment (cluster) spectral databases and detect outliers that are not suitable for instance-based learning algorithms.

Keywords: indexing high dimensional databases, dimensional reduction, clustering, similarity, k-nn algorithm.
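As a rough sketch of the idea of combining dimensionality reduction with nearest-neighbour search, the code below projects spectra to two dimensions with PCA (standing in for whatever mapping the authors actually use) and performs k-NN in the mapped space; all data are synthetic.

```python
import numpy as np

def pca_2d(spectra):
    """Project high-dimensional spectra to 2-D with PCA (illustrative stand-in
    for the paper's mapping); returns the mapped points and the projection basis."""
    X = spectra - spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    basis = vt[:2].T
    return X @ basis, basis

def knn_in_mapped_space(mapped, query_2d, k=3):
    """Indices of the k nearest database spectra in the 2-D mapped space."""
    dists = np.linalg.norm(mapped - query_2d, axis=1)
    return np.argsort(dists)[:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    db = rng.normal(size=(100, 200))        # 100 spectra, 200 wavelength points (synthetic)
    mapped, basis = pca_2d(db)
    query = rng.normal(size=200)
    q2d = (query - db.mean(axis=0)) @ basis
    print(knn_in_mapped_space(mapped, q2d, k=3))
```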

730 Component Based Framework for Authoring and Multimedia Training in Mathematics

Authors: Ion Smeureanu, Marian Dardala, Adriana Reveiu

Abstract:

The new programming technologies allow for the creation of components which can be automatically or manually assembled to reach a new experience in knowledge understanding and mastering, or in acquiring skills for a specific knowledge area. The project proposes an interactive framework that permits the creation, combination and utilization of components specific to mathematical training in high schools. The main objectives of the framework are:
• authoring lessons by the teacher or the students; all they need are simple operating skills for Equation Editor (or something similar, or LaTeX); the rest are just drag & drop operations, inserting data into a grid, or navigating through menus;
• allowing audio presentations of mathematical texts and solving hints (more easily understood by the students);
• offering graphical representations of a mathematical function edited in Equation Editor;
• storing learning objects in a database;
• storing predefined lessons (efficient for expressions and commands, the rest being calculations; this allows a high compression);
• viewing and/or modifying predefined lessons, according to the curricula.
The whole framework is centred on a minicompiler for mathematical expressions, which stores code that is later used for different purposes (tables, graphics, and optimisations). A Visual C# .NET implementation is proposed. New and innovative digital learning objects for mathematics will be developed; they are capable of interpreting, contextualizing and reacting depending on the architecture in which they are assembled.

Keywords: Adaptor, automatic assembly, learning component, user control.

729 Discovery of Production Rules with Fuzzy Hierarchy

Authors: Fadl M. Ba-Alwi, Kamal K. Bharadwaj

Abstract:

In this paper a novel algorithm is proposed that integrates the process of fuzzy hierarchy generation and rule discovery for automated discovery of Production Rules with Fuzzy Hierarchy (PRFH) in large databases. A concept of a frequency matrix (Freq) is introduced to summarize a large database, which helps in minimizing the number of database accesses and in identifying and removing irrelevant attribute values and weak classes during fuzzy hierarchy generation. Experimental results have established the effectiveness of the proposed algorithm.

Keywords: Data Mining, Degree of subsumption, Freq matrix, Fuzzy hierarchy.
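A minimal sketch of the kind of frequency matrix (Freq) the abstract refers to, assuming a table of records with nominal attributes and a class label; the data, column names and pruning threshold are hypothetical.

```python
from collections import defaultdict

def build_freq_matrix(records, class_key):
    """Count how often each (attribute, value) pair co-occurs with each class,
    so the database only needs to be scanned once."""
    freq = defaultdict(lambda: defaultdict(int))
    for rec in records:
        label = rec[class_key]
        for attr, value in rec.items():
            if attr != class_key:
                freq[(attr, value)][label] += 1
    return freq

def prune(freq, min_support=2):
    """Drop attribute values whose total frequency is too low (irrelevant values)."""
    return {k: dict(v) for k, v in freq.items() if sum(v.values()) >= min_support}

if __name__ == "__main__":
    data = [{"colour": "red", "size": "small", "cls": "A"},
            {"colour": "red", "size": "large", "cls": "B"},
            {"colour": "blue", "size": "small", "cls": "A"}]
    print(prune(build_freq_matrix(data, "cls"), min_support=2))
```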

728 Hacking the Spatial Limitations in Bridging Virtual and Traditional Teaching Methodologies in Sri Lanka

Authors: Manuela Nayantara Jeyaraj

Abstract:

Having moved into the 21st century, it is well past arguable that innovative technology needs to be incorporated into conventional classroom teaching. Though the Western world has found presumable success in achieving this, it is still a concept under battle in developing countries such as Sri Lanka. Achieving the full implementation of interactive virtual learning within classrooms remains a struggling, idealistic aspiration within the island. In order to overcome this problem, this study sets out to reveal the facts that limit the implementation of virtual, interactive learning within school classrooms and to provide hacks that could promote the augmented use of the virtual world to enhance teaching and learning experiences. As each classroom moves along with the usage of technology to fulfil its functions, the hacks provided will shift the administrative onus onto a virtual system. These hacks may expose barriers based on social conventions, financial boundaries, digital literacy and the intellectual capacity of the staff, highlight the impediments in introducing students to an interactive virtual learning environment, and thereby indicate the actions or changes needed to succeed in creating an intellectual society built on virtual learning and lifestyle. This digital learning environment would be composed of multimedia presentations, trivia and pop quizzes conducted on a GUI, assessments conducted via a virtual system, records maintained in a database, etc. The ultimate objective of this study is to enhance every child's basic learning environment and hence diminish the digital divide that exists in certain communities.

Keywords: Digital divide, digital learning, digitization, Sri Lanka, teaching methodologies.

727 Towards an Extended SQLf: Bipolar Query Language with Preferences

Authors: L. Ludovic, R. Daniel, S-E Tbahriti

Abstract:

Database management systems that integrate user preferences promise better personalization, greater flexibility and higher-quality query responses. This paper presents tentative work that studies and investigates approaches to expressing user preferences in queries. We sketch extended capabilities of the SQLf language that use fuzzy set theory to define user preferences. Two essential points are considered: the first concerns the expression of user preferences in SQLf through a set of so-called fuzzy commensurable predicates. The second concerns the bipolar way in which these user preferences are expressed, as mandatory and/or optional preferences.

Keywords: Flexible query language, relational database, user preference.
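SQLf syntax itself is not reproduced here; instead, the following sketch evaluates a fuzzy (gradual) predicate and one simple reading of a bipolar combination of a mandatory and an optional preference over plain Python records. The membership functions, attribute names and data are invented for illustration only.

```python
def young(age, full=25, zero=40):
    """Fuzzy membership degree for the predicate 'young' (illustrative shape)."""
    if age <= full:
        return 1.0
    if age >= zero:
        return 0.0
    return (zero - age) / (zero - full)

def well_paid(salary, zero=2000, full=4000):
    """Fuzzy membership degree for the predicate 'well paid' (illustrative shape)."""
    if salary >= full:
        return 1.0
    if salary <= zero:
        return 0.0
    return (salary - zero) / (full - zero)

def bipolar_rank(rows, mandatory, optional):
    """Keep rows satisfying the mandatory condition to some degree, then rank them
    by (mandatory degree, optional degree) - one simple reading of bipolar preferences."""
    kept = [(mandatory(r), optional(r), r) for r in rows if mandatory(r) > 0]
    return sorted(kept, key=lambda t: (t[0], t[1]), reverse=True)

if __name__ == "__main__":
    employees = [{"name": "Ana", "age": 28, "salary": 3500},
                 {"name": "Bob", "age": 45, "salary": 5000}]
    for m, o, r in bipolar_rank(employees,
                                mandatory=lambda r: young(r["age"]),
                                optional=lambda r: well_paid(r["salary"])):
        print(r["name"], round(m, 2), round(o, 2))
```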

726 Fast Database Indexing for Large Protein Sequence Collections Using Parallel N-Gram Transformation Algorithm

Authors: Jehad A. H. Hammad, Nur'Aini binti Abdul Rashid

Abstract:

With the rapid development in the field of life sciences and the flood of genomic information, the need for faster and more scalable searching methods has become urgent. One of the approaches investigated is indexing. Indexing methods have been categorized into three groups: length-based index algorithms, transformation-based algorithms and mixed-technique algorithms. In this research, we focus on the transformation-based methods. We embed the N-gram method into the transformation-based method to build an inverted index table. We then apply parallel methods to speed up the index building time and to reduce the overall retrieval time when querying the genomic database. Our experiments show that the N-gram transformation algorithm is an economical solution; it saves both time and space. The results show that the size of the index is smaller than the size of the dataset when the size of the N-gram is 5 or 6. The parallel N-gram transformation algorithm's results indicate that the use of parallel programming with large datasets is promising and can be improved further.

Keywords: Biological sequence, Database index, N-gram indexing, Parallel computing, Sequence retrieval.
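A small sketch of the N-gram transformation on protein sequences: each sequence is decomposed into overlapping n-grams and an inverted index from n-gram to (sequence id, position) is built. The parallel version described in the paper is not shown, and the sequences are made up.

```python
from collections import defaultdict

def ngrams(seq, n):
    """All overlapping n-grams of a sequence."""
    return [seq[i:i + n] for i in range(len(seq) - n + 1)]

def build_ngram_index(sequences, n=5):
    """sequences: dict id -> protein string. Returns n-gram -> list of (id, position)."""
    index = defaultdict(list)
    for seq_id, seq in sequences.items():
        for pos, gram in enumerate(ngrams(seq, n)):
            index[gram].append((seq_id, pos))
    return index

if __name__ == "__main__":
    db = {"P1": "MKTAYIAKQR", "P2": "GGKTAYIAK"}
    idx = build_ngram_index(db, n=5)
    print(idx["KTAYI"])   # [('P1', 1), ('P2', 2)]
```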

725 A study of Cancer-related MicroRNAs through Expression Data and Literature Search

Authors: Chien-Hung Huang, Chia-Wei Weng, Chang-Chih Chiang, Shih-Hua Wu, Chih-Hsien Huang, Ka-Lok Ng

Abstract:

MicroRNAs (miRNAs) are a class of non-coding RNAs that hybridize to mRNAs and induce either translation repression or mRNA cleavage. Recently, it has been reported that miRNAs could possibly play an important role in human diseases. By integrating miRNA target genes, cancer genes, and miRNA and mRNA expression profile information, a database is developed to link miRNAs to cancer target genes. The database provides experimentally verified human miRNA target gene information, including oncogenes and tumor suppressor genes. In addition, fragile site information for miRNAs and the strength of the correlation between miRNA and target mRNA expression levels for nine tissue types are computed, which serve as indicators suggesting that miRNAs could play a role in human cancer. The database is freely accessible at http://ppi.bioinfo.asia.edu.tw/mirna_target/index.html.

Keywords: MicroRNA, miRNA expression profile, mRNA expression profile, cancer genes, oncogene, tumor suppressor gene.
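To illustrate the "strength of correlation between miRNA and target mRNA expression" mentioned above, here is a small numpy sketch computing a Pearson correlation across tissue types; the expression values are fabricated for demonstration only.

```python
import numpy as np

def mirna_target_correlation(mirna_expr, mrna_expr):
    """Pearson correlation of a miRNA's expression with its target mRNA's expression
    across tissues; a strong negative value is consistent with repression."""
    return float(np.corrcoef(mirna_expr, mrna_expr)[0, 1])

if __name__ == "__main__":
    # Nine tissue types, fabricated expression levels for a miRNA and a target mRNA.
    mirna = np.array([5.1, 6.3, 4.8, 7.0, 5.5, 6.1, 4.9, 6.8, 5.2])
    target = np.array([3.2, 2.1, 3.6, 1.8, 2.9, 2.3, 3.5, 1.9, 3.1])
    print(round(mirna_target_correlation(mirna, target), 3))
```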

724 Indoor Localization by Pattern Matching Method Based On Extended Database

Authors: Gyumin Hwang, Jihong Lee

Abstract:

This paper studies a CSS-based indoor localization system which is easy to implement and inexpensive to build; additionally, a CSS-based indoor localization system covers a larger area than other systems. However, this system is affected by reflected distance data, a problem caused by the multi-path effect. Error caused by multi-path is difficult to correct because the indoor environment cannot be fully described. In this paper, in order to solve the multi-path problem, we supplement the localization system with a pattern matching method based on an extended database, thereby improving the precision of the estimate. The method is verified by experiments in a gymnasium. The database was constructed at 1 m intervals, and 16 sample data were collected from random positions inside the region of the DB points. As a result, this paper shows higher accuracy than the existing method, as illustrated by graphs and tables.

Keywords: Chirp Spread Spectrum (CSS), Indoor Localization, Pattern-Matching, Time of Arrival (ToA), Multi-Path, Mahalanobis Distance, Reception Rate, Simultaneous Localization and Mapping (SLAM), Laser Range Finder (LRF).
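The keywords mention the Mahalanobis distance; the sketch below matches a measured ranging vector against fingerprints stored in an extended database and returns the best-matching grid position. The fingerprint data and noise covariance are synthetic, not taken from the paper's gymnasium experiment.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def match_position(measurement, fingerprints, cov):
    """fingerprints: dict position -> mean ranging vector collected at that DB point.
    Returns the DB position whose fingerprint is closest in Mahalanobis distance."""
    cov_inv = np.linalg.inv(cov)
    return min(fingerprints,
               key=lambda p: mahalanobis(measurement, fingerprints[p], cov_inv))

if __name__ == "__main__":
    fingerprints = {(0, 0): np.array([3.0, 4.1, 5.2]),
                    (1, 0): np.array([2.4, 4.6, 5.9]),
                    (0, 1): np.array([3.6, 3.5, 4.8])}   # ranges to three anchors (synthetic)
    cov = np.diag([0.2, 0.2, 0.2])                        # measurement noise (synthetic)
    print(match_position(np.array([2.5, 4.5, 5.8]), fingerprints, cov))  # -> (1, 0)
```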

723 Survey on Image Mining Using Genetic Algorithm

Authors: Jyoti Dua

Abstract:

One image is worth more than a thousand words. Images, if analyzed, can reveal useful information. Low-level image processing deals with the extraction of specific features from a single image. The question then arises: what technique should be used to extract patterns from a very large and detailed image database? The answer is "Image Mining". Image mining deals with the extraction of image data relationships, implicit knowledge, and other patterns from a collection of images or an image database; it is an extension of data mining. In the following paper, we not only scrutinize the current techniques of image mining but also present a new technique for mining images using a genetic algorithm.

Keywords: Image Mining, Data Mining, Genetic Algorithm.
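The abstract does not detail the proposed GA, so the sketch below is only a generic genetic-algorithm skeleton (selection, crossover, mutation) of the kind such a technique would build on; the chromosome encoding and fitness function are placeholders, not the authors' design.

```python
import random

def genetic_algorithm(fitness, length=16, pop_size=20, generations=50, p_mut=0.05):
    """Generic GA skeleton: binary chromosomes, tournament selection,
    one-point crossover and bit-flip mutation. 'fitness' is problem specific."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = random.randrange(1, length)
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

if __name__ == "__main__":
    # Placeholder fitness: number of ones. A real image-mining GA would instead score
    # how well a chromosome encodes an interesting pattern over the image database.
    print(genetic_algorithm(fitness=sum))
```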

722 Data Transformation Services (DTS): Creating Data Mart by Consolidating Multi-Source Enterprise Operational Data

Authors: J. D. D. Daniel, K. N. Goh, S. M. Yusop

Abstract:

Trends in business intelligence, e-commerce and remote access make it necessary and practical to store data in different ways on multiple systems with different operating systems. As businesses evolve and grow, they require efficient computerized solutions to perform data updates and to access data from diverse enterprise business applications. The objective of this paper is to demonstrate the capability of DTS [1] as a database solution for automatic data transfer and update in solving business problems. This DTS package was developed for the sales of a variety of plants and eventually expanded into a commercial supply and landscaping business. Dimensional data modeling is used in the DTS package to extract, transform and load data from heterogeneous database systems such as MySQL, Microsoft Access and Oracle into a data mart residing in SQL Server. The data transfer from the various databases is scheduled to run automatically every quarter of the year to support efficient sales analysis. Therefore, DTS is an attractive solution for automatic data transfer and update that meets today's business needs.

Keywords: Data Transformation Services (DTS), Object Linking and Embedding Database (OLEDB), Data Mart, Online Analytical Processing (OLAP), Online Transactional Processing (OLTP).

721 A Case Study on Appearance Based Feature Extraction Techniques and Their Susceptibility to Image Degradations for the Task of Face Recognition

Authors: Vitomir Struc, Nikola Pavesic

Abstract:

Over the past decades, automatic face recognition has become a highly active research area, mainly due to the countless application possibilities in both the private and the public sector. Numerous algorithms have been proposed in the literature to cope with the problem of face recognition; nevertheless, a group of methods commonly referred to as appearance based have emerged as the dominant solution to the face recognition problem. Many comparative studies concerned with the performance of appearance-based methods have already been presented in the literature, not rarely with inconclusive and often with contradictory results. No consensus has been reached within the scientific community regarding the relative ranking of the efficiency of appearance-based methods for the face recognition task, let alone regarding their susceptibility to appearance changes induced by various environmental factors. To tackle these open issues, this paper assesses the performance of the three dominant appearance-based methods: principal component analysis, linear discriminant analysis and independent component analysis, and compares them on an equal footing (i.e., with the same preprocessing procedure, with parameters optimized for the best possible performance, etc.) in face verification experiments on the publicly available XM2VTS database. In addition to the comparative analysis on the XM2VTS database, ten degraded versions of the database are also employed in the experiments to evaluate the susceptibility of the appearance-based methods to various image degradations which can occur in "real-life" operating conditions. Our experimental results suggest that linear discriminant analysis ensures the most consistent verification rates across the tested databases.

Keywords: Biometrics, face recognition, appearance based methods, image degradations, the XM2VTS database.
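As a simplified illustration of one of the three appearance-based methods compared (PCA, i.e. eigenfaces), the sketch below projects flattened face images onto a PCA subspace and makes a verification decision by thresholding the distance to a claimed identity's template. The data and threshold are synthetic, and the paper's full preprocessing and parameter-optimization protocol is not reproduced.

```python
import numpy as np

def fit_pca(train, n_components=20):
    """Compute the mean face and an eigenface basis from flattened training images."""
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(x, mean, basis):
    return (x - mean) @ basis.T

def verify(probe, template, mean, basis, threshold):
    """Accept the identity claim if the PCA-space distance is below the threshold."""
    d = np.linalg.norm(project(probe, mean, basis) - project(template, mean, basis))
    return d < threshold

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    train = rng.normal(size=(50, 32 * 32))     # 50 flattened "face" images (synthetic)
    mean, basis = fit_pca(train)
    template = train[0]
    probe = train[0] + 0.05 * rng.normal(size=32 * 32)   # same identity, slight variation
    print(verify(probe, template, mean, basis, threshold=5.0))
```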

720 Enhanced Disk-Based Databases Towards Improved Hybrid In-Memory Systems

Authors: Samuel Kaspi, Sitalakshmi Venkatraman

Abstract:

In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and processors in modern high-end servers with the capacity to manage large in-memory database transactions. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks and distributed transaction costs, disk-based data stores still serve as the primary persistence. In addition, with the recent growth in multi-tenancy cloud applications and associated security concerns, many organisations consider the trade-offs and continue to require fast and reliable transaction processing of disk-based database systems as an available choice. For these organisations, the only way of increasing throughput is by improving the performance of disk-based concurrency control. This warrants a hybrid database system with the ability to selectively apply enhanced disk-based data management within the context of in-memory systems, which would help improve overall throughput. The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance that we call enhanced memory access (EMA) can be used to allow very high levels of concurrency in the pre-fetching of data in disk-based systems. We demonstrate how this prefetching in disk-based systems can yield close to in-memory performance, which paves the way for improved hybrid database systems. This paper proposes a novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware configurations of equivalent power in terms of the number of processors and their speeds. The results of the experiments conducted clearly substantiate that, when used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems. The promising results of this work show that enhanced disk-based systems facilitate improved hybrid data management within the broader context of in-memory systems.

Keywords: Concurrency control, disk-based databases, in-memory systems, enhanced memory access (EMA).

719 SQL Generator Based On MVC Pattern

Authors: Chanchai Supaartagorn

Abstract:

Structured Query Language (SQL) is the de facto standard language for accessing and manipulating data in a relational database. Although SQL is a simple and powerful language, most novice users have trouble with SQL syntax. Thus, we present an SQL generator tool which is capable of translating user actions and displaying the SQL commands and data sets simultaneously. The tool was developed based on the Model-View-Controller (MVC) pattern. The MVC pattern is a widely used software design pattern that enforces the separation between the input, processing, and output of an application. Developers take full advantage of it to reduce complexity in architectural design and to increase flexibility and reuse of code. In addition, we use white-box testing for code verification in the Model module.

Keywords: MVC, relational database, SQL, White-Box testing.
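A toy illustration of the MVC separation described above, assuming hypothetical class names: the model builds the SQL string and fetches rows, the view renders both together, and the controller translates a user action into model and view calls. It uses Python's built-in sqlite3 rather than the authors' environment, and performs no input escaping (toy example only).

```python
import sqlite3

class QueryModel:                      # Model: builds SQL and runs it
    def __init__(self, conn):
        self.conn = conn
    def build_select(self, table, columns, where=None):
        sql = f"SELECT {', '.join(columns)} FROM {table}"
        return sql + (f" WHERE {where}" if where else "")
    def run(self, sql):
        return self.conn.execute(sql).fetchall()

class ConsoleView:                     # View: shows the SQL command and the data set together
    def render(self, sql, rows):
        print("SQL :", sql)
        print("Rows:", rows)

class QueryController:                 # Controller: turns a user action into model/view calls
    def __init__(self, model, view):
        self.model, self.view = model, view
    def on_select(self, table, columns, where=None):
        sql = self.model.build_select(table, columns, where)
        self.view.render(sql, self.model.run(sql))

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE student(id INTEGER, name TEXT)")
    conn.execute("INSERT INTO student VALUES (1, 'Mali'), (2, 'Somchai')")
    QueryController(QueryModel(conn), ConsoleView()).on_select("student", ["id", "name"], "id = 1")
```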

718 Calculation of Methane Emissions from Wetlands in Slovakia via IPCC Methodology

Authors: Jozef Mindas, Jana Skvareninova

Abstract:

Wetlands are a major natural source of methane emissions, but they also represent important biodiversity reservoirs in the landscape. There are about 26 thousand hectares of wetlands in Slovakia identified through the wetlands monitoring program. The resulting database of wetlands in Slovakia allows several ecological processes to be analyzed, including the estimation of methane emissions. Based on the information from the database, the first estimate of methane emissions from wetlands in Slovakia has been made. The IPCC methodology (Tier 1 approach) has been used with proposed emission factors for the ice-free period derived from climatic data. The highest methane emissions, of nearly 550 Gg, are associated with the category of fens. Almost 11 Gg of methane is emitted from bogs, and emissions from flooded lands represent less than 8 Gg.

Keywords: Methane emissions, wetlands, bogs, fens, Slovakia.
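The Tier 1 estimate essentially reduces to multiplying the wetland area by a daily emission factor and the length of the ice-free period; the sketch below shows this arithmetic with clearly hypothetical inputs (the emission factors actually derived from Slovak climatic data are not given in the abstract).

```python
def tier1_ch4_emissions(area_ha, ef_kg_ch4_per_ha_day, ice_free_days):
    """IPCC Tier 1 style estimate: area x emission factor x ice-free period.
    Returns gigagrams of CH4 per year (1 Gg = 1e6 kg)."""
    return area_ha * ef_kg_ch4_per_ha_day * ice_free_days / 1e6

if __name__ == "__main__":
    # Purely illustrative numbers - not the areas or factors used in the paper.
    print(round(tier1_ch4_emissions(area_ha=10000,
                                    ef_kg_ch4_per_ha_day=1.2,
                                    ice_free_days=240), 2), "Gg CH4/yr")
```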

717 Early Depression Detection for Young Adults with a Psychiatric and AI Interdisciplinary Multimodal Framework

Authors: Raymond Xu, Ashley Hua, Andrew Wang, Yuru Lin

Abstract:

During COVID-19, the depression rate increased dramatically. Young adults are the most vulnerable to the mental health effects of the pandemic. Lower-income families have a higher rate of being diagnosed with depression than the general population, but less access to clinics. This research aims to achieve early depression detection at low cost, large scale, and high accuracy with an interdisciplinary approach by incorporating clinical practices defined by the American Psychiatric Association (APA) as well as a multimodal AI framework. The proposed approach detects the nine depression symptoms with natural language processing sentiment analysis and a symptom-based lexicon uniquely designed for young adults. The experiments were conducted on multimedia survey results from adolescents and young adults and on unbiased Twitter communications. The result was further aggregated with the facial emotional cues analyzed by a Convolutional Neural Network on the multimedia survey videos. Five experiments, each conducted on 10k data entries, reached consistent results with an average accuracy of 88.31%, higher than the existing natural language analysis models. This approach can reach 300+ million daily active Twitter users and is highly accessible to low-income populations, promoting early depression detection to raise awareness in adolescents and young adults and revealing complementary cues to assist clinical depression diagnosis.

Keywords: Artificial intelligence, depression detection, facial emotion recognition, natural language processing, mental disorder.
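The abstract describes aggregating a text-based symptom/sentiment signal with CNN facial-emotion cues; the sketch below shows one simple way such scores could be fused into a screening decision. The lexicon terms, weights and threshold are all invented placeholders, not the authors' model.

```python
# Placeholder symptom lexicon with made-up weights (not the paper's lexicon).
DEPRESSION_LEXICON = {"hopeless": 0.9, "worthless": 0.8, "alone": 0.5, "tired": 0.4}

def text_score(text):
    """Crude symptom score from lexicon hits (stands in for the NLP sentiment/lexicon stage)."""
    hits = [w for term, w in DEPRESSION_LEXICON.items() if term in text.lower()]
    return sum(hits) / len(DEPRESSION_LEXICON)

def fuse(text_s, face_s, w_text=0.6, w_face=0.4, threshold=0.5):
    """Weighted late fusion of the text score and the CNN facial-emotion score."""
    risk = w_text * text_s + w_face * face_s
    return risk, risk >= threshold

if __name__ == "__main__":
    t = text_score("I feel hopeless and tired all the time")
    risk, flag = fuse(t, face_s=0.7)   # face_s would come from the CNN on the survey video
    print(round(risk, 2), flag)
```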

716 A Self Configuring System for Object Recognition in Color Images

Authors: Michela Lecca

Abstract:

System MEMORI automatically detects and recognizes rotated and/or rescaled versions of the objects of a database within digital color images with cluttered background. This task is accomplished by means of a region grouping algorithm guided by heuristic rules, whose parameters concern some geometrical properties and the recognition score of the database objects. This paper focuses on the strategies implemented in MEMORI for the estimation of the heuristic rule parameters. This estimation, being automatic, makes the system a highly user-friendly tool.

Keywords: Automatic object recognition, clustering, content based image retrieval system, image segmentation, region adjacency graph, region grouping.

715 Implementation of Geo-knowledge Based Geographic Information System for Estimating Earthquake Hazard Potential at a Metropolitan Area, Gwangju, in Korea

Authors: Chang-Guk Sun, Jin-Soo Shin

Abstract:

In this study, an inland metropolitan area, Gwangju, in Korea was selected to assess the amplification potential of earthquake motion and to provide information for regional seismic countermeasures. A geographic information system-based expert system was implemented for reliably predicting the spatial geotechnical layers in the entire region of interest by building a geo-knowledge database. In particular, the database consists of existing boring data gathered from prior geotechnical projects and surface geo-knowledge data acquired from site visits. For practical application of the geo-knowledge database to estimating the earthquake hazard potential related to site amplification effects in the study area, seismic zoning maps of geotechnical parameters, such as the bedrock depth and the site period, were created within the GIS framework. In addition, seismic zonation of site classification was also performed to determine the site amplification coefficients for seismic design at any site in the study area.

Keywords: Earthquake hazard, geo-knowledge, geographic information system, seismic zonation, site period.

714 A Novel Framework for User-Friendly Ontology-Mediated Access to Relational Databases

Authors: Efthymios Chondrogiannis, Vassiliki Andronikou, Efstathios Karanastasis, Theodora Varvarigou

Abstract:

A large amount of data is typically stored in relational databases (DBs). The latter can efficiently handle user queries which intend to elicit the appropriate information from data sources. However, direct access and use of this data require the end users to have an adequate technical background, while they must also cope with the internal data structure and the values presented. Consequently, information retrieval is quite a difficult process even for IT or DB experts, given the limited contribution of relational databases from the conceptual point of view. Ontologies enable users to formally describe a domain of knowledge in terms of concepts and the relations among them, and hence they can be used for unambiguously specifying the information captured by the relational database. However, accessing information residing in a database using ontologies is feasible only provided that the users are keen on using semantic web technologies. To enable users from different disciplines to retrieve the appropriate data, the design of a graphical user interface is necessary. In this work, we present an interactive, ontology-based, semantically enabled web tool that can be used for information retrieval purposes. The tool is based entirely on the ontological representation of the underlying database schema, while it provides a user-friendly environment through which users can graphically form and execute their queries.

Keywords: Ontologies, Relational Databases, SPARQL, Web Interface.

713 A New Approach for Recoverable Timestamp Ordering Schedule

Authors: Hassan M. Najadat

Abstract:

A new approach to the timestamp ordering problem in serializable schedules is presented. Since the number of users using databases is increasing rapidly, accuracy and the need for high throughput are the main topics in the database area. Strict 2PL does not allow all possible serializable schedules and so does not yield high throughput. The main advantages of the approach are the ability to enforce the execution of transactions to be recoverable and the high achievable performance of concurrent execution in centralized databases. Compared to Strict 2PL, the general structure of the algorithm is simple, deadlock-free, and allows the execution of all possible serializable schedules, which results in high throughput. Various examples which include different orders of database operations are discussed.

Keywords: Concurrency control, schedule, timestamp, transaction.
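For context, a minimal sketch of the basic (non-recoverable) timestamp-ordering rules that the paper's approach extends: each data item keeps read/write timestamps and operations arriving "too late" are rejected. The recoverability enforcement proposed in the paper is not reproduced here.

```python
class TOItem:
    """One database item under basic timestamp-ordering concurrency control."""
    def __init__(self):
        self.read_ts = 0
        self.write_ts = 0

    def read(self, ts):
        if ts < self.write_ts:          # item already overwritten by a younger transaction
            return False                # reject: the reading transaction must abort/restart
        self.read_ts = max(self.read_ts, ts)
        return True

    def write(self, ts):
        if ts < self.read_ts or ts < self.write_ts:
            return False                # reject: the write arrives too late
        self.write_ts = ts
        return True

if __name__ == "__main__":
    x = TOItem()
    print(x.read(ts=5), x.write(ts=7), x.write(ts=6))   # True True False
```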

712 MONPAR - A Page Replacement Algorithm for a Spatiotemporal Database

Authors: U. Kalay, O. Kalıpsız

Abstract:

For a spatiotemporal database management system, the I/O cost of queries and other operations is an important performance criterion. In order to optimize this cost, intense research on designing robust index structures has been done in the past decade. Beyond these major considerations, there are still other design issues that deserve attention due to their direct impact on I/O cost. In particular, an efficient buffer management strategy plays a key role in reducing redundant disk accesses. In this paper, we propose an efficient buffer strategy for a spatiotemporal database index structure, specifically one indexing objects moving over a network of roads. The proposed strategy, namely MONPAR, is based on the data type (i.e. spatiotemporal data) and the structure of the index. For the purpose of an experimental evaluation, we set up a simulation environment that counts the number of disk accesses while executing a number of spatiotemporal range queries over the index. We repeated the simulations with query sets with different distributions, such as a uniform query distribution and a skewed query distribution. Based on the comparison of our strategy with well-known page-replacement techniques, such as LRU-based and priority-based buffers, we conclude that MONPAR behaves better than its competitors for small and medium-size buffers under all the query distributions used.

Keywords: Buffer Management, Spatiotemporal databases.
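MONPAR itself is not specified in the abstract, so the sketch below only shows the kind of baseline it is compared against: an LRU page buffer that counts disk accesses on misses, which is the metric used in the simulations described above.

```python
from collections import OrderedDict

class LRUBuffer:
    """LRU page buffer; counts the disk accesses caused by buffer misses."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()
        self.disk_accesses = 0

    def access(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)          # hit: refresh recency
        else:
            self.disk_accesses += 1                  # miss: read the page from disk
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)       # evict the least recently used page
            self.pages[page_id] = True

if __name__ == "__main__":
    buf = LRUBuffer(capacity=3)
    for p in [1, 2, 3, 1, 4, 2, 5, 1]:               # pages touched by range queries (made up)
        buf.access(p)
    print(buf.disk_accesses)
```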

711 Business Rules for Data Warehouse

Authors: Rajeev Kaula

Abstract:

Business rules and data warehouses are concepts and technologies that impact a wide variety of organizational tasks. In general, each area has evolved independently, impacting application development and decision-making. Generating knowledge from a data warehouse is a complex process. This paper outlines an approach to ease the import of information and knowledge from a data warehouse star schema through an inference class of business rules. The paper utilizes the Oracle database to illustrate the working of the concepts. The star schema structure and the business rules are stored within a relational database. The approach is explained through a prototype in Oracle's PL/SQL Server Pages.

Keywords: Business Rules, Data Warehouse, PL/SQL Server Pages, Relational Model, Web Application.

710 Thailand National Biodiversity Database System with webMathematica and Google Earth

Authors: W. Katsarapong, W. Srisang, K. Jaroensutasinee, M. Jaroensutasinee

Abstract:

A National Biodiversity Database System (NBIDS) has been developed for collecting Thai biodiversity data. The goal of this project is to provide advanced tools for querying, analyzing, modeling, and visualizing patterns of species distribution for researchers and scientists. NBIDS records two types of datasets: biodiversity data and environmental data. Biodiversity data are species presence data and species status. The attributes of biodiversity data can be further classified into two groups: universal and project-specific attributes. Universal attributes are attributes that are common to all of the records, e.g. X/Y coordinates, year, and collector name. Project-specific attributes are attributes that are unique to one or a few projects, e.g., flowering stage. Environmental data include atmospheric data, hydrology data, soil data, and land cover data collected using GLOBE protocols. We have developed web-based tools for data entry. Google Earth KML and ArcGIS were used as tools for map visualization. webMathematica was used for simple data visualization and also for advanced data analysis and visualization, e.g., spatial interpolation and statistical analysis. NBIDS will be used by park rangers at Khao Nan National Park, and by researchers.

Keywords: GLOBE protocol, Biodiversity, Database System, ArcGIS, Google Earth and webMathematica.

709 Object Recognition in Color Images by the Self Configuring System MEMORI

Authors: Michela Lecca

Abstract:

System MEMORI automatically detects and recognizes rotated and/or rescaled versions of the objects of a database within digital color images with cluttered background. This task is accomplished by means of a region grouping algorithm guided by heuristic rules, whose parameters concern some geometrical properties and the recognition score of the database objects. This paper focuses on the strategies implemented in MEMORI for the estimation of the heuristic rule parameters. This estimation, being automatic, makes the system a self configuring and highly user-friendly tool.

Keywords: Automatic Object Recognition, Clustering, Content-based Image Retrieval System, Image Segmentation, Region Adjacency Graph, Region Grouping.

708 An Approach to Polynomial Curve Comparison in Geometric Object Database

Authors: Chanon Aphirukmatakun, Natasha Dejdumrong

Abstract:

In image processing and visualization, comparing two bitmapped images requires matching them pixel by pixel, which takes a lot of computational time, while the comparison of two vector-based images is significantly faster. Sometimes raster graphics images can be approximately converted into vector-based images by various techniques. After conversion, the problem of comparing two raster graphics images can be reduced to the problem of comparing vector graphics images; hence, the problem of comparing pixel by pixel can be reduced to the problem of polynomial comparison. In computer aided geometric design (CAGD), vector graphics images are compositions of curves and surfaces. Curves are defined by a sequence of control points and their polynomials. In this paper, the control points are used to compare curves. Curves that have only been relocated or rotated are treated as equivalent, while two curves that differ only in scale are considered similar. This paper proposes an algorithm for comparing polynomial curves by using the control points for equivalence and similarity. In addition, the geometric object-oriented database used to keep the curve information has also been defined in XML format for further use in curve comparisons.

Keywords: Bezier curve, Said-Ball curve, Wang-Ball curve, DP curve, CAGD, comparison, geometric object database.
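A small numpy sketch of the comparison idea described above: two curves are compared through their control points, first removing translation and rotation (for equivalence) and optionally scale (for similarity), using an orthogonal Procrustes alignment. The exact test used by the authors may differ, and reflections are not handled here.

```python
import numpy as np

def align(p, q, allow_scale=False):
    """Best rigid (optionally scaled) alignment of control-point set p onto q.
    Returns the residual error after alignment (orthogonal Procrustes)."""
    p0, q0 = p - p.mean(axis=0), q - q.mean(axis=0)       # remove translation
    u, s, vt = np.linalg.svd(p0.T @ q0)
    r = u @ vt                                            # optimal rotation
    if allow_scale:
        p0 = (s.sum() / (p0 ** 2).sum()) * p0             # optimal uniform scale
    return float(np.linalg.norm(p0 @ r - q0))

def equivalent(p, q, tol=1e-8):
    return align(p, q, allow_scale=False) < tol           # same curve up to relocation/rotation

def similar(p, q, tol=1e-8):
    return align(p, q, allow_scale=True) < tol            # same curve up to scaling as well

if __name__ == "__main__":
    bezier = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 3.0], [4.0, 0.0]])   # control points
    theta = np.pi / 6
    rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    moved = bezier @ rot.T + np.array([5.0, -2.0])        # relocated and rotated copy
    print(equivalent(bezier, moved), similar(bezier, 2.0 * moved))   # True True
```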

707 PP-FSM: Peer to Peer File Share for Multimedia

Authors: Arsalan Ali Shah, Zafar I. Malik, Shaukat Ali

Abstract:

Peer-to-Peer (P2P) is a self-organizing resource-sharing network with no centralized authority or infrastructure, which makes it unpredictable and vulnerable. In this paper, we propose an architecture to make the peer-to-peer network more centralized, predictable, and safer to use by implementing trust and stopping free riding.

Keywords: File Share, Free Riding, Peer-to-Peer, Trust.

706 Effect of Network Communication Overhead on the Performance of Adaptive Speculative Locking Protocol

Authors: Waqar Haque, Pai Qi

Abstract:

The speculative locking (SL) protocol extends the two-phase locking (2PL) protocol to allow for parallelism among conflicting transactions. The adaptive speculative locking (ASL) protocol provided further enhancements and outperformed SL protocols under most conditions. Neither of these protocols considers the impact of network latency on the performance of distributed database systems. We have studied the performance of the ASL protocol taking into account the communication overhead. The results indicate that although system load can counter network latency, latency can still become a bottleneck in many situations. The impact of latency on performance depends on many factors, including the system resources. A flexible discrete event simulator was used as the testbed for this study.

Keywords: concurrency control, distributed database systems, speculative locking
