Search results for: HouseHold Registry Database
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 859

739 A New Spectral-based Approach to Query-by-Humming for MP3 Songs Database

Authors: Leon Fu, Xiangyang Xue

Abstract:

In this paper, we propose a new approach to query-by-humming, focusing on MP3 song databases. Since melody representation is much more difficult for MP3 songs than for symbolic performance data, we extract feature descriptors from the vocal part of the songs. Our approach is based on signal filtering, sub-band spectral processing, MDCT coefficient analysis, and peak energy detection, ignoring the background music as far as possible. Finally, we apply a dual dynamic programming algorithm for feature similarity matching. Experiments demonstrate the online performance of the approach in terms of precision and efficiency.
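The dynamic-programming matching step can be pictured with a small sketch. The following is a minimal illustration, not the authors' dual DP algorithm: it aligns a hummed pitch contour with a stored contour using a single edit-distance-style DP; the contour values and the `dp_melody_distance` helper are assumptions made for the example.

```python
# Illustrative sketch only: a basic dynamic-programming alignment between a
# hummed pitch contour and a reference contour extracted from a song. The
# paper's dual DP algorithm and MDCT-based features are not reproduced here.

def dp_melody_distance(query, reference):
    """Edit-distance-style DP over two pitch contours (lists of floats)."""
    n, m = len(query), len(reference)
    INF = float("inf")
    # dp[i][j] = minimal accumulated cost aligning query[:i] with reference[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(query[i - 1] - reference[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j - 1],  # match
                                  dp[i - 1][j],      # skip a query note
                                  dp[i][j - 1])      # skip a reference note
    return dp[n][m] / max(n, m)   # length-normalised distance

def rank_songs(query_contour, song_contours):
    """Rank (name, contour) pairs by similarity to the hummed query."""
    return sorted(song_contours,
                  key=lambda kv: dp_melody_distance(query_contour, kv[1]))

if __name__ == "__main__":
    hummed = [60.2, 62.1, 64.0, 62.0, 60.1]            # MIDI-like pitch values
    db = [("song_a", [60, 62, 64, 62, 60]), ("song_b", [55, 57, 55, 53, 52])]
    print(rank_songs(hummed, db)[0][0])                # best match first
```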

Keywords: DP, MDCT, MP3, QBH.

738 Information Retrieval: A Comparative Study of Textual Indexing Using an Object-Oriented Database (db4o) and the Inverted File

Authors: Mohammed Erritali

Abstract:

The growth in the volume of text data such as books and articles in libraries over the centuries has made it necessary to establish effective mechanisms for locating them. Early techniques such as abstracting, indexing and the use of classification categories marked the birth of a new field of research called "Information Retrieval". Information Retrieval (IR) can be defined as the task of devising models and systems whose purpose is to facilitate access to a set of documents in electronic form (a corpus), allowing users to find the documents relevant to them, that is, the content that matches their information needs. Most information retrieval models index a corpus with a specific data structure called the "inverted file" or "inverted index". This inverted file collects information about every term appearing in the corpus documents, specifying the identifiers of the documents that contain the term, the frequency of the term in each document, the positions of its occurrences, and so on. In this paper we use an object-oriented database (db4o) in place of the inverted file; that is, instead of looking a term up in the inverted file, we look it up in the db4o database. The purpose of this work is a comparative study to see whether object-oriented databases can compete with the inverted index in terms of access speed and resource consumption on a large volume of data.
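For readers unfamiliar with the inverted file that the paper benchmarks against, a minimal in-memory sketch is shown below; the db4o variant would persist equivalent term/posting objects in the object database instead of a Python dictionary. The corpus and function names are illustrative assumptions.

```python
# Minimal sketch of the classic inverted-file structure:
# term -> {document id -> [positions]}.
from collections import defaultdict

def build_inverted_file(corpus):
    index = defaultdict(lambda: defaultdict(list))
    for doc_id, text in corpus.items():
        for pos, term in enumerate(text.lower().split()):
            index[term][doc_id].append(pos)
    return index

def search(index, term):
    postings = index.get(term.lower(), {})
    # Return (doc id, term frequency, positions) for each matching document.
    return [(doc, len(positions), positions) for doc, positions in postings.items()]

if __name__ == "__main__":
    corpus = {1: "information retrieval with inverted files",
              2: "object databases for information retrieval"}
    idx = build_inverted_file(corpus)
    print(search(idx, "information"))   # hits in both documents
```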

Keywords: Information Retrieval, indexing, object-oriented database (db4o), inverted file.

737 Visualization and Indexing of Spectral Databases

Authors: Tibor Kulcsar, Gabor Sarossy, Gabor Bereznai, Robert Auer, Janos Abonyi

Abstract:

On-line (near-infrared) spectroscopy is widely used to support the operation of complex process systems. Information extracted from a spectral database can be used to estimate unmeasured product properties and to monitor the operation of the process. These techniques are based on looking for similar spectra with nearest-neighbour algorithms and distance-based search methods. Searching for nearest neighbours in the spectral space is computationally expensive; the cost increases with the number of points in the discrete spectrum and the number of samples in the database. To reduce the calculation time, some kind of indexing can be used. The main idea presented in this paper is to combine indexing and visualization techniques to reduce the computational requirements of estimation algorithms by providing a two-dimensional indexing that can also be used to visualize the structure of the spectral database. This 2D visualization of the spectral database not only supports the application of distance- and similarity-based techniques but also enables the use of advanced clustering and prediction algorithms based on the Delaunay tessellation of the mapped spectral space. This means that prediction does not have to work in the high-dimensional space but can be based on the mapped space instead. The results illustrate that the proposed method is able to segment (cluster) spectral databases and to detect outliers that are not suitable for instance-based learning algorithms.
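As a rough illustration of searching in a mapped space, the sketch below projects spectra to two dimensions with a plain PCA (standing in for the paper's mapping, which it does not reproduce) and runs a brute-force nearest-neighbour search there; all data and names are made up for the example.

```python
# Sketch under assumptions: spectra are rows of a matrix; a simple PCA-based
# 2D projection stands in for the paper's mapping, and a brute-force nearest
# neighbour search runs in the mapped space instead of the full spectral space.
import numpy as np

def project_to_2d(spectra):
    """Return a 2D embedding of the spectra via PCA (numpy SVD)."""
    centred = spectra - spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:2].T          # coordinates in the first two components

def nearest_neighbour(mapped_db, mapped_query):
    d = np.linalg.norm(mapped_db - mapped_query, axis=1)
    return int(np.argmin(d)), float(d.min())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    db_spectra = rng.random((100, 500))        # 100 stored spectra, 500 wavelengths
    query = db_spectra[42] + 0.001 * rng.random(500)   # near-duplicate query
    mapped = project_to_2d(np.vstack([db_spectra, query]))
    idx, dist = nearest_neighbour(mapped[:-1], mapped[-1])
    print("closest stored spectrum:", idx, "distance:", round(dist, 4))
```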

Keywords: indexing of high-dimensional databases, dimensionality reduction, clustering, similarity, k-NN algorithm.

736 A Comparative Analysis of Financial Performance of Funded and Non-Funded Charity Organizations

Authors: Saunah Zainon, Ruhaya Atan, Yap Bee Wah, Zarina Abu Bakar

Abstract:

The primary objective of this study is to test whether there is any difference in performance between funded and non-funded registered charity organizations. In this study, performance, the dependent variable, is measured by total donations. Using a sample of 101 charity organizations registered with the Registry of Societies, analysis of variance (ANOVA) results indicate that there is a difference in financial performance between funded and non-funded charity organizations. The study provides empirical evidence for resource providers and policy makers when scrutinizing decisions to disburse funds and resources to these charity organizations.
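A one-way ANOVA of this kind can be reproduced in a few lines; the sketch below uses scipy.stats.f_oneway on invented donation figures, purely to illustrate the test, not the study's data.

```python
# Hedged sketch: a one-way ANOVA comparing total donations of funded vs.
# non-funded charities, analogous in spirit to the paper's test. The figures
# below are made up for illustration.
from scipy import stats

funded_donations = [120_000, 95_000, 210_000, 150_000, 180_000]
non_funded_donations = [40_000, 55_000, 30_000, 75_000, 60_000]

f_stat, p_value = stats.f_oneway(funded_donations, non_funded_donations)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) would indicate a difference in mean donations
# between the two groups, mirroring the ANOVA result reported in the study.
```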

Keywords: charity organizations, donations, funded, non-funded

735 Discovery of Production Rules with Fuzzy Hierarchy

Authors: Fadl M. Ba-Alwi, Kamal K. Bharadwaj

Abstract:

In this paper a novel algorithm is proposed that integrates the processes of fuzzy hierarchy generation and rule discovery for the automated discovery of Production Rules with Fuzzy Hierarchy (PRFH) in large databases. A frequency matrix (Freq) is introduced to summarize the large database; it helps minimize the number of database accesses and supports the identification and removal of irrelevant attribute values and weak classes during fuzzy hierarchy generation. Experimental results have established the effectiveness of the proposed algorithm.

Keywords: Data Mining, Degree of subsumption, Freq matrix, Fuzzy hierarchy.

734 Towards an Extended SQLf: Bipolar Query Language with Preferences

Authors: L. Ludovic, R. Daniel, S-E Tbahriti

Abstract:

Database management systems that integrate user preferences promise better personalization, greater flexibility and higher-quality query responses. This paper presents tentative work that studies and investigates approaches to expressing user preferences in queries. We sketch extended capabilities of the SQLf language, which uses fuzzy set theory to define user preferences. Two essential points are considered: the first concerns the expression of user preferences in SQLf through a set of so-called commensurable fuzzy predicates; the second concerns the bipolar way in which these preferences are expressed, as mandatory and/or optional preferences.
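To make the notion of a fuzzy predicate concrete, the sketch below evaluates a hypothetical "cheap" predicate over relational tuples and ranks the answers by satisfaction degree, in the spirit of SQLf; the membership function and threshold are assumptions made for the example, not part of the proposed language.

```python
# Illustrative sketch only: evaluating a fuzzy predicate such as "price is
# cheap" over tuples and ranking by membership degree, SQLf-style.
def cheap(price, full=100, zero=300):
    """Membership in the fuzzy set 'cheap': 1 below `full`, 0 above `zero`."""
    if price <= full:
        return 1.0
    if price >= zero:
        return 0.0
    return (zero - price) / (zero - full)

rows = [{"id": 1, "price": 80}, {"id": 2, "price": 150}, {"id": 3, "price": 400}]

# A fuzzy SELECT keeps tuples whose satisfaction degree exceeds a threshold
# and ranks them by that degree, rather than filtering with a crisp condition.
answers = sorted(((cheap(r["price"]), r) for r in rows),
                 key=lambda t: t[0], reverse=True)
print([(round(mu, 2), r["id"]) for mu, r in answers if mu > 0])
```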

Keywords: Flexible query language, relational database, user preference.

733 Fast Database Indexing for Large Protein Sequence Collections Using Parallel N-Gram Transformation Algorithm

Authors: Jehad A. H. Hammad, Nur'Aini binti Abdul Rashid

Abstract:

With the rapid development of the life sciences and the flood of genomic information, the need for faster and more scalable search methods has become urgent. One of the approaches investigated is indexing. Indexing methods have been categorized into three groups: length-based index algorithms, transformation-based algorithms and mixed-technique algorithms. In this research, we focus on the transformation-based methods. We embed the N-gram method into the transformation-based method to build an inverted index table. We then apply parallel methods to speed up index building and to reduce the overall retrieval time when querying the genomic database. Our experiments show that the N-gram transformation algorithm is an economical solution; it saves both time and space. The results show that the size of the index is smaller than the size of the dataset when the N-gram size is 5 or 6. The results for the parallel N-gram transformation algorithm indicate that the use of parallel programming with large datasets is promising and can be improved further.
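A serial version of the N-gram inverted index is easy to sketch; the parallel construction described in the paper is not reproduced here, and the sequences and helper names below are illustrative assumptions.

```python
# Sketch under assumptions: build an inverted index over fixed-length n-grams
# of protein sequences, serially here; the paper parallelises this step.
from collections import defaultdict

def ngrams(seq, n):
    return (seq[i:i + n] for i in range(len(seq) - n + 1))

def build_ngram_index(sequences, n=5):
    index = defaultdict(set)
    for seq_id, seq in sequences.items():
        for gram in ngrams(seq, n):
            index[gram].add(seq_id)
    return index

def query(index, fragment, n=5):
    """Return ids of sequences sharing every n-gram of the query fragment."""
    hits = [index.get(g, set()) for g in ngrams(fragment, n)]
    return set.intersection(*hits) if hits else set()

if __name__ == "__main__":
    db = {"P1": "MKTAYIAKQRQISFVKSHFSRQ", "P2": "MKLVINGKTLKGEITVEG"}
    idx = build_ngram_index(db, n=5)
    print(query(idx, "AYIAKQR", n=5))   # -> {'P1'}
```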

Keywords: Biological sequence, Database index, N-gram indexing, Parallel computing, Sequence retrieval.

732 An Empirical Assessment of Sustainability of an Urban Water Supply Service Delivery

Authors: Olayinka Gafar Okeola, Akinola Muyiwa Moore

Abstract:

The urban population is increasing rapidly in Ilorin (the capital of Kwara State, Nigeria), with a corresponding increase in water demand. The inadequacies of the water supply services have forced the populace to depend on dug wells, boreholes, water tankers, street vendors, etc. for their water needs. People spend hours daily carrying jerry cans around to collect and queue for water at public water taps, at a high opportunity cost in both time and money. This situation motivated this study to assess the sustainability of an urban water supply service and to unravel the factors undermining effective service delivery. The Contingent Valuation Method was used to place a value on water supply services, with the double-bounded dichotomous choice format used to elicit willingness to pay. A database was created with Microsoft Excel and Stata 12 software to model and evaluate the variables that affect household willingness to pay. The results of the study reveal that about 92% of the households surveyed were connected to the government water supply, of which 87% reported that they were not satisfied with the existing services. The results further revealed that respondents are willing to pay ₦2500 monthly to enjoy sustainable water supply service delivery.

Keywords: Willingness to pay, contingent valuation method, Nigeria, service delivery.

731 A Study of Cancer-related MicroRNAs through Expression Data and Literature Search

Authors: Chien-Hung Huang, Chia-Wei Weng, Chang-Chih Chiang, Shih-Hua Wu, Chih-Hsien Huang, Ka-Lok Ng

Abstract:

MicroRNAs (miRNAs) are a class of non-coding RNAs that hybridize to mRNAs and induce either translational repression or mRNA cleavage. Recently, it has been reported that miRNAs may play an important role in human diseases. By integrating information on miRNA target genes, cancer genes, and miRNA and mRNA expression profiles, a database has been developed that links miRNAs to cancer target genes. The database provides experimentally verified human miRNA target gene information, including oncogenes and tumor suppressor genes. In addition, fragile site information for miRNAs and the strength of the correlation between miRNA and target mRNA expression levels for nine tissue types are computed, serving as indicators that a miRNA may play a role in human cancer. The database is freely accessible at http://ppi.bioinfo.asia.edu.tw/mirna_target/index.html.

Keywords: MicroRNA, miRNA expression profile, mRNA expression profile, cancer genes, oncogene, tumor suppressor gene

730 Indoor Localization by Pattern Matching Method Based On Extended Database

Authors: Gyumin Hwang, Jihong Lee

Abstract:

This paper studies a CSS-based indoor localization system, which is easy to implement, inexpensive to build, and covers a larger area than other systems. However, the system suffers from distance measurements corrupted by reflections, a problem caused by the multi-path effect. Errors caused by multi-path propagation are difficult to correct because the indoor environment cannot be fully described. In this paper, in order to solve the multi-path problem, we supplement the localization system with a pattern matching method based on an extended database, thereby improving the precision of the estimates. The method is verified by experiments in a gymnasium. The database was constructed at 1 m intervals, and 16 sample measurements were collected from random positions inside the region of the database points. The results, presented in graphs and tables, show higher accuracy than the existing method.
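The pattern matching step can be illustrated with a small fingerprinting sketch: a measurement vector is matched to the closest stored database entry by Mahalanobis distance. The grid, anchor ranges and covariance below are assumptions for the example, not the paper's experimental values.

```python
# Minimal sketch, assuming a fingerprint database of range measurements taken
# on a 1 m grid; a new measurement is matched to the closest database entry by
# Mahalanobis distance, as in pattern-matching localisation.
import numpy as np

def mahalanobis(x, mean, cov_inv):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def localise(measurement, fingerprint_db, cov):
    cov_inv = np.linalg.inv(cov)
    best = min(fingerprint_db.items(),
               key=lambda kv: mahalanobis(measurement, kv[1], cov_inv))
    return best[0]   # grid coordinate of the best-matching fingerprint

if __name__ == "__main__":
    # Fingerprints: grid position -> mean range readings from 3 anchors (metres).
    db = {(0, 0): np.array([2.0, 5.0, 7.0]),
          (1, 0): np.array([2.8, 4.3, 6.1]),
          (0, 1): np.array([3.1, 5.9, 6.4])}
    noise_cov = np.diag([0.25, 0.25, 0.25])    # assumed measurement covariance
    reading = np.array([2.9, 4.2, 6.0])
    print(localise(reading, db, noise_cov))    # -> (1, 0)
```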

Keywords: Chirp Spread Spectrum (CSS), Indoor Localization, Pattern-Matching, Time of Arrival (ToA), Multi-Path, Mahalanobis Distance, Reception Rate, Simultaneous Localization and Mapping (SLAM), Laser Range Finder (LRF).

729 Consumer Insolvency in the Czech Republic

Authors: Jindřiška Šedová

Abstract:

The Czech Republic is a country whose economy has undergone a transformation since 1989. Since joining the EU it has been striving to reduce the differences in its economic standard and the quality of its institutional environment in comparison with developed countries. According to an assessment carried out by the World Bank, the Czech Republic was long classed as a country whose institutional development was seen as problematic. For many years one of the things it was rated most poorly on was its bankruptcy law. The new Insolvency Act, a modern law in its treatment of bankruptcy, was first adopted in the Czech Republic in 2006. This law, together with other regulatory measures, offers debt-ridden Czech economic subjects legal instruments which are well established and in common practice in developed market economies. Since then, analyses performed by the World Bank and the London-based EBRD have shown that there have been significant steps forward in the quality of Czech bankruptcy law. The Czech Republic still lacks an analytical apparatus that can offer a structured characterisation of the general and specific conditions of Czech company and household debt subject to the current changes in the global economy. This area has so far not been given the attention it deserves. The lack of research is particularly clear as regards the analysis of household debt and householders' ability to settle their debts in a reasonable manner using legal and other state means of regulation. We assume that Czech households have recourse to a modern insolvency law, yet the effective application of this law is hampered by inconsistencies between the formal and informal institutions involved in resolving debt. This in turn is based on the assumption that this lack of consistency is more marked in cases of personal bankruptcy. Our aim is to identify the symptoms which indicate that, for some time, the effective application of bankruptcy law in the Czech Republic will be hindered by factors originating in householders' relative inability to identify the risks of falling into debt.

Keywords: bankruptcy law, household debt, consumer bankruptcy, business bankruptcy

728 Survey on Image Mining Using Genetic Algorithm

Authors: Jyoti Dua

Abstract:

One image is worth more than a thousand words, and images, if analyzed, can reveal useful information. Low-level image processing deals with the extraction of specific features from a single image. The question then arises: what technique should be used to extract patterns from a very large and detailed image database? The answer is image mining. Image mining deals with the extraction of image data relationships, implicit knowledge, and other patterns from a collection of images or an image database; it is an extension of data mining. In this paper, we not only scrutinize current image mining techniques but also present a new technique for mining images using a genetic algorithm.

Keywords: Image Mining, Data Mining, Genetic Algorithm.

727 Incidence of Disasters and Coping Mechanism among Farming Households in South West Nigeria

Authors: Fawehinmi Olabisi Alaba, Adeniyi O. R.

Abstract:

Farming households face many disasters, which contribute to endemic poverty, and anticipated increases in extreme weather events will exacerbate this. Primary data were collected from farming households using a multi-stage random sampling technique. The results of the analysis show that the majority of the respondents (69.9%) are male, with mean household size, years of formal education and age of 5±1.14, 6±3.41, and 51.06±10.43 respectively. The major type of disaster experienced (48.9%) is flooding. The major coping mechanism adopted is seeking support from family and friends. Age, education, experience, access to extension agents, and mitigation control methods contribute significantly to vulnerability to disaster. The major adaptation method (62.3%) is the construction of drainage.

The study revealed that the coping mechanisms employed may become less effective as increasingly fragile livelihood systems struggle to withstand disaster shocks. There is therefore a need to train farmers on adaptation measures that mitigate the shocks from disasters.

Keywords: Adaptation, Disasters, Flooding, Vulnerability.

726 Moving towards Zero Waste in a UK Local Authority Area: Challenges to the Introduction of Separate Food Waste Collections

Authors: C. Cole, M. Osmani, A. Wheatley, M. Quddus

Abstract:

EU and UK Government targets for minimising and recycling household waste have led the responsible authorities to research alternatives to landfill. In the work reported here, the local waste collection authority (Charnwood Borough Council) has adopted the aspirational strategy of becoming a "Zero Waste Borough" to lead the drive for public participation. The work concludes that separate collection of food waste would be needed to meet the two regulatory standards on recycling and biologically active wastes.

An analysis of a neighbouring authority, Newcastle-under-Lyme Borough Council (NBC), a similarly sized local authority with a successful weekly food waste collection service, was undertaken. Results indicate that the main challenges for Charnwood Borough Council would be gaining householder co-operation, the extra costs of collection, and organising alternative treatment. The analysis also demonstrated that there was potential offset value via anaerobic digestion for CBC to overcome these difficulties and improve its recycling performance.

Keywords: England, Food Waste Collections, Household Waste, Local Authority.

725 Data Transformation Services (DTS): Creating Data Mart by Consolidating Multi-Source Enterprise Operational Data

Authors: J. D. D. Daniel, K. N. Goh, S. M. Yusop

Abstract:

Trends in business intelligence, e-commerce and remote access make it necessary and practical to store data in different ways on multiple systems with different operating systems. As businesses evolve and grow, they require efficient computerized solutions to perform data updates and to access data from diverse enterprise business applications. The objective of this paper is to demonstrate the capability of DTS [1] as a database solution for automatic data transfer and update in solving a business problem. The DTS package was developed for a business selling a variety of plants that eventually expanded into commercial supply and landscaping. Dimensional data modeling is used in the DTS package to extract, transform and load data from heterogeneous database systems such as MySQL, Microsoft Access and Oracle into a consolidated data mart residing in SQL Server. The data transfer from the various databases is scheduled to run automatically every quarter to support efficient sales analysis. DTS is therefore an attractive solution for automatic data transfer and update that meets today's business needs.
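The extract-transform-load flow behind such a package can be sketched in a few lines; the example below uses SQLite as a stand-in target and invented sales rows, so it illustrates the consolidation pattern rather than the actual DTS implementation.

```python
# Hedged sketch: a miniature "DTS-style" transfer that extracts rows from two
# heterogeneous sources, applies a simple transformation, and loads them into
# a consolidated sales fact table (sqlite stands in for SQL Server here).
import sqlite3

def extract():
    # In practice these rows would come from MySQL, Access and Oracle connections.
    source_a = [("2023-Q1", "plants", 1200.0), ("2023-Q2", "plants", 900.0)]
    source_b = [("2023-Q1", "landscaping", 4500.0)]
    return source_a + source_b

def transform(rows):
    # Normalise the business line names and round monetary values.
    return [(period, line.upper(), round(amount, 2)) for period, line, amount in rows]

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS sales_fact "
                 "(period TEXT, business_line TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales_fact VALUES (?, ?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    mart = sqlite3.connect(":memory:")
    load(transform(extract()), mart)
    for row in mart.execute("SELECT period, SUM(amount) FROM sales_fact GROUP BY period"):
        print(row)
```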

Keywords: Data Transformation Services (DTS), Object Linking and Embedding Database (OLE DB), Data Mart, Online Analytical Processing (OLAP), Online Transactional Processing (OLTP).

724 A Case Study on Appearance Based Feature Extraction Techniques and Their Susceptibility to Image Degradations for the Task of Face Recognition

Authors: Vitomir Struc, Nikola Pavesic

Abstract:

Over the past decades, automatic face recognition has become a highly active research area, mainly due to the countless application possibilities in both the private and the public sector. Numerous algorithms have been proposed in the literature to cope with the problem of face recognition; nevertheless, a group of methods commonly referred to as appearance based has emerged as the dominant solution to the face recognition problem. Many comparative studies concerned with the performance of appearance based methods have already been presented in the literature, not rarely with inconclusive and often contradictory results. No consensus has been reached within the scientific community regarding the relative ranking of the efficiency of appearance based methods for the face recognition task, let alone regarding their susceptibility to appearance changes induced by various environmental factors. To tackle these open issues, this paper assesses the performance of the three dominant appearance based methods: principal component analysis, linear discriminant analysis and independent component analysis, and compares them on an equal footing (i.e., with the same preprocessing procedure, with parameters optimized for the best possible performance, etc.) in face verification experiments on the publicly available XM2VTS database. In addition to the comparative analysis on the XM2VTS database, ten degraded versions of the database are also employed in the experiments to evaluate the susceptibility of the appearance based methods to various image degradations which can occur in "real-life" operating conditions. Our experimental results suggest that linear discriminant analysis ensures the most consistent verification rates across the tested databases.
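As background, the appearance-based idea can be sketched with a bare-bones eigenface-style pipeline: vectorised images are projected onto a PCA subspace and verification is a distance threshold in that subspace. The data, threshold and helper names below are assumptions; the paper's preprocessing and the XM2VTS protocol are not reproduced.

```python
# Illustrative sketch of the appearance-based idea behind eigenfaces (PCA):
# project vectorised face images onto a low-dimensional subspace and compare
# them there. LDA/ICA differ in how the subspace is chosen.
import numpy as np

def fit_pca(train_images, n_components=20):
    mean = train_images.mean(axis=0)
    centred = train_images - mean
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:n_components]             # mean face and basis vectors

def project(image, mean, basis):
    return (image - mean) @ basis.T

def verify(probe, claimed_template, mean, basis, threshold=5.0):
    """Accept the identity claim if the subspace distance is below a threshold."""
    d = np.linalg.norm(project(probe, mean, basis) - claimed_template)
    return d < threshold

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    gallery = rng.random((50, 32 * 32))        # 50 vectorised training faces
    mean, basis = fit_pca(gallery, n_components=10)
    template = project(gallery[0], mean, basis)
    print(verify(gallery[0] + 0.01 * rng.random(32 * 32), template, mean, basis))
```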

Keywords: Biometrics, face recognition, appearance based methods, image degradations, the XM2VTS database.

723 Enhanced Disk-Based Databases Towards Improved Hybrid In-Memory Systems

Authors: Samuel Kaspi, Sitalakshmi Venkatraman

Abstract:

In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and processors in modern high-end servers with the capacity to manage large in-memory database transactions. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks and distributed transaction costs, disk-based data stores still serve as the primary persistence layer. In addition, with the recent growth in multi-tenancy cloud applications and the associated security concerns, many organisations consider the trade-offs and continue to require fast and reliable transaction processing from disk-based database systems as an available choice. For these organizations, the only way of increasing throughput is by improving the performance of disk-based concurrency control. This warrants a hybrid database system with the ability to selectively apply enhanced disk-based data management within the context of in-memory systems, which would help improve overall throughput. The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance, which we call enhanced memory access (EMA), can be used to allow very high levels of concurrency in the pre-fetching of data in disk-based systems. We demonstrate how this prefetching in disk-based systems can yield close to in-memory performance, which paves the way for improved hybrid database systems. This paper proposes a novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware configurations of equivalent power in terms of the number of processors and their speeds. The results of the experiments conducted clearly substantiate that, when used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems. The promising results of this work show that enhanced disk-based systems facilitate improved hybrid data management within the broader context of in-memory systems.

Keywords: Concurrency control, disk-based databases, in-memory systems, enhanced memory access (EMA).

722 SQL Generator Based On MVC Pattern

Authors: Chanchai Supaartagorn

Abstract:

Structured Query Language (SQL) is the de facto standard language for accessing and manipulating data in a relational database. Although SQL is simple and powerful, most novice users have trouble with its syntax. We therefore present an SQL generator tool that is capable of translating user actions into SQL commands and displaying the commands and the resulting data sets simultaneously. The tool was developed based on the Model-View-Controller (MVC) pattern. The MVC pattern is a widely used software design pattern that enforces the separation between the input, processing, and output of an application. Developers take full advantage of it to reduce the complexity of architectural design and to increase the flexibility and reuse of code. In addition, we use white-box testing to verify the code in the Model module.
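The MVC separation described here can be sketched briefly: a Model that builds the SQL string, a View that displays it, and a Controller that maps a user action to a model call. The class and field names below are assumptions for illustration, not the tool's actual code.

```python
# Minimal MVC sketch for an SQL generator: Model builds SQL, View renders it,
# Controller translates a user action into a model call.
class QueryModel:
    def select(self, table, columns, where=None):
        sql = f"SELECT {', '.join(columns)} FROM {table}"
        if where:
            sql += f" WHERE {where}"
        return sql + ";"

class QueryView:
    def render(self, sql):
        print("Generated SQL:", sql)

class QueryController:
    def __init__(self, model, view):
        self.model, self.view = model, view

    def on_user_action(self, action):
        # The user picks a table, columns and an optional condition in the GUI.
        sql = self.model.select(action["table"], action["columns"], action.get("where"))
        self.view.render(sql)
        return sql

if __name__ == "__main__":
    controller = QueryController(QueryModel(), QueryView())
    controller.on_user_action({"table": "students", "columns": ["name", "gpa"],
                               "where": "gpa > 3.0"})
```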

Keywords: MVC, relational database, SQL, White-Box testing.

721 Calculation of Methane Emissions from Wetlands in Slovakia via IPCC Methodology

Authors: Jozef Mindas, Jana Skvareninova

Abstract:

Wetlands are a major natural source of methane emissions, but they also represent important biodiversity reservoirs in the landscape. About 26 thousand hectares of wetlands in Slovakia have been identified through the wetlands monitoring program. The resulting database of wetlands in Slovakia allows several ecological processes to be analyzed, including the estimation of methane emissions. Based on the information in the database, the first estimate of methane emissions from wetlands in Slovakia has been made. The IPCC methodology (Tier 1 approach) was used, with proposed emission factors for the ice-free period derived from climatic data. The highest methane emissions, of nearly 550 Gg, are associated with the category of fens. Almost 11 Gg of methane is emitted from bogs, and emissions from flooded lands represent less than 8 Gg.
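A Tier 1 style calculation reduces to multiplying an area by an emission factor and the length of the ice-free period. The sketch below shows this arithmetic with placeholder areas and factors, not the values used for Slovakia.

```python
# Hedged sketch of a Tier 1 style calculation:
# emissions = area x daily emission factor x length of the ice-free period.
# All numbers below are placeholders, not the paper's data.
WETLAND_AREAS_HA = {"fens": 20_000, "bogs": 4_000, "flooded_lands": 2_000}
EMISSION_FACTOR_KG_CH4_PER_HA_DAY = {"fens": 1.5, "bogs": 0.4, "flooded_lands": 0.5}
ICE_FREE_DAYS = 200   # assumed ice-free period derived from climatic data

def ch4_emissions_gg(category):
    kg = (WETLAND_AREAS_HA[category]
          * EMISSION_FACTOR_KG_CH4_PER_HA_DAY[category]
          * ICE_FREE_DAYS)
    return kg / 1e6        # kilograms -> gigagrams

for cat in WETLAND_AREAS_HA:
    print(cat, round(ch4_emissions_gg(cat), 2), "Gg CH4")
```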

Keywords: Methane emissions, wetlands, bogs, fens, Slovakia.

720 A Self Configuring System for Object Recognition in Color Images

Authors: Michela Lecca

Abstract:

System MEMORI automatically detects and recognizes rotated and/or rescaled versions of the objects of a database within digital color images with cluttered background. This task is accomplished by means of a region grouping algorithm guided by heuristic rules, whose parameters concern some geometrical properties and the recognition score of the database objects. This paper focuses on the strategies implemented in MEMORI for the estimation of the heuristic rule parameters. This estimation, being automatic, makes the system a highly user-friendly tool.

Keywords: Automatic object recognition, clustering, content based image retrieval system, image segmentation, region adjacency graph, region grouping.

719 Water Demand Prediction for Touristic Mecca City in Saudi Arabia using Neural Networks

Authors: Abdel Hamid Ajbar, Emad Ali

Abstract:

Saudi Arabia is an arid country which depends on costly desalination plants to satisfy its growing residential water demand. Prediction of water demand is usually a challenging task because the forecast model should account for variations in economic progress, climate conditions and population growth. The task is further complicated by the fact that Mecca city is visited by large numbers of pilgrims during specific months of the year due to religious occasions. In this paper, a neural network model is proposed to handle the prediction of the monthly and yearly water demand for Mecca city, Saudi Arabia. The proposed model is developed based on historic records of water production and the estimated distribution of visitors. The driving variables for the model include annually varying variables such as household income, household density, and city population, and monthly varying variables such as the expected number of visitors each month and the maximum monthly temperature.
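A sketch of such a model is shown below using scikit-learn's MLPRegressor on synthetic records of the listed driving variables; the data, units and coefficients are invented solely to illustrate the fit-and-predict workflow, not the paper's model.

```python
# Sketch under assumptions: a small feed-forward network mapping the paper's
# driving variables to monthly water demand. All training data is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 120                                          # ten years of monthly records
X = np.column_stack([
    rng.normal(10_000, 1_500, n),                # household income
    rng.normal(6.0, 0.5, n),                     # household density
    rng.normal(1.8e6, 1e5, n),                   # city population
    rng.integers(500_000, 3_000_000, n),         # expected visitors in the month
    rng.normal(35, 6, n),                        # maximum monthly temperature (deg C)
])
# Hypothetical demand signal (in megalitres): grows with population, visitors, heat.
y = (0.2 * X[:, 2] + 0.1 * X[:, 3] + 5_000 * X[:, 4]) / 1e3 + rng.normal(0, 50, n)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000,
                                   random_state=0))
model.fit(X, y)
print("Illustrative prediction for a peak month:",
      round(float(model.predict([[11_000, 6.2, 1.9e6, 2_500_000, 42]])[0]), 1))
```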

Keywords: Water demand forecast; Neural Networks model; water resources management; Saudi Arabia.

718 Implementation of Geo-knowledge Based Geographic Information System for Estimating Earthquake Hazard Potential at a Metropolitan Area, Gwangju, in Korea

Authors: Chang-Guk Sun, Jin-Soo Shin

Abstract:

In this study, an inland metropolitan area, Gwangju, in Korea was selected to assess the amplification potential of earthquake motion and to provide information for regional seismic countermeasures. A geographic information system-based expert system was implemented for reliably predicting the spatial geotechnical layers across the entire region of interest by building a geo-knowledge database. The database consists of existing boring data gathered from prior geotechnical projects and surface geo-knowledge data acquired during site visits. For the practical application of the geo-knowledge database to estimating the earthquake hazard potential related to site amplification effects in the study area, seismic zoning maps of geotechnical parameters, such as bedrock depth and site period, were created within the GIS framework. In addition, seismic zonation of site classification was also performed to determine the site amplification coefficients for seismic design at any site in the study area.

Keywords: Earthquake hazard, geo-knowledge, geographic information system, seismic zonation, site period.

717 A Novel Framework for User-Friendly Ontology-Mediated Access to Relational Databases

Authors: Efthymios Chondrogiannis, Vassiliki Andronikou, Efstathios Karanastasis, Theodora Varvarigou

Abstract:

A large amount of data is typically stored in relational databases (DBs), which can efficiently handle user queries intended to elicit the appropriate information from data sources. However, direct access to and use of these data require end users to have an adequate technical background, while they must also cope with the internal data structure and the values presented. Consequently, information retrieval is quite a difficult process even for IT or DB experts, given the limited contribution of relational databases from the conceptual point of view. Ontologies enable users to formally describe a domain of knowledge in terms of concepts and the relations among them, and hence they can be used to unambiguously specify the information captured by a relational database. However, accessing information residing in a database through ontologies is feasible only if the users are comfortable with semantic web technologies. To enable users from different disciplines to retrieve the appropriate data, the design of a graphical user interface is necessary. In this work, we present an interactive, ontology-based, semantically enabled web tool that can be used for information retrieval purposes. The tool is entirely based on the ontological representation of the underlying database schema, while providing a user-friendly environment through which users can graphically form and execute their queries.
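The general pattern of ontology-mediated access can be sketched with rdflib: a small ontology-shaped graph stands in for the mapped database content, and a SPARQL query expresses the information need in domain terms rather than in terms of tables and columns. The vocabulary and data below are made up for illustration and are not the tool's actual ontology.

```python
# Hedged sketch of ontology-mediated querying using rdflib and SPARQL.
from rdflib import Graph

TURTLE = """
@prefix ex: <http://example.org/onto#> .
ex:alice a ex:Patient ; ex:hasDiagnosis ex:Diabetes .
ex:bob   a ex:Patient ; ex:hasDiagnosis ex:Asthma .
"""

QUERY = """
PREFIX ex: <http://example.org/onto#>
SELECT ?patient WHERE { ?patient a ex:Patient ; ex:hasDiagnosis ex:Diabetes . }
"""

g = Graph()
g.parse(data=TURTLE, format="turtle")   # the graph stands in for mapped DB content
for row in g.query(QUERY):
    print(row.patient)                  # -> http://example.org/onto#alice
```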

Keywords: Ontologies, Relational Databases, SPARQL, Web Interface.

716 A New Approach for Recoverable Timestamp Ordering Schedule

Authors: Hassan M. Najadat

Abstract:

A new approach to the timestamp ordering problem in serializable schedules is presented. Since the number of database users is increasing rapidly, accuracy and the need for high throughput are key topics in the database area. Strict 2PL does not allow all possible serializable schedules and therefore does not yield high throughput. The main advantages of the approach are its ability to enforce recoverable transaction execution and the high achievable performance of concurrent execution in centralized databases. Compared to Strict 2PL, the general structure of the algorithm is simple, deadlock-free, and allows all possible serializable schedules to execute, which results in high throughput. Various examples involving different orders of database operations are discussed.
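Basic timestamp ordering can be sketched compactly: each data item remembers the largest read and write timestamps, and an operation arriving "too late" aborts its transaction. The sketch below shows only this baseline rule; the recoverability enhancement proposed in the paper is not modelled, and the class names are assumptions.

```python
# Minimal sketch of basic timestamp-ordering concurrency control.
class Abort(Exception):
    pass

class TOScheduler:
    def __init__(self):
        self.read_ts = {}    # item -> largest timestamp that read it
        self.write_ts = {}   # item -> largest timestamp that wrote it

    def read(self, ts, item):
        if ts < self.write_ts.get(item, 0):
            raise Abort(f"T{ts} reads {item} too late")
        self.read_ts[item] = max(self.read_ts.get(item, 0), ts)

    def write(self, ts, item):
        if ts < self.read_ts.get(item, 0) or ts < self.write_ts.get(item, 0):
            raise Abort(f"T{ts} writes {item} too late")
        self.write_ts[item] = ts

if __name__ == "__main__":
    s = TOScheduler()
    s.read(ts=1, item="x")
    s.write(ts=2, item="x")
    try:
        s.write(ts=1, item="x")    # older transaction writing after a newer one
    except Abort as e:
        print("aborted:", e)
```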

Keywords: Concurrency control, schedule, timestamp, transaction.

715 MONPAR - A Page Replacement Algorithm for a Spatiotemporal Database

Authors: U. Kalay, O. Kalıpsız

Abstract:

For a spatiotemporal database management system, the I/O cost of queries and other operations is an important performance criterion. In order to optimize this cost, intense research on designing robust index structures has been carried out over the past decade. Beyond these major considerations, there are still other design issues that deserve attention due to their direct impact on the I/O cost. In particular, an efficient buffer management strategy plays a key role in reducing redundant disk accesses. In this paper, we propose an efficient buffer strategy for a spatiotemporal database index structure, specifically one indexing objects moving over a road network. The proposed strategy, namely MONPAR, is based on the data type (i.e. spatiotemporal data) and the structure of the index. For the purpose of an experimental evaluation, we set up a simulation environment that counts the number of disk accesses while executing a set of spatiotemporal range queries over the index. We repeated the simulations with query sets of different distributions, such as uniform and skewed query distributions. Based on a comparison of our strategy with well-known page-replacement techniques, such as LRU-based and priority-based buffers, we conclude that MONPAR behaves better than its competitors for small and medium-sized buffers under all of the query distributions used.
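For context, the kind of LRU baseline that MONPAR is compared against can be sketched in a few lines; MONPAR itself additionally exploits the spatiotemporal index structure when choosing a victim page, which this sketch does not attempt.

```python
# Illustrative sketch only: a tiny page buffer with LRU replacement, counting
# disk accesses (misses) the way the paper's simulation environment does.
from collections import OrderedDict

class LRUBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()     # page id -> page data, in access order
        self.disk_accesses = 0

    def get(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)        # buffer hit
        else:
            self.disk_accesses += 1                # miss: fetch from disk
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)     # evict least recently used
            self.pages[page_id] = f"page-{page_id}"
        return self.pages[page_id]

if __name__ == "__main__":
    buf = LRUBuffer(capacity=3)
    for pid in [1, 2, 3, 1, 4, 2, 1]:              # a small page reference string
        buf.get(pid)
    print("disk accesses:", buf.disk_accesses)
```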

Keywords: Buffer Management, Spatiotemporal databases.

714 Business Rules for Data Warehouse

Authors: Rajeev Kaula

Abstract:

Business rules and data warehouses are concepts and technologies that impact a wide variety of organizational tasks. In general, each area has evolved independently, impacting application development and decision-making. Generating knowledge from a data warehouse is a complex process. This paper outlines an approach to ease the import of information and knowledge from a data warehouse star schema through an inference class of business rules. The paper uses the Oracle database to illustrate the working of the concepts. The star schema structure and the business rules are stored within a relational database. The approach is explained through a prototype in Oracle's PL/SQL Server Pages.

Keywords: Business Rules, Data warehouse, PL/SQL Server Pages, Relational model, Web Application.

713 Thailand National Biodiversity Database System with webMathematica and Google Earth

Authors: W. Katsarapong, W. Srisang, K. Jaroensutasinee, M. Jaroensutasinee

Abstract:

The National Biodiversity Database System (NBIDS) has been developed for collecting Thai biodiversity data. The goal of this project is to provide researchers and scientists with advanced tools for querying, analyzing, modeling, and visualizing patterns of species distribution. NBIDS records two types of datasets: biodiversity data and environmental data. Biodiversity data comprise species presence data and species status. The attributes of biodiversity data can be further classified into two groups: universal and project-specific attributes. Universal attributes are common to all records, e.g. X/Y coordinates, year, and collector name. Project-specific attributes are unique to one or a few projects, e.g. flowering stage. Environmental data include atmospheric data, hydrology data, soil data, and land cover data collected using GLOBE protocols. We have developed web-based tools for data entry. Google Earth KML and ArcGIS were used for map visualization. webMathematica was used for simple data visualization and also for advanced data analysis and visualization, e.g. spatial interpolation and statistical analysis. NBIDS will be used by park rangers at Khao Nan National Park and by researchers.

Keywords: GLOBE protocol, Biodiversity, Database System, ArcGIS, Google Earth and webMathematica.

712 Formation and Evaluation of Lahar/HDPE Hybrid Composite as a Structural Material for Household Biogas Digester

Authors: Lady Marianne E. Polinga, Candy C. Mercado, Camilo A. Polinga

Abstract:

This study investigated the suitability of a Lahar/HDPE composite as the primary material for low-cost, small-scale biogas digesters. While raw materials for biogas are abundant in the Philippines, the cost of the technology has kept this resource from widespread utilization. Aside from capital economics, another problem arises from the space requirements of current digester designs. These problems may be addressed simultaneously by fabricating digesters on a smaller, household scale to reach a wider market, and by using materials that allow the overall design and fabrication cost to be optimized without sacrificing operational efficiency. The study involved the fabrication of the Lahar/HDPE composite at varying compositions and geometries, subsequent mechanical and thermal characterization, and statistical analysis to find intrinsic relationships between the variables. From the results, the Lahar/HDPE composite was found to be feasible for use as a digester material from both mechanical and economic standpoints.

Keywords: Biogas digester, Composite, High density polyethylene, Lahar.

711 Object Recognition in Color Images by the Self Configuring System MEMORI

Authors: Michela Lecca

Abstract:

System MEMORI automatically detects and recognizes rotated and/or rescaled versions of the objects of a database within digital color images with cluttered background. This task is accomplished by means of a region grouping algorithm guided by heuristic rules, whose parameters concern some geometrical properties and the recognition score of the database objects. This paper focuses on the strategies implemented in MEMORI for the estimation of the heuristic rule parameters. This estimation, being automatic, makes the system a self configuring and highly user-friendly tool.

Keywords: Automatic Object Recognition, Clustering, Content-based Image Retrieval System, Image Segmentation, Region Adjacency Graph, Region Grouping.

710 An Approach to Polynomial Curve Comparison in Geometric Object Database

Authors: Chanon Aphirukmatakun, Natasha Dejdumrong

Abstract:

In image processing and visualization, comparing two bitmapped images requires matching them pixel by pixel, which takes a lot of computational time, whereas the comparison of two vector-based images is significantly faster. Raster graphics images can sometimes be approximately converted into vector-based images by various techniques. After conversion, the problem of comparing two raster graphics images can be reduced to the problem of comparing vector graphics images; hence, pixel-by-pixel comparison can be reduced to polynomial comparison. In computer aided geometric design (CAGD), vector graphics images are compositions of curves and surfaces. Curves are defined by a sequence of control points and their polynomials. In this paper, the control points are used to compare curves. Curves that have been translated or rotated are treated as equivalent, while curves that differ only in scale are considered similar. This paper proposes an algorithm for comparing polynomial curves using their control points to test for equivalence and similarity. In addition, the geometric object-oriented database used to keep the curve information is defined in XML format for further use in curve comparisons.
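One way to realise such an equivalence test from control points is sketched below: a rigid motion (translation plus rotation) preserves all pairwise distances between control points, so equal distance matrices are taken here as indicating equivalence, while a uniform scale change does not pass the test. The helper names and tolerance are assumptions, not the paper's algorithm.

```python
# Sketch under assumptions: two polynomial curves given by control points are
# treated as equivalent if their centred control points have identical pairwise
# distances (invariant under translation and rotation); scaling changes the
# distances and so would only qualify the curves as "similar".
import numpy as np

def centred(points):
    pts = np.asarray(points, dtype=float)
    return pts - pts.mean(axis=0)

def equivalent(ctrl_a, ctrl_b, tol=1e-9):
    a, b = centred(ctrl_a), centred(ctrl_b)
    if a.shape != b.shape:
        return False
    # Rigid motions preserve all pairwise distances between control points.
    da = np.linalg.norm(a[:, None] - a[None, :], axis=-1)
    db = np.linalg.norm(b[:, None] - b[None, :], axis=-1)
    return np.allclose(da, db, atol=tol)

if __name__ == "__main__":
    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    angle = np.pi / 6
    rot = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
    moved = (np.array(square) @ rot.T) + np.array([5.0, -2.0])   # rotate then translate
    print(equivalent(square, moved))                              # -> True
    print(equivalent(square, 2 * np.array(square, dtype=float)))  # scaled copy -> False
```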

Keywords: Bezier curve, Said-Ball curve, Wang-Ball curve, DP curve, CAGD, comparison, geometric object database.
