Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25

Database-Related Publications

25 A Comparative Study of GTC and PSP Algorithms for Mining Sequential Patterns Embedded in Database with Time Constraints

Authors: Safa Adi

Abstract:

This paper considers the problem of mining sequential patterns embedded in a database while handling the time constraints defined in the GSP algorithm (a level-wise algorithm). We compare two earlier approaches, GTC and PSP, which retain the general principles of GSP. Furthermore, this paper discusses the PG-Hybrid algorithm, which combines PSP and GTC. The results show that PSP and GTC are more efficient than GSP; moreover, the GTC algorithm performs better than PSP. The PG-Hybrid algorithm uses the PSP algorithm for the first two passes over the database and the GTC approach for the following scans. Experiments show that the hybrid approach is very efficient for short, frequent sequences.
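As a rough illustration of the pass-dispatch idea behind PG-Hybrid, the sketch below runs a generic level-wise sequence miner and simply switches its counting strategy after the second pass. It is a minimal sketch only: the abstract does not describe the internals of PSP (prefix-tree counting) or GTC (time-constraint handling ahead of counting), so both strategies here fall back to the same naive subsequence count, and the toy database is made up.

```python
# Illustrative sketch only: in the real algorithms, PSP organises candidates in
# a prefix tree and GTC pre-computes time-constraint-compliant occurrences.
def is_subsequence(candidate, sequence):
    """True if `candidate` occurs in `sequence` in order (gaps allowed)."""
    it = iter(sequence)
    return all(item in it for item in candidate)

def count_support(candidates, database, strategy):
    # `strategy` only labels which pass we are in for this sketch.
    return {c: sum(is_subsequence(c, s) for s in database) for c in candidates}

def generate_candidates(frequent, k):
    """Join frequent (k-1)-sequences sharing a (k-2) prefix/suffix."""
    return {a + (b[-1],) for a in frequent for b in frequent
            if k == 2 or a[1:] == b[:-1]}

def hybrid_mine(database, min_support, max_len=4):
    frequent_all = []
    candidates = {(item,) for seq in database for item in seq}
    for k in range(1, max_len + 1):
        strategy = "PSP" if k <= 2 else "GTC"   # PG-Hybrid: PSP for the first
        counts = count_support(candidates, database, strategy)  # two passes
        frequent = {c for c, n in counts.items() if n >= min_support}
        if not frequent:
            break
        frequent_all.extend(sorted(frequent))
        candidates = generate_candidates(frequent, k + 1)
    return frequent_all

db = [list("abcde"), list("abde"), list("acde"), list("bce")]
print(hybrid_mine(db, min_support=3))
```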

Keywords: Database, GTC algorithm, PSP algorithm, time constraints, sequential patterns

24 Programming Language Extension Using Structured Query Language for Database Access

Authors: Chapman Eze Nnadozie

Abstract:

Relational databases constitute a very vital tool for the effective management and administration of both personal and organizational data. Data access ranges from single-user database management software to more complex distributed server systems. This paper appraises the use of a programming language extension, structured query language (SQL), to establish links to a relational database (Microsoft Access 2013) within the Visual C++ 9 programming environment. The methodology involves the creation of tables to form a database using Microsoft Access 2013, which is Object Linking and Embedding (OLE) database compliant. SQL commands are used to query the tables in the database for easy extraction of the expected records inside the Visual C++ environment. The findings of this paper reveal that records can easily be accessed and manipulated to filter exactly what the user wants, such as retrieval of records with specified criteria, updating of records, and deletion of some or all records in a table.
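For a concrete picture of the SQL operations listed above (filtered retrieval, updates, deletions), here is a small language-neutral sketch. It deliberately uses Python's built-in sqlite3 module instead of the paper's Visual C++/OLE DB setup against Access 2013, and the students table, column names, and rows are hypothetical.

```python
# Hypothetical schema and data; the paper works against Microsoft Access 2013
# from Visual C++ via OLE DB, but the SQL statements are essentially the same.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, score REAL)")
cur.executemany("INSERT INTO students (name, score) VALUES (?, ?)",
                [("Ada", 78.5), ("Obi", 64.0), ("Ngozi", 91.2)])

# Retrieval of records with a specified criterion
cur.execute("SELECT id, name, score FROM students WHERE score >= ?", (70,))
print(cur.fetchall())

# Updating records
cur.execute("UPDATE students SET score = score + 5 WHERE name = ?", ("Obi",))

# Deleting part of the records in a table
cur.execute("DELETE FROM students WHERE score < ?", (60,))
conn.commit()
conn.close()
```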

Keywords: software, Database, Database management system, Programming Language, data access, sql, relational database, table, records, OLE

23 Automation of Web-Portal Construction Processes with SQL Server for the Black Sea Ecosystem Monitoring

Authors: Nino Topuria, Gia Surguladze, Ana Gavardashvili, Tsatsa Namchevadze

Abstract:

The present article discusses the design and development of an information system for monitoring ecology within the Black Sea basin of Georgia. Sea parameters, rivers, estuaries, vulnerable districts, water samples, etc. were considered as the major parameters of the sea ecosystem. A conceptual schema has been developed for the Black Sea ecosystem based on an object-role model. The experimental database for the Black Sea ecosystem has been constructed using Ms SQL Server, while the object-role model has been developed with the NORMA graphical tool in Ms Visual Studio within the integrated environment of .NET Framework 4.5. The web portal has been designed based on Ms SharePoint Server. The connection between the server database and the web portal has been established by means of the External List of Ms SharePoint Server Designer.

Keywords: Ecology, Database, Service-Oriented Architecture, Monitoring System, river, estuary, Black Sea, Web-Application, object-role modelling, SharePoint, automation of data processing

22 Development of a Software System for Management and Genetic Analysis of Biological Samples for Forensic Laboratories

Authors: Teodiano Bastos, Mariana Lima, Rodrigo Silva, Victor Stange

Abstract:

Due to the high reliability reached by DNA tests, since the 1980s this kind of test has allowed a growing number of criminal cases to be solved, including old, previously unsolved cases that now have a chance to be solved with this technology. Currently, the use of genetic profiling databases is a typical method to increase the scope of genetic comparison. Forensic laboratories must process, analyze, and generate genetic profiles of a growing number of samples, which requires time and great storage capacity. Therefore, it is essential to develop methodologies, supported by software tools, capable of organizing and minimizing the time spent on both biological sample processing and the analysis of genetic profiles. Thus, the present work aims at the development of a software system for forensic genetics laboratories, which allows sample, criminal case, and local database management, minimizes the time spent in the workflow, and helps to compare genetic profiles. For the development of this software system, all data related to the storage and processing of samples, as well as the workflows and requirements incorporated into the system, have been considered. The system uses the following web technologies: HTML, CSS, and JavaScript, with the NodeJS platform as the server, which offers great efficiency in data input and output. In addition, the data are stored in a relational database (MySQL), which is free, allowing better acceptance by users. The software system developed here brings more agility to the workflow and analysis of samples, contributing to the rapid insertion of genetic profiles into the national database and to increasing the resolution of crimes. The next step of this research is its validation, so that it operates in accordance with current Brazilian national legislation.
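To make the profile-comparison step more tangible, the toy sketch below scores two STR profiles by the fraction of shared loci whose allele pairs coincide. This is an illustrative assumption only: the abstract does not describe the comparison rules actually used by the system or required by Brazilian legislation, and the loci and alleles shown are invented.

```python
# Hypothetical STR profiles keyed by locus; the comparison rule below is a
# simple stand-in, not the system's actual matching procedure.
def match_ratio(profile_a, profile_b):
    """Fraction of loci typed in both profiles whose allele pairs coincide."""
    shared = [locus for locus in profile_a if locus in profile_b]
    if not shared:
        return 0.0
    hits = sum(sorted(profile_a[l]) == sorted(profile_b[l]) for l in shared)
    return hits / len(shared)

evidence = {"D8S1179": (13, 14), "TH01": (6, 9), "FGA": (21, 24)}
reference = {"D8S1179": (13, 14), "TH01": (9, 6), "FGA": (22, 24)}
print(f"match ratio: {match_ratio(evidence, reference):.2f}")  # -> 0.67
```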

Keywords: Database, Genetic Analysis, Forensic genetics, software solution, sample management

21 Image Features Comparison-Based Position Estimation Method Using a Camera Sensor

Authors: Jinseon Song, Yongwan Park

Abstract:

In this paper, we propose a method that estimates a user's position based on a database built with a single camera. Previous positioning approaches calculate distance from the arrival time of signals, as in GPS (Global Positioning System) or RF (Radio Frequency) systems. However, these methods have a weakness: they suffer from large error ranges due to signal interference. The proposed method estimates position with a camera sensor instead. A single camera makes it difficult to obtain relative position data, while a stereo camera makes it difficult to provide real-time position data because of the large amount of image data. First of all, in this research we build an image database, using a single camera, of the space in which the positioning service is to be provided. Next, we judge similarity through image matching between the database images and the image transmitted by the user. Finally, we determine the user's position from the position of the most similar database image. To verify the proposed method, we experimented in real indoor and outdoor environments. The proposed method covers a wide positioning range and can determine not only the user's position but also the direction.
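A minimal sketch of the matching step is given below: query-image features are matched against each database image, and the pose stored with the most similar image is returned. Because the paper's SURF features require the non-free opencv-contrib build, the sketch substitutes OpenCV's ORB detector, and the database images, positions, and similarity score are hypothetical.

```python
# Sketch with OpenCV ORB instead of the paper's SURF; database images and
# poses are hypothetical placeholders.
import cv2

def best_match_position(query_path, database):
    """Return the stored (position, direction) of the most similar DB image."""
    orb = cv2.ORB_create()
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    q_img = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    _, q_desc = orb.detectAndCompute(q_img, None)
    best_score, best_pose = -1, None
    for image_path, pose in database:
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        _, desc = orb.detectAndCompute(img, None)
        if desc is None or q_desc is None:
            continue
        matches = bf.match(q_desc, desc)
        # crude similarity: number of good (low-distance) matches
        score = sum(1 for m in matches if m.distance < 40)
        if score > best_score:
            best_score, best_pose = score, pose
    return best_pose

db = [("corridor_north.jpg", ((12.0, 3.5), "north")),
      ("lobby_east.jpg", ((2.0, 0.5), "east"))]
print(best_match_position("user_snapshot.jpg", db))
```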

Keywords: Database, Distance, estimation, Positioning, camera, features, SURF (Speed-Up Robust Features)

20 Application of Building Information Modeling in Energy Management of Individual Departments Occupying University Facilities

Authors: Kung-Jen Tu, Danny Vernatha

Abstract:

To assist individual departments within universities in their energy management tasks, this study explores the application of Building Information Modeling in establishing the 'BIM-based Energy Management Support System' (BIM-EMSS). The BIM-EMSS consists of six components: (1) sensors installed for each occupant and each piece of equipment, (2) electricity sub-meters (constantly logging the lighting, HVAC, and socket electricity consumption of each room), (3) BIM models of all rooms within individual departments' facilities, (4) a data warehouse (for storing occupancy status and logged electricity consumption data), (5) a building energy management system that provides energy managers with various energy management functions, and (6) an energy simulation tool (such as eQuest) that generates real-time 'standard energy consumption' data against which 'actual energy consumption' data are compared and energy efficiency evaluated. Through the building energy management system, the energy manager is able to (a) have a 3D visualization (BIM model) of each room, in which the occupancy and equipment status detected by the sensors and the logged electricity consumption data are displayed constantly; (b) perform real-time energy consumption analysis to compare the actual and standard energy consumption profiles of a space; (c) obtain energy consumption anomaly detection warnings for certain rooms so that corrective energy management actions can be taken (a data mining technique is employed to analyze the relation between the space occupancy pattern and the current space equipment setting to indicate an anomaly, such as when appliances turn on without occupancy); and (d) perform historical energy consumption analysis to review monthly and annual energy consumption profiles and compare them against historical energy profiles. The BIM-EMSS was further implemented in a research lab in the Department of Architecture of NTUST in Taiwan, and the implementation results are presented to illustrate how it can be used to assist individual departments within universities in their energy management tasks.
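Item (c) above hinges on flagging rooms where equipment draws power while nobody is present. The sketch below is a plain rule-based stand-in for that check (the study employs a data mining technique that the abstract does not specify), and the field names and sample log are hypothetical.

```python
# Rule-based stand-in for the "appliance on without occupancy" anomaly check;
# field names and the sample log are hypothetical.
def energy_anomalies(room_log):
    """Flag intervals where equipment draws power while the room is empty."""
    alerts = []
    for entry in room_log:
        if not entry["occupied"] and entry["socket_kw"] > 0.1:
            alerts.append((entry["time"], entry["room"],
                           f"{entry['socket_kw']:.2f} kW with no occupancy"))
    return alerts

log = [
    {"time": "18:00", "room": "AR-503", "occupied": True,  "socket_kw": 0.80},
    {"time": "19:00", "room": "AR-503", "occupied": False, "socket_kw": 0.70},
    {"time": "20:00", "room": "AR-503", "occupied": False, "socket_kw": 0.05},
]
for alert in energy_anomalies(log):
    print(alert)
```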

Keywords: Sensor, Database, electricity sub-meters, energy anomaly detection

19 Modernization of the Economic Price Adjustment Software

Authors: Roger L Goodwin

Abstract:

The US Consumer Price Indices (CPIs) measure hundreds of items in the US economy. Many social programs and government benefits are indexed to the CPIs. The purpose of this project is to modernize an existing process. This paper shows the development of a small, visual software product that documents the Economic Price Adjustment (EPA) for long-term contracts. The existing workbook does not provide the flexibility to calculate EPAs where the base month and the option month are different, nor does it provide automated error checking. The small, visual software product provides the additional flexibility and error checking. This paper presents the feedback on the project.
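A common index-ratio form of an EPA scales the contract price by the ratio of the option-month CPI to the base-month CPI, which naturally handles differing base and option months. The exact clause used in the contracts is not given in the paper, so the sketch below is only an illustration, and the CPI figures are made up.

```python
# Common index-ratio form of an Economic Price Adjustment; not necessarily the
# clause used by the paper's contracts, and the CPI values below are made up.
def adjusted_price(base_price, cpi_by_month, base_month, option_month):
    """Scale the base price by the ratio of option-month to base-month CPI."""
    if base_month not in cpi_by_month or option_month not in cpi_by_month:
        raise ValueError("missing CPI value")      # simple error checking
    return base_price * cpi_by_month[option_month] / cpi_by_month[base_month]

cpi = {"2015-01": 233.707, "2016-07": 240.628}
print(round(adjusted_price(100_000.0, cpi, "2015-01", "2016-07"), 2))
```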

Keywords: Database, Contracts, Forms, consumer price index, Economic Price Adjustment, visualization tools, reports, event procedures

18 Statistical Estimation of Spring-back Degree Using Texture Database

Authors: Takashi Sakai, Jun-Ichi Koyama, Shinsaku Kikuta

Abstract:

Using a texture database, a statistical estimation of spring-back was conducted in this study. Both the spring-back in bending deformation and the experimental data related to crystal orientation show significant dispersion. Therefore, a probabilistic statistical approach was established for the proper quantification of these values. Correlation was examined among the parameters F(x) of spring-back, F(x) of the buildup fraction to three orientations after 92° bending, and F(x) of the as-received part, on the basis of the three-parameter Weibull distribution. The consequent spring-back estimation using the texture database yielded excellent estimates compared with experimental values.
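The quantity F(x) above is a cumulative probability from a fitted three-parameter Weibull distribution (shape, location, scale). As a minimal sketch of that fitting step, the snippet below fits such a distribution with SciPy; the spring-back angles are synthetic, not the paper's measurements.

```python
# Three-parameter Weibull fit of the kind used for F(x); the spring-back
# angles below are synthetic, not the paper's data.
from scipy import stats

springback_deg = stats.weibull_min.rvs(c=2.2, loc=1.5, scale=0.8,
                                       size=200, random_state=0)

shape, loc, scale = stats.weibull_min.fit(springback_deg)
print(f"shape={shape:.2f}, location={loc:.2f}, scale={scale:.2f}")

# Cumulative probability F(x) that spring-back stays below 2.5 degrees
print(f"F(2.5) = {stats.weibull_min.cdf(2.5, shape, loc=loc, scale=scale):.3f}")
```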

Keywords: Database, Texture, Bending, Weibull distribution, spring-back, Crystallographic Orientation, SEM-EBSD, Statistical analysis

17 Obstacle Classification Method Based On 2D LIDAR Database

Authors: Moohyun Lee, Soojung Hur, Yongwan Park

Abstract:

We propose an obstacle classification method based on a 2D LIDAR database. Existing obstacle classification methods based on 2D LIDAR have an advantage in terms of accuracy and shorter calculation time. However, it is difficult for them to classify the type of obstacle, and therefore accurate path planning is not possible. In order to overcome this problem, a method of classifying the obstacle type based on the width data of the obstacle was proposed. However, width data alone were not sufficient to improve accuracy. In this paper, a database was established from width and intensity data. The first classification stage is processed using the width data and the second stage using the intensity data; both compare the measurement against the database, and the final obstacle class is determined by finding the entry with the highest similarity value. An experiment using an actual autonomous vehicle in a real environment shows that the calculation time declined in comparison to 3D LIDAR and that it was possible to classify obstacles using a single 2D LIDAR.
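The two-stage lookup can be pictured with the toy sketch below: database entries are first filtered by width, and the remaining candidates are ranked by how closely their intensity matches the measurement. The database values, tolerance, and similarity measure are illustrative assumptions, not figures from the paper.

```python
# Hypothetical obstacle database; the similarity measure and tolerances are
# stand-ins for ones built from real LIDAR width/intensity data.
OBSTACLE_DB = [
    {"label": "pedestrian", "width_m": 0.5, "intensity": 35.0},
    {"label": "cyclist",    "width_m": 0.8, "intensity": 55.0},
    {"label": "car",        "width_m": 1.8, "intensity": 80.0},
]

def classify(width_m, intensity, width_tol=0.4):
    # First stage: keep entries whose width is close to the measurement
    candidates = [e for e in OBSTACLE_DB
                  if abs(e["width_m"] - width_m) <= width_tol]
    if not candidates:
        candidates = OBSTACLE_DB          # fall back to the full database
    # Second stage: highest similarity (smallest intensity difference)
    best = min(candidates, key=lambda e: abs(e["intensity"] - intensity))
    return best["label"]

print(classify(width_m=0.7, intensity=50.0))   # -> cyclist
```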

Keywords: Database, Segmentation, classification, Lidar, intensity, obstacle, width

16 A Review on Stormwater Harvesting and Reuse

Authors: Fatema Akram, Mohammad G. Rasul, M. Masud K. Khan, M. Sharif I. I. Amir

Abstract:

Australia is a country of some 7.7 million square kilometers with a population of about 22.6 million. At present, water security is a major challenge for Australia. In some areas the use of water resources is approaching, and in some parts exceeding, the limits of sustainability. A focal point of proposed national water conservation programs is the recycling of both urban stormwater and treated wastewater, but this is not yet widely practiced in Australia, and stormwater in particular is neglected. In Australia, only 4% of stormwater and rainwater is recycled, whereas less than 1% of reclaimed wastewater is reused within urban areas. Therefore, accurately monitoring, assessing, and predicting the availability, quality, and use of this precious resource are required for better management. As stormwater is usually of better quality than untreated sewage or industrial discharge, it has better public acceptance for recycling and reuse, particularly for non-potable uses such as irrigation and watering lawns and gardens. Existing stormwater recycling practice lags far behind research, and no robust technologies have been developed for this purpose. Therefore, there is a clear need to use modern technologies for assessing the feasibility of stormwater harvesting and reuse. Numerical modeling has, in recent times, become a popular tool for doing this job; it captures the complex hydrological and hydraulic processes of the study area. The hydrologic model computes the stormwater quantity used to design the system components, and the hydraulic model helps to route the flow through stormwater infrastructure. Nowadays a water quality module is incorporated into these models. Integration of a Geographic Information System (GIS) with these models provides the extra advantage of managing spatial information. However, for the overall management of a stormwater harvesting project, a Decision Support System (DSS) plays an important role, incorporating a database with the models and GIS for the proper management of temporal information; additionally, a DSS includes evaluation tools and a graphical user interface. This research aims to critically review and discuss all aspects of stormwater harvesting and reuse, such as the available guidelines for stormwater harvesting and reuse, public acceptance of water reuse, and the scope of and recommendations for future studies. In addition, this paper identifies and addresses the importance of modern technologies capable of properly managing stormwater harvesting and reuse.

Keywords: Database, numerical modeling, Stormwater Management, geographic information system (GIS), decision support system (DSS), Stormwater Harvesting and Reuse

15 Enhancing Privacy-Preserving Cloud Database Querying by Preventing Brute Force Attacks

Authors: Ambika Vishal Pawar, Ajay Dani

Abstract:

Considering the complexities involved in cloud computing, there are still plenty of issues that affect the privacy of data in the cloud environment. Until these problems are solved, the problem of preserving privacy in cloud databases remains open. In tokenization- and homomorphic-cryptography-based solutions for privacy-preserving cloud database querying, there is a possibility that, by colluding with the service provider, an adversary may run brute force attacks that reveal the attribute values.

In this paper we propose a solution by defining a variant of the K-means clustering algorithm that effectively detects such brute force attacks and enhances the privacy of cloud database querying by preventing these attacks.
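The paper's own K-means variant is not described in the abstract, so the sketch below only illustrates the general idea of using clustering to separate attack-like query behaviour from normal behaviour: standard scikit-learn K-means groups clients by hypothetical per-client features, and members of the minority cluster are flagged as possible brute-force sources. Both the features and the "bigger cluster is benign" assumption are illustrative.

```python
# Generic stand-in: the paper's modified k-means is not reproduced here.
import numpy as np
from sklearn.cluster import KMeans

# Per-client features: [queries per minute, distinct attribute values probed]
clients = np.array([
    [12, 30], [15, 28], [10, 25], [14, 33], [11, 27],   # normal behaviour
    [420, 900], [390, 870],                              # exhaustive probing
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(clients)
labels, counts = np.unique(kmeans.labels_, return_counts=True)
normal_cluster = labels[np.argmax(counts)]   # assume the bigger cluster is benign
suspected = np.where(kmeans.labels_ != normal_cluster)[0]
print("clients flagged as possible brute-force sources:", suspected)
```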

Keywords: Cryptography, Privacy, Cloud Computing, Clustering, Database, k-means

14 Database Modelling Using WSML in the Specification of a Banking Application

Authors: Omid Sharifi, Member, ACM, Zeki Bayram

Abstract:

We demonstrate, through a sample e-banking application, that the Web Service Modelling Language Ontology component can be used as a very powerful object-oriented database design language with logic capabilities. Its conceptual syntax allows the definition of class hierarchies, and its logic syntax allows the definition of constraints in the database. Relations, which are available for modelling relationships among three or more concepts, can be connected to logical expressions, allowing the implicit specification of database content. Using a reasoning tool, logic queries can also be made against the database in simulation mode.

Keywords: Semantic Web, Database, Ontology, E-banking, WSML, WSMO, E-R diagram

13 Data Migration between Document-Oriented and Relational Databases

Authors: Bogdan Walek, Cyril Klimes

Abstract:

Current tools for data migration between document-oriented and relational databases have several disadvantages. We propose a new approach for data migration between document-oriented and relational databases. During data migration, the relational schema of the target (relational) database is automatically created from a collection of XML documents. The proposed approach is verified on data migration between the document-oriented database IBM Lotus Notes/Domino and a relational database implemented in the MySQL relational database management system (RDBMS).
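The schema-derivation step can be pictured with the simplified sketch below, which scans a collection of XML documents and proposes one column per leaf element together with a coarse SQL type. The inference rules and sample documents are illustrative assumptions, not the ones used by the proposed tool.

```python
# Simplified stand-in for deriving a relational schema from XML documents.
import xml.etree.ElementTree as ET

docs = [
    "<doc><name>Ann</name><age>34</age><balance>10.5</balance></doc>",
    "<doc><name>Bo</name><age>41</age><note>vip</note></doc>",
]

def guess_type(values):
    """Very coarse SQL type inference from the observed string values."""
    if all(v.isdigit() for v in values):
        return "INTEGER"
    try:
        [float(v) for v in values]
        return "REAL"
    except ValueError:
        return "TEXT"

columns = {}
for xml_text in docs:
    for leaf in ET.fromstring(xml_text):
        columns.setdefault(leaf.tag, []).append(leaf.text or "")

cols = ", ".join(f"{name} {guess_type(vals)}" for name, vals in columns.items())
print(f"CREATE TABLE doc ({cols});")
```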

Keywords: Database, xml, Data Migration, relational schema, document-oriented database

12 An Integrated Biotechnology Database of the National Agricultural Information Center in Korea

Authors: Chang Kug Kim, Dong Suk Park, Young Joo Seol, Jang Ho Hahn

Abstract:

The National Agricultural Biotechnology Information Center (NABIC) plays a leading role in the biotechnology information database for agricultural plants in Korea. Since 2002, we have concentrated on the functional genomics of major crops, building an integrated biotechnology database for agro-biotech information that focuses on the bioinformatics of major agricultural resources such as rice, Chinese cabbage, and microorganisms. In the NABIC, the integrated biotechnology database provides useful information through a user-friendly web interface that allows analysis of genome infrastructure, multiple plants, microbial resources, and living modified organisms.

Keywords: Biotechnology, Database, genome information

11 A Methodology for Creating a Conceptual Model Under Uncertainty

Authors: Bogdan Walek, Cyril Klimes, Jiri Bartos

Abstract:

This article deals with conceptual modeling under uncertainty. First, the classification of information systems, together with their definitions, is described, focusing on those where the construction of a conceptual model is suitable for designing the database of a future information system. Furthermore, the disadvantages of the traditional approach to creating a conceptual model and database design are analyzed. A comprehensive methodology for the creation of a conceptual model, based on the analysis of client requirements and the selection of a suitable domain model, is proposed here. This article presents the expert system used for the construction of a conceptual model, which is a suitable tool for database designers creating a conceptual model.

Keywords: Relationship, Database, Information System, Methodology, Uncertainty, Fuzzy, conceptual modeling, conceptual model, entity, attribute, conceptual domain model

10 A Methodology for Data Migration between Different Database Management Systems

Authors: Bogdan Walek, Cyril Klimes

Abstract:

Nowadays, the area of data migration is very topical. Current tools for data migration in the area of relational databases have several disadvantages, which are presented in this paper. We propose a methodology for migrating database tables and their data between various types of relational database systems (RDBMS). The proposed methodology contains an expert system. The expert system contains a knowledge base composed of IF-THEN rules and, based on the input data, suggests appropriate data types for the columns of the database tables. The proposed tool, which contains the expert system, also includes the possibility of optimizing the data types in the target RDBMS database tables based on the processed data of the source RDBMS database tables. The proposed expert system is demonstrated on the migration of a selected database from the source RDBMS to the target RDBMS.
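As an illustration of the kind of IF-THEN rules such a knowledge base might hold, the sketch below suggests a target column type from the source type and the observed values, including a simple narrowing optimization. The rules and type names are illustrative assumptions, not the paper's actual knowledge base.

```python
# Illustrative IF-THEN rules only; a real knowledge base would cover many more
# source/target RDBMS type pairs and optimisation conditions.
def suggest_target_type(source_type, values):
    """Suggest a target column type from the source type and observed data."""
    max_len = max((len(str(v)) for v in values), default=0)
    if source_type == "VARCHAR" and max_len <= 255:
        return f"VARCHAR({max(max_len, 1)})"           # shrink to observed length
    if source_type == "VARCHAR":
        return "TEXT"
    if source_type in ("INT", "INTEGER"):
        if all(-32768 <= int(v) <= 32767 for v in values):
            return "SMALLINT"                          # optimisation rule
        return "INTEGER"
    return source_type                                 # default: keep as-is

print(suggest_target_type("VARCHAR", ["Ostrava", "Brno"]))    # -> VARCHAR(7)
print(suggest_target_type("INT", [12, 5000, 31000]))          # -> SMALLINT
```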

Keywords: Database, Fuzzy, Expert System, Data Migration, relational database, Data Type, relational database management system

9 Testing Database of Information System using Conceptual Modeling

Authors: Bogdan Walek, Cyril Klimes

Abstract:

This paper focuses on testing the database of an existing information system. At the beginning, we describe the basic problems of implemented databases, such as data redundancy, poor design of the database's logical structure, or inappropriate data types in the columns of database tables. These problems are often the result of an incorrect understanding of the primary requirements for the database of an information system. We then propose an algorithm to compare the conceptual model created from vague requirements for a database with a conceptual model reconstructed from the implemented database. The algorithm also suggests steps leading to optimization of the implemented database. The proposed algorithm is verified by an implemented prototype. The paper also describes a fuzzy system which works with the vague requirements for the database of an information system, a procedure for creating a conceptual model from vague requirements, and an algorithm for reconstructing a conceptual model from the implemented database.
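The comparison step can be pictured with the simplified sketch below, which diffs a conceptual model derived from the requirements against one reconstructed from the implemented database and reports missing or extra entities and attributes. The flat {entity: attributes} representation is an assumption, and the paper's fuzzy handling of vague requirements is not reproduced.

```python
# Simplified representation: a conceptual model as {entity: set of attributes}.
def compare_models(required, implemented):
    report = []
    for entity, attrs in required.items():
        if entity not in implemented:
            report.append(f"missing entity: {entity}")
            continue
        for attr in attrs - implemented[entity]:
            report.append(f"missing attribute: {entity}.{attr}")
    for entity in implemented.keys() - required.keys():
        report.append(f"extra entity (candidate for review): {entity}")
    return report

required = {"Customer": {"id", "name", "email"}, "Order": {"id", "date"}}
implemented = {"Customer": {"id", "name"}, "Invoice": {"id"}}
print("\n".join(compare_models(required, implemented)))
```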

Keywords: Optimization, Database, Testing, Reconstruction, Requirements, Fuzzy, relational database, conceptual model, information system, uncertain information, database testing

8 Study of Features for Hand-printed Recognition

Authors: Satish Kumar

Abstract:

The feature extraction method(s) used to recognize hand-printed characters play an important role in ICR applications. In order to achieve a high recognition rate for a recognition system, the choice of a feature that suits the given script is certainly an important task. Even if a new feature needs to be designed for a given script, it is essential to know the recognition ability of the existing features for that script. The Devanagari script is used for various Indian languages besides Hindi, the mother tongue of the majority of Indians. This research examines a variety of feature extraction approaches, which have been used in various ICR/OCR applications, in the context of the Devanagari hand-printed script. The study is conducted theoretically and experimentally on more than 10 feature extraction methods. The various feature extraction methods have been evaluated on a Devanagari hand-printed database comprising more than 25,000 characters belonging to 43 alphabets. The recognition ability of the features has been evaluated using three classifiers, i.e., k-NN, MLP, and SVM.
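The experimental setup (extract features, then score them with standard classifiers) can be mimicked with the small stand-in below: scikit-learn's digits dataset replaces the Devanagari database, two simple extractors (raw pixels and 4x4 zoning densities) replace the paper's features, and only k-NN and an SVM are compared. None of these choices come from the paper; they are illustrative assumptions.

```python
# Stand-in experiment: digits replaces the Devanagari database, and the two
# extractors below are simple illustrations, not the paper's features.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

digits = load_digits()
images, labels = digits.images, digits.target            # 8x8 character images

def raw_pixels(imgs):
    return imgs.reshape(len(imgs), -1)

def zoning(imgs, zones=4):
    """Mean ink density in a zones x zones grid over each image."""
    step = imgs.shape[1] // zones
    feats = [[img[r:r + step, c:c + step].mean()
              for r in range(0, imgs.shape[1], step)
              for c in range(0, imgs.shape[2], step)] for img in imgs]
    return np.array(feats)

for name, extract in [("raw pixels", raw_pixels), ("4x4 zoning", zoning)]:
    X = extract(images)
    for clf_name, clf in [("k-NN", KNeighborsClassifier(3)), ("SVM", SVC())]:
        score = cross_val_score(clf, X, labels, cv=5).mean()
        print(f"{name:10s} + {clf_name}: {score:.3f}")
```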

Keywords: Database, features, classifier, Hand-printed, Devanagari

7 UB-Tree Indexing for Semantic Query Optimization of Range Queries

Authors: A. Simonet, M. Simonet, S. Housseno

Abstract:

Semantic query optimization consists in restricting the search space in order to reduce the set of objects of interest for a query. This paper presents an indexing method based on UB-trees and a static analysis of the constraints associated with the views of the database and with any constraints expressed on attributes. The result of the static analysis is a partitioning of the object space into disjoint blocks. Through Space Filling Curve (SFC) techniques, each fragment (block) of the partition is assigned a unique identifier, enabling the efficient indexing of fragments by UB-trees. The search space corresponding to a range query is restricted to a subset of the blocks of the partition. This approach has been developed in the context of a KB-DBMS, but it can be applied to any relational system.
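The space-filling-curve step can be illustrated with the short sketch below: points are grouped into blocks by the Morton (Z-order) code of a coarse grid cell, and a range query scans only the blocks whose cell intersects the query box. UB-tree paging and the static constraint analysis described in the paper are not reproduced, and the cell size and sample points are arbitrary.

```python
# Z-order (Morton) illustration only; UB-tree paging and the paper's static
# constraint analysis are not reproduced here.
def interleave(x, y, bits=8):
    """Morton code of an (x, y) cell: bit-interleave the two coordinates."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return z

CELL = 16                                        # coarse grid cell size

def block_id(px, py):
    return interleave(px // CELL, py // CELL)

def range_query(points_by_block, x0, y0, x1, y1):
    hits = []
    for cx in range(x0 // CELL, x1 // CELL + 1):          # intersecting cells
        for cy in range(y0 // CELL, y1 // CELL + 1):
            for px, py in points_by_block.get(interleave(cx, cy), []):
                if x0 <= px <= x1 and y0 <= py <= y1:
                    hits.append((px, py))
    return hits

points = [(3, 5), (20, 18), (40, 41), (130, 7)]
index = {}
for p in points:
    index.setdefault(block_id(*p), []).append(p)
print(range_query(index, 0, 0, 25, 25))          # -> [(3, 5), (20, 18)]
```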

Keywords: Database, classification, index, views, query optimization, Range query, UB-tree, Space Filling Curve, Integrity Constraint

6 New Methods for E-Commerce Databases Designing in Semantic Web Systems (Modern Systems)

Authors: Karim Heidari, Serajodin Katebi, Ali Reza Mahdavi Far

Abstract:

The purpose of this paper is to study database models in order to use them efficiently in e-commerce websites. We look for a method which can store and retrieve information in e-commerce websites in a way that semantic web applications can work with, and we also study the different technologies of e-commerce databases. One of the most important deficits of the semantic web is the shortage of semantic data, since most information is still stored in relational databases. We therefore present an approach to map legacy data stored in relational databases into the Semantic Web using virtually any modern RDF query language, as long as it is closed within RDF. To achieve this goal we study XML structures for the relational databases of old websites, and eventually go one level above XML and look for a mapping from the relational model (RDM) to RDF. Noting that a large number of semantic web applications take advantage of the relational model, opening up ways in which it can be converted to XML and RDF in modern (semantic web) systems is important.
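To show the direction of such a mapping in the simplest possible terms, the sketch below turns relational rows into RDF-style triples, with one resource per row and one predicate per column. It skips the intermediate XML level discussed in the paper, and the base URI and product table are hypothetical.

```python
# Minimal row-to-triples sketch; the paper's XML intermediate step and RDM
# mapping are not reproduced. Base URI and table data are hypothetical.
BASE = "http://example.org/shop"

def rows_to_rdf(table, primary_key, rows):
    triples = []
    for row in rows:
        subject = f"<{BASE}/{table}/{row[primary_key]}>"
        for column, value in row.items():
            if column == primary_key:
                continue
            triples.append(f'{subject} <{BASE}/{table}#{column}> "{value}" .')
    return triples

products = [
    {"id": 1, "name": "USB cable", "price": 3.90},
    {"id": 2, "name": "Keyboard", "price": 17.50},
]
print("\n".join(rows_to_rdf("product", "id", products)))
```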

Keywords: Semantic Web, E-Commerce, Database, xml, RDF

5 Database Development and Discrimination Algorithms for Membrane Protein Functions

Authors: M. Michael Gromiha, Y. Yabuki, K. Imai, P. Horton, K. Fukui

Abstract:

We have developed a database for membrane protein functions, which contains more than 3000 experimental data entries on functionally important amino acid residues in membrane proteins, along with sequence, structure, and literature information. Further, we have proposed different methods for identifying membrane proteins based on their functions: (i) discrimination of membrane transport proteins from other globular and membrane proteins and classification of them into channels/pores, electrochemical transporters, and active transporters, and (ii) detection of the β-signal for the insertion of mitochondrial β-barrel outer membrane proteins and of potential targets. Our method showed an accuracy of 82% in discriminating transport proteins and 68% in classifying them into the three different transporter classes. In addition, we have identified a motif for the targeting β-signal and potential candidates for mitochondrial β-barrel membrane proteins. Our methods can be used as effective tools for genome-wide annotations.

Keywords: Database, Membrane Proteins, Discrimination, transporters, β-signal

4 Extraction of Temporal Relation by the Creation of Historical Natural Disaster Archive

Authors: Suguru Yoshioka, Seiichi Tani, Seinosuke Toda

Abstract:

In historical science and social science, the influence of natural disasters upon society is a matter of great interest. In recent years, some archives of natural disasters have been compiled by many hands; however, this is inefficient and wasteful. Therefore, we propose a computer system to create a historical natural disaster archive. As the target of this analysis, we consider newspaper articles, since news articles are typical examples that describe the temporal relations of events related to a natural disaster. In order to do this analysis, we identify the occurrences in newspaper articles by means of index entries, considering the events which are specific to natural disasters, and show the temporal relations between natural disasters. We designed and implemented an automatic system for the 'extraction of the occurrences of natural disasters' and a 'temporal relation table for natural disasters'.

Keywords: Digital Library, Database, corpus, historical natural disaster, temporal relation

3 Corporate Information System Educational Center

Authors: Alquliyev R.M., Kazimov T.H., Mahmudova Sh.C., Mahmudova R.Sh.

Abstract:

The given work is devoted to the description of the Educational Center created and successfully maintained at the Institute of Information Technologies of NAS of Azerbaijan. On the basis of a decision of the board of the Supreme Certifying Commission under the President of the Azerbaijan Republic and of the Presidium of the National Academy of Sciences of the Azerbaijan Republic, the organization of training courses on computer science for all post-graduate students and dissertators of the republic, and the administration of candidate minimum examinations, was entrusted to the Institute of Information Technologies of the National Academy of Sciences of Azerbaijan. Therefore, the teaching of computer science to post-graduate students and dissertators, scientific and methodological guidance on the effective application of new information technologies in their research work, and the taking of candidate minimum examinations are carried out in the Educational Center. Information and communication technologies offer new opportunities and prospects for their application in teaching and training. The new level of literacy demands the creation of essentially new technologies for obtaining scientific knowledge. Methods of training and development, social and professional requirements, and the globalization of the communication, economic, and political projects connected with the construction of a new society all depend on the level of application of information and communication technologies in the educational process. Computer technologies develop the ideas of programmed training and open up completely new, as yet uninvestigated technological approaches to training connected with the unique capabilities of modern computers and telecommunications. Computer-based training technologies are processes of preparing and transferring information to the trainee by means of a computer. Scientific and technical progress, as well as the global spread of the technologies created in the most developed countries of the world, is the main proof of the leading role of education in the 21st century. The information society needs individuals with modern knowledge. In practice, all technologies that use special technical information means (computer, audio, video) are called educational information technologies.

Keywords: Database, Educational Center, post-graduate

2 A Comparative Performance Evaluation Model of Mobile Agent Versus Remote Method Invocation for Information Retrieval

Authors: Magdy Saeb, Yousry El-Gamal, Khalid El-Gazzar

Abstract:

The development of distributed systems has been affected by the need to accommodate an increasing degree of flexibility, adaptability, and autonomy. Mobile Agent technology is emerging as an alternative for building a smart generation of highly distributed systems. In this work, we investigate the performance aspect of agent-based technologies for information retrieval. We present a comparative performance evaluation model of Mobile Agents versus Remote Method Invocation by means of an analytical approach. We demonstrate the effectiveness of mobile agents for dynamic code deployment and remote data processing in reducing total latency while producing minimal network traffic. We argue that exploiting agent-based technologies significantly enhances the performance of distributed systems in the domain of information retrieval.
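The flavour of such an analytical comparison can be conveyed with the toy model below, which is not the paper's model: RMI pays a round trip per interaction, while a mobile agent ships its code once, filters data at the remote host, and returns only the result. All parameter values are made up.

```python
# Textbook-style latency model under simple assumptions, not the paper's exact
# analytical model; every parameter value below is illustrative.
def rmi_latency(n_interactions, req_bytes, resp_bytes, bandwidth, rtt):
    """Each interaction pays one round trip plus request/response transfer."""
    return n_interactions * (rtt + (req_bytes + resp_bytes) / bandwidth)

def agent_latency(code_bytes, result_bytes, bandwidth, rtt):
    """The agent's code is shipped once and only the final result comes back."""
    return 2 * rtt + (code_bytes + result_bytes) / bandwidth

bandwidth = 125_000.0        # bytes/s (~1 Mbit/s link)
rtt = 0.05                   # seconds
print("RMI  :", round(rmi_latency(200, 256, 4_096, bandwidth, rtt), 2), "s")
print("Agent:", round(agent_latency(50_000, 8_192, bandwidth, rtt), 2), "s")
```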

Keywords: Distributed Systems, Information Retrieval, Database, Mobile Agent, Performance Evaluation, RMI

1 A Survey on Life Science Database Citation Frequency in Scientific Literatures

Authors: Hendry Muljadi, Jiro Araki, Satoru Miyazaki, Asao Fujiyama

Abstract:

There are many databases covering various fields of the life sciences available online. To find well-used databases, a survey measuring life science database citation frequency in the scientific literature was conducted. The survey was done by measuring how many scientific articles available in the PubMed Central archive cite a specific life science database. This paper presents and discusses the results of the survey.
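One way to approximate such a survey today is to count PubMed Central hits per database name through NCBI's E-utilities esearch endpoint, as sketched below. The paper does not state its exact counting procedure, so this is only an assumed stand-in, and the database names are examples.

```python
# Approximation of the survey via NCBI E-utilities; not necessarily the
# counting procedure used in the paper, and the names below are examples.
import json
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pmc_hit_count(term):
    """Number of PubMed Central articles whose text matches `term`."""
    query = urllib.parse.urlencode({"db": "pmc", "term": term, "retmode": "json"})
    with urllib.request.urlopen(f"{ESEARCH}?{query}") as resp:
        return int(json.load(resp)["esearchresult"]["count"])

for db_name in ["UniProt", "Pfam", "KEGG"]:
    print(db_name, pmc_hit_count(db_name))
```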

Keywords: Database, Life Science, metadatabase, PubMedCentral
