Search results for: quiz database

1529 Database Playlists: Croatia's Popular Music in the Mirror of Collective Memory

Authors: Diana Grguric, Robert Svetlacic, Vladimir Simovic

Abstract:

This scientific research analytically explores database playlists by studying memory culture through Croatian popular radio music. The research is based on the scientific analysis of databases built from the playlists of ten Croatian radio stations. The most recent Croatian songs played on Statehood Day (2008-2013) are analyzed in order to gain insight into their (memory) potential in terms of storing, interpreting and presenting a national identity. The research starts from the general assumption that popular music is an efficient identifier, transmitter, and promoter of national identity. The aim of the scientific analysis of the database was to reveal the specific titles of Croatian popular songs that participate in marking memories, and to analyze their symbolic capital in order to gain insight into the popular music experience of the past and to develop a new method of scientifically grounded analysis of specific databases.

Keywords: specific databases, popular radio music, collective memory, national identity

Procedia PDF Downloads 328
1528 SIPTOX: Spider Toxin Database Information Repository System of Protein Toxins from Spiders by Using MySQL Method

Authors: Iftikhar Tayubi, Tabrej Khan, Rayan Alsulmi, Abdulrahman Labban

Abstract:

Spiders produce a special kind of substance called a toxin, which is composed of many types of protein and differs from species to species. Spider toxin consists of several proteins and non-proteins that include various categories of toxins such as myotoxin, neurotoxin, cardiotoxin, dendrotoxin, haemorrhagins, and fibrinolytic enzymes. Protein sequence information with references for the toxins was derived from the literature and public databases. According to previous findings, spider toxin would be a good choice for treating different types of tumors and cancer. There are many therapeutic regimes that cause more side effects than benefit; hence, a different approach must be adopted for the treatment of cancer. Combinations of drugs are being encouraged, and dramatic outcomes are reported. Spider toxin is one of the natural cytotoxic compounds; hence, it is being used to treat different types of tumors, and its positive effect on breast cancer in particular has been reported during the last few decades. The efficacy of this database is that it provides a user-friendly interface for users to retrieve information about spiders, toxins and the toxin proteins of different spider species. SPIDTOXD provides a single source of information about spider toxins, which will be useful for pharmacologists, neuroscientists, toxicologists and medicinal chemists. The well-ordered and accessible web interface allows users to explore detailed information on spiders and toxin proteins, including common name, scientific name, entry id, entry name, protein name and length of the protein sequence. The database interface will satisfy the demands of the scientific community by providing in-depth knowledge about spiders and their toxins. We adopted a methodology using MySQL and PHP, and SmartDraw was used for the design. Users can thus navigate from one section to another, depending on their field of interest. The database contains a wealth of information on species, toxins, clinical data, etc., and will be useful for the scientific community, basic researchers and those interested in the pharmaceutical industry.
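
The abstract names MySQL and PHP as the implementation stack and lists fields such as common name, scientific name, entry id, entry name, protein name and sequence length. A minimal sketch of the kind of schema and lookup such a repository could use is shown below; the table and column names are illustrative, and Python's built-in sqlite3 stands in for MySQL.

    import sqlite3

    # Minimal sketch of a toxin repository schema; sqlite3 stands in for MySQL,
    # and the table/column names are illustrative, not the authors' actual schema.
    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE spider_toxin (
            entry_id        TEXT PRIMARY KEY,
            entry_name      TEXT,
            common_name     TEXT,    -- common name of the spider species
            scientific_name TEXT,    -- scientific name of the spider species
            protein_name    TEXT,    -- name of the toxin protein
            seq_length      INTEGER  -- length of the protein sequence
        )
    """)
    conn.execute(
        "INSERT INTO spider_toxin VALUES (?, ?, ?, ?, ?, ?)",
        ("P0001", "TX1_EXAMPLE", "example spider", "Exemplum araneus",
         "example neurotoxin", 64),
    )

    # Retrieve all toxin proteins recorded for a given species.
    rows = conn.execute(
        "SELECT protein_name, seq_length FROM spider_toxin WHERE scientific_name = ?",
        ("Exemplum araneus",),
    ).fetchall()
    print(rows)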

Keywords: siptoxd, php, mysql, toxin

Procedia PDF Downloads 147
1527 3D Objects Indexing Using Spherical Harmonic for Optimum Measurement Similarity

Authors: S. Hellam, Y. Oulahrir, F. El Mounchid, A. Sadiq, S. Mbarki

Abstract:

In this paper, we propose a method for three-dimensional (3D) model indexing based on a new descriptor defined using spherical harmonics. The purpose of the method is to minimize the processing time on the database of object models and the time needed to search for objects similar to a query object. We first define the new descriptor using a new division of the 3D object into a sphere. We then define a new distance which will be used in the search for similar objects in the database.

Keywords: 3D indexation, spherical harmonic, similarity of 3D objects, measurement similarity

Procedia PDF Downloads 401
1526 Residual Analysis and Ground Motion Prediction Equation Ranking Metrics for Western Balkan Strong Motion Database

Authors: Manuela Villani, Anila Xhahysa, Christopher Brooks, Marco Pagani

Abstract:

The geological structure of the Western Balkans is strongly affected by the collision between the Adria microplate and the southwestern margin of Eurasia, resulting in a considerably active seismic region. The NATO-supported Harmonization of Seismic Hazard Maps in the Western Balkan Countries project (BSHAP, 2007-2011 and 2012-2015) supported the preparation of new seismic hazard maps of the Western Balkans, but when inspecting the seismic hazard models later produced by these countries on a national scale, significant differences in design PGA values are observed at the borders, for instance between North Albania and Montenegro, or South Albania and Greece. Considering that the catalogues were unified and the seismic sources were defined within the BSHAP framework, the differences evidently arise from the selection of Ground Motion Prediction Equations (GMPEs), which is generally the component with the highest impact on seismic hazard assessment. At the time of the project, a modest database was available, namely 672 three-component records, whereas nowadays this strong motion database has grown considerably to 20,939 records, with Mw ranging from 3.7 to 7 and epicentral distances from 0.47 km to 490 km. Statistical analysis of the strong motion database showed a lack of recordings in the moderate-to-large magnitude and short distance ranges; therefore, there is a need to re-evaluate the GMPEs in light of the recently updated database and the new generations of ground motion models. In some cases, it was observed that some events were more extensively documented in one database than in the other, like the 1979 Montenegro earthquake, which has a considerably larger number of records in the BSHAP analogue strong motion database than in ESM23. Therefore, the strong motion flat-file provided by the BSHAP project was merged with the ESM23 database for the polygon studied in this project. After performing the preliminary residual analysis, the candidate GMPEs were identified. This was done using the GMPE performance metrics available within the SMT in the OpenQuake platform; the Likelihood model and Euclidean Distance-Based Ranking (EDR) were used. Finally, a GMPE logic tree was selected for this study and, following the selection of candidate GMPEs, model weights were assigned using the average sample log-likelihood approach of Scherbaum.
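
The weighting step mentioned at the end follows the average sample log-likelihood (LLH) idea: each GMPE is scored by the mean negative log2-likelihood of the observations under its predicted distribution, and weights proportional to 2^(-LLH) are normalised. A minimal numpy/scipy sketch of that scoring step is given below, with made-up residual data; the actual study relies on the GMPE performance metrics of the OpenQuake SMT.

    import numpy as np
    from scipy.stats import norm

    def llh_score(obs, mean, sigma):
        """Average sample log-likelihood (Scherbaum-style): mean of -log2 pdf."""
        return float(np.mean(-np.log2(norm.pdf(obs, loc=mean, scale=sigma))))

    def llh_weights(scores):
        """Normalised model weights: smaller LLH gives a larger weight."""
        raw = 2.0 ** (-np.asarray(scores))
        return raw / raw.sum()

    # Toy example: observed log ground motions and two candidate GMPEs' predictions.
    rng = np.random.default_rng(0)
    obs = rng.normal(loc=0.0, scale=0.6, size=200)   # hypothetical ln(PGA) residual data
    gmpe_a = llh_score(obs, mean=0.0, sigma=0.6)     # well-calibrated model
    gmpe_b = llh_score(obs, mean=0.3, sigma=0.9)     # biased, over-dispersed model
    print(llh_weights([gmpe_a, gmpe_b]))             # model A receives the larger weight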

Keywords: residual analysis, GMPE, western balkan, strong motion, openquake

Procedia PDF Downloads 45
1525 Container Chaos: The Impact of a Casual Game on Learning and Behavior

Authors: Lori L. Scarlatos, Ryan Courtney

Abstract:

This paper explores the impact that playing a casual game can have on a player's learning and subsequent behavior. A casual mobile game, Container Chaos, was created to teach undergraduate students about the carbon footprint of various disposable beverage containers. Learning was tested with a short quiz, and behavior was tested by observing which beverage containers players choose when offered a drink and a snack. The game was tested multiple times, under a variety of different circumstances. Findings of these tests indicate that, with extended play over time, players can learn new information and sometimes even change their behavior as a result. This has implications for how other casual games can be used to teach concepts and possibly modify behavior.

Keywords: behavior, carbon footprint, casual games, environmental impact, material sciences

Procedia PDF Downloads 124
1524 Indoor Localization by Pattern Matching Method Based on Extended Database

Authors: Gyumin Hwang, Jihong Lee

Abstract:

This paper studies a chirp spread spectrum (CSS)-based indoor localization system, which is easy to implement, inexpensive to build, and covers a larger area than other systems. However, the system is affected by reflected distance data; this problem is caused by the multi-path effect. Errors caused by multi-path are difficult to correct because the indoor environment cannot be fully described. In this paper, in order to solve the multi-path problem, we supplement the localization system with a pattern matching method based on an extended database, which improves the precision of the position estimate. The method is verified by experiments in a gymnasium. The database was constructed at 1 m intervals, and 16 samples were collected at random positions inside the region covered by the database points. As a result, the paper shows, through graphs and tables, higher accuracy than the existing method.
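
A minimal sketch of the pattern-matching idea, under assumed names and values: each database point stores a fingerprint of ranging measurements, and a query measurement is matched to the entry with the smallest Mahalanobis distance (the metric named in the keywords). The grid, covariance and data below are illustrative only.

    import numpy as np

    def mahalanobis(x, mu, cov_inv):
        d = x - mu
        return float(np.sqrt(d @ cov_inv @ d))

    # Hypothetical fingerprint database: position -> mean ranging vector to 4 anchors.
    db_positions = {(0.0, 0.0): np.array([3.1, 4.2, 5.0, 2.8]),
                    (1.0, 0.0): np.array([2.4, 4.6, 5.3, 3.1]),
                    (0.0, 1.0): np.array([3.5, 3.6, 4.4, 3.4])}
    cov = np.diag([0.2, 0.2, 0.3, 0.2])        # assumed measurement covariance
    cov_inv = np.linalg.inv(cov)

    measured = np.array([2.5, 4.5, 5.2, 3.0])  # query measurement (multi-path affected)
    best = min(db_positions, key=lambda p: mahalanobis(measured, db_positions[p], cov_inv))
    print("estimated position:", best)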

Keywords: chirp spread spectrum, indoor localization, pattern-matching, time of arrival, multi-path, mahalanobis distance, reception rate, simultaneous localization and mapping, laser range finder

Procedia PDF Downloads 217
1523 Structure-Based Virtual Screening to Identify CLDN4 Inhibitors

Authors: Jayanthi Sivaraman

Abstract:

Claudins are important components of the tight junctions that play a key role in paracellular permeability. Among the various members of the claudin family, Claudin 4 (CLDN4) is found to be overexpressed in ovarian and pancreatic carcinomas and other epithelial malignancies. Therefore, in this study, an attempt has been made to identify potent inhibitors of CLDN4 from the ZINC database using virtual screening, molecular docking and molecular dynamics simulations. A well-refined molecular model of CLDN4 was built using Prime of Schrödinger v10.2 (template: PDB ID 4P79). Approximately 6 million compounds from the ZINC database were subjected to high-throughput virtual screening (HTVS) against the active site of CLDN4. Molecular docking using GLIDE predicted ARG31, ASN142, ASP146 and ARG158 as critically important residues. Furthermore, three compounds from the ZINC database (ZINC96331839, ZINC36533519 and ZINC75819394) showed highly promising ADME properties and binding affinity with stable conformations. The therapeutic efficacy of these lead compounds can be evaluated and confirmed by in vitro and in vivo studies, which may lead to the development of novel anti-cancer drugs.

Keywords: ADME property, inhibitors, molecular docking, virtual screening

Procedia PDF Downloads 304
1522 Complex Technology of Virtual Reconstruction: The Case of Kazan Imperial University of XIX-Early XX Centuries

Authors: L. K. Karimova, K. I. Shariukova, A. A. Kirpichnikova, E. A. Razuvalova

Abstract:

This article deals with the technology of virtual reconstruction of Kazan Imperial University of the XIX - early XX centuries. The paper describes technologies for 3D visualization of high-resolution models of objects in the university space, the creation of a multi-agent system and an organized database of historical sources connected with these objects, and variants of the use of technologies of immersion into the virtual environment.

Keywords: 3D-reconstruction, multi-agent system, database, university space, virtual reconstruction, virtual heritage

Procedia PDF Downloads 239
1521 Calculation of Methane Emissions from Wetlands in Slovakia via IPCC Methodology

Authors: Jozef Mindas, Jana Skvareninova

Abstract:

Wetlands are a major natural source of methane emissions, but they also represent important biodiversity reservoirs in the landscape. There are about 26 thousand hectares of wetlands in Slovakia, identified via the wetlands monitoring program. The resulting database of wetlands in Slovakia allows the analysis of several ecological processes, including the estimation of methane emissions. Based on the information from the database, the first estimate of methane emissions from wetlands in Slovakia has been made. The IPCC methodology (Tier 1 approach) has been used, with proposed emission factors for the ice-free period derived from climatic data. The highest methane emissions, of nearly 550 Gg, are associated with the category of fens. Almost 11 Gg of methane is emitted from bogs, and emissions from flooded lands represent less than 8 Gg.
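
A Tier 1-style estimate multiplies the area of each wetland category by a methane emission factor over the ice-free period. The sketch below illustrates that arithmetic; the areas and emission factors are placeholders, not the values used in the study.

    # IPCC Tier 1-style estimate: emission = area [ha] * EF [kg CH4/ha/day] * ice-free days.
    # All numbers below are illustrative placeholders, not the study's data.
    ice_free_days = 250
    wetlands = {            # category: (area in hectares, emission factor in kg CH4/ha/day)
        "fens":          (15000.0, 1.2),
        "bogs":          ( 6000.0, 0.5),
        "flooded lands": ( 5000.0, 0.4),
    }

    for category, (area_ha, ef) in wetlands.items():
        ch4_gg = area_ha * ef * ice_free_days / 1e6   # kg converted to Gg
        print(f"{category}: {ch4_gg:.2f} Gg CH4 per year")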

Keywords: bogs, methane emissions, Slovakia, wetlands

Procedia PDF Downloads 257
1520 CMPD: Cancer Mutant Proteome Database

Authors: Po-Jung Huang, Chi-Ching Lee, Bertrand Chin-Ming Tan, Yuan-Ming Yeh, Julie Lichieh Chu, Tin-Wen Chen, Cheng-Yang Lee, Ruei-Chi Gan, Hsuan Liu, Petrus Tang

Abstract:

Whole-exome sequencing, which focuses on the protein-coding regions of disease/cancer-associated genes based on a priori knowledge, is the most cost-effective method to study the association between genetic alterations and disease. Recent advances in high-throughput sequencing technologies and proteomic techniques have provided an opportunity to integrate genomics and proteomics, allowing mutated peptides corresponding to mutated genes to be readily detected. Since sequence database search is the most widely used method for protein identification in mass spectrometry (MS)-based proteomics, a mutant proteome database is required to better approximate the real protein pool and improve the identification of disease-associated mutated proteins. Large-scale whole exome/genome sequencing studies have been launched by the National Cancer Institute (NCI), the Broad Institute, and The Cancer Genome Atlas (TCGA), which provide not only comprehensive reports on the analysis of coding variants in diverse samples and cell lines but also an invaluable resource for the extensive research community. However, no existing database collects the mutant protein sequences related to the variants identified in these studies. CMPD is designed to address this issue, serving as a bridge between genomic data and proteomic studies and focusing on protein sequence-altering variations originating from both germline and cancer-associated somatic variations.
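
The core operation behind such a database is turning an annotated coding variant into a mutant protein sequence that an MS search engine can match. A toy sketch of applying a missense variant to a reference protein sequence is shown below; the sequence, variant notation and helper function are illustrative, not CMPD's actual pipeline.

    import re

    def apply_missense(protein_seq, variant):
        """Apply a missense variant given in protein notation, e.g. 'G12V'."""
        match = re.fullmatch(r"([A-Z])(\d+)([A-Z])", variant)
        if not match:
            raise ValueError(f"unsupported variant notation: {variant}")
        ref, pos, alt = match.group(1), int(match.group(2)), match.group(3)
        if protein_seq[pos - 1] != ref:
            raise ValueError(f"reference mismatch at position {pos}")
        return protein_seq[:pos - 1] + alt + protein_seq[pos:]

    # Example reference fragment and a somatic variant (illustrative only).
    reference = "MTEYKLVVVGAGGVGKSALTIQLIQ"
    mutant = apply_missense(reference, "G12V")
    print(mutant)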

Keywords: TCGA, cancer, mutant, proteome

Procedia PDF Downloads 560
1519 Digital Development of Cultural Heritage: Construction of Traditional Chinese Pattern Database

Authors: Shaojian Li

Abstract:

The traditional Chinese patterns, as an integral part of Chinese culture, possess unique values in history, culture, and art. However, with the passage of time and societal changes, many of these traditional patterns are at risk of being lost, damaged, or forgotten. To undertake the digital preservation and protection of these traditional patterns, this paper will collect and organize images of traditional Chinese patterns. It will provide exhaustive and comprehensive semantic annotations, creating a resource library of traditional Chinese pattern images. This will support the digital preservation and application of traditional Chinese patterns.

Keywords: digitization of cultural heritage, traditional Chinese patterns, digital humanities, database construction

Procedia PDF Downloads 25
1518 BiLex-Kids: A Bilingual Word Database for Children 5-13 Years Old

Authors: Aris R. Terzopoulos, Georgia Z. Niolaki, Lynne G. Duncan, Mark A. J. Wilson, Antonios Kyparissiadis, Jackie Masterson

Abstract:

As word databases for bilingual children are not available, researchers, educators and textbook writers must rely on monolingual databases. The aim of this study is thus to develop a bilingual word database, BiLex-kids, an online open access developmental word database for 5-13 year old bilingual children who learn Greek as a second language and have English as their dominant one. BiLex-kids is compiled from 120 Greek textbooks used in Greek-English bilingual education in the UK, USA and Australia, and provides word translations in the two languages, pronunciations in Greek, and psycholinguistic variables (e.g. Zipf, Frequency per million, Dispersion, Contextual Diversity, Neighbourhood size). After clearing the textbooks of non-relevant items (e.g. punctuation), algorithms were applied to extract the psycholinguistic indices for all words. As well as one total lexicon, the database produces values for all ages (one lexicon for each age) and for three age bands (one lexicon per age band: 5-8, 9-11, 12-13 years). BiLex-kids provides researchers with accurate figures for a wide range of psycholinguistic variables, making it a useful and reliable research tool for selecting stimuli to examine lexical processing among bilingual children. In addition, it offers children the opportunity to study word spelling, learn translations and listen to pronunciations in their second language. It further benefits educators in selecting age-appropriate words for teaching reading and spelling, while special educational needs teachers will have a resource to control the content of word lists when designing interventions for bilinguals with literacy difficulties.
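
To make the indices concrete, the sketch below derives frequency per million and a Zipf-style value from token counts in a tiny made-up word list; it assumes the common convention Zipf = log10(frequency per million) + 3, which may differ from the exact definition used by BiLex-kids.

    import math
    from collections import Counter

    # Toy corpus token counts standing in for the 120 Greek textbooks.
    tokens = ["και", "το", "σπίτι", "και", "νερό", "το", "και"]
    counts = Counter(tokens)
    corpus_size = sum(counts.values())

    for word, count in counts.items():
        freq_per_million = count / corpus_size * 1_000_000
        zipf = math.log10(freq_per_million) + 3        # assumed Zipf convention
        print(f"{word}\t{count}\t{freq_per_million:.1f}\t{zipf:.2f}")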

Keywords: bilingual children, psycholinguistics, vocabulary development, word databases

Procedia PDF Downloads 286
1517 Change of Endocrine and Exocrine Insufficiency on Non-Diabetes Patients after Distal Pancreatectomy: A Nationwide Database Study

Authors: Jin-Ming Wu, Te-Wei Ho, Yu-Wen Tien

Abstract:

Background: The aim of this population-based study was to determine the occurrence of diabetes and exocrine pancreatic insufficiency (EPI) in non-diabetic subjects receiving distal pancreatectomy (DP). Method: A nationwide cohort between 2000 and 2010 was collected from the Taiwan National Health Insurance Research Database. Among 3264 DP patients, we identified 1410 non-diabetic patients and 966 patients who were both non-diabetic and free of EPI. Results: Of the 1410 non-diabetic DP subjects, 312 patients (22.1%) developed newly diagnosed diabetes after DP. In a multiple logistic regression model, co-morbid hyperlipidemia (odds ratio, 1.640; 95% CI, 1.362–2.763; P < 0.001) and pancreatitis (odds ratio, 2.428; 95% CI, 1.889–3.121; P < 0.001) contributed significantly to higher incidences of diabetes after DP. Moreover, 380 subjects (39.3%) developed EPI, and pancreatic cancer was a statistically significant risk factor (odds ratio, 4.663; 95% CI, 2.108–6.085; P < 0.001). Conclusion: Patients with co-morbid hyperlipidemia and chronic pancreatitis had higher rates of newly diagnosed diabetes after DP; moreover, pancreatic cancer subjects had higher rates of pancreatic exocrine insufficiency after DP. Clinicians should be alert in following up glucose metabolism and clinical symptoms of fat intolerance in DP patients.
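
The odds ratios quoted above come from multiple logistic regression: exponentiating each fitted coefficient gives the adjusted odds ratio for that co-morbidity. A generic sketch with synthetic data (not the Taiwan NHIRD cohort) is shown below.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Synthetic stand-in for the cohort: binary co-morbidity flags and outcome.
    rng = np.random.default_rng(1)
    n = 1000
    df = pd.DataFrame({
        "hyperlipidemia": rng.integers(0, 2, n),
        "pancreatitis":   rng.integers(0, 2, n),
    })
    logit_p = -1.5 + 0.5 * df["hyperlipidemia"] + 0.9 * df["pancreatitis"]
    df["new_onset_diabetes"] = rng.random(n) < 1 / (1 + np.exp(-logit_p))

    X = sm.add_constant(df[["hyperlipidemia", "pancreatitis"]])
    model = sm.Logit(df["new_onset_diabetes"].astype(int), X).fit(disp=0)
    print(np.exp(model.params))      # exp(coefficient) = adjusted odds ratio per factor
    print(np.exp(model.conf_int()))  # 95% confidence intervals on the odds-ratio scale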

Keywords: distal pancreatectomy, National database, diabetes, exocrine insufficiency

Procedia PDF Downloads 174
1516 Privacy Preserving in Association Rule Mining on Horizontally Partitioned Database

Authors: Manvar Sagar, Nikul Virpariya

Abstract:

Advances in data mining techniques play an important role in many applications. In the context of privacy and security issues, the problems caused by association rule mining techniques have been investigated by many researchers. It has been shown that misuse of this technique may reveal the database owner's sensitive and private information to others. Many researchers have put effort into preserving privacy in association rule mining. Of the two basic approaches to privacy-preserving data mining, randomization-based and cryptography-based, the latter provides a high level of privacy but incurs higher computational as well as communication overhead. Hence, it is necessary to explore alternative techniques that reduce these overheads. In this work, we propose an efficient, collusion-resistant, cryptography-based approach to distributed association rule mining using Shamir's secret sharing scheme. As we show through theoretical and practical analysis, our approach is provably secure and requires a trusted third party only once. We use secret sharing for privately sharing the information and a code-based identification scheme to add protection against malicious adversaries.
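
Shamir's (k, n) secret sharing is the building block referred to above: a site hides its local value (for example, a support count) in a random polynomial of degree k-1, hands one point of the polynomial to each party, and any k shares recover the value by Lagrange interpolation at zero. A compact sketch over a prime field follows; the field size and the count are illustrative.

    import random

    PRIME = 2_147_483_647       # a prime large enough for toy support counts

    def make_shares(secret, k, n, prime=PRIME):
        """Split `secret` into n shares; any k of them reconstruct it."""
        coeffs = [secret] + [random.randrange(prime) for _ in range(k - 1)]
        def poly(x):
            return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
        return [(x, poly(x)) for x in range(1, n + 1)]

    def reconstruct(shares, prime=PRIME):
        """Lagrange interpolation at x = 0 recovers the secret."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % prime
                    den = den * (xi - xj) % prime
            secret = (secret + yi * num * pow(den, -1, prime)) % prime
        return secret

    # Toy use: a site shares its local support count of an itemset with 5 parties;
    # any 3 of them can jointly recover it.
    shares = make_shares(secret=1234, k=3, n=5)
    print(reconstruct(shares[:3]))   # prints 1234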

Keywords: Privacy, Privacy Preservation in Data Mining (PPDM), horizontally partitioned database, EMHS, MFI, shamir secret sharing

Procedia PDF Downloads 377
1515 SQL Generator Based on MVC Pattern

Authors: Chanchai Supaartagorn

Abstract:

Structured Query Language (SQL) is the de facto standard language for accessing and manipulating data in a relational database. Although SQL is simple and powerful, most novice users have trouble with SQL syntax. Thus, we present an SQL generator tool which is capable of translating user actions into SQL and displaying the SQL commands and data sets simultaneously. The tool was developed based on the Model-View-Controller (MVC) pattern. The MVC pattern is a widely used software design pattern that enforces the separation between the input, processing, and output of an application. Developers take full advantage of it to reduce complexity in architectural design and to increase flexibility and reuse of code. In addition, we use white-box testing for code verification in the Model module.
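
A minimal illustration of the MVC separation described above, written here in Python rather than the authors' implementation: the model builds the SQL string from structured input, the view renders the command next to the result set, and the controller wires a user action to both. All names are hypothetical.

    # Minimal MVC-style SQL generator sketch (illustrative, not the authors' tool).

    class QueryModel:
        """Model: turns structured selections into an SQL string."""
        def build_select(self, table, columns, where=None):
            sql = f"SELECT {', '.join(columns)} FROM {table}"
            if where:
                sql += f" WHERE {where}"
            return sql + ";"

    class QueryView:
        """View: displays the generated SQL and the resulting rows together."""
        def render(self, sql, rows):
            print("SQL:", sql)
            for row in rows:
                print("  ", row)

    class QueryController:
        """Controller: translates a user action into model and view calls."""
        def __init__(self, model, view, executor):
            self.model, self.view, self.executor = model, view, executor
        def on_user_action(self, table, columns, where=None):
            sql = self.model.build_select(table, columns, where)
            self.view.render(sql, self.executor(sql))

    # A stub executor stands in for the real database connection.
    fake_db = lambda sql: [("Alice", 21), ("Bob", 23)]
    QueryController(QueryModel(), QueryView(), fake_db).on_user_action(
        "students", ["name", "age"], where="age > 20")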

Keywords: MVC, relational database, SQL, White-Box testing

Procedia PDF Downloads 400
1514 Integrating a Universal Forensic DNA Database: Anticipated Deterrent Effects

Authors: Karen Fang

Abstract:

Investigative genetic genealogy has attracted much interest in both the field of ethics and the public eye due to its global application in criminal cases. Arguments have been made regarding privacy and informed consent, especially with law enforcement using consumer genetic testing results to convict individuals. In the case of public interest, DNA databases have strong potential to significantly reduce crime, which in turn leads to safer communities and better futures. With the advancement of genetic technologies, the integration of a universal forensic DNA database in violent crimes, crimes against children, and missing person cases is expected to deter crime while protecting privacy. Rather than collecting whole genomes from the whole population, STR profiles can be used to identify unrelated individuals without compromising personal information such as physical appearance, disease risk, and geographical origin, while additionally reducing cost and storage space. STR DNA profiling is already used in forensic science, and going a step further benefits several areas, including a reduction in recidivism, improved criminal court case turnaround time, and just punishment. Furthermore, adding individuals to the database as early as possible prevents young offenders and first-time offenders from participating in criminal activity. It is important to highlight that DNA databases should be inclusive and tightly governed, and that misconceptions about the use of DNA based on crime television series and other media sources should be addressed. Nonetheless, deterrent effects have been observed in countries such as the US and Denmark, whose DNA databases consist of serious violent offenders. Fewer crimes were reported, and fewer people were convicted of those crimes, a favorable outcome that not even the death penalty could provide. Currently, there is no better alternative than a universal forensic DNA database made up of STR profiles. It can open doors for investigative genetic genealogy and foster better communities. Expanding the appropriate use of DNA databases is ethically acceptable and positively impacts the public.

Keywords: bioethics, deterrent effects, DNA database, investigative genetic genealogy, privacy, public interest

Procedia PDF Downloads 126
1513 System of Quality Automation for Documents (SQAD)

Authors: R. Babi Saraswathi, K. Divya, A. Habeebur Rahman, D. B. Hari Prakash, S. Jayanth, T. Kumar, N. Vijayarangan

Abstract:

Document automation is the design of systems and workflows that assemble repetitive documents to meet specific business needs. In any organization or institution, documenting employees' information is very important for both employees and management, as it shows an individual's progress to the management. Many employee documents exist on paper, so they are difficult to organize, and retrieving a particular document for future reference requires considerable time. It is also very tedious to generate reports according to specific needs, and the process becomes even more difficult when approvals are required, so it lacks security. This project overcomes the above-stated issues. By storing the details in a database and maintaining e-documents, the automation system reduces manual work to a large extent. The approval process for important documents can then be carried out in a much more secure manner by using digital signature and encryption techniques. Details are maintained in the database, e-documents are stored in specific folders, and the generation of various kinds of reports is possible. Moreover, an efficient search method is implemented in the database. Automation supporting document maintenance is useful in many respects: it minimizes data entry, reduces the time spent on proof-reading, avoids duplication, and reduces the risks associated with manual error.
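
The approval step rests on a standard sign-and-verify flow: a stored e-document is signed with the approver's private key and later checked against the corresponding public key. A minimal sketch using Ed25519 from the Python cryptography package is given below as a stand-in for whatever signature scheme SQAD actually uses.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # The approver generates a key pair once; the public key is stored by the system.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    document = b"Employee appraisal report, FY 2015"   # e-document bytes from the database
    signature = private_key.sign(document)              # approval = signature over the document

    # Later, anyone holding the public key can verify that the approved document
    # has not been altered since it was signed.
    try:
        public_key.verify(signature, document)
        print("approval verified")
    except InvalidSignature:
        print("document was modified after approval")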

Keywords: e-documents, automation, digital signature, encryption

Procedia PDF Downloads 360
1512 Bimodal Biometrics System Using Fusion of Iris and Fingerprint

Authors: Attallah Bilal, Hendel Fatiha

Abstract:

This paper proposes a bimodal biometric system for identity verification using iris and fingerprint, fused at the matching-score level using a weighted sum of scores technique. Features are extracted from the pre-processed iris and fingerprint images. These features of a query image are compared with those of a database image to obtain matching scores. The individual scores generated after matching are passed to the fusion module. This module consists of three major steps, i.e., normalization, generation of a similarity score, and fusion of weighted scores. The final score is then used to declare the person genuine or an impostor. The system is tested on the CASIA database and gives an overall accuracy of 91.04% with a FAR of 2.58% and an FRR of 8.34%.
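
The fusion module can be summarised in a few lines: normalise each matcher's score (min-max normalisation is assumed here), combine the normalised scores with a weighted sum, and compare the result against a decision threshold. The weights, ranges and threshold below are placeholders, not values tuned on CASIA.

    def min_max_normalize(score, lo, hi):
        """Map a raw matching score onto [0, 1] given the matcher's score range."""
        return (score - lo) / (hi - lo)

    def fuse(iris_score, finger_score, w_iris=0.6, w_finger=0.4):
        """Weighted-sum fusion of the two normalised similarity scores."""
        return w_iris * iris_score + w_finger * finger_score

    # Toy raw scores and (assumed) score ranges for each matcher.
    iris = min_max_normalize(0.82, lo=0.0, hi=1.0)
    finger = min_max_normalize(37.0, lo=0.0, hi=60.0)

    fused = fuse(iris, finger)
    THRESHOLD = 0.55                      # placeholder decision threshold
    print("genuine" if fused >= THRESHOLD else "impostor", round(fused, 3))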

Keywords: iris, fingerprint, sum rule, fusion

Procedia PDF Downloads 338
1511 Characteristics Features and Action Mechanism of Some Country Made Pistols

Authors: Ajitesh Pal, Arpan Datta Roy, H. K. Pratihari

Abstract:

Illegal firearms crudely made by skilled gunsmiths from scrap materials are popularly known as country-made firearms. Such firearms, along with improvised ammunition, are clandestinely marketed without any license, at a cheaper price, to extremist groups, criminals, poachers and firearm lovers. As per the National Crime Records Bureau (NCRB), MHA, Govt. of India, about 80% of firearm cases are committed with country-made/improvised firearms. The ballistic division of the laboratory has examined a good number of such cases. The analysis of firearm cases received for forensic examination revealed that 7.65 mm calibre pistols, mostly improvised firearms, are commonly used in firearm-related crime cases. In the present communication, the physical parameters and other characteristic features of some 7.65 mm calibre pistols are discussed in detail. The detailed study of country-made (CM) firearms will help to prepare a database related to the type of material used, the origin of the raw material and the tools used for inscription. The study also covers the chemistry of the propellants and the head stamp patterns. The database will be helpful as reference material for firearm examiners, researchers, and students pursuing studies in forensic science.

Keywords: improvised pistol, stringent gun law, working mechanism, parameters, database

Procedia PDF Downloads 43
1510 Gamification to Enhance Learning Using Gagne's Learning Model

Authors: M. L. McLain, R. Sreelakshmi, Abhishek, Rajeshwaran, Bhavani Rao, Kamal Bijlani, R. Jayakrishnan

Abstract:

Technology-enhanced learning has brought drastic changes to the field of education in the modern world. In this study, we explore a novel way to improve how high school students learn by building a serious game that uses a pedagogical model developed by Robert Gagne. Integrating a serious game with the principles of Gagne's learning model can provide engaging and meaningful instruction to students. The game developed in this study is a waste sorting game that easily and succinctly demonstrates the principles of this learning model. All the tasks in the game that the player has to accomplish correspond to Gagne's "Nine Events of Learning". A quiz is incorporated in order to obtain data on the progress made by the player in understanding the concept, as well as to assess the player. Additionally, an experimental study was conducted which demonstrates that game-based learning using Gagne's events is more effective than a traditional classroom setup.

Keywords: game based learning, sorting and recycling of waste, Gagne’s learning model, e-Learning, technology enhanced learning

Procedia PDF Downloads 599
1509 The Modification of Convolutional Neural Network in Fin Whale Identification

Authors: Jiahao Cui

Abstract:

In past centuries, due to climate change and intense whaling, the global whale population dramatically declined. Among the various whale species, the fin whale experienced the most drastic drop in numbers due to its popularity in whaling. Against this background, identifying fin whale calls could be immensely beneficial to the preservation of the species. This paper uses feature extraction to process the input audio signal; then a network based on AlexNet and three networks based on the ResNet model were constructed to classify fin whale calls. A mixture of the DOSITS database and the Watkins database was used during training. The results demonstrate that a modified ResNet network has the best performance considering precision and network complexity.
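
A hedged sketch of the kind of modification described: take a torchvision ResNet, adapt its first convolution to single-channel spectrogram input and its final layer to a two-class output. This illustrates "a network based on the ResNet model" generically and is not the authors' exact architecture or training setup.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from a standard ResNet-18 and adapt it to 1-channel spectrograms
    # with a binary output (fin whale call / not a fin whale call).
    model = models.resnet18(weights=None)
    model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
    model.fc = nn.Linear(model.fc.in_features, 2)

    # One forward pass on a dummy batch of log-mel spectrogram "images".
    spectrograms = torch.randn(8, 1, 128, 128)   # (batch, channels, freq bins, time frames)
    logits = model(spectrograms)
    print(logits.shape)                          # torch.Size([8, 2])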

Keywords: convolutional neural network, ResNet, AlexNet, fin whale preservation, feature extraction

Procedia PDF Downloads 88
1508 A Novel Framework for User-Friendly Ontology-Mediated Access to Relational Databases

Authors: Efthymios Chondrogiannis, Vassiliki Andronikou, Efstathios Karanastasis, Theodora Varvarigou

Abstract:

A large amount of data is typically stored in relational databases (DBs). The latter can efficiently handle user queries which intend to elicit the appropriate information from data sources. However, direct access and use of this data require the end users to have an adequate technical background, while they should also cope with the internal data structure and the values presented. Consequently, information retrieval is quite a difficult process even for IT or DB experts, taking into account the limited contribution of relational databases from the conceptual point of view. Ontologies enable users to formally describe a domain of knowledge in terms of concepts and the relations among them, and hence they can be used for unambiguously specifying the information captured by the relational database. However, accessing information residing in a database using ontologies is feasible only if the users are keen on using semantic web technologies. To enable users from different disciplines to retrieve the appropriate data, the design of a Graphical User Interface is necessary. In this work, we present an interactive, ontology-based, semantically enabled web tool that can be used for information retrieval purposes. The tool is entirely based on the ontological representation of the underlying database schema, and it provides a user-friendly environment through which users can graphically form and execute their queries.
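
The interplay described above, an ontology mirroring the relational schema with user queries expressed over it, ultimately issues SPARQL against the ontological layer. The toy rdflib sketch below builds a small in-memory graph and runs such a query; the namespace, classes and data are hypothetical, not the project's ontology.

    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/hospital#")   # hypothetical ontology namespace
    g = Graph()
    g.bind("ex", EX)

    # Triples mirroring two rows of a relational "patients" table.
    g.add((EX.patient1, RDF.type, EX.Patient))
    g.add((EX.patient1, EX.hasAge, Literal(67)))
    g.add((EX.patient2, RDF.type, EX.Patient))
    g.add((EX.patient2, EX.hasAge, Literal(34)))

    # The SPARQL a graphical query builder could generate for "patients older than 50".
    query = """
        PREFIX ex: <http://example.org/hospital#>
        SELECT ?patient ?age
        WHERE { ?patient a ex:Patient ; ex:hasAge ?age . FILTER(?age > 50) }
    """
    for row in g.query(query):
        print(row.patient, row.age)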

Keywords: ontologies, relational databases, SPARQL, web interface

Procedia PDF Downloads 249
1507 Rendering Cognition Based Learning in Coherence with Development within the Context of PostgreSQL

Authors: Manuela Nayantara Jeyaraj, Senuri Sucharitharathna, Chathurika Senarath, Yasanthy Kanagaraj, Indraka Udayakumara

Abstract:

PostgreSQL is an Object-Relational Database Management System (ORDBMS) that has been in existence for a while. Despite the superior features that it packages for managing databases and data, the database community has not fully realized the importance and advantages of PostgreSQL. Hence, this research focuses on providing a better development environment for PostgreSQL in order to encourage its use and elucidate its importance. PostgreSQL is also known to be the world's most elementary SQL-compliant open source ORDBMS. However, users have not yet turned to PostgreSQL, partly because of the complexity of its persistent textual environment for an introductory user. Simply stated, there is a dire need for an easy way of making users comprehend the procedures and standards with which databases, tables and the relationships among them are created, and with which queries and their condition-based flow are manipulated in PostgreSQL, to help the community turn to PostgreSQL at a higher rate. Hence, this research first identifies the dominant features provided by PostgreSQL over its competitors. Following the identified merits, an analysis of why the database community is hesitant to migrate to PostgreSQL's environment is carried out. These findings are modulated and tailored based on the scope and the constraints discovered. The research proposes a system that serves as a design platform as well as a learning tool, providing an interactive method of learning via a visual editor mode and incorporating a textual editor for well-versed users. The study is based on devising viable solutions that analyze a user's cognitive perception in comprehending human-computer interfaces and the behavioural processing of design elements. By providing a visually draggable and manipulable environment for working with PostgreSQL databases and table queries, the system is expected to highlight the elementary features offered by PostgreSQL over other existing systems, in order to convey their importance and simplicity to a hesitant user.
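
As an example of the kind of statement the proposed visual editor would generate and execute behind the scenes, the sketch below creates and queries a table through psycopg2, the standard Python adapter for PostgreSQL. The connection parameters and table are placeholders.

    import psycopg2

    # Placeholder connection parameters; adjust for a real PostgreSQL instance.
    conn = psycopg2.connect(dbname="learnerdb", user="student",
                            password="secret", host="localhost")
    cur = conn.cursor()

    # The kind of DDL/DML a drag-and-drop editor could emit for the user.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS course (
            id      SERIAL PRIMARY KEY,
            title   TEXT NOT NULL,
            credits INTEGER
        )
    """)
    cur.execute("INSERT INTO course (title, credits) VALUES (%s, %s)",
                ("Databases 101", 4))
    conn.commit()

    cur.execute("SELECT title, credits FROM course WHERE credits >= %s", (3,))
    for title, credits in cur.fetchall():
        print(title, credits)

    cur.close()
    conn.close()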

Keywords: cognition, database, PostgreSQL, text-editor, visual-editor

Procedia PDF Downloads 248
1506 Analysis and Prediction of COVID-19 by Using Recurrent LSTM Neural Network Model in Machine Learning

Authors: Grienggrai Rajchakit

Abstract:

The coronavirus outbreak has been declared a pandemic by the WHO, and it spread all over the world within a few days. To control this spread, maintaining social distance and taking self-preventive measures are the best strategies for every citizen. As of now, many researchers and scientists are continuing their research on finding an effective vaccine. Machine learning models find that the coronavirus disease behaves in an exponential manner. To mitigate the consequences of this pandemic, efficient steps should be taken to analyze this disease. In this paper, a recurrent neural network model is chosen to predict the number of active cases in a particular state. To make this prediction of active cases, we need a database. The COVID-19 database is downloaded from the KAGGLE website and analyzed by applying a recurrent LSTM neural network with univariate features to predict the number of active cases of patients suffering from the coronavirus. The downloaded database is divided into training and testing sets for the chosen neural network model. The model is trained with the training data set and tested with the testing data set to predict the number of active cases in a particular state; here, we have concentrated on the state of Andhra Pradesh.
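
A minimal sketch of the univariate LSTM setup described above: the active-case series is cut into sliding windows, split into training and test portions, and fed to a small Keras LSTM. The window length, layer sizes and synthetic series are placeholders, not the KAGGLE data or the study's hyper-parameters.

    import numpy as np
    from tensorflow import keras

    def make_windows(series, window):
        X, y = [], []
        for i in range(len(series) - window):
            X.append(series[i:i + window])
            y.append(series[i + window])
        return np.array(X)[..., None], np.array(y)

    # Synthetic stand-in for the daily active-case counts of one state.
    days = np.arange(200)
    active_cases = 50 * np.exp(0.03 * days) + np.random.default_rng(0).normal(0, 20, 200)

    window = 14
    X, y = make_windows(active_cases, window)
    split = int(0.8 * len(X))
    X_train, y_train, X_test, y_test = X[:split], y[:split], X[split:], y[split:]

    model = keras.Sequential([
        keras.layers.LSTM(32, input_shape=(window, 1)),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X_train, y_train, epochs=20, verbose=0)
    print("test MSE:", model.evaluate(X_test, y_test, verbose=0))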

Keywords: COVID-19, coronavirus, KAGGLE, LSTM neural network, machine learning

Procedia PDF Downloads 136
1505 Block Mining: Block Chain Enabled Process Mining Database

Authors: James Newman

Abstract:

Process mining is an emerging technology that seeks to serialize enterprise data as time-series data. It has been used by many companies and has been the subject of a variety of research papers. However, the majority of current efforts have looked at how best to perform process mining from standard relational databases. This paper is a first pass at outlining a database custom-built to be a minimal viable product for process mining. We present Block Miner, a blockchain protocol to store process mining data across a distributed network. We demonstrate the feasibility of storing process mining data on the blockchain. We present a proof of concept and show how the intersection of these two technologies helps to solve a variety of issues, including but not limited to ransomware attacks, tax documentation, and conflict resolution.
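
A toy sketch of the underlying mechanism: each block stores a batch of process-mining events together with the hash of the previous block, so tampering with any earlier event breaks the chain. This illustrates the general idea only, not the Block Miner protocol itself.

    import hashlib
    import json
    import time

    def block_hash(block):
        payload = json.dumps(block, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    def add_block(chain, events):
        block = {
            "index": len(chain),
            "timestamp": time.time(),
            "events": events,   # process-mining event log entries (case, activity, time)
            "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,
        }
        chain.append(block)
        return block

    def verify(chain):
        return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
                   for i in range(1, len(chain)))

    chain = []
    add_block(chain, [{"case": "order-1", "activity": "create", "ts": "2022-01-01T09:00"}])
    add_block(chain, [{"case": "order-1", "activity": "approve", "ts": "2022-01-01T10:30"}])
    print(verify(chain))                               # True
    chain[0]["events"][0]["activity"] = "delete"
    print(verify(chain))                               # False: tampering breaks the chain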

Keywords: blockchain, process mining, memory optimization, protocol

Procedia PDF Downloads 60
1504 Modified Active (MA) Algorithm to Generate Semantic Web Related Clustered Hierarchy for Keyword Search

Authors: G. Leena Giri, Archana Mathur, S. H. Manjula, K. R. Venugopal, L. M. Patnaik

Abstract:

Keyword search in XML documents is based on the notion of lowest common ancestors in the labelled trees model of XML documents and has recently gained a lot of research interest in the database community. In this paper, we propose the Modified Active (MA) algorithm which is an improvement over the active clustering algorithm by taking into consideration the entity aspect of the nodes to find the level of the node pertaining to a particular keyword input by the user. A portion of the bibliography database is used to experimentally evaluate the modified active algorithm and results show that it performs better than the active algorithm. Our modification improves the response time of the system and thereby increases the efficiency of the system.
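
Keyword search over labelled XML trees hinges on the lowest common ancestor of the nodes containing the keywords, typically computed from Dewey-style labels. The small sketch below shows that LCA step with illustrative labels; the entity-aware level selection of the MA algorithm itself is not reproduced.

    def lca(dewey_a, dewey_b):
        """Lowest common ancestor of two nodes given as Dewey labels, e.g. '1.2.3'."""
        a, b = dewey_a.split("."), dewey_b.split(".")
        prefix = []
        for x, y in zip(a, b):
            if x != y:
                break
            prefix.append(x)
        return ".".join(prefix)

    # Hypothetical labelled bibliography fragment: nodes containing each keyword.
    nodes_with_keyword = {
        "database": ["1.1.2.1", "1.3.2"],
        "mining":   ["1.1.2.3"],
    }
    # Smallest subtree containing both keywords, per pair of occurrences.
    for d in nodes_with_keyword["database"]:
        for m in nodes_with_keyword["mining"]:
            print(d, m, "->", lca(d, m))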

Keywords: keyword matching patterns, MA algorithm, semantic search, knowledge management

Procedia PDF Downloads 376
1503 Geo Spatial Database for Railway Assets Management

Authors: Muhammad Umar

Abstract:

Safety and asset management are considered the backbone of every department. GIS has become very important for railways in managing assets and security through digital maps and web-based GIS maps, as it provides a complete framework to the organization for the management of assets. Pakistan Railways is the most common and safest mode of travel in Pakistan. There is an ever-increasing demand to transport huge amounts of information generated from various sources, and this information must be accurate; otherwise, it creates problems for passengers and the administration and causes financial and time losses. GIS solves this problem through digital maps and databases. It provides real-time spatial and statistical analysis that helps communicate and exchange information with users in a sophisticated way. A GIS-based web system allows different end users to query the data at the same time according to their requirements. This GIS system provides the organization with a complete monitoring, safety and decision system for tracks, stations and junctions, which can further be used for the analysis of different areas, i.e., the analysis of tracks, junctions and stations in cases of reconstruction, rescue for rail accidents, and natural disasters. This research work helps to reduce financial losses and human mistakes and provides a complete security and management system for assets.

Keywords: Geographical Information System (GIS) for assets management, geo spatial database, railway assets management, Pakistan

Procedia PDF Downloads 460
1502 TomoTherapy® System Repositioning Accuracy According to Treatment Localization

Authors: Veronica Sorgato, Jeremy Belhassen, Philippe Chartier, Roddy Sihanath, Nicolas Docquiere, Jean-Yves Giraud

Abstract:

We analyzed the image-guided radiotherapy method used by the TomoTherapy® System (Accuray Corp.) for patient repositioning in clinical routine. The TomoTherapy® System computes X, Y, Z and roll displacements to match the reference CT, on which the dosimetry has been performed, with the pre-treatment MV CT. The accuracy of the repositioning method has been studied according to the treatment localization. For this, a database of 18,774 treatment sessions, performed over 2 consecutive years (2016-2017), has been used. The database includes the X, Y, Z and roll displacements proposed by the TomoTherapy® System as well as the manual correction of these proposals applied by the radiation therapist. This manual correction aims to further improve the repositioning based on the clinical situation and depends on the structures surrounding the target tumor tissue. The statistical analysis performed on the database aims to define repositioning limits to be used as a safety and guidance tool for the manual adjustment made by the radiation therapist. This tool will serve not only to flag potential repositioning errors but also to further improve patient positioning for optimal treatment.
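
The statistical tool described amounts to summarising, per treatment localization, the distribution of the X, Y, Z and roll corrections and deriving alert limits from it (for example, percentile bands). A generic pandas sketch with synthetic displacements follows; it is not the clinical analysis itself.

    import numpy as np
    import pandas as pd

    # Synthetic stand-in for the session database: one row per treatment session.
    rng = np.random.default_rng(42)
    n = 1000
    df = pd.DataFrame({
        "localization": rng.choice(["head&neck", "pelvis", "thorax"], n),
        "dx_mm": rng.normal(0, 2.0, n),
        "dy_mm": rng.normal(0, 3.5, n),
        "dz_mm": rng.normal(0, 2.5, n),
        "roll_deg": rng.normal(0, 0.8, n),
    })

    # Per-localization repositioning limits: 2.5th and 97.5th percentiles of each axis.
    limits = (df.groupby("localization")[["dx_mm", "dy_mm", "dz_mm", "roll_deg"]]
                .quantile([0.025, 0.975]))
    print(limits)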

Keywords: accuracy, IGRT MVCT, image-guided radiotherapy megavoltage computed tomography, statistical analysis, tomotherapy, localization

Procedia PDF Downloads 200
1501 Bundle Block Detection Using Spectral Coherence and Levenberg Marquardt Neural Network

Authors: K. Padmavathi, K. Sri Ramakrishna

Abstract:

This study describes a procedure for the detection of Left and Right Bundle Branch Block (LBBB and RBBB) ECG patterns using the spectral coherence (SC) technique and an LM neural network. The coherence function finds common frequencies between two signals and evaluates the similarity of the two signals. The QT variations of bundle branch blocks are observed in lead V1 of the ECG. The spectral coherence technique uses Welch's method for calculating the PSD. For the detection of normal and bundle branch block beats, the SC output values are given as input features to the LMNN classifier. The overall accuracy of the LMNN classifier is 99.5 percent. The data were collected from the MIT-BIH Arrhythmia database.
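
The feature-extraction step, magnitude-squared coherence estimated with Welch's method, is available directly in scipy. A minimal sketch with synthetic signals follows; the signals are placeholders for the MIT-BIH lead V1 beats.

    import numpy as np
    from scipy.signal import coherence

    fs = 360                      # MIT-BIH records are sampled at 360 Hz
    t = np.arange(0, 2, 1 / fs)
    rng = np.random.default_rng(0)

    # Synthetic stand-ins: a "template" beat and a "test" beat sharing a 10 Hz component.
    template = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)
    test_beat = np.sin(2 * np.pi * 10 * t + 0.5) + 0.3 * rng.normal(size=t.size)

    # Magnitude-squared coherence via Welch's method; Cxy values feed the classifier.
    f, Cxy = coherence(template, test_beat, fs=fs, nperseg=256)
    print(f[np.argmax(Cxy)], Cxy.max())   # frequency of the strongest shared content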

Keywords: bundle block, SC, LMNN classifier, welch method, PSD, MIT-BIH, arrhythmia database

Procedia PDF Downloads 250
1500 Knowledge-Driven Decision Support System Based on Knowledge Warehouse and Data Mining by Improving Apriori Algorithm with Fuzzy Logic

Authors: Pejman Hosseinioun, Hasan Shakeri, Ghasem Ghorbanirostam

Abstract:

In recent years, research on knowledge sources, decision support systems, data mining and the procedure of knowledge discovery in databases has become increasingly important, and it is considered that each of these aspects affects the others. In this article, we merge the information source and the knowledge source to propose a knowledge-based system, within the limits of management, based on storing and retrieving knowledge to manage information and improve decision making and resource use. We use data mining and the Apriori algorithm in the knowledge discovery procedure. One of the problems with the Apriori algorithm is that the user must specify the minimum support threshold. Imagine that a user wants to apply the Apriori algorithm to a database with millions of transactions. The user certainly does not have knowledge of all the transactions existing in that database and therefore cannot specify a suitable threshold. Our purpose in this article is to improve the Apriori algorithm. To achieve this goal, we use fuzzy logic to place the data into different clusters before applying the Apriori algorithm to the data in the database, and we also suggest the most suitable threshold to the user automatically.
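
A compact sketch of the two pieces combined in this article: a first Apriori-style pass that counts item supports, and an automatically suggested minimum-support threshold derived from those counts. Here the suggestion is simply the median support, a stand-in for the fuzzy-logic clustering step, which is not reproduced.

    from collections import Counter
    from itertools import combinations
    import statistics

    transactions = [
        {"bread", "milk"}, {"bread", "butter", "milk"}, {"beer", "bread"},
        {"milk", "butter"}, {"bread", "milk", "butter"}, {"beer", "milk"},
        {"bread", "milk"},
    ]

    # First Apriori pass: support of single items.
    item_support = Counter(item for t in transactions for item in t)

    # Naive automatic threshold suggestion: the median single-item support
    # (a simple stand-in for the fuzzy-clustering-based suggestion in the article).
    min_support = statistics.median(item_support.values())
    frequent_items = {i for i, c in item_support.items() if c >= min_support}

    # Second pass: candidate pairs built only from frequent items (Apriori pruning).
    pair_support = Counter()
    for t in transactions:
        for pair in combinations(sorted(t & frequent_items), 2):
            pair_support[pair] += 1
    frequent_pairs = {p: c for p, c in pair_support.items() if c >= min_support}

    print("suggested min support:", min_support)
    print("frequent pairs:", frequent_pairs)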

Keywords: decision support system, data mining, knowledge discovery, data discovery, fuzzy logic

Procedia PDF Downloads 306