Search results for: biological databases
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 743

503 Removal of Lead in High Rate Activated Sludge System

Authors: Mamdouh Y. Saleh, Gaber EL Enany, Medhat H. Elzahar, Mohamed Z. Elshikhipy, Rana Hamouda

Abstract:

Heavy metal pollution in the water, sediments, and fish of Lake Manzala results from the disposal of wastewater and of industrial and agricultural drainage water into the lake. A pilot plant with an industrial discharge flow of 135 L/h was designed according to the activated sludge process to combine biological and chemical treatment, with alum added to the aeration tank at dosages of 100, 150, 200 and 250 mg/L. The industrial discharge had average lead and BOD5 concentrations of 1.22 and 145 mg/L, respectively, meaning the average Pb concentration was up to 25 times higher than the permissible limit. The optimized chemical-biological process with a 200 mg/L alum dosage improved the lead and BOD5 removal efficiencies to 61.76% and 56%, respectively.
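
As a quick illustration of the removal-efficiency arithmetic behind the figures above, the sketch below uses the stated influent lead concentration and a hypothetical effluent value back-calculated from the reported 61.76% removal; the effluent number is an assumption for illustration only.

```python
# Removal efficiency = (C_in - C_out) / C_in * 100
# Influent Pb taken from the abstract; the effluent value is hypothetical,
# back-calculated to match the reported 61.76% removal.
c_in_pb = 1.22                      # mg/L, average influent lead
c_out_pb = c_in_pb * (1 - 0.6176)   # ~0.466 mg/L (assumed effluent)

removal = (c_in_pb - c_out_pb) / c_in_pb * 100
print(f"Lead removal efficiency: {removal:.2f}%")  # ~61.76%
```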

Keywords: Industrial wastewater, Activated sludge, BOD5, Lead, Alum salt.

PDF Downloads: 2533
502 Localizing Acoustic Touch Impacts using Zip-stuffing in Complex k-space Domain

Authors: R. Bremananth, Andy W. H. Khong, A. Chitra

Abstract:

Visualizing sound and noise often helps us determine appropriate control over source localization. Near-field acoustic holography (NAH) is a powerful tool for this ill-posed problem. In practice, however, due to the small finite aperture size, discrete Fourier transform (FFT) based NAH cannot predict the active region of interest (AROI) near the edges of the plane. A few approaches have been proposed in theory for solving the finite aperture problem, but most of them are not well suited to practical implementation, especially near the edge of the source. In this paper, a zip-stuffing extrapolation approach with a 2D Kaiser window is suggested. It operates in the complex wavenumber (k-space) domain to localize the predicted sources. We numerically construct a test environment with touch impact databases to evaluate sound source localization. It is observed that zip-stuffing aperture extrapolation and the 2D window with evanescent components provide higher accuracy, especially for small apertures and their derivatives.
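
The abstract does not give the implementation details of the zip-stuffing step; the sketch below is only a generic illustration, under the assumption that the measured aperture is tapered with a 2D Kaiser window and zero-padded before transforming to the wavenumber (k-) domain. The array sizes, beta value, and padding factor are arbitrary choices, not values from the paper.

```python
import numpy as np

# Hypothetical measured hologram pressure on a small (64 x 64) aperture.
p = np.random.randn(64, 64)

# 2D Kaiser window built as the outer product of two 1D Kaiser windows;
# it tapers the aperture edges to reduce wrap-around (leakage) errors.
beta = 8.0
w2d = np.outer(np.kaiser(p.shape[0], beta), np.kaiser(p.shape[1], beta))
p_windowed = p * w2d

# Zero-pad ("stuff") the windowed aperture before the FFT, which refines
# the k-space sampling used for back-propagation.
pad = 2 * np.array(p.shape)
p_padded = np.pad(p_windowed, ((0, pad[0]), (0, pad[1])))

# Transform to the complex wavenumber (k-space) domain.
P_k = np.fft.fftshift(np.fft.fft2(p_padded))
print(P_k.shape)  # (192, 192)
```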

Keywords: Acoustic source localization, Near-field acoustic holography (NAH), FFT, Extrapolation, k-space wavenumber errors.

PDF Downloads: 1629
501 Image Retrieval Based on Multi-Feature Fusion for Heterogeneous Image Databases

Authors: N. W. U. D. Chathurani, Shlomo Geva, Vinod Chandran, Proboda Rajapaksha

Abstract:

Selecting an appropriate image representation is the most important factor in implementing an effective Content-Based Image Retrieval (CBIR) system. This paper presents a multi-feature fusion approach for efficient CBIR, based on the distance distribution of features and relative feature weights at the time of query processing. It is a simple yet effective approach, which is free from the effects of the features' dimensions, ranges, internal feature normalization and the distance measure. The approach can easily be adopted with any feature combination to improve retrieval quality. It is empirically evaluated using two benchmark image classification datasets (a subset of the Corel dataset and the Oliva and Torralba dataset) and compared with existing approaches. The proposed approach shows significantly improved performance compared with the independently evaluated baselines of previously proposed feature fusion approaches.
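
The abstract does not spell out the fusion formula, so the sketch below is only a hedged illustration of one common way to fuse per-feature distances without explicit normalization: converting each feature's distances to ranks (which removes the effect of scale and range) and combining them with relative weights. The feature names, weights, and distances are hypothetical.

```python
import numpy as np

def fuse_distances(per_feature_distances, weights):
    """Fuse per-feature query-to-database distances via weighted ranks.

    per_feature_distances: dict mapping feature name -> 1D array of
    distances from the query to every database image under that feature.
    Using ranks instead of raw distances sidesteps differences in
    feature dimensionality, range, and distance measure.
    """
    names = list(per_feature_distances)
    n = len(per_feature_distances[names[0]])
    fused = np.zeros(n)
    for name in names:
        d = np.asarray(per_feature_distances[name], dtype=float)
        ranks = d.argsort().argsort()          # 0 = closest image
        fused += weights[name] * ranks / (n - 1)
    return fused.argsort()                     # best-ranked images first

# Hypothetical distances for three database images under two features.
dists = {"color_hist": np.array([0.2, 0.9, 0.4]),
         "texture":    np.array([120.0, 80.0, 300.0])}
print(fuse_distances(dists, {"color_hist": 0.6, "texture": 0.4}))
```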

Keywords: Feature fusion, image retrieval, membership function, normalization.

PDF Downloads: 1322
500 PIELG: A Protein Interaction Extraction System Using a Link Grammar Parser from Biomedical Abstracts

Authors: Rania A. Abul Seoud, Nahed H. Solouma, Abou-Baker M. Youssef, Yasser M. Kadah

Abstract:

Due to the ever-growing number of publications about protein-protein interactions, information extraction from text is increasingly recognized as one of the crucial technologies in bioinformatics. This paper presents PIELG, a protein interaction extraction system that uses a Link Grammar Parser on biomedical abstracts. PIELG uses the linkages given by the Link Grammar Parser to start a case-based analysis of the contents of various syntactic roles as well as their linguistically significant and meaningful combinations. The system uses phrasal-prepositional verb patterns to overcome problems with preposition combinations. The recall and precision are 74.4% and 62.65%, respectively. Experimental comparisons with two other state-of-the-art extraction systems indicate that PIELG achieves better performance. For further evaluation, the system is augmented with a graphical package (Cytoscape) for extracting protein interaction information from sequence databases. The results show that the performance is remarkably promising.

Keywords: Link Grammar Parser, Interaction extraction, protein-protein interaction, Natural language processing.

PDF Downloads: 2222
499 Artificial Neural Network Development by means of Genetic Programming with Graph Codification

Authors: Daniel Rivero, Julián Dorado, Juan R. Rabuñal, Alejandro Pazos, Javier Pereira

Abstract:

The development of Artificial Neural Networks (ANNs) is usually a slow process in which the human expert has to test several architectures until finding the one that achieves the best results for a certain problem. This work presents a new technique that uses Genetic Programming (GP) to generate ANNs automatically. To do this, the GP algorithm was modified to work with graph structures so that ANNs can be developed. The technique also allows simplified networks to be obtained that solve the problem with a small number of neurons. To measure the performance of the system and to compare the results with other ANN development methods based on Evolutionary Computation (EC) techniques, several tests were performed on problems drawn from some of the most widely used test databases. The comparisons show that the system achieves good results, comparable with existing techniques and, in most cases, better than them.

Keywords: Artificial Neural Networks, Evolutionary Computation, Genetic Programming.

PDF Downloads: 1440
498 The Resource Description Framework (RDF) as a Modern Structure for Medical Data

Authors: Gabriela Lindemann, Danilo Schmidt, Thomas Schrader, Dietmar Keune

Abstract:

The amount and heterogeneity of data in biomedical research, notably in interdisciplinary fields, require new methods for the collection, presentation and analysis of information. Important data from laboratory experiments as well as patient trials are available but come from distributed resources. The Charité - University Hospital Berlin, together with the German Research Foundation (DFG), has established a new information service centre for kidney diseases and transplantation (Open European Nephrology Science Centre - OpEN.SC). Besides the collaborative aspect of creating new research groups, every partner or institution of this science information centre that makes its own data available is allowed to search the whole data pool of the participating centres. A core task is the implementation of a non-restricting, open data structure for the various different data sources. We decided to use a modern RDF model and, in a first phase, transformed original data coming from the web-based Electronic Patient Record database TBase©.
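
To make the RDF idea concrete, here is a minimal, hedged sketch of how patient-record metadata might be expressed as RDF triples with the Python rdflib library. The namespace, property names, and values are invented for illustration and are not the OpEN.SC schema.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# Hypothetical namespace; not the actual OpEN.SC vocabulary.
EX = Namespace("http://example.org/nephrology#")

g = Graph()
patient = URIRef("http://example.org/patient/0001")

# Each fact is a subject-predicate-object triple.
g.add((patient, RDF.type, EX.TransplantPatient))
g.add((patient, EX.creatinine, Literal(1.4, datatype=XSD.decimal)))
g.add((patient, EX.treatedAt, EX.CentreBerlin))

# Serialize to Turtle so other centres can consume the metadata.
print(g.serialize(format="turtle"))
```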

Keywords: Medical databases, Resource Description Framework (RDF), metadata repository.

PDF Downloads: 2009
497 Internal Behavior of Biological Nutrient Removal System for Advanced Wastewater Treatment

Authors: J. K. Choi, D. W. Kim, H. S. Shin, H. J. Yeon, B. K. Kim, Yeon. Fan, D. Chang, S. B. Han, J. M. Hur, B. R. Jung, S. M. Park

Abstract:

The purpose of this research was to develop a biological nutrient removal (BNR) system with low energy consumption, sludge production, and land usage, indicating that the BNR system could be an alternative for future wastewater treatment in the ubiquitous city (U-city). Organics and nitrogen compounds can be removed by this system so that the secondary or tertiary stages of wastewater treatment satisfy their standards. The system was composed of oxic and anoxic filters filled with PVDC and POM media. The anoxic/oxic filter system was operated at an empty bed contact time of 4 hours while the recirculation ratio was increased from 0 to 100%. The system removals of total nitrogen and COD were 76.3% and 93%, respectively. To observe the internal behavior of the system, SCOD, NH3-N, and NO3-N were monitored, showing removals in the ranges of 25-100%, 59-99%, and 70-100%, respectively.

Keywords: BNR, nitrification, denitrification, organics removal, anoxic, oxic, advanced treatment.

PDF Downloads: 1594
496 A Query Optimization Strategy for Autonomous Distributed Database Systems

Authors: Dina K. Badawy, Dina M. Ibrahim, Alsayed A. Sallam

Abstract:

A distributed database is a collection of logically related databases that cooperate in a transparent manner. Query processing, which uses a communication network for transmitting data between sites, is one of the challenges in the database world. The development of sophisticated query optimization technology is a key reason for the commercial success of database systems, and its complexity and cost increase with the number of relations in the query. Mariposa, query trading, and query trading with processing-task trading are strategies developed for autonomous distributed database systems, but they cause high optimization cost because all nodes are involved in generating an optimal plan. In this paper, we propose a modification of the autonomous strategy K-QTPT that gives the seller nodes with the lowest cost gradually higher priorities, in order to reduce the optimization time. We implement the proposed strategy and present the results and an analysis based on them.
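
The abstract only states that low-cost seller nodes receive gradually higher priorities; the sketch below is a hedged guess at what such an assignment could look like, sorting hypothetical seller bids by cost and handing out descending priorities. The node names, costs, and priority scale are invented and do not reproduce K-QTPT itself.

```python
# Hypothetical seller bids: node name -> estimated processing cost.
bids = {"node_a": 120.0, "node_b": 45.0, "node_c": 80.0, "node_d": 200.0}

# Sort sellers by cost; the cheapest seller gets the highest priority,
# so the optimizer can consider it first and prune expensive plans early.
ranked = sorted(bids.items(), key=lambda kv: kv[1])
priorities = {node: len(ranked) - i for i, (node, _) in enumerate(ranked)}

for node, cost in ranked:
    print(f"{node}: cost={cost:>6.1f}  priority={priorities[node]}")
# node_b gets priority 4 (highest), node_d gets priority 1 (lowest).
```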

Keywords: Autonomous strategies, distributed database systems, high priority, query optimization.

PDF Downloads: 1029
495 The Effect of Biochar, Inoculated Biochar and Compost on the Biological Component of the Soil

Authors: H. Dvořáčková, I. Mikajlo, J. Záhora, J. Elbl

Abstract:

Biochar can be produced from waste matter, and its application has been associated with returning large amounts of carbon to the soil. The impacts of this material on the physical and chemical properties of soil have been described. A large part of the research, however, is dedicated to the hypothesis that this material has toxic effects on soil life, i.e. on the soil biological component. Methods that could eliminate these undesirable properties of biochar are currently being developed. One possibility is to mix biochar with organic material, such as compost, or to focus on accelerating natural processes in the soil. In this experiment, both the addition of compost and the elimination of toxic substances by promoting microbial activity in an aerated water environment were used. Biochar was aerated for 7 days in a container with a volume of 20 L. Biochar modified in this way gave six times higher biomass production and reduced mineral nitrogen leaching. Better results were achieved by mixing biochar with compost.

Keywords: Leaching of nitrogen, soil, biochar, compost.

PDF Downloads: 3005
494 The Use of Mobile Phones by Refugees to Create Social Connectedness: A Literature Review

Authors: Sarah Vuningoma, Maria Rosa Lorini, Wallace Chigona

Abstract:

Mobile phones are one of the main tools for promoting the wellbeing of people and supporting the integration of communities on the margins, such as refugees. Information and Communication Technology has the potential to contribute towards reducing isolation and loneliness, and to assist in improving interpersonal relations and fostering acculturation processes. Therefore, the use of mobile phones by refugees might contribute to their social connectedness. This paper aims to demonstrate how the existing literature has shown that the use of mobile phones by refugees can engender social connectedness among them. Data for the study are drawn from the existing literature; we searched a number of electronic databases for papers published between 2010 and 2019. The main findings of the study relate to the use of mobile phones by refugees to (i) create a sense of belonging, (ii) maintain relationships, and (iii) advance the acculturation process. The analysis highlighted a gap in the research on refugees and social connectedness. In particular, further studies should consider evaluating the differences between those who have a refugee permit, those who are waiting for a refugee permit, and those whose request was denied.

Keywords: Belonging, mobile phones, refugees, social connectedness.

PDF Downloads: 809
493 Automatic Method for Exudates and Hemorrhages Detection from Fundus Retinal Images

Authors: A. Biran, P. Sobhe Bidari, K. Raahemifar

Abstract:

Diabetic Retinopathy (DR) is an eye disease that leads to blindness. The earliest signs of DR are the appearance of red and yellow lesions on the retina called hemorrhages and exudates. Early diagnosis of DR prevents blindness; hence, many automated algorithms have been proposed to extract hemorrhages and exudates. In this paper, an automated algorithm is presented to extract hemorrhages and exudates separately from retinal fundus images using different image processing techniques, including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering and thresholding. Since the optic disc has the same color as the exudates, it is first localized and detected. The presented method has been tested on fundus images from the Structured Analysis of the Retina (STARE) and Digital Retinal Images for Vessel Extraction (DRIVE) databases using MATLAB code. The results show that the method is capable of detecting hard exudates and highly probable soft exudates. It is also capable of detecting hemorrhages and distinguishing them from blood vessels.
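
As a hedged illustration of two of the named building blocks (CLAHE contrast enhancement and thresholding for bright exudate candidates), the Python/OpenCV sketch below shows a generic pipeline on synthetic data; the channel choice, clip limit, and threshold value are assumptions, and this is not the authors' MATLAB implementation.

```python
import cv2
import numpy as np

# Stand-in for the green channel of a fundus photograph (a real run would
# use cv2.imread); the green channel usually shows the best lesion contrast.
rng = np.random.default_rng(0)
green = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)

# CLAHE: contrast-limited adaptive histogram equalization.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(green)

# Simple global threshold to keep bright regions as exudate candidates
# (the optic disc would still need to be localized and masked out).
_, candidates = cv2.threshold(enhanced, 200, 255, cv2.THRESH_BINARY)
print("candidate exudate pixels:", int(np.count_nonzero(candidates)))
```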

Keywords: Diabetic retinopathy, fundus, CHT, exudates, hemorrhages.

PDF Downloads: 2613
492 Comparing Spontaneous Hydrolysis Rates of Activated Models of DNA and RNA

Authors: Mohamed S. Sasi, Adel M. Mlitan, Abdulfattah M. Alkherraz

Abstract:

This research project aims to investigate the difference in relative rates of phosphoryl transfer relevant to the biological catalysis of DNA and RNA in pH-independent reactions. Activated models of DNA and RNA, alkyl-aryl phosphate diesters (with 4-nitrophenyl as a good leaving group), were successfully prepared to gather kinetic parameters. Eyring plots for the pH-independent hydrolysis of 1 and 2 were established at different temperatures in the range 100-160 °C. These measurements provide a better estimate of the difference in relative rates between DNA and RNA cleavage. The Eyring plots gave an extrapolated rate of k(H2O) = 1 × 10^-10 s^-1 at 25 °C for both 1 (the RNA model) and 2 (the DNA model). Comparing the reactivity of the RNA and DNA models shows that the difference in relative rates in the pH-independent reactions is surprisingly small at 25 °C. This allows us to obtain chemical insight into how biological catalysts such as enzymes may have evolved to perform their current functions.
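
For context, the extrapolation above relies on the Eyring relation; the standard textbook form used for such plots is shown below. This is general background, not a value-by-value reconstruction of the authors' fit.

```latex
% Eyring equation and its linearized form used for Eyring plots:
% plotting ln(k/T) against 1/T gives a straight line whose slope and
% intercept yield the activation enthalpy and entropy, allowing k to be
% extrapolated to 25 °C from data measured at 100-160 °C.
k = \frac{k_B T}{h}\,
    \exp\!\left(-\frac{\Delta H^{\ddagger}}{RT}\right)
    \exp\!\left(\frac{\Delta S^{\ddagger}}{R}\right)
\qquad\Longrightarrow\qquad
\ln\frac{k}{T} = -\frac{\Delta H^{\ddagger}}{R}\cdot\frac{1}{T}
    + \ln\frac{k_B}{h} + \frac{\Delta S^{\ddagger}}{R}
```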

Keywords: DNA & RNA Models, Relative Rates, Reactivity.

PDF Downloads: 2373
491 Methods of Geodesic Distance in Two-Dimensional Face Recognition

Authors: Rachid Ahdid, Said Safi, Bouzid Manaut

Abstract:

In this paper, we present a comparative study of three methods for 2D face recognition: Iso-Geodesic Curves (IGC), Geodesic Distance (GD) and Geodesic-Intensity Histogram (GIH). These approaches are based on computing the geodesic distance between points of the facial surface and between facial curves. In this study we represent the gray-level image as a 2D surface in a 3D space, with the third coordinate proportional to the intensity values of the pixels. In the classification step, we use Neural Networks (NN), K-Nearest Neighbor (KNN) and Support Vector Machines (SVM). The images used in our experiments are from two well-known face image databases, ORL and YaleB. The ORL database was used to evaluate the performance of the methods under varying pose and sample size, and the YaleB database was used to examine the performance of the systems under varying facial expressions and lighting.

Keywords: 2D face recognition, Geodesic distance, Iso-Geodesic Curves, Geodesic-Intensity Histogram, facial surface, Neural Networks, K-Nearest Neighbor, Support Vector Machines.

PDF Downloads: 1797
490 Small Businesses' Decision to Have a Website: A Saudi Arabia Case Study

Authors: M. Al-hawari, H. AL–Yamani, B. Izwawa

Abstract:

Recognizing the increasing importance of using the Internet to conduct business, this paper looks at some of the matters associated with a small business's decision of whether or not to have a Website and go online. Small businesses in Saudi Arabia struggle with this decision. For organizations to go fully online, conduct business and provide online information services, they need to connect their databases to the Web. Some issues related to doing so may be beyond the capabilities of most small businesses in Saudi Arabia, such as Website management, technical issues and security concerns. Here we focus on a small business firm in Saudi Arabia as a case study, discussing the issues related to the decision to go online and the firm's options of what to do and how to do it. The paper suggests some valuable solutions for connecting databases to the Web. It also discusses some of the important issues related to online information services and e-commerce, mainly Web hosting options and security issues.

Keywords: E-Commerce, Saudi Arabia, Small business, Web-database connection, Web hosting, World Wide Web (Web).

PDF Downloads: 1950
489 Improvement in Power Transformer Intelligent Dissolved Gas Analysis Method

Authors: S. Qaedi, S. Seyedtabaii

Abstract:

Non-destructive evaluation of in-service power transformer condition is necessary for avoiding catastrophic failures, and Dissolved Gas Analysis (DGA) is one of the important methods. Traditional, statistical and intelligent DGA approaches have been adopted for accurate classification of incipient fault sources. Unfortunately, there are often not enough faulty patterns for sufficient training of intelligent systems. Bootstrapping is expected to alleviate this shortcoming and to yield algorithms with better classification success rates. In this paper, the performance of artificial neural network, K-Nearest Neighbour and support vector machine methods using bootstrapped data is detailed, and it is shown that while the success rate of the ANN algorithms improves remarkably, the outcomes of the others do not benefit as much from the enlarged data space. For assessment, two databases are employed: IEC TC10 and a dataset collected from data reported in papers. The high average test success rate demonstrates the remarkable outcome.
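
The bootstrapping step itself is generic; the sketch below shows one hedged way to enlarge a small fault-pattern training set by resampling with replacement using scikit-learn, then training a neural network classifier on the enlarged set. The synthetic data, sample counts, and classifier settings are placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.utils import resample

rng = np.random.default_rng(0)

# Placeholder DGA feature vectors (e.g. gas concentration ratios) and
# fault labels; a real study would load IEC TC10-style records instead.
X = rng.random((60, 5))
y = rng.integers(0, 3, size=60)

# Bootstrap: draw a larger training set by sampling with replacement.
X_boot, y_boot = resample(X, y, replace=True, n_samples=300, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_boot, y_boot)
print("training accuracy on bootstrapped set:", clf.score(X_boot, y_boot))
```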

Keywords: Dissolved gas analysis, Transformer incipient fault, Artificial Neural Network, Support Vector Machine (SVM), K-Nearest Neighbor (KNN).

PDF Downloads: 2717
488 Evaluation of the Impact of Dataset Characteristics for Classification Problems in Biological Applications

Authors: Kanthida Kusonmano, Michael Netzer, Bernhard Pfeifer, Christian Baumgartner, Klaus R. Liedl, Armin Graber

Abstract:

The availability of high-dimensional biological datasets, such as those from gene expression, proteomic, and metabolic experiments, can be leveraged for the diagnosis and prognosis of diseases. Many classification methods in this area have been studied to predict disease states and to separate predefined classes such as patients with a particular disease versus healthy controls. However, most of the existing research focuses on a specific dataset, and there is a lack of generic comparisons between classifiers that might provide a guideline for biologists or bioinformaticians to select the proper algorithm for new datasets. In this study, we compare the performance of popular classifiers, namely Support Vector Machine (SVM), Logistic Regression, k-Nearest Neighbor (k-NN), Naive Bayes, Decision Tree, and Random Forest, on mock datasets. We mimic common biological scenarios by simulating various proportions of real discriminating biomarkers and different effect sizes thereof. The results show that SVM performs quite stably and reaches a higher AUC than the other methods, which may be explained by its ability to minimize the probability of error. Moreover, Decision Tree, with its good applicability for diagnosis and prognosis, shows good performance in our experimental setup. Logistic Regression and Random Forest, however, strongly depend on the ratio of discriminators and perform better when a higher number of discriminators is present.
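
A hedged sketch of the kind of comparison described: generating a mock high-dimensional dataset in which only a fraction of the features truly discriminate between classes, then scoring several classifiers by cross-validated AUC with scikit-learn. The dataset sizes, proportion of informative features, and model settings are arbitrary, not the paper's simulation design.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Mock "omics-like" data: 200 samples, 500 features, only 10 of which
# actually discriminate between the two classes.
X, y = make_classification(n_samples=200, n_features=500,
                           n_informative=10, n_redundant=0,
                           random_state=0)

models = {
    "SVM": SVC(kernel="linear", random_state=0),
    "LogisticRegression": LogisticRegression(max_iter=5000),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
}

# Compare classifiers by 5-fold cross-validated AUC.
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```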

Keywords: Classification, High dimensional data, Machine learning

PDF Downloads: 2352
487 Optimized Preprocessing for Accurate and Efficient Bioassay Prediction with Machine Learning Algorithms

Authors: Jeff Clarine, Chang-Shyh Peng, Daisy Sang

Abstract:

A bioassay measures the potency of a chemical substance by its effect on living animal or plant tissue. Bioassay data and chemical structures from pharmacokinetic and drug metabolism screening are mined from, and housed in, multiple databases, and bioassay predictions are calculated from them to determine further advancement. This paper proposes a four-step preprocessing of datasets for improving bioassay predictions. The first step is instance selection, in which the dataset is split into training, testing, and validation sets. The second step is discretization, which partitions the data with a trade-off between accuracy and precision in mind. The third step is normalization, in which the data are scaled to between 0 and 1 for subsequent machine learning processing. The fourth step is feature selection, in which key chemical properties and attributes are generated. The streamlined results are then analyzed for prediction effectiveness using various machine learning tools, including Pipeline Pilot, R, Weka, and Excel. Experiments and evaluations reveal the effectiveness of various combinations of preprocessing steps and machine learning algorithms in producing more consistent and accurate predictions.
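
A hedged sketch of the four preprocessing steps, using scikit-learn rather than the tools named in the abstract (Pipeline Pilot, R, Weka, Excel): instance selection via train/validation/test splitting, discretization, min-max normalization to [0, 1], and univariate feature selection. The synthetic data, bin count, and k value are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import KBinsDiscretizer, MinMaxScaler

rng = np.random.default_rng(0)
X = rng.random((300, 20))                 # placeholder chemical descriptors
y = rng.integers(0, 2, size=300)          # placeholder assay outcomes

# Step 1: instance selection - training / validation / test split.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4,
                                                  random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5,
                                                random_state=0)

# Step 2: discretization (accuracy vs. precision trade-off via bin count).
disc = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="uniform")
X_train_d = disc.fit_transform(X_train)

# Step 3: normalization to the [0, 1] range.
X_train_n = MinMaxScaler().fit_transform(X_train_d)

# Step 4: feature selection - keep the 8 most informative descriptors.
selector = SelectKBest(f_classif, k=8)
X_train_s = selector.fit_transform(X_train_n, y_train)
print("final training matrix:", X_train_s.shape)   # (180, 8)
```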

Keywords: Bioassay, machine learning, preprocessing, virtual screen.

PDF Downloads: 961
486 On the Mathematical Structure and Algorithmic Implementation of Biochemical Network Models

Authors: Paola Lecca

Abstract:

Modeling and simulation of biochemical reactions is of great interest in the context of systems biology. The central dogma of this re-emerging area states that it is the system dynamics and organizing principles of complex biological phenomena that give rise to the functioning and function of cells. Cell functions, such as growth, division, differentiation and apoptosis, are temporal processes that can be understood if they are treated as dynamic systems. Systems biology focuses on an understanding of functional activity from a system-wide perspective and, consequently, it is defined by two key questions: (i) how do the components within a cell interact so as to bring about its structure and functioning? (ii) how do cells interact so as to develop and maintain higher levels of organization and function? In recent years, wet-lab biologists have embraced mathematical modeling and simulation as two essential means toward answering these questions. The credo of dynamical systems theory is that the behavior of a biological system is given by the temporal evolution of its state. Our understanding of the time behavior of a biological system can be measured by the extent to which a simulation mimics the real behavior of that system; deviations of a simulation indicate either limitations or errors in our knowledge. The aim of this paper is to summarize and review the main conceptual frameworks in which models of biochemical networks can be developed. In particular, we review the stochastic molecular modelling approaches, reporting the principal conceptualizations suggested by A. A. Markov, P. Langevin, A. Fokker, M. Planck, D. T. Gillespie, N. G. van Kampfen, and recently by D. Wilkinson, O. Wolkenhauer, P. S. Jöberg and by the author.
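
Among the stochastic approaches listed, Gillespie's stochastic simulation algorithm is the one most directly expressible in a few lines of code. Below is a minimal, hedged sketch for a single irreversible reaction A + B -> C with an assumed rate constant; it illustrates the algorithm generically and is not taken from the paper under review.

```python
import numpy as np

def gillespie_abc(a0, b0, c0=0, k=0.01, t_end=10.0, seed=0):
    """Exact stochastic simulation of A + B -> C (Gillespie, 1977)."""
    rng = np.random.default_rng(seed)
    t, A, B, C = 0.0, a0, b0, c0
    trajectory = [(t, A, B, C)]
    while t < t_end:
        propensity = k * A * B          # only one reaction channel here
        if propensity == 0:
            break                       # no reactants left
        # Waiting time to the next reaction is exponentially distributed.
        t += rng.exponential(1.0 / propensity)
        A, B, C = A - 1, B - 1, C + 1   # fire the reaction
        trajectory.append((t, A, B, C))
    return trajectory

traj = gillespie_abc(a0=100, b0=80)
print("final state (t, A, B, C):", traj[-1])
```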

Keywords: Mathematical structure, algorithmic implementation, biochemical network models.

PDF Downloads: 1538
485 A Watermarking System Using the Wavelet Technique for Satellite Images

Authors: I. R. Farah, I. B. Ismail, M. B. Ahmed

Abstract:

The huge development of new technologies and the appearance of ever more sophisticated open communication systems create a new challenge for protecting digital content from piracy. Digital watermarking is a recent research axis and a technique suggested as a solution to these problems. It consists in inserting identification information (a watermark) into digital data (audio, video, images, databases...) in an invisible and indelible manner, in such a way as not to degrade the original medium's quality. Moreover, we must be able to correctly extract the watermark despite deterioration of the watermarked medium (i.e., attacks). In this paper we propose a system for watermarking satellite images. We chose to embed the watermark in the frequency domain, precisely in the discrete wavelet transform (DWT). We applied our algorithm to satellite images of the Tunisian centre. The experiments show satisfying results. In addition, our algorithm showed important resistance against different attacks, notably compression (JPEG, JPEG2000), filtering, histogram manipulation and geometric distortions such as rotation, cropping and scaling.
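
A hedged sketch of generic DWT-domain watermark embedding with the PyWavelets library: decompose the image, additively embed a pseudo-random watermark into one sub-band, and reconstruct. The wavelet, sub-band choice, and embedding strength are assumptions and do not reproduce the authors' scheme.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)

# Placeholder "satellite image" and a binary watermark.
image = rng.random((256, 256))
alpha = 0.05                                   # embedding strength (assumed)

# Single-level 2D DWT: approximation + (horizontal, vertical, diagonal).
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")

# Embed the watermark additively in the diagonal detail sub-band,
# which is usually less perceptible than the approximation band.
watermark = rng.integers(0, 2, size=cD.shape)
cD_marked = cD + alpha * watermark

# Reconstruct the watermarked image with the inverse DWT.
watermarked = pywt.idwt2((cA, (cH, cV, cD_marked)), "haar")
print("max pixel change:", float(np.abs(watermarked - image).max()))
```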

Keywords: Digital data watermarking, Spatial Database, Satellite images, Discrete Wavelets Transform (DWT).

PDF Downloads: 1655
484 Programming Language Extension Using Structured Query Language for Database Access

Authors: Chapman Eze Nnadozie

Abstract:

Relational databases constitute a vital tool for the effective management and administration of both personal and organizational data, with data access ranging from single-user database management software to more complex distributed server systems. This paper appraises the use of a programming language extension, Structured Query Language (SQL), to establish links to a relational database (Microsoft Access 2013) from the Visual C++ 9 programming environment. The methodology involves the creation of tables to form a database using Microsoft Access 2013, which is Object Linking and Embedding (OLE) database compliant. SQL commands are used to query the tables in the database for easy extraction of the expected records inside the Visual C++ environment. The findings of this paper reveal that records can easily be accessed and manipulated to filter exactly what the user wants, such as retrieval of records meeting specified criteria, updating of records, and deletion of part or all of the records in a table.
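
The paper itself pairs Visual C++ 9 with Microsoft Access over OLE; as a hedged stand-in, the sketch below shows the same idea of embedding SQL in a host language using Python's built-in sqlite3 module, with an invented table and records.

```python
import sqlite3

# An in-memory database stands in for the Access 2013 file used in the paper.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Create a table and insert a few illustrative records.
cur.execute("CREATE TABLE staff (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")
cur.executemany("INSERT INTO staff (name, dept) VALUES (?, ?)",
                [("Ada", "IT"), ("Ben", "HR"), ("Chi", "IT")])

# Retrieval with a criterion, an update, and a deletion - the three
# operations highlighted in the abstract.
print(cur.execute("SELECT name FROM staff WHERE dept = ?", ("IT",)).fetchall())
cur.execute("UPDATE staff SET dept = ? WHERE name = ?", ("Finance", "Ben"))
cur.execute("DELETE FROM staff WHERE name = ?", ("Chi",))
conn.commit()
print(cur.execute("SELECT name, dept FROM staff").fetchall())
```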

Keywords: Data access, database, database management system, OLE, programming language, records, relational database, software, SQL, table.

PDF Downloads: 704
483 Examining the Value of Attribute Scores for Author-Supplied Keyphrases in Automatic Keyphrase Extraction

Authors: Vicky Min-How Lim, Siew Fan Wong, Tong Ming Lim

Abstract:

Automatic keyphrase extraction is useful for efficiently locating specific documents in online databases. While several techniques have been introduced over the years, improvement in accuracy has been minimal. This research examines attribute scores for author-supplied keyphrases to better understand how these scores affect the accuracy of automatic keyphrase extraction. Five attributes are chosen for examination: Term Frequency, First Occurrence, Last Occurrence, Phrase Position in Sentences, and Term Cohesion Degree. The results show that First Occurrence is the most reliable attribute. Term Frequency, Last Occurrence and Term Cohesion Degree display a wide range of variation but are still usable with suggested tweaks. Only Phrase Position in Sentences shows a totally unpredictable pattern. The results imply that the commonly used ranking approach, which directly extracts the top-ranked candidate phrases as keyphrases, may not be reliable.
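
To make two of the examined attributes concrete, the sketch below computes term frequency and a relative first-occurrence position for candidate phrases in a toy document; the exact scoring formulas are not given in the abstract, so these definitions are generic assumptions.

```python
def attribute_scores(text, phrase):
    """Return (term frequency, relative first occurrence) for a phrase."""
    doc = text.lower()
    p = phrase.lower()
    tf = doc.count(p)                                # raw term frequency
    pos = doc.find(p)
    first_occ = pos / len(doc) if pos >= 0 else 1.0  # 0 = start of document
    return tf, first_occ

doc = ("Keyphrase extraction locates documents in online databases. "
       "Good keyphrase extraction relies on reliable attribute scores.")

for candidate in ["keyphrase extraction", "attribute scores", "databases"]:
    tf, fo = attribute_scores(doc, candidate)
    print(f"{candidate!r}: term_frequency={tf}, first_occurrence={fo:.2f}")
```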

Keywords: Accuracy, Attribute Score, Author-supplied keyphrases, Automatic keyphrase extraction.

PDF Downloads: 1324
482 Categorizing Search Result Records Using Word Sense Disambiguation

Authors: R. Babisaraswathi, N. Shanthi, S. S. Kiruthika

Abstract:

Web search engines are designed to retrieve and extract information from web databases and to return dynamic web pages. The Semantic Web is an extension of the current web that includes semantic content in web pages. The main goal of the Semantic Web is to improve the quality of the current web by changing its contents into a machine-understandable form; its milestone is therefore to have semantic-level information in the web. Nowadays, people use different keyword-based search engines to find the relevant information they need from the web, but many words are polysemous. When these words are used to query a search engine, it displays Search Result Records (SRRs) with different meanings. In the proposed approach, SRRs with similar meanings are grouped together based on Word Sense Disambiguation (WSD). In addition, semantic annotation, the process of adding semantic metadata to web resources, is performed to improve the efficiency of the search result records: the grouped SRRs are annotated to generate a summary that describes the information in the SRRs. Automatic semantic annotation is a significant challenge in the Semantic Web, so ontologies and knowledge-based representations are used here to annotate the web pages.
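
A hedged sketch of the WSD-based grouping idea using NLTK's simplified Lesk implementation: each search result snippet containing the ambiguous query word is assigned a WordNet sense, and snippets sharing a sense are grouped together. The snippets are invented, and the paper's actual WSD method is not specified in the abstract.

```python
from collections import defaultdict
from nltk.wsd import lesk
# Requires the WordNet corpus: nltk.download("wordnet") beforehand.

query_word = "bank"
snippets = [
    "The bank raised interest rates on savings accounts this week.",
    "Fishing is allowed along the river bank near the old bridge.",
    "Open a bank account online with no minimum deposit.",
]

# Group search result records by the WordNet sense chosen for the query word.
groups = defaultdict(list)
for snippet in snippets:
    sense = lesk(snippet.split(), query_word)      # simplified Lesk WSD
    key = sense.name() if sense is not None else "unknown"
    groups[key].append(snippet)

for sense_name, members in groups.items():
    print(sense_name, "->", len(members), "snippet(s)")
```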

Keywords: Ontology, Semantic Web, WordNet, Word Sense Disambiguation.

PDF Downloads: 1740
481 Investigation on Performance of Change Point Algorithm in Time Series Dynamical Regimes and Effect of Data Characteristics

Authors: Farhad Asadi, Mohammad Javad Mollakazemi

Abstract:

In this paper, Bayesian online inference in models of data series is constructed by a change-point algorithm, which separates the observed time series into independent segments and studies the change and variation of the regime of the data together with the related statistical characteristics. Variation of the statistical characteristics of time series data often represents separate phenomena in a dynamical system, such as a change of brain state reflected in EEG signal measurements or a change in an important regime of the data in many dynamical systems. In this paper, a prediction algorithm for studying change-point locations in time series data is simulated. It is verified that the pattern of the proposed data distribution is an important factor for simpler and smoother fluctuation of the hazard rate parameter and also for better identification of change-point locations. Finally, the conditions under which the time series distribution affects the factors in this approach are explained and validated with different time series databases for several dynamical systems.

Keywords: Time series, fluctuation in statistical characteristics, optimal learning.

PDF Downloads: 1789
480 Extended Well-Founded Semantics in Bilattices

Authors: Daniel Stamate

Abstract:

One of the most used assumptions in logic programming and deductive databases is the so-called Closed World Assumption (CWA), according to which the atoms that cannot be inferred from the programs are considered to be false (i.e. a pessimistic assumption). One of the most successful semantics of conventional logic programs based on the CWA is the well-founded semantics. However, the CWA is not applicable in all circumstances when information is handled. That is, the well-founded semantics, if conventionally defined, would behave inadequately in different cases. The solution we adopt in this paper is to extend the well-founded semantics in order for it to be based also on other assumptions. The basis of (default) negative information in the well-founded semantics is given by the so-called unfounded sets. We extend this concept by considering optimistic, pessimistic, skeptical and paraconsistent assumptions, used to complete missing information from a program. Our semantics, called extended well-founded semantics, expresses also imperfect information considered to be missing/incomplete, uncertain and/or inconsistent, by using bilattices as multivalued logics. We provide a method of computing the extended well-founded semantics and show that Kripke-Kleene semantics is captured by considering a skeptical assumption. We show also that the complexity of the computation of our semantics is polynomial time.

Keywords: Logic programs, imperfect information, multivalued logics, bilattices, assumptions.

PDF Downloads: 1240
479 Developing Manufacturing Process for the Graphene Sensors

Authors: Abdullah Faqihi, John Hedley

Abstract:

Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing; such electrodes can be employed extensively in various analytical tasks such as drug discovery, food safety, medical diagnostics, process control, security and defence, in addition to environmental monitoring. The development of biosensors aims to create high-performance electrochemical electrodes for diagnostics and biosensing. A biosensor is a device that inspects the biological and chemical reactions generated by a biological sample; it carries out biological detection via a linked transducer and transmits the biological response as an electrical signal. Stability, selectivity, and sensitivity are the dynamic and static characteristics that affect and dictate the quality and performance of biosensors. In this research, an experimental study of the laser scribing technique for processing graphene oxide inside a vacuum chamber is presented. The processing of graphene oxide (GO) was achieved using the laser scribing technique, and the effect of laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. A GO solution was coated onto a LightScribe DVD, and the laser scribing technique was applied to reduce the GO layers and generate rGO. The morphological microstructures of rGO and GO were visualised and examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode model made under normal atmospheric conditions, whereas the second was a graphene electrode fabricated in a vacuum state using a vacuum chamber. The purpose was to control the vacuum conditions, such as the air pressure and the temperature, during the fabrication process. The parameters assessed include the layer thickness and the continuity of the environment. The results presented show high accuracy and repeatability, achieving low-cost productivity.

Keywords: Laser scribing, LightScribe DVD, graphene oxide, scanning electron microscopy.

PDF Downloads: 627
478 Stop Texting While Learning: A Meta-Analysis of Social Networks Use and Academic Performances

Authors: Proud Arunrangsiwed, Sarinya Kongtieng

Abstract:

Teachers and university lecturers face an unsolved problem: students' multitasking behaviors during class time, such as texting or playing games. It is important to examine the most powerful predictors of students' educational performance. Meta-analysis was used to analyze research articles published with the keywords multitasking, class performance, and texting. We selected 14 research articles published during 2008-2013 from online databases, of which four met the predetermined inclusion criteria. The effect size of each pair of variables was used as the dependent variable. The findings revealed that students' expectancy and value regarding SNS usage is the best significant predictor of their educational performance, followed by their motivation and ability in using SNSs, prior educational performance, SNS usage behaviors in class, and their personal characteristics, respectively. Future studies should use a longitudinal design to better understand the effect of multitasking in the classroom.

Keywords: Meta-regression analysis, social networking site use, academic performance, multitasking, motivation.

PDF Downloads: 1645
477 2D Spherical Spaces for Face Relighting under Harsh Illumination

Authors: Amr Almaddah, Sadi Vural, Yasushi Mae, Kenichi Ohara, Tatsuo Arai

Abstract:

In this paper, we propose a robust face relighting technique based on spherical space properties. The proposed method aims to reduce illumination effects on face recognition. Given a single 2D face image, we relight the face object by extracting the nine spherical harmonic bases and the face's spherical illumination coefficients. First, an internal training illumination database is generated by computing the face albedo and face normals from 2D images under different lighting conditions. Based on the generated database, we analyze the target face pixels and compare them with the training bootstrap using pre-generated tiles. In this work, practical real-time processing speed and small image size were considered when designing the framework. In contrast to other works, our technique requires no 3D face models for the training process and takes a single 2D image as input. Experimental results on publicly available databases show that the proposed technique works well under severe lighting conditions, with significant improvements in face recognition rates.

Keywords: Face synthesis and recognition, Face illumination recovery, 2D spherical spaces, Vision for graphics.

PDF Downloads: 1734
476 Microfluidic Paper-Based Electrochemical Biosensor

Authors: Ahmad Manbohi, Seyyed Hamid Ahmadi

Abstract:

A low-cost paper-based microfluidic device (PAD) for the multiplex electrochemical determination of glucose, uric acid, and dopamine in biological fluids was developed. Using wax printing, a PAD containing a central zone, six channels, and six detection zones was fabricated, and the electrodes were printed on the detection zones using a pre-made electrode template. For each analyte, two detection zones were used. The carbon working electrode was coated with chitosan-BSA (and with enzymes for glucose and uric acid). To detect glucose and uric acid, enzymatic reactions were employed; these reactions involve enzyme-catalyzed redox reactions of the analytes and produce free electrons for electrochemical measurement. Calibration curves were linear (R² > 0.980) in the ranges of 0-80 mM for glucose, 0.09-0.9 mM for dopamine, and 0-50 mM for uric acid, respectively. Blood samples were successfully analyzed by the proposed method.

Keywords: Multiplex, microfluidic paper-based electrochemical biosensors, biomarkers, biological fluids.

PDF Downloads: 1567
475 Performance Evaluation of Data Mining Techniques for Predicting Software Reliability

Authors: Pradeep Kumar, Abdul Wahid

Abstract:

Accurate software reliability prediction not only enables developers to improve the quality of software but also provides useful information to help them plan valuable resources. This paper examines the performance of three well-known data mining techniques (CART, TreeNet and Random Forest) for predicting software reliability. We evaluate and compare the performance of the proposed models with a Cascade Correlation Neural Network (CCNN) using sixteen empirical databases from the Data and Analysis Center for Software. The goal of our study is to help project managers concentrate their testing efforts so as to minimize software failures and improve the reliability of software systems. Two performance measures, Normalized Root Mean Squared Error (NRMSE) and Mean Absolute Error (MAE), show that the CART model is more accurate than the models built using Random Forest, TreeNet and CCNN on all datasets used in our study. Finally, we conclude that such methods can help in reliability prediction using real-life failure datasets.
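
For reference, the two error measures named above are straightforward to compute; the sketch below uses one common definition of NRMSE (RMSE divided by the range of the observed values) on invented data. Other normalizations (e.g. by the mean) exist, and the abstract does not say which the authors used.

```python
import numpy as np

def mae(actual, predicted):
    return float(np.mean(np.abs(actual - predicted)))

def nrmse(actual, predicted):
    rmse = float(np.sqrt(np.mean((actual - predicted) ** 2)))
    return rmse / float(actual.max() - actual.min())   # range-normalized

# Invented failure-count data purely to exercise the formulas.
actual = np.array([10.0, 14.0, 9.0, 20.0, 17.0])
predicted = np.array([11.0, 12.0, 10.0, 18.0, 19.0])

print(f"MAE   = {mae(actual, predicted):.3f}")
print(f"NRMSE = {nrmse(actual, predicted):.3f}")
```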

Keywords: Classification, Cascade Correlation Neural Network, Random Forest, Software reliability, TreeNet.

PDF Downloads: 1812
474 Effect of Different Methods of Soil Fertility on Grain Yield and Chickpea Quality

Authors: Mohammadi K., Ghalavand A., Aghaalikhani M.

Abstract:

In order to evaluate the effects of natural, biological and chemical fertilizers on grain yield and chickpea quality, field experiments were carried out in the 2007 and 2008 growing seasons. The effects of different organic, chemical and biological fertilizers on the grain yield and quality of chickpea were investigated. Experimental units were arranged in split-split plots based on randomized complete blocks with three replications. The highest yield and yield components were obtained in the G1×N5 interaction. The significant increase in N, P, K, Fe and Mg content in leaves and grains emphasizes the superiority of this treatment, because each of these nutrients has a recognized role in chlorophyll synthesis and the photosynthetic ability of the crop. The combined application of compost, farmyard manure and chemical phosphorus (N5) gave the best grain quality due to high protein, starch and total sugar contents, low crude fiber and reduced cooking time.

Keywords: soil fertility, grain yield, chickpea, natural resources.

PDF Downloads: 2571