Search results for: and document distances
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 414


384 WebGD: A CORBA-based Document Classification and Retrieval System on the Web

Authors: Fuyang Peng, Bo Deng, Chao Qi, Mou Zhan

Abstract:

This paper presents the design and implementation of WebGD, a CORBA-based document classification and retrieval system on the Internet. WebGD makes use of techniques such as the Web, CORBA, Java, NLP, fuzzy techniques, knowledge-based processing and database technology. A unified classification and retrieval model, classification and retrieval with a single reasoning engine, and flexible working-mode configuration are some of its main features. The architecture of WebGD, the unified classification and retrieval model, the components of the WebGD server and the fuzzy inference engine are discussed in detail in this paper.

Keywords: Text Mining, document classification, knowledge processing, fuzzy logic, Web, CORBA

383 A Comparison of Experimental Data with Monte Carlo Calculations for Optimisation of the Source-to-Detector Distance in Determining the Efficiency of a LaBr3:Ce (5%) Detector

Authors: H. Aldousari, T. Buchacher, N. M. Spyrou

Abstract:

Cerium-doped lanthanum bromide LaBr3:Ce(5%) crystals are considered to be one of the most advanced scintillator materials used in PET scanning, combining a high light yield, fast decay time and excellent energy resolution. Apart from the correct choice of scintillator, it is also important to optimise the detector geometry, not least the source-to-detector distance, in order to obtain reliable measurements and efficiencies. In this study a commercially available 25 mm x 25 mm BrilLanCeTM 380 LaBr3:Ce (5%) detector was characterised in terms of its efficiency at varying source-to-detector distances. Gamma-ray spectra of 22Na, 60Co, and 137Cs were acquired separately at distances of 5, 10, 15, and 20 cm. As a result of the change in solid angle subtended by the detector, the geometric efficiency decreased with increasing distance. High efficiencies at short distances can cause pulse pile-up when subsequent photons are detected before previously detected events have decayed. To reduce this systematic error, the source-to-detector distance should balance efficiency against pulse pile-up suppression; otherwise pile-up corrections become necessary at short distances. In addition to the experimental measurements, Monte Carlo simulations were carried out for the same setup, allowing a comparison of results. The advantages and disadvantages of each approach are highlighted.
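The fall-off in efficiency with distance follows directly from the solid angle subtended by the detector face. As a rough illustration (not the study's Monte Carlo model), the fraction of emitted photons geometrically intercepted by an on-axis circular face can be sketched as follows; the point-source and bare-crystal assumptions, and the use of the 25 mm crystal diameter from the abstract, are simplifications:

```python
import math

def geometric_efficiency(distance_cm, detector_radius_cm):
    """Fraction of 4*pi steradians subtended by a circular detector face
    as seen from an on-axis point source (Omega / 4*pi)."""
    d, r = distance_cm, detector_radius_cm
    return 0.5 * (1.0 - d / math.sqrt(d * d + r * r))

radius = 2.5 / 2  # 25 mm diameter crystal -> 1.25 cm radius
for d in (5, 10, 15, 20):  # source-to-detector distances used in the study
    print(f"{d:2d} cm: geometric efficiency ~ {geometric_efficiency(d, radius):.4f}")
```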

Keywords: BrilLanCeTM 380 LaBr3:Ce(5%), Coincidence summing, GATE simulation, Geometric efficiency

382 Automatic Enhanced Update Summary Generation System for News Documents

Authors: S. V. Kogilavani, C. S. Kanimozhiselvi, S. Malliga

Abstract:

Fast-changing knowledge systems on the Internet can be accessed more efficiently with the help of automatic document summarization and updating techniques. The aim of multi-document update summary generation is to construct a summary reflecting the mainstream of information in a collection of documents, under the hypothesis that the user has already read a set of previous documents. In order to draw more semantic information from the documents, deeper linguistic and semantic analysis of the source documents is used instead of relying only on document word frequencies to select important concepts. Producing a responsive summary requires meaning-oriented structural analysis. To address this issue, the proposed system presents a document summarization approach based on sentence annotation with aspects, prepositions and named entities. A semantic element extraction strategy is used to select important concepts from the documents, which are then used to generate an enhanced semantic summary.

Keywords: Aspects, named entities, prepositions, update summary.

381 A Methodology for Automatic Diversification of Document Categories

Authors: Dasom Kim, Chen Liu, Myungsu Lim, Soo-Hyeon Jeon, Byeoung Kug Jeon, Kee-Young Kwahk, Namgyu Kim

Abstract:

Recently, numerous documents containing large volumes of unstructured data and text have been created because of the rapid increase in the use of social media and the Internet. Usually, these documents are categorized for the convenience of users. However, the accuracy of manual categorization is not guaranteed, and such categorization requires a large amount of time and incurs high costs. Many studies on automatic categorization have been conducted to help mitigate the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to categorize complex documents with multiple topics, because they work on the assumption that individual documents can be assigned to single categories only. Therefore, to overcome this limitation, some studies have attempted to categorize each document into multiple categories. However, the learning process employed in these studies requires training on a multi-categorized document set, so these methods cannot be applied to the multi-categorization of most documents unless multi-categorized training sets built with traditional multi-categorization algorithms are provided. To overcome this limitation, in this study we review our methodology for extending the category of a single-categorized document to multiple categories, and then introduce a survey-based verification scenario for estimating the accuracy of our automatic categorization methodology.
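As one plausible reading of how a single-categorized training set could be extended to multiple categories (an illustrative sketch, not the authors' algorithm), category centroids can be built from single-labeled documents and every sufficiently similar category attached to a new document; the toy documents, labels and threshold below are assumptions:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical single-categorized training data.
train_docs = ["stocks fell on weak earnings", "the team won the final match",
              "the central bank raised interest rates", "the striker scored twice"]
train_labels = ["economy", "sports", "economy", "sports"]

vec = TfidfVectorizer()
X = vec.fit_transform(train_docs)

# One centroid per category, learned from the single-labeled documents.
categories = sorted(set(train_labels))
centroids = np.asarray(np.vstack(
    [X[[i for i, y in enumerate(train_labels) if y == c]].mean(axis=0) for c in categories]))

def diversify(doc, threshold=0.15):
    """Return every category whose centroid is similar enough to the document."""
    sims = cosine_similarity(vec.transform([doc]), centroids)[0]
    return [c for c, s in zip(categories, sims) if s >= threshold]

print(diversify("the club's earnings rose after winning the match"))
```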

Keywords: Big Data Analysis, Document Classification, Text Mining, Topic Analysis.

380 Restoration of Noisy Document Images with an Efficient Bi-Level Adaptive Thresholding

Authors: Abhijit Mitra

Abstract:

An effective approach for extracting document images from a noisy background is introduced. The scheme is divided into three sub-techniques: initial preprocessing operations for noise cluster tightening; a new thresholding method that maximizes the ratio of the standard deviation of the combined image to the sum of the weighted class standard deviations; and finally an image restoration phase that binarizes the image using the proposed optimum threshold level. The proposed method is found to be more efficient than existing schemes in terms of computational complexity and speed, with better noise rejection.
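A minimal sketch of the thresholding criterion as described in the abstract: for each candidate level, compare the standard deviation of the whole image with the weighted sum of the two class standard deviations and keep the level that maximizes the ratio. The exhaustive 8-bit level scan and the tie-breaking are assumptions, not the authors' exact procedure:

```python
import numpy as np

def bilevel_threshold(gray):
    """Pick the level t that maximizes std(image) / (w0*std(class0) + w1*std(class1))."""
    pixels = gray.ravel().astype(float)
    total_std = pixels.std()
    best_t, best_ratio = 0, -np.inf
    for t in range(1, 255):
        low, high = pixels[pixels < t], pixels[pixels >= t]
        if low.size == 0 or high.size == 0:
            continue
        w0, w1 = low.size / pixels.size, high.size / pixels.size
        denom = w0 * low.std() + w1 * high.std()
        if denom > 0 and total_std / denom > best_ratio:
            best_ratio, best_t = total_std / denom, t
    return best_t

# Usage: binary = (noisy_gray >= bilevel_threshold(noisy_gray)).astype(np.uint8) * 255
```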

Keywords: Document image extraction, Preprocessing, Ratio of standard deviations, Bi-level adaptive thresholding.

379 Fast Document Segmentation Using Contour and X-Y Cut Technique

Authors: Boontee Kruatrachue, Narongchai Moongfangklang, Kritawan Siriboon

Abstract:

This paper describes a fast and efficient method for page segmentation of documents containing non-rectangular blocks. The segmentation is based on an edge-following algorithm using a small window of 16 by 32 pixels. The segmentation is very fast since only the border pixels of each paragraph are used, without scanning the whole page. Still, the segmentation may contain errors if the space between blocks is smaller than the window used in edge following. Consequently, this paper reduces such errors by first identifying the missed segmentation points using direction information from edge following, and then applying an X-Y cut at the missed segmentation points to separate the connected columns. The advantage of the proposed method is the fast identification of missed segmentation points. This methodology is faster, with less overhead, than other algorithms that need to access many more pixels of the document.
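The X-Y cut stage applied at missed segmentation points works on projection profiles; the classical recursive version on a binary image can be sketched as below. The minimum gap width and the choice to cut in the middle of a whitespace run are illustrative assumptions:

```python
import numpy as np

def xy_cut(binary, min_gap=10, boxes=None, x0=0, y0=0):
    """Recursively split a binary page image (ink = 1) at whitespace gaps."""
    if boxes is None:
        boxes = []
    h_profile = binary.sum(axis=1)   # ink per row
    v_profile = binary.sum(axis=0)   # ink per column
    for axis, profile in ((0, h_profile), (1, v_profile)):
        gaps = np.where(profile == 0)[0]
        runs = np.split(gaps, np.where(np.diff(gaps) != 1)[0] + 1) if gaps.size else []
        runs = [r for r in runs if r.size >= min_gap and 0 < r[0] and r[-1] < len(profile) - 1]
        if runs:
            cut = runs[0][len(runs[0]) // 2]          # cut in the middle of the gap
            if axis == 0:
                xy_cut(binary[:cut, :], min_gap, boxes, x0, y0)
                xy_cut(binary[cut:, :], min_gap, boxes, x0, y0 + cut)
            else:
                xy_cut(binary[:, :cut], min_gap, boxes, x0, y0)
                xy_cut(binary[:, cut:], min_gap, boxes, x0 + cut, y0)
            return boxes
    boxes.append((x0, y0, binary.shape[1], binary.shape[0]))  # no cut found: emit block
    return boxes
```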

Keywords: Contour Direction Technique, Missed Segmentation Points, Page Segmentation, Recursive X-Y Cut Technique

378 Design, Construction and Performance Evaluation of a HPGe Detector Shield

Authors: M. Sharifi, M. Mirzaii, F. Bolourinovin, H. Yousefnia, M. Akbari, K. Yousefi-Mojir

Abstract:

A multilayer passive shield composed of low-activity lead (Pb), copper (Cu), tin (Sn) and iron (Fe) was designed and manufactured for a coaxial HPGe detector placed in a surface laboratory, to reduce background radiation and the radiation dose to personnel. The performance of the shield was evaluated and efficiency curves of the detector were plotted using various standard sources at different distances. Monte Carlo simulations and a set of TLD chips were used for dose estimation at two distances, 20 and 40 cm. The results show that the shield reduced the background spectrum and the personnel dose by more than 95%.

Keywords: HPGe shield, background count, personnel dose, efficiency curve.

377 A Geometrical Perspective on the Insulin Evolution

Authors: Yuhei Kunihiro, Sorin V. Sabau, Kazuhiro Shibuya

Abstract:

We study the molecular evolution of insulin from a metric geometry point of view. In mathematics, and in particular in geometry, distances and metrics between objects are of fundamental importance. Using a weaker notion than the classical distance, namely weighted quasi-metrics, one can study the geometry of the space of biological sequences (DNA, mRNA, or proteins). We analyze, from a geometrical point of view, a family of 60 homologous insulin sequences ranging over a large variety of living organisms, from human to the nematode C. elegans. We show that the distances between sequences provide important information about the evolution and function of insulin.
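To make the idea of distances between sequences concrete, the following sketch builds a pairwise distance matrix over aligned sequences using the plain (symmetric) Hamming distance; the paper's weighted quasi-metric refines this notion and is not reproduced here, and the two variant sequences are hypothetical:

```python
import numpy as np

def hamming(a, b):
    """Number of positions at which two aligned sequences differ."""
    assert len(a) == len(b), "sequences must be aligned to the same length"
    return sum(x != y for x, y in zip(a, b))

# Human insulin A-chain plus two hypothetical variants standing in for the homologues.
seqs = ["GIVEQCCTSICSLYQLENYCN",
        "GIVEQCCASVCSLYQLENYCN",
        "GIVDQCCTSICSLYQLENYCN"]

D = np.array([[hamming(a, b) for b in seqs] for a in seqs])
print(D)  # symmetric distance matrix; clustering or tree building can start from here
```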

Keywords: Metric geometry, evolution, insulin.

376 Web Search Engine Based Naming Procedure for Independent Topic

Authors: Takahiro Nishigaki, Takashi Onoda

Abstract:

In recent years, the amount of document data has been increasing with the spread of the Internet, and many methods have been studied for extracting topics from large document collections. We previously proposed Independent Topic Analysis (ITA) to extract mutually independent topics from large document data such as newspaper articles. ITA extracts independent topics from the document data by using Independent Component Analysis, and each topic is represented by a set of words. However, such a set of words can be quite different from the topic the user imagines. For example, the top five words with high independence for one topic might be Topic1 = {"scor", "game", "lead", "quarter", "rebound"}, which can be taken to represent the topic "SPORTS"; but this topic name has to be attached by the user, since ITA cannot name topics. Therefore, in this research, we propose a method that uses a web search engine to obtain topic names that are easy for people to understand from the word sets produced by Independent Topic Analysis. In particular, we search for the set of topical words and take the title of the top page in the search results as the topic name. We apply the proposed method to several datasets and verify its effectiveness.

Keywords: Independent topic analysis, topic extraction, topic naming, web search engine.

375 Key Based Text Watermarking of E-Text Documents in an Object Based Environment Using Z-Axis for Watermark Embedding

Authors: Mussarat Abdullah, Fazal Wahab

Abstract:

Hiding data in text documents involves considerable complexity due to the nature of such documents. A robust text watermarking scheme targeting an object-based environment is presented in this research. At the heart of the proposed solution is the concept of watermarking an object-based text document in which each text string is treated as a separate object with its own set of properties. Taking advantage of the z-ordering of objects, the watermark is applied along the z-axis, causing no fidelity disturbance to the text. The watermark bit sequence generated from the user key is hashed with selected properties of the given document to determine the bit sequence to embed. Bits are embedded along the z-axis, and the document shows no fidelity issues when printed, scanned or photocopied.

Keywords: Digital Watermarking, Object Based Environment, Watermark, z-ordering.

374 Influence of the Seat Arrangement in Public Reading Spaces on Individual Subjective Perceptions

Authors: Jo-Han Chang, Chung-Jung Wu

Abstract:

This study involves a design proposal. The objective is to create a seat arrangement model for public reading spaces that enables free arrangement without disturbing users. Through a subjective perception scale, this study explored whether the distance between seats and the direction of seats influence individual subjective perceptions in a public reading space. The study analyzes user subjective perceptions when reading in settings with 3 seat directions and 5 distances between seats; the results may be applied to public chair design. The study investigated (a) whether different seat directions and distances between seats influence individual subjective perceptions, and (b) the acceptable personal space between 2 strangers in a public reading space. The results are as follows: (a) the direction of seats and the distance between seats influenced individual subjective perceptions. (b) Subjective evaluation scores were higher for back-to-back seat directions at Distances A (10 cm) and B (62 cm) compared with face-to-face and side-by-side directions; however, when the seat distance exceeded 114 cm (Distance C), no difference existed among seat directions. (c) For reading in public spaces, when the distance between seats is only 10 cm, we recommend arranging the seats back-to-back to increase user comfort, and face-to-face and side-by-side arrangements should be avoided. When the seat arrangement is limited to a face-to-face design, the distance between seats should be increased to at least 62 cm, and to at least 114 cm for side-by-side seats, to improve user comfort.

Keywords: Individual Subjective Perceptions, Personal Space, Seat Arrangement.

373 One-Class Support Vector Machine for Sentiment Analysis of Movie Review Documents

Authors: Chothmal, Basant Agarwal

Abstract:

Sentiment analysis classifies a given review document as having positive or negative polarity. Sentiment analysis research has increased tremendously in recent times due to its large number of applications in industry and academia. Sentiment analysis models can be used to determine the opinion of a user towards any entity or product; e-commerce companies, for example, can use such models to improve their products on the basis of users' opinions. In this paper, we propose a new One-class Support Vector Machine (one-class SVM) based sentiment analysis model for movie review documents. In the proposed approach, we initially extract features from one class of documents, and then use the one-class SVM model to test whether a given new document lies within the model or is an outlier. Experimental results show the effectiveness of the proposed sentiment analysis model.
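A minimal sketch of the described training/testing loop with scikit-learn: fit a one-class SVM on TF-IDF features of positive reviews only, then treat inliers of the learned model as positive and outliers as negative. The kernel, the nu value and the toy reviews are assumptions, not the authors' settings:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import OneClassSVM

positive_reviews = ["a wonderful, moving film", "great acting and a superb script",
                    "loved every minute of it"]
test_reviews = ["an excellent and touching movie", "dull plot and terrible pacing"]

vec = TfidfVectorizer()
X_pos = vec.fit_transform(positive_reviews)

# Train on the positive class only; the model learns its boundary.
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_pos)

for review, label in zip(test_reviews, model.predict(vec.transform(test_reviews))):
    print(review, "->", "positive (inlier)" if label == 1 else "negative (outlier)")
```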

Keywords: Feature selection methods, Machine learning, NB, One-class SVM, Sentiment Analysis, Support Vector Machine.

372 Stock Market Prediction by Regression Model with Social Moods

Authors: Masahiro Ohmura, Koh Kakusho, Takeshi Okadome

Abstract:

This paper presents a regression model with autocorrelated errors in which the inputs are social moods obtained by analyzing the adjectives in Twitter posts using a document topic model, where document topics are extracted using LDA. The regression model predicts Dow Jones Industrial Average (DJIA) more precisely than autoregressive moving-average models.
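A rough sketch of the pipeline shape (topic proportions from posts as mood features, then a linear model against the next day's index), substituting an ordinary least-squares regression for the authors' autocorrelated-error model; the post texts, topic count and index values are placeholders:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LinearRegression

daily_posts = ["happy calm optimistic market", "anxious worried nervous crash",
               "excited hopeful bullish rally", "fearful sad uncertain selloff"]
djia_close = [13500.2, 13310.7, 13620.4, 13280.9]   # placeholder index values

# LDA topic proportions over adjective-like tokens serve as the mood features.
counts = CountVectorizer().fit_transform(daily_posts)
moods = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

reg = LinearRegression().fit(moods[:-1], djia_close[1:])   # today's mood -> tomorrow's close
print(reg.predict(moods[-1:]))
```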

Keywords: Regression model, social mood, stock market prediction, Twitter.

371 New Coordinate System for Countries with Big Territories

Authors: Mohammed Sabri Ali Akresh

Abstract:

Computer technologies, the Global Positioning System (GPS), Geographic Information Systems (GIS) and total stations (TS) have developed rapidly in recent years. This paper presents a proposal for a new coordinate system based on harmonic equations, "united projections", which combines five projections (Mercator, Lambert, Russell, Lagrange, and a compound projection) in a single zoned coordinate system with zones 14 degrees wide, one degree of overlap between zones, and two standard parallels per zone from 10 S to 45 S. The paper also presents two cases: the first compares distances between the new coordinate system and UTM; the second creates a local coordinate system for the city of Sydney to measure distances directly from rectangular coordinates using the Mercator, Lambert and UTM projections.
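For the second case, distances are measured directly from projected rectangular coordinates; the spherical Mercator forward equations give the flavor of that computation. The sphere radius, central meridian and sample Sydney coordinates below are illustrative, and a real system would use the ellipsoidal formulas plus a scale correction:

```python
import math

R = 6371000.0  # mean Earth radius in metres (spherical approximation)

def mercator(lat_deg, lon_deg, lon0_deg=151.0):
    """Spherical Mercator forward projection relative to a central meridian."""
    lam = math.radians(lon_deg - lon0_deg)
    phi = math.radians(lat_deg)
    return R * lam, R * math.log(math.tan(math.pi / 4 + phi / 2))

# Two illustrative points near Sydney; planar distance from projected coordinates.
x1, y1 = mercator(-33.87, 151.21)
x2, y2 = mercator(-33.92, 151.18)
print(math.hypot(x2 - x1, y2 - y1), "metres (before scale-distortion correction)")
```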

Keywords: Harmonic equations, coordinate system, projections, algorithms and parallels.

370 An Agent Oriented Architecture to Supply Dynamic Document Generation in ERP Systems

Authors: Hassan Haghighi, Seyedeh Zahra Hosseini, Seyedeh Elahe Jalambadani

Abstract:

One of the most important capabilities expected from an ERP system is to manage user/administrator manual documents dynamically. Since an ERP package is frequently changed during its implementation at customer sites, it is often necessary to add new documents and/or apply changes to existing documents in order to cover new or changed capabilities. Worse, since these changes occur continuously, the corresponding documents should be updated dynamically; otherwise, implementing the ERP package in the organization incurs serious risks. In this paper, we propose a new architecture, based on the agent-oriented vision, that supplies the dynamic document generation expected from ERP systems using several independent but cooperative agents. Besides dynamic document generation, which is the main issue of this paper, the presented architecture also addresses some aspects of intelligence and learning capabilities in ERP.

Keywords: enterprise resource planning, dynamic document generation, software architecture, agent oriented architecture, learning, intelligence

369 Graph-Based Text Similarity Measurement by Exploiting Wikipedia as Background Knowledge

Authors: Lu Zhang, Chunping Li, Jun Liu, Hui Wang

Abstract:

Text similarity measurement is a fundamental issue in many textual applications such as document clustering, classification, summarization and question answering. However, prevailing approaches based on Vector Space Model (VSM) more or less suffer from the limitation of Bag of Words (BOW), which ignores the semantic relationship among words. Enriching document representation with background knowledge from Wikipedia is proven to be an effective way to solve this problem, but most existing methods still cannot avoid similar flaws of BOW in a new vector space. In this paper, we propose a novel text similarity measurement which goes beyond VSM and can find semantic affinity between documents. Specifically, it is a unified graph model that exploits Wikipedia as background knowledge and synthesizes both document representation and similarity computation. The experimental results on two different datasets show that our approach significantly improves VSM-based methods in both text clustering and classification.

Keywords: Text classification, Text clustering, Text similarity, Wikipedia

368 Experiments on Element and Document Statistics for XML Retrieval

Authors: Mohamed Ben Aouicha, Mohamed Tmar, Mohand Boughanem, Mohamed Abid

Abstract:

This paper presents an information retrieval model for XML documents based on tree matching. Queries and documents are represented by extended trees. An extended tree is built from the original tree by adding weighted virtual links between each node and its indirect descendants, so that each descendant can be reached directly; therefore, only one level separates a node from its indirect descendants. This allows the user query and the document to be compared flexibly while respecting the structural constraints of the query. The content of each node is crucial for deciding whether a document element is relevant, so content must be taken into account in the retrieval process. We separate the structure-based and content-based retrieval processes. The content-based score of each node is commonly based on the well-known Tf × Idf criterion; in this paper, we compare this criterion with another that we call Tf × Ief. The comparison is based on experiments over a dataset provided by INEX, showing the effectiveness of our approach on the one hand and that of both weighting functions on the other.
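The two content-based weighting schemes being compared have the same shape and differ only in the population over which the inverse frequency is computed; a small sketch with the standard logarithmic form (the paper's exact normalisation is not specified here, so this is only the textbook variant):

```python
import math

def tf_idf(tf, df, n_documents):
    """Classical term weighting: term frequency times inverse document frequency."""
    return tf * math.log(n_documents / df)

def tf_ief(tf, ef, n_elements):
    """Same shape, but the statistics are taken over XML elements instead of documents."""
    return tf * math.log(n_elements / ef)

# A term occurring 3 times, found in 12 of 1,000 documents and 40 of 25,000 elements:
print(tf_idf(3, 12, 1000), tf_ief(3, 40, 25000))
```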

Keywords: XML retrieval, INEX, Tf × Idf, Tf × Ief

367 Estimation of Skew Angle in Binary Document Images Using Hough Transform

Authors: Nandini N., Srikanta Murthy K., G. Hemantha Kumar

Abstract:

This paper includes two novel techniques for skew estimation of binary document images. These algorithms are based on connected component analysis and Hough transform. Both these methods focus on reducing the amount of input data provided to Hough transform. In the first method, referred as word centroid approach, the centroids of selected words are used for skew detection. In the second method, referred as dilate & thin approach, the selected characters are blocked and dilated to get word blocks and later thinning is applied. The final image fed to Hough transform has the thinned coordinates of word blocks in the image. The methods have been successful in reducing the computational complexity of Hough transform based skew estimation algorithms. Promising experimental results are also provided to prove the effectiveness of the proposed methods.
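The word-centroid variant can be pictured as Hough voting restricted to near-horizontal lines: for each candidate skew angle, project the centroids onto the normal direction and count how many fall into the same distance bin. The angle range, step and bin width below are assumptions, and the connected-component stage is replaced by pre-computed centroids:

```python
import numpy as np

def skew_from_centroids(centroids, angle_range=15.0, angle_step=0.1, rho_bin=2.0):
    """Estimate page skew (degrees) by Hough-style voting over near-horizontal
    line angles for a set of (x, y) word centroids."""
    pts = np.asarray(centroids, dtype=float)
    best_angle, best_votes = 0.0, -1
    for alpha in np.arange(-angle_range, angle_range + angle_step, angle_step):
        a = np.deg2rad(alpha)
        # Signed distance of each point from a line of slope tan(alpha) through the origin.
        rho = pts[:, 1] * np.cos(a) - pts[:, 0] * np.sin(a)
        votes = np.bincount(((rho - rho.min()) / rho_bin).astype(int)).max()
        if votes > best_votes:
            best_votes, best_angle = votes, alpha
    return best_angle

# Synthetic check: word centroids on text lines skewed by roughly 3 degrees.
xs = np.tile(np.arange(0, 500, 25), 10).astype(float)
ys = np.repeat(np.arange(0, 500, 50), 20) + xs * np.tan(np.deg2rad(3))
print(skew_from_centroids(np.column_stack([xs, ys])))
```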

Keywords: Dilation, Document processing, Hough transform, Optical Character Recognition, Skew estimation, Thinning.

366 Schema and Data Migration of a Relational Database RDB to the Extensible Markup Language XML

Authors: Alae El Alami, Mohamed Bahaj

Abstract:

This article discusses the migration of an RDB (schema and data) to XML documents based on metadata and semantic enrichment, which takes the flattened form of the RDB and enriches it with the object concept. The integration and exploitation of the object concept in XML uses a syntax that allows the conformity of the XML document to be verified during its creation. The information extracted from the RDB is therefore analyzed and filtered to fit the structure of the XML files and the associated object model; the XML documents are built dynamically from SQL queries. A prototype was implemented to perform automatic migration, demonstrating the effectiveness of this approach.
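A much-simplified sketch of the data-migration direction (one relational table serialized as one XML document) using only the Python standard library; the metadata analysis, semantic enrichment and object-model mapping described in the abstract are outside this snippet, and the employee table is hypothetical:

```python
import sqlite3
import xml.etree.ElementTree as ET

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [(1, "Alice", "R&D"), (2, "Bob", "Sales")])

cursor = conn.execute("SELECT id, name, dept FROM employee")
columns = [d[0] for d in cursor.description]

root = ET.Element("employees")                    # document element for the table
for row in cursor:
    record = ET.SubElement(root, "employee")      # one element per tuple
    for col, value in zip(columns, row):
        ET.SubElement(record, col).text = str(value)

print(ET.tostring(root, encoding="unicode"))
```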

Keywords: RDB, XML, DTD, semantic enrichment.

365 Bottom Up Text Mining through Hierarchical Document Representation

Authors: Y. Djouadi, F. Souam

Abstract:

Most existing text mining approaches were proposed with the transaction database model in mind. Thus, the mined dataset is structured using just one concept, the "transaction", while the whole dataset is modeled using the "set" abstract type. In such cases, the structure of the whole dataset and the relationships among the transactions themselves are not modeled and, consequently, not considered in the mining process. We believe that taking the structural properties of hierarchically structured information (e.g. textual documents) into account in the mining process can lead to better results. For this purpose, a hierarchical association rule mining approach for textual documents is proposed in this paper, and the classical set-oriented mining approach is reconsidered in favor of a Directed Acyclic Graph (DAG) oriented approach. Natural language processing techniques are used to obtain the DAG structure. Based on this graph model, a hierarchical bottom-up algorithm is proposed, the main idea being that each node is mined together with its parent node.

Keywords: Graph based association rules mining, Hierarchical document structure, Text mining.

364 OWA Operators in Generalized Distances

Authors: José M. Merigó, Anna M. Gil-Lafuente

Abstract:

Different types of aggregation operators, such as the ordered weighted quasi-arithmetic mean (Quasi-OWA) operator and the normalized Hamming distance, are studied. We introduce the use of the OWA operator in generalized distances such as the quasi-arithmetic distance. We call this new distance aggregation the ordered weighted quasi-arithmetic distance (Quasi-OWAD) operator. We develop a general overview of this type of generalization and study some of its main properties, such as the distinction between descending and ascending orders. We also consider different families of Quasi-OWAD operators, such as the Minkowski ordered weighted averaging distance (MOWAD) operator, the ordered weighted averaging distance (OWAD) operator, the Euclidean ordered weighted averaging distance (EOWAD) operator, and the normalized quasi-arithmetic distance.
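A small numerical sketch of the OWAD family: take the coordinate-wise absolute differences, reorder them in descending order, and aggregate with the OWA weights; changing the exponent moves between the Hamming-type OWAD, the Euclidean EOWAD and the general Minkowski MOWAD. The example weights and vectors are arbitrary:

```python
def owad(x, y, weights, lam=1.0):
    """Ordered weighted averaging distance.
    lam = 1 gives the OWAD (Hamming type), lam = 2 the Euclidean EOWAD,
    and a general lam the Minkowski MOWAD."""
    assert abs(sum(weights) - 1.0) < 1e-9, "OWA weights must sum to 1"
    diffs = sorted((abs(a - b) for a, b in zip(x, y)), reverse=True)  # descending reorder
    return sum(w * d ** lam for w, d in zip(weights, diffs)) ** (1.0 / lam)

x, y = [0.7, 0.2, 0.9, 0.4], [0.1, 0.3, 0.6, 0.8]
w = [0.4, 0.3, 0.2, 0.1]   # more weight on the largest deviations
print(owad(x, y, w, lam=1), owad(x, y, w, lam=2))
```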

Keywords: Aggregation operators, Distance measures, Quasi-OWA operator.

363 Implementation of RSA Blind Signature on CryptO-0N2 Protocol

Authors: Esti Rahmawati Agustina, Is Esti Firmanesa

Abstract:

Blind signatures were introduced by Chaum. In this scheme, a signer can "sign" a document without knowing its content, which is particularly important in electronic voting. CryptO-0N2 is an electronic voting protocol developed from CryptO-0N. During its development, this protocol was not furnished with the blind signature requirement, so voters' choices could be determined by the counting center. This paper presents an implementation of blind signatures using the RSA algorithm.
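The blinding round itself is short; a textbook sketch of Chaum's RSA blind signature with a toy key is given below. Real deployments need proper key sizes and a padding scheme, both omitted here, and the ballot encoding is hypothetical:

```python
import math
import secrets

# Toy RSA key (insecure sizes, for illustration only).
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent

m = 1234 % n                           # "ballot" encoded as an integer

# Voter blinds the message with a random r coprime to n.
while True:
    r = secrets.randbelow(n - 2) + 2
    if math.gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# Signer signs the blinded value without learning m.
blinded_sig = pow(blinded, d, n)

# Voter unblinds; the result is a valid RSA signature on m.
signature = (blinded_sig * pow(r, -1, n)) % n
assert pow(signature, e, n) == m
print("signature", signature, "verifies")
```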

Keywords: Blind signature, electronic voting protocol, RSA algorithm.

362 Ontology-based Concept Weighting for Text Documents

Authors: Hmway Hmway Tar, Thi Thi Soe Nyaunt

Abstract:

Document clustering has become an essential technology with the popularity of the Internet, which means that fast and high-quality document clustering techniques play a central role. Text clustering, or simply clustering, is about discovering semantically related groups in an unstructured collection of documents. Clustering has long been popular because it provides unique ways of digesting and generalizing large amounts of information. One of the issues in clustering is extracting proper features (concepts) of a problem domain. Existing clustering technology mainly focuses on term weight calculation; to achieve more accurate document clustering, more informative features, including concept weights, are important. Feature selection is important for the clustering process because irrelevant or redundant features may mislead the clustering results. To address this issue, the proposed system introduces concept weights for a text clustering system based on the k-means algorithm, computed in accordance with the principles of an ontology, so that the important words of a cluster can be identified by their weight values. To a certain extent, this resolves the semantic problem in specific areas.
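One way to picture concept weighting ahead of k-means is to scale the TF-IDF columns of ontology-relevant terms before clustering. The weight table below is a toy stand-in for ontology-derived weights, not the authors' actual scheme:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = ["the patient received insulin for diabetes",
        "glucose and insulin levels were measured",
        "the striker scored a late goal",
        "the team won the championship match"]

# Toy ontology-derived concept weights (assumed, for illustration).
concept_weights = {"insulin": 3.0, "diabetes": 3.0, "glucose": 2.5, "goal": 2.0, "match": 2.0}

vec = TfidfVectorizer()
X = vec.fit_transform(docs).toarray()
for term, weight in concept_weights.items():
    if term in vec.vocabulary_:
        X[:, vec.vocabulary_[term]] *= weight   # boost ontology concepts before clustering

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```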

Keywords: Clustering, Concept Weight, Document clustering, Feature Selection, Ontology

361 Automatic Building an Extensive Arabic FA Terms Dictionary

Authors: El-Sayed Atlam, Masao Fuketa, Kazuhiro Morita, Jun-ichi Aoe

Abstract:

Field Association (FA) terms are a limited set of discriminating terms that give us the knowledge to identify document fields, and they are effective in document classification, similar file retrieval and passage retrieval. The problem lies in the lack of an effective method for automatically extracting relevant Arabic FA terms to build a comprehensive dictionary. Moreover, all previous studies are based on FA terms in English and Japanese, and extending FA terms to other languages such as Arabic would definitely strengthen further research. This paper presents a new method to extract Arabic FA terms from domain-specific corpora using part-of-speech (POS) pattern rules and corpora comparison. An experimental evaluation is carried out for 14 different fields using 251 MB of domain-specific corpora obtained from Arabic Wikipedia dumps and Alhyah news, selecting an average of 2,825 FA terms (single and compound) per field. From the experimental results, recall and precision are 84% and 79%, respectively. The method therefore selects a large number of relevant Arabic FA terms at high precision and recall.

Keywords: Arabic Field Association Terms, information extraction, document classification, information retrieval.

360 Physical Modeling of Oil Well Fire Extinguishing Using a Turbojet on a Barge

Authors: M. Abbaspour, D. Mansouri, N. Mansouri

Abstract:

There are reports of gas and oil well fires caused by various accidents. Many different methods are used for firefighting in the gas and oil industry. Traditional fire extinguishing techniques face many problems, are usually time-consuming and require a great deal of equipment; besides, they cause damage to facilities and create health and environmental problems. This article proposes an innovative fire extinguishing approach for the oil and gas industry, especially applicable to burning oil wells located offshore. Fire extinguishment employing a turbojet is a novel approach that can help extinguish the fire in a short period of time. Divergent and convergent turbojets were modeled at laboratory scale along with a high-pressure flame. Different experiments were conducted to determine the relationship between the output discharges of the turbojet and the oil wells. The results were corrected, and the relationships between the dimensionless parameters of flame and fire extinguishment distances, as well as between the output discharge of the turbojet and the oil wells at specified distances, are demonstrated by specific curves.

Keywords: Burning well, fire extinguishment, gas/oil industry, simulation.

359 Graphic Watermarking, Security Feature in Cadastral Content Management

Authors: Manole Velicanu, Emanuil Rednic

Abstract:

The paper shows the need to increase the security level of document management in the cadastral field by using specific graphic watermarks. Using graphic watermarking increases security in cadastral content management; furthermore, the originality of any altered document can be validated afterwards by checking the graphic watermark. If, for any reason, a document is changed for counterfeiting purposes, it is invalidated and identified as an illegal copy by the graphic check of the watermark, performed at pixel level.

Keywords: cadastral system, database security, security standards, content management, identity management, watermarking.

358 Multi-agent Data Fusion Architecture for Intelligent Web Information Retrieval

Authors: Amin Milani Fard, Mohsen Kahani, Reza Ghaemi, Hamid Tabatabaee

Abstract:

In this paper we propose a multi-agent architecture for web information retrieval using a fuzzy logic based result fusion mechanism. The model is designed in the JADE framework and takes advantage of the JXTA agent communication method to allow agent communication through firewalls and network address translators. This approach enables developers to build and deploy P2P applications through a unified medium to manage agent-based document retrieval from multiple sources.

Keywords: Information retrieval systems, list fusion methods, document score, multi-agent systems.

357 Spatial Query Localization Method in Limited Reference Point Environment

Authors: Victor Krebss

Abstract:

The task of object localization is one of the major challenges in creating intelligent transportation systems. Unfortunately, in densely built-up urban areas, localization based on GPS alone produces a large error or simply becomes impossible. New opportunities for localization arise from the rapidly emerging concept of wireless ad-hoc networks. Such a network allows the potential distances between objects to be estimated from received signal levels, and a graph of distances to be constructed in which the nodes are the objects to be localized and the edges are estimates of the distances between pairs of nodes. Given the known coordinates of individual nodes (anchors), it is possible to determine the location of all (or part) of the remaining nodes of the graph. Moreover, a road map available in digital format can provide localization routines with valuable additional information to narrow the node location search. However, despite an abundance of well-known localization algorithms and significant research effort, many issues are currently addressed only partially. In this paper, we propose a localization approach based on mapping the graph of distances onto digital road map data; in effect, the problem is reduced to embedding the distance graph into the graph representing the area's geolocation data. This makes it possible to localize objects, in some cases even when only one reference point is available. We propose a simple embedding algorithm and a sample implementation as spatial queries over sensor network data stored in a spatial database, allowing effective use of spatial indexing, optimized spatial search routines and geometry functions.

Keywords: Intelligent Transportation System, Sensor Network, Localization, Spatial Query, GIS, Graph Embedding.

356 Mining News Sites to Create Special Domain News Collections

Authors: David B. Bracewell, Fuji Ren, Shingo Kuroiwa

Abstract:

We present a method to create special domain collections from news sites. The method only requires a single sample article as a seed. No prior corpus statistics are needed and the method is applicable to multiple languages. We examine various similarity measures and the creation of document collections for English and Japanese. The main contributions are as follows. First, the algorithm can build special domain collections from as little as one sample document. Second, unlike other algorithms it does not require a second "general" corpus to compute statistics. Third, in our testing the algorithm outperformed others in creating collections made up of highly relevant articles.
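The seed-driven collection step can be viewed as a similarity filter over crawled articles. The sketch below uses plain cosine similarity over TF-IDF, whereas the paper examines several similarity measures; the threshold and candidate articles are placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

seed = "the central bank cut interest rates to support the slowing economy"
candidates = ["interest rates were cut again as inflation slowed",
              "the home team won after a dramatic penalty shootout",
              "economists expect further monetary easing this year"]

vec = TfidfVectorizer().fit([seed] + candidates)
sims = cosine_similarity(vec.transform([seed]), vec.transform(candidates))[0]

collection = [doc for doc, s in zip(candidates, sims) if s >= 0.15]   # threshold is illustrative
print(collection)
```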

Keywords: Information Retrieval, News, Special Domain Collections

355 Association Rules Mining and NOSQL Oriented Document in Big Data

Authors: Sarra Senhadji, Imene Benzeguimi, Zohra Yagoub

Abstract:

Big Data refers to recent technologies for manipulating voluminous and unstructured data sets drawn from multiple sources, and NOSQL has appeared to handle the problem of unstructured data. Association rule mining is one of the popular data mining techniques for extracting hidden relationships from transactional databases, and the algorithm for finding association dependencies is well solved with MapReduce. The goal of our work is to reduce the time needed to generate frequent itemsets by using MapReduce and a document-oriented NOSQL database. A comparative study is given to evaluate the performance of our algorithm against the classical Apriori algorithm.
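The map/reduce split for candidate counting can be shown in a few lines of plain Python (no Hadoop or MongoDB here, just the shape of the computation): the map phase emits (itemset, 1) pairs per transaction and the reduce phase sums them before the minimum-support filter. Candidate generation is simplified relative to full Apriori:

```python
from itertools import combinations
from collections import Counter

transactions = [{"bread", "milk"}, {"bread", "butter", "milk"},
                {"butter", "milk"}, {"bread", "butter"}]
min_support = 2

def map_phase(transaction, k):
    """Emit (itemset, 1) for every k-item candidate in one transaction."""
    return [(frozenset(c), 1) for c in combinations(sorted(transaction), k)]

def reduce_phase(pairs):
    """Sum the counts emitted for each candidate itemset."""
    counts = Counter()
    for itemset, one in pairs:
        counts[itemset] += one
    return counts

for k in (1, 2):
    emitted = [pair for t in transactions for pair in map_phase(t, k)]
    frequent = {s: c for s, c in reduce_phase(emitted).items() if c >= min_support}
    print(f"frequent {k}-itemsets:", {tuple(s): c for s, c in frequent.items()})
```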

Keywords: Apriori, Association rules mining, Big Data, data mining, Hadoop, Map Reduce, MongoDB, NoSQL.
