Search results for: unsupervised extractive text summarization

1451 Programmed Speech to Text Summarization Using Graph-Based Algorithm

Authors: Hamsini Pulugurtha, P. V. S. L. Jagadamba

Abstract:

Programmed speech-to-text and text summarization using graph-based algorithms can be used in meetings to obtain a short description of the meeting for future reference. The system performs signature verification using a Siamese neural network to confirm the identity of the user, and converts the audio file provided by the user, which is in English, into English text using the speech recognition package available in Python. Sometimes only a summary of the meeting is required; text summarization is the solution for this. The transcript is therefore summarized using natural language processing approaches such as unsupervised extractive text summarization algorithms.
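
The pipeline described above (speech recognition followed by unsupervised extractive summarization) can be approximated with standard Python tooling. The following is a minimal sketch, assuming a WAV recording named meeting.wav and the third-party speech_recognition package; the simple frequency-based sentence scorer stands in for the graph-based summarizer the authors describe.

```python
import re
from collections import Counter

import speech_recognition as sr  # pip install SpeechRecognition

# Step 1: convert the English audio recording into English text.
recognizer = sr.Recognizer()
with sr.AudioFile("meeting.wav") as source:      # hypothetical file name
    audio = recognizer.record(source)
transcript = recognizer.recognize_google(audio)  # online Google Web Speech API

# Step 2: unsupervised extractive summarization.
# Sentences are scored by the frequencies of the words they contain,
# a simple stand-in for graph-based sentence ranking.
sentences = re.split(r"(?<=[.!?])\s+", transcript)
freq = Counter(re.findall(r"[a-z']+", transcript.lower()))

def score(sentence):
    tokens = re.findall(r"[a-z']+", sentence.lower())
    return sum(freq[t] for t in tokens) / (len(tokens) + 1e-9)

top_k = 3  # number of sentences kept in the meeting summary
summary = sorted(sorted(sentences, key=score, reverse=True)[:top_k],
                 key=sentences.index)            # restore original order
print(" ".join(summary))
```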

Keywords: Siamese neural network, English speech, English text, natural language processing, unsupervised extractive text summarization

Procedia PDF Downloads 183
1450 Graph-Based Semantical Extractive Text Analysis

Authors: Mina Samizadeh

Abstract:

In the past few decades, there has been an explosion in the amount of available data produced from various sources on different topics. The availability of this enormous data makes it necessary to adopt effective computational tools to explore it, which has led to intense and growing interest in the research community in developing computational methods for processing this text data. One line of study focuses on condensing the text so that we are able to get a higher level of understanding in a shorter time. The two important tasks for doing this are keyword extraction and text summarization. In keyword extraction, we are interested in finding the key important words from a text; this makes us familiar with the general topic of the text. In text summarization, we are interested in producing a short text that includes the important information about the document. The TextRank algorithm, an unsupervised learning method that is an extension of PageRank (the base algorithm of the Google search engine for ranking pages), has shown its efficacy in large-scale text mining, especially for text summarization and keyword extraction. This algorithm can automatically extract the important parts of a text (keywords or sentences) and return them as a result. However, it neglects the semantic similarity between the different parts. In this work, we improve the results of the TextRank algorithm by incorporating the semantic similarity between parts of the text. Aside from keyword extraction and text summarization, we develop a topic clustering algorithm based on our framework, which can be used individually or as part of generating the summary to overcome coverage problems.
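
As a rough illustration of the idea, the sketch below builds a TextRank-style sentence graph whose edge weights are cosine similarities between TF-IDF sentence vectors (a simple proxy for the semantic similarity the authors incorporate), then ranks sentences with PageRank. It assumes scikit-learn and networkx are installed; the toy document and naive sentence splitting are placeholders.

```python
import re
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

document = (
    "Graph-based ranking has become popular for text mining. "
    "TextRank builds a graph over sentences or words. "
    "PageRank scores then identify the most central units. "
    "Semantic similarity between sentences can refine the edge weights."
)

# Split into sentences and embed them as TF-IDF vectors.
sentences = re.split(r"(?<=[.!?])\s+", document.strip())
tfidf = TfidfVectorizer().fit_transform(sentences)

# Edge weights: pairwise cosine similarity (proxy for semantic similarity).
sim = cosine_similarity(tfidf)

# Build the sentence graph and run PageRank (the core of TextRank).
graph = nx.from_numpy_array(sim)
scores = nx.pagerank(graph, weight="weight")

# The top-ranked sentences form the extractive summary.
ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
for i in ranked[:2]:
    print(sentences[i])
```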

Keywords: keyword extraction, n-gram extraction, text summarization, topic clustering, semantic analysis

Procedia PDF Downloads 45
1449 Long Short-Term Memory (LSTM) Matters: A Sequential Brief Text that Assistive Approach of Text Summarization

Authors: Sharun Akter Khushbu

Abstract:

‘SOS’ addresses text summarization as a feasibility study and allows more comprehensive methods on texts from language resources, which have been exploited because of the importance of documental text processing. The key idea is a machine interpreter called SOS, built on an LSTM-CNN model (a long short-term memory recurrent neural network combined with a convolutional neural network). Summarization of Bengali text is formulated from the information in the latent structure rather than from a brief input string counted as text. Text summarization is the proper utilization of an optimal solution, reducing time and easing interpretation, whenever the human-generated summary and the machine-targeted summary remain similar without degrading the semantic quality of the summarization. Following this problem statement, the key idea advances an algorithm based on an encoder-decoder method describing a sequential structure that is rigorously connected with the actual, predicted, and meaningful output. The seq2seq approach aims for high semantic summarization similarity on the large data samples enlisted by the method. Thus, the SOS method assigns a discriminator over Bengali text documents, where the encoded input sequences, such as the summary, and the decoded targeted summary of the gist form an error-free machine.
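
The encoder-decoder structure the abstract refers to can be sketched in Keras roughly as follows. This is only an architectural outline under assumed vocabulary sizes and dimensions, not the authors' SOS/LSTM-CNN configuration; a convolutional feature extractor, attention, and the Bengali preprocessing pipeline would still need to be added.

```python
from tensorflow.keras.layers import Input, LSTM, Embedding, Dense
from tensorflow.keras.models import Model

# Assumed sizes for illustration only.
vocab_size, embed_dim, latent_dim = 20000, 128, 256

# Encoder: reads the source document tokens and keeps its final states.
enc_inputs = Input(shape=(None,), name="document_tokens")
enc_emb = Embedding(vocab_size, embed_dim, mask_zero=True)(enc_inputs)
_, state_h, state_c = LSTM(latent_dim, return_state=True)(enc_emb)

# Decoder: generates the summary tokens, conditioned on the encoder states.
dec_inputs = Input(shape=(None,), name="summary_tokens")
dec_emb = Embedding(vocab_size, embed_dim, mask_zero=True)(dec_inputs)
dec_outputs, _, _ = LSTM(latent_dim, return_sequences=True,
                         return_state=True)(dec_emb,
                                            initial_state=[state_h, state_c])
dec_probs = Dense(vocab_size, activation="softmax")(dec_outputs)

model = Model([enc_inputs, dec_inputs], dec_probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```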

Keywords: LSTM-CNN, NN, SOS, text summarization

Procedia PDF Downloads 42
1448 Optimized Text Summarization Model on Mobile Screens for Sight-Interpreters: An Empirical Study

Authors: Jianhua Wang

Abstract:

To obtain key information quickly from long texts on the small screens of mobile devices, sight-interpreters need an optimized summarization model for fast information retrieval. Four summarization models based on previous studies were examined: title+key words (TKW), title+topic sentences (TTS), key words+topic sentences (KWTS) and title+key words+topic sentences (TKWTS). Psychological experiments were conducted on the four models for three different genres of interpreting texts to establish the optimized summarization model for sight-interpreters. This empirical study shows that the optimized summarization model for sight-interpreters to quickly grasp the key information of the texts they interpret is title+key words (TKW) for cultural texts, title+key words+topic sentences (TKWTS) for economic texts and topic sentences+key words (TSKW) for political texts.

Keywords: different genres, mobile screens, optimized summarization models, sight-interpreters

Procedia PDF Downloads 285
1447 Video Summarization: Techniques and Applications

Authors: Zaynab El Khattabi, Youness Tabii, Abdelhamid Benkaddour

Abstract:

Nowadays, huge multimedia repositories make the browsing, retrieval and delivery of video content slow and even difficult tasks. Video summarization has been proposed to enable faster browsing of large video collections and more efficient content indexing and access. In this paper, we focus on approaches to video summarization. Video summaries can be generated in many different forms; however, the two fundamental kinds of summary are static and dynamic. We present the different techniques for each mode found in the literature and describe some features used for generating video summaries. We conclude with perspectives for further research.

Keywords: video summarization, static summarization, video skimming, semantic features

Procedia PDF Downloads 372
1446 Surveillance Video Summarization Based on Histogram Differencing and Sum Conditional Variance

Authors: Nada Jasim Habeeb, Rana Saad Mohammed, Muntaha Khudair Abbass

Abstract:

For more efficient and faster video summarization, this paper presents a surveillance video summarization method that improves on existing video summarization techniques. The method depends on temporal differencing to extract the most important data from a large video stream. It uses histogram differencing and the sum of conditional variance, which is robust against illumination variations, in order to extract moving objects. The experimental results showed that the presented method gives better output compared with summarization techniques based on temporal differencing alone.
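
A minimal OpenCV sketch of the histogram-differencing idea is shown below: consecutive frames whose grayscale histograms differ by more than a threshold are kept as summary frames. The file name, the threshold, and the omission of the sum-of-conditional-variance step are simplifying assumptions.

```python
import cv2

VIDEO_PATH = "surveillance.avi"   # hypothetical input file
DIFF_THRESHOLD = 0.3              # assumed threshold on histogram distance

cap = cv2.VideoCapture(VIDEO_PATH)
summary_frames = []
prev_hist = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
    hist = cv2.normalize(hist, hist).flatten()

    if prev_hist is not None:
        # Bhattacharyya distance between consecutive frame histograms;
        # a large value signals significant change (e.g., moving objects).
        dist = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
        if dist > DIFF_THRESHOLD:
            summary_frames.append(frame)
    prev_hist = hist

cap.release()
print(f"Selected {len(summary_frames)} frames for the video summary.")
```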

Keywords: temporal differencing, video summarization, histogram differencing, sum conditional variance

Procedia PDF Downloads 321
1445 EduEasy: Smart Learning Assistant System

Authors: A. Karunasena, P. Bandara, J. A. T. P. Jayasuriya, P. D. Gallage, J. M. S. D. Jayasundara, L. A. P. Y. P. Nuwanjaya

Abstract:

The usage of smart learning concepts has increased rapidly all over the world recently, as they provide better teaching and learning methods. Most educational institutes, such as universities, are experimenting with these concepts with their students. Smart learning concepts are especially useful for students in large classes. In large classes, the lecture method is the most popular method of teaching: the lecturer presents the content mostly using lecture slides, and the students make their own notes based on the content presented. However, some students may find difficulties with this method due to various issues, such as the speed of delivery. The purpose of this research is to assist students in large classes in following the content. The research proposes a solution with four components, namely a note-taker, a slide matcher, a reference finder, and a question presenter, which help the students obtain a summarized version of the lecture note, easily navigate to the content and find resources, and revise the content using questions.

Keywords: automatic summarization, extractive text summarization, speech recognition library, sentence extraction, automatic web search, automatic question generator, sentence scoring, the term weight

Procedia PDF Downloads 120
1444 Feature-Based Summarizing and Ranking from Customer Reviews

Authors: Dim En Nyaung, Thin Lai Lai Thein

Abstract:

Due to the rapid growth of the Internet, web opinion sources emerge dynamically, which is useful for both potential customers and product manufacturers for prediction and decision purposes. These are user-generated contents written in natural language in an unstructured, free-text form. Therefore, opinion mining techniques have become popular for automatically processing customer reviews to extract product features and the user opinions expressed over them. Since customer reviews may contain both opinionated and factual sentences, a supervised machine learning technique is applied for subjectivity classification to improve the mining performance. In this paper, we dedicate our work to the task of opinion summarization. Product feature and opinion extraction is critical to opinion summarization, because its effectiveness significantly affects the identification of semantic relationships. The polarity and numeric score of all the features are determined by the SentiWordNet lexicon. The problem of opinion summarization refers to how to relate the opinion words with respect to a certain feature. A probabilistic model based on supervised learning improves the result and is more flexible and effective.
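
As a small illustration of how SentiWordNet can assign polarity scores to opinion words attached to product features, the sketch below tags a review sentence with NLTK, treats nouns as candidate features and nearby adjectives as opinions, and looks up adjective polarity in SentiWordNet. The window size and the listed NLTK downloads are assumptions; the paper's subjectivity classifier and probabilistic model are not reproduced here.

```python
import nltk
from nltk.corpus import sentiwordnet as swn

# One-time downloads assumed:
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
# nltk.download("sentiwordnet"); nltk.download("wordnet")

def adjective_polarity(word):
    """Average (positive - negative) score over SentiWordNet adjective senses."""
    synsets = list(swn.senti_synsets(word, "a"))
    if not synsets:
        return 0.0
    return sum(s.pos_score() - s.neg_score() for s in synsets) / len(synsets)

review = "The battery life is excellent but the camera quality is poor."
tagged = nltk.pos_tag(nltk.word_tokenize(review))

# Pair each noun (candidate product feature) with adjectives in a small window.
WINDOW = 3
for i, (word, tag) in enumerate(tagged):
    if tag.startswith("NN"):
        nearby = tagged[max(0, i - WINDOW): i + WINDOW + 1]
        opinions = [(w, adjective_polarity(w)) for w, t in nearby if t.startswith("JJ")]
        if opinions:
            print(word, "->", opinions)
```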

Keywords: opinion mining, opinion summarization, sentiment analysis, text mining

Procedia PDF Downloads 307
1443 A Method for Clinical Concept Extraction from Medical Text

Authors: Moshe Wasserblat, Jonathan Mamou, Oren Pereg

Abstract:

Natural Language Processing (NLP) has made a major leap in the last few years in practical integration into medical solutions; for example, extracting clinical concepts from medical texts such as medical conditions, medications, treatments, and symptoms. However, training and deploying those models in real environments still demands a large amount of annotated data and NLP/Machine Learning (ML) expertise, which makes this process costly and time-consuming. We present a practical and efficient method for clinical concept extraction that requires neither costly labeled data nor ML expertise. The method includes three steps. Step 1: the user injects a large in-domain text corpus (e.g., PubMed); the system then builds a contextual model containing vector representations of concepts in the corpus, in an unsupervised manner (e.g., Phrase2Vec). Step 2: the user provides a seed set of terms representing a specific medical concept (e.g., for the concept of symptoms, the user may provide: ‘dry mouth,’ ‘itchy skin,’ and ‘blurred vision’); the system then matches the seed set against the contextual model and extracts the most semantically similar terms (e.g., additional symptoms). The result is a complete set of terms related to the medical concept. Step 3: in production, there is a need to extract medical concepts from unseen medical text. The system extracts key-phrases from the new text, then matches them against the complete set of terms from step 2, and the most semantically similar ones are annotated with the same medical concept category. As an example, the seed symptom concepts would result in the following annotation: “The patient complains of fatigue [symptom], dry skin [symptom], and weight loss [symptom], which can be an early sign of diabetes.” Our evaluations show promising results for extracting concepts from medical corpora. The method allows medical analysts to easily and efficiently build taxonomies (in step 2) representing their domain-specific concepts, and to automatically annotate a large number of texts (in step 3) for classification/summarization of medical reports.
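
Step 2 (matching a seed set against an unsupervised contextual model) can be approximated with gensim word embeddings, as in the sketch below. The toy corpus, the underscore-joined phrase handling, and the use of plain Word2Vec rather than the Phrase2Vec variant mentioned by the authors are assumptions.

```python
from gensim.models import Word2Vec

# Assume `corpus` is an iterable of tokenized in-domain sentences
# (e.g., pre-processed PubMed abstracts, with multi-word terms joined by "_").
corpus = [
    ["patient", "reports", "dry_mouth", "and", "blurred_vision"],
    ["itchy_skin", "and", "fatigue", "are", "common", "symptoms"],
    ["weight_loss", "can", "be", "an", "early", "sign", "of", "diabetes"],
]  # toy stand-in for a large medical corpus

# Step 1: build the unsupervised contextual model.
model = Word2Vec(sentences=corpus, vector_size=100, window=5,
                 min_count=1, workers=2, epochs=50)

# Step 2: expand a seed set for the "symptom" concept.
seed_terms = ["dry_mouth", "itchy_skin", "blurred_vision"]
seed_terms = [t for t in seed_terms if t in model.wv]
candidates = model.wv.most_similar(positive=seed_terms, topn=10)

symptom_lexicon = set(seed_terms) | {term for term, _ in candidates}
print(symptom_lexicon)

# Step 3 (sketch): any key-phrase from new text whose embedding is close to
# this lexicon would be annotated with the "symptom" concept category.
```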

Keywords: clinical concepts, concept expansion, medical records annotation, medical records summarization

Procedia PDF Downloads 108
1442 Linguistic Summarization of Structured Patent Data

Authors: E. Y. Igde, S. Aydogan, F. E. Boran, D. Akay

Abstract:

Patent data have an increasingly important role in economic growth, innovation, technical advantage and business strategy, and even in competition between countries. Analyzing patent data is crucial, since patents cover a large part of all the technological information in the world. In this paper, we have used the linguistic summarization technique to prove the validity of hypotheses about patent data stated in the literature.
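
Linguistic summarization in this fuzzy-set sense evaluates the truth of statements such as "Most patents have a high number of claims." A minimal sketch with made-up membership functions and toy data follows; the actual hypotheses, attributes, and fuzzy sets used by the authors are not reproduced here.

```python
import numpy as np

# Toy attribute: number of claims per patent (hypothetical values).
claims = np.array([3, 8, 12, 15, 20, 22, 25, 30, 5, 18])

def mu_high(x):
    """Membership in the fuzzy set 'high number of claims' (assumed shape)."""
    return np.clip((x - 10) / 10.0, 0.0, 1.0)

def mu_most(p):
    """Fuzzy quantifier 'most' as a function of a proportion p in [0, 1]."""
    return np.clip((p - 0.3) / 0.5, 0.0, 1.0)

# Degree of truth of the linguistic summary
# "Most patents have a high number of claims"
# (quantifier applied to the average membership, Zadeh-style).
proportion = mu_high(claims).mean()
truth = mu_most(proportion)
print(f"T('Most patents have a high number of claims') = {truth:.2f}")
```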

Keywords: data mining, fuzzy sets, linguistic summarization, patent data

Procedia PDF Downloads 248
1441 Olefin and Paraffin Separation Using Simulations on Extractive Distillation

Authors: Muhammad Naeem, Abdulrahman A. Al-Rabiah

Abstract:

The components of a technical C4 mixture containing 1-butene and n-butane are very close to each other with respect to their boiling points, i.e., -6.3°C for 1-butene and -1°C for n-butane. An extractive distillation process is used for the separation of 1-butene from the existing C4 mixture. The solvent is essential to extractive distillation, and an appropriate solvent plays an important role in the process economy of extractive distillation. Aspen Plus has been applied as the simulator for the separation of these hydrocarbons; moreover, the NRTL activity coefficient model was used in the simulation. This model indicated that the material balances in this separation process were accurate for several solvent flow rates. A mixture of acetonitrile and water was used as the solvent, and 99% pure 1-butene was separated. The simulation proposed a feed-to-solvent ratio of 1:7.9 and 15 plates for the solvent recovery column; previously, the feed-to-solvent ratio was higher and 30 plates were proposed, so the separation process can be economized.

Keywords: extractive distillation, 1-butene, Aspen Plus, ACN solvent

Procedia PDF Downloads 412
1440 Unsupervised Domain Adaptive Text Retrieval with Query Generation

Authors: Rui Yin, Haojie Wang, Xun Li

Abstract:

Recently, mainstream dense retrieval methods have obtained state-of-the-art results on some datasets and tasks. However, they require large amounts of training data, which is not available in most domains. The severe performance degradation of dense retrievers on new data domains has limited the use of dense retrieval methods to the few domains with large training datasets. In this paper, we propose an unsupervised domain-adaptive approach based on query generation. First, a generative model is used to generate relevant queries for each passage in the target corpus, and then the generated queries are used for mining negative passages. Finally, the query-passage pairs are labeled with a cross-encoder and used to train a domain-adapted dense retriever. Experiments show that our approach is more robust than previous methods in target domains while requiring less unlabeled data.
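
A compressed sketch of this pipeline (generate queries per passage, pseudo-label pairs with a cross-encoder, then fine-tune a dense retriever) is given below using the transformers and sentence-transformers libraries. The checkpoint names, the tiny corpus, and the omission of negative mining and retriever training are placeholders; the authors' exact models and recipe are not specified here.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from sentence_transformers import CrossEncoder

passages = [
    "Dense retrieval encodes queries and passages into a shared vector space.",
    "Extractive distillation separates close-boiling mixtures using a solvent.",
]

# 1) Generate synthetic queries for each target-domain passage.
#    Model name is a placeholder for any doc2query-style checkpoint.
qg_name = "doc2query/msmarco-t5-base-v1"          # assumed checkpoint
tok = AutoTokenizer.from_pretrained(qg_name)
qg = AutoModelForSeq2SeqLM.from_pretrained(qg_name)

pairs = []
for passage in passages:
    ids = tok(passage, return_tensors="pt", truncation=True).input_ids
    out = qg.generate(ids, max_length=32, do_sample=True, top_k=25,
                      num_return_sequences=3)
    for seq in out:
        pairs.append((tok.decode(seq, skip_special_tokens=True), passage))

# 2) Pseudo-label the generated (query, passage) pairs with a cross-encoder;
#    high-scoring pairs become positives, low-scoring ones become negatives.
ce = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # assumed checkpoint
scores = ce.predict(pairs)

# 3) The labeled pairs would then be used to fine-tune a bi-encoder dense
#    retriever on the target domain (not shown).
for (q, p), s in zip(pairs, scores):
    print(f"{s:6.2f}  {q}")
```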

Keywords: dense retrieval, query generation, unsupervised training, text retrieval

Procedia PDF Downloads 39
1439 Mask-Prompt-Rerank: An Unsupervised Method for Text Sentiment Transfer

Authors: Yufen Qin

Abstract:

Text sentiment transfer is an important branch of text style transfer. The goal is to generate text with another sentiment attribute based on a text with a specific sentiment attribute, while keeping the content and semantic information unrelated to sentiment unchanged in the process. There are currently two main challenges in this field: the lack of parallel corpora and text attribute entanglement. In response to these problems, this paper proposes a novel solution, Mask-Prompt-Rerank, which masks the sentiment words and then uses prompt-based regeneration to transfer the sentence sentiment. Experiments on two sentiment benchmark datasets and one formality transfer benchmark dataset show that this approach makes the performance of small pre-trained language models comparable to that of the most advanced large models, while consuming two orders of magnitude less compute and memory.
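
A toy version of the mask-then-regenerate-then-rerank idea can be assembled from off-the-shelf Hugging Face pipelines, as below. The sentiment-word list, the masked-LM checkpoint, the single-mask assumption, and the use of a generic sentiment classifier as the reranker are assumptions, not the configuration from the paper.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")   # assumed checkpoint
sentiment = pipeline("sentiment-analysis")                 # generic reranker

SENTIMENT_WORDS = {"terrible", "awful", "great", "amazing", "bad", "good"}

def transfer(sentence, target_label="POSITIVE", top_k=10):
    # 1) Mask the sentiment-bearing word (sketch assumes exactly one).
    masked = " ".join(
        fill_mask.tokenizer.mask_token
        if t.lower().strip(".,!") in SENTIMENT_WORDS else t
        for t in sentence.split())
    # 2) Prompt the masked LM to regenerate candidates for the masked slot.
    candidates = [c["sequence"] for c in fill_mask(masked, top_k=top_k)]
    # 3) Rerank candidates by how strongly they express the target sentiment.
    scored = [(c, s) for c, s in zip(candidates, sentiment(candidates))
              if s["label"] == target_label]
    scored.sort(key=lambda x: x[1]["score"], reverse=True)
    return scored[0][0] if scored else sentence

print(transfer("The food was terrible."))
```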

Keywords: language model, natural language processing, prompt, text sentiment transfer

Procedia PDF Downloads 48
1438 Process Simulation of 1-Butene Separation from C4 Mixture by Extractive Distillation

Authors: Muhammad Naeem, Abdulrahman A. Al-Rabiah, Wasif Mughees

Abstract:

The components of a technical C4 mixture containing 1-butene and n-butane are very close to each other with regard to their boiling points, i.e., -6.3°C for 1-butene and -1°C for n-butane. An extractive distillation process is used for the separation of 1-butene from the existing C4 mixture. The solvent is essential to extractive distillation, and an appropriate solvent plays an important role in the process economy of extractive distillation. Aspen Plus has been applied as the simulator for the separation of these hydrocarbons. Moreover, the NRTL activity coefficient model was used in the simulation. The model indicated that the material balances in this separation process were accurate for several solvent flow rates. A mixture of acetonitrile and water was used as the solvent, and 99% pure 1-butene was separated. The simulation proposed a feed-to-solvent ratio of 1:7.9 and 15 plates for the solvent recovery column. Previously, the feed-to-solvent ratio was higher and 30 plates were proposed, which shows that the separation process can be economized.

Keywords: extractive distillation, 1-butene, Aspen Plus, ACN solvent

Procedia PDF Downloads 500
1437 Unsupervised Part-of-Speech Tagging for Amharic Using K-Means Clustering

Authors: Zelalem Fantahun

Abstract:

Part-of-speech tagging is the process of assigning a part-of-speech or other lexical class marker to each word in naturally occurring text. It is one of the most fundamental and basic tasks in almost all of natural language processing. In natural language processing, the problem of providing a large amount of manually annotated data is a knowledge acquisition bottleneck. Since Amharic is an under-resourced language, the availability of a tagged corpus is the bottleneck for natural language processing, especially for POS tagging. A promising direction for tackling this problem is to provide a system that does not require manually tagged data. In unsupervised learning, the learner is not provided with classifications; unsupervised algorithms seek out similarity between pieces of data in order to determine whether they can be characterized as forming a group. This paper explicates the development of an unsupervised part-of-speech tagger using K-means clustering for the Amharic language, since a large amount of data is produced in day-to-day activities. In the development of the tagger, the following procedures are followed. First, the unlabeled data (raw text) is divided into 10 folds and the tokenization phase takes place; at this level, the raw text is chunked at the sentence level and then into words. The second phase is feature extraction, which includes word frequency and the syntactic and morphological features of a word. The third phase is clustering; among different clustering algorithms, K-means is selected and implemented in this study to bring groups of similar words together. The fourth phase is mapping, which deals with looking at each cluster carefully so that the most common tag is assigned to the group. This study identifies two features that are capable of distinguishing one part-of-speech from others, namely morphological features and positional information, and shows that it is possible to use unsupervised learning for Amharic POS tagging. In order to increase the performance of the unsupervised part-of-speech tagger, there is a need to incorporate other features that are not included in this study, such as semantics-related information. Finally, based on the experimental results, the performance of the system achieves a maximum of 81% accuracy.
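
The clustering and feature-extraction phases described above can be approximated in scikit-learn as follows. The toy token list, the feature set (frequency, crude suffix, relative position), and the number of clusters are illustrative assumptions; real Amharic text and a richer morphological feature extractor would replace them, and each cluster would then be mapped to its most common POS tag.

```python
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.feature_extraction import DictVectorizer

# Toy tokenized sentences (stand-ins for Amharic word forms).
sentences = [["hirut", "bet", "hedech"], ["kebede", "dabo", "bela"],
             ["aster", "timhirt", "bet", "hedech"]]
tokens = [(w, i / max(len(s) - 1, 1)) for s in sentences for i, w in enumerate(s)]
freq = Counter(w for w, _ in tokens)

# Feature extraction: word frequency, a crude suffix, and positional information.
features = [{"freq": freq[w],
             "suffix": w[-2:],          # crude morphological feature
             "position": pos}           # relative position in the sentence
            for w, pos in tokens]
X = DictVectorizer(sparse=False).fit_transform(features)

# Clustering: group words with similar features; each cluster would later be
# mapped to the most common POS tag observed among its members.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for (word, _), label in zip(tokens, kmeans.labels_):
    print(f"{word:10s} -> cluster {label}")
```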

Keywords: POS tagging, Amharic, unsupervised learning, k-means

Procedia PDF Downloads 415
1436 Recovery of Acetonitrile from Aqueous Solutions by Extractive Distillation: The Effect of Entrainer

Authors: Aleksandra Y. Sazonova, Valentina M. Raeva

Abstract:

The aim of this work was to apply extractive distillation for acetonitrile removal from water solutions; to validate a thermodynamic criterion based on excess Gibbs energy for the entrainer selection process for acetonitrile-water mixture separation and show its potential efficiency at isothermal as well as isobaric conditions (the conditions of a real distillation process); and to simulate and analyze an extractive distillation process with the chosen entrainers, optimizing the number of trays and feeds, the entrainer/original mixture ratio and the reflux ratio. An equimolar composition of the feed stream was chosen for the process, and a comparison of the energy consumptions was carried out. Glycerol was suggested as the most energetically and ecologically suitable entrainer.

Keywords: acetonitrile, entrainer, extractive distillation, water

Procedia PDF Downloads 243
1435 Simulation Data Summarization Based on Spatial Histograms

Authors: Jing Zhao, Yoshiharu Ishikawa, Chuan Xiao, Kento Sugiura

Abstract:

In order to analyze large-scale scientific data, research on data exploration and visualization has gained popularity. In this paper, we focus on the exploration and visualization of scientific simulation data, and define a spatial V-Optimal histogram for data summarization. We propose histogram construction algorithms based on a general binary hierarchical partitioning as well as a more specific one, the l-grid partitioning. For effective data summarization and efficient data visualization in scientific data analysis, we propose an optimal algorithm as well as a heuristic algorithm for histogram construction. To verify the effectiveness and efficiency of the proposed methods, we conduct experiments on the massive evacuation simulation data.

Keywords: simulation data, data summarization, spatial histograms, exploration, visualization

Procedia PDF Downloads 155
1434 Extractive Fermentation of Ethanol Using Vacuum Fractionation Technique

Authors: Weeraya Samnuknit, Apichat Boontawan

Abstract:

A vacuum fractionation technique was introduced to remove ethanol from the fermentation broth. The effect of initial glucose and ethanol concentrations on specific productivity was investigated. The inhibitory ethanol concentration was observed at 100 g/L. In order to increase the fermentation performance, the ethanol product was removed as soon as it was produced. The broth was boiled at 35°C by reducing the pressure to 65 mbar. The ethanol/water vapor was fractionated up to 90 wt% before leaving the column. The ethanol concentration in the broth was kept lower than 25 g/L, thus minimizing the product inhibition effect on the yeast cells. For batch extractive fermentation, a high substrate utilization rate of 26.6 g/L.h was obtained, and most of the glucose was consumed within 21 h. For repeated-batch extractive fermentation, glucose was added up to 9 times and more than 8-fold more ethanol was produced than in batch fermentation.

Keywords: ethanol, extractive fermentation, product inhibition, vacuum fractionation

Procedia PDF Downloads 222
1433 Deep Learning-Based Approach to Automatic Abstractive Summarization of Patent Documents

Authors: Sakshi V. Tantak, Vishap K. Malik, Neelanjney Pilarisetty

Abstract:

A patent is an exclusive right granted for an invention. It can be a product or a process that provides an innovative method of doing something, or offers a new technical perspective or solution to a problem. A patent can be obtained by making the technical information and details about the invention publicly available. The patent owner has exclusive rights to prevent or stop anyone from using the patented invention for commercial uses. Any commercial usage, distribution, import or export of a patented invention or product requires the patent owner’s consent. It has been observed that the central and important parts of patents are scripted in idiosyncratic and complex linguistic structures that can be difficult to read, comprehend or interpret for the masses. The abstracts of these patents tend to obfuscate the precise nature of the patent instead of clarifying it via direct and simple linguistic constructs. This makes it necessary to have an efficient access to this knowledge via concise and transparent summaries. However, as mentioned above, due to complex and repetitive linguistic constructs and extremely long sentences, common extraction-oriented automatic text summarization methods should not be expected to show a remarkable performance when applied to patent documents. Other, more content-oriented or abstractive summarization techniques are able to perform much better and generate more concise summaries. This paper proposes an efficient summarization system for patents using artificial intelligence, natural language processing and deep learning techniques to condense the knowledge and essential information from a patent document into a single summary that is easier to understand without any redundant formatting and difficult jargon.
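
A minimal abstractive baseline along these lines can be obtained with a pre-trained sequence-to-sequence model from the transformers library, as sketched below; the checkpoint, length limits, and the single example passage are assumptions, and a production system would add patent-specific fine-tuning and long-document handling.

```python
from transformers import pipeline

# Generic pre-trained abstractive summarizer (assumed checkpoint).
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

patent_text = (
    "The invention relates to a method for controlling a heat exchanger, "
    "wherein a control unit receives temperature measurements from a plurality "
    "of sensors and adjusts a valve position so that the outlet temperature "
    "remains within a predetermined range, thereby reducing energy consumption."
)

summary = summarizer(patent_text, max_length=60, min_length=15, do_sample=False)
print(summary[0]["summary_text"])
```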

Keywords: abstractive summarization, deep learning, natural language processing, patent document

Procedia PDF Downloads 101
1432 Biofilm Text Classifiers Developed Using Natural Language Processing and Unsupervised Learning Approach

Authors: Kanika Gupta, Ashok Kumar

Abstract:

Biofilms are dense, highly hydrated cell clusters that are irreversibly attached to a substratum, to an interface or to each other, and are embedded in a self-produced gelatinous matrix composed of extracellular polymeric substances. Research in the biofilm field has become very significant, as biofilms have shown high mechanical resilience and resistance to antibiotic treatment and constitute a significant problem in both healthcare and other industries related to microorganisms. The massive amount of information, both stated and hidden, in the biofilm literature is growing exponentially; it is therefore not possible for researchers and practitioners to manually extract and relate information from different written resources. The current work proposes and discusses the use of text mining techniques for the extraction of information from a biofilm literature corpus containing 34306 documents. It is very difficult and expensive to obtain annotated material for biomedical literature, as the literature is unstructured, i.e., free text. Therefore, we considered an unsupervised approach, where no annotated training data is necessary, and using this approach we developed a system that classifies the text on the basis of growth and development, drug effects, radiation effects, and the classification and physiology of biofilms. For this, a two-step structure was used: the first step is to extract keywords from the biofilm literature using a metathesaurus and standard natural language processing tools such as RapidMiner v5.3, and the second step is to discover relations between the genes extracted from the whole set of biofilm literature using pubmed.mineR v1.0.11. We used an unsupervised approach, which is the machine learning task of inferring a function to describe hidden structure from 'unlabeled' data, on the above-extracted datasets to develop classifiers using the WinPython 64-bit v3.5.4.0Qt5 and RStudio v0.99.467 packages, which automatically classify the text using the mentioned sets. The developed classifiers were tested on a large dataset of biofilm literature, which showed that the proposed unsupervised approach is promising as well as suited for semi-automatic labeling of the extracted relations. The entire information was stored in a relational database hosted locally on the server. The generated biofilm vocabulary and gene relations will be significant for researchers dealing with biofilm research, making their search easy and efficient, as the keywords and genes can be directly mapped to the documents used for database development.

Keywords: biofilms literature, classifiers development, text mining, unsupervised learning approach, unstructured data, relational database

Procedia PDF Downloads 142
1431 Key Frame Based Video Summarization via Dependency Optimization

Authors: Janya Sainui

Abstract:

With the rapid growth of digital videos and data communications, video summarization, which provides a shorter version of a video for fast browsing and retrieval, has become necessary. Key frame extraction is one of the mechanisms used to generate a video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most existing approaches select key frames heuristically; hence, the selected key frames may not be the most distinct frames and/or may not cover the entire content of the video. In this paper, we propose a video summarization method that provides reasonable objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual information as our objective function for maximizing the coverage of the entire video content as well as minimizing the redundancy among the selected key frames. The proposed key frame extraction algorithm finds key frames by solving an optimization problem. Through experiments, we demonstrate the success of the proposed video summarization approach, which produces a video summary with better coverage of the entire video content and less redundancy among key frames compared to state-of-the-art approaches.
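
The coverage-versus-redundancy trade-off can be illustrated with a greedy selector over frame descriptors, as below. Random vectors stand in for real frame descriptors, plain cosine similarity stands in for the quadratic mutual information dependency measure used in the paper, and the weighting is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 64))          # stand-in frame descriptors
frames /= np.linalg.norm(frames, axis=1, keepdims=True)

def greedy_keyframes(frames, k=5, redundancy_weight=0.5):
    """Pick k frames that cover the video while penalizing mutual similarity."""
    sim = frames @ frames.T                  # cosine similarity between frames
    coverage = sim.mean(axis=1)              # how well a frame represents the rest
    selected = [int(np.argmax(coverage))]
    while len(selected) < k:
        redundancy = sim[:, selected].max(axis=1)
        score = coverage - redundancy_weight * redundancy
        score[selected] = -np.inf            # never re-pick a chosen frame
        selected.append(int(np.argmax(score)))
    return sorted(selected)

print("Key frame indices:", greedy_keyframes(frames))
```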

Keywords: video summarization, key frame extraction, dependency measure, quadratic mutual information

Procedia PDF Downloads 247
1430 An Experiential Learning of Ontology-Based Multi-document Summarization by Removal Summarization Techniques

Authors: Pranjali Avinash Yadav-Deshmukh

Abstract:

The remarkable development of the Internet, along with new technological innovations such as high-speed systems and affordable large storage space, has led to a tremendous increase in the amount of and accessibility to digital documents. For any person, studying all these data is tremendously time-intensive, so there is a great need for effective multi-document summarization (MDS) systems, which can successfully reduce the details found in several documents into a short, understandable summary or conclusion. Our system provides, as a theoretical design, a significant structure for the semantic representation of textual details in the ontology area. The suitability of using the ontology for solving multi-document summarization problems in the sector of disaster management is finding its recommended design. A saliency rank is assigned to each sentence, the sentences are ranked accordingly, and the top-ranked sentences are chosen as the summary. With regard to summary quality, extensive tests on a selection of news reports about the 2014 Jammu and Kashmir floods are appropriate. Ontology-centered multi-document summarization methods using NLP-based extraction outperform other baselines. Our contribution in the proposed component is to apply information extraction (NLP) methods to enhance the results.

Keywords: disaster management, extraction technique, k-means, multi-document summarization, NLP, ontology, sentence extraction

Procedia PDF Downloads 350
1429 Analyzing Semantic Feature Using Multiple Information Sources for Reviews Summarization

Authors: Yu Hung Chiang, Hei Chia Wang

Abstract:

Nowadays, tourism has become a part of life. Before reserving a hotel, customers need information about it to help them make decisions, and the most important source of such information is online reviews. Due to the dramatic growth of online reviews, it is impossible for tourists to read all the reviews manually. Therefore, designing an automatic review analysis system that summarizes reviews is necessary for them. The main purpose of the system is to understand the opinion of reviews, which may be positive or negative; in other words, the system analyzes whether the customers who visited the hotel liked it or not. Sentiment analysis methods help the system achieve this purpose. In sentiment analysis, the targets of opinion (here called features) should be recognized to clarify the polarity of the opinion, because the polarity may otherwise be ambiguous. Hence, this study proposes an unsupervised method using part-of-speech patterns and multi-lexicon sentiment analysis to summarize all reviews. We expect this method can help customers find the information they want as well as make decisions efficiently.

Keywords: text mining, sentiment analysis, product feature extraction, multi-lexicons

Procedia PDF Downloads 306
1428 Unsupervised Reciter Recognition Using Gaussian Mixture Models

Authors: Ahmad Alwosheel, Ahmed Alqaraawi

Abstract:

This work proposes an unsupervised, text-independent, probabilistic approach to recognizing a Quran reciter's voice. It is an accurate approach that works in real-time applications and does not require prior information about reciter models. It has two phases: in the training phase, the reciters' acoustical features are modeled using Gaussian Mixture Models, while in the testing phase, an unlabeled reciter's acoustical features are examined against the GMM models. Using this approach, high-accuracy results are achieved with an efficient computation time.
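
The two phases can be sketched with librosa MFCC features and scikit-learn Gaussian mixtures as follows; the file names, the number of mixture components, and the simple maximum-likelihood decision rule are assumptions.

```python
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(path):
    """Frame-level MFCC features for one recitation recording."""
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # (frames, 13)

# Training phase: one GMM per known reciter (file names are placeholders).
reciter_files = {"reciter_A": "reciter_a.wav", "reciter_B": "reciter_b.wav"}
models = {}
for name, path in reciter_files.items():
    gmm = GaussianMixture(n_components=16, covariance_type="diag", max_iter=200)
    models[name] = gmm.fit(mfcc_features(path))

# Testing phase: score an unlabeled recording against every reciter model
# and pick the one with the highest average log-likelihood.
test = mfcc_features("unknown.wav")                         # placeholder file
scores = {name: gmm.score(test) for name, gmm in models.items()}
print("Predicted reciter:", max(scores, key=scores.get))
```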

Keywords: Quran, speaker recognition, reciter recognition, Gaussian Mixture Model

Procedia PDF Downloads 352
1427 Adapted Intersection over Union: A Generalized Metric for Evaluating Unsupervised Classification Models

Authors: Prajwal Prakash Vasisht, Sharath Rajamurthy, Nishanth Dara

Abstract:

In a supervised machine learning approach, metrics such as precision, accuracy, and coverage can be calculated using ground truth labels to help in model tuning, evaluation, and selection. In an unsupervised setting, however, where the data has no ground truth, there are few interpretable metrics that can guide us to do the same. Our approach creates a framework that adapts the Intersection over Union metric, usually used to evaluate supervised learning models, into the unsupervised domain; we refer to it as Adapted IoU. It solves the problem by factoring in subject-matter expertise and intuition about the ideal output from the model. This metric essentially provides a scale that allows us to compare performance across numerous unsupervised models, or to tune hyper-parameters and compare different versions of the same model.
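
In its simplest form, the idea reduces to comparing the item set a model assigns to a category with the set a subject-matter expert would expect, via the usual intersection-over-union ratio. A minimal sketch with hypothetical item IDs follows; the paper's full framework for encoding expert intuition is not reproduced.

```python
def adapted_iou(model_items, expert_items):
    """IoU between a model's output set and the expert-expected set."""
    model_items, expert_items = set(model_items), set(expert_items)
    union = model_items | expert_items
    if not union:
        return 1.0  # both empty: perfect agreement by convention
    return len(model_items & expert_items) / len(union)

# Hypothetical example: items an unsupervised model placed in one cluster
# versus the items a domain expert expects that cluster to contain.
model_cluster  = {"doc1", "doc2", "doc5", "doc7"}
expert_cluster = {"doc1", "doc2", "doc3", "doc7"}
print(f"Adapted IoU = {adapted_iou(model_cluster, expert_cluster):.2f}")  # 0.60
```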

Keywords: general metric, unsupervised learning, classification, intersection over union

Procedia PDF Downloads 17
1426 A Clustering Algorithm for Massive Texts

Authors: Ming Liu, Chong Wu, Bingquan Liu, Lei Chen

Abstract:

Internet users have to face a massive amount of textual data every day. Organizing texts into categories can help users dig useful information out of large-scale text collections. Clustering is, in fact, one of the most promising tools for categorizing texts due to its unsupervised character. Unfortunately, most traditional clustering algorithms lose their high quality on large-scale text collections. This situation is mainly attributable to the high-dimensional vectors generated from texts. To effectively and efficiently cluster large-scale text collections, this paper proposes a clustering algorithm based on vector reconstruction. Only the features that can represent the cluster are preserved in the cluster's representative vector. The algorithm alternately repeats two sub-processes until it converges. One is the partial tuning sub-process, where feature weights are fine-tuned by an iterative process; to accelerate clustering, an intersection-based similarity measurement and its corresponding neuron adjustment function are proposed and implemented in this sub-process. The other is the overall tuning sub-process, where the features are reallocated among the different clusters; in this sub-process, the features that are useless for representing the cluster are removed from the cluster's representative vector. Experimental results on three text collections (two small-scale and one large-scale) demonstrate that our algorithm obtains high quality on both small-scale and large-scale text collections.

Keywords: vector reconstruction, large-scale text clustering, partial tuning sub-process, overall tuning sub-process

Procedia PDF Downloads 405
1425 Extractive Bioconversion of Polyhydroxyalkanoates (PHAs) from Ralstonia Eutropha Via Aqueous Two-Phase System-An Integrated Approach

Authors: Y. K. Leong, J. C. W. Lan, H. S. Loh, P. L. Show

Abstract:

Being biodegradable, non-toxic and renewable, and having similar or better properties than commercial plastics, polyhydroxyalkanoates (PHAs) can be a potential game changer in the polymer industry. PHAs are biodegradable polymers produced by bacteria and are of interest as a sustainable alternative to petrochemical-derived plastics; however, their commercial value has been significantly limited by the high production and recovery cost of PHA. An aqueous two-phase system (ATPS) offers distinct chemical and physical environments and, containing about 80-90% water, delivers an excellent environment for the partitioning of cells, cell organelles and biologically active substances. Extractive bioconversion via ATPS allows the integration of the PHA upstream fermentation and downstream purification processes, which reduces production steps and time and thus leads to cost reduction. The ability of Ralstonia eutropha to grow under different ATPS conditions was investigated for its potential to be used in a bioconversion system. Changes in tie-line length (TLL) and volume ratio (Vr) were shown to have an effect on the PHA partition coefficient. A high PHA recovery yield of 65% with a relatively high purity of 73% was obtained in a PEG 6000/sodium sulphate system with 42.6 wt/wt% TLL and a Vr of 1.25. Extractive bioconversion via ATPS is an attractive approach for combining the PHA production and recovery processes.

Keywords: aqueous two-phase system, extractive bioconversion, polyhydroxy alkanoates, purification

Procedia PDF Downloads 283
1424 Unsupervised Images Generation Based on Sloan Digital Sky Survey with Deep Convolutional Generative Neural Networks

Authors: Guanghua Zhang, Fubao Wang, Weijun Duan

Abstract:

Convolutional neural networks (CNNs) have attracted more and more attention in recent years, especially in the fields of computer vision and image classification. However, unsupervised learning with CNNs has received less attention than supervised learning. In this work, we use a powerful new tool, deep convolutional generative adversarial networks (DCGANs), to generate images from the Sloan Digital Sky Survey. Trained on various star and galaxy images, both the generator and the discriminator prove to be well suited to unsupervised learning. In this paper, we also conduct several experiments to choose the best values for the hyper-parameters, which helps to stabilize the training process and promises good quality of the output.
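
A compact Keras definition of the two DCGAN components is given below for orientation; the image size, filter counts, and latent dimension are assumptions and do not correspond to the authors' exact architecture or the SDSS preprocessing.

```python
from tensorflow.keras import layers, models

LATENT_DIM = 100          # assumed size of the noise vector
IMG_SHAPE = (64, 64, 1)   # assumed grayscale 64x64 cutouts

def build_generator():
    """Upsample a noise vector into a 64x64 image with transposed convolutions."""
    return models.Sequential([
        layers.Input(shape=(LATENT_DIM,)),
        layers.Dense(8 * 8 * 256),
        layers.Reshape((8, 8, 256)),
        layers.Conv2DTranspose(128, 4, strides=2, padding="same"),
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(64, 4, strides=2, padding="same"),
        layers.BatchNormalization(), layers.ReLU(),
        layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh"),
    ])

def build_discriminator():
    """Strided convolutions that classify an image as real or generated."""
    return models.Sequential([
        layers.Input(shape=IMG_SHAPE),
        layers.Conv2D(64, 4, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Conv2D(128, 4, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),
    ])

generator, discriminator = build_generator(), build_discriminator()
generator.summary()
discriminator.summary()
```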

Keywords: convolution neural network, discriminator, generator, unsupervised learning

Procedia PDF Downloads 237
1423 Extraction of Text Subtitles in Multimedia Systems

Authors: Amarjit Singh

Abstract:

In this paper, a method for the extraction of text subtitles in large videos is proposed. Video data needs to be annotated for many multimedia applications, and text is incorporated into digital video to provide useful information about that video, so the need arises to detect the text present in a video for understanding and video indexing. This is achieved in two steps: the first step is text localization and the second step is text verification. The method of text detection can be extended to text recognition, which finds applications in automatic video indexing, video annotation and content-based video retrieval. The method has been tested on various types of videos.

Keywords: video, subtitles, extraction, annotation, frames

Procedia PDF Downloads 571
1422 A Summary-Based Text Classification Model for Graph Attention Networks

Authors: Shuo Liu

Abstract:

In Chinese text classification tasks, redundant words and phrases can interfere with the extraction and analysis of text information, leading to a decrease in the accuracy of the classification model. To reduce irrelevant elements, extract and utilize text content information more efficiently, and improve the accuracy of text classification models, this paper first extracts an abstract from each text in the corpus using the TextRank algorithm, uses the words in the abstract as nodes to construct a text graph, and then applies a graph attention network (GAT) to complete the task of classifying the text. In tests on a Chinese dataset collected from the web, the classification accuracy was improved over the direct method of generating graph structures from the full text.
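
The classification stage can be sketched with PyTorch Geometric's GATConv layer as below: words from the TextRank abstract become nodes, co-occurrence edges connect them, and graph-level pooling feeds a classifier. The feature dimension, the toy edge construction, and the two-layer depth are assumptions rather than the authors' configuration.

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GATConv, global_mean_pool

class SummaryGraphClassifier(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes, heads=4):
        super().__init__()
        self.gat1 = GATConv(in_dim, hidden_dim, heads=heads)
        self.gat2 = GATConv(hidden_dim * heads, hidden_dim, heads=1)
        self.classifier = torch.nn.Linear(hidden_dim, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.elu(self.gat1(x, edge_index))
        x = F.elu(self.gat2(x, edge_index))
        x = global_mean_pool(x, batch)        # one vector per document graph
        return self.classifier(x)

# Toy document graph: 4 abstract words with random embeddings,
# edges between words that co-occur in the TextRank abstract.
x = torch.randn(4, 64)                         # assumed 64-dim word embeddings
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]], dtype=torch.long)
data = Data(x=x, edge_index=edge_index)
batch = torch.zeros(4, dtype=torch.long)       # all nodes belong to document 0

model = SummaryGraphClassifier(in_dim=64, hidden_dim=32, num_classes=5)
logits = model(data.x, data.edge_index, batch)
print(logits.shape)                            # torch.Size([1, 5])
```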

Keywords: Chinese natural language processing, text classification, abstract extraction, graph attention network

Procedia PDF Downloads 67