Search results for: text localization and extraction.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1437

1197 Knowledge Acquisition for the Construction of an Evolving Ontology: Application to Augmented Surgery

Authors: Nora Taleb, Sellami Mokhtar, Michel Simonet

Abstract:

This work concerns the evolution and maintenance of an ontological resource in relation to the evolution of the corpus of texts from which it was built. The knowledge forming a text corpus, especially in dynamic domains, is in continuous evolution. When a change in the corpus occurs, the domain ontology must evolve accordingly. Most methods manage ontology evolution independently of the corpus from which the ontology is built; in addition, they treat evolution merely as a process of knowledge addition, without considering other kinds of knowledge change. We propose a methodology for managing an evolving ontology built from a text corpus that evolves over time, while preserving the consistency and persistence of this ontology. Our methodology is based on the changes made to the corpus to reflect the evolution of the considered domain - augmented surgery in our case. In this context, the results of text mining techniques, as well as a slightly modified ARCHONTE method, are used to support the evolution process.

Keywords: Corpus, Evolution, Ontology

1196 Text Summarization for Oil and Gas Drilling Topic

Authors: Y. Y. Chen, O. M. Foong, S. P. Yong, Kurniawan Iwan

Abstract:

Information sharing and gathering are important in this era of rapid technological advancement, and the existence of the WWW has caused an explosion of information. Readers are overloaded with lengthy text documents when they are often more interested in shorter versions; the oil and gas industry cannot escape this predicament. In this paper, we develop an Automated Text Summarization System, known as AutoTextSumm, to extract the salient points of oil and gas drilling articles by incorporating a statistical approach, keyword identification, synonym words and sentence position. In this study, we conducted interviews with Petroleum Engineering experts and English Language experts to identify the most commonly used keywords in the oil and gas drilling domain. The performance of AutoTextSumm is evaluated using the formulae for precision, recall and F-score. Based on the experimental results, AutoTextSumm produced satisfactory performance with an F-score of 0.81.
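
A minimal sketch of the precision/recall/F-score evaluation mentioned above; the sentence identifiers are hypothetical and this is not the authors' implementation:

```python
def evaluate_summary(extracted, reference):
    """Precision, recall and F-score over sets of summary sentences (illustrative only)."""
    extracted, reference = set(extracted), set(reference)
    overlap = len(extracted & reference)
    precision = overlap / len(extracted) if extracted else 0.0
    recall = overlap / len(reference) if reference else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return precision, recall, f_score

# Example: 4 of the 5 extracted sentences also appear in the reference summary.
p, r, f = evaluate_summary(["s1", "s2", "s3", "s4", "s5"],
                           ["s1", "s2", "s3", "s4", "s6"])
print(f"precision={p:.2f}, recall={r:.2f}, F-score={f:.2f}")
```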

Keywords: Keyword's probability, synonym sets.

1195 Riemannian Manifolds for Brain Extraction on Multi-modal Resonance Magnetic Images

Authors: Mohamed Gouskir, Belaid Bouikhalene, Hicham Aissaoui, Benachir Elhadadi

Abstract:

In this paper, we present an application of Riemannian geometry to processing non-Euclidean image data. We consider the image as residing in a Riemannian manifold and develop a new method for brain edge detection and brain extraction. Automating this process is a challenge due to the high diversity in the appearance of brain tissue among different patients and sequences. The main contribution of this paper is the use of an edge-based anisotropic diffusion tensor for the segmentation task, integrating both image edge geometry and the Riemannian manifold (geodesic, metric tensor) to regularize the converging contour and extract complex anatomical structures. We check the accuracy of the segmentation results on simulated brain MRI scans of single T1-weighted, T2-weighted and Proton Density sequences. We validate our approach using two different databases: the BrainWeb database and the MRI Multiple Sclerosis Database (MRI MS DB). We have compared our approach, qualitatively and quantitatively, with well-known brain extraction algorithms. We show that applying Riemannian manifolds to medical image analysis improves brain extraction results, in real time, outperforming standard techniques.
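
As an illustration of treating an image as a Riemannian manifold, a common Beltrami-type construction (not necessarily the exact metric used by the authors) induces a metric from the image intensity I and measures contour length geodesically:

```latex
g_{ij} = \delta_{ij} + \beta^{2}\,\partial_i I\,\partial_j I, \qquad
L(\gamma) = \int_{0}^{1} \sqrt{\dot{\gamma}(t)^{T}\, g\!\left(\gamma(t)\right)\,\dot{\gamma}(t)}\; dt
```

Here \beta weights the image-gradient contribution; strong edges inflate the metric, so geodesics (and diffusion) are discouraged from crossing them.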

Keywords: Riemannian manifolds, Riemannian Tensor, Brain Segmentation, Non-Euclidean data, Brain Extraction.

1194 The Main Principles of Text-to-Speech Synthesis System

Authors: K.R. Aida–Zade, C. Ardil, A.M. Sharifova

Abstract:

In this paper, the main principles of a text-to-speech synthesis system are presented. The problems that arise when developing a speech synthesis system are described. The approaches used and their application in speech synthesis systems for the Azerbaijani language are shown.

Keywords: synthesis of Azerbaijani language, morphemes, phonemes, sounds, sentence, speech synthesizer, intonation, accent, pronunciation.

1193 Recursive Algorithms for Image Segmentation Based on a Discriminant Criterion

Authors: Bing-Fei Wu, Yen-Lin Chen, Chung-Cheng Chiu

Abstract:

In this study, a new criterion for determining the number of classes into which an image should be segmented is proposed. This criterion is based on discriminant analysis, measuring the separability among the segmented classes of pixels. Based on the new discriminant criterion, two algorithms for recursively segmenting the image into the determined number of classes are proposed. The proposed methods can automatically and correctly segment objects under various illuminations into separate images for further processing. Experiments on the extraction of text strings from complex document images demonstrate the effectiveness of the proposed methods.
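
For orientation, a minimal sketch of an Otsu-style discriminant (between-class separability) evaluated over candidate thresholds; the paper's recursive, multi-class criterion differs in detail, so this is only illustrative:

```python
import numpy as np

def separability(hist, t):
    """Between-class variance of a grey-level histogram split at threshold t."""
    p = hist / hist.sum()
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return 0.0
    levels = np.arange(len(p))
    mu0 = (levels[:t] * p[:t]).sum() / w0
    mu1 = (levels[t:] * p[t:]).sum() / w1
    return w0 * w1 * (mu0 - mu1) ** 2

# Synthetic histogram standing in for a document-image grey-level distribution.
hist = np.bincount(np.random.randint(0, 256, 10_000), minlength=256)
best_t = max(range(1, 256), key=lambda t: separability(hist, t))
print("selected threshold:", best_t)
```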

Keywords: image segmentation, multilevel thresholding, clustering, discriminant analysis

1192 Distribution of Phospholipids, Cholesterol and Carotenoids in Two-Solvent System during Egg Yolk Oil Solvent Extraction

Authors: Aleksandrs Kovalcuks, Mara Duma

Abstract:

Egg yolk oil is a concentrated source of egg bioactive compounds, such as fat-soluble vitamins, phospholipids, cholesterol and carotenoids. To extract lipids and other fat-soluble nutrients from liquid egg yolk, a two-step extraction process involving polar (ethanol) and non-polar (hexane) solvents was used. This extraction technique is based on the polarities of egg yolk bioactive compounds: non-polar compounds are extracted into the non-polar hexane phase, while polar compounds pass into the polar alcohol/water phase. However, many egg yolk bioactive compounds are neither strongly polar nor strongly non-polar. Egg yolk phospholipids, cholesterol and pigments are amphipathic (they have both polar and non-polar regions), and their behavior in an ethanol/hexane solvent system is not clear. The aim of this study was to clarify the behavior of phospholipids, cholesterol and carotenoids during extraction of egg yolk oil with ethanol and hexane, and to determine the loss of these compounds in egg yolk oil. Egg yolks and egg yolk oil were analyzed for phospholipids (phosphatidylcholine (PC) and phosphatidylethanolamine (PE)), cholesterol and carotenoids (lutein, zeaxanthin, canthaxanthin and β-carotene) using GC-FID and HPLC methods. PC and PE are polar lipids and were extracted into the polar ethanol phase. The concentration of PC in ethanol was 97.89% and that of PE 99.81% of the total egg yolk phospholipids. Due to the partial extraction of cholesterol into ethanol, the cholesterol content in egg yolk oil was reduced compared with its total content in egg yolk lipids. The highest amounts of lutein and zeaxanthin were concentrated in the ethanol extract. The opposite was observed for canthaxanthin and β-carotene, which became the main pigments of egg yolk oil.

Keywords: Cholesterol, egg yolk oil, lutein, phospholipids, solvent extraction.

1191 Performance Study of Neodymium Extraction by Carbon Nanotubes Assisted Emulsion Liquid Membrane Using Response Surface Methodology

Authors: Payman Davoodi-Nasab, Ahmad Rahbar-Kelishami, Jaber Safdari, Hossein Abolghasemi

Abstract:

High-purity rare earth elements (REEs) have been widely used in chemical engineering, metallurgy, nuclear energy, optical, magnetic, luminescence and laser materials, superconductors, ceramics, alloys and catalysts. Neodymium is one of the most abundant rare earths. With the development of neodymium–iron–boron (Nd–Fe–B) permanent magnets, the importance of neodymium has dramatically increased. Solvent extraction processes have many operational limitations, such as a large inventory of extractants, loss of solvent due to its solubility in aqueous solutions and volatilization of diluents. One promising liquid membrane process is the emulsion liquid membrane (ELM), which offers an alternative to solvent extraction processes. In this work, a study of Nd extraction through a multi-walled carbon nanotube (MWCNT) assisted ELM using response surface methodology (RSM) has been performed. The ELM was composed of diisooctylphosphinic acid (CYANEX 272) as carrier, MWCNTs as nanoparticles, Span-85 (sorbitan trioleate) as surfactant, kerosene as organic diluent and nitric acid as internal phase. The effects of important operating variables, namely surfactant concentration, MWCNT concentration and treatment ratio, were investigated. Results were optimized using a central composite design (CCD), and a regression model for extraction percentage was developed. The 3D response surfaces of Nd(III) extraction efficiency were obtained, and the significance of the three variables and their interactions on the extraction efficiency was determined. The results indicated that introducing MWCNTs into the ELM process increased the Nd extraction due to the higher stability of the membrane and enhanced mass transfer. An MWCNT concentration of 407 ppm, a Span-85 concentration of 2.1% (v/v) and a treatment ratio of 10 were found to be the optimum conditions. At the optimum conditions, the extraction of Nd(III) reached a maximum of 99.03%.
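
A minimal numpy sketch of fitting a second-order (CCD-style) response-surface model to extraction data by least squares; the coded factors, design points and response values are hypothetical, not the paper's data:

```python
import numpy as np

# Coded factors: x1 = Span-85 conc., x2 = MWCNT conc., x3 = treatment ratio (hypothetical).
a = 1.682  # axial distance for a rotatable three-factor CCD
factorial = [[i, j, k] for i in (-1, 1) for j in (-1, 1) for k in (-1, 1)]
axial = [[a, 0, 0], [-a, 0, 0], [0, a, 0], [0, -a, 0], [0, 0, a], [0, 0, -a]]
centre = [[0, 0, 0], [0, 0, 0]]
X = np.array(factorial + axial + centre, dtype=float)
y = np.array([72.1, 80.3, 78.5, 88.9, 70.4, 79.8, 77.0, 90.2,
              85.5, 74.2, 88.0, 76.3, 83.9, 75.6, 95.1, 94.7])  # extraction %

def design_matrix(X):
    """Second-order model: intercept, linear, two-factor interaction and squared terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

coef, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print(np.round(coef, 2))  # fitted coefficients of the extraction-% response surface
```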

Keywords: Emulsion liquid membrane, extraction of neodymium, multi-walled carbon nanotubes, response surface method.

1190 Classifying Biomedical Text Abstracts based on Hierarchical 'Concept' Structure

Authors: Rozilawati Binti Dollah, Masaki Aono

Abstract:

Classifying biomedical literature is a difficult and challenging task, especially when a large number of biomedical articles should be organized into a hierarchical structure. In this paper, we present an approach for classifying a collection of biomedical text abstracts downloaded from the Medline database with the help of ontology alignment. To accomplish our goal, we construct two types of hierarchies, the OHSUMED disease hierarchy and the Medline abstract disease hierarchies, from the OHSUMED dataset and the Medline abstracts, respectively. Then, we enrich the OHSUMED disease hierarchy before applying the ontology alignment process to find probable concepts or categories. Subsequently, we compute the cosine similarity between the vectors of the probable concepts (in the "enriched" OHSUMED disease hierarchy) and the vectors in the Medline abstract disease hierarchies. Finally, we assign a category to each new Medline abstract based on the similarity score. The results obtained from the experiments show that the performance of our proposed approach for hierarchical classification is slightly better than that of multi-class flat classification.
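
A minimal sketch of the cosine-similarity category assignment described above; the concept names and term-weight vectors are made up for illustration:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two term-weight vectors."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def assign_category(abstract_vec, concept_vecs):
    """Pick the hierarchy concept whose vector is most similar to the abstract vector."""
    return max(concept_vecs, key=lambda c: cosine(abstract_vec, concept_vecs[c]))

concepts = {"cardiovascular": np.array([0.9, 0.1, 0.0]),
            "neoplasms":      np.array([0.1, 0.8, 0.3])}
print(assign_category(np.array([0.2, 0.7, 0.4]), concepts))  # -> "neoplasms"
```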

Keywords: Biomedical literature, hierarchical text classification, ontology alignment, text mining.

1189 Automatic Extraction of Arbitrarily Shaped Buildings from VHR Satellite Imagery

Authors: Evans Belly, Imdad Rizvi, M. M. Kadam

Abstract:

Satellite imagery is one of the emerging technologies extensively utilized in applications such as the detection/extraction of man-made structures, monitoring of sensitive areas and creation of graphic maps. The main approach here is the automated detection of buildings from very high resolution (VHR) optical satellite images. Initially, the shadow, building and non-building regions (roads, vegetation, etc.) are investigated, with building extraction as the main focus. Once the whole landscape is collected, a trimming process is performed to eliminate landscape regions that arise from non-building objects. Finally, a labeling method is used to extract the building regions; the labeling method may be altered for more efficient building extraction. The images used for the analysis are acquired from sensors with a resolution finer than 1 meter (VHR). This method provides an efficient way to produce good results: the overhead of intermediate processing is eliminated, easing the processing steps and reducing the time consumed, without compromising the quality of the output.
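
A minimal sketch of region labeling with trimming of small non-building blobs, in the spirit of the label-and-trim step described above; the area threshold and the toy mask are assumptions, not the authors' parameters:

```python
import numpy as np
from scipy import ndimage

def extract_building_regions(building_mask, min_area=50):
    """Label connected regions in a binary building mask and drop small ones (trimming)."""
    labels, n = ndimage.label(building_mask)
    areas = ndimage.sum(building_mask, labels, index=range(1, n + 1))
    keep = [i + 1 for i, a in enumerate(areas) if a >= min_area]
    return np.where(np.isin(labels, keep), labels, 0)

mask = np.zeros((100, 100), bool)
mask[10:40, 10:50] = True      # a building-sized blob (kept)
mask[80:83, 80:83] = True      # a small non-building blob (trimmed)
print(np.unique(extract_building_regions(mask)))  # -> [0 1]
```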

Keywords: Building detection, shadow detection, landscape generation, label, partitioning, very high resolution satellite imagery.

1188 Caffeine Content Investigation in the Turkish Black Teas

Authors: E. Moroydor Derun, A. S. Kipcak, O. Dere Ozdemir, F. Demir, M. Karakoc, S. Piskin

Abstract:

Tea is a widely consumed beverage that contains many components. Caffeine belongs to the group of nitrogen-containing components called alkaloids. In this study, the caffeine contents of three types of Turkish tea are determined using an extraction method. After the condensation process, a residue of caffeine and oil is obtained by evaporation, and the oil in the residue is removed with hot water. The extraction is performed using chloroform, and crude caffeine is obtained. From the experimental results, the caffeine contents of black tea, green tea and Earl Grey tea are 3.57±0.43%, 3.11±0.02% and 4.29±0.27%, respectively. The caffeine contents in 1, 5 and 10 cups of tea are calculated. Furthermore, the daily intake of caffeine from black teas, which affects human health, is investigated.

Keywords: Caffeine, extraction, tea, health.

1187 Comparison of Microwave-Assisted and Conventional Leaching for Extraction of Copper from Chalcopyrite Concentrate

Authors: Ayfer Kilicarslan, Kubra Onol, Sercan Basit, Muhlis Nezihi Saridede

Abstract:

Chalcopyrite (CuFeS2) is the most common primary mineral used for the commercial production of copper. The low dissolution efficiency of chalcopyrite in sulfate media has prevented efficient industrial leaching of this mineral in such media. Ferric ions, bacteria, oxygen and other oxidants have been used as oxidizing agents in the leaching of chalcopyrite in sulfate and chloride media under atmospheric or pressure leaching conditions. Two leaching methods were studied to evaluate chalcopyrite (CuFeS2) dissolution in acid media. First, the conventional oxidative acid leaching method was carried out using sulfuric acid (H2SO4) and potassium dichromate (K2Cr2O7) as oxidant at atmospheric pressure. Second, microwave-assisted acid leaching was performed using the microwave accelerated reaction system (MARS) with the same reaction media. Parameters affecting the copper extraction, such as leaching time, leaching temperature, concentration of H2SO4 and concentration of K2Cr2O7, were investigated. The results of the conventional acid leaching experiments were compared with those of the microwave leaching method. It was found that the copper extraction obtained under high temperature and high oxidant concentrations with microwave leaching is higher than that obtained conventionally. Copper extraction of 81% was obtained by the conventional oxidative acid leaching method in 180 min, with a concentration of 0.3 mol/L K2Cr2O7 in 0.5 M H2SO4 at 50 °C, while 93.5% copper extraction was obtained in 60 min with the microwave leaching method under the same conditions.

Keywords: Extraction, copper, microwave-assisted leaching, chalcopyrite, potassium dichromate.

1186 Wasting Human and Computer Resources

Authors: Mária Csernoch, Piroska Biró

Abstract:

The legends about "user-friendly" and "easy-to-use" birotical tools (computer-related office tools) have been spreading and misleading end-users. This approach has led to an extremely high number of incorrect documents, causing serious financial losses in the creating, modifying, and retrieving processes. Our research proved that there are at least two sources of this underachievement: (1) the lack of a definition of correctly edited and formatted documents. Consequently, end-users do not know whether their methods and results are correct or not; they are not aware of their ignorance, and that ignorance does not allow them to realize their lack of knowledge. (2) The end-users' problem-solving methods. We have found that in non-traditional programming environments end-users apply, almost exclusively, surface-approach metacognitive methods to carry out their computer-related activities, which have proved less effective than deep-approach methods. Based on these findings, we have developed deep-approach methods which are based on and adapted from traditional programming languages. In this study, we focus on the most popular type of birotical documents, text-based documents. We have provided a definition of correctly edited text and, based on this definition, adapted the debugging method known from programming. According to the method, before any real text editing, a thorough debugging of already existing texts and a categorization of errors are carried out. In this way, in advance of real text editing, users learn the requirements of text-based documents and of correctly formatted text. The method has proved much more effective than the previously applied surface-approach methods. The advantages of the method are that real text handling requires far fewer human and computer resources than clicking aimlessly in the GUI (Graphical User Interface), and that data retrieval is much more effective than from error-prone documents.

Keywords: Deep approach metacognitive methods, error-prone birotical documents, financial losses, human and computer resources.

1185 Volatility of Cu, Ni, Cr, Co, Pb, and As in Fluidised-Bed Combustion Chamber in Relation to Their Modes of Occurrence in Coal

Authors: L. Bartoňová, Z. Klika

Abstract:

The modes of occurrence of Pb, As, Cr, Co, Cu, and Ni in bituminous coal and lignite were determined by means of sequential extraction using NH4OAc, HCl, HF and HNO3 extraction solutions. The elemental affinities obtained were then evaluated in relation to the volatility of these elements during the combustion of these coals in two circulating fluidised-bed power stations. It was found that a higher percentage of an element bound in silicates brought about lower volatility, while a higher elemental proportion associated with monosulphides (or bound as exchangeable ions) resulted in higher volatility. The only exception was the behavior of arsenic, whose volatility depended on the amount of limestone added during the combustion process (as a desulphurisation additive) rather than on its association in coal.

Keywords: Coal combustion, sequential extraction, trace elements, volatility.

1184 Automatic Text Summarization

Authors: Mohamed Abdel Fattah, Fuji Ren

Abstract:

This work proposes an approach to automatic text summarization. The approach is a trainable summarizer, which takes into account several features, including sentence position, positive keywords, negative keywords, sentence centrality, sentence resemblance to the title, sentence inclusion of named entities, sentence inclusion of numerical data, sentence relative length, bushy path of the sentence and aggregated similarity, to score each sentence and generate summaries. First, we investigate the effect of each sentence feature on the summarization task. Then we use a combined feature score function to train genetic algorithm (GA) and mathematical regression (MR) models to obtain a suitable combination of feature weights. The performance of the proposed approach is measured at several compression rates on a data corpus composed of 100 English religious articles. The results of the proposed approach are promising.
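
A minimal sketch of scoring a sentence as a weighted sum of its features, the core idea behind the trainable summarizer described above; the feature values and weights are hypothetical, standing in for GA/MR-trained weights:

```python
def sentence_score(features, weights):
    """Aggregate sentence score as a weighted sum of its normalized feature values."""
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical normalized feature values for one sentence and learned weights.
features = {"position": 0.9, "positive_keyword": 0.6, "centrality": 0.4,
            "title_resemblance": 0.7, "relative_length": 0.5}
weights = {"position": 0.30, "positive_keyword": 0.25, "centrality": 0.15,
           "title_resemblance": 0.20, "relative_length": 0.10}
print(f"score = {sentence_score(features, weights):.3f}")
# A summary at a given compression rate keeps the top-scoring sentences.
```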

Keywords: Automatic Summarization, Genetic Algorithm, Mathematical Regression, Text Features.

1183 A Hybrid Method for Eyes Detection in Facial Images

Authors: Muhammad Shafi, Paul W. H. Chung

Abstract:

This paper proposes a hybrid method for eye localization in facial images. The novelty lies in combining techniques that utilise colour, edge and illumination cues to improve accuracy. The method is based on the observation that eye regions have dark colour, a high density of edges and low illumination compared to other parts of the face. The first step is to extract connected regions from facial images using colour, edge density and illumination cues separately. Some of the regions are then removed by applying rules based on the general geometry and shape of eyes. The remaining connected regions obtained through the three cues are then combined in a systematic way to enhance the identification of candidate regions for the eyes. The geometry- and shape-based rules are then applied again to further remove false eye regions. The proposed method was tested using images from the PICS facial images database and achieved 93.7% and 87% accuracy for initial blob extraction and final eye detection, respectively.
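
As a rough illustration of combining per-cue region masks (the paper combines connected regions "in a systematic way"; this simple voting rule is only an assumed stand-in):

```python
import numpy as np

def candidate_eye_regions(colour_mask, edge_density_mask, illumination_mask,
                          min_votes=2):
    """Keep pixels supported by at least min_votes of the three boolean cue masks."""
    votes = (colour_mask.astype(int) + edge_density_mask.astype(int)
             + illumination_mask.astype(int))
    return votes >= min_votes

# Toy 4x4 masks standing in for dark-colour, high-edge-density and low-illumination cues.
rng = np.random.default_rng(0)
masks = [rng.random((4, 4)) > 0.5 for _ in range(3)]
print(candidate_eye_regions(*masks))
```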

Keywords: Erosion, dilation, Edge-density

1182 Release of Elements in Bottom Ash and Fly Ash from Incineration of Peat- and Wood-Residues using a Sequential Extraction Procedure

Authors: Risto Poykio, Kati Manskinen, Olli Dahl, Mikko Mäkelä, Hannu Nurmesniemi

Abstract:

When the results for the total element concentrations using USEPA method 3051A are compared with the sequential extraction analyses (i.e. the sum of fractions BCR1, BCR2 and BCR3), it can be calculated that the recovery values of the elements varied between 56.8% and 69.4% in the bottom ash, and between 11.3% and 70.9% in the fly ash. This indicates that most of the elements in the ashes do not occur in readily soluble forms.
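
A minimal sketch of the recovery calculation implied above; the concentrations are hypothetical, not values from the paper:

```python
def recovery_percent(bcr1, bcr2, bcr3, total):
    """Recovery = sum of sequential-extraction fractions over the USEPA 3051A total, in %."""
    return 100.0 * (bcr1 + bcr2 + bcr3) / total

# Hypothetical concentrations (mg/kg) of one element in bottom ash.
print(f"{recovery_percent(12.0, 8.5, 20.1, 64.0):.1f} %")  # -> 63.4 %
```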

Keywords: Ash, BCR, leaching, solubility, waste

1181 Production and Extraction of Quercetin and (+)-Catechin from Phyllanthus niruri Callus Culture

Authors: Anuar, N., Markom, M., Khairedin, S., Johari, N. A.

Abstract:

Quercetin and (+)-catechin are metabolites present in the Phyllanthus niruri plant that have potential medicinal uses as anticancer and antioxidant agents. Studies on the production of quercetin and (+)-catechin from P. niruri callus culture via an in vitro technique were carried out, and the results were compared with the intact plant. P. niruri explants were cultured on Murashige and Skoog (MS) solidified media supplemented with several phytohormone combinations for one month. The metabolites were extracted from P. niruri callus and the intact plant by carbon dioxide supercritical fluid extraction (SFE), with ethanol as modifier, and by solvent extraction techniques. The extracts were analyzed by HPLC. The results showed that the P. niruri callus culture was successfully established. The highest content of quercetin (1.72%) was found in P. niruri callus grown in media supplemented with 0.8 mg/L kinetin and 0.2 mg/L 2,4-dichlorophenoxyacetic acid (2,4-D), which was 1.2-fold higher than in the intact plant. Meanwhile, the highest amount of (+)-catechin (0.63%) was found in P. niruri callus grown in media with the addition of 0.2 mg/L 1-naphthalene acetic acid (NAA) and 0.8 mg/L 2,4-D. The SFE conditions in this study showed better extraction efficiency, as higher contents of the selected metabolites were found in all SFE extracts compared with the conventional solvent extracts.

Keywords: Callus culture, Phyllanthus niruri, secondary metabolite, supercritical fluid extraction.

1180 Characterization for Post-treatment Effect of Bagasse Ash for Silica Extraction

Authors: Patcharin Worathanakul, Wisaroot Payubnop, Akhapon Muangpet

Abstract:

Utilization of bagasse ash as a silica source is one of the most common applications for agricultural wastes and valuable biomass byproducts of sugar milling. The high silica content of bagasse ash was used as the silica source for a sodium silicate solution. Different heating temperatures, heating times and acid treatments were studied for silica extraction. The silica was characterized using various techniques, including X-ray fluorescence, X-ray diffraction, scanning electron microscopy and Fourier transform infrared spectroscopy. The synthesis conditions were optimized to obtain bagasse ash with the maximum silica content. A silica content of 91.57 percent was achieved by heating bagasse ash at 600°C for 3 hours under oxygen feeding and HCl treatment. The result adds value to bagasse ash utilization and minimizes the environmental impact of disposal problems.

Keywords: Bagasse ash, synthesis, silica, extraction, post-treatment.

1179 An Enhanced Floor Estimation Algorithm for Indoor Wireless Localization Systems Using Confidence Interval Approach

Authors: Kriangkrai Maneerat, Chutima Prommak

Abstract:

Indoor wireless localization systems have played an important role in enhancing context-aware services. Determining the position of mobile objects in complex indoor environments, such as multi-floor buildings, is a very challenging problem. This paper presents an effective floor estimation algorithm which can accurately determine the floor on which mobile objects are located. The proposed algorithm is based on the confidence interval of the summation of online Received Signal Strength (RSS) obtained from IEEE 802.15.4 Wireless Sensor Networks (WSN). We compare the performance of the proposed algorithm with that of other floor estimation algorithms in the literature by conducting a real WSN implementation in our facility. The experimental results and analysis showed that the proposed floor estimation algorithm outperformed the other algorithms and provided floor accuracy of up to 100% with a 95-percent confidence interval.
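
A minimal sketch of a confidence-interval-based floor decision over summed RSS values; the exact decision rule, calibration data and fallback behaviour here are assumptions, not the paper's algorithm:

```python
import numpy as np
from scipy import stats

def estimate_floor(rss_sums_per_floor, online_rss_sum, confidence=0.95):
    """Return the floor whose RSS-sum confidence interval contains the online sample,
    falling back to the floor with the nearest interval centre (illustrative only)."""
    best, best_dist = None, float("inf")
    for floor, samples in rss_sums_per_floor.items():
        samples = np.asarray(samples, dtype=float)
        mean = samples.mean()
        half = stats.t.ppf(0.5 + confidence / 2, len(samples) - 1) * stats.sem(samples)
        if mean - half <= online_rss_sum <= mean + half:
            return floor
        if abs(online_rss_sum - mean) < best_dist:
            best, best_dist = floor, abs(online_rss_sum - mean)
    return best

# Hypothetical calibration: summed RSS (dBm) observed per floor, and one online sample.
calib = {1: [-310, -305, -312, -308], 2: [-280, -276, -283, -279]}
print("estimated floor:", estimate_floor(calib, -281))
```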

Keywords: Floor estimation algorithm, floor determination, multi-floor building, indoor wireless systems.

1178 The Mechanism Study of Degradative Solvent Extraction of Biomass by Liquid Membrane-Fourier Transform Infrared Spectroscopy

Authors: W. Ketren, J. Wannapeera, Z. Heishun, A. Ryuichi, K. Toshiteru, M. Kouichi, O. Hideaki

Abstract:

Degradative solvent extraction is a method developed for biomass upgrading by dewatering and fractionation of biomass under mild conditions. However, the conversion mechanism of the degradative solvent extraction method has not been fully understood so far. Rice straw was treated in 1-methylnaphthalene (1-MN) at solvent-treatment temperatures varied from 250 to 350 °C with a residence time of 60 min. The liquid membrane-Fourier Transform Infrared Spectroscopy (FTIR) technique was applied to study the processing mechanism in depth without separation of the solvent. It was found that the strength of the oxygen-hydrogen stretching band (3600-3100 cm-1) decreased slightly with increasing temperature in the range of 300-350 °C. The decrease of the hydroxyl group in the solvent-soluble fraction suggests a dehydration reaction taking place between 300 and 350 °C. FTIR spectra in the carbonyl stretching region (1800-1600 cm-1) revealed the presence of ester, carboxylic acid and ketonic groups in the solvent-soluble fraction of the biomass. The carboxylic acid content increased in the range of 200 to 250 °C and then decreased. The prevalence of aromatic groups showed that aromatization took place during extraction above 250 °C. From 300 to 350 °C, the carbonyl functional groups in the solvent-soluble fraction noticeably decreased. The removal of carboxylic acid groups and the decrease of esters, released as carbon dioxide, indicate that a decarboxylation reaction occurred during the extraction process.

Keywords: Biomass upgrading, liquid membrane-Fourier transform infrared spectroscopy, FTIR, degradative solvent extraction, mechanism.

1177 Examining the Value of Attribute Scores for Author-Supplied Keyphrases in Automatic Keyphrase Extraction

Authors: Vicky Min-How Lim, Siew Fan Wong, Tong Ming Lim

Abstract:

Automatic keyphrase extraction is useful for efficiently locating specific documents in online databases. While several techniques have been introduced over the years, improvement in accuracy has been minimal. This research examines attribute scores for author-supplied keyphrases to better understand how the scores affect the accuracy of automatic keyphrase extraction. Five attributes are chosen for examination: Term Frequency, First Occurrence, Last Occurrence, Phrase Position in Sentences, and Term Cohesion Degree. The results show that First Occurrence is the most reliable attribute. Term Frequency, Last Occurrence and Term Cohesion Degree display a wide range of variation but are still usable with suggested tweaks. Only Phrase Position in Sentences shows a totally unpredictable pattern. The results imply that the commonly used ranking approach, which directly extracts the top-ranked potential phrases from the candidate keyphrase list as the keyphrases, may not be reliable.
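
A minimal sketch of three of the occurrence-based attributes named above (Term Frequency, First Occurrence, Last Occurrence); the relative-position normalization is an assumption, and the sentence-position and cohesion attributes are omitted:

```python
def phrase_attributes(document_tokens, phrase_tokens):
    """Compute simple occurrence-based attributes for one candidate keyphrase."""
    n, m = len(document_tokens), len(phrase_tokens)
    positions = [i for i in range(n - m + 1)
                 if document_tokens[i:i + m] == phrase_tokens]
    if not positions:
        return None
    return {
        "term_frequency": len(positions),
        "first_occurrence": positions[0] / n,   # relative position of first mention
        "last_occurrence": positions[-1] / n,   # relative position of last mention
    }

doc = "automatic keyphrase extraction locates documents ; keyphrase extraction is useful".split()
print(phrase_attributes(doc, "keyphrase extraction".split()))
```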

Keywords: Accuracy, Attribute Score, Author-supplied keyphrases, Automatic keyphrase extraction.

1176 Extraction in Two-Phase Systems and Some Properties of Laccase from Lentinus polychrous

Authors: K. Ratanapongleka, J. Phetsom

Abstract:

Extraction of laccase produced by L. polychrous in an aqueous two-phase system composed of polyethylene glycol and phosphate salt at pH 7.0 and 25 °C was investigated. The effects of PEG molecular weight, PEG concentration and phosphate concentration were determined. Laccase partitioned preferentially to the top phase. Good extraction of laccase into the top phase was observed with PEG 4000. The optimum system was found to contain 12% w/w PEG 4000 and 16% w/w phosphate salt, with a KE of 88.3, a purification factor of 3.0-fold and 99.1% yield. Some properties of the enzyme, such as thermal stability, the effect of heavy metal ions and kinetic constants, are also presented in this work. The thermal stability decreased sharply at temperatures above 60 °C. The enzyme was inhibited by Cd2+, Pb2+, Zn2+ and Cu2+. The Vmax and Km values of the enzyme were 74.70 μmol/min/ml and 9.066 mM, respectively.
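
For reference, the reported kinetic constants fit the standard Michaelis–Menten form, and the partition coefficient KE is conventionally the top-to-bottom activity ratio (standard definitions, not reproduced from the paper):

```latex
v = \frac{V_{\max}\,[S]}{K_m + [S]}, \qquad
K_E = \frac{\text{laccase activity in the top (PEG) phase}}{\text{laccase activity in the bottom (phosphate) phase}}
```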

Keywords: Aqueous two-phase system, laccase, Lentinus polychrous.

1175 Thematic Role Extraction Using Shallow Parsing

Authors: Mehrnoush Shamsfard, Maryam Sadr Mousavi

Abstract:

Extracting thematic (semantic) roles is one of the major steps in representing text meaning. It refers to finding the semantic relations between a predicate and the syntactic constituents in a sentence. In this paper we present a rule-based approach to extract semantic roles from Persian sentences. The system exploits a two-phase architecture to (1) identify the arguments and (2) label them for each predicate. For the first phase we developed a rule-based shallow parser to chunk Persian sentences, and for the second phase we developed a knowledge-based system to assign 16 selected thematic roles to the chunks. The experimental results of testing each phase are shown at the end of the paper.

Keywords: Natural Language Processing, Semantic Role Labeling, Shallow Parsing, Thematic Roles.

1174 Automatic Number Plate Recognition System Based on Deep Learning

Authors: T. Damak, O. Kriaa, A. Baccar, M. A. Ben Ayed, N. Masmoudi

Abstract:

In the last few years, Automatic Number Plate Recognition (ANPR) systems have become widely used for safety, security, and commercial purposes. Accordingly, several methods and techniques have been developed to achieve better accuracy and real-time execution. This paper proposes a computer vision algorithm for Number Plate Localization (NPL) and Character Segmentation (CS). In addition, it proposes an improved method for Optical Character Recognition (OCR) based on Deep Learning (DL) techniques. In order to identify the number on the detected plate after the NPL and CS steps, a Convolutional Neural Network (CNN) algorithm is proposed. A DL model is developed using four convolution layers, two max-pooling layers, and six fully connected layers. The model was trained on a number-image database on the NVIDIA Jetson TX2 target. The achieved accuracy is 95.84%.
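
A minimal PyTorch sketch of a network with the layer counts stated above (four convolution layers, two max-pooling layers, six fully connected layers); filter counts, kernel sizes and the 32x32 grayscale input are assumptions, not the authors' exact configuration:

```python
import torch
from torch import nn

class CharCNN(nn.Module):
    """Illustrative digit classifier: 4 conv + 2 max-pool + 6 fully connected layers."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = CharCNN()(torch.randn(1, 1, 32, 32))
print(logits.shape)  # torch.Size([1, 10])
```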

Keywords: Automatic number plate recognition, character segmentation, convolutional neural network, CNN, deep learning, number plate localization.

1173 Speciation, Preconcentration, and Determination of Iron(II) and (III) Using 1,10-Phenanthroline Immobilized on Alumina-Coated Magnetite Nanoparticles as a Solid Phase Extraction Sorbent in Pharmaceutical Products

Authors: Hossein Tavallali, Mohammad Ali Karimi, Gohar Deilamy-Rad

Abstract:

A method for the speciation, preconcentration and determination of Fe(II) and Fe(III) in pharmaceutical products was developed using alumina-coated magnetite nanoparticles (Fe3O4/Al2O3 NPs) as a solid phase extraction (SPE) sorbent in the magnetic mixed hemimicelle solid phase extraction (MMHSPE) technique, followed by flame atomic absorption spectrometry analysis. The procedure is based on the complexation of Fe(II) with 1,10-phenanthroline (OP), a complexing reagent for Fe(II), immobilized on the modified Fe3O4/Al2O3 NPs. The extraction and concentration process for the pharmaceutical sample was carried out in a single step by mixing the extraction solvent and the magnetic adsorbents under ultrasonic action. The adsorbents were then easily isolated from the complicated matrix with an external magnetic field. Fe(III) ions were determined after being readily reduced to Fe(II) by adding a suitable reducing agent to the sample solutions. Compared with traditional methods, the MMHSPE method simplifies the operation procedure and reduces the analysis time. Various parameters influencing the speciation and preconcentration of trace iron, such as pH, sample volume, amount of sorbent, and type and concentration of eluent, were studied. Under the optimized operating conditions, a preconcentration factor of 167 was obtained for Fe(II) with the modified nano-magnetite. The detection limit and linear range of this method for iron were 1.0 and 9.0 - 175 ng.mL−1, respectively. The relative standard deviation for five replicate determinations of 30.00 ng.mL−1 Fe2+ was 2.3%.

Keywords: Alumina-coated magnetite nanoparticles, magnetic mixed hemimicelle solid-phase extraction, Fe(II) and Fe(III), pharmaceutical sample.

1172 Text Summarization for Oil and Gas News Article

Authors: L. H. Chong, Y. Y. Chen

Abstract:

Information is increasing in volume; companies are so overloaded with information that they may lose track of the information they actually need. It is a time-consuming task to scan through each lengthy document, and a shorter version containing only the gist is more favourable for most information seekers. Therefore, in this paper, we implement a text summarization system to produce summaries that contain the gist of oil and gas news articles. The summarization is intended to provide important information that helps oil and gas companies monitor their competitors' behaviour and formulate business strategies. The system integrates a statistical approach with three underlying concepts: keyword occurrences, the title of the news article and the location of the sentence. The generated summaries were compared with human-generated summaries from an oil and gas company. Precision and recall ratios are used to evaluate the accuracy of the generated summaries. Based on the experimental results, the system is able to produce an effective summary, with an average recall value of 83% at a compression rate of 25%.

Keywords: Information retrieval, text summarization, statistical approach.

1171 Synthesis of Unconventional Materials Using Chitosan and Crown Ether for Selective Removal of Precious Metal Ions

Authors: Rabindra Prasad Dhakal, Tatsuya Oshima, Yoshinari Baba

Abstract:

The polyfunctional and highly reactive biopolymer chitosan was first regioselectively converted into dialkylated chitosan using a dimsyl anion solution (NaH in DMSO) and bromodecane, after protecting the amino groups with phthalic anhydride. The dibenzo-18-crown-6-ether, on the other hand, was converted into its carbonyl derivative via the Duff reaction prior to incorporation into chitosan by Schiff base formation. The diformylated dibenzo-18-crown-6-ether thus formed was condensed with the lipophilic chitosan to prepare the novel solvent extraction reagent. The products were characterized mainly by IR and 1H-NMR. The multidentate, crown-ether-embedded polyfunctional biomaterial was then tested for the extraction of Pd(II) and Pt(IV) from aqueous solution.

Keywords: Lipophilic chitosan, Duff reaction, crown ether and precious metal ions extraction.

1170 How Does Psychoanalysis Help in Reconstructing Political Thought? An Exercise of Interpretation

Authors: Subramaniam Chandran

Abstract:

The significance of psychology in studying politics is embedded in philosophical issues as well as behavioural pursuits. The former is often associated with Sigmund Freud and his followers; the latter is inspired by the writings of Harold Lasswell. Political psychology, or psychopolitics, has left its own impression on political thought ever since it began deciphering the concepts of human nature and political propaganda. More importantly, psychoanalysis views political thought as textual content whose latent content needs to be uncovered beneath the manifest content. In other words, it reads the text symptomatically and interprets the hidden truth. This paper explains the paradigm of dream interpretation applied by Freud. The dream work is a process comprising four successive activities: condensation, displacement, representation and secondary revision. Texts dealing with political thought can also be interpreted on these principles. Freud's method of dream interpretation draws on the hermeneutic model of philological research. It provides a theoretical perspective and technical rules for the interpretation of symbolic structures. The task of interpretation remains a discovery of the equivalence of symbols and actions through perpetual analogies. Psychoanalysis can help in studying political thought in two ways: to study text distortion, Freud's dream interpretation is used as a paradigm for exploring the latent text beneath the manifest text; and Freud's psychoanalytic concepts and theories, ranging from the individual mind to civilization, religion, war and politics, can be applied.

Keywords: Psychoanalysis, political thought, dream interpretation, latent content, manifest content

1169 Two Dimensionnal Model for Extraction Packed Column Simulation using Finite Element Method

Authors: N. Outili, A-H. Meniai

Abstract:

Modeling transfer phenomena in several chemical engineering operations leads to the resolution of systems of partial differential equations. Depending on the complexity of the operation mechanisms, the equations take a nonlinear form and an analytical solution becomes difficult, so we have to use numerical methods, which are based on approximations, to transform a differential system into an algebraic one. The finite element method is one numerical method which can be used to obtain an accurate solution in many complex cases of chemical engineering. Packed columns find wide application as contactors for liquid-liquid systems such as solvent extraction; in the literature, however, the modeling of this type of equipment has received less attention than that of plate columns. A two-dimensional mathematical model with radial and axial dispersion, simulating packed tower extraction behavior, was developed, and the partial differential equation was solved using the finite element method with the Galerkin formulation. We developed a Mathcad program which can be used for similar equations, and concentration profiles are obtained along the column. The influence of radial dispersion was demonstrated and cannot be neglected; the results were compared with experimental concentrations at the top of the column for the extraction system acetone/toluene/water.
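
A plausible form of the two-dimensional axial-radial dispersion balance referred to above (the paper's exact notation, boundary conditions and mass-transfer term may differ):

```latex
\frac{\partial C}{\partial t}
= D_{ax}\,\frac{\partial^{2} C}{\partial z^{2}}
+ D_{r}\,\frac{1}{r}\frac{\partial}{\partial r}\!\left(r\,\frac{\partial C}{\partial r}\right)
- u\,\frac{\partial C}{\partial z}
- k_{L}a\,\bigl(C - C^{*}\bigr)
```

where C is the solute concentration in the continuous phase, D_ax and D_r are the axial and radial dispersion coefficients, u is the superficial velocity, and k_L a (C - C*) is an interphase mass-transfer term.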

Keywords: finite element method, Galerkin method, liquid-liquid extraction modelling, packed column simulation, two-dimensional model

1168 SMaTTS: Standard Malay Text to Speech System

Authors: Othman O. Khalifa, Zakiah Hanim Ahmad, Teddy Surya Gunawan

Abstract:

This paper presents a rule-based text-to-speech (TTS) synthesis system for Standard Malay, namely SMaTTS. The proposed system uses the sinusoidal method and some pre-recorded wave files to generate speech. The use of a phone database significantly decreases the amount of computer memory used, making the system very light and embeddable. The overall system comprises two phases. The first is the Natural Language Processing (NLP) phase, which consists of the high-level processing of text analysis, phonetic analysis, text normalization and a morphophonemic module; this module was designed specially for SM to overcome a few problems in defining the rules for the SM orthography system before the text can be passed to the DSP module. The second phase is Digital Signal Processing (DSP), which operates on the low-level generation of the speech waveform. An intelligible and adequately natural-sounding formant-based speech synthesis system with a light and user-friendly Graphical User Interface (GUI) is introduced. A Standard Malay (SM) phoneme set and an inclusive phone database have been constructed carefully for this phone-based speech synthesizer. By applying generative phonology, a comprehensive set of letter-to-sound (LTS) rules and a pronunciation lexicon have been developed for SMaTTS. For the evaluation tests, a Diagnostic Rhyme Test (DRT) word list was compiled and several experiments were performed to evaluate the quality of the synthesized speech by analyzing the Mean Opinion Score (MOS) obtained. The overall performance of the system, as well as the room for improvement, is thoroughly discussed.
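
A toy sketch of greedy longest-match letter-to-sound conversion, in the spirit of the LTS component described above; the rules and phoneme symbols here are illustrative stand-ins, not the actual SMaTTS rule set:

```python
# Hypothetical orthography-to-phoneme rules; digraphs are tried before single letters.
LTS_RULES = {"ng": "N", "ny": "J", "sy": "S", "kh": "x", "c": "tS",
             "a": "a", "e": "@", "i": "i", "o": "o", "u": "u"}

def letter_to_sound(word):
    """Greedy longest-match conversion of orthography to a phoneme string."""
    phones, i = [], 0
    while i < len(word):
        for length in (2, 1):                      # try digraphs before single letters
            chunk = word[i:i + length]
            if chunk in LTS_RULES:
                phones.append(LTS_RULES[chunk])
                i += length
                break
        else:
            phones.append(word[i])                 # pass unknown letters through
            i += 1
    return " ".join(phones)

print(letter_to_sound("nyanyi"))  # -> "J a J i"
```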

Keywords: Natural Language Processing, Text-To-Speech (TTS), diphone, source filter, low-/high-level synthesis.
