Search results for: open information extraction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14461

14371 Optimization of Process Parameters for Copper Extraction from Wastewater Treatment Sludge by Sulfuric Acid

Authors: Usarat Thawornchaisit, Kamalasiri Juthaisong, Kasama Parsongjeen, Phonsiri Phoengchan

Abstract:

In this study, sludge samples collected from the wastewater treatment plant of a printed circuit board manufacturing facility in Thailand were subjected to acid extraction using sulfuric acid as the chemical extracting agent. The effects of sulfuric acid concentration (A), the ratio of acid volume to sludge quantity (B), and extraction time (C) on copper extraction efficiency were investigated with the aim of finding the optimal conditions for maximum removal of copper from the wastewater treatment sludge. A factorial experimental design was employed to model the copper extraction process, and the results were analyzed by analysis of variance to identify the process variables that significantly affected copper extraction efficiency. All linear terms, together with the interaction between the acid-to-sludge ratio and extraction time (BC), had a statistically significant influence on extraction efficiency under the tested conditions; the most significant effect was ascribed to the acid-to-sludge ratio (B), followed by sulfuric acid concentration (A), extraction time (C), and the BC interaction. The remaining two-way interactions (AB, AC) and the three-way interaction (ABC) were not statistically significant at the 0.05 significance level. A model equation was derived for the copper extraction process, and the process was optimized using the desirability (D) function, a multiple-response method, targeting maximum removal. The optimum conditions, giving 99% copper extraction, were a sulfuric acid concentration of 0.9 M, a ratio of acid volume (mL) to sludge quantity (g) of 100:1, and an extraction time of 80 min. Experiments under the optimized conditions were carried out to validate the accuracy of the model.
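
As a rough illustration of this workflow (not the authors' data or software), the following Python sketch fits a full factorial regression model and runs the ANOVA used to rank effects; the coded design matrix and response values are invented placeholders.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical 2^3 factorial data in coded levels (-1/+1); 'removal' stands in
# for measured copper extraction efficiency (%). Values are illustrative only.
df = pd.DataFrame({
    "A": [-1, 1, -1, 1, -1, 1, -1, 1] * 2,   # H2SO4 concentration
    "B": [-1, -1, 1, 1, -1, -1, 1, 1] * 2,   # acid volume / sludge ratio
    "C": [-1, -1, -1, -1, 1, 1, 1, 1] * 2,   # extraction time
    "removal": [52, 61, 78, 95, 55, 66, 85, 99,
                50, 63, 80, 93, 57, 64, 83, 98],
})

# Full factorial model with all interactions, then ANOVA to rank the effects.
model = ols("removal ~ A * B * C", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)
print(anova)          # p-values rank the main effects and interactions
print(model.params)   # fitted coefficients give the regression (model) equation
```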

Keywords: acid treatment, chemical extraction, sludge, waste management

Procedia PDF Downloads 174
14370 Effect of Electromagnetic Fields on Protein Extraction from Shrimp By-Products for Electrospinning Process

Authors: Guido Trautmann-Sáez, Mario Pérez-Won, Vilbett Briones, María José Bugueño, Gipsy Tabilo-Munizaga, Luis Gonzáles-Cavieres

Abstract:

Shrimp by-products are a valuable source of protein; however, traditional protein extraction methods are limited in efficiency. In this work, protein extraction from shrimp (Pleuroncodes monodon) industrial by-products was assisted with ohmic heating (OH), microwave (MW), and pulsed electric field (PEF) treatments. Extraction was performed by a chemical method (using NaOH and HCl, 2 M) assisted with OH, MW, and PEF in a continuous flow system (5 mL/s). The extracts were characterized by protein determination, differential scanning calorimetry (DSC), and Fourier-transform infrared (FTIR) spectroscopy. Results indicate improvements in protein extraction efficiency of 19.25% (PEF), 3.65% (OH), and 28.19% (MW). The most efficient method was selected for the electrospinning process and fiber production.

Keywords: electrospinning process, emerging technology, protein extraction, shrimp by-products

Procedia PDF Downloads 47
14369 A Communication Signal Recognition Algorithm Based on Holder Coefficient Characteristics

Authors: Hui Zhang, Ye Tian, Fang Ye, Ziming Guo

Abstract:

Communication signal modulation recognition technology is one of the key technologies in the field of modern information warfare. At present, automatic modulation recognition methods for communication signals fall into two major categories: maximum likelihood hypothesis testing methods based on decision theory, and statistical pattern recognition methods based on feature extraction. The statistical pattern recognition approach, which comprises feature extraction and classifier design, is currently the most commonly used. With the increasingly complex electromagnetic environment of communications, how to effectively extract the features of various signals at low signal-to-noise ratio (SNR) is a hot topic for scholars in many countries. To solve this problem, this paper proposes a feature extraction algorithm for communication signals based on an improved Holder cloud feature, and an extreme learning machine (ELM) is used to classify the extracted features, addressing the real-time requirements of modern warfare. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low-SNR environment and uses the improved cloud model to obtain more stable Holder cloud features, improving the performance of the algorithm. This addresses the difficulty that a simple feature extraction algorithm based on the Holder coefficient has in recognizing signals at low SNR, and it achieves better recognition accuracy. Simulation results show that the approach still classifies well at low SNR; even at -15 dB, the recognition accuracy reaches 76%.
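
For readers unfamiliar with the underlying ingredients, the sketch below shows one common form of the Hölder-coefficient feature (a normalized inner product of the signal spectrum against reference sequences) and a bare-bones extreme learning machine classifier. The reference sequences, the parameter p, and the toy signals are assumptions for illustration; this is not the paper's improved cloud-model feature set.

```python
import numpy as np

rng = np.random.default_rng(0)

def holder_coefficient(f, s, p=2.0):
    """One common Hölder-coefficient form: normalized inner product of the
    spectrum f with a reference sequence s, with 1/p + 1/q = 1 (assumed p=2)."""
    q = p / (p - 1.0)
    f, s = np.abs(f), np.abs(s)
    return float(np.sum(f * s) /
                 (np.sum(f ** p) ** (1 / p) * np.sum(s ** q) ** (1 / q)))

def features(signal, n_fft=256):
    spec = np.abs(np.fft.fft(signal, n_fft))[: n_fft // 2]
    k = np.arange(1, n_fft // 2 + 1)
    rect = np.ones_like(k, dtype=float)           # rectangular reference sequence
    tri = 1.0 - np.abs(k - k.mean()) / k.mean()   # triangular reference sequence
    return np.array([holder_coefficient(spec, rect), holder_coefficient(spec, tri)])

# Extreme learning machine: random hidden layer, output weights by least squares.
def elm_fit(X, Y, hidden=50):
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ Y
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: two made-up signal classes differing in carrier frequency, plus noise.
X = np.array([
    features(np.sin(2 * np.pi * (0.10 + 0.05 * (i % 2)) * np.arange(512))
             + 0.5 * rng.normal(size=512))
    for i in range(40)
])
Y = np.eye(2)[np.arange(40) % 2]                  # one-hot labels
W, b, beta = elm_fit(X, Y)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
print("training accuracy:", (pred == np.arange(40) % 2).mean())
```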

Keywords: communication signal, feature extraction, Holder coefficient, improved cloud model

Procedia PDF Downloads 122
14368 Optimization of Synergism Extraction of Toxic Metals (Lead, Copper) from Chlorides Solutions with Mixture of Cationic and Solvating Extractants

Authors: F. Hassaine-Sadi, S. Chelouaou

Abstract:

In recent years, environmental contamination by toxic metals such as Pb, Cu, Ni, Zn ... has become a crucial worldwide problem, particularly in areas where the population depends on groundwater for daily drinking water. The sources of these metal ions include the metal manufacturing industry, fertilizers, batteries, paints, pigments, and so on. Solvent extraction of metal ions has played an important role in the development of metal purification processes, such as the synergistic extraction of divalent metal cations (M²⁺) from various sources. This work concerns a water purification technique for dilute solutions of the lead and copper systems Pb²⁺/H₃O⁺/Cl⁻ and Cu²⁺/H₃O⁺/Cl⁻, using a mixture of tri-n-octylphosphine oxide (TOPO) or tri-n-butyl phosphate (TBP) with di(2-ethylhexyl) phosphoric acid (HDEHP) dissolved in kerosene. The fundamental parameters influencing the cation-exchange/solvating-extractant synergism have been examined.

Keywords: synergistic extraction, lead, copper, environment

Procedia PDF Downloads 418
14367 First Approach on Lycopene Extraction Using Limonene

Authors: M. A. Ferhat, M. N. Boukhatem, F. Chemat

Abstract:

Lycopene extraction with petroleum derivatives as solvents has caused safety, health, and environmental concerns worldwide. Thus, finding a safe alternative solvent would have a strong and positive impact on the environment and the general health of the world population. d-Limonene was extracted from orange peel by steam distillation followed by a deterpenation step, and was then used as a solvent, in place of dichloromethane, for extracting lycopene from tomato fruit. The lycopene content of fresh tomatoes was determined by high-performance liquid chromatography after extraction. The yields obtained with d-limonene extracts were almost equivalent to those obtained using dichloromethane. The proposed green-solvent approach is therefore a useful alternative to conventional petroleum solvents, reducing toxicity for both the operator and the environment.

Keywords: alternative solvent, d-limonene, extraction, lycopene

Procedia PDF Downloads 384
14366 Implementing a Database from a Requirement Specification

Authors: M. Omer, D. Wilson

Abstract:

Creating a database schema is essentially a manual process. The information contained in a requirement specification has to be analyzed and reduced into a set of tables, attributes and relationships. This is a time-consuming process that has to go through several stages before an acceptable database schema is achieved. The purpose of this paper is to implement a Natural Language Processing (NLP) based tool to produce a database schema from a requirement specification. Stanford CoreNLP version 3.3.1 and the Java programming language were used to implement the proposed model. The outcome of this study indicates that the first draft of a relational database schema can be extracted from a requirement specification by using NLP tools and techniques with minimum user intervention. Therefore, this method is a step forward in finding a solution that requires little or no user intervention.
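
As an illustration of the general idea only — the paper itself uses Stanford CoreNLP 3.3.1 from Java, whereas this hedged sketch swaps in spaCy — noun chunks can be treated as candidate tables/attributes and verbs linking a subject to an object as candidate relationships. The example specification text and the model name are assumptions.

```python
# Illustrative only: not the authors' CoreNLP/Java pipeline.
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes the small English model is installed

spec = ("A customer places one or more orders. "
        "Each order contains several products. "
        "A product has a name and a unit price.")

doc = nlp(spec)

# Candidate entities/attributes: lemmas of noun-chunk heads.
entities = {chunk.root.lemma_ for chunk in doc.noun_chunks if chunk.root.pos_ == "NOUN"}

# Candidate relationships: verbs linking a grammatical subject to an object.
relationships = []
for token in doc:
    if token.pos_ == "VERB":
        subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
        objects = [c for c in token.children if c.dep_ in ("dobj", "obj", "attr")]
        for s in subjects:
            for o in objects:
                relationships.append((s.lemma_, token.lemma_, o.lemma_))

print("candidate tables:", sorted(entities))
print("candidate relationships:", relationships)
```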

Keywords: information extraction, natural language processing, relation extraction

Procedia PDF Downloads 235
14365 Extraction of Natural Colorant from the Flowers of Flame of Forest Using Ultrasound

Authors: Sunny Arora, Meghal A. Desai

Abstract:

With the impetus towards green consumerism and the implementation of sustainable techniques, the consumption of natural products and the use of environmentally friendly techniques have gained accelerated acceptance. Butein, a natural colorant, has many medicinal properties apart from its use in the dyeing industry. Extraction of butein from the flowers of flame of forest was carried out using an ultrasonic bath. Solid loading (2-6 g), extraction time (30-50 min), volume of solvent (30-50 mL) and type of solvent (methanol, ethanol and water) were studied to maximize the yield of butein using the Taguchi method. The highest yield of butein, 4.67% (w/w), was obtained using 4 g of plant material, 40 min of extraction time and 30 mL of methanol as the solvent. The present method provided a substantial reduction in extraction time compared to the conventional method of extraction. Hence, the outcome of the present investigation could be further utilized to develop the method at a larger scale.

Keywords: butein, flowers of Flame of the Forest, Taguchi method, ultrasonic bath

Procedia PDF Downloads 443
14364 Impact of Schools' Open and Semi-Open Spaces on Student's Studying Behavior

Authors: Chaithanya Pothuganti

Abstract:

Open and semi-open spaces in educational buildings, such as corridors, mid-landings, seating spaces, lobbies, and courtyards, have traditionally been places of social communion and interaction, which helps promote knowledge, performance, activeness, and motivation in students. Factors such as land availability and the commercialization of educational facilities, especially in e-techno and smart schools, have led to closed classrooms that accommodate students but lack quality open and semi-open spaces. This insufficient attention to open-space design, which is a means of informal learning, misses an opportunity to encourage students' skill development, behavior, and learning. The core objective of this paper is to assess the level of impact of these spaces on student learning behavior and to identify suitable proportions and configurations of the spaces that shape schools. To achieve this, different types of open spaces in schools and their impact on students' performance in various existing models are analysed using case studies to draw out design principles. The study is limited to indoor open spaces such as corridors, breakout spaces and courtyards. The expected outcome of the paper is to suggest better design considerations for the development of semi-open and open spaces that function as elements of informal learning. Its focus is to provide further thinking on the design and development of open spaces in educational buildings.

Keywords: configuration of spaces and proportions, informal learning, open spaces, schools, student’s behavior

Procedia PDF Downloads 284
14363 Microwave and Ultrasound Assisted Extraction of Pectin from Mandarin and Lemon Peel: Comparisons between Sources and Methods

Authors: Pınar Karbuz, A. Seyhun Kıpcak, Mehmet B. Piskin, Emek Derun, Nurcan Tugrul

Abstract:

Pectin is a complex colloidal polysaccharide found in the cell walls of all young plants such as fruits and vegetables. It acts as a thickening, stabilizing and gelling agent in foods. Pectin was extracted from mandarin and lemon peels using ultrasound- and microwave-assisted extraction methods in order to compare the two sources and the two methods of pectin production. In this work, the effects of microwave power (360, 600 W) and irradiation time (1, 2, 3 min) on the yield of pectin extracted from mandarin and lemon peels were investigated for microwave-assisted extraction (MAE). For ultrasound-assisted extraction (UAE), the parameters were temperature (60, 75 °C) and sonication time (15, 30, 45 min), and hydrochloric acid (HCl) was used as the extracting agent for both extraction methods. The highest yields of pectin extracted from lemon peels were 8.16% (w/w) at 75 °C and 45 min by UAE and 8.58% (w/w) at 360 W and 1 min by MAE. The highest yields of pectin extracted from mandarin peels were 11.29% (w/w) at 75 °C and 45 min by UAE and 16.44% (w/w) at 600 W and 1 min by MAE. The results showed that microwave-assisted extraction gave a better yield than ultrasound-assisted extraction, and that mandarin peels contain more pectin than lemon peels. Therefore, these results suggest that MAE could be used as an efficient and rapid method for the extraction of pectin, and that mandarin peels should be preferred over lemon peels as a source for pectin production.

Keywords: mandarin peel, lemon peel, pectin, ultrasound, microwave, extraction

Procedia PDF Downloads 216
14362 A Method to Evaluate and Compare Web Information Extractors

Authors: Patricia Jiménez, Rafael Corchuelo, Hassan A. Sleiman

Abstract:

Web mining is gaining importance at an increasing pace. Currently, there are many complementary research topics under this umbrella. Their common theme is that they all focus on applying knowledge discovery techniques to data that is gathered from the Web. Sometimes, these data are relatively easy to gather, chiefly when they come from server logs. Unfortunately, there are cases in which the data to be mined are the data displayed on a web document. In such cases, it is necessary to apply a pre-processing step to first extract the information of interest from the web documents. Such pre-processing steps are performed using so-called information extractors, which are software components that are typically configured by means of rules tailored to extracting the information of interest from a web page and structuring it according to a pre-defined schema. Paramount to getting good mining results is that the technique used to extract the source information is exact, which requires evaluating and comparing the different proposals in the literature from an empirical point of view. According to Google Scholar, about 4,200 papers on information extraction have been published during the last decade. Unfortunately, they were not evaluated within a homogeneous framework, which makes it difficult to compare them empirically. In this paper, we report on an original information extraction evaluation method. Our contribution is three-fold: a) this is the first attempt to provide an evaluation method for proposals that work on semi-structured documents; the little existing work on this topic focuses on proposals that work on free text, which has little to do with extracting information from semi-structured documents; b) it provides a method that relies on statistically sound tests to support the conclusions drawn; the previous work does not provide clear guidelines or recommend statistically sound tests, but rather a survey that collects many features to take into account as well as related work; c) we provide a novel method to compute the performance measures regarding unsupervised proposals, which would otherwise require the intervention of a user to compute them by using the annotations on the evaluation sets and the information extracted. Our contributions will definitely help researchers in this area make sure that they have advanced the state of the art not only conceptually, but from an empirical point of view; they will also help practitioners make informed decisions on which proposal is the most adequate for a particular problem. This conference is a good forum to discuss our ideas so that we can spread them to help improve the evaluation of information extraction proposals and gather valuable feedback from other researchers.
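
As a concrete illustration of the measures such an evaluation ultimately rests on, the hedged sketch below scores extracted attribute-value pairs against gold annotations with precision, recall, and F1, and then compares two extractors across several sites with a statistically sound paired test. The records and scores are toy placeholders, not the paper's datasets or its actual procedure.

```python
from scipy.stats import wilcoxon

def precision_recall_f1(extracted, gold):
    """Score a set of extracted (attribute, value) pairs against gold annotations."""
    extracted, gold = set(extracted), set(gold)
    tp = len(extracted & gold)
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Toy example: one web page, gold annotations vs. extractor output.
gold = {("title", "Casio FX-82"), ("price", "12.90"), ("brand", "Casio")}
extracted = {("title", "Casio FX-82"), ("price", "12.90"), ("color", "black")}
print(precision_recall_f1(extracted, gold))   # (0.666..., 0.666..., 0.666...)

# Statistically sound comparison of two extractors over paired per-site F1 scores
# (Wilcoxon signed-rank test); the score lists are invented placeholders.
f1_extractor_a = [0.91, 0.88, 0.75, 0.93, 0.80, 0.86]
f1_extractor_b = [0.85, 0.84, 0.70, 0.95, 0.78, 0.81]
print(wilcoxon(f1_extractor_a, f1_extractor_b))
```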

Keywords: web information extractors, information extraction evaluation method, Google scholar, web

Procedia PDF Downloads 228
14361 Information Extraction for Short-Answer Question for the University of the Cordilleras

Authors: Thelma Palaoag, Melanie Basa, Jezreel Mark Panilo

Abstract:

Checking short-answer questions and essays, whether on paper or in electronic form, is a tiring and tedious task for teachers. Evaluating a student's output requires knowledge across a wide array of domains, and scoring the work is often a critical task. Several attempts have been made in the past few years to create automated writing assessment software, but they have received negative feedback from teachers and students alike due to unreliable scoring, lack of feedback, and other shortcomings. This study aims to create an application that can check short-answer questions by incorporating information extraction. Information extraction is a subfield of Natural Language Processing (NLP) in which a chunk of text (technically known as unstructured text) is broken down to gather necessary bits of data and/or keywords (structured text) to be further analyzed or utilized by query tools. The proposed system extracts keywords or phrases from an individual's answer and matches them against a corpus of words defined by the instructor, which serves as the basis for evaluating the answer. The proposed system also enables the teacher to provide feedback and re-evaluate the student's output for writing elements that the computer cannot fully evaluate, such as creativity and logic. Teachers can formulate, design, and check short-answer questions efficiently by defining keywords or phrases as parameters and assigning them weights for checking answers. With the proposed system, the teacher's time spent checking and evaluating students' output is reduced, making the teacher more productive and the task easier.
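
A hedged sketch of the kind of weighted keyword/phrase matching described here: the rubric, weights, and threshold below are hypothetical placeholders, not the actual application's scoring rules.

```python
import re

def score_answer(answer, keyword_weights, threshold=0.6):
    """Score a short answer against instructor-defined keywords/phrases.
    keyword_weights maps each keyword or phrase to its weight."""
    text = answer.lower()
    earned = sum(weight for kw, weight in keyword_weights.items()
                 if re.search(r"\b" + re.escape(kw.lower()) + r"\b", text))
    total = sum(keyword_weights.values())
    score = earned / total if total else 0.0
    return score, score >= threshold

# Hypothetical rubric for "What is information extraction?"
rubric = {"unstructured text": 2.0, "structured": 2.0, "keywords": 1.0, "query": 1.0}
answer = "It breaks unstructured text into structured data such as keywords."
print(score_answer(answer, rubric))   # (0.833..., True)
```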

Keywords: information extraction, short-answer question, natural language processing, application

Procedia PDF Downloads 407
14360 A U-Net Based Architecture for Fast and Accurate Diagram Extraction

Authors: Revoti Prasad Bora, Saurabh Yadav, Nikita Katyal

Abstract:

In the context of educational data mining, the use case of extracting information from images containing both text and diagrams is of high importance. Hence, document analysis requires extracting the diagrams from such images and processing the text and diagrams separately. To the authors' best knowledge, none of the many approaches for extracting tables, figures, etc., satisfies the need for real-time processing with high accuracy, as required in multiple applications. In the education domain, diagrams can have varied characteristics, viz. line-based content such as geometric diagrams, chemical bonds, mathematical formulas, etc. There are two broad categories of approaches that try to solve similar problems: traditional computer vision based approaches and deep learning approaches. The traditional computer vision based approaches mainly leverage connected components and distance-transform based processing and hence perform well only in very limited scenarios. The existing deep learning approaches leverage either YOLO or faster-RCNN architectures, and they suffer from a performance-accuracy tradeoff. This paper proposes a U-Net based architecture that formulates diagram extraction as a segmentation problem. The proposed method provides similar accuracy with a much faster extraction time compared to the mentioned state-of-the-art approaches. Further, the segmentation mask in this approach allows the extraction of diagrams of irregular shapes.
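
A compact, hedged PyTorch sketch of the kind of U-Net-style encoder-decoder this formulation implies: convolutional encoder, bottleneck, decoder with skip connections, and a per-pixel sigmoid mask. The depth, channel widths, input size, and post-processing are assumptions for illustration, not the authors' exact architecture or training setup.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Small U-Net-style network: the output is a per-pixel diagram probability."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return torch.sigmoid(self.head(d1))

# Usage: a grayscale page crop in, a soft mask out; thresholding the mask and
# taking connected components would yield diagram regions of arbitrary shape.
page = torch.rand(1, 1, 256, 256)
mask = TinyUNet()(page)
print(mask.shape)   # torch.Size([1, 1, 256, 256])
```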

Keywords: computer vision, deep-learning, educational data mining, faster-RCNN, figure extraction, image segmentation, real-time document analysis, text extraction, U-Net, YOLO

Procedia PDF Downloads 105
14359 Physical Parameters Influencing the Yield of Nigella Sativa Oil Extracted by Hydraulic Pressing

Authors: Hadjadj Naima, K. Mahdi, D. Belhachat, F. S. Ait Chaouche, A. Ferradji

Abstract:

The yield of Nigella sativa oil extracted by hydraulic pressing is influenced by the pressure, temperature, and particle size. The optimization of oil extraction is investigated. The extraction rate from whole seeds is very low, so crushing the seeds is necessary to facilitate extraction. The rate increases with rising temperature and pressure and with decreasing particle size. The best yield (66%) is obtained for a particle size below 1 mm, a temperature of 50 °C, and a pressure of 120 bar.

Keywords: oil, Nigella sativa, extraction, optimization, temperature, pressure

Procedia PDF Downloads 452
14358 A Unique Exact Approach to Handle a Time-Delayed State-Space System: The Extraction of Juice Process

Authors: Mohamed T. Faheem Saidahmed, Ahmed M. Attiya Ibrahim, Basma GH. Elkilany

Abstract:

This paper discusses the application of a Time Delay Control (TDC) compensation technique to the juice extraction process in a sugar mill. The objective is to improve the control performance of the process and increase extraction efficiency. The paper presents the mathematical model of the juice extraction process and the design of the TDC compensation controller. Simulation results show that the TDC compensation technique can effectively suppress the time-delay effect in the process and improve control performance. The extraction efficiency is also significantly increased with the application of the TDC compensation technique. The proposed approach, implemented and simulated in MATLAB, provides a practical solution for improving the juice extraction process in sugar mills.
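
Because the keywords mention a Smith predictor, the sketch below simulates that classic dead-time compensation scheme on a toy first-order plant with input delay. The plant parameters, sample time, and PI gains are invented for illustration and are unrelated to the paper's sugar-mill model (which was implemented in MATLAB); Python is used here purely as a stand-in.

```python
import numpy as np

# Hypothetical first-order plant with dead time, PI control, Smith predictor.
dt, T = 1.0, 400                   # sample time [s], simulation horizon [s]
K, tau, theta = 2.0, 60.0, 30.0    # assumed plant gain, time constant, delay [s]
d = int(theta / dt)                # delay in samples
Kp, Ki = 0.4, 0.01                 # assumed PI gains

n = int(T / dt)
r = np.ones(n)                     # unit step setpoint
y = np.zeros(n)                    # true (delayed) plant output
ym = np.zeros(n)                   # internal model output without delay
u_hist = np.zeros(n)               # control history, needed for the delayed input
integ = 0.0

for k in range(1, n):
    # Smith predictor feedback: measurement plus (undelayed - delayed) model output.
    ym_delayed = ym[k - d] if k >= d else 0.0
    feedback = y[k - 1] + (ym[k - 1] - ym_delayed)
    e = r[k] - feedback
    integ += e * dt
    u = Kp * e + Ki * integ
    u_hist[k] = u
    # Internal first-order model without delay (Euler step).
    ym[k] = ym[k - 1] + dt / tau * (-ym[k - 1] + K * u)
    # True plant: same dynamics driven by the input delayed by theta.
    u_delayed = u_hist[k - d] if k >= d else 0.0
    y[k] = y[k - 1] + dt / tau * (-y[k - 1] + K * u_delayed)

print(f"final output: {y[-1]:.3f} (setpoint 1.0)")
```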

Keywords: time delay control (TDC), exact and unique state space model, delay compensation, Smith predictor.

Procedia PDF Downloads 56
14357 Video Text Information Detection and Localization in Lecture Videos Using Moments

Authors: Belkacem Soundes, Guezouli Larbi

Abstract:

This paper presents a robust and accurate method for text detection and localization in lecture videos. Frame regions are classified into text or background based on visual feature analysis. However, lecture videos show significant degradation, mainly related to acquisition conditions, camera motion, and environmental changes, resulting in low-quality video and thus affecting the efficiency of feature extraction and description. Moreover, traditional text detection methods cannot be directly applied to lecture videos. Therefore, robust feature extraction methods dedicated to this specific video genre are required for robust and accurate text detection and extraction. The method consists of a three-step process: slide region detection and segmentation, feature extraction, and non-text filtering. For robust and effective feature extraction, moment functions are used. Two distinct types of moments are used: orthogonal and non-orthogonal. For the orthogonal case, both Zernike and pseudo-Zernike moments are used, whereas for the non-orthogonal case, Hu moments are used. Expressivity and description efficiency are reported and discussed. The proposed approach shows that, in general, orthogonal moments give higher accuracy than non-orthogonal ones, and pseudo-Zernike moments are more effective than Zernike moments with better computation time.
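
Of the moment families mentioned, Hu moments are the easiest to reproduce; the hedged OpenCV sketch below computes them for connected components of a synthetic frame as features for a downstream text/non-text filter. The toy frame, thresholds, and the omission of Zernike/pseudo-Zernike computation are simplifications, not the authors' pipeline.

```python
import cv2
import numpy as np

def hu_features(region_mask):
    """Seven Hu invariant moments of a binary candidate region, log-scaled
    so their dynamic range is manageable for a downstream classifier."""
    m = cv2.moments(region_mask, binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Toy frame: white text-like strokes on black; each connected component is a candidate.
frame = np.zeros((120, 320), dtype=np.uint8)
cv2.putText(frame, "SLIDE TITLE", (10, 60), cv2.FONT_HERSHEY_SIMPLEX, 1.2, 255, 2)
n, labels = cv2.connectedComponents((frame > 0).astype(np.uint8))
for i in range(1, n):
    feats = hu_features((labels == i).astype(np.uint8))
    # A trained classifier (or simple thresholds) would decide text vs. non-text here.
    print(f"component {i}: first three Hu features {feats[:3]}")
```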

Keywords: text detection, text localization, lecture videos, pseudo zernike moments

Procedia PDF Downloads 125
14356 Open Educational Resources' Metadata: Towards the First Star to Quality of Open Educational Resources

Authors: Audrey Romero-Pelaez, Juan Carlos Morocho-Yunga

Abstract:

The increasing amount of open educational resources (OER) published on the web for consumption in teaching and learning environments also generates a growing need to ensure the quality of these resources. The low discoverability of OER is one of the most significant obstacles to their reuse, and as a consequence, high-quality educational resources can go unnoticed. Metadata enables the discovery of resources on the web. The purpose of this study is to lay the foundations for open educational resources to achieve their first quality star within the Quality4OER Framework. In this study, we evaluate the quality of OER metadata and establish the main guidelines on metadata quality in this context.

Keywords: open educational resources, OER quality, quality metadata

Procedia PDF Downloads 210
14355 Tool for Metadata Extraction and Content Packaging as Endorsed in OAIS Framework

Authors: Payal Abichandani, Rishi Prakash, Paras Nath Barwal, B. K. Murthy

Abstract:

Information generated from various computerization processes is a potentially rich source of knowledge for its designated community. Passing this information from generation to generation without modifying its meaning is a challenging activity. To preserve and archive data for future generations, it is essential to prove the authenticity of the data. This can be achieved by extracting metadata from the data, which can prove authenticity and create trust in the archived data. A subsequent challenge is technology obsolescence. Metadata extraction and standardization can be used effectively to tackle this problem. Metadata can broadly be categorized at two levels, i.e., the technical and the domain level. Technical metadata provides information that can be used to understand and interpret the data record, but this level of metadata alone is not sufficient to create trustworthiness. We have developed a tool that extracts and standardizes both technical and domain-level metadata. This paper describes the different features of the tool and how we developed it.
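
To make the idea of technical metadata concrete, here is a minimal, hedged Python sketch that extracts basic file-level technical metadata and serializes it to XML. The element names and field choices are illustrative assumptions, not the tool's actual schema or an OAIS/PDI-conformant layout.

```python
import hashlib
import mimetypes
import os
import sys
from datetime import datetime, timezone
import xml.etree.ElementTree as ET

def technical_metadata(path):
    """Extract basic preservation-oriented technical metadata for one file."""
    stat = os.stat(path)
    with open(path, "rb") as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()
    return {
        "filename": os.path.basename(path),
        "size_bytes": str(stat.st_size),
        "mime_type": mimetypes.guess_type(path)[0] or "application/octet-stream",
        "sha256": digest,
        "last_modified": datetime.fromtimestamp(stat.st_mtime, timezone.utc).isoformat(),
    }

def to_xml(meta):
    root = ET.Element("technicalMetadata")   # hypothetical element names
    for key, value in meta.items():
        ET.SubElement(root, key).text = value
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    print(to_xml(technical_metadata(sys.argv[1])))
```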

Keywords: digital preservation, metadata, OAIS, PDI, XML

Procedia PDF Downloads 370
14354 Recovery of Essential Oil from Zingiber Officinale Var. Bentong Using Ultrasound Assisted-Supercritical Carbon Dioxide Extraction

Authors: Norhidayah Suleiman, Afza Zulfaka

Abstract:

Zingiber officinale var. Bentong has been identified as a source of high-added-value compounds, specifically gingerol-related compounds. Extraction of such high-value compounds using conventional methods results in low yields and is time-consuming. Hence, the motivation for this work is to investigate the effect of the extraction technique on the essential oil from Zingiber officinale var. Bentong rhizome for commercialization purposes in several industries, namely functional food, pharmaceutical, and cosmeceutical. The investigation begins with an ultrasound-assisted pre-treatment to enhance the recovery of essential oil, conducted at a fixed ultrasound frequency (20 kHz) for various times (10, 20, and 40 min). Extraction using supercritical carbon dioxide (scCO2) was then carried out at a fixed temperature (50 °C) and pressure (30 MPa). scCO2 extraction seems to be a promising, sustainable green method for the extraction of essential oil due to the benefits that CO2 possesses. The results are expected to demonstrate that ultrasound-assisted scCO2 extraction produces a higher yield of essential oil than scCO2 extraction alone. This research will provide important insights for its application in food supplements and phytochemical preparations.

Keywords: essential oil, scCO2, ultrasound assisted, Zingiber officinale Var. Bentong

Procedia PDF Downloads 109
14353 A Conglomerate of Multiple Optical Character Recognition Table Detection and Extraction

Authors: Smita Pallavi, Raj Ratn Pranesh, Sumit Kumar

Abstract:

Representing information as tables is a compact and concise method that eases searching, indexing, and storage requirements. Extracting and cloning tables from parsable documents is easier and widely used; however, industry still faces challenges in detecting and extracting tables from OCR (Optical Character Recognition) documents or images. This paper proposes an algorithm that detects and extracts multiple tables from an OCR document. The algorithm uses a combination of image processing techniques, text recognition, and procedural coding to identify distinct tables in the same image and map the text to the corresponding cell in a dataframe, which can be stored as comma-separated values, a database, an Excel file, or multiple other usable formats.
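
One common way to realize the combination the abstract describes — morphological extraction of ruling lines with OpenCV, cell detection, and per-cell OCR mapped into a dataframe — is sketched below. This is an illustrative reconstruction under stated assumptions (ruled tables, a local Tesseract install for pytesseract, hand-picked kernel sizes and thresholds), not the authors' algorithm.

```python
import cv2
import numpy as np
import pandas as pd
import pytesseract   # assumes a Tesseract installation is available

def extract_table(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    binary = cv2.adaptiveThreshold(~gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 15, -2)
    # Morphological extraction of horizontal and vertical ruling lines.
    h_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (gray.shape[1] // 30, 1))
    v_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, gray.shape[0] // 30))
    horizontal = cv2.dilate(cv2.erode(binary, h_kernel), h_kernel)
    vertical = cv2.dilate(cv2.erode(binary, v_kernel), v_kernel)
    grid = cv2.add(horizontal, vertical)

    # Each enclosed region between ruling lines is treated as a candidate cell.
    contours, _ = cv2.findContours(~grid, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    cells = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if 20 < w < gray.shape[1] and 10 < h < gray.shape[0] // 2:
            text = pytesseract.image_to_string(gray[y:y + h, x:x + w],
                                               config="--psm 7").strip()
            cells.append((y, x, text))

    # Group cells into rows by vertical position and build a dataframe.
    cells.sort()
    rows, current, last_y = [], [], None
    for y, x, text in cells:
        if last_y is not None and abs(y - last_y) > 10:
            rows.append([t for _, t in sorted(current)])
            current = []
        current.append((x, text))
        last_y = y
    if current:
        rows.append([t for _, t in sorted(current)])
    return pd.DataFrame(rows)

# df = extract_table("scanned_invoice.png"); df.to_csv("table.csv", index=False)
```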

Keywords: table extraction, optical character recognition, image processing, text extraction, morphological transformation

Procedia PDF Downloads 121
14352 Overcoming Open Innovation Challenges with Technology Intelligence: Case of Medium-Sized Enterprises

Authors: Akhatjon Nasullaev, Raffaella Manzini, Vincent Frigant

Abstract:

Prior research has largely discussed open innovation practices in both large firms and small and medium-sized enterprises (SMEs). Open innovation compels firms to observe and analyze the external environment in order to tap new opportunities for inbound and/or outbound flows of knowledge, ideas, and work-in-progress innovations. As SMEs differ from their larger counterparts, they face several limitations in utilizing open innovation activities, such as resource scarcity, unstructured innovation processes, and underdeveloped innovation capabilities. Technology intelligence – the process of systematic acquisition, assessment, and communication of information about technological trends, opportunities, and threats – can mitigate these limitations by enabling SMEs to identify technological and market opportunities in a timely manner, make sound decisions, and realize a 'first-mover advantage'. Several studies have highlighted firm-level barriers to the successful implementation of open innovation practices in SMEs, namely challenges in partner selection, intellectual property rights and trust, and absorptive capacity. This paper investigates how technology intelligence can help SMEs overcome the barriers to effective open innovation. For this, we conduct case studies of four Estonian life-sciences SMEs. Our findings reveal that technology intelligence can support SMEs not only in inbound open innovation (taking into account the inclination of most firms toward the technology-exploration aspects of open innovation) but also in outbound open innovation. Furthermore, the results of this study indicate that, although SMEs conduct technology intelligence in an unsystematic and uncoordinated manner, it helped them increase their innovative performance.

Keywords: technology intelligence, open innovation, SMEs, life sciences

Procedia PDF Downloads 150
14351 Graph-Based Semantical Extractive Text Analysis

Authors: Mina Samizadeh

Abstract:

In the past few decades, there has been an explosion in the amount of available data produced from various sources on different topics. The availability of this enormous data necessitates effective computational tools to explore it, which has led to an intense and growing interest in the research community in developing computational methods for processing this text data. One line of study focuses on condensing the text so that we can obtain a higher level of understanding in a shorter time. The two important tasks here are keyword extraction and text summarization. In keyword extraction, we are interested in finding the most important words of a text, which familiarizes us with its general topic. In text summarization, we are interested in producing a short text that includes the important information of the document. The TextRank algorithm, an unsupervised learning method that is an extension of PageRank (the base algorithm of the Google search engine for retrieving and ranking pages), has shown its efficacy in large-scale text mining, especially for text summarization and keyword extraction. This algorithm can automatically extract the important parts of a text (keywords or sentences) and return them as a result. However, it neglects the semantic similarity between the different parts. In this work, we improved the results of the TextRank algorithm by incorporating the semantic similarity between parts of the text. Aside from keyword extraction and text summarization, we develop a topic clustering algorithm based on our framework, which can be used individually or as part of generating the summary to overcome coverage problems.
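
A hedged sketch of the underlying idea: sentences become graph nodes, edges carry a similarity weight, and PageRank scores select the summary. TF-IDF cosine similarity is used here merely as a stand-in for the semantic similarity measure the work incorporates, and the example sentences are placeholders.

```python
import networkx as nx
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarize(sentences, top_k=2):
    """TextRank-style extractive summary: sentences are nodes, edges are weighted
    by a similarity score (here TF-IDF cosine, standing in for semantic similarity)."""
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)
    np.fill_diagonal(sim, 0.0)
    graph = nx.from_numpy_array(sim)               # weighted undirected sentence graph
    scores = nx.pagerank(graph, weight="weight")
    ranked = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return [sentences[i] for i in sorted(ranked)]  # keep original sentence order

sentences = [
    "TextRank builds a graph over sentences and ranks them with PageRank.",
    "Edges can be weighted by semantic similarity instead of plain word overlap.",
    "The cat sat quietly on the warm windowsill.",
    "Highly ranked sentences form the extractive summary of the document.",
]
print(summarize(sentences))
```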

Keywords: keyword extraction, n-gram extraction, text summarization, topic clustering, semantic analysis

Procedia PDF Downloads 46
14350 Extraction of Strontium Ions through Ligand Assisted Ionic Liquids

Authors: Pradeep Kumar, Abhishek Kumar Chandra, Ashok Khanna

Abstract:

The extraction of strontium by the crown ether DCH18C6 has been investigated in the ionic liquid (IL) [BMIM][TF2N], giving higher extraction (~98%) and higher distribution ratios than other organic solvents (dodecane, hexane, and isodecyl alcohol + dodecane). The distribution ratio of Sr in the IL at 0.15 M DCH18C6 indicates enhancements of 20000, 2000, and 500 times over dodecane, hexane, and 5% isodecyl alcohol + 95% dodecane, respectively, at 0.01 M aqueous acidity. In the presence of the IL, Sr extraction decreases with increasing HNO3 concentration in the aqueous phase, whereas the opposite trend was observed with the organic solvents. Extraction of Sr initially increases with increasing DCH18C6 concentration in the IL, finally reaching an asymptotic constant.

Keywords: distribution ratio, ionic liquid, ligand, organic solvent, stripping

Procedia PDF Downloads 417
14349 Response Surface Methodology for the Optimization of Sugar Extraction from Phoenix dactylifera L.

Authors: Lila Boulekbache-Makhlouf, Kahina Djaoud, Myriam Tazarourte, Samir Hadjal, Khodir Madani

Abstract:

In Algeria, large quantities of secondary date varieties (Phoenix dactylifera L.) are generated in each campaign; their chemical composition is similar to that of commercial dates. The present work aims to valorize a common secondary date variety (Degla-Beida), which is often poorly exploited. In this context, we prepared syrup from this variety and evaluated the effect of conventional extraction (CE, water-bath extraction (WBE)) and alternative extraction methods (microwave-assisted extraction (MAE) and ultrasound-assisted extraction (UAE)) on its total sugar content (TSC), using response surface methodology (RSM). The analysis of individual sugars was then performed by high-performance liquid chromatography (HPLC). The maximum predicted TSC recoveries under the optimized conditions for MAE, UAE and CE were 233.248 ± 3.594 g/L, 202.889 ± 5.797 g/L, and 233.535 ± 5.412 g/L, respectively, which were close to the experimental values: 233.796 ± 1.898 g/L, 202.037 ± 3.401 g/L and 234.380 ± 2.425 g/L. HPLC analysis revealed high similarity in the sugar composition of the date juices obtained by MAE (60.11% sucrose, 16.64% glucose and 23.25% fructose) and CE (50.78% sucrose, 20.67% glucose and 28.55% fructose), although a large difference was detected for that obtained by UAE (0.00% sucrose, 46.94% glucose and 53.06% fructose). Microwave-assisted extraction was the best method for preparing date syrup with optimal recovery of total sugar content, whereas ultrasound-assisted extraction was the best for preparing date syrup with a high content of reducing sugars.

Keywords: dates, extraction, RSM, sugars, syrup

Procedia PDF Downloads 131
14348 MapReduce Algorithm for Geometric and Topological Information Extraction from 3D CAD Models

Authors: Ahmed Fradi

Abstract:

In a digital world of perpetual evolution and acceleration, where data are increasingly voluminous, rich, and varied, the new software solutions that emerged with the Big Data phenomenon offer companies new opportunities not only to optimize their business and evolve their production models, but also to reorganize themselves to increase competitiveness and identify new strategic axes. Industrial design and manufacturing companies, like others, face these challenges; data represent a major asset, provided they know how to capture, refine, combine, and analyze them. The objective of our paper is to propose a solution allowing geometric and topological information extraction from databases of 3D CAD models (specifically STEP files), with a specific algorithm based on the MapReduce programming paradigm. Our proposal is the first step of our future approach to 3D CAD object retrieval.
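
To make the MapReduce formulation concrete, here is a hedged pure-Python sketch that counts geometric entity types in the DATA section of a STEP file, with a map step per chunk of lines and a reduce step merging partial counts. The simplified line-level parsing and the local multiprocessing pool stand in for a real distributed runtime; this is not the paper's implementation.

```python
import re
from collections import Counter
from functools import reduce
from multiprocessing import Pool

# A STEP data line typically looks like: "#12 = CARTESIAN_POINT('', (0., 0., 0.));"
ENTITY = re.compile(r"#\d+\s*=\s*([A-Z_0-9]+)\s*\(")

def map_chunk(lines):
    """Map step: emit entity-type counts for one chunk of STEP lines."""
    return Counter(m.group(1) for line in lines if (m := ENTITY.search(line)))

def reduce_counts(a, b):
    """Reduce step: merge partial counts from two chunks."""
    a.update(b)
    return a

def count_entities(step_path, workers=4, chunk_size=10000):
    with open(step_path, "r", errors="ignore") as fh:
        lines = fh.readlines()
    chunks = [lines[i:i + chunk_size] for i in range(0, len(lines), chunk_size)]
    with Pool(workers) as pool:
        partial = pool.map(map_chunk, chunks)
    return reduce(reduce_counts, partial, Counter())

# counts = count_entities("bracket.step")        # hypothetical file name
# print(counts.most_common(10))                  # e.g. CARTESIAN_POINT, EDGE_CURVE ...
```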

Keywords: Big Data, MapReduce, 3D object retrieval, CAD, STEP format

Procedia PDF Downloads 517
14347 The Effect of Ionic Strength on the Extraction of Copper(II) from Perchlorate Solutions by Capric Acid in Chloroform

Authors: A. Bara, D. Barkat

Abstract:

The liquid-liquid extraction of copper(II) from aqueous solution by capric acid (HL) in chloroform at 25 °C has been studied. The effect of the ionic strength of the aqueous phase was examined at ionic strengths of 1, 0.5, 0.25, 0.125 and 0.1 M, showing that the extraction of copper(II) increases with increasing ionic strength. Cu(II) is extracted as the complex CuL2(ClO4).

Keywords: liquid-liquid extraction, ionic strength, copper (II), capric acid

Procedia PDF Downloads 509
14346 The Effect of Different Extraction Techniques on the Yield and the Composition of Oil (Laurus Nobilis L.) Fruits Widespread in Syria

Authors: Khaled Mawardi

Abstract:

Bay laurel (Laurus nobilis L.) is an evergreen of the genus Laurus in the family Lauraceae. It is native to the southern Mediterranean, is widespread in Syria, and has enormous industrial applications; for instance, its oils are used as platform chemicals in food, pharmaceutical and cosmetic applications. Herein, we report an efficient extraction of bay laurel oil from bay laurel fruits via a comparative investigation of the conventional boiling-water extraction technique and microwave-assisted extraction (MAE) by microwave heating at atmospheric pressure. In order to optimize the extraction efficiency, we investigated several extraction parameters, such as extraction time and microwave power. In addition, to demonstrate the feasibility of the method, oil obtained under optimal MAE conditions was compared quantitatively and qualitatively with that obtained by the conventional method. After 1 h of microwave-assisted extraction (600 W), an oil yield of 9.8% was obtained, with a lauric acid content of 22.7%. In comparison, an extended extraction of up to 4 h was required to obtain a 9.7% oil yield with 21.2% lauric acid content. The microwave power also affects the fatty acid profile and the quality parameters of laurel oil. The fatty acid profile changed with power: the lauric acid content increased from 22.7% at 600 W to 30.5% at 1200 W, accompanied by a decrease in oleic acid content from 32.8% at 600 W to 28.3% at 1200 W and in linoleic acid content from 22.3% at 600 W to 20.6% at 1200 W. In addition, we observed a decrease in oil yield from 9.8% at 600 W to 5.1% at 1200 W. In summary, the overall results indicated that laurel fruit oil can be successfully extracted using MAE with a shorter extraction time and lower energy than the fixed oil obtained by conventional extraction processes. However, microwave heating exerted more aggressive effects on the oil, inducing changes in its fatty acid profile; the most affected fraction was the unsaturated fatty acids, which have a higher susceptibility to oxidation.

Keywords: microwaves, extraction, Laurel oil, solvent-free

Procedia PDF Downloads 46
14345 Effect of Ultrasound on Carotenoids Extraction from Pepper and Process Optimization Using Response Surface Methodology (RSM)

Authors: Elham Mahdian, Reza Karazhian, Rahele Dehghan Tanha

Abstract:

Peppers (Capsicum annuum L.), which belong to the family Solanaceae, are known for their versatility as a vegetable crop and are consumed both as fresh vegetables and dehydrated as spices. Pepper is considered an excellent source of bioactive nutrients; ascorbic acid, carotenoids and phenolic compounds are its main antioxidant constituents. Ultrasound-assisted extraction is an inexpensive, simple and efficient alternative to conventional extraction techniques. Its mechanisms of action are attributed to cavitation, mechanical forces and thermal effects, which disrupt cell walls, reduce particle size, and enhance mass transfer across cell membranes. In this study, response surface methodology was used to optimize the experimental conditions for ultrasound-assisted extraction of carotenoid compounds from chili peppers. The variables included extraction temperature at three levels (30, 40 and 50 °C), extraction time at three levels (10, 25 and 40 minutes) and power at three levels (30, 60 and 90%). It was observed that ultrasound waves applied at a temperature of 49 °C, a time of 10 minutes and a power of 89% resulted in the highest carotenoid contents (lycopene and β-carotene), while the lowest values were recorded in the control. Thus, the results showed that ultrasound waves have a strong impact on the extraction of carotenoids from pepper.

Keywords: carotenoids, optimization, pepper, response surface methodology

Procedia PDF Downloads 439
14344 Causal Relation Identification Using Convolutional Neural Networks and Knowledge Based Features

Authors: Tharini N. de Silva, Xiao Zhibo, Zhao Rui, Mao Kezhi

Abstract:

Causal relation identification is a crucial task in information extraction and knowledge discovery. In this work, we present two approaches to causal relation identification. The first is a classification model trained on a set of knowledge-based features. The second is a deep learning based approach that trains a model using convolutional neural networks to classify causal relations. We experiment with several different convolutional neural network (CNN) models based on previous work on relation extraction as well as our own research. Our models are able to identify both explicit and implicit causal relations as well as the direction of the causal relation. The results of our experiments show a higher accuracy than previously achieved for causal relation identification tasks.
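
A minimal, hedged PyTorch sketch of a convolutional sentence classifier of the kind used for the second approach: embeddings, parallel 1D convolutions, max pooling, and a softmax over relation labels. The vocabulary size, label set, and hyperparameters are placeholders, and the knowledge-based features of the first approach are not shown.

```python
import torch
import torch.nn as nn

class RelationCNN(nn.Module):
    """Embedding -> 1D convolutions over the token sequence -> max pooling -> logits,
    classifying a sentence (with two marked entities) into causal-relation labels."""
    def __init__(self, vocab_size, num_classes=3, emb_dim=100, n_filters=64,
                 kernel_sizes=(2, 3, 4)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k, padding=k // 2) for k in kernel_sizes])
        self.classifier = nn.Linear(n_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):                    # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)    # (batch, emb_dim, seq_len)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))   # class logits

# Toy usage; assumed labels: 0 = no relation, 1 = cause->effect, 2 = effect->cause.
model = RelationCNN(vocab_size=5000)
batch = torch.randint(1, 5000, (8, 40))              # 8 padded sentences of 40 tokens
logits = model(batch)
print(logits.shape)                                  # torch.Size([8, 3])
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 3, (8,)))
loss.backward()
```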

Keywords: causal relation extraction, relation extraction, convolutional neural network, text representation

Procedia PDF Downloads 689
14343 Research and Development of Net-Centric Information Sharing Platform

Authors: Wang Xiaoqing, Fang Youyuan, Zheng Yanxing, Gu Tianyang, Zong Jianjian, Tong Jinrong

Abstract:

Compared with a traditional distributed environment, the net-centric environment poses more demanding challenges for information sharing, being characterized by ultra-large scale, strong distribution, dynamism, autonomy, heterogeneity, and redundancy. This paper realizes an information sharing model and a series of core services, which together provide an open, flexible and scalable information sharing platform.

Keywords: net-centric environment, information sharing, metadata registry and catalog, cross-domain data access control

Procedia PDF Downloads 541
14342 Oil Extraction from Sunflower Seed Using Green Solvent 2-Methyltetrahydrofuran and Isoamyl Alcohol

Authors: Sergio S. De Jesus, Aline Santana, Rubens Maciel Filho

Abstract:

The objective of this study was to identify a green solvent system with extraction efficiency similar to that of the traditional Bligh and Dyer method. Sunflower seed oil was extracted using the Bligh and Dyer method with 2-methyltetrahydrofuran and isoamyl alcohol at ratios of 1:1, 2:1, 3:1, 1:2 and 3:1. At the same time, comparative experiments were performed with chloroform and methanol at ratios of 1:1, 2:1, 3:1, 1:2 and 3:1. The comparison study was done using five replicates (n = 5). Statistical analysis was performed using Microsoft Office Excel (Microsoft, USA) to determine means, and Tukey's Honestly Significant Difference test was used for comparison between treatments (α = 0.05). The results showed that the classic method with methanol and chloroform gave oil extraction yields of 31-44% (w/w), while the green solvents gave yields of 36-45% (w/w). Of the two extraction systems, 2-methyltetrahydrofuran and isoamyl alcohol at a ratio of 2:1 provided the best result (45% w/w), while the classic method using chloroform and methanol at a ratio of 3:1 gave an oil yield of 44% (w/w). It was concluded that the proposed extraction method using 2-methyltetrahydrofuran and isoamyl alcohol achieves the same efficiency level as chloroform and methanol.

Keywords: extraction, green solvent, lipids, sugarcane

Procedia PDF Downloads 351