Search results for: small text extraction

7237 Extraction and Analysis of Anthocyanins Contents from Different Stage Flowers of the Orchids Dendrobium Hybrid cv. Ear-Sakul

Authors: Orose Rugchati, Khumthong Mahawongwiriya

Abstract:

Dendrobium hybrid cv. Ear-Sakul has become one of the important commercial commodities of the Thai agricultural industry worldwide, both as potted plants and as cut flowers, owing to the attractive color of its flower petals. Anthocyanins are the main flower pigments and are responsible for the natural display of petal colors. These pigments play an important role in attracting animal pollinators and in the classification and grading of these orchids. Dendrobium hybrid cv. Ear-Sakul flowers were collected from a local farm at different developmental stages (F1, F2-F5, and F6). Anthocyanin pigments were extracted from the fresh flowers by solvent extraction (MeOH-TFA 99.5:0.5 v/v at 4 °C) and purified with ethyl acetate. The main anthocyanin components are cyanidin, pelargonidin, and delphinidin. Purified anthocyanin contents were analyzed by UV-Visible spectroscopy at λmax 535, 520, and 546 nm, respectively, and expressed as monomeric anthocyanin pigment (mg/L). The anthocyanin contents of all samples were compared with the standard pigments cyanidin, pelargonidin, and delphinidin. This simple extraction and analysis of anthocyanin content at different flower stages showed that the monomeric anthocyanin pigment contents for stages F1, F2-F5, and F6 were: cyanidin-3-glucoside (mg/L) 0.85±0.08, 24.22±0.12, and 62.12±0.6; pelargonidin-3,5-di-glucoside (mg/L) 10.37±0.12, 31.06±0.8, and 81.58±0.5; delphinidin (mg/L) 6.34±0.17, 18.98±0.56, and 49.87±0.7; and the appearance of the extracted pure anthocyanins in L(a, b) was 2.71(1.38, -0.48), 1.06(0.39, -0.66), and 2.64(2.71, -3.61), respectively. Dendrobium hybrid cv. Ear-Sakul could therefore be used as a source of anthocyanins by simple solvent extraction, with flower stage serving as a guideline for predicting the amounts of the main anthocyanin components cyanidin, pelargonidin, and delphinidin, which could be applied and developed, in both quantity and quality, for the food, pharmaceutical, and cosmetic industries.
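
The abstract reports contents as monomeric anthocyanin pigment (mg/L) derived from UV-Vis absorbance; a minimal sketch of the standard Beer-Lambert conversion is given below. The molar masses and molar absorptivity values are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: convert UV-Vis absorbance at lambda_max to monomeric
# anthocyanin pigment (mg/L) via the Beer-Lambert relation.
# MW (g/mol) and molar absorptivity epsilon (L/(mol*cm)) below are
# illustrative literature-style values, NOT values from the paper.

PIGMENTS = {
    # name: (lambda_max nm, MW g/mol, epsilon L/(mol*cm))  -- assumed values
    "cyanidin-3-glucoside":         (535, 449.2, 26900),
    "pelargonidin-3,5-diglucoside": (520, 595.5, 27300),
    "delphinidin":                  (546, 303.2, 27000),
}

def monomeric_anthocyanin_mg_per_l(absorbance, pigment, dilution_factor=1.0,
                                   path_length_cm=1.0):
    """Beer-Lambert conversion: c = A * MW * DF * 1000 / (epsilon * l)."""
    _, mw, eps = PIGMENTS[pigment]
    return absorbance * mw * dilution_factor * 1000.0 / (eps * path_length_cm)

if __name__ == "__main__":
    # e.g. an F6-stage extract read at 535 nm with a 10x dilution
    print(monomeric_anthocyanin_mg_per_l(0.372, "cyanidin-3-glucoside",
                                         dilution_factor=10))
```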

Keywords: analysis, anthocyanins contents, different stage flowers, Dendrobium Hybrid cv. Ear-Sakul

Procedia PDF Downloads 132
7236 How Is a Machine-Translated Literary Text Organized in Coherence? An Analysis Based upon Theme-Rheme Structure

Authors: Jiang Niu, Yue Jiang

Abstract:

With the ultimate goal of automatically generating high-quality translated texts, machine translation has made tremendous improvements. However, its translations of literary works are still plagued with problems in coherence, especially for translation between distant language pairs. One cause of these problems is probably the lack of linguistic knowledge incorporated into the training of machine translation systems. In order to enable readers to better understand the coherence problems of machine translation, to seek out the potential knowledge to be incorporated, and thus to improve the quality of machine translation products, this study applies Theme-Rheme structure to examine how a machine-translated literary text is organized and developed in terms of coherence. Theme-Rheme structure in Systemic Functional Linguistics is a useful tool for the analysis of textual coherence. The Theme is the departure point of a clause and the Rheme is the rest of the clause. In a text, as Themes and Rhemes may be connected with each other in meaning, they form thematic and rhematic progressions throughout the text. Based on this structure, we can look into how a text is organized and developed in terms of coherence. Methodologically, we chose Chinese and English as the language pair to be studied. Specifically, we built a comparable corpus with two modes of English translation, viz. machine translation (MT) and human translation (HT), of one Chinese literary source text. The translated texts were annotated with Themes, Rhemes and their progressions throughout the texts. The annotated texts were analyzed in two respects: the different types of Themes functioning differently in achieving coherence, and the different types of thematic and rhematic progressions functioning differently in constructing texts. By analyzing and contrasting the two modes of translation, it is found that, compared with the HT, 1) the MT features "pseudo-coherence", with many ill-connected fragments of information joined by "and"; 2) the MT system produces a static and less interconnected text that reads like a list; these two points, in turn, make the organization and development of the MT less coherent than that of the HT; 3) unlike in traditional and previous studies, Rhemes are found to contribute to textual connection and coherence, though less than Themes do, and are thus worthy of notice in further studies. Hence, the findings suggest that Theme-Rheme structure should be applied to measuring and assessing the coherence of machine translation and incorporated into the training of machine translation systems, and that Rheme should be taken into account when studying the textual coherence of both MT and HT.

Keywords: coherence, corpus-based, literary translation, machine translation, Theme-Rheme structure

Procedia PDF Downloads 181
7235 Development of Fake News Model Using Machine Learning through Natural Language Processing

Authors: Sajjad Ahmed, Knut Hinkelmann, Flavio Corradini

Abstract:

Fake news detection research is still at an early stage, as this is a relatively new phenomenon of interest to society. Machine learning helps to solve complex problems and to build AI systems, especially in cases where we have tacit knowledge or knowledge that is not explicitly known. We used machine learning algorithms for the identification of fake news, applying three classifiers: Passive Aggressive, Naïve Bayes, and Support Vector Machine. Simple classification alone is not sufficient for fake news detection because generic classification methods are not specialized for fake news. With the integration of machine learning and text-based processing, we can detect fake news and build classifiers that can classify the news data. Text classification mainly focuses on extracting various features of text and then incorporating those features into classification. The big challenge in this area is the lack of an efficient way to differentiate between fake and genuine news due to the unavailability of corpora. We applied the three machine learning classifiers to two publicly available datasets. Experimental analysis based on the existing datasets indicates very encouraging and improved performance.
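
As a rough illustration of the pipeline described above, the sketch below trains the three classifiers named in the abstract on TF-IDF text features; the tiny inline corpus is a toy placeholder, not one of the publicly available datasets used by the authors.

```python
# Minimal sketch of the described pipeline: TF-IDF text features fed to the
# three classifiers named in the abstract. The inline corpus is a toy
# placeholder, not the authors' datasets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

texts = ["shocking cure doctors hate", "government confirms new budget plan",
         "celebrity secret miracle diet", "central bank releases quarterly report",
         "aliens built the pyramids claim", "parliament passes education reform bill"]
labels = ["FAKE", "REAL", "FAKE", "REAL", "FAKE", "REAL"]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(texts)

for name, clf in [("PassiveAggressive", PassiveAggressiveClassifier(max_iter=1000)),
                  ("NaiveBayes", MultinomialNB()),
                  ("LinearSVM", LinearSVC())]:
    clf.fit(X, labels)
    print(name, accuracy_score(labels, clf.predict(X)))   # training accuracy only
```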

Keywords: fake news detection, natural language processing, machine learning, classification techniques

Procedia PDF Downloads 136
7234 The Effect of Microfinance on Labor Productivity of SME - The Case of Iran

Authors: Sayyed Abdolmajid Jalaee Esfand Abadi, Sepideh Samimi

Abstract:

Since one of the major difficulties in developing small manufacturing enterprises in developing countries is the limitation on financing activities, this paper aims to answer the question: "What is the role and status of microfinance in improving the labor productivity of small industries in Iran?" The results of panel data estimation show that microfinance in Iran has not yet been able to work efficiently and provide the required credit and investment. They also show that reducing the economy's dependence on oil revenues and increasing its reliance on domestic production and exports of industrial products can increase workforce productivity in Iranian small industries.
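
Since the abstract mentions panel data estimation without giving the specification, the sketch below shows one plausible fixed-effects formulation; the variable names and the tiny inline panel are illustrative assumptions, not the study's data or model.

```python
# Hedged sketch of a fixed-effects panel estimation of labor productivity on
# microfinance credit. Variable names and the inline panel are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.DataFrame({
    "firm":  ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "year":  [2015, 2016, 2017] * 3,
    "microcredit": [1.0, 1.5, 2.0, 0.5, 0.7, 0.9, 2.0, 2.5, 3.1],   # credit received
    "labor_productivity": [10.2, 10.9, 11.5, 8.1, 8.2, 8.4, 12.0, 12.8, 13.9],
})

# Firm and year fixed effects enter as dummy variables (within-estimator analogue).
model = smf.ols("labor_productivity ~ microcredit + C(firm) + C(year)",
                data=panel).fit()
print(model.params["microcredit"])
```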

Keywords: microfinance, small manufacturing enterprises (SME), workforce productivity, Iran, panel data

Procedia PDF Downloads 400
7233 Field-Programmable Gate Arrays Based High-Efficiency Oriented Fast and Rotated Binary Robust Independent Elementary Feature Extraction Method Using Feature Zone Strategy

Authors: Huang Bai-Cheng

Abstract:

When deploying the Oriented FAST and Rotated BRIEF (ORB) feature extraction algorithm on field-programmable gate arrays (FPGAs), access to global storage for the 31×31 pixel patches of the features becomes the bottleneck of system efficiency. Therefore, a feature zone strategy is proposed. Zones are searched as features are detected. Pixels around the feature zones are extracted from global memory and distributed into patches corresponding to the feature coordinates. The proposed FPGA structure targets a Xilinx FPGA development board of the Zynq UltraScale+ series, and multiple datasets are tested. Compared with the streaming pixel patch extraction method, the proposed architecture obtains at least two times acceleration while consuming an extra 3.82% of Flip-Flops (FFs) and 7.78% of Look-Up Tables (LUTs). Compared with the non-streaming one, the proposed architecture saves 22.3% of LUTs and 1.82% of FFs, at the cost of a latency of only 0.2 ms and a drop in frame rate of 1. Compared with related works, the proposed strategy and hardware architecture have the advantage of keeping a balance between FPGA resources and performance.
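
For readers unfamiliar with the pipeline being accelerated, a software reference of ORB feature extraction can be sketched with OpenCV, as below; this is only a functional reference for the per-feature patch processing that the feature zone strategy optimizes, not the FPGA implementation itself.

```python
# Software reference of ORB feature extraction (OpenCV). The FPGA design in
# the abstract accelerates the same per-feature 31x31 patch processing.
import cv2
import numpy as np

# Synthetic textured test image so ORB has keypoints to find.
rng = np.random.default_rng(0)
img = (rng.random((256, 256)) * 255).astype(np.uint8)

orb = cv2.ORB_create(nfeatures=500)        # FAST keypoints + rotated BRIEF descriptors
keypoints, descriptors = orb.detectAndCompute(img, None)
print(len(keypoints), None if descriptors is None else descriptors.shape)
```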

Keywords: feature extraction, real-time, ORB, FPGA implementation

Procedia PDF Downloads 97
7232 SFE as a Superior Technique for Extraction of Eugenol-Rich Fraction from Cinnamomum tamala Nees (Bay Leaf) - Process Analysis and Phytochemical Characterization

Authors: Sudip Ghosh, Dipanwita Roy, Dipan Chatterjee, Paramita Bhattacharjee, Satadal Das

Abstract:

The highest yield of eugenol-rich fractions from Cinnamomum tamala (bay leaf) leaves was obtained by supercritical carbon dioxide (SC-CO2) extraction, compared to hydro-distillation, organic solvent, liquid CO2 and subcritical CO2 extractions. Optimization of the SC-CO2 extraction parameters was carried out to obtain an extract with maximum eugenol content. This was achieved using a sample size of 10 g at 55°C and 512 bar after 60 min at a flow rate of 25.0 cm³/s of gaseous CO2. This extract has the best combination of phytochemical properties, such as phenolic content (1.77 mg gallic acid/g dry bay leaf), reducing power (0.80 mg BHT/g dry bay leaf), antioxidant activity (IC50 of 0.20 mg/ml) and anti-inflammatory potency (IC50 of 1.89 mg/ml). Identification of compounds in this extract was performed by GC-MS analysis, and its antimicrobial potency was also evaluated. The MIC values against E. coli, P. aeruginosa and S. aureus were 0.5, 0.25 and 0.5 mg/ml, respectively.

Keywords: antimicrobial potency, Cinnamomum tamala, eugenol, supercritical carbon dioxide extraction

Procedia PDF Downloads 313
7231 Incremental Learning of Independent Topic Analysis

Authors: Takahiro Nishigaki, Katsumi Nitta, Takashi Onoda

Abstract:

In this paper, we present a method for applying Independent Topic Analysis (ITA) to a growing collection of document data. The amount of document data has been increasing since the spread of the Internet, and ITA was proposed as one method to analyze such data. ITA extracts independent topics from document data by using Independent Component Analysis (ICA), a technique from signal processing. However, it is difficult to apply ITA to a growing number of documents, because ITA must use all of the document data, so its temporal and spatial cost is very high. Therefore, we present Incremental ITA, which extracts independent topics from an increasing number of documents. Incremental ITA updates the independent topics when new document data are added, starting from the topics extracted from the previous data. We show the results of applying Incremental ITA to benchmark datasets.
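
A minimal sketch of the batch (non-incremental) ITA step, extracting independent topics from a document-term matrix with ICA, is shown below; the incremental update described in the abstract would warm-start from previously estimated components, which is only noted here. The toy corpus is an illustrative assumption.

```python
# Batch sketch of Independent Topic Analysis: ICA applied to TF-IDF document
# vectors; each independent component is read as a "topic". Incremental ITA
# would warm-start from previously extracted components (not shown here).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import FastICA

docs = ["neural networks for text", "stock market analysis",
        "deep learning text models", "market risk and stocks"]   # toy corpus

vec = TfidfVectorizer()
X = vec.fit_transform(docs).toarray()

ica = FastICA(n_components=2, random_state=0)
ica.fit(X)                                     # components_ : topics x terms
terms = np.array(vec.get_feature_names_out())
for topic in ica.components_:
    print(terms[np.argsort(-np.abs(topic))[:3]])   # top terms per topic
```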

Keywords: text mining, topic extraction, independent, incremental, independent component analysis

Procedia PDF Downloads 286
7230 Investigation of Deep Eutectic Solvents for Microwave Assisted Extraction and Headspace Gas Chromatographic Determination of Hexanal in Fat-Rich Food

Authors: Birute Bugelyte, Ingrida Jurkute, Vida Vickackaite

Abstract:

The most complicated step in the determination of volatile compounds in complex matrices is the separation of analytes from the matrix. Traditional analyte separation methods (liquid extraction, Soxhlet extraction) require a lot of time and labour; moreover, there is a risk of losing the volatile analytes. In recent years, headspace gas chromatography has been used to determine volatile compounds. To date, traditional extraction solvents have been used in headspace gas chromatography. As a rule, such solvents are rather volatile; therefore, a large amount of solvent vapour enters the headspace together with the analyte. Because of that, the determination sensitivity for the analyte is reduced, and a huge solvent peak in the chromatogram can overlap with the peaks of the analytes. The sensitivity is also limited by the fact that the sample cannot be heated to a temperature higher than the solvent's boiling point. In 2018 it was suggested to replace traditional headspace gas chromatographic solvents with non-volatile, eco-friendly, biodegradable, inexpensive, and easy-to-prepare deep eutectic solvents (DESs). Generally, deep eutectic solvents have low vapour pressure, a relatively wide liquid range, and a much lower melting point than that of any of their individual components. Those features make DESs very attractive as matrix media for application in headspace gas chromatography. Also, DESs are polar compounds, so they can be applied in microwave assisted extraction. The aim of this work was to investigate the possibility of applying deep eutectic solvents for microwave assisted extraction and headspace gas chromatographic determination of hexanal in fat-rich food. Hexanal is considered one of the most suitable indicators of the degree of lipid oxidation, as it is the main secondary oxidation product of linoleic acid, which is one of the principal fatty acids of many edible oils. Eight hydrophilic and hydrophobic deep eutectic solvents were synthesized, and the influence of temperature and microwaves on their headspace gas chromatographic behaviour was investigated. Using the most suitable DES, the microwave assisted extraction conditions and headspace gas chromatographic conditions were optimized for the determination of hexanal in potato chips. Under the optimized conditions, the quality parameters of the developed technique were determined. The suggested technique was applied to the determination of hexanal in potato chips and other fat-rich foods.

Keywords: deep eutectic solvents, headspace gas chromatography, hexanal, microwave assisted extraction

Procedia PDF Downloads 169
7229 Cooperative Diversity Scheme Based on MIMO-OFDM in Small Cell Network

Authors: Dong-Hyun Ha, Young-Min Ko, Chang-Bin Ha, Hyoung-Kyu Song

Abstract:

A heterogeneous network (HetNet) can provide high quality of service in a wireless communication system through the composition of small cell networks. The composition of small cell networks improves cell coverage and capacity for mobile users. Recently, various techniques using small cell networks have been researched in wireless communication systems. In this paper, a cooperative scheme that obtains high reliability is proposed for small cell networks. The proposed scheme suggests a cooperative small cell system and a new signal transmission technique in the proposed system model. The new signal transmission technique applies a cyclic delay diversity (CDD) scheme based on the multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) system to obtain improved performance. The improved performance of the proposed scheme is confirmed by simulation results.
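
To illustrate the cyclic delay diversity idea underlying the proposed transmission technique, a minimal NumPy sketch is given below; the FFT size, cyclic prefix length, and per-antenna delays are illustrative values, not parameters from the paper.

```python
# Minimal sketch of cyclic delay diversity (CDD) in an OFDM transmitter:
# each antenna sends a cyclically shifted copy of the same OFDM symbol.
# FFT size, CP length and delays are illustrative values only.
import numpy as np

N_FFT, N_CP = 64, 16
delays = [0, 2, 4, 6]                       # per-antenna cyclic delays (samples)

bits = np.random.randint(0, 2, 2 * N_FFT)
qpsk = (1 - 2 * bits[0::2] + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

time_sym = np.fft.ifft(qpsk, N_FFT)         # OFDM modulation
tx = []
for d in delays:
    shifted = np.roll(time_sym, d)          # cyclic delay per antenna
    tx.append(np.concatenate([shifted[-N_CP:], shifted]))   # add cyclic prefix
tx = np.stack(tx)                           # shape: (antennas, N_FFT + N_CP)
print(tx.shape)
```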

Keywords: adaptive transmission, cooperative communication, diversity gain, OFDM

Procedia PDF Downloads 475
7228 An Attempt at the Multi-Criterion Classification of Small Towns

Authors: Jerzy Banski

Abstract:

The basic aim of this study is to discuss and assess different classifications of and research approaches to small towns that take their social and economic functions into account, as well as their relations with surrounding areas. The subject literature typically includes three types of approaches to the classification of small towns: 1) the structural, 2) the location-related, and 3) the mixed. The structural approach allows for the grouping of towns from the point of view of the social, cultural and economic functions they discharge. The location-related approach draws on the idea of there being a continuum between the center and the periphery. A mixed classification, making simultaneous use of the different research approaches, brings the most information to bear on categories of the urban locality. Bearing in mind these approaches to classification, it is possible to propose a synthetic method for classifying small towns that takes account of economic structure, location and the relationship between the towns and their surroundings. In the case of economic structure, the small centers may be divided into two basic groups – those featuring a multi-branch structure and those that are specialized economically. A second element of the classification reflects the locations of urban centers. Two basic types can be identified – the small town within the range of impact of a large agglomeration, or else the town outside such areas, which is to say located peripherally. The third component of the classification arises out of small towns' relations with their surroundings. In consequence, it is possible to identify 8 types of small towns: from local centers enjoying good accessibility and a multi-branch economic structure to peripheral supra-local centers characterised by a specialized economic structure.

Keywords: small towns, classification, functional structure, localization

Procedia PDF Downloads 165
7227 Protein Isolates from Chickpea (Cicer arietinum L.) and Its Application in Cake

Authors: Mohamed Abdullah Ahmed

Abstract:

In a study of chickpea protein isolate (CPI) preparation, wet alkaline extraction was carried out. The objectives were to determine the optimal extraction conditions for CPI and to apply CPI in a sponge cake recipe to replace egg and make an acceptable product. The design used for extraction was a central composite design, and response surface methodology was used to graphically express the relationship between extraction time and pH and the output variables of percent yield and protein content of CPI. The optimal extraction conditions were 60 min and pH 10.5, resulting in 90.07% protein content and 89.15% yield of CPI. The protein isolate could be incorporated into cake at up to 20% without adversely affecting the cake's physical properties, such as hardness, or its sensory attributes. The protein content of the cake increased in proportion to the amount of CPI added; therefore, adding CPI can significantly (p<0.05) increase the protein content of cake. However, sensory evaluation showed that adding more than 20% CPI decreased the overall acceptability. The results of this investigation could be used as basic knowledge for CPI utilization in other food products.
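
A minimal sketch of the response-surface step, fitting a quadratic model of yield versus extraction time and pH and locating its stationary point, is given below; the data points are invented for illustration and are not the experimental results of this study.

```python
# Hedged sketch of the RSM step: fit a second-order model of CPI yield as a
# function of extraction time and pH, then locate the stationary point.
# The runs below are invented placeholders, not the study's results.
import numpy as np

# (time_min, pH, yield_percent) -- illustrative central-composite-style runs
runs = np.array([
    [30, 9.0, 70.0], [30, 12.0, 74.0], [90, 9.0, 78.0], [90, 12.0, 80.0],
    [60, 10.5, 89.0], [60, 10.5, 88.5], [17, 10.5, 66.0], [103, 10.5, 79.0],
    [60, 8.4, 72.0], [60, 12.6, 76.0],
])
t, ph, y = runs[:, 0], runs[:, 1], runs[:, 2]

# Design matrix: 1, t, pH, t*pH, t^2, pH^2
X = np.column_stack([np.ones_like(t), t, ph, t * ph, t**2, ph**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point of the fitted quadratic surface
A = np.array([[2 * beta[4], beta[3]], [beta[3], 2 * beta[5]]])
b = -np.array([beta[1], beta[2]])
t_opt, ph_opt = np.linalg.solve(A, b)
print(f"predicted optimum: time ~ {t_opt:.0f} min, pH ~ {ph_opt:.1f}")
```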

Keywords: chickpea protein isolate, sponge cake, utilization, sponge

Procedia PDF Downloads 344
7226 Multimodal Biometric Cryptography Based Authentication in Cloud Environment to Enhance Information Security

Authors: D. Pugazhenthi, B. Sree Vidya

Abstract:

Cloud computing is one of the emerging technologies that enables end users to use cloud services on a 'pay per usage' basis. This technology is growing at a fast pace, and so are its security threats. Among the various services provided by the cloud is storage, where security is vital both for authenticating legitimate users and for protecting information. This paper presents efficient ways of authenticating users as well as securing information on the cloud. The initial phase proposed in this paper deals with an authentication technique using a multi-factor, multi-dimensional authentication system with multi-level security. Biometrics based on user behaviour provides more reliable identification than conventional password authentication. With biometric systems, accounts are accessed only by a legitimate user and not by an impostor. The biometric templates employed here do not include a single trait but multiple ones, viz. iris and fingerprints. The matching stage of the authentication system is based on an ensemble Support Vector Machine (SVM), with the weights of the base SVMs in the ensemble optimized by the Artificial Fish Swarm Algorithm (AFSA) after each individual SVM of the ensemble is trained. This helps in generating a user-specific secure cryptographic key from the multimodal biometric template through a fusion process. The data security problem is averted, and an enhanced security architecture is proposed using an encryption and decryption system with double-key cryptography based on a Fuzzy Neural Network (FNN) for data storage and retrieval in cloud computing. The proposed scheme aims to protect records from hackers by preventing the cipher text from being broken back into the original text. This improves authentication performance: the proposed double cryptographic key scheme is capable of providing better user authentication and better security, distinguishing between genuine and fake users. Thus, there are three important modules in this proposed work: 1) feature extraction, 2) multimodal biometric template generation, and 3) cryptographic key generation. The extraction of feature and texture properties from the respective fingerprint and iris images is done first. Finally, with the help of the fuzzy neural network and a symmetric cryptography algorithm, the double-key encryption technique is developed. As the proposed approach is based on neural networks, it has the advantage that the data cannot be decrypted by a hacker even if they are intercepted. The results show that the authentication process is optimal and the stored information is secured.
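
The matching stage described above combines base SVMs with optimized weights; a minimal sketch of weighted score-level fusion of two base SVMs (one per biometric modality) is given below. In the proposed system the weights would come from AFSA; here they are fixed placeholders, and the feature matrices are random toy data.

```python
# Hedged sketch of the ensemble-SVM matching stage: one base SVM per modality
# (fingerprint and iris feature vectors), fused by weighted decision scores.
# In the paper the weights come from AFSA; here they are fixed placeholders.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_fp, X_iris = rng.normal(size=(200, 32)), rng.normal(size=(200, 64))   # toy features
y = rng.integers(0, 2, 200)                    # 1 = genuine, 0 = impostor

svm_fp = SVC(probability=True).fit(X_fp, y)
svm_iris = SVC(probability=True).fit(X_iris, y)

w = np.array([0.4, 0.6])                       # placeholder "AFSA-optimized" weights
scores = (w[0] * svm_fp.predict_proba(X_fp)[:, 1]
          + w[1] * svm_iris.predict_proba(X_iris)[:, 1])
decision = (scores > 0.5).astype(int)
print("training-set agreement:", (decision == y).mean())
```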

Keywords: artificial fish swarm algorithm (AFSA), biometric authentication, decryption, encryption, fingerprint, fusion, fuzzy neural network (FNN), iris, multi-modal, support vector machine classification

Procedia PDF Downloads 236
7225 The Effect of Supercritical Carbon Dioxide Process Variables on The Recovery of Extracts from Bentong Ginger: Study on Process Variables

Authors: Muhamad Syafiq Hakimi Kamaruddin, Norhidayah Suleiman

Abstract:

Ginger (Zingiber officinale Rosc.) extracts have been attributed therapeutic properties, primarily antioxidant, anticancer, and anti-inflammatory activities. Conventional extraction methods, including Soxhlet extraction and maceration, are commonly used to extract bioactive compounds from plant material. Nevertheless, high energy consumption and environmental unfriendliness are the predominant limitations of conventional extraction methods. Herein, a green technology, namely supercritical carbon dioxide (scCO2) extraction, is used to study the effects of process variables on extract yields. Extractions were conducted at pressures of 10-30 MPa, temperatures of 40-60 °C, and median particle sizes of 300-600 µm, with a CO2 flow rate of 0.9 ± 0.2 g/min for 120 min. The highest overall yield, 4.58%, was obtained under scCO2 extraction conditions of 300 bar and 60 °C with 300 µm ginger powder for 120 min. In comparison, the yield of the extract was increased considerably within a short extraction time. The results show that scCO2 has a remarkable extraction ability for ginger and is a promising technology for extracting bioactive compounds from plant material.

Keywords: conventional, ginger, non-environmentally, supercritical carbon dioxide, technology

Procedia PDF Downloads 95
7224 Evaluation of Two DNA Extraction Methods for Minimal Porcine (Pork) Detection in Halal Food Sample Mixture Using Taqman Real-time PCR Technique

Authors: Duaa Mughal, Syeda Areeba Nadeem, Shakil Ahmed, Ishtiaq Ahmed Khan

Abstract:

The identification of porcine DNA in Halal food items is critical to ensuring compliance with dietary restrictions and religious beliefs; in Islam, pork is prohibited, as clearly stated in the Quran (Surah Al-Baqarah, Ayah 173). The purpose of this study was to compare two DNA extraction procedures for detecting 0.001% porcine DNA in processed Halal food sample mixtures containing chicken, camel, veal, turkey and goat meat, using the TaqMan Real-Time PCR technique. In this research, two different commercial kit protocols were compared. The processed sample mixtures were prepared by spiking known concentrations of porcine DNA into non-porcine food matrices. Afterwards, the TaqMan Real-Time PCR technique was used to target a particular porcine gene in the extracted DNA samples, which were quantified after extraction. The amplification results were evaluated for sensitivity, specificity, and reproducibility. The results demonstrated that both DNA extraction techniques can detect 0.01% porcine DNA in Halal food sample mixtures. However, compared to the alternative approach, the Eurofins GeneScan GeneSpin DNA Isolation Kit showed better sensitivity and specificity, as well as great repeatability with minimal variance across repeats. Quantification of DNA was done using a fluorometric assay. In conclusion, the comparison of DNA extraction methods for detecting porcine DNA in Halal food sample mixes using the TaqMan Real-Time PCR technique shows that this kit-based approach outperforms the other method in terms of sensitivity, specificity, and repeatability. This research contributes to the development of reliable and standardized techniques for detecting porcine DNA in Halal food items, supporting religious conformity and food authenticity assurance.
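
Quantification in TaqMan qPCR is typically done against a standard curve of Ct versus log10 DNA amount; a minimal sketch of that calculation is shown below. The Ct values and amounts are invented placeholders, not data from this study.

```python
# Hedged sketch of qPCR quantification from a standard curve: fit Ct against
# log10(DNA amount), report slope and amplification efficiency, and estimate
# an unknown sample. The Ct values below are invented placeholders.
import numpy as np

std_ng = np.array([10.0, 1.0, 0.1, 0.01, 0.001])      # porcine DNA standards (ng)
std_ct = np.array([22.1, 25.5, 28.9, 32.3, 35.8])     # placeholder Ct values

slope, intercept = np.polyfit(np.log10(std_ng), std_ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0                # ~1.0 means 100 % efficiency

unknown_ct = 30.4
unknown_ng = 10 ** ((unknown_ct - intercept) / slope)
print(f"slope={slope:.2f}, efficiency={efficiency:.1%}, unknown ~ {unknown_ng:.4f} ng")
```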

Keywords: real time PCR (qPCR), DNA extraction, porcine DNA, halal food authentication, religious conformity

Procedia PDF Downloads 43
7223 Arabic Lexicon Learning to Analyze Sentiment in Microblogs

Authors: Mahmoud B. Rokaya

Abstract:

The study of opinion mining and sentiment analysis includes the analysis of opinions, sentiments, evaluations, attitudes, and emotions. The rapid growth of social media, social networks, reviews, forum discussions, microblogs, and Twitter leads to a parallel growth in the field of sentiment analysis, which tries to develop effective tools for capturing the trends of people. There are two approaches in the field: lexicon-based and corpus-based methods. A lexicon-based method uses a sentiment lexicon that includes sentiment words and phrases with assigned numeric scores. These scores reveal whether sentiment phrases are positive or negative, their intensity, and/or their emotional orientations. Creation of manual lexicons is hard, which brings the need for adaptive, automated methods for generating a lexicon. The proposed method generates dynamic lexicons based on the corpus and then classifies text using these lexicons. In the proposed method, different approaches are combined to generate lexicons from text. The proposed method classifies tweets into 5 classes instead of positive or negative classes. The sentiment classification problem is written as an optimization problem, and finding optimum sentiment lexicons is the goal of the optimization process. The solution was produced based on mathematical programming approaches to find the best lexicon to classify texts. A genetic algorithm was written to find the optimal lexicon. Then, a meta-level feature was extracted based on the optimal lexicon. The experiments were conducted on several datasets. The results, in terms of accuracy, recall and F-measure, outperformed the state-of-the-art methods proposed in the literature on some of the datasets. A better understanding of the Arabic language, the culture of Arab Twitter users, and the sentiment orientation of words in different contexts can be achieved based on the sentiment lexicons produced by the algorithm.

Keywords: social media, Twitter sentiment, sentiment analysis, lexicon, genetic algorithm, evolutionary computation

Procedia PDF Downloads 158
7222 Video Shot Detection and Key Frame Extraction Using Faber-Shauder DWT and SVD

Authors: Assma Azeroual, Karim Afdel, Mohamed El Hajji, Hassan Douzi

Abstract:

Key frame extraction methods select the most representative frames of a video, which can be used in different areas of video processing such as video retrieval, video summarization, and video indexing. In this paper we present a novel approach for extracting key frames from video sequences. Each frame is characterized uniquely by its contours, which are represented by dominant blocks located on the contours and their nearby textures. When the video frames show a noticeable change, their dominant blocks change, and a key frame can then be extracted. The dominant blocks of every frame are computed, then feature vectors are extracted from the dominant-block image of each frame and arranged in a feature matrix. Singular Value Decomposition is used to calculate the ranks of sliding windows over those matrices. Finally, the computed ranks are traced, allowing the key frames of the video to be extracted. Experimental results show that the proposed approach is robust against a large range of digital effects used during shot transitions.
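
A stripped-down sketch of the SVD-based step, building a feature matrix per sliding window and tracing its effective rank to flag shot boundaries, is given below. Simple block-mean features stand in for the paper's Faber-Schauder dominant-block descriptors, and the toy video is synthetic.

```python
# Hedged sketch of the SVD step: per-frame feature vectors are stacked for
# each sliding window, and a jump in the window's effective rank flags a
# shot change. Block-mean features replace the paper's dominant-block ones.
import numpy as np

def frame_features(frame, block=8):
    h, w = frame.shape
    blocks = frame[:h - h % block, :w - w % block] \
        .reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3)).ravel()

def effective_rank(feature_rows, tol=0.02):
    s = np.linalg.svd(np.stack(feature_rows), compute_uv=False)
    return int((s / s[0] > tol).sum())

# Toy "video": 40 frames with an abrupt content change at frame 20
rng = np.random.default_rng(1)
pat_a, pat_b = rng.random((64, 64)), rng.random((64, 64))
video = [pat_a + rng.normal(0, 0.005, (64, 64)) for _ in range(20)] + \
        [pat_b + rng.normal(0, 0.005, (64, 64)) for _ in range(20)]

win = 6
ranks = [effective_rank([frame_features(f) for f in video[i:i + win]])
         for i in range(len(video) - win)]
print(ranks)   # the rank rises where a window straddles the shot boundary
```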

Keywords: FSDWT, key frame extraction, shot detection, singular value decomposition

Procedia PDF Downloads 371
7221 The Use of Punctuation by Primary School Students Writing Texts Collaboratively: A Franco-Brazilian Comparative Study

Authors: Cristina Felipeto, Catherine Bore, Eduardo Calil

Abstract:

This work aims to analyze and compare the punctuation marks (PM) in school texts of Brazilian and French students and the comments on these PM made spontaneously by the students during the writing of the text. Taking textual genetics as an investigative field within a dialogical and enunciative approach, we defined a common methodological design in two 1st-year classrooms (7-year-olds) of primary school, one classroom in Brazil (Maceio) and the other in France (Paris). Through a multimodal capture system of writing processes in real time and space (Ramos System), we recorded the collaborative writing proposal in dyads in each of the classrooms. This system preserves the classroom's ecological characteristics and provides a video recording synchronized with the dialogues, gestures and facial expressions of the students, the stroke of the pen's ink on the sheet of paper, and the movement of the teacher and students in the classroom. The multimodal record of the writing process allowed access to the text in progress and to the comments made by the students on what was being written. In each proposed text production, teachers organized their students in dyads and requested that they talk, plan and write a fictional narrative together. We selected a dyad of Brazilian students (BD) and another dyad of French students (FD), and we filmed 6 proposals for each of the dyads. The proposals were collected during the 2nd term of 2013 (Brazil) and 2014 (France). In the 6 texts written by the BD, 39 PMs and 825 written words were identified (on average, one PM every 23 words); of these 39 PMs, 27 were highlighted orally and commented on by one of the students. In the texts written by the FD, 48 PMs and 258 written words were identified (on average, one PM every 5 words); of these 48 PMs, 39 were commented on by the French students. Unlike what studies on punctuation acquisition point out, the PMs that occurred most were hyphens (BD) and commas (FD). Despite the significant difference in the types and quantities of PM in the written texts, the recognition of the need to write PMs in the text in progress and the comments have some common characteristics: i) the writing of the PM was not anticipated in relation to the text in progress; rather, they were added after the end of a sentence or after the finished text itself; ii) the need to add punctuation marks in the text arose after one of the students had 'remembered' that a particular sign was needed; iii) most of the PMs inscribed were related not to their linguistic functions but to the graphic-visual features of the text; iv) the comments justify or explain the PM, indicating metalinguistic reflections made by the students. Our results indicate how the comments of the BD and FD express the dialogic and subjective nature of knowledge acquisition. Our study suggests that the initial learning of PM depends more on its graphic features and interactional conditions than on its linguistic functions.

Keywords: collaborative writing, erasure, graphic marks, learning, metalinguistic awareness, textual genesis

Procedia PDF Downloads 142
7220 Potential Application of Modified Diglycolamide Resin for Rare Earth Element Extraction

Authors: Junnile Romero, Ilhwan Park, Vannie Joy Resabal, Carlito Tabelin, Richard Alorro, Leaniel Silva, Joshua Zoleta, Takunda Mandu, Kosei Aikawa, Mayumi Ito, Naoki Hiroyoshi

Abstract:

Rare earth elements (REEs) play a vital role in technological advancement due to their unique physical and chemical properties, which are essential for various renewable energy applications. However, the increasing demand for them poses a challenge for sustainability and has motivated research on the development of various extraction techniques, particularly on the extractant being used. In this study, TK221, a modified polymer resin containing carbamoyl methyl phosphine oxide (CMPO) and diglycolamide (DGA-N), has been investigated as a conjugate extractant. FTIR and SEM analyses confirmed the presence of CMPO and DGA-N coated onto the PS-DVB support of TK221. Moreover, batch tests of the kinetic rate law and adsorption isotherm were carried out to understand the corresponding adsorption mechanism. The results show that adsorption of the REEs (Nd, Y, Ce, and Er) followed pseudo-second-order kinetics and the Langmuir isotherm, suggesting that adsorption takes place as a single monolayer via a chemisorption process. The Qmax values for Nd, Ce, Er, Y, and Fe were 45.249 mg/g, 43.103 mg/g, 35.088 mg/g, 15.552 mg/g, and 12.315 mg/g, respectively. This research suggests that the TK221 polymer resin can be used as an alternative adsorbent material for effective REE extraction.
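
The abstract reports Langmuir-type adsorption with a Qmax for each element; a minimal sketch of fitting the Langmuir isotherm q = Qmax·KL·Ce/(1 + KL·Ce) to batch equilibrium data is shown below. The Ce/q data pairs are invented placeholders, not the study's measurements.

```python
# Hedged sketch of fitting the Langmuir isotherm q = Qmax*KL*Ce/(1 + KL*Ce)
# to batch adsorption data, as reported for REE uptake on TK221.
# The equilibrium data pairs below are invented placeholders.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, kl):
    return qmax * kl * ce / (1.0 + kl * ce)

ce = np.array([5, 10, 25, 50, 100, 200], dtype=float)     # mg/L at equilibrium
q = np.array([12.0, 20.5, 31.0, 38.5, 42.0, 44.0])        # mg/g adsorbed (toy data)

(qmax, kl), _ = curve_fit(langmuir, ce, q, p0=[45.0, 0.05])
print(f"Qmax ~ {qmax:.1f} mg/g, KL ~ {kl:.3f} L/mg")
```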

Keywords: rare earth element, diglycolamide, characterization, extraction resin

Procedia PDF Downloads 91
7219 Optimization of Poly-β-Hydroxybutyrate Recovery from Bacillus Subtilis Using Solvent Extraction Process by Response Surface Methodology

Authors: Jayprakash Yadav, Nivedita Patra

Abstract:

Polyhydroxybutyrate (PHB) is an interesting material in the fields of medical science, pharmaceutical industries, and tissue engineering because of its properties, such as biodegradability, biocompatibility, hydrophobicity, and elasticity. PHB is naturally accumulated by several microbes in their cytoplasm as an energy reserve material during metabolism. PHB can be extracted from cell biomass using halogenated hydrocarbons, chemicals, and enzymes. In this study, a cheaper and non-toxic solvent, acetone, was used for the extraction process. Parameters such as acetone percentage, solvent pH, process temperature, and incubation period were optimized using Response Surface Methodology (RSM). The determination coefficient (R²) of the quadratic regression model was 0.8833, with no significant lack of fit. The RSM results indicated that the response model was significant (P-value < 0.0006) and satisfactorily described the relationship between the responses, PHB recovery and purity, and the independent variables. The optimum conditions for maximum PHB recovery and purity were found to be solvent pH 7, extraction temperature 43 °C, incubation time 70 minutes, and acetone percentage 30%. Under these optimized conditions, the maximum predicted PHB recovery was 0.845 g/g biomass dry cell weight and the purity was 97.23%.

Keywords: acetone, PHB, RSM, halogenated hydrocarbons, extraction, Bacillus subtilis

Procedia PDF Downloads 415
7218 The Specificity of Employee Development in Polish Small Enterprises

Authors: E. Rak

Abstract:

The aim of the paper is to identify some of the specific characteristics of employee development, as observed in the practice of small enterprises in Poland. Results suggest that a sizeable percentage of employers are not interested in improving the development of their employee base; this aspect is often perceived as insignificant. In addition, many employers have no theoretical or practical knowledge of employee development methods. Lack of sufficient financial support is reported as third on the list of the most important barriers to employee development. Employees, on the other hand, typically offload the responsibility of initiating this type of activity onto the employer. Employee development plans are typically flexible and accommodating. The original value offered by this research comes in the form of detailed characteristics of employee development in small enterprises, accompanied by identification of the specificity of human resource development in Polish companies.

Keywords: employee development, human resources development, small enterprises, trainings

Procedia PDF Downloads 343
7217 Optimization Based Extreme Learning Machine for Watermarking of an Image in DWT Domain

Authors: Ram Pal Singh, Vikash Chaudhary, Monika Verma

Abstract:

In this paper, we propose the implementation of an optimization-based Extreme Learning Machine (ELM) for watermarking the B-channel of a color image in the discrete wavelet transform (DWT) domain. ELM, a regularization algorithm, is based on generalized single-hidden-layer feed-forward neural networks (SLFNs); the hidden-layer parameters, generally called the feature mapping in the context of ELM, need not be tuned every time. This paper shows the embedding and extraction processes of the watermark with the help of ELM, and the results are compared with machine learning models already used for watermarking. Here, a cover image is divided into a suitable number of non-overlapping blocks of the required size, and the DWT is applied to each block to transform it into the low-frequency sub-band domain. Basically, ELM provides a unified learning platform in which a feature mapping, that is, the mapping between the hidden layer and the output layer of the SLFN, is used for watermark embedding and extraction in a cover image. ELM has widespread applications, from binary and multiclass classification to regression and function estimation. Unlike SVM-based algorithms, which achieve suboptimal solutions with high computational complexity, ELM can provide better generalization performance with very small complexity. The efficacy of the optimization-based ELM algorithm is measured using quantitative and qualitative parameters on a watermarked image, even when the image is subjected to different types of geometrical and conventional attacks.
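
For readers unfamiliar with ELM, the core training step, a random and untuned hidden-layer mapping followed by a regularized closed-form solution for the output weights, can be sketched as below; this illustrates the learning machinery only, not the DWT watermark embedding itself, and the toy regression target is an assumption.

```python
# Minimal sketch of a (regularized) Extreme Learning Machine: the hidden-layer
# weights are random and never tuned; only the output weights are solved in
# closed form. This shows the learner only, not the DWT watermarking steps.
import numpy as np

class ELM:
    def __init__(self, n_hidden=100, reg=1e-3, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid feature map

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Regularized least squares: beta = (H^T H + reg*I)^-1 H^T y
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_hidden),
                                    H.T @ y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy regression target standing in for watermark-bit estimation
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 16))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
model = ELM(n_hidden=200).fit(X, y)
print(np.mean((model.predict(X) - y) ** 2))
```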

Keywords: BER, DWT, extreme leaning machine (ELM), PSNR

Procedia PDF Downloads 287
7216 The Utilization of Tea Extract within the Realm of the Food Industry

Authors: Raana Babadi Fathipour

Abstract:

Tea, a beverage widely cherished across the globe, has captured the interest of scholars with its recent acknowledgement for possessing noteworthy health advantages. Of particular significance is its proven ability to ward off ailments such as cancer and cardiovascular afflictions. Moreover, within the realm of culinary creations, lipid oxidation poses a significant challenge for food product development. In light of these concerns, the present discourse turns its attention towards exploring diverse methodologies employed in extracting polyphenols from various types of tea leaves and examining their utility within the ever-evolving food industry. Based on the findings of this comprehensive investigation, the fundamental constituents of tea are polyphenols possessing intrinsic health-enhancing properties, including an assortment of catechins, namely epicatechin, epigallocatechin, epicatechin gallate, and epigallocatechin gallate. Moreover, gallic acid, flavonoids, flavonols and theaflavins have also been detected in this aromatic beverage. Of the myriad components examined rigorously in this study's analysis, catechin emerges as particularly beneficial. Multiple techniques have emerged over time to extract key compounds from tea plants, including solvent-based extraction, microwave-assisted water extraction, and ultrasound-assisted extraction. In particular, consideration is given to the microwave-assisted water extraction method as a viable scheme that effectively procures valuable polyphenols from tea extracts. This methodology appears adaptable for implementation within sectors such as dairy production along with the meat and oil industries alike.

Keywords: camellia sinensis, extraction, food application, shelf life, tea

Procedia PDF Downloads 50
7215 Intertextuality as a Dialogue Between Postmodern Writer J. Fowles and Mid-English Writer J. Donne

Authors: Isahakyan Heghine

Abstract:

Intertextuality, being at the centre of attention of both linguists and literary critics, is vividly expressed in the works of the outstanding British novelist and philosopher J. Fowles. 'The Magus' is a deep psychological and philosophical novel with vivid intertextual links to Greek mythology and to authors from different epochs. The aim of the paper is to show how intertextuality might serve as a dialogue between two authors (J. Fowles and J. Donne) disguised in the dialogue of two protagonists of the novel: Conchis and Nicholas. Contrasting viewpoints concerning man's isolation and loneliness are stated in the dialogue. Through conceptual analysis of the text it becomes possible both to decode the conceptual information of the text and to find out its intertextual links.

Keywords: dialogue, conceptual analysis, isolation, intertextuality

Procedia PDF Downloads 307
7214 Functionalized Magnetic Iron Oxide Nanoparticles for Extraction of Protein and Metal Nanoparticles from Complex Fluids

Authors: Meenakshi Verma, Mandeep Singh Bakshi, Kultar Singh

Abstract:

Magnetic nanoparticles have received considerable attention in view of their diverse applications, which arise primarily from their response to an external magnetic field. The magnetic behaviour of magnetic nanoparticles (NPs) helps them in numerous ways, the most important of which is the ease with which they can be purified and separated from the media in which they are present merely by applying an external magnetic field. This exceptional ease of separation of magnetic NPs from aqueous media enables their use for extracting or removing metal pollutants from complex aqueous media. Functionalized magnetic NPs can be used for the extraction of metallic impurities if these are favourably adsorbed on the NP surfaces. We have successfully used the magnetic NPs as vehicles for the removal of gold and silver NPs from complex fluids. The NPs loaded with the gold and silver NP pollutant fractions were easily removed from the aqueous media by applying an external magnetic field. Similarly, we have used the magnetic NPs for the extraction of protein from complex media, followed by repeated washing with pure water to eliminate unwanted surface-adsorbed components for quantitative estimation. The purified, protein-loaded magnetic NPs are best analyzed with SDS-PAGE, not only for characterization but also for separating the protein fractions. A collective review of the results indicates that we have synthesized surfactant-coated iron oxide NPs and functionalized them with selected materials. These surface-active magnetic NPs work very well for the extraction of metallic NPs from the aqueous bulk and make the whole process environmentally sustainable. Also, magnetic NP-Au/Ag/Pd hybrids have excellent protein-extracting properties; they make it much easier to extract the magnetic impurities as well as protein fractions under an external magnetic field, without any complex conventional purification methods.

Keywords: magnetic nanoparticles, protein, functionalized, extraction

Procedia PDF Downloads 82
7213 Improved Pitch Detection Using Fourier Approximation Method

Authors: Balachandra Kumaraswamy, P. G. Poonacha

Abstract:

Automatic Music Information Retrieval has been one of the challenging topics of research for a few decades now, with several interesting approaches reported in the literature. In this paper we develop a pitch extraction method based on a finite Fourier series approximation to the given window of samples, estimating the pitch as the fundamental period of that approximation. The method analyzes the strength of the harmonics present in the signal to reduce octave as well as harmonic errors. The performance of our method is compared with three well-known methods for pitch extraction, namely the Yin, Windowed Special Normalization of the Auto-Correlation Function, and Harmonic Product Spectrum methods. Our study with artificially created signals as well as music files shows that the Fourier Approximation method gives a much better estimate of pitch, with fewer octave and harmonic errors.
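
To make the harmonic-strength idea concrete, a small sketch of pitch estimation by weighting and summing harmonic magnitudes over candidate fundamentals is given below; it follows the same spirit as the paper's approach but is not the authors' exact Fourier-series approximation algorithm.

```python
# Hedged sketch in the spirit of the paper: pick the candidate fundamental
# whose (1/k-weighted) harmonic magnitudes are strongest. This illustrates
# harmonic-strength analysis only, not the authors' exact method.
import numpy as np

def estimate_pitch(x, fs, f_lo=60.0, f_hi=800.0, n_harm=5):
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    best_f0, best_score = 0.0, -1.0
    for f0 in np.arange(f_lo, f_hi, 1.0):
        bins = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, n_harm + 1)]
        score = sum(spec[b] / k for k, b in enumerate(bins, start=1))
        if score > best_score:
            best_f0, best_score = f0, score
    return best_f0

fs = 16000
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
print(estimate_pitch(x, fs))   # expected to be close to 220 Hz
```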

Keywords: pitch, fourier series, yin, normalization of the auto- correlation function, harmonic product, mean square error

Procedia PDF Downloads 391
7212 Surface Modification of TiO2 Layer with Phosphonic Acid Monolayer in Perovskite Solar Cells: Effect of Chain Length and Terminal Functional Group

Authors: Seid Yimer Abate, Ding-Chi Huang, Yu-Tai Tao

Abstract:

In this study, charge extraction characteristics at the perovskite/TiO2 interface in conventional perovskite solar cells are studied through interface engineering. Self-assembled monolayers (SAMs) of phosphonic acids with different chain lengths and terminal functional groups were used to modify the mesoporous TiO2 (mp-TiO2) surface, modulating the surface property and interfacial energy barrier in order to investigate their effect on charge extraction and transport from the perovskite to the mp-TiO2 and then to the electrode. The chain length introduces a tunnelling distance, and the end group modulates the energy level alignment at the mp-TiO2/perovskite interface. The work function of the SAM-modified mp-TiO2 varied from −3.89 eV to −4.61 eV, compared with −4.19 eV for pristine mp-TiO2. A correlation of charge extraction and transport with the modification was attempted. The study serves as a guide to engineering ETL interfaces with simple SAMs to improve charge extraction, carrier balance and long-term device stability. In this study, a maximum PCE of ~16.09% with insignificant hysteresis was obtained, which is 17% higher than the standard device.

Keywords: energy level alignment, interface engineering, perovskite solar cells, phosphonic acid monolayer, tunnelling distance

Procedia PDF Downloads 106
7211 A Mutually Exclusive Task Generation Method Based on Data Augmentation

Authors: Haojie Wang, Xun Li, Rui Yin

Abstract:

In order to solve the memorization overfitting problem in the MAML meta-learning algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutually exclusive (mutex) task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution in the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to exponential growth of computation, this paper also proposes a key data extraction method that extracts only part of the data to generate the mutex tasks. The experiments show that the method of generating mutually exclusive tasks can effectively solve the memorization overfitting problem in the MAML meta-learning algorithm.
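
A minimal sketch of the core idea, building mutually exclusive tasks by assigning the same inputs to permuted (hence inconsistent) label mappings so that a meta-learner cannot memorize a fixed input-to-label relation, is shown below; it is a simplification of the augmentation-based method described in the paper.

```python
# Hedged sketch of mutually exclusive task generation: the same inputs are
# paired with permuted label assignments in different tasks, so one feature
# maps to multiple labels across tasks and memorization is broken.
# This is a simplification of the augmentation-based method in the paper.
import numpy as np

def make_mutex_tasks(X, y, n_tasks, n_classes, seed=0):
    rng = np.random.default_rng(seed)
    tasks = []
    for _ in range(n_tasks):
        perm = rng.permutation(n_classes)      # class c is relabeled as perm[c]
        tasks.append((X, perm[y]))             # same X, task-specific labels
    return tasks

X = np.random.default_rng(1).normal(size=(20, 8))
y = np.random.default_rng(2).integers(0, 4, 20)
tasks = make_mutex_tasks(X, y, n_tasks=3, n_classes=4)
for i, (_, yt) in enumerate(tasks):
    print(f"task {i}: labels of first 5 samples -> {yt[:5]}")
```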

Keywords: data augmentation, mutex task generation, meta-learning, text classification

Procedia PDF Downloads 73
7210 Research on Construction of Subject Knowledge Base Based on Literature Knowledge Extraction

Authors: Yumeng Ma, Fang Wang, Jinxia Huang

Abstract:

In the big data era, researchers have higher requirements for the efficient acquisition and utilization of domain knowledge. As literature is an effective way for researchers to quickly and accurately understand the state of research in their field, knowledge discovery based on literature has become a new research method. As a tool to organize and manage knowledge in a specific domain, a subject knowledge base can be used to mine and present the knowledge behind the literature to meet users' personalized needs. This study designs a construction route for subject knowledge bases for specific research problems, adopting an information extraction method based on knowledge engineering. Firstly, the subject knowledge model is built by abstracting the research elements. Then, under the guidance of the knowledge model, extraction rules for knowledge points are compiled to analyze, extract and correlate entities, relations, and attributes in the literature. Finally, a database platform based on this structured knowledge is developed that can provide services such as knowledge retrieval, knowledge browsing, knowledge Q&A, and visualized correlation. Taking construction practice in the field of activating blood circulation and removing stasis as an example, this study analyzes how to construct a subject knowledge base based on literature knowledge extraction. As the system functional tests show, this subject knowledge base can realize the expected service scenarios, such as quick knowledge queries, related discovery of knowledge and literature, and knowledge organization. As this study enables a subject knowledge base to help researchers locate and acquire deep domain knowledge quickly and accurately, it provides a transformation mode for knowledge resource construction and personalized, precise knowledge services in the data-intensive research environment.

Keywords: knowledge model, literature knowledge extraction, precision knowledge services, subject knowledge base

Procedia PDF Downloads 144
7209 Synthesis and Functionalization of Gold Nanostars for ROS Production

Authors: H. D. Duong, J. I. Rhee

Abstract:

In this work, gold nanoparticles in star shape (gold nanostars, GNS) were synthesized and coated with N-(3-aminopropyl) methacrylamide hydrochloride (PA) and mercaptopropionic acid (MPA) to functionalize their surface with amine and carboxyl groups, and then investigated for ROS production. The large GNS with multiple tips seemed superior in singlet oxygen production compared with small GNS with fewer tips. However, the functionalized GNS of small size could also enhance the efficiency of singlet oxygen production to about double that of the intact GNS. In combination with methylene blue (MB+), the functionalized GNS could enhance the singlet oxygen production of MB+ after 1 h of LED750 irradiation, and no difference between the small and large sizes was observed in this reaction. In combination with 5-aminolevulinic acid (ALA), only the PA-coated GNS could enhance the singlet oxygen production of ALA, and the small PA-coated GNS had a slightly stronger effect than the larger ones. However, the small MPA-coated GNS had a strong effect on the hydroxyl radical production of ALA.

Keywords: 5-aminolevulinic acid, gold nanostars, methylene blue, ROS production

Procedia PDF Downloads 328
7208 Resume Ranking Using Custom Word2vec and Rule-Based Natural Language Processing Techniques

Authors: Subodh Chandra Shakya, Rajendra Sapkota, Aakash Tamang, Shushant Pudasaini, Sujan Adhikari, Sajjan Adhikari

Abstract:

Many efforts have been made to measure the semantic similarity between text corpora and documents, and various techniques have evolved to measure the similarity of two documents. One such state-of-the-art technique in the field of Natural Language Processing (NLP) is the word-to-vector model, which converts words into word embeddings and measures the similarity between the vectors. We found this to be quite useful for the task of resume ranking. This research paper therefore implements the word2vec model along with other Natural Language Processing techniques in order to rank resumes against a particular job description, so as to automate the hiring process. The paper proposes the system and presents the findings that were made during the process of building it.
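
A compact sketch of the described approach, training a custom Word2Vec model, mean-pooling word vectors per document, and ranking resumes by cosine similarity to the job description, is given below; the toy corpus and parameters are placeholders, not the authors' data or settings.

```python
# Hedged sketch of the ranking core: custom Word2Vec embeddings, mean-pooled
# document vectors, cosine similarity against the job description.
# Corpus and parameters are toy placeholders, not the authors' data.
import numpy as np
from gensim.models import Word2Vec

job_desc = "python machine learning nlp text classification".split()
resumes = [
    "experienced python developer machine learning nlp projects".split(),
    "accountant excel financial reporting audit".split(),
    "data scientist text classification deep learning python".split(),
]

w2v = Word2Vec(sentences=resumes + [job_desc], vector_size=50,
               min_count=1, window=3, epochs=50, seed=1)

def doc_vector(tokens):
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

jd_vec = doc_vector(job_desc)
ranking = sorted(range(len(resumes)),
                 key=lambda i: -cosine(doc_vector(resumes[i]), jd_vec))
for i in ranking:
    print(f"resume {i}: similarity = {cosine(doc_vector(resumes[i]), jd_vec):.3f}")
```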

Keywords: chunking, document similarity, information extraction, natural language processing, word2vec, word embedding

Procedia PDF Downloads 134