Search results for: functional annotation analysis
29707 BingleSeq: A User-Friendly R Package for Single-Cell RNA-Seq Data Analysis
Authors: Quan Gu, Daniel Dimitrov
Abstract:
BingleSeq was developed as a Shiny-based, intuitive, and comprehensive application that enables the analysis of single-cell RNA-sequencing count data. This was achieved by incorporating three state-of-the-art software packages for each type of RNA-sequencing analysis, alongside functional annotation analysis and a way to assess the overlap of differential expression method results. In its current state, the functionality implemented within BingleSeq is comparable to that of other applications also developed with the purpose of lowering the entry requirements to RNA-sequencing analyses. BingleSeq is available on GitHub and will be submitted to R/Bioconductor.
Keywords: bioinformatics, functional annotation analysis, single-cell RNA-sequencing, transcriptomics
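The overlap assessment mentioned above can be illustrated with a short sketch. The gene lists and method names below are hypothetical placeholders, not BingleSeq output, and the set-intersection logic is only one simple way such a comparison might be done.

```python
# Sketch: assessing the overlap of differential-expression (DE) results
# from several methods. All gene names here are invented for illustration.

def de_overlap(results):
    """Given {method_name: set_of_DE_genes}, return pairwise and full overlaps."""
    methods = sorted(results)
    pairwise = {
        (a, b): len(results[a] & results[b])
        for i, a in enumerate(methods)
        for b in methods[i + 1:]
    }
    consensus = set.intersection(*results.values())
    return pairwise, consensus

deseq2 = {"GeneA", "GeneB", "GeneC"}
edger = {"GeneB", "GeneC", "GeneD"}
limma = {"GeneC", "GeneD", "GeneE"}

pairwise, consensus = de_overlap({"DESeq2": deseq2, "edgeR": edger, "limma": limma})
print(consensus)  # genes called DE by all three methods
```

The consensus set is the most conservative call list; the pairwise counts show which methods agree most.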
Procedia PDF Downloads 201
29706 Fuzzy Semantic Annotation of Web Resources
Authors: Sahar Maâlej Dammak, Anis Jedidi, Rafik Bouaziz
Abstract:
Given the enormous number of pages managed throughout the world, and especially with the advent of the Web, their manual annotation is impossible. In this paper, we focus on the semiautomatic annotation of web pages. We propose an approach and a framework for the semantic annotation of web pages, entitled “Querying Web”. Our solution enhances the initial annotation produced by the “Semantic Radar” plug-in on web resources with annotations based on an enriched domain ontology. The concepts identified by Semantic Radar may be connected to several terms of the ontology, but these connections may be uncertain. We therefore represent annotations as possibility distributions and use the hierarchy defined in the ontology to compute degrees of possibility. Our aim is to achieve full automation of the fuzzy semantic annotation of web resources.
Keywords: fuzzy semantic annotation, semantic web, domain ontologies, querying web
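The idea of deriving possibility degrees from an ontology hierarchy can be sketched as follows. The toy ontology and the halve-per-edge decay formula are assumptions for illustration, not the authors' exact model.

```python
# Illustrative sketch: possibility degrees for uncertain annotations,
# computed from distance in a toy is-a hierarchy (invented, not the
# paper's ontology or formula).

parent = {  # child -> parent edges of a tiny domain ontology
    "Sedan": "Car", "Car": "Vehicle", "Truck": "Vehicle", "Vehicle": "Thing",
}

def depth_distance(concept, term):
    """Number of is-a edges from `concept` up to `term`, or None if unrelated."""
    steps, current = 0, concept
    while current != term:
        if current not in parent:
            return None
        current, steps = parent[current], steps + 1
    return steps

def possibility(concept, term):
    """Possibility degree in (0, 1]: 1 for an exact match, halved per edge."""
    d = depth_distance(concept, term)
    return 0.0 if d is None else 1.0 / (2 ** d)

print(possibility("Sedan", "Vehicle"))  # two edges up -> 0.25
```

A full possibility distribution for one annotation would collect these degrees over all candidate ontology terms.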
Procedia PDF Downloads 373
29705 A Method of the Semantic on Image Auto-Annotation
Authors: Lin Huo, Xianwei Liu, Jingxiong Zhou
Abstract:
Recently, due to the semantic gap between low-level image visual features and human concepts, semantic image auto-annotation has become an important topic. Annotation by search is a popular method: low-level visual features are extracted from each image and mapped, by a hashing method, into the corresponding hash coding, which is transformed into a binary string and stored. We use this approach to design and implement a method of semantic image auto-annotation. Tests based on the Corel image set show that the method is effective.
Keywords: image auto-annotation, color correlograms, Hash code, image retrieval
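Annotation-by-search with binary codes can be sketched in a few lines. The 4-bin "feature vectors", the threshold-at-the-mean hash, and the stored labels below are simplifications invented for illustration; the paper's actual features (color correlograms) and hash function are not specified here.

```python
# Sketch of annotation by search: hash features to a binary string,
# find the nearest stored code by Hamming distance, copy its labels.

def hash_features(features):
    """Map a feature vector to a binary string: 1 if above the mean, else 0."""
    mean = sum(features) / len(features)
    return "".join("1" if f > mean else "0" for f in features)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Hypothetical database: hash code -> labels of the stored image.
database = {
    hash_features([9, 8, 1, 1]): ["sky", "sea"],
    hash_features([1, 2, 9, 9]): ["grass", "tree"],
}

def annotate(features):
    """Annotate a query image with the labels of its nearest stored hash."""
    code = hash_features(features)
    nearest = min(database, key=lambda stored: hamming(stored, code))
    return database[nearest]

print(annotate([8, 9, 2, 1]))  # closest to the "sky"/"sea" image
```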
Procedia PDF Downloads 495
29704 Towards a Large Scale Deep Semantically Analyzed Corpus for Arabic: Annotation and Evaluation
Authors: S. Alansary, M. Nagi
Abstract:
This paper presents an approach to the semantic annotation of an Arabic corpus using the Universal Networking Language (UNL) framework. UNL is intended to be a promising strategy for providing a large collection of semantically annotated texts with formal, deep semantics rather than shallow ones. The result constitutes a semantic resource (semantic graphs) that is editable and that integrates various phenomena, including predicate-argument structure, scope, tense, thematic roles, and rhetorical relations, into a single semantic formalism for knowledge representation. The paper also presents the Interactive Analysis tool for automatic semantic annotation (IAN). In addition, the cornerstone of the proposed methodology, the disambiguation and transformation rules, is presented. Semantic annotation using UNL has been applied to a corpus of 20,000 Arabic sentences representing the most frequent structures in the Arabic Wikipedia. The representation is illustrated at different linguistic levels, starting from the morphological level, passing through the syntactic level, until the semantic representation is reached. The output has been evaluated using the F-measure and is 90% accurate. This demonstrates how powerful the formal environment is, as it enables intelligent text processing and search.
Keywords: semantic analysis, semantic annotation, Arabic, universal networking language
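The F-measure evaluation mentioned above combines precision and recall. The counts in this minimal sketch are invented for illustration; the paper reports an overall score of 90%.

```python
# Minimal sketch of an F-measure (F1) evaluation over annotation output.

def f_measure(true_positives, false_positives, false_negatives):
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# e.g. 90 correct annotations, 10 spurious, 10 missed -> F1 = 0.9
print(round(f_measure(90, 10, 10), 2))
```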
Procedia PDF Downloads 580
29703 Estimation of Functional Response Model by Supervised Functional Principal Component Analysis
Authors: Hyon I. Paek, Sang Rim Kim, Hyon A. Ryu
Abstract:
In functional linear regression, one typical problem is dimension reduction. Compared with multivariate linear regression, functional linear regression is regarded as an infinite-dimensional case, and the main task is to reduce the dimensions of the functional response and the functional predictors. One common approach is to apply functional principal component analysis (FPCA) to the functional predictors and then use a few leading functional principal components (FPCs) to fit the functional model. The leading FPCs estimated by typical FPCA explain a major part of the variation of the functional predictor, but they may not be the components most correlated with the functional response, so they may not be significant for predicting the response. In this paper, we propose a supervised functional principal component analysis method for a functional response model, with FPCs obtained by taking the correlation with the functional response into account. Our method is expected to achieve better prediction accuracy than the typical FPCA method.
Keywords: supervised, functional principal component analysis, functional response, functional linear regression
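The core idea, that variance order and relevance-to-the-response order can disagree, can be shown with a toy multivariate analogue. This is an illustrative sketch on synthetic data, not the authors' estimator: ordinary PCA is run first, then components are re-ranked by their correlation with the response.

```python
# Illustrative sketch of the idea behind supervised (F)PCA: rank principal
# components by |correlation with y| rather than by explained variance.
# Synthetic data; the response is driven by a LOW-variance direction.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x0 = 3.0 * rng.normal(size=n)   # high-variance direction, unrelated to y
x1 = rng.normal(size=n)         # low-variance direction that drives y
x2 = 0.5 * rng.normal(size=n)
X = np.column_stack([x0, x1, x2])
y = x1 + 0.05 * rng.normal(size=n)

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # ordinary (unsupervised) PCA
scores = Xc @ Vt.T                                  # principal component scores

# Supervised step: order components by |correlation with y|.
corrs = [abs(np.corrcoef(scores[:, j], y)[0, 1]) for j in range(scores.shape[1])]
order = np.argsort(corrs)[::-1]
print("variance order picks PC 0 first; response-aware order picks PC", int(order[0]))
```

Here the top-variance component is nearly uncorrelated with y, while a lower-variance component predicts it almost perfectly, which is exactly the failure mode of unsupervised FPCA the paper addresses.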
Procedia PDF Downloads 73
29702 Tagging a Corpus of Media Interviews with Diplomats: Challenges and Solutions
Authors: Roberta Facchinetti, Sara Corrizzato, Silvia Cavalieri
Abstract:
Increasing interconnection between data digitalization and linguistic investigation has given rise to unprecedented potentialities and challenges for corpus linguists, who need to master IT tools for data analysis and text processing, as well as to develop techniques for efficient and reliable annotation in specific mark-up languages that encode documents in a format that is both human- and machine-readable. The present paper considers the challenges emerging from the compilation of a linguistic corpus, focusing on the English language in particular. To do so, the case study of the InterDiplo corpus is illustrated. The corpus, currently under development at the University of Verona (Italy), represents a novelty both in the data included and in the tag set used for its annotation. The corpus covers media interviews and debates with diplomats and international operators conversing in English with journalists who do not share the same lingua-cultural background as their interviewees. To date, this appears to be the first tagged corpus of international institutional spoken discourse, and it will be an important database not only for linguists interested in corpus analysis but also for experts operating in international relations. Special attention is dedicated to the structural mark-up, part-of-speech annotation, and tagging of discursive traits, which constitute the innovative parts of the project, resulting from a thorough study to find the best solution to suit the analytical needs of the data. Several aspects are addressed, with special attention to the tagging of the speakers’ identity, the communicative events, and anthropophagic. Prominence is given to the annotation of question/answer exchanges to investigate the interlocutors’ choices and how such choices impact communication.
Indeed, the automated identification of questions, in relation to the expected answers, is functional to understanding how interviewers elicit information as well as how interviewees provide their answers to fulfill their respective communicative aims. A detailed description of the aforementioned elements is given using the InterDiplo-Covid19 pilot corpus. Our preliminary analysis highlights the viable solutions found in the construction of the corpus in terms of XML conversion, metadata definition, tagging system, and discursive-pragmatic annotation to be included via Oxygen.
Keywords: spoken corpus, diplomats’ interviews, tagging system, discursive-pragmatic annotation, English linguistics
Procedia PDF Downloads 184
29701 Genome-Wide Functional Analysis of Phosphatase in Cryptococcus neoformans
Authors: Jae-Hyung Jin, Kyung-Tae Lee, Yee-Seul So, Eunji Jeong, Yeonseon Lee, Dongpil Lee, Dong-Gi Lee, Yong-Sun Bahn
Abstract:
Cryptococcus neoformans causes cryptococcal meningoencephalitis, mainly in immunocompromised patients but also in immunocompetent people, yet therapeutic options for treating cryptococcosis are limited. Several signaling pathways, including the cyclic AMP, MAPK, and calcineurin pathways, play a central role in the regulation of the growth, differentiation, and virulence of C. neoformans. To understand the signaling networks regulating the virulence of C. neoformans, we selected 114 putative phosphatase genes, phosphatases being one of the major components of signaling networks, in the genome of C. neoformans. We identified putative phosphatases based on annotations in the C. neoformans var. grubii genome databases provided by the Broad Institute and the National Center for Biotechnology Information (NCBI) and performed a BLAST search of phosphatases from Saccharomyces cerevisiae, Aspergillus nidulans, Candida albicans, and Fusarium graminearum against C. neoformans. We classified the putative phosphatases into 14 groups based on InterPro phosphatase domain annotation. Here, we constructed 170 signature-tagged gene-deletion strains through homologous recombination for 91 putative phosphatases. We examined their phenotypic traits under 30 different in vitro conditions, including growth, differentiation, stress response, antifungal resistance, and virulence-factor production.
Keywords: human fungal pathogen, phosphatase, deletion library, functional genomics
Procedia PDF Downloads 363
29700 Contextual Sentiment Analysis with Untrained Annotators
Authors: Lucas A. Silva, Carla R. Aguiar
Abstract:
This work presents a proposal to perform contextual sentiment analysis using a supervised learning algorithm while dispensing with the extensive training of annotators. To achieve this goal, a web platform was developed to perform the entire procedure outlined in this paper. The main contribution of the pipeline described in this article is to simplify and automate the annotation process through a system that analyzes the congruence between annotations. This ensured satisfactory results even without specialized annotators in the context of the research, avoiding the generation of biased training data for the classifiers. For this, a case study was conducted on an entrepreneurship blog. The experimental results were consistent with the literature on annotation using a formalized process with experts.
Keywords: sentiment analysis, untrained annotators, naive Bayes, entrepreneurship, contextualized classifier
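A congruence check between untrained annotators can be as simple as a majority-vote filter: keep only the items on which a clear majority agrees, and drop ambiguous items before training. The labels and the 2/3 threshold below are illustrative assumptions, not the platform's actual rule.

```python
# Sketch: filter crowd annotations by congruence before classifier training.
from collections import Counter

def filter_by_congruence(annotations, min_agreement=2/3):
    """annotations: {item: [label per annotator]} -> {item: majority label}."""
    kept = {}
    for item, labels in annotations.items():
        label, count = Counter(labels).most_common(1)[0]
        if count / len(labels) >= min_agreement:
            kept[item] = label
    return kept

votes = {
    "post 1": ["positive", "positive", "negative"],  # 2/3 agree -> kept
    "post 2": ["positive", "negative", "neutral"],   # no majority -> dropped
}
print(filter_by_congruence(votes))  # {'post 1': 'positive'}
```

Discarding the low-agreement items is what keeps biased or noisy labels out of the training set.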
Procedia PDF Downloads 394
29699 Annotation Ontology for Semantic Web Development
Authors: Hadeel Al Obaidy, Amani Al Heela
Abstract:
The main purpose of this paper is to examine the concept of the semantic web and the role that ontology and semantic annotation play in the development of semantic web services. The paper focuses on semantic web infrastructure, illustrating how ontology and annotation work to provide the learning capabilities for building content semantically. To improve the productivity and quality of software, the paper applies approaches, notations, and techniques offered by software engineering. It proposes a conceptual model to develop semantic web services for the infrastructure of the web information retrieval system of digital libraries. The developed system uses ontology and annotation to build a knowledge-based system that defines and links the meaning of web content in order to retrieve information for users’ queries. The results become more relevant through keyword and ontology-rule expansion, which yields more accurate answers to the requested information. The accuracy of the results is enhanced because each query is semantically analyzed against the conceptual architecture of the proposed system.
Keywords: semantic web services, software engineering, semantic library, knowledge representation, ontology
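Ontology-based query expansion, the retrieval mechanism described above, can be sketched as follows. The tiny ontology, the documents, and substring matching are all toy assumptions standing in for the system's knowledge base and retrieval engine.

```python
# Sketch: keyword retrieval with ontology-based query expansion.

ontology = {  # term -> narrower/related terms (invented)
    "library": ["digital library", "archive"],
    "retrieval": ["search", "indexing"],
}

documents = {
    1: "building a digital library for students",
    2: "weather report for the weekend",
    3: "full-text search and indexing techniques",
}

def expand(query_terms):
    """Add ontology-related terms to the original query."""
    expanded = set(query_terms)
    for term in query_terms:
        expanded.update(ontology.get(term, []))
    return expanded

def retrieve(query_terms):
    terms = expand(query_terms)
    return sorted(doc_id for doc_id, text in documents.items()
                  if any(term in text for term in terms))

print(retrieve(["library", "retrieval"]))  # matches docs 1 and 3 via expansion
```

Document 3 matches only through the expanded terms "search" and "indexing", which is the relevance gain the abstract describes.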
Procedia PDF Downloads 171
29698 C-eXpress: A Web-Based Analysis Platform for Comparative Functional Genomics and Proteomics in Human Cancer Cell Line, NCI-60 as an Example
Authors: Chi-Ching Lee, Po-Jung Huang, Kuo-Yang Huang, Petrus Tang
Abstract:
Background: Recent advances in high-throughput research technologies such as next-generation sequencing and multi-dimensional liquid chromatography make it possible to dissect the complete transcriptome and proteome in a single run for the first time. However, it is almost impossible for many laboratories to handle and analyze these “big” data without the support of a bioinformatics team. We aimed to provide a web-based analysis platform for users with only limited knowledge of bio-computing to study functional genomics and proteomics. Method: We use NCI-60 as an example dataset to demonstrate the power of the web-based analysis platform and data delivery system: C-eXpress takes a simple text file that contains standard NCBI gene or protein IDs and expression levels (RPKM or fold change) as the input file to generate a distribution map of gene/protein expression levels in a heatmap diagram organized by color gradients. The diagram is hyperlinked to a dynamic HTML table that allows users to filter the datasets based on various gene features. A dynamic summary chart is generated automatically after each filtering process. Results: We implemented an integrated database that contains pre-defined annotations such as gene/protein properties (ID, name, length, MW, pI); pathways based on KEGG and GO biological process; subcellular localization based on GO cellular component; and functional classification based on GO molecular function, kinase, peptidase, and transporter. Multiple ways of sorting columns and rows are also provided for comparative analysis and visualization of multiple samples.
Keywords: cancer, visualization, database, functional annotation
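The input-parsing and colour-gradient step can be sketched in a few lines. The two-column file content, the column names, and the bin edges below are invented for illustration; C-eXpress's actual format and gradient are not specified here.

```python
# Sketch: parse a simple ID/expression text file and bin levels for a
# heat-map colour gradient, as in the C-eXpress input step.

raw = """GeneID\tRPKM
NP_000001\t0.5
NP_000002\t12.0
NP_000003\t250.0
"""

def parse_expression(text):
    """Return {gene_id: expression_level} from a tab-separated table."""
    rows = [line.split("\t") for line in text.strip().splitlines()[1:]]
    return {gene: float(value) for gene, value in rows}

def colour_bin(rpkm):
    """Assign a heat-map bin: low / medium / high expression (toy cutoffs)."""
    if rpkm < 1.0:
        return "low"
    return "medium" if rpkm < 100.0 else "high"

table = parse_expression(raw)
summary = {gene: colour_bin(v) for gene, v in table.items()}
print(summary)
```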
Procedia PDF Downloads 615
29697 Systematic Review of Functional Analysis in Brazil
Authors: Felipe Magalhaes Lemos
Abstract:
Functional behavior analysis is a procedure that has been studied by behavior analysts for several decades. In Brazil, we still have few studies in the area, so we decided to carry out a systematic review of the articles published in the area by Brazilians. A search was done on the following scientific article registration sites: PsycINFO, ERIC, ISI Web of Science, and the Virtual Health Library. The review includes (a) peer-reviewed studies that (b) were carried out in Brazil, containing (c) functional assessment as a pre-treatment, through (d) experimental procedures, direct or indirect observation, and measurement of behavior problems, (e) demonstrating a relationship between environmental events and behavior. During the review, 234 papers were found; however, only 9 were included in the final analysis. Of the 9 articles extracted, only 2 presented functional analysis procedures with manipulation of environmental variables, while the other 7 presented different procedures for a descriptive behavior assessment. Only the two studies using "functional analysis" used graphs to demonstrate the prevalent function of the behavior. The other studies described procedures and did not make clear the causal relationship between environment and behavior. There is still confusion in Brazil regarding the terms "functional analysis", "descriptive assessment", and "contingency analysis", which are generally treated in the same way. This study shows that few articles are published with a focus on functional analysis in Brazil.
Keywords: behavior, contingency, descriptive assessment, functional analysis
Procedia PDF Downloads 142
29696 Automatic Multi-Label Image Annotation System Guided by Firefly Algorithm and Bayesian Method
Authors: Saad M. Darwish, Mohamed A. El-Iskandarani, Guitar M. Shawkat
Abstract:
Nowadays, the amount of available multimedia data is continuously on the rise, and finding a required image is a challenging task for an ordinary user. Content-based image retrieval (CBIR) computes relevance based on the visual similarity of low-level image features such as color and texture. However, there is a gap between these low-level visual features and the semantic meanings required by applications. The typical method of bridging the semantic gap is automatic image annotation (AIA), which extracts semantic features using machine learning techniques. In this paper, a multi-label image annotation system guided by the Firefly algorithm and a Bayesian method is proposed. Firstly, images are segmented using maximum intra-cluster variance and the Firefly algorithm, a swarm-based approach with high convergence speed and low computation cost that searches for the optimal multiple thresholds. Feature extraction techniques based on color features and region properties are then applied to obtain the representative features. After that, the images are annotated using a translation model based on the Net Bayes system, which is efficient for multi-label learning, with high precision and low complexity. Experiments are performed using the Corel database. The results show that the proposed system outperforms traditional ones for automatic image annotation and retrieval.
Keywords: feature extraction, feature selection, image annotation, classification
Procedia PDF Downloads 584
29695 The Omani Learner of English Corpus: Source and Tools
Authors: Anood Al-Shibli
Abstract:
Designing a learner corpus is not an easy task, because dealing with learners’ language involves many variables that might affect the results of any study based on learners’ language production (spoken and written). It is also essential to design a learner corpus systematically, especially when it is intended as a reference for language research. Therefore, the design of the Omani Learner of English Corpus (OLEC) has undergone many explicit and systematic considerations. These criteria can be regarded as the foundation for designing any learner corpus to be exploited effectively in studies of language use and language learning. In addition, OLEC is a manually error-annotated corpus. Error annotation in learner corpora is essential; however, it is time-consuming and prone to errors. Consequently, a navigating tool was designed to help the annotators insert error codes, in order to make the error-annotation process more efficient and consistent. To assure accuracy, an error-annotation procedure was followed to annotate OLEC, and some preliminary findings are noted. One of the main results of this procedure is the creation of an error-annotation system based on Omani learners’ production of English. Because OLEC is still in its first stages, the primary findings relate to only one proficiency level and one error type, namely verb-related errors. It is found that the Omani learners in OLEC tend to make more errors in forming the verb, followed by problems in verb agreement. Comparing the results to other error-based studies indicates that Omani learners tend to make basic verb errors of the kind found at lower levels of proficiency.
To this end, it is essential to admit that examining learners’ errors can give insights into language acquisition and language learning; most errors do not happen randomly but occur systematically among language learners.
Keywords: error-annotation system, error-annotation manual, learner corpora, verbs related errors
Procedia PDF Downloads 139
29694 The Automatisation of Dictionary-Based Annotation in a Parallel Corpus of Old English
Authors: Ana Elvira Ojanguren Lopez, Javier Martin Arista
Abstract:
The aims of this paper are to present the automatisation procedure adopted in the implementation of a parallel corpus of Old English and to assess the progress of automatisation with respect to tagging, annotation, and lemmatisation. The corpus consists of an aligned parallel text with a word-for-word comparison of Old English and English that provides the Old English segment with inflectional-form tagging (gloss, lemma, category, and inflection) and lemma annotation (spelling, meaning, inflectional class, paradigm, word-formation, and secondary sources). This parallel corpus is intended to fill a gap in the field of Old English, in which no parallel and/or lemmatised corpora are available, while the average amount of corpus annotation is low. With this background, this presentation has two main parts. The first part, which focuses on tagging and annotation, selects the layouts and fields of lexical databases that are relevant for these tasks. Most of the information used for the annotation of the corpus can be retrieved from the lexical and morphological database Nerthus and the database of secondary sources Freya. These are the sources of linguistic and metalinguistic information used for the annotation of the lemmas of the corpus, including morphological and semantic aspects as well as references to the secondary sources that deal with the lemmas in question. Although substantially adapted and re-interpreted, the lemmatised part of these databases draws on the standard dictionaries of Old English, including The Student's Dictionary of Anglo-Saxon, An Anglo-Saxon Dictionary, and A Concise Anglo-Saxon Dictionary. The second part of this paper deals with lemmatisation. It presents the lemmatiser Norna, which has been implemented in FileMaker software. It is based on a concordance of, and an index to, the Dictionary of Old English Corpus, which comprises around three thousand texts and three million words.
In its present state, the lemmatiser Norna can assign a lemma to around 80% of textual forms automatically, by searching the index and the concordance for prefixes, stems, and inflectional endings. The conclusions of this presentation insist on the limits of the automatisation of dictionary-based annotation in a parallel corpus. While tagging and annotation are largely automatic even at the present stage, the automatisation of alignment is pending future research. Lemmatisation and morphological tagging are expected to be fully automatic in the near future, once the database of secondary sources Freya and the lemmatiser Norna have been completed.
Keywords: corpus linguistics, historical linguistics, Old English, parallel corpus
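The search strategy described above, matching stems and inflectional endings against a lexicon, can be sketched as follows. The endings list and lexicon entries are a tiny illustrative sample, not Norna's actual data, and real Old English lemmatisation handles far more morphology than this.

```python
# Toy sketch of ending-stripping lemmatisation: try the full form, then
# strip known inflectional endings and look the remainder up in a stem
# lexicon. All data below is invented for illustration.

ENDINGS = ["um", "as", "es", "e", "a"]            # a few inflectional endings
LEXICON = {"stan": "stān", "cyning": "cyning"}    # stem -> lemma

def lemmatise(form):
    """Return the lemma for an attested form, or None if no match is found."""
    if form in LEXICON:
        return LEXICON[form]
    for ending in ENDINGS:
        if form.endswith(ending) and form[: -len(ending)] in LEXICON:
            return LEXICON[form[: -len(ending)]]
    return None  # left for manual lemmatisation

print(lemmatise("stanas"))    # 'stān' (plural -as stripped)
print(lemmatise("cyninges"))  # 'cyning' (genitive -es stripped)
```

Forms that fall through (the remaining ~20% in the project's reported figures) would be queued for manual lemmatisation.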
Procedia PDF Downloads 210
29693 De Novo Assembly and Characterization of the Transcriptome during Seed Development, and Generation of Genic-SSR Markers in Pomegranate (Punica granatum L.)
Authors: Ozhan Simsek, Dicle Donmez, Burhanettin Imrak, Ahsen Isik Ozguven, Yildiz Aka Kacar
Abstract:
Pomegranate (Punica granatum L.) is known to be one of the oldest edible fruit tree species, with a wide global geographical distribution. Fruits of two defined varieties (Hicaznar and 33N26) were taken at intervals after pollination and fertilization, at different sizes, and seed samples were used for transcriptome sequencing. Primary sequencing was produced on an Illumina HiSeq™ 2000. The raw reads were first subjected to quality control (QC), filtered into clean reads, and aligned to the reference sequences. De novo analysis was performed to detect genes expressed in the seeds of the pomegranate varieties, and downstream analysis was performed to determine differentially expressed genes. We generated about 27.09 Gb of bases in total after Illumina HiSeq sequencing. When all samples were assembled together, we obtained 59,264 unigenes; the total length, average length, N50, and GC content of the unigenes are 84,547,276 bp, 1,426 bp, 2,137 bp, and 46.20%, respectively. Unigenes were annotated with seven functional databases; in total, 42,681 (NR: 72.02%), 39,660 (NT: 66.92%), 30,790 (Swiss-Prot: 51.95%), 20,212 (COG: 34.11%), 27,689 (KEGG: 46.72%), 12,328 (GO: 20.80%), and 33,833 (InterPro: 57.09%) unigenes were annotated. With the functional annotation results, we detected 42,376 CDS and 4,999 SSRs distributed across 16,143 unigenes.
Keywords: next generation sequencing, SSR, RNA-Seq, Illumina
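Two of the assembly statistics reported above, N50 and GC content, are easy to define precisely. The contig lengths and the sequence in this sketch are toy values, not the pomegranate data.

```python
# Sketch: computing N50 and GC content for a set of assembled contigs.

def n50(lengths):
    """Smallest length among the largest contigs covering >= half the total."""
    total, running = sum(lengths), 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length

def gc_content(seq):
    """Percentage of G and C bases in a nucleotide sequence."""
    return 100.0 * sum(base in "GC" for base in seq.upper()) / len(seq)

print(n50([2, 3, 4, 5, 6]))   # 5: the largest contigs 6 + 5 cover >= 20/2
print(gc_content("ATGCGC"))   # 4 of 6 bases are G or C
```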
Procedia PDF Downloads 239
29692 Computational Pipeline for Lynch Syndrome Detection: Integrating Alignment, Variant Calling, and Annotations
Authors: Rofida Gamal, Mostafa Mohammed, Mariam Adel, Marwa Gamal, Marwa kamal, Ayat Saber, Maha Mamdouh, Amira Emad, Mai Ramadan
Abstract:
Lynch Syndrome is an inherited genetic condition associated with an increased risk of colorectal and other cancers. Detecting Lynch Syndrome in individuals is crucial for early intervention and preventive measures. This study proposes a computational pipeline for Lynch Syndrome detection that integrates alignment, variant calling, and annotation. The pipeline leverages popular tools such as FastQC, Trimmomatic, BWA, bcftools, and ANNOVAR to process the input FASTQ file, perform quality trimming, align reads to the reference genome, call variants, and annotate them. The pipeline was applied to a dataset of Lynch Syndrome cases, and its performance was evaluated. The quality-check step ensured the integrity of the sequencing data, while the trimming process removed low-quality bases and adaptors. In the alignment step, the reads were mapped to the reference genome, and the subsequent variant-calling step identified potential genetic variants. The annotation step provided functional insights into the detected variants, including their effects on known Lynch Syndrome-associated genes. The results obtained from the pipeline revealed Lynch Syndrome-related positions in the genome, providing valuable information for further investigation and clinical decision-making. The pipeline's effectiveness was demonstrated through its ability to streamline the analysis workflow and identify potential genetic markers associated with Lynch Syndrome. The pipeline thus presents a comprehensive and efficient approach to Lynch Syndrome detection, contributing to early diagnosis and intervention; its modularity and flexibility enable customization and adaptation to various datasets and research settings.
Further optimization and validation will be necessary to enhance performance and applicability across diverse populations.
Keywords: Lynch Syndrome, computational pipeline, alignment, variant calling, annotation, genetic markers
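The final filtering step, keeping annotated variants that fall in the mismatch-repair genes associated with Lynch Syndrome (MLH1, MSH2, MSH6, PMS2), can be sketched as follows. The VCF lines and the simplified `GENE=` INFO annotation are assumptions for illustration; real ANNOVAR output uses a richer annotation format.

```python
# Sketch: filter annotated VCF records down to Lynch-associated MMR genes.

LYNCH_GENES = {"MLH1", "MSH2", "MSH6", "PMS2"}

vcf = """##fileformat=VCFv4.2
#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO
3\t37034946\t.\tC\tT\t60\tPASS\tGENE=MLH1
7\t6026708\t.\tG\tA\t55\tPASS\tGENE=PMS2
12\t25398284\t.\tC\tA\t70\tPASS\tGENE=KRAS
"""

def lynch_variants(vcf_text):
    """Return (chrom, pos, ref, alt, gene) for variants in Lynch genes."""
    hits = []
    for line in vcf_text.strip().splitlines():
        if line.startswith("#"):
            continue  # skip meta and header lines
        chrom, pos, _, ref, alt, _, _, info = line.split("\t")
        gene = dict(field.split("=") for field in info.split(";"))["GENE"]
        if gene in LYNCH_GENES:
            hits.append((chrom, int(pos), ref, alt, gene))
    return hits

print(lynch_variants(vcf))  # the MLH1 and PMS2 variants; KRAS is excluded
```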
Procedia PDF Downloads 72
29691 Quantitative Analysis of the Functional Characteristics of Urban Complexes Based on Station-City Integration: Fifteen Case Studies of European, North American, and East Asian Railway Stations
Authors: Dai Yizheng, Chen-Yang Zhang
Abstract:
As station-city integration has been widely accepted as a strategy for mixed-use development, a quantitative analysis of the functional characteristics of urban complexes based on station-city integration is urgently needed. Taking 15 railway stations in European, North American, and East Asian cities as the research objects, this study analyzes their functional proportion, functional positioning, and functional correlation with respect to four categories of functional facilities, for both railway and subway passenger flows. We found that (1) the functional proportions of urban complexes were mainly concentrated in three models: complementary, dominant, and equilibrium; (2) a mathematical model driven by the functional proportion was created to evaluate the functional positioning of an urban complex at three scales: station area, city, and region; and (3) the strength of the correlation between functional area and passenger flow was revealed via data analysis using Pearson's correlation coefficient. Finally, the findings of this study provide a valuable reference for research on similar topics in other countries that are developing station-city integration.
Keywords: urban complex, station-city integration, mixed-use, function, quantitative analysis
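The correlation step in (3) uses Pearson's r. The floor-area and passenger-flow figures in this sketch are invented for illustration; only the formula is standard.

```python
# Sketch: Pearson's correlation coefficient between functional floor area
# and passenger flow across stations. All figures below are toy values.
import math

area = [12.0, 20.0, 31.0, 45.0, 60.0]   # e.g. retail floor area (1000 m^2)
flow = [18.0, 25.0, 40.0, 52.0, 75.0]   # e.g. daily passengers (1000s)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson(area, flow), 3))  # strongly positive for these toy figures
```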
Procedia PDF Downloads 113
29690 Extraction of Text Subtitles in Multimedia Systems
Authors: Amarjit Singh
Abstract:
In this paper, a method for the extraction of text subtitles in large videos is proposed. Video data needs to be annotated for many multimedia applications, and text is incorporated in digital video to provide useful information about the video, so there is a need to detect text present in video for understanding and indexing. This is achieved in two steps: the first step is text localization, and the second step is text verification. The method of text detection can be extended to text recognition, which finds applications in automatic video indexing, video annotation, and content-based video retrieval. The method has been tested on various types of videos.
Keywords: video, subtitles, extraction, annotation, frames
Procedia PDF Downloads 599
29689 VideoAssist: A Labelling Assistant to Increase Efficiency in Annotating Video-Based Fire Dataset Using a Foundation Model
Authors: Keyur Joshi, Philip Dietrich, Tjark Windisch, Markus König
Abstract:
In the field of surveillance-based fire detection, the volume of incoming data is increasing rapidly. However, labeling a large industrial dataset is costly due to the high annotation costs associated with current state-of-the-art methods, which often require bounding boxes or segmentation masks for model training. This paper introduces VideoAssist, a video annotation solution that utilizes a video-based foundation model to annotate entire videos with minimal effort, requiring the labeling of bounding boxes for only a few keyframes. To the best of our knowledge, VideoAssist is the first method to significantly reduce the effort required for labeling fire detection videos. The approach offers bounding-box and segmentation annotations for the video dataset with minimal manual effort. Results demonstrate that the performance of labels annotated by VideoAssist is comparable to that of human annotations, indicating the potential applicability of this approach in fire detection scenarios.
Keywords: fire detection, label annotation, foundation models, object detection, segmentation
Procedia PDF Downloads 5
29688 Comparison of Rumen Microbial Analysis Pipelines Based on 16S rRNA Gene Sequencing
Authors: Xiaoxing Ye
Abstract:
To investigate complex rumen microbial communities, 16S ribosomal RNA (rRNA) gene sequencing is widely used. Here, we evaluated the impact of bioinformatics pipelines on the observation of OTUs and on the taxonomic classification of 750 cattle rumen microbial samples by comparing three commonly used pipelines (LotuS, UPARSE, and QIIME) with Usearch. In the LotuS-based analyses, 189 archaeal and 3,894 bacterial OTUs were observed. The number of OTUs observed in the Usearch analysis was significantly larger than in the LotuS results: we discovered 1,495 OTUs for archaea and 92,665 OTUs for bacteria. In addition, taxonomic assignments were made for the rumen microbial samples. All pipelines gave consistent taxonomic annotations from the phylum to the genus level. Differences in relative abundance were calculated at all microbial levels, including Bacteroidetes (QIIME: 72.2%, Usearch: 74.09%) and Firmicutes (QIIME: 18.3%, Usearch: 20.20%) for the bacterial phyla; Methanobacteriales (QIIME: 64.2%, Usearch: 45.7%) for the archaeal class; and Methanobacteriaceae (QIIME: 35%, Usearch: 45.7%) and Methanomassiliicoccaceae (QIIME: 35%, Usearch: 31.13%) for the archaeal families. However, the most prevalent archaeal class varied between the two annotation pipelines: Thermoplasmata was the top class according to the QIIME annotation, whereas Methanobacteria was the top class according to Usearch.
Keywords: cattle rumen, rumen microbial, 16S rRNA gene sequencing, bioinformatics pipeline
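The relative abundances compared above are simple percentages of per-taxon read counts. The counts in this sketch are toy values chosen to reproduce two of the Usearch figures, not the actual cattle-rumen data.

```python
# Sketch: converting taxon read counts to relative abundances (percent).

def relative_abundance(counts):
    """Convert {taxon: read_count} to {taxon: percent of sample total}."""
    total = sum(counts.values())
    return {taxon: 100.0 * n / total for taxon, n in counts.items()}

sample = {"Bacteroidetes": 7409, "Firmicutes": 2020, "Other": 571}
abundance = relative_abundance(sample)
print({t: round(p, 2) for t, p in abundance.items()})
```

Pipeline comparisons like the one above amount to computing these percentages per pipeline and contrasting them taxon by taxon.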
Procedia PDF Downloads 86
29687 Attitude in Academic Writing (CAAW): Corpus Compilation and Annotation
Authors: Hortènsia Curell, Ana Fernández-Montraveta
Abstract:
This paper presents the creation, development, and analysis of a corpus designed to study the presence of attitude markers and authorial stance in research articles from two different areas of linguistics (theoretical linguistics and sociolinguistics). These two disciplines are expected to behave differently in this respect, given the disparity in their discursive conventions. Attitude markers in this work are understood as the linguistic elements (adjectives, nouns, and verbs) used to convey the writer's stance towards the content presented in the article, and they are crucial in understanding writer-reader interaction and the writer's position. These attitude markers are divided into three broad classes: assessment, significance, and emotion. In addition, we consider first-person singular and plural pronouns and possessives, modal verbs, and passive constructions, which are other linguistic elements expressing the author's stance. The corpus, the Corpus of Attitude in Academic Writing (CAAW), comprises 21 articles collected from six journals indexed in the JCR. These articles were originally written in English by a single native-speaker author from the UK or USA and were published between 2022 and 2023. The corpus contains approximately 222,400 words: 106,422 from theoretical linguistics journals (Lingua, Linguistic Inquiry, and Journal of Linguistics) and 116,022 from sociolinguistics journals (International Journal of the Sociology of Language, Language in Society, and Journal of Sociolinguistics). Together with the corpus, we present the tool created for its compilation and storage, along with a tool for automatic annotation. The steps followed in compiling the corpus were as follows. First, the articles were selected according to the parameters explained above. Second, they were downloaded and converted to txt format. Finally, examples, direct quotes, section titles, and references were eliminated, since they do not involve the author's stance. The resulting texts were the input for the annotation of the linguistic features related to stance. As for the annotation, two articles (one from each subdiscipline) were annotated manually by the two researchers. An existing list was used as a baseline, and further attitude markers were identified, together with the other elements mentioned above. Once a consensus was reached, the rest of the articles were annotated automatically using the tool created for this purpose. The annotated corpus will serve as a resource for scholars working in discourse analysis (in both linguistics and communication) and related fields, since it offers new insights into the expression of attitude. The tools created for the compilation and annotation of the corpus will be useful for studying authorial attitude and stance in articles from any academic discipline: new data can be uploaded, and the list of markers can be enlarged. Finally, the tool can be extended to other languages, which will allow cross-linguistic studies of authorial stance.
Keywords: academic writing, attitude, corpus, English
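The automatic annotation step can be pictured as a lexicon lookup over the marker list. The sketch below is a toy version with an invented three-word lexicon; the actual CAAW marker list and annotation tool are far richer:

```python
import re

# Invented three-word lexicon; the real marker list is much larger
# and was built from a baseline list plus manual consensus annotation
MARKERS = {"important": "assessment",
           "significant": "significance",
           "surprising": "emotion"}

def annotate(text, lexicon=MARKERS):
    """Return (offset, word, class) triples for every attitude
    marker found in the text."""
    hits = []
    for m in re.finditer(r"[A-Za-z]+", text):
        cls = lexicon.get(m.group().lower())
        if cls:
            hits.append((m.start(), m.group(), cls))
    return hits

hits = annotate("This significant result is surprising.")
```

Because the lexicon is just data, enlarging the marker list or swapping in another language's markers extends the tool without changing the matching code, which is the extensibility the abstract describes.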
Procedia PDF Downloads 72
29686 Spatial Resilience of the Ageing Population in the Romanian Functional Urban Areas
Authors: Marinela Istrate, Ionel Muntele, Alexandru Bănică
Abstract:
The authors propose the identification, analysis, and prognosis of the quantitative and qualitative evolution of the elderly population in functional urban areas. The paper analyses several representative indicators (the share of the elderly population, the ageing index, the dynamic index of economic ageing of the productive population, etc.) and elaborates an integrated indicator to help differentiate the forms of population ageing in the 48 functional urban areas that were defined, based on demographic and socio-economic criteria, for all large and medium-sized cities in Romania.
Keywords: ageing, demographic transition, functional urban areas, spatial resilience
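One of the indicators mentioned, the ageing index, is conventionally computed as the number of elderly persons per 100 young persons. A minimal sketch with illustrative (invented) population figures:

```python
def ageing_index(pop_65_plus, pop_0_14):
    """Ageing index: persons aged 65+ per 100 persons aged 0-14,
    one of the indicators compared across functional urban areas."""
    return 100.0 * pop_65_plus / pop_0_14

idx = ageing_index(18000, 15000)  # illustrative figures, not Romanian data
```

A value above 100 means the elderly outnumber the young, which is the kind of threshold an integrated indicator can combine with other measures to classify urban areas.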
Procedia PDF Downloads 350
29685 PaSA: A Dataset for Patent Sentiment Analysis to Highlight Patent Paragraphs
Authors: Renukswamy Chikkamath, Vishvapalsinhji Ramsinh Parmar, Christoph Hewel, Markus Endres
Abstract:
Given a patent document, identifying distinct semantic annotations is an interesting research problem. Text annotation helps patent practitioners, such as examiners and patent attorneys, quickly identify the key arguments of an invention, providing timely markup of the patent text. In manual patent analysis, it is common practice to mark paragraphs with their semantic role to improve readability, but this semantic annotation process is laborious and time-consuming. To alleviate this problem, we propose a dataset for training machine learning algorithms to automate the highlighting process. The contributions of this work are: i) a multi-class dataset of 150k samples built by traversing USPTO patents over a decade; ii) statistics and distributions of the data articulated through exploratory data analysis; iii) baseline machine learning models that use the dataset to address the patent paragraph highlighting task; and iv) a path for extending this work with deep learning and domain-specific pre-trained language models to develop a highlighting tool. This work assists patent practitioners in highlighting semantic information automatically and helps make patent analysis sustainable and efficient through machine learning.
Keywords: machine learning, patents, patent sentiment analysis, patent information retrieval
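A baseline for the paragraph highlighting task can be as simple as bag-of-words similarity to labeled examples. The sketch below is a nearest-neighbour toy with invented paragraphs and labels, not one of the paper's actual baselines, but it shows the shape of the multi-class problem:

```python
from collections import Counter
import math

def bow(text):
    """Bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two count vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values()))
    den *= math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def classify(paragraph, labeled):
    """Label a paragraph with the class of its most similar example."""
    vec = bow(paragraph)
    return max(labeled, key=lambda pair: cosine(vec, bow(pair[0])))[1]

# Invented training paragraphs with made-up semantic labels
train = [("the invention solves a technical problem", "advantage"),
         ("prior art discloses a similar apparatus", "prior_art")]
label = classify("this invention solves the stated problem", train)
```

The paper's baselines and the proposed deep-learning extensions replace this crude similarity with learned representations, but the input/output contract (paragraph in, semantic class out) is the same.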
Procedia PDF Downloads 88
29684 A Framework for Secure Information Flow Analysis in Web Applications
Authors: Ralph Adaimy, Wassim El-Hajj, Ghassen Ben Brahim, Hazem Hajj, Haidar Safa
Abstract:
Huge amounts of data and personal information are sent to and retrieved from web applications on a daily basis. Every application has its own confidentiality and integrity policies. Violating these policies can have a broad negative impact on the involved company's financial status, while enforcing them is very hard even for developers with a good security background. In this paper, we propose a framework that enforces security-by-construction in web applications. Minimal developer effort is required: the developer only needs to annotate database attributes with a security class. The web application code is then converted into an intermediary representation, called an Extended Program Dependence Graph (EPDG). Using the EPDG, the provided annotations are propagated through the application code and checked against generic security enforcement rules that were carefully designed to detect insecure information flows as soon as they occur. As a result, any violation of the data's confidentiality or integrity policies is reported. As a proof of concept, two PHP web applications, Hotel Reservation and Auction, were used for testing and validation. The proposed system was able to catch all the existing insecure information flows at their source. Moreover, to highlight the simplicity of the suggested approach compared with existing approaches, two professional web developers assessed the annotation tasks needed in the presented case studies and provided very positive feedback on the simplicity of the annotation task.
Keywords: web applications security, secure information flow, program dependence graph, database annotation
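The core check behind such enforcement rules is a lattice comparison of security classes: information may only flow from a lower class to an equal or higher one. A minimal sketch with a hypothetical two-level lattice (the framework's actual classes and rules are richer):

```python
# Hypothetical two-level lattice; real policies may define more classes
LEVELS = {"LOW": 0, "HIGH": 1}

def flow_allowed(src, dst):
    """A flow src -> dst is secure iff the source's security class
    is no higher than the destination's (no leak downward)."""
    return LEVELS[src] <= LEVELS[dst]

# A HIGH -> LOW assignment is the kind of flow dependence-graph
# propagation would surface and the enforcement rules would flag
flows = [("HIGH", "LOW"), ("LOW", "HIGH"), ("LOW", "LOW")]
violations = [(s, d) for s, d in flows if not flow_allowed(s, d)]
```

In the framework, the `src`/`dst` pairs come from edges of the EPDG rather than an explicit list, which is what lets violations be caught at their source.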
Procedia PDF Downloads 469
29683 Functional Decomposition Based Effort Estimation Model for Software-Intensive Systems
Authors: Nermin Sökmen
Abstract:
An effort estimation model is needed for software-intensive projects that consist of hardware, embedded software, or some combination of the two, as well as high-level software solutions. This paper first focuses on functional decomposition techniques to measure the functional complexity of a computer system and investigates its impact on system development effort. It then examines the effects of technical difficulty and design team capability in order to construct the best effort estimation model. Using traditional regression analysis, the study develops a system development effort estimation model that takes functional complexity, technical difficulty, and design team capability as input parameters. Finally, the assumptions of the model are tested.
Keywords: functional complexity, functional decomposition, development effort, technical difficulty, design team capability, regression analysis
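The resulting model has the familiar multiple-regression form. The sketch below uses illustrative coefficients (not the paper's fitted values) to show how an effort estimate would be computed from the three inputs:

```python
def estimate_effort(fc, td, tc, b0=2.0, b1=0.8, b2=1.5, b3=-0.6):
    """Effort = b0 + b1*functional_complexity + b2*technical_difficulty
    + b3*team_capability. Coefficients here are illustrative only;
    note that higher team capability reduces the estimate (b3 < 0)."""
    return b0 + b1 * fc + b2 * td + b3 * tc

effort = estimate_effort(fc=10, td=3, tc=5)
```

In practice the coefficients are obtained by least-squares fitting over historical project data, and the regression assumptions (linearity, residual normality) are what the paper's final step tests.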
Procedia PDF Downloads 291
29682 Next Generation Sequencing Analysis of Circulating MiRNAs in Rheumatoid Arthritis and Osteoarthritis
Authors: Khalda Amr, Noha Eltaweel, Sherif Ismail, Hala Raslan
Abstract:
Introduction: Osteoarthritis (OA) is the most common form of arthritis and involves the wearing away of the cartilage that caps the bones in the joints, whereas rheumatoid arthritis (RA) is an autoimmune disease in which the immune system attacks the joints, beginning with their lining. In this study, we aimed to identify the top deregulated miRNAs that might drive the pathogenesis of both diseases. Methods: Eight cases were recruited: 4 RA patients, 2 OA patients, and 2 healthy controls. Total RNA was isolated from plasma and subjected to miRNA profiling by NGS. Sequencing libraries were constructed using the NEBNext Ultra Small RNA Sample Prep Kit for Illumina (NEB, USA), according to the manufacturer's instructions. Sample quality was checked using FastQC and MultiQC. Results were compared between RA and controls and between OA and controls. Target gene prediction and functional annotation of the deregulated miRNAs were performed using Mienturnet. The top deregulated miRNAs in each disease were selected for further validation using qRT-PCR. Results: The average number of sequencing reads per sample exceeded 2.2 million, of which approximately 57% mapped to the human reference genome. The top differentially expressed miRNAs (DEMs) in RA vs. controls were miR-6724-5p, miR-1469, and miR-194-3p (up) and miR-1468-5p and miR-486-3p (down). In comparison, the top DEMs in OA vs. controls were miR-1908-3p, miR-122b-3p, and miR-3960 (up) and miR-1468-5p and miR-15b-3p (down). Functional enrichment of the selected top deregulated miRNAs revealed the most highly enriched KEGG pathways and GO terms. Six of the deregulated miRNAs (miR-15b, -128, -194, -328, -542, and -3180) had multiple target genes in the RA pathway, so they are more likely to affect RA pathogenesis. Conclusion: Six of the studied deregulated miRNAs (miR-15b, -128, -194, -328, -542, and -3180) may be strongly involved in disease pathogenesis. Further functional studies are crucial to assess their functions and actual target genes.
Keywords: next generation sequencing, miRNAs, rheumatoid arthritis, osteoarthritis
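Up- and down-regulation calls like those above typically rest on a log2 fold-change between case and control expression. A minimal sketch with invented read counts and a pseudocount (the study's actual pipeline and thresholds are not specified in the abstract):

```python
import math

def log2_fold_change(mean_case, mean_control, pseudo=1.0):
    """log2 fold change with a pseudocount; positive values mean
    up-regulation in cases, negative values down-regulation."""
    return math.log2((mean_case + pseudo) / (mean_control + pseudo))

up = log2_fold_change(127.0, 31.0)    # invented counts: up in cases
down = log2_fold_change(15.0, 63.0)   # invented counts: down in cases
```

Tools then rank candidate DEMs by fold change together with a significance test, which is how a "top deregulated" list such as the one above is produced.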
Procedia PDF Downloads 93
29681 Existence Theory for First Order Functional Random Differential Equations
Authors: Rajkumar N. Ingle
Abstract:
In this paper, the existence of a solution of a nonlinear functional random differential equation of the first order is proved under Carathéodory conditions. The study of functional random differential equations is important in the random analysis of the dynamical systems of universal phenomena. Objectives: Nonlinear functional random differential equations (N.F.R.D.E.s) are useful to scientists, engineers, and mathematicians engaged in analyzing universal random phenomena governed by nonlinear random initial value problems for differential equations; applications arise in the theory of diffusion and heat conduction. Methodology: Using concepts from probability theory and functional analysis, existence theorems for the nonlinear F.R.D.E. are proved using tools such as fixed point theorems. Significance of the study: Our contribution is the generalization of some well-known results in the theory of nonlinear F.R.D.E.s. Further, it seems that our study will be useful to scientists, engineers, economists, and mathematicians in their endeavors to analyze the nonlinear random problems of the universe in a better way.
Keywords: random fixed point theorem, functional random differential equation, N.F.R.D.E., universal random phenomenon
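As a sketch (with notation assumed here rather than taken from the paper), the class of problems studied can be written as a first-order random initial value problem:

```latex
x'(t,\omega) = f\bigl(t, x(t,\omega), \omega\bigr), \quad t \in [t_0, t_0 + a],
\qquad x(t_0,\omega) = x_0(\omega),
```

where \(\omega\) ranges over a probability space and \(f\) satisfies Carathéodory-type conditions: measurable in \(t\) for each fixed \(x\), continuous in \(x\) for almost every \(t\), and dominated by an integrable function. Existence then follows by applying a (random) fixed point theorem to the associated integral operator.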
Procedia PDF Downloads 500
29680 Functional Dyspepsia and Irritable Bowel Syndrome: Life Sketches of Functional Illnesses (Non-Organic) in West Bengal, India
Authors: Urmita Chakraborty
Abstract:
To start with, organic illnesses are no longer the only health difficulties under consideration; functional illnesses, which are emotional in origin, have become the focus of many investigations. In the present study, an attempt has been made to study the psychological nature of functional gastrointestinal disorders (FGIDs) in West Bengal. In the specialty of gastroenterology, medically unexplained, symptom-based conditions are known as functional gastrointestinal disorders. Here, functional dyspepsia (FD) and irritable bowel syndrome (IBS) were taken up for investigation, and 72 cases are discussed in this context. The results of the investigation were analyzed within a qualitative framework. Theoretical concepts concerning persistent thoughts and behaviors are delineated in the analysis, and processes of self-categorization are applied as well. Aspects of attachment and affect regulation, as well as meta-cognitive appraisals, are further considered in the depiction.
Keywords: functional dyspepsia, irritable bowel syndrome, self-categorization
Procedia PDF Downloads 564
29679 Upgrading along Value Chains: Strategies for Thailand's Functional Milk Industry
Authors: Panisa Harnpathananun
Abstract:
This paper is a 'practical experience analysis' that aims to analyze the critical obstacles hampering the growth of the functional milk industry and to suggest recommendations for overcoming them. Using Sectoral Innovation System (SIS) analysis along the value chain, it finds that restrictive regulation of the milk disinfection process, the difficulty dairy entrepreneurs face in obtaining health claim approval for functional food and beverages, and the lack of an intermediary between entrepreneurs and certification bodies for the certification of functional foods and milk are the major problems that need to be resolved. Consequently, policy recommendations are proposed to tackle the problems occurring throughout the value chain. Upstream, a collaborative platform based on the quadruple helix model is proposed, in the form of effective dairy cooperatives. Midstream, regulatory issues around new processes, such as extended shelf life (ESL) or prolonged milk, need to be addressed, which could expand global market opportunities. Downstream, an intermediary mechanism between entrepreneurs and certification bodies could assist in the certification of functional milk, especially the 'health claim' approval process.
Keywords: Thailand, functional milk, supply chain, quadruple helix, intermediary, functional food
Procedia PDF Downloads 147
29678 Saudi Twitter Corpus for Sentiment Analysis
Authors: Adel Assiri, Ahmed Emam, Hmood Al-Dossari
Abstract:
Sentiment analysis (SA) has received growing attention in Arabic language research. However, few studies have directly applied SA to Arabic dialects, due to the lack of publicly available datasets. This paper partially bridges this gap by focusing on one of the Arabic dialects, the Saudi dialect. It presents an annotated dataset of 4,700 items for Saudi dialect sentiment analysis, with an inter-annotator agreement of K = 0.807. Our next step is to extend this corpus and to create a large-scale lexicon for the Saudi dialect from the corpus.
Keywords: Arabic, sentiment analysis, Twitter, annotation
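The reported agreement figure (K = 0.807) is a Cohen's kappa over the annotators' labels. A minimal sketch of the statistic on a toy pair of label sequences (the corpus itself has 4,700 items):

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each annotator's label distribution."""
    n = len(labels_a)
    p_obs = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    cats = set(labels_a) | set(labels_b)
    p_exp = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)

# Toy annotations from two hypothetical annotators
k = cohens_kappa(["pos", "pos", "neg", "neg"],
                 ["pos", "pos", "neg", "pos"])
```

Values above 0.8, like the one reported, are conventionally read as near-perfect agreement, which supports the reliability of the annotation.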
Procedia PDF Downloads 628