Search results for: corpus luteum

227 Comparison of Verb Complementation Patterns in Selected Pakistani and British English Newspaper Social Columns: A Corpus-Based Study

Authors: Zafar Iqbal Bhatti

Abstract:

The present research aims to examine and evaluate the frequencies and uses of verb complementation patterns in English newspaper social columns published in Pakistan and Britain. The research will demonstrate that Pakistani English is a non-native variety of English with its own regular and systematic characteristics at the syntactic level, shaped by the native languages and culture of its users, and that differences from British or American English which are systematic and regular are distinctive features of the variety rather than erroneous forms. The objectives are, first, to examine the verb complementation patterns that British and Pakistani social columnists use in relation to their syntactic categories, and second, to compare the verb complementation patterns used in Pakistani and British English newspaper social columns. This study will identify the various verb complementation patterns in Pakistani and British English newspaper social columns and their occurrence and distribution. Word classes express different functions of words, such as action, event, or state of being. This research aims to evaluate whether there are any appreciable differences in the verb complementation patterns used in Pakistani and British English newspaper social columns, and the results will show the range of verb complementation patterns in the selected columns. This study will fill a gap left by previous studies in this field, which explore only a little of the difference between Pakistani and British English newspapers; it will also document the variety of language used in Pakistani and British English newspapers, along with regional and cultural values and variations. The researcher will use AntConc software to extract the data for analysis, using its concordance tool to identify verb complementation patterns in the selected data; the patterns will then be categorized manually, because the same verb can sometimes be used in different patterns for different purposes. A four-month written corpus of the social columns of Pakistani English (PE) and British English (BE) newspapers, from 1st June 2022 to 30th September 2022, will be collected and analyzed. For the analysis of the research questions, 50 social columns will be selected from Pakistani newspapers and 50 from British newspapers, giving a representative sample of data from both varieties. The researcher will manually analyze the complementation pattern of each verb in each sentence and determine how frequently each pattern occurs, using the syntactic characteristics of verb complementation elements as described by Downing and Locke (2006). All verb complementation patterns in the data will be examined, and the frequency and distribution of each pattern will be evaluated using the software.
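
To make the extraction step concrete, the following is a minimal sketch of concordance-style pattern counting in Python. It is an illustration only, not the study's AntConc workflow: the file name, the verb list, and the window size are hypothetical placeholders.

```python
# Minimal sketch of concordance-style extraction and frequency counting,
# approximating the AntConc workflow described in the abstract.
# 'social_columns.txt' and the verb list are hypothetical placeholders.
import re
from collections import Counter

with open("social_columns.txt", encoding="utf-8") as f:
    text = f.read().lower()

tokens = re.findall(r"[a-z']+", text)

TARGET_VERBS = {"give", "gave", "given", "make", "made", "consider", "considered"}
WINDOW = 4  # words of right context kept for manual categorization

concordance_lines = []
pattern_counts = Counter()
for i, tok in enumerate(tokens):
    if tok in TARGET_VERBS:
        right = tokens[i + 1 : i + 1 + WINDOW]
        concordance_lines.append((tok, " ".join(right)))
        # crude proxy for a complementation pattern: verb + next two tokens
        pattern_counts[(tok, tuple(right[:2]))] += 1

for pattern, freq in pattern_counts.most_common(10):
    print(pattern, freq)
```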

Keywords: verb complementation, syntactic categories, newspaper social columns, corpus

Procedia PDF Downloads 47
226 A Critical Discourse Analysis of the Construction of Artists' Reputation by Online Art Magazines

Authors: Thomas Soro, Tim Stott, Brendan O'Rourke

Abstract:

The construction of artistic reputation has been examined within sociology, philosophy, and economics but, barring a few noteworthy exceptions, its discursive aspect has been largely ignored. This is particularly surprising given that contemporary artworks primarily rely on discourse to construct their ontological status. This paper contributes a discourse-analytical perspective to the broad body of literature on artistic reputation by providing an understanding of how it is discursively constructed within the institutional context of online contemporary art magazines. It uses corpora compiled from the websites of e-flux and ARTnews, two leading online contemporary art magazines, to examine how these organisations discursively construct the reputation of artists. By constructing word sketches of the term 'artist', the paper identified the most significant modifiers attributed to artists and the most significant verbs that take 'artist' as an object or subject. The most significant results were analysed through concordances and demonstrated a somewhat surprising lack of evaluative representation. To examine this feature more closely, the paper then analysed three announcement texts from e-flux's site and three review texts from ARTnews' site, comparing the use of modifiers and verbs in the representation of artists, artworks, and institutions. The results of this analysis support the corpus findings, suggesting that artists are rarely represented in evaluative terms. Given the relatively high frequency of evaluation in the representation of artworks and institutions, these results suggest that there may be discursive norms at work in the field of online contemporary art magazines which regulate the use of verbs and modifiers in the evaluation of artists.
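
A word sketch of the kind used in this study can be approximated with dependency parsing. The sketch below is an illustrative stand-in rather than the study's own tooling over the e-flux and ARTnews corpora; the sample sentences are invented.

```python
# Illustrative approximation of a 'word sketch' for "artist":
# collect adjectival modifiers of the noun and verbs taking it as
# subject or object. Requires: pip install spacy, then
# python -m spacy download en_core_web_sm. The sample text is invented.
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")
doc = nlp("The celebrated artist exhibited new work. Critics praised the artist.")

modifiers, verbs_with_artist = Counter(), Counter()
for token in doc:
    if token.lemma_ == "artist":
        modifiers.update(c.lemma_ for c in token.children if c.dep_ == "amod")
        if token.dep_ in ("nsubj", "dobj", "obj"):
            verbs_with_artist[(token.head.lemma_, token.dep_)] += 1

print(modifiers.most_common(5))
print(verbs_with_artist.most_common(5))
```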

Keywords: contemporary art, corpus linguistics, critical discourse analysis, symbolic capital

Procedia PDF Downloads 161
225 Culturable Microbial Diversity and Adaptation Strategy in the Jutulsessen and Ahlmannryggen of Western Dronning Maud Land, Antarctica

Authors: Shiv Mohan Singh, Gwyneth Matcher

Abstract:

To understand the culturable microbial composition and diversity patterns, soil samples were collected from inland nunataks of the Jutulsessen and Ahlmannryggen ranges in Dronning Maud Land, Antarctica. 16S rRNA, ITS and D1/D2 domain sequencing techniques were used to characterize the microbial communities of these geographical areas. A total of 37 bacterial species, namely Arthrobacter agilis, Acinetobacter baumannii, Arthrobacter flavus, Arthrobacter ginsengisoli, Arthrobacter oxydans, Arthrobacter oryzae, Arthrobacter polychromogenes, Arthrobacter sulfonivorans, Bacillus altitudinis, Bacillus cereus, Bacillus paramycoides, Brevundimonas vesicularis, Brachybacterium rhamnosum, Curtobacterium luteum, Dermacoccus nishinomiyaensis, Dietzia aerolata, Janibacter indicus, Knoellia subterranea, Kocuria palustris, Kytococcus aerolatus, Lysinibacillus sphaericus, Microbacterium phyllosphaerae, Micrococcus yunnanensis, Methylobacterium rhodesianum, Moraxella osloensis, Paracoccus acridae, Pontibacter amylolyticus, Pseudomonas hunanensis, Pseudarthrobacter siccitolerans, Pseudarthrobacter phenanthrenivorans, Rhodococcus aerolatus, Rhodococcus sovatensis, Sphingomonas daechungensis, Sphingomonas sanguinis, Stenotrophomonas pavanii, Staphylococcus gallinarum and Staphylococcus arlettae, and 9 fungal species, namely Candida davisiana, Cosmospora arxii, Geomyces destructans, Lecanicillium muscarium, Memnoniella humicola, Paecilomyces lilacinus, Pseudogymnoascus verrucosus, Phaeophlebiopsis ignerii and Thyronectria caraganae, were recorded. Fatty acid methyl ester (FAME) analyses of representative species of each genus showed a predominance of branched and unsaturated fatty acids, indicating an adaptation strategy to the cold Antarctic environment. To the best of our knowledge, this is the first record of culturable bacterial communities from the Jutulsessen and Ahlmannryggen ranges in Western Dronning Maud Land, Antarctica.

Keywords: Antarctica, microbe, adaptation, polar

Procedia PDF Downloads 79
224 The Universal Cultural Associations in the Conceptual Metaphors Used in the Headlines of Arab News and Saudi Gazette Newspapers: A Critical Cognitive Study

Authors: Hind Hassan Arruwaite

Abstract:

Conceptual metaphor is a cognitive-semantic tool that provides access to people's conceptual systems. The correlations in the human conceptual system extend beyond particular times and specific cultures, and the universal associations they carry provide universal schemas that organize people's conceptualization of the world. The study aims to explore how the cultural associations used in conceptual metaphors create commonalities and harmony between the peoples of the world. Methodologically, the researcher implemented the Critical Metaphor Analysis, Metaphor Candidate Identification and Metaphor Identification Procedure models to deliver qualitative and descriptive findings. Semantic tension was the key criterion for identifying metaphorically used words in the headlines. The research materials are the oil-trade conceptual metaphors used in the headlines of the Arab News and Saudi Gazette newspapers. The data were uploaded to a self-constructed corpus so that electronic lists could be examined to identify conceptual metaphors. The study investigates the types of conceptual metaphors used in the headlines of the newspapers, the cultural associations identified in those conceptual metaphors, and whether the identified cultural associations create universal conceptual schemas. The study aligns with previous seminal works on conceptual metaphor theory in emphasizing the distinctive power of conceptual metaphors to expose the cultural associations that unify people's perceptions. The correlation of people's conceptualizations provides universal schemas that involve elements of human sensorimotor experience. The study contributes to exposing the shared cultural associations that underpin the commonality of humankind's thinking mechanisms.

Keywords: critical discourse analysis, critical metaphor analysis, conceptual metaphor theory, primary and specific metaphors, corpus-driven approach, universal associations, image schema, sensorimotor experience, oil trade

Procedia PDF Downloads 199
223 Analysis of Steles with Libyan Inscriptions of Grande Kabylia, Algeria

Authors: Samia Ait Ali Yahia

Abstract:

Several steles with Libyan inscriptions have been discovered in Grande Kabylia (Algeria), but very few researchers have taken an interest in them. Our work is to list, if possible, all these steles in order to carry out a descriptive study of the corpus. The analysis of the steles will focus on the iconographic and epigraphic levels and on the different forms of the Libyan characters, in order to identify the alphabet used in Grande Kabylia.

Keywords: epigraphy, stele, Libyan inscription, Grande Kabylia

Procedia PDF Downloads 210
222 Crossing the Interdisciplinary Border: A Multidimensional Linguistics Analysis of a Legislative Discourse

Authors: Manvender Kaur Sarjit Singh

Abstract:

There is a crucial mismatch between classroom written language tasks and real-world written language requirements. Recognizing the importance of reducing the gap between the professional needs of legal practitioners and the higher learning institutions that offer legislative education in Malaysia, it is deemed necessary to develop a framework that integrates real-life written communication with the teaching of content-based legislative discourse to future legal practitioners. By highlighting the actual needs of legal practitioners in the country, present teaching practices can be enhanced and aligned with the actual needs of learners, thus realizing the vision and aspirations of the Malaysian Education Blueprint 2013-2025 and the Legal Profession Qualifying Board. The need to focus future education on the actual needs of learners can be met by developing a teaching framework designed within the prospective requirements of its real-life context. This paper presents the steps taken to develop a specific teaching framework that fulfills the fundamental real-life context of prospective legal practitioners. The teaching framework was developed from real-life written communication in the legal profession in Malaysia, using a specific genre-analysis approach that integrates a corpus-based approach with a structural linguistic analysis. This approach was adopted because it allows intensive exploration of real-life written communication according to established strategies. The findings showed the use of specific moves and parts of speech by legal practitioners in preparing the selected genre. The teaching framework is expected to enhance the teaching of the content-based law courses currently offered in higher learning institutions in Malaysia.

Keywords: linguistics analysis, corpus analysis, genre analysis, legislative discourse

Procedia PDF Downloads 381
221 A Pragmatic Approach to Memes Created in Relation to the COVID-19 Pandemic

Authors: Alexandra-Monica Toma

Abstract:

Internet memes are an element of computer-mediated communication and an important part of online culture that combines text and image in order to generate meaning. The term, coined by Richard Dawkins, refers to more than a mere way to briefly communicate ideas or emotions; it names a complex and intensely perpetuated phenomenon in the virtual environment. This paper approaches memes as a cultural artefact and a virtual trope that mirrors societal concerns and issues, and analyses the pragmatics of their use. Memes have to be analysed in series, usually relating to some image macro, which is proof of the interplay between imitation and creativity in the meme-writing process. We believe that their potential to become viral relates to three key elements: adaptation to context, reference to a successful meme series, and humour (jokes, irony, sarcasm), with various pragmatic functions. The study also uses the concept of multimodality and stresses how the memes' text interacts with the image, discussing three types of relations: symmetry, amplification, and contradiction. Moreover, the paper shows that memes can be employed as speech acts with illocutionary force when the interaction between text and image is enriched through the connection to a specific situation. The features mentioned above are analysed in a corpus of memes related to the COVID-19 pandemic. This corpus shows them to be highly adaptable to context, which helps build a feeling of connection and belonging in an otherwise tremendously fragmented world. Some of them are created from well-known image macros, and their humour results from an intricate dialogue between texts and contexts. As the paper demonstrates, memes created in relation to the COVID-19 pandemic can be considered, and are often used as, speech acts. Consequently, this paper tackles the key features of memes, makes a thorough analysis of their sociocultural, linguistic, and situational context, and emphasizes their intertextuality, with special emphasis on their illocutionary potential.

Keywords: context, memes, multimodality, speech acts

Procedia PDF Downloads 197
220 Arabic Lexicon Learning to Analyze Sentiment in Microblogs

Authors: Mahmoud B. Rokaya

Abstract:

The study of opinion mining and sentiment analysis includes the analysis of opinions, sentiments, evaluations, attitudes, and emotions. The rapid growth of social media (social networks, reviews, forum discussions, microblogs, and Twitter) has led to a parallel growth in the field of sentiment analysis, which tries to develop effective tools for capturing the trends of people's opinions. There are two approaches in the field: lexicon-based and corpus-based methods. A lexicon-based method uses a sentiment lexicon that includes sentiment words and phrases with assigned numeric scores. These scores reveal whether sentiment phrases are positive or negative, their intensity, and/or their emotional orientations. The creation of manual lexicons is hard, which brings the need for adaptive automated methods for generating a lexicon. The proposed method generates dynamic lexicons based on the corpus and then classifies text using these lexicons; it classifies tweets into 5 classes instead of +ve or -ve classes. The sentiment classification problem is written as an optimization problem whose goal is to find optimal sentiment lexicons. The solution was produced using mathematical programming approaches to find the best lexicon for classifying texts, and a genetic algorithm was written to find the optimal lexicon. A meta-level feature was then extracted based on the optimal lexicon. The experiments were conducted on several datasets. The results, in terms of accuracy, recall and F-measure, outperformed the state-of-the-art methods proposed in the literature on some of the datasets. Based on the sentiment lexicons produced by the algorithm, a better understanding can be achieved of the Arabic language, the culture of Arab Twitter users, and the sentiment orientation of words in different contexts.
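
The optimization step described above can be sketched as follows. This is a minimal, self-contained illustration of a genetic algorithm searching for a sentiment lexicon, not the authors' implementation: the vocabulary, the labeled tweets, and the hyperparameters are invented placeholders.

```python
# Minimal genetic-algorithm sketch for learning a sentiment lexicon:
# an individual is a score vector over the vocabulary; fitness is the
# accuracy of classifying labeled tweets by summed word scores.
# Vocabulary, data, and hyperparameters are invented placeholders.
import random

VOCAB = ["good", "bad", "great", "awful", "ok"]
DATA = [(["good", "great"], 1), (["bad", "awful"], -1), (["ok"], 1)]

def fitness(lex):
    correct = 0
    for words, label in DATA:
        score = sum(lex[VOCAB.index(w)] for w in words if w in VOCAB)
        pred = 1 if score >= 0 else -1
        correct += (pred == label)
    return correct / len(DATA)

def evolve(pop_size=30, generations=50):
    pop = [[random.uniform(-1, 1) for _ in VOCAB] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(len(VOCAB))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:           # mutation
                child[random.randrange(len(VOCAB))] += random.gauss(0, 0.3)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(dict(zip(VOCAB, (round(s, 2) for s in best))))
```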

Keywords: social media, Twitter sentiment, sentiment analysis, lexicon, genetic algorithm, evolutionary computation

Procedia PDF Downloads 185
219 A Study of the Use of Arguments in Nominalizations as Instantiations of Grammatical Metaphors Ending in -TION in Academic Texts of Native Speakers

Authors: Giovana Perini-Loureiro

Abstract:

The purpose of this research was to identify whether nominalizations ending in -TION in the academic discourse of native English speakers contain the arguments required by their input verbs. From the perspective of functional linguistics, ideational metaphors, with nominalization as their most pervasive realization, are lexically dense and therefore frequent in formal texts. Ideational metaphors allow the academic genre to instantiate objectification and de-personalization, and to construct a chain of arguments. The valence of the nouns present in nominalizations tends to maintain the same elements as the valence of their original verbs, but these arguments are not always expressed. The initial hypothesis was that these arguments would also be present alongside the nominalizations, through anaphora or cataphora. In this study, a qualitative analysis was carried out of the occurrences of the five most frequent nominalizations ending in -TION in academic texts, verifying the occurrence of the arguments required by the original verbs. The concordance lines were assembled through COCA (Corpus of Contemporary American English). After identifying the five most frequent nominalizations (attention, action, participation, instruction, intervention), concordance lines were selected at random for analysis, assuring the representativeness and reliability of the sample. It was possible to verify the presence of arguments in all the analyzed instances. In most instances, the arguments were not expressed but were recoverable, either from the context or from the knowledge shared among the interactants. It was concluded that the realizations of arguments not expressed alongside the nominalizations form a continuum, starting from the immediate context, with anaphora and cataphora, up to knowledge shared outside the text, such as specific area knowledge. The study also has implications for the teaching of academic writing, especially with regard to the impact of nominalizations on the thematic and informational flow of the text. Grammatical metaphors are essential to academic writing, hence acknowledging the occurrence of their arguments is paramount to achieving the linguistic awareness and writing prestige required by the academy.
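
As a rough illustration of the frequency step (in the study itself this was done through the COCA interface), here is a short sketch of how -TION nominalizations might be counted in a text sample; the file name is a placeholder.

```python
# Sketch: find the most frequent -tion nominalizations in a text sample.
# The study used the COCA interface; 'academic_sample.txt' is a placeholder.
import re
from collections import Counter

with open("academic_sample.txt", encoding="utf-8") as f:
    tokens = re.findall(r"[a-z]+", f.read().lower())

tion_counts = Counter(t for t in tokens if t.endswith("tion") and len(t) > 5)
print(tion_counts.most_common(5))  # e.g. attention, action, participation, ...
```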

Keywords: corpus, functional linguistics, grammatical metaphors, nominalizations, academic English

Procedia PDF Downloads 145
218 A Bayesian Approach for Analyzing Academic Article Structure

Authors: Jia-Lien Hsu, Chiung-Wen Chang

Abstract:

Research articles may follow a simple and succinct structure of organizational patterns, called moves. For example, considering extended abstracts, we observe that an extended abstract usually consists of five moves: Background, Aim, Method, Results, and Conclusion. As another example, when publishing articles in PubMed, authors are encouraged to provide a structured abstract, which is an abstract with distinct and labeled sections (e.g., Introduction, Methods, Results, Discussion) for rapid comprehension. This paper introduces a method for the computational analysis of move structures (i.e., Background-Purpose-Method-Result-Conclusion) in the abstracts and introductions of research documents, replacing a manual analysis process that is time-consuming and labor-intensive. In our approach, sentences in a given abstract and introduction are automatically analyzed and labeled with a specific move (B-P-M-R-C in this paper) to reveal their rhetorical status. It is expected that an automatic analytical tool for move structures will help non-native speakers or novice writers become aware of appropriate move structures and internalize the relevant knowledge to improve their writing. In this paper, we propose a Bayesian approach to determine move tags for research articles. The approach consists of two phases, a training phase and a testing phase. In the training phase, we build a Bayesian model based on a set of given initial patterns and the corpus, a subset of CiteSeerX. In the beginning, the prior probability of the Bayesian model relies solely on the initial patterns. Subsequently, with respect to the corpus, we process each document one by one: extract features, determine tags, and update the Bayesian model iteratively. In the testing phase, we compare our results with tags manually assigned by experts. In our experiments, the accuracy of the proposed approach reaches a promising 56%.
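
A minimal sketch of the kind of Bayesian move tagger described above follows. The training sentences and smoothing are invented stand-ins; the paper's own model additionally bootstraps from initial patterns and is updated iteratively over a CiteSeerX subset.

```python
# Minimal Naive Bayes sketch for tagging sentences with move labels
# (B-P-M-R-C). Training sentences and their tags are invented seeds.
from collections import Counter, defaultdict
import math

TRAIN = [
    ("previous work has shown limits", "B"),
    ("this paper aims to propose a method", "P"),
    ("we use a bayesian model and features", "M"),
    ("accuracy reaches fifty six percent", "R"),
    ("we conclude the approach is promising", "C"),
]

tag_counts = Counter(tag for _, tag in TRAIN)
word_counts = defaultdict(Counter)
for sent, tag in TRAIN:
    word_counts[tag].update(sent.split())
vocab_size = len({w for c in word_counts.values() for w in c})

def classify(sentence, alpha=1.0):
    best_tag, best_lp = None, -math.inf
    for tag in tag_counts:
        lp = math.log(tag_counts[tag] / len(TRAIN))   # prior
        total = sum(word_counts[tag].values())
        for w in sentence.split():                    # add-alpha likelihoods
            lp += math.log((word_counts[tag][w] + alpha)
                           / (total + alpha * vocab_size))
        if lp > best_lp:
            best_tag, best_lp = tag, lp
    return best_tag

print(classify("we aim to propose a new method"))  # expected: 'P'
```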

Keywords: academic English writing, assisted writing, move tag analysis, Bayesian approach

Procedia PDF Downloads 330
217 A Generative Pretrained Transformer-Based Question-Answer Chatbot and Phantom-Less Quantitative Computed Tomography Bone Mineral Density Measurement System for Osteoporosis

Authors: Mian Huang, Chi Ma, Junyu Lin, William Lu

Abstract:

Introduction: Bone health has attracted more attention recently, and an intelligent question-and-answer (QA) chatbot for osteoporosis is helpful for science popularization. With Generative Pretrained Transformer (GPT) technology developing, we build an osteoporosis corpus dataset and then fine-tune LLaMA, a well-known open-source GPT foundation large language model (LLM), on this self-constructed osteoporosis corpus. Evaluated by clinical orthopedic experts, our fine-tuned model outperforms vanilla LLaMA on the osteoporosis QA task in Chinese. Bone mineral density (BMD) measured by three-dimensional quantitative computed tomography (QCT) has in recent years come to be considered more accurate than DXA measurement. We develop an automatic phantom-less QCT (PL-QCT) system that is more efficient for BMD measurement, since no external phantom is needed for calibration. Combined with the LLM on osteoporosis, our PL-QCT provides efficient and accurate BMD measurement for our chatbot users. Material and Methods: We build an osteoporosis corpus containing about 30,000 Chinese literature items whose titles are related to osteoporosis. The whole process is done automatically, including crawling literature in .pdf format, localizing text/figure/table regions with a layout segmentation algorithm, and recognizing text with an OCR algorithm. We train our model by continuous pre-training with Low-Rank Adaptation (LoRA, rank=10) to adapt the LLaMA-7B model to the osteoporosis domain; the basic principle is to mask the next word in the text and make the model predict that word, with the loss function defined as the cross-entropy between the predicted and ground-truth word. The experiment runs on a single NVIDIA A800 GPU for 15 days. Our automatic PL-QCT BMD measurement adopts an AI-assisted region-of-interest (ROI) generation algorithm that localizes a vertebra-parallel cylinder in cancellous bone. Since there is no phantom for BMD calibration, we calculate ROI BMD from the CT values of the patient's own muscle and fat. Results & Discussion: Clinical orthopaedic experts were invited to design 5 osteoporosis questions in Chinese to evaluate the performance of vanilla LLaMA and our fine-tuned model. Our model outperforms LLaMA on over 80% of these questions, understanding 'Expert Consensus on Osteoporosis', 'QCT for osteoporosis diagnosis' and 'Effect of age on osteoporosis'. Detailed results are shown in the appendix. Future work may train a larger LLM on the whole of orthopaedics with more high-quality domain data, or a multi-modal GPT combining and understanding X-ray images and medical text for orthopaedic computer-aided diagnosis. However, the GPT model sometimes gives unexpected outputs, such as repetitive text or seemingly normal but wrong answers (so-called 'hallucinations'). Even when GPT gives correct answers, they cannot be considered valid clinical diagnoses in place of those of clinical doctors. The PL-QCT BMD system provided by Bone's QCT (Bone's Technology (Shenzhen) Limited) achieves a mean absolute error (MAE) of 0.1448 mg/cm² (spine) and 0.0002 mg/cm² (hip), with linear correlation coefficients R²=0.9970 (spine) and R²=0.9991 (hip) (compared to QCT-Pro (Mindways)), on 155 patients in a three-center clinical trial in Guangzhou, China. Conclusion: This study builds a Chinese osteoporosis corpus and develops a fine-tuned, domain-adapted LLM as well as a PL-QCT BMD measurement system. Our fine-tuned GPT model shows better capability than the LLaMA model on most testing questions on osteoporosis. Combined with our PL-QCT BMD system, we look forward to providing science popularization and early screening for potential osteoporotic patients.
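
The continuous pre-training step can be sketched with the Hugging Face transformers and peft libraries. This is a generic illustration under assumed defaults, not the authors' training code: the checkpoint id, the sample text, and the LoRA hyperparameters other than rank=10 are placeholders.

```python
# Sketch of LoRA continuous pre-training for domain adaptation
# (causal LM objective: predict the next token). Model name, data,
# and hyperparameters other than rank=10 are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model
import torch

base = "huggyllama/llama-7b"  # placeholder checkpoint id
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

lora = LoraConfig(r=10, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the small LoRA adapters train

batch = tokenizer("Osteoporosis is a disease in which bone density ...",
                  return_tensors="pt")
# For causal LM, labels are the input ids; the model shifts them internally.
out = model(**batch, labels=batch["input_ids"])
print(float(out.loss))
```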

Keywords: GPT, phantom-less QCT, large language model, osteoporosis

Procedia PDF Downloads 70
216 Lexical Bundles in the Alexiad of Anna Comnena: Computational and Discourse Analysis Approach

Authors: Georgios Alexandropoulos

Abstract:

The purpose of this study is to examine the historical text of the Alexiad by Anna Comnena, using computational tools to extract lexical bundles containing the name of her father, Alexius Comnenus. To this end, we apply corpus linguistics techniques for the automatic extraction of lexical bundles, and through them we draw conclusions about how these bundles convey the support she provides to her father.
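
Lexical bundles are commonly operationalized as recurring n-grams; a small sketch of 4-gram extraction around a target name follows. The file name, the name form, and the frequency cutoff are placeholders for the study's actual Greek text and settings.

```python
# Sketch: extract recurring 4-grams (lexical bundles) that contain a
# target name. 'alexiad.txt', the name form, and the frequency cutoff
# are placeholders for the study's actual text and settings.
import re
from collections import Counter

with open("alexiad.txt", encoding="utf-8") as f:
    tokens = re.findall(r"\w+", f.read().lower())

N, MIN_FREQ, TARGET = 4, 3, "alexius"
bundles = Counter(
    tuple(tokens[i : i + N])
    for i in range(len(tokens) - N + 1)
    if TARGET in tokens[i : i + N]
)
for bundle, freq in bundles.most_common():
    if freq >= MIN_FREQ:
        print(" ".join(bundle), freq)
```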

Keywords: lexical bundles, computational literature, critical discourse analysis, Alexiad

Procedia PDF Downloads 621
215 An Interdisciplinary Approach to Investigating Style: A Case Study of a Chinese Translation of Gilbert’s (2006) Eat Pray Love

Authors: Elaine Y. L. Ng

Abstract:

Elizabeth Gilbert's (2006) biography Eat, Pray, Love describes her travels to Italy, India, and Indonesia after a painful divorce. The author's experiences with love, loss, and the search for happiness and meaning have resonated with a huge readership. Gilbert's (2006) Eat, Pray, Love was first translated into Chinese by the Taiwanese translator He Pei-Hua and published in Taiwan in 2007 by Make Boluo Wenhua Chubanshe under the fairly catchy title "Enjoy! Traveling Alone." The same translation was brought to China, republished in simplified Chinese characters by Shanxi Shifan Daxue Chubanshe in 2008 and retitled "To Be a Girl for the Whole Life." Later on, the same translation in simplified Chinese characters was reprinted by Hunan Wenyi Chubanshe in 2013. This study employs Munday's (2002) systemic model for descriptive translation studies to investigate He Pei-Hua's translation of Gilbert's (2006) Eat, Pray, Love into Chinese. It takes an interdisciplinary approach, combining systemic functional linguistics and corpus stylistics with sociohistorical research within a descriptive framework, to study the translator's discursive presence in the text. The research consists of three phases. The first phase locates the target text within its socio-cultural context; the target-text context concerning the para-texts, readers' responses, and the publishers' orientation is explored. The second phase compares the source text and the target text to categorize translation shifts, using the methodological tools of systemic functional linguistics and corpus stylistics; the investigation concerns the rendering of mental clauses and speech and thought presentation. The final phase explains the causes of the translation shifts: the linguistic findings are related to the extra-textual information collected in an effort to ascertain the motivations behind the translator's choices, since sets of possible factors may have contributed to shaping the textual features of the given translation within a specific socio-cultural context. The study finds that the translator generally reproduces the mental clauses and speech and thought presentation closely according to the original. Nevertheless, the language of the translation has been widely criticized as unidiomatic and stiff, losing the elegance of the original. In addition, the several Chinese editions of the given text produced by one Taiwanese and two Chinese publishers are basically the same translation, repackaged slightly differently, mainly through changes to the book cover and its captions in each version. Relating the textual findings to the extra-textual data, it is argued that the popularity of the Chinese translation of Gilbert's (2006) Eat, Pray, Love may not be attributable to the quality of the translation; instead, it may have to do with the way the work is promoted strategically through the social media manipulated by the four e-bookstores promoting and selling the book online in China.

Keywords: Chinese translation of Eat Pray Love, corpus stylistics, motivations for translation shifts, systemic approach to translation studies

Procedia PDF Downloads 173
214 Achieving Maximum Performance through the Practice of Entrepreneurial Ethics: Evidence from SMEs in Nigeria

Authors: S. B. Tende, H. L. Abubakar

Abstract:

It is acknowledged that small and medium enterprises (SMEs) may encounter different ethical issues and pressures that could affect the way in which they strategize or make decisions concerning the outcomes of their business. This research therefore aimed at assessing entrepreneurial ethics in the business of SMEs in Nigeria. Secondary data were adopted as the corpus for the analysis. The findings conclude that a sound entrepreneurial ethics system has a significant effect on the level of performance of SMEs in Nigeria. The Nigerian Government needs to provide both guiding and physical structures, as well as learning systems, that could inculcate these entrepreneurial ethics.

Keywords: culture, entrepreneurial ethics, performance, SME

Procedia PDF Downloads 379
213 A Method for Clinical Concept Extraction from Medical Text

Authors: Moshe Wasserblat, Jonathan Mamou, Oren Pereg

Abstract:

Natural Language Processing (NLP) has made a major leap in the last few years in its practical integration into medical solutions, for example, extracting clinical concepts from medical texts such as medical conditions, medications, treatments, and symptoms. However, training and deploying those models in real environments still demands a large amount of annotated data and NLP/Machine Learning (ML) expertise, which makes the process costly and time-consuming. We present a practical and efficient method for clinical concept extraction that requires neither costly labeled data nor ML expertise. The method includes three steps. Step 1: the user injects a large in-domain text corpus (e.g., PubMed); the system then builds, in an unsupervised manner, a contextual model containing vector representations of the concepts in the corpus (e.g., Phrase2Vec). Step 2: the user provides a seed set of terms representing a specific medical concept (e.g., for the concept of symptoms, the user may provide 'dry mouth', 'itchy skin', and 'blurred vision'); the system then matches the seed set against the contextual model and extracts the most semantically similar terms (e.g., additional symptoms). The result is a complete set of terms related to the medical concept. Step 3: in production, medical concepts must be extracted from unseen medical text; the system extracts key phrases from the new text, matches them against the complete set of terms from step 2, and annotates the most semantically similar ones with the same medical concept category. As an example, the seed symptom concepts would result in the following annotation: "The patient complains of fatigue [symptom], dry skin [symptom], and weight loss [symptom], which can be an early sign of diabetes." Our evaluations show promising results for extracting concepts from medical corpora. The method allows medical analysts to easily and efficiently build taxonomies representing their domain-specific concepts (in step 2), and to automatically annotate a large number of texts (in step 3) for the classification/summarization of medical reports.
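
The seed expansion of step 2 can be sketched with gensim word vectors. The toy corpus, seed terms, and parameters below are placeholders; the method's own contextual model (e.g., Phrase2Vec) also handles multiword terms, which are approximated here with underscores.

```python
# Sketch of seed-set expansion over a contextual model (step 2):
# train word vectors on an in-domain corpus, then pull the nearest
# neighbors of seed symptom terms. Corpus and seeds are placeholders;
# a phrase model would handle multiword terms like 'dry mouth'.
from gensim.models import Word2Vec

corpus = [
    ["patient", "reports", "fatigue", "and", "blurred_vision"],
    ["itchy_skin", "and", "dry_mouth", "are", "common", "symptoms"],
    ["weight_loss", "can", "accompany", "fatigue"],
]  # stand-in for a large PubMed-scale corpus

model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=50)

seeds = ["dry_mouth", "itchy_skin", "blurred_vision"]
expanded = model.wv.most_similar(positive=seeds, topn=5)
print(expanded)  # candidate terms to add to the symptom concept set
```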

Keywords: clinical concepts, concept expansion, medical records annotation, medical records summarization

Procedia PDF Downloads 132
212 The Role and Effects of Communication on Occupational Safety: A Review

Authors: Pieter A. Cornelissen, Joris J. Van Hoof

Abstract:

The interest in improving occupational safety started almost simultaneously with the beginning of the Industrial Revolution. Yet it was not until the late 1970s that the role of communication was considered in scientific research on occupational safety. In recent years, the importance of communication as a means to improve occupational safety has increased, not only because communication might have a direct effect on safety performance and safety outcomes, but also because it can be viewed as a major component of other important safety-related elements (e.g., training, safety meetings, leadership). And while safety communication is an increasingly important topic in research, its operationalization is often vague and differs among studies. This is problematic not only when comparing results, but also in applying these results to practice and the work floor. By means of an in-depth analysis building on an existing dataset, this review aims to overcome these problems. The initial database search yielded 25,527 articles, which were reduced to a research corpus of 176 articles. Focusing on the 37 articles of this corpus that addressed communication (related to safety outcomes and safety performance), the current study provides a comprehensive overview of the role and effects of safety communication and outlines the conditions under which communication contributes to a safer work environment. The study shows that the literature commonly distinguishes between safety communication (i.e., the exchange or dissemination of safety-related information) and feedback (i.e., a reactive form of communication). And although there is a consensus among researchers that both communication and feedback positively affect safety performance, there is a debate about the directness of this relationship: whereas some researchers assume a direct relationship between safety communication and safety performance, others state that this relationship is mediated by safety climate. One of the key findings is that despite the strongly held view of safety communication as a formal and top-down safety management tool, researchers stress the importance of open communication that encourages and allows employees to express their worries, experiences, and views and to share information. This raises questions with regard to other directions (e.g., bottom-up, horizontal) and forms of communication (e.g., informal). The current review proposes a framework to overcome the often vague and divergent operationalizations of safety communication. The proposed framework can be used to characterize safety communication in terms of stakeholders, direction, and characteristics of communication (e.g., medium usage).

Keywords: communication, feedback, occupational safety, review

Procedia PDF Downloads 300
211 English is Not Going to the Dog(e): Rising Fame of Doge Speak

Authors: Beata Bury

Abstract:

Doge speak is an Internet variety with its own linguistic patterns and regularities. The doge meme contains unconventional grammar rules which make it recognizable. Using a doge corpus, certain characteristics of doge speak, as well as reasons for its popularity, are analyzed. The study concludes that doge memes can be applied to a variety of situations, for instance advertising or the fashion industry. Doge users play with language and create surprising linguistic combinations, and doge meme making is a multiperson task: doge users predict and comment on the world through doge memes.

Keywords: dogespeak, internet language, language play, meme

Procedia PDF Downloads 477
210 A Comparative Analysis of Lexical Bundles in Academic Writing: Insights from Persian and Native English Writers in Applied Linguistics

Authors: Elham Shahrjooi Haghighi

Abstract:

This research explores how lexical bundles are utilized in academic writing in the field of applied linguistics by comparing professional Persian writers with native English writers, using corpus-based methods and advanced computational techniques to examine the occurrence and characteristics of lexical bundles in academic writing. The literature review emphasizes how important lexical bundles are in organizing discourse and conveying opinions in both spoken and written language, across genres, proficiency levels, and fields of study. Previous research has indicated that native English writers tend to employ a wider array and diversity of bundles than non-native writers do; these bundles are essential elements of academic writing. Methodologically, the research uses a corpus-based approach to analyze a collection of writings such as research papers and theses at the doctoral and master's levels. The examination uncovers differences in the use of bundles between native Persian and native English writers, with the latter group displaying a greater frequency and variety of bundle types. Furthermore, the research examines how these bundles function, classifying them as research-oriented, text-oriented, or participant-oriented following Hyland's framework. The results show that Persian writers employ fewer lexical bundles and demonstrate distinct structural and functional tendencies in comparison to native English writers. This variation is linked to differing language proficiency levels, disciplinary norms, and cultural factors. The study also highlights the pedagogical implications of these findings, suggesting that targeted instruction on the use of lexical bundles could enhance the academic writing skills of non-native speakers. In conclusion, this research contributes to the understanding of lexical bundles in academic writing by providing a detailed comparative analysis of their use by Persian and native English writers. The insights from this study have important implications for language education and the development of effective writing strategies for non-native English speakers in academic contexts.

Keywords: lexical bundles, academic writing, comparative analysis, computational techniques

Procedia PDF Downloads 18
209 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types

Authors: Qianxi Lv, Junying Liang

Abstract:

Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a 'complex' and 'extreme condition' among cognitive tasks, while consecutive interpreting (CI) does not require processing capacity to be shared between concurrent tasks. Given that SI exerts great cognitive demand, it makes sense to posit that the output of SI may be more compromised in its linguistic features than that of CI. The bulk of the research has stressed the varying cognitive demands and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in investigating the quantitative linguistic factors discriminating between SI and CI, the current study examines potential lexical simplification, syntactic complexity, and sequential organization mechanisms with a self-made inter-modal corpus of transcribed simultaneous and consecutive interpretation, translated speech, and original speech texts, with a total of 321,960 running words. The lexical features extracted are lexical density, list head coverage, hapax legomena, and type-token ratio, as well as core vocabulary percentage. Dependency distance, an index of syntactic complexity reflective of processing demand, is employed. The frequency motif, a non-grammatically-bound sequential unit, is also used to visualize the local function distribution of the interpreting output. While SI is generally regarded as multitasking with a high cognitive load, our findings show that CI may impose a heavier, or differently taxing, cognitive demand, and hence yields more lexically and syntactically simplified output. In addition, the sequential features show that SI and CI organize the sequences from the source text into the output in different ways, each minimizing the cognitive load in its own manner. We interpret the results within a framework in which cognitive demand is exerted on both the maintenance and coordination components of working memory. On the one hand, the information maintained in CI is inherently larger in volume than in SI. On the other hand, time constraints directly influence the sentence reformulation process: the temporal pressure from the input in SI leads interpreters to keep only a small chunk of information in the focus of attention. Thus, SI interpreters usually produce the output by largely retaining the source structure, so as to release the information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation, when they are more self-paced; they may thus tend to retain and generate the information in a way that lessens the demand. In other words, interpreters cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more content words of higher frequency values and fewer variations, simpler structures, and more frequently used language sequences. We consequently propose a revised effort model based on these results for a better illustration of cognitive demand during both interpreting types.
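
Two of the indices mentioned, the type-token ratio and the mean dependency distance, are straightforward to compute; here is a small sketch with spaCy on an invented English sentence rather than the interpreting corpus itself.

```python
# Sketch: compute type-token ratio and mean dependency distance,
# two of the indices used in the study. The sample text is invented;
# the study computes these over a 321,960-word interpreting corpus.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The interpreter kept the source structure to relieve working memory.")

tokens = [t.text.lower() for t in doc if not t.is_punct]
ttr = len(set(tokens)) / len(tokens)  # type-token ratio

# dependency distance: |position of head - position of dependent|,
# averaged over all tokens whose head is not the token itself (the root)
dds = [abs(t.i - t.head.i) for t in doc if t.head is not t and not t.is_punct]
mdd = sum(dds) / len(dds)

print(f"TTR = {ttr:.2f}, mean dependency distance = {mdd:.2f}")
```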

Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity

Procedia PDF Downloads 173
208 Software Architectural Design Ontology

Authors: Muhammad Irfan Marwat, Sadaqat Jan, Syed Zafar Ali Shah

Abstract:

Software architecture plays a key role in software development, but the absence of a formal description of software architecture causes various impediments in software development. To cope with these difficulties, an ontology has been used as an artifact. This paper proposes an ontology for software architectural design based on the IEEE model for architecture description and the Kruchten 4+1 model for the classification of viewpoints. For the categorization of styles and views, ISO/IEC 42010 has been used. A corpus-based method has been used to evaluate the ontology. The main aim of the proposed ontology is to classify and locate software architectural design information.
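
As an illustration of the kind of classification such an ontology encodes, a toy fragment follows, written with owlready2. The IRI and class names are invented stand-ins for the paper's actual taxonomy, which follows the IEEE/ISO models and Kruchten's 4+1 views.

```python
# Sketch: a toy fragment of a software-architecture ontology with
# owlready2, classifying viewpoints in the spirit of Kruchten's 4+1.
# The IRI and class names are invented stand-ins for the paper's taxonomy.
from owlready2 import get_ontology, Thing

onto = get_ontology("http://example.org/sa-design.owl")

with onto:
    class ArchitectureDescription(Thing): pass
    class Viewpoint(Thing): pass
    class LogicalViewpoint(Viewpoint): pass      # Kruchten: logical view
    class ProcessViewpoint(Viewpoint): pass      # Kruchten: process view
    class DevelopmentViewpoint(Viewpoint): pass  # Kruchten: development view
    class PhysicalViewpoint(Viewpoint): pass     # Kruchten: physical view
    class ArchitecturalStyle(Thing): pass

onto.save(file="sa-design.owl")
print(list(onto.classes()))
```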

Keywords: semantic-based software architecture, software architecture, ontology, software engineering

Procedia PDF Downloads 543
207 Variation in Italian Specialized Economic Texts

Authors: Abdelmagid Basyouny Sakr

Abstract:

Terminological variation is a reality, and it is now recognized by terminologists. This paper investigates terminological variation in specialized economic texts in Italian, aiming to determine whether certain patterns or tendencies can be derived from the analysis of these texts. Term variants pose two different kinds of difficulty. The first is being able to recognize linguistic expressions that denote the same concept in running text. The second lies in knowing which variants should be considered, and for what purpose; this would help to differentiate between variants that are candidates for inclusion in terminological resources and those which are synonyms or merely contextual variants. New insights into terminological variation in specialized texts could contribute to improving specialized dictionaries, so that they better account for the different ways in which a given thought is expressed.

Keywords: corpus linguistics, specialized communication, terms and concepts, terminological variation

Procedia PDF Downloads 156
206 The Grammar of the Content Plane as a Style Marker in Forensic Authorship Attribution

Authors: Dayane de Almeida

Abstract:

This work presents a study that demonstrates the usability of categories of analysis from Discourse Semiotics, also known as Greimassian semiotics, in authorship cases in forensic contexts. It is necessary to know whether the categories examined in semiotic analysis (the 'grammar' of the content plane) can distinguish authors. Thus, a study was performed with 4 sets of texts from a corpus of 'not on demand' written samples (texts that differ in degree of formality, purpose, addressees, themes, etc.). Each author contributed 20 texts, separated into 2 groups of 10 (Author1A, Author1B, and so on). The hypothesis was that texts from a single author would be semiotically more similar to each other than to texts from different authors. The assumptions and issues that led to this idea are as follows. First, the features analyzed in authorship studies mostly relate to the expression plane: they are manifested on the 'surface' of texts. If language is both expression and content, content would also have to be considered for more accurate results; style is present in both planes. Second, semiotics postulates that the content plane is structured in a 'grammar' that underlies expression and presents different levels of abstraction; this 'grammar' would be a style marker. Third, sociolinguistics demonstrates intra-speaker variation: an individual employs different linguistic uses in different situations. How, then, can one determine whether someone is the author of several texts, distinct in nature (as is the case in most forensic sets), when intra-speaker variation is known to depend on so many factors? The idea is that the more abstract the level in the content plane, the lower the intra-speaker variation, because there will be a greater chance of the author choosing the same options; and if two authors recurrently choose the same options, differently from one another, each one's options have discriminatory power. Fourth, size is another issue for various attribution methods: since most texts in real forensic settings are short, methods relying only on the expression plane tend to fail, whereas the analysis of the content plane proposed by Greimassian semiotics would be less size-dependent. The semiotic analysis was performed using the software Corpus Tool, generating tags that allow the counting of data. Similarities and differences were then quantitatively measured through the application of the Jaccard coefficient (a statistical measure that compares the similarities and differences between samples). The results showed that the hypothesis was confirmed, and hence the grammatical categories of the content plane may successfully be used in questioned authorship scenarios.
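
The Jaccard coefficient used here is simple to state: J(A, B) = |A ∩ B| / |A ∪ B|. A minimal sketch follows, with the tag sets invented as stand-ins for the semantic categories annotated with Corpus Tool.

```python
# Sketch: Jaccard similarity between the sets of semantic tags found
# in two texts. The tag sets are invented placeholders for the
# categories annotated with Corpus Tool in the study.
def jaccard(a: set, b: set) -> float:
    """Return |A ∩ B| / |A ∪ B|: 1.0 for identical sets, 0.0 for disjoint."""
    return len(a & b) / len(a | b) if a | b else 0.0

author1_text1 = {"euphoric", "pragmatic", "individual_actor"}
author1_text2 = {"euphoric", "pragmatic", "collective_actor"}
author2_text1 = {"dysphoric", "mythic", "collective_actor"}

print(jaccard(author1_text1, author1_text2))  # same author: higher
print(jaccard(author1_text1, author2_text1))  # different authors: lower
```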

Keywords: authorship attribution, content plane, forensic linguistics, greimassian semiotics, intraspeaker variation, style

Procedia PDF Downloads 240
205 Wavelets' Contribution to Textual Data Analysis

Authors: Habiba Ben Abdessalem

Abstract:

The emergence of giant sets of textual data has encouraged researchers to invest in this field. The purpose of textual data analysis methods is to facilitate access to such data by providing various graphic visualizations. Applying these methods requires a corpus pretreatment step, whose standards are set according to the objective of the problem studied. This step determines the list of forms contained in the contingency table, keeping only those that carry information. It may, however, lead to noisy contingency tables, hence the use of a wavelet denoising function. The validity of the proposed approach is tested on a text database covering economic and political events in Tunisia over a well-defined period.
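
As an illustration of wavelet denoising applied to a contingency table, a minimal sketch with PyWavelets and soft thresholding follows; the table, the wavelet family, and the threshold value are assumptions for illustration, not the paper's settings.

```python
# Sketch: soft-threshold wavelet denoising of a small contingency
# table (forms x documents counts). Matrix, wavelet, and threshold
# are illustrative; the paper's settings may differ.
import numpy as np
import pywt

table = np.array([[12, 0, 3, 1],
                  [ 0, 9, 1, 0],
                  [ 2, 1, 8, 4],
                  [ 1, 0, 5, 7]], dtype=float)

coeffs = pywt.wavedec2(table, wavelet="haar", level=1)
approx, details = coeffs[0], coeffs[1]
# shrink detail coefficients toward zero to suppress noise
details = tuple(pywt.threshold(d, value=1.0, mode="soft") for d in details)
denoised = pywt.waverec2([approx, details], wavelet="haar")

print(np.round(denoised, 1))
```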

Keywords: textual data, wavelet, denoising, contingency table

Procedia PDF Downloads 276
204 Modeling False Statements in Texts

Authors: Francielle A. Vargas, Thiago A. S. Pardo

Abstract:

According to the standard philosophical definition, lying is saying something that you believe to be false with the intent to deceive. For deception detection, the FBI trains its agents in a technique named statement analysis, which attempts to detect deception based on parts of speech (i.e., linguistic style). This method is employed in interrogations, where suspects are first asked to make a written statement. In this poster, we model false statements using linguistic style. To achieve this, we methodically analyze linguistic features in a corpus of fake news in the Portuguese language. The results show that false statements present substantial lexical, syntactic, and semantic variations, as well as distinctive punctuation and emotion patterns.
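
Statement analysis as described rests on part-of-speech distributions; a small sketch of such a feature profile follows, using spaCy's Portuguese model. The sample sentence is invented; the study works over a full fake-news corpus.

```python
# Sketch: part-of-speech distribution features of the kind used in
# statement analysis. The sample statement is invented; the study
# analyzes a Portuguese fake-news corpus.
import spacy
from collections import Counter

nlp = spacy.load("pt_core_news_sm")  # Portuguese model, pip-installable

def pos_profile(text):
    doc = nlp(text)
    counts = Counter(tok.pos_ for tok in doc if not tok.is_punct)
    total = sum(counts.values())
    return {pos: n / total for pos, n in counts.items()}  # relative freqs

print(pos_profile("O governo confirmou a nova medida ontem."))
```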

Keywords: deception detection, linguistic style, computational linguistics, natural language processing

Procedia PDF Downloads 217
203 Converse to the Sherman Inequality with Applications in Information Theory

Authors: Ana Barbir, S. Ivelic Bradanovic, D. Pecaric, J. Pecaric

Abstract:

We prove a converse to Sherman's inequality. Using the concept of f-divergence, we obtain some inequalities for well-known entropies, such as the Shannon entropy, which has many applications in the applied sciences, for example in information theory, biology, and economics. The Zipf-Mandelbrot law improved Zipf's law by better accounting for the low-ranked words in a corpus; its applications can be found in linguistics and the information sciences, and it is also widely applicable in ecological field studies. We also introduce an entropy by applying the Zipf-Mandelbrot law and derive some related inequalities.
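
For reference, the Zipf-Mandelbrot law and the Shannon entropy built on it take the following standard forms; the notation is assumed from the general literature rather than copied from this paper.

```latex
% Zipf-Mandelbrot law: probability mass of the word of rank k,
% for N ranks, shift q >= 0 and exponent s > 0.
f(k; N, q, s) = \frac{1/(k+q)^{s}}{H_{N,q,s}},
\qquad
H_{N,q,s} = \sum_{j=1}^{N} \frac{1}{(j+q)^{s}}

% Shannon entropy of this distribution (the Zipf-Mandelbrot entropy):
Z(H; q, s) = \frac{s}{H_{N,q,s}} \sum_{k=1}^{N} \frac{\ln(k+q)}{(k+q)^{s}}
           + \ln\left(H_{N,q,s}\right)
```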

Keywords: f-divergence, majorization inequality, Sherman inequality, Zipf-Mandelbrot entropy

Procedia PDF Downloads 166
202 On the Semantics and Pragmatics of 'Be Able To': Modality and Actualisation

Authors: Benoît Leclercq, Ilse Depraetere

Abstract:

The goal of this presentation is to shed new light on the semantics and pragmatics of be able to. It presents the results of a corpus analysis based on data from the BNC (British National Corpus) and discusses these results in light of a specific stance on the semantics-pragmatics interface that takes recent developments into account. Be able to is often discussed in relation to can and could, all of which can be used to express ability. Such an onomasiological approach often results in the identification of usage constraints for each expression. In the case of be able to, it is the formal properties of the modal expression that are in the foreground (unlike can and could, be able to has non-finite forms), and the modal expression is described as the verb that conveys future ability. Be able to is also argued to express actualised ability in the past (I was able to/could open the door). This presentation aims to provide a more accurate semantic-pragmatic profile of be able to, based on extensive data analysis and embedded in a very explicit view of the semantics-pragmatics interface. A random sample of 3000 examples (1000 for each modal verb) extracted from the BNC was analysed to address the following issues. First, the challenge is to identify the exact semantic range of be able to. The results show that, contrary to the general assumption, be able to does not only express ability: it shares most of the root meanings usually associated with the possibility modals can and could. The data reveal that what is called opportunity is, in fact, the most frequent meaning of be able to. Second, attention is given to the notion of actualisation. It is commonly argued that be able to is the preferred form when the residue actualises: (1) The only reason he was able to do that was because of the restriction (BNC, spoken). (2) It is only through my imaginative shuffling of the aces that we are able to stay ahead of the pack (BNC, written). Although this notion has been studied in detail within formal semantic approaches, empirical data are crucially lacking, and it is unclear whether actualisation constitutes a conventional (and distinguishing) property of be able to. The empirical analysis provides solid evidence that actualisation is indeed a conventional feature of the modal. Furthermore, the dataset reveals that be able to expresses actualised 'opportunities' and not actualised 'abilities'. In the final part of this paper, attention is given to the theoretical implications of the empirical findings, and in particular to the following paradox: how can the same expression encode both modal meaning (non-factual) and actualisation (factual)? It will be argued that this largely depends on one's conception of the semantics-pragmatics interface, and that it need not be an issue when actualisation (unlike modality) is analysed as a generalised conversational implicature and thus considered part of the conventional pragmatic layer of be able to.

Keywords: actualisation, modality, pragmatics, semantics

Procedia PDF Downloads 127
201 Part of Speech Tagging Using Statistical Approach for Nepali Text

Authors: Archit Yajnik

Abstract:

Part-of-speech tagging has always been a challenging task in Natural Language Processing. This article presents POS tagging for Nepali text using a Hidden Markov Model and the Viterbi algorithm. From the annotated Nepali corpus, training and testing data sets are randomly separated, and both methods are employed on the data sets. The Viterbi algorithm is found to be computationally faster and more accurate as compared to the HMM, achieving an accuracy of 95.43%. An error analysis of where the mismatches took place is discussed in detail.
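
A minimal Viterbi decoder over toy HMM parameters is sketched below; the tagset, the probabilities, and the example words are invented placeholders, whereas the study estimates these quantities from the annotated Nepali corpus.

```python
# Minimal Viterbi sketch for POS tagging with toy HMM parameters.
# Tags, probabilities, and the sentence are invented placeholders.
import math

TAGS = ["N", "V"]
start = {"N": 0.7, "V": 0.3}
trans = {"N": {"N": 0.4, "V": 0.6}, "V": {"N": 0.8, "V": 0.2}}
emit = {"N": {"ram": 0.6, "khana": 0.4}, "V": {"khancha": 0.9, "khana": 0.1}}

def viterbi(words):
    V = [{t: math.log(start[t]) + math.log(emit[t].get(words[0], 1e-6))
          for t in TAGS}]
    back = []
    for w in words[1:]:
        col, ptr = {}, {}
        for t in TAGS:
            best_prev = max(TAGS, key=lambda p: V[-1][p] + math.log(trans[p][t]))
            col[t] = (V[-1][best_prev] + math.log(trans[best_prev][t])
                      + math.log(emit[t].get(w, 1e-6)))
            ptr[t] = best_prev
        V.append(col)
        back.append(ptr)
    last = max(TAGS, key=lambda t: V[-1][t])
    path = [last]
    for ptr in reversed(back):  # follow backpointers to recover the tags
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["ram", "khana", "khancha"]))  # expected: ['N', 'N', 'V']
```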

Keywords: hidden markov model, natural language processing, POS tagging, viterbi algorithm

Procedia PDF Downloads 325
200 The Influence of Screen Translation on Creative Audiovisual Writing: A Corpus-Based Approach

Authors: John D. Sanderson

Abstract:

The popularity of American cinema worldwide has contributed, by means of screen translation, to the development of sociolects related to specific film genres in other cultural contexts, in many cases eluding norms of usage in the target language; the result of this process has come to be known as 'dubbese'. A consequence for reception in countries where the consumption of local audiovisual fiction is far lower than that of imported American productions is that this linguistic construct is preferred, even though it differs from common everyday speech. The iconography of film genres such as science fiction, westerns, or sword-and-sandal films, for instance, generates linguistic expectations in international audiences, who will more easily accept the sociolects assimilated through the continuous reception of American productions, even if the themes, locations, characters, etc., portrayed on screen belong in origin to other cultures. And the non-normative language (e.g., calques, semantic loans) used in the preferred mode of linguistic transfer, whether translation for dubbing or subtitling, has in many cases diachronically evolved into the status of a canonized sociolect, not only accepted but also required by foreign audiences of American films. However, a remarkable step forward is taken when this typology of artificial linguistic constructs starts being used creatively by nationals of these target cultural contexts. In the case of Spain, the success of American sitcoms such as Friends in the 1990s led Spanish television scriptwriters to include lexical and syntactical indirect borrowings (Anglicisms not formally identifiable as such because they include elements from their own language) in national productions in order to target audiences of the former. However, this commercial strategy had already been used decades earlier, when Spain became a favored location for the shooting of foreign films in the early 1960s. The international popularity of the then newly developed sub-genre known as the Spaghetti Western encouraged Spanish investors to produce their own movies, and local scriptwriters made use of the dubbese developed nationally since the advent of sound film instead of using normative language. As a result, direct Anglicisms as well as lexical and syntactical borrowings made up the creative writing of these Spanish productions, which also became commercially successful. Interestingly enough, some of these films were even marketed in English-speaking countries as original westerns (some of the names of actors and directors were anglicized to that purpose) dubbed into English. The analysis of these 'back translations' will also foreground some semantic distortions that arose in the process. In order to carry out the research on these issues, a wide corpus of American films has been used, ranging chronologically from Stagecoach (John Ford, 1939) to Django Unchained (Quentin Tarantino, 2012), together with a shorter corpus of Spanish films produced during the golden age of the Spaghetti Western, from Una tumba para el sheriff (Mario Caiano; in English Lone and Angry Man, William Hawkins) to Tu fosa será la exacta, amigo (Juan Bosch, 1972; in English My Horse, My Gun, Your Widow, John Wood). The methodology of analysis and the conclusions reached could be applied to other genres and other cultural contexts.

Keywords: dubbing, film genre, screen translation, sociolect

Procedia PDF Downloads 168
199 How Is a Machine-Translated Literary Text Organized in Coherence? An Analysis Based upon Theme-Rheme Structure

Authors: Jiang Niu, Yue Jiang

Abstract:

With the ultimate goal of automatically generating high-quality translated texts, machine translation has made tremendous improvements. However, its translations of literary works are still plagued with problems of coherence, especially in translation between distant language pairs. One probable cause of these problems is the lack of linguistic knowledge incorporated into the training of machine translation systems. In order to enable readers to better understand the coherence problems of machine translation, to seek out the potential knowledge to be incorporated, and thus to improve the quality of machine translation products, this study applies Theme-Rheme structure to examine how a machine-translated literary text is organized and developed in terms of coherence. Theme-Rheme structure in Systemic Functional Linguistics is a useful tool for the analysis of textual coherence. The Theme is the departure point of a clause, and the Rheme is the rest of the clause. In a text, as Themes and Rhemes may be connected with each other in meaning, they form thematic and rhematic progressions throughout the text. Based on this structure, we can look into how a text is organized and developed in terms of coherence. Methodologically, we chose Chinese and English as the language pair to be studied. Specifically, we built a comparable corpus with two modes of English translation, viz. machine translation (MT) and human translation (HT), of one Chinese literary source text. The translated texts were annotated with Themes, Rhemes and their progressions throughout the texts. The annotated texts were analyzed in two respects: the different types of Themes functioning differently in achieving coherence, and the different types of thematic and rhematic progressions functioning differently in constructing texts. By analyzing and contrasting the two modes of translation, it is found that, compared with the HT, 1) the MT features 'pseudo-coherence', with many ill-connected fragments of information strung together with 'and'; 2) the MT system produces a static and less interconnected text that reads like a list; these two points, in turn, make the organization and development of the MT less coherent than those of the HT; 3) in a finding novel to traditional and previous studies, Rhemes do contribute to textual connection and coherence, though less than Themes do, and are thus worthy of notice in further studies. Hence, the findings suggest that Theme-Rheme structure be applied to measuring and assessing the coherence of machine translation, that it be incorporated into the training of machine translation systems, and that Rhemes be taken into account when studying the textual coherence of both MT and HT.
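To illustrate the kind of analysis the abstract describes, the following minimal Python sketch counts thematic progression types (constant, simple linear, unlinked) once clauses have been annotated with Theme-Rheme pairs. The annotation format, the crude lexical-overlap link test, and the toy data are illustrative assumptions, not the authors' actual annotation scheme or procedure.

# A minimal sketch (not the study's code) of counting thematic progressions
# over clauses already annotated as (Theme, Rheme) pairs.

from collections import Counter

STOP = {"the", "a", "an", "and", "of", "to", "in", "it", "by", "was"}

def overlaps(a: str, b: str) -> bool:
    # Crude semantic-link test via shared content words; a real study
    # would use coreference or semantic similarity instead.
    return bool((set(a.lower().split()) - STOP) & (set(b.lower().split()) - STOP))

def progression_profile(clauses):
    # clauses: list of (theme, rheme) pairs in text order.
    counts = Counter()
    for (t1, r1), (t2, _) in zip(clauses, clauses[1:]):
        if overlaps(t1, t2):
            counts["constant"] += 1       # Theme repeated: topic continuity
        elif overlaps(r1, t2):
            counts["simple_linear"] += 1  # Rheme picked up as the next Theme
        else:
            counts["unlinked"] += 1       # no overt link: candidate incoherence
    return counts

# Toy comparison of a human and a machine translation of the same passage.
ht = [("The old man", "lived by the river"),
      ("The river", "froze every winter"),
      ("Every winter", "he repaired his nets")]
mt = [("The old man", "lived by the river"),
      ("And winter", "was cold"),
      ("And he", "had nets")]

print("HT:", progression_profile(ht))  # linked progressions dominate
print("MT:", progression_profile(mt))  # unlinked fragments dominate

On this toy data the HT yields only linked progressions while the MT yields only unlinked ones, mirroring the 'list-like', pseudo-coherent texture the study attributes to machine translation.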

Keywords: coherence, corpus-based, literary translation, machine translation, Theme-Rheme structure

Procedia PDF Downloads 201
198 VIAN-DH: Computational Multimodal Conversation Analysis Software and Infrastructure

Authors: Teodora Vukovic, Christoph Hottiger, Noah Bubenhofer

Abstract:

The development of VIAN-DH aims at bridging two linguistic approaches: conversation analysis/interactional linguistics (IL), so far a dominantly qualitative field, and computational/corpus linguistics with its quantitative and automated methods. Contemporary IL investigates the systematic organization of conversations and interactions composed of speech, gaze, gestures, and body positioning, among others. This highly integrated multimodal behaviour is analysed on the basis of video data with the aim of uncovering so-called 'multimodal gestalts', patterns of linguistic and embodied conduct that recur in specific sequential positions and are employed for specific purposes. Multimodal analyses (and other disciplines using video) have so far depended on time- and resource-intensive processes of manually transcribing each component from the video materials. Automating these tasks requires advanced programming skills, which are often outside the scope of IL. Moreover, the use of different tools makes the integration and analysis of different formats challenging. Consequently, IL research often deals with relatively small samples of annotated data, which are suitable for qualitative analysis but insufficient for making generalized empirical claims derived quantitatively. VIAN-DH aims to create a workspace where the many annotation layers required for the multimodal analysis of videos can be created, processed, and correlated in one platform. VIAN-DH will provide a graphical interface that operates state-of-the-art tools for automating parts of the data processing. The integration of tools that already exist in computational linguistics and computer vision facilitates data processing for researchers lacking programming skills, speeds up the overall research process, and enables the processing of large amounts of data. The main features to be introduced are automatic speech recognition for the transcription of language, automatic image recognition for the extraction of gestures and other visual cues, and grammatical annotation for adding morphological and syntactic information to the verbal content. In the ongoing instance of VIAN-DH, we focus on gesture extraction (pointing gestures in particular), making use of existing models created for sign language and adapting them for this specific purpose. In order to view and search the data, VIAN-DH will provide a unified format, enable the import of the main existing formats of annotated video data and the export to other formats used in the field, and integrate different data source formats so that they can be combined in research. VIAN-DH will adapt querying methods from corpus linguistics to enable the parallel search of many annotation levels, combining token-level and chronological search for various types of data. VIAN-DH strives to bring crucial and potentially revolutionary innovation to the field of IL (one that can also extend to other fields using video materials). It will allow large amounts of data to be processed automatically and quantitative analyses to be implemented in combination with the qualitative approach. It will facilitate the investigation of correlations between linguistic patterns (lexical or grammatical) and conversational aspects (turn-taking or gestures). Users will be able to automatically transcribe and annotate visual, spoken and grammatical information from videos, to correlate those different levels, and to perform queries and analyses.
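As a rough illustration of the cross-layer querying the abstract describes, the following Python sketch models time-aligned annotation layers and a combined token-level and chronological search. The data model, function names, and example values are hypothetical assumptions for illustration, not VIAN-DH's actual format or API.

# A hypothetical sketch of multi-layer, time-aligned querying over video
# annotations; the schema and values are illustrative, not VIAN-DH's own.

from dataclasses import dataclass

@dataclass
class Annotation:
    layer: str    # e.g. "token", "pos", "gesture"
    start: float  # seconds from video start
    end: float
    value: str

def co_occurs(a: Annotation, b: Annotation) -> bool:
    # Two annotations co-occur if their time spans intersect.
    return a.start < b.end and b.start < a.end

def query(annotations, token_value, gesture_value):
    # Token-level + chronological search: find tokens with a given value
    # that are temporally aligned with a given gesture type.
    tokens = [a for a in annotations if a.layer == "token" and a.value == token_value]
    gestures = [a for a in annotations if a.layer == "gesture" and a.value == gesture_value]
    return [(t, g) for t in tokens for g in gestures if co_occurs(t, g)]

# Toy data: a deictic "there" accompanied by a pointing gesture,
# and a second "there" with no aligned gesture.
anns = [
    Annotation("token", 12.1, 12.4, "there"),
    Annotation("gesture", 11.9, 12.6, "pointing"),
    Annotation("token", 20.0, 20.3, "there"),
]

for tok, ges in query(anns, "there", "pointing"):
    print(f"'{tok.value}' at {tok.start}s co-occurs with a {ges.value} gesture")

The design point the sketch tries to capture is that every layer, whether produced by speech recognition, image recognition, or grammatical annotation, shares one time axis, so correlations between lexical patterns and gestures reduce to interval-overlap queries across layers.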

Keywords: multimodal analysis, corpus linguistics, computational linguistics, image recognition, speech recognition

Procedia PDF Downloads 105