Search results for: English as the default language
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4530

1890 The Phonology and Phonetics of Second Language Intonation in Case of “Downstep”

Authors: Tayebeh Norouzi

Abstract:

This study investigates the acquisition process of intonation. It examines the intonation structure of Tokyo Japanese and its realization by Iranian learners of Japanese. Seven Iranian learners of Japanese, differing in fluency, and two Japanese speakers participated in the experiment. Two sentences were used to test the phonological and phonetic characteristics of lexical pitch-accent as well as the intonation patterns produced by the speakers. Both sentences consisted of similar words with the same number of syllables and lexical pitch-accents but different syntactic structures. Speakers were asked to read each sentence three times at normal speed, and the data were analyzed in Praat. The results show that lexical pitch-accent, Accentual Phrase (AP) and AP boundary tone realization vary depending on sentence type. For sentences of type XdeYwo, the lexical pitch-accent is realized properly. However, there is a rise in AP boundary tone regardless of the speakers’ level of fluency. In contrast, in sentences of type XnoYwo, the lexical pitch-accent and AP boundary tone vary depending on the speakers’ fluency level. Advanced speakers are better at grouping words into phrases and produce more native-like intonation patterns, though they are not able to realize downstep properly. The non-native speakers tried to produce proper intonation patterns by making changes in lexical accent and boundary tone.
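The Praat analysis step lends itself to scripting. Below is a minimal sketch using the praat-parselmouth Python library rather than Praat's own scripting language (an assumption on our part; the authors report only that they used Praat). The file name and sampling times are hypothetical.

```python
import math
import parselmouth

snd = parselmouth.Sound("XdeYwo_speaker1_rep1.wav")   # hypothetical file name
pitch = snd.to_pitch(time_step=0.01)                  # F0 track in 10 ms steps

# Sample F0 near the accented syllable and near the AP boundary to compare
# pitch-accent realization and boundary-tone rises across speakers.
for t in (0.25, 0.60):                                # hypothetical time points (s)
    f0 = pitch.get_value_at_time(t)
    if math.isnan(f0):                                # Praat returns NaN for unvoiced frames
        print(f"unvoiced at {t:.2f} s")
    else:
        print(f"F0 at {t:.2f} s: {f0:.1f} Hz")
```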

Keywords: intonation, Iranian learners, Japanese prosody, lexical accent, second language acquisition

Procedia PDF Downloads 169
1889 Investigating Classroom Teachers' Perceptions of Assessing U.S. College Students' L2 Chinese Oral Performance

Authors: Guangyan Chen

Abstract:

This study examined Chinese teachers’ perceptions of assessing U.S. college students’ L2 (second language) Chinese oral performances at different levels. Ten oral performances were videotaped, from which three were chosen as samples to represent three proficiency levels, based on professionals’ judgments according to the ACTFL proficiency guidelines. The three samples were shown to L2 Chinese teachers, who completed questionnaires about their assessment of each speech sample. In total, 104 L2 Chinese teachers responded to each of the three samples. Exploratory Factor Analyses (EFA) of the teachers’ responses revealed similar rating-criteria patterns for assessing the three levels of oral performance. The teachers’ responses to Samples 2 and 3 revealed five rating criteria: global proficiency, Chinese conceptual framework, content richness, communication appropriateness, and communication clarity. The teachers’ responses to Sample 1 revealed four rating criteria: global proficiency, Chinese conceptual framework, communication appropriateness/content richness, and communication clarity. However, analyses of variance (ANOVAs) revealed that the proficiency levels of the three oral performances differed significantly across all rating criteria. The data therefore suggest that L2 classroom teachers could use a similar rating-criteria pattern to assess college-level L2 Chinese students’ oral performances at different proficiency levels.
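For readers unfamiliar with this statistical pipeline, the Python sketch below shows its general shape: an exploratory factor analysis of the questionnaire items, followed by a one-way ANOVA on one criterion across samples. File and column names are hypothetical; the paper does not publish its scripts.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from scipy import stats

# One row per teacher, one column per questionnaire item (hypothetical file)
ratings = pd.read_csv("teacher_ratings_sample2.csv")

# Five criteria emerged for Samples 2 and 3; inspect which items load on which
fa = FactorAnalyzer(n_factors=5, rotation="varimax")
fa.fit(ratings)
print(fa.loadings_)

# One-way ANOVA on one criterion's scores across the three samples
s1, s2, s3 = (pd.read_csv(f)["global_proficiency"]
              for f in ("sample1.csv", "sample2.csv", "sample3.csv"))
print(stats.f_oneway(s1, s2, s3))
```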

Keywords: language assessment, L2 Chinese, oral performance, rating criteria

Procedia PDF Downloads 539
1888 Discursive Construction of Strike in the Media Coverage of Academic Staff Union of Universities vs Federal Government of Nigeria Industrial Conflict of 2013

Authors: Samuel Alaba Akinwotu

Abstract:

Over the years, Nigeria’s educational system has suffered greatly from the menace of industrial conflict. The smooth running of the nation’s public educational institutions has been hampered by incessant strikes embarked upon by workers of these institutions. Even though industrial conflicts in Nigeria have enjoyed wide reportage in the media, there has been a dearth of critical examination of the language use that indexes the conflicts’ discourse in the media. This study, driven by a combination of Critical Discourse Analysis (CDA) and Conceptual Metaphor (CM), examines the discursive and ideological features of the language indexing the industrial conflict between the Academic Staff Union of Universities (ASUU) and the Federal Government of Nigeria (FGN) in 2013. It aims to identify and assess the conceptual and cognitive motivations of the stances expressed by the parties and the public, and the role of the media in the management and resolution of the conflict. For data, media reports and readers’ comments were purposively sampled from six print and online news sources (The Punch, This Day, Vanguard, The Nation, Osun Defender and AITonline) published between July and December 2013. The study provides further insight into industrial conflict and should prove useful for the management and resolution of industrial conflicts, especially in public educational institutions.

Keywords: industrial conflict, critical discourse analysis, conceptual metaphor, federal government of Nigeria, academic staff union of universities

Procedia PDF Downloads 142
1887 The Efficacy of Clobazam for Landau-Kleffner Syndrome

Authors: Nino Gogatishvili, Davit Kvernadze, Giorgi Japharidze

Abstract:

Background and aims: Landau-Kleffner syndrome (LKS) is a rare disorder with epileptic seizures and acquired aphasia. It usually starts in initially healthy children. The first symptoms are language regression and behavioral disturbances, and the sleep EEG reveals abnormal epileptiform activity. The aim was to discuss the efficacy of Clobazam for Landau-Kleffner syndrome. Case report: We report the case of an 11-year-old boy with an uneventful pregnancy and delivery. He began to walk at 11 months and to speak in simple phrases at the age of 2.5 years. At the age of 18 months, he had febrile convulsions; at the age of 5 years, the parents noticed language regression, stuttering, and serious behavioral dysfunction, including hyperactivity and temper outbursts. No epileptic seizures were observed. MRI showed no abnormality. Neuropsychological testing revealed verbal auditory agnosia. Sleep EEG showed abundant left fronto-temporal spikes, present over 85% of non-rapid eye movement (non-REM) sleep. Treatment was started with Clobazam. After ten weeks, the EEG had improved; stuttering and behavior also improved. Results: Since the start of Clobazam treatment, stuttering and behavior have improved. Now, at 11 years of age, he is without antiseizure medication. Sleep EEG shows fronto-temporal spikes on the left side over 10-49% of non-REM sleep, as well as bioccipital spikes, slow-wave discharges and spike-waves. Conclusions: This case provides further support for the efficacy of Clobazam in patients with LKS.

Keywords: Landau-Kleffner syndrome, antiseizure medication, stuttering, aphasia

Procedia PDF Downloads 66
1886 Artificial Intelligence in Duolingo

Authors: Jwana Khateeb, Lamar Bawazeer, Hayat Sharbatly, Mozoun Alghamdi

Abstract:

This research paper explores the idea of learning new languages through an innovative mobile-based learning technology. Throughout this paper, we discuss and examine a mobile-based application called Duolingo. Duolingo is a widely used application for learning foreign languages such as Spanish and English. It is a smart application that uses adaptive technologies to advance its students’ level over time by offering new tasks. Furthermore, we discuss the history of the application and the methodology used within it. We conducted a study in which we surveyed ten people about their experience using Duolingo. The results are examined and analyzed, and they indicate the app’s effectiveness for students seeking to learn new languages. The paper furthermore discusses the diverse methods and approaches to learning new languages through this mobile-based application.

Keywords: Duolingo, AI, personalized, customized

Procedia PDF Downloads 289
1885 Relationship of Macro-Concepts in Educational Technologies

Authors: L. R. Valencia Pérez, A. Morita Alexander, Peña A. Juan Manuel, A. Lamadrid Álvarez

Abstract:

This research reflects on and identifies explanatory variables, and the relationships between the different variables, involved in educational technology, all of them encompassed in four macro-concepts: cognitive inequality, economy, food and language. These provide the guideline for a more detailed knowledge of educational systems, communication and equipment, physical space and teachers, all of which interact with each other to give rise to what is called educational technology management. These elements contribute to a very specific knowledge of communications equipment, networks and computer equipment, systems and content repositories. The aim is to establish the importance of knowing the global environment in the transfer of knowledge to poor countries, so that it does not diminish their capacity to be authentic and to preserve their cultures, their languages or dialects, their hierarchies and real needs; in short, to respect the customs of the different towns, villages or cities that are intended to be reached through the use of internationally agreed professional educational technologies. The methodology used in this research is analytical-descriptive, which makes it possible to explain each of the variables that, in our opinion, must be taken into account in order to achieve an optimal incorporation of educational technology in a model that gives results in the medium term. The idea is that concepts will be progressively integrated into others with greater coverage, until reaching macro-concepts of national coverage that serve as elements of conciliation in the different federal and international reforms. At the center of the model is educational technology, which is directly related to the concepts contained in factors such as the educational system, communication and equipment, spaces and teachers, all globally immersed in the macro-concepts of cognitive inequality, economy, food and language. One of the major contributions of this article is to express this idea as an algorithm that is as unbiased as possible when evaluating this indicator, drawing other indicators from international reference bodies such as the OECD in the area of education systems, so that they are not influenced by particular political or interest-group pressures. This work opens the way for relating the entities involved (conceptual, procedural and human) so as to clearly identify the convergence of their impact on the problem of education, how this relationship can contribute to an improvement, and the possibility of reaching a comprehensive education reform for all.

Keywords: macro-concept relationships, cognitive inequality, economics, alimentation and language

Procedia PDF Downloads 199
1884 Information Technology Approaches to Literature Text Analysis

Authors: Ayse Tarhan, Mustafa Ilkan, Mohammad Karimzadeh

Abstract:

In ancient Greece, science was considered part of philosophy. By the nineteenth century, it was understood that philosophy was very inclusive and that social and human sciences such as literature, history, and psychology should be separated and perceived as autonomous branches of science. The computer, too, was first seen as a tool of mathematical science. Over time, computer science has grown to encompass every area in which technology exists, and its growth compelled its division into different disciplines, just as philosophy had been divided into different branches of science. Now there is almost no branch of science in which computers are not used. One of the newer autonomous disciplines of computer science is digital humanities, and one of the areas of digital humanities is literature. The material of literature is words, and thanks to software tools created with computer programming languages, analyses that would take a literature researcher months to complete can be carried out quickly and objectively. In this article, three tools that literary researchers can use in their work are introduced. These tools were created with the programming languages Python and R and brought to the world of literature. The purpose of introducing them is to set an example for the future development of special tools or programs for Ottoman language and literature and to support such initiatives. The first example is the stylometry tool developed in R. The second is the Metrical Tool, developed in Python, which is used to measure data in poems. The third literature analysis tool in this article is Voyant Tools, a multifunctional and easy-to-use tool.
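To make the stylometry idea concrete, here is a toy Python illustration of the approach such tools implement (the stylometry tool discussed in the paper was written in R): texts are represented by the relative frequencies of the corpus's most frequent words and compared by distance. File names are hypothetical.

```python
from collections import Counter
import numpy as np

def read(path):
    with open(path, encoding="utf-8") as fh:
        return fh.read().lower()

texts = {name: read(name) for name in ("author_a.txt", "author_b.txt", "disputed.txt")}

# Vocabulary: the 100 most frequent words across the whole corpus
corpus_counts = Counter(" ".join(texts.values()).split())
vocab = [w for w, _ in corpus_counts.most_common(100)]

def profile(text):
    # relative frequency of each vocabulary word in one text
    counts = Counter(text.split())
    total = max(sum(counts.values()), 1)
    return np.array([counts[w] / total for w in vocab])

profiles = {name: profile(t) for name, t in texts.items()}
for name in ("author_a.txt", "author_b.txt"):
    dist = np.linalg.norm(profiles["disputed.txt"] - profiles[name])
    print(name, round(float(dist), 4))   # smaller distance = stylistically closer
```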

Keywords: DH, literature, information technologies, stylometry, the metrical tool, voyant tools

Procedia PDF Downloads 151
1883 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is in fact more efficient to calculate the transform of the distribution function in the Fourier domain; inverting back to the real domain can then be done in one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature, to the best of our knowledge, of accurate numerical methods for risk allocation, but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are shown to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, primarily driven by the number of factors instead of the number of obligors, as is the case in Monte Carlo simulation. The limitation of this method lies in the "curse of dimension" that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension-reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential application of this method has a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even other risk types than credit risk.
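As a self-contained illustration of the COS machinery the abstract refers to (not the authors' portfolio-loss application), the sketch below recovers a CDF semi-analytically from a known characteristic function via a Fourier-cosine expansion. The standard normal is used here only because its characteristic function is known in closed form, so the result can be checked against scipy.

```python
import numpy as np
from scipy.stats import norm

a, b, K = -10.0, 10.0, 64                  # truncation range and number of cosine terms
k = np.arange(K)
u = k * np.pi / (b - a)
phi = np.exp(-0.5 * u**2)                  # characteristic function of N(0,1)
Fk = 2.0 / (b - a) * np.real(phi * np.exp(-1j * u * a))
Fk[0] *= 0.5                               # first term of the cosine series is halved

def cos_cdf(x):
    # integrate the cosine density expansion term by term from a to x
    terms = Fk[1:] * (b - a) / (k[1:] * np.pi) * np.sin(u[1:] * (x - a))
    return Fk[0] * (x - a) + terms.sum()

for x in (-1.0, 0.0, 1.645):
    print(x, round(cos_cdf(x), 6), round(norm.cdf(x), 6))   # COS vs. exact CDF
```

In the paper's setting the characteristic function of the portfolio loss under the factor-copula model replaces `phi`, which is why the complexity is driven by the number of factors rather than the number of obligors.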

Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method

Procedia PDF Downloads 167
1882 A Framework for Chinese Domain-Specific Distant Supervised Named Entity Recognition

Authors: Qin Long, Li Xiaoge

Abstract:

Knowledge graphs have become a new form of knowledge representation. However, there is no consensus regarding a plausible definition of entities and relationships in the domain-specific knowledge graph. Further, owing to several limitations and deficiencies, existing approaches to recognizing domain-specific entities and relationships are far from perfect. In particular, named entity recognition in Chinese domains is a critical task for natural language processing applications, and a bottleneck for Chinese named entity recognition in new domains is the lack of annotated data. To address this challenge, a distantly supervised named entity recognition framework for specific domains is proposed. The framework is divided into two stages: first, a distantly supervised corpus is generated by an entity-linking model based on a graph attention neural network; second, the generated corpus is used as input to train the distantly supervised named entity recognition model. The linking model is verified on the CCKS2019 entity-linking corpus, where its F1 value is 2% higher than that of the benchmark method. A re-pre-trained BERT language model is added to the benchmark method, and the results show that it is more suitable for distantly supervised named entity recognition tasks. Finally, the framework is applied in the computer domain, and the results show that it can obtain domain named entities.
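A toy illustration of the distant-supervision step is shown below: projecting a dictionary of known entities onto raw sentences to produce BIO-tagged training data. The paper generates its corpus with a graph-attention entity-linking model; the dictionary lookup here is a deliberately simplified stand-in, and the entity list is hypothetical.

```python
# Hypothetical entity dictionary: surface form -> label
ENTITY_DICT = {"graph attention network": "TECH", "BERT": "TECH"}

def distant_tag(tokens):
    """Produce BIO tags by matching dictionary entries against the token list."""
    tags = ["O"] * len(tokens)
    for name, label in ENTITY_DICT.items():
        ent = [e.lower() for e in name.split()]
        for i in range(len(tokens) - len(ent) + 1):
            if [t.lower() for t in tokens[i:i + len(ent)]] == ent:
                tags[i] = f"B-{label}"
                for j in range(i + 1, i + len(ent)):
                    tags[j] = f"I-{label}"
    return tags

tokens = "We fine-tune BERT with a graph attention network .".split()
print(list(zip(tokens, distant_tag(tokens))))
```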

Keywords: distant named entity recognition, entity linking, knowledge graph, graph attention neural network

Procedia PDF Downloads 95
1881 Application of Reliability Methods to the Analysis of the Stability Limit States of Large Concrete Dams

Authors: Mustapha Kamel Mihoubi, Essadik Kerkar, Abdelhamid Hebbouche

Abstract:

Given the randomness of most of the factors affecting the stability of a gravity dam, probability theory is generally used to assess the risk of failure; because the logical transition from a stable state to a failed state is not clear-cut, the stability failure process is treated as a probabilistic event. Controlling the risk of failure is of capital importance: it rests on a cross-analysis of the severity of the consequences and the probability of occurrence of identified major accidents, which can pose a significant risk to concrete dam structures. Probabilistic risk analysis models are used to provide a better understanding of the reliability and structural failure of such works, including when calculating the stability of large structures exposed to major risk in the event of an accident or breakdown. This work studies the probability of failure of concrete dams through the application of reliability analysis methods, including those used in engineering. In our case, level II methods are applied via limit state analysis; hence, the probability of failure is estimated by analytical methods of the FORM (First Order Reliability Method) and SORM (Second Order Reliability Method) type. By way of comparison, a level III method was also used, which performs a full analysis of the problem by integrating the probability density function of the random variables, extended to the safe domain, using Monte Carlo simulation. Taking into account the change in stress under the normal, exceptional and extreme load combinations acting on the dam, the calculation results provide acceptable failure probability values that largely corroborate the theory: the probability of failure tends to increase with increasing load intensity, causing a significant decrease in strength, especially in the presence of unique and extreme load combinations. Shear forces then induce sliding that threatens the reliability of the structure through intolerable failure probabilities, especially in the case of increased uplift under a hypothetical failure of the drainage system.
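The level III idea reduces to a simple recipe: sample the random variables, evaluate the limit state function g, and estimate the failure probability as the fraction of samples with g < 0. The sketch below does this for a generic sliding limit state g = R - S; the distributions and parameter values are illustrative, not the dam's actual geotechnical data.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
N = 1_000_000
R = rng.lognormal(mean=np.log(900.0), sigma=0.10, size=N)  # sliding resistance (kN), illustrative
S = rng.normal(loc=600.0, scale=80.0, size=N)              # driving shear force (kN), illustrative

pf = np.mean(R - S < 0.0)        # P[g < 0]: probability of sliding failure
beta = -norm.ppf(pf)             # corresponding reliability index
print(f"pf = {pf:.2e}, beta = {beta:.2f}")
```

FORM, by contrast, approximates the same probability analytically by locating the design point and linearizing g there, which is why the two families of methods are compared in the paper.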

Keywords: dam, failure, limit state, Monte Carlo, reliability, probability, sliding, Taylor

Procedia PDF Downloads 318
1880 Analysis of the Development of Communicative Skills After Participating in the Equine-Assisted-Therapy Program Step-By-Step in Communication

Authors: Leticia Souza Guirra, Márcia Eduarda Vieira Ramos, Edlaine Souza Pereira, Leticia Correa Celeste

Abstract:

Introduction: Studies indicate that equine-assisted therapy enables improvements in several areas of functioning that are impaired in children with autism spectrum disorder (ASD), such as social interaction and communication. Objective: The study proposes to analyze the development of dialogic skills of a verbal child with ASD after participation in the equine-assisted therapy program Step By Step in Communication. Method: This is quantitative and qualitative research through a case study. It concerns a 6-year-old child diagnosed with ASD belonging to a group of practitioners of the Brazilian National Equine-Assisted-Therapy Association. The Behavioral Observation Protocol (PROC) was used to evaluate communicative skills before and after the intervention, which consisted of 24 weekly sessions. Results: All conversational skills increased in frequency, including participation in dialogue and initiation of interaction. The child also more consistently waited for his turn and answered the interlocutor. Utterances unrelated to the topic of conversation and echolalia decreased significantly after the intervention. Conclusion: The child studied showed improvement in communicative skills after participating in the equine-assisted therapy program Step By Step in Communication. Contributions: This study contributes to a greater understanding of the impact of equine-assisted therapy on the communicative abilities of children with ASD.

Keywords: equine-assisted-therapy, autism spectrum disorder, language, communication, language and hearing sciences

Procedia PDF Downloads 81
1879 The Analysis of a Learning Media Prototype as Web Learning in Distance Education

Authors: Yudi Efendi, Hasanuddin

Abstract:

A web-based learning program is a complement to Printed Teaching Material (BMP) that helps students clarify the parts that require additional explanation or illustration. This research analyzes a prototype of such a web-based learning program: an interactive program complete with exercises and formative tests. Using a qualitative descriptive method, the research presents analyses from a content expert and a media expert, along with interviews with tutors of Political and Social Sciences. The research also analyzes questionnaires from students of the English and literature program in Jakarta. The questionnaire deals with content display, audio and video, usability, and navigation. In the long run, it is expected that the program can be recommended for use by the university as an ideal program.

Keywords: web learning, prototype, content expert, media expert

Procedia PDF Downloads 247
1878 Parviz Jabrayil's Novel 'In a Foreign Language': Delimitation of Postmodernism from Modernism

Authors: Nargiz Ismayilova

Abstract:

The issue of modernism and the concept of postmodernism have been the focus of world researchers for many years, and very few researchers have come to a common denominator on the term. During the independence period, the expansion of the relations of Azerbaijani literature with the world led to the spread to our country’s literary environment of many currents and tendencies formed in the West. In this context, the works created in our environment are distinguished by their extreme richness of subject matter and diversity of genre. As an interesting example of contemporary postmodern prose in Azerbaijan, Parviz Jabrayil’s novel "In a Foreign Language" draws attention with its distinctive plotline. Critics disagree about the novel: some look for high artistry in the work; others are satisfied with its elements of postmodernism. Delimiting the border between modernism and postmodernism can serve as the basis for a deep scientific study of the novel. The novel depicts the world in the author’s consciousness against the background of a water shortage (thirst) in the Old City (Icharishahar). The author deconstructs today’s Ichari Shahar mould. Along with modernism, elements of postmodernism occupy a large place in the work. Looking at the general tendencies of postmodernist art, we see that it questions science and individuality, criticizes the sharp boundaries of modernism and the negative effects of those restrictions, and, by identifying modernism’s negatives and shortcomings in the area of artistic freedom, offers alternatives for artistic production. From this point of view, the novel is extremely interesting.

Keywords: concept of postmodernism, modernism, delimitation, political postmodernism, modern postmodern prose, Azerbaijani literature, novel, comparison, world literature, analysis

Procedia PDF Downloads 137
1877 From Shallow Semantic Representation to Deeper One: Verb Decomposition Approach

Authors: Aliaksandr Huminski

Abstract:

Semantic Role Labeling (SRL), a shallow semantic parsing approach, involves recognizing and labeling the arguments of a verb in a sentence. Verb participants are linked with specific semantic roles (Agent, Patient, Instrument, Location, etc.). Thus, SRL can answer key questions such as ‘Who’, ‘When’, ‘What’, ‘Where’ in a text, and it is widely applied in dialog systems, question answering, named entity recognition, information retrieval, and other fields of NLP. However, SRL has the following flaw: two sentences with identical (or almost identical) meaning can have different semantic role structures. Consider two sentences: (1) John put butter on the bread. (2) John buttered the bread. SRL for (1) and (2) will be significantly different. For the verb put in (1) it is [Agent + Patient + Goal], but for the verb butter in (2) it is [Agent + Goal]. This happens because of one of the most interesting and intriguing features of a verb: its ability to capture participants, as in the case of the verb butter, or their features, as, say, in the case of the verb drink, where the participant’s feature of being liquid is shared with the verb. This capture amounts to a total fusion of meaning and cannot be decomposed in a direct way (in contrast with compound verbs like babysit or breastfeed). From this perspective, SRL looks too shallow to represent semantic structure. If the key point of semantic representation is the opportunity to use it for making inferences and finding hidden reasons, it assumes by default that two different but semantically identical sentences must have the same semantic structure; otherwise we will draw different inferences from the same meaning. To overcome the above-mentioned flaw, the following approach is suggested. Assume that: P is a participant of a relation; F is a feature of a participant; Vcp is a verb that captures a participant; Vcf is a verb that captures a feature of a participant; Vpr is a primitive verb, i.e., a verb that does not capture any participant and represents only a relation. In other words, a primitive verb is a verb whose meaning does not include meanings from its surroundings. Then Vcp and Vcf can be decomposed as: Vcp = Vpr + P; Vcf = Vpr + F. If all Vcp and Vcf are represented this way, then primitive verbs Vpr can be considered a canonical form for SRL. As a result, there will be no hidden participants caught by a verb, since all participants will be explicitly unfolded. An obvious example of a Vpr is the verb go, which represents pure movement; the verb drink can then be represented as man-made movement of liquid in a specific direction. Extracting and using primitive verbs for SRL creates a canonical representation that is unique for semantically identical sentences. This leads to the unification of semantic representation, and the critical flaw of SRL described above is resolved.
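The decomposition can be pictured as a small lexicon plus an unfolding step, as in the Python sketch below. The lexicon entries and role names are illustrative; the paper's actual inventory of primitives would come from systematic verb decomposition.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decomposition:
    primitive: str                     # Vpr: a relation-only verb
    participant: Optional[str] = None  # P captured by a Vcp
    feature: Optional[str] = None      # F captured by a Vcf

# Illustrative lexicon entries
LEXICON = {
    "butter": Decomposition(primitive="put", participant="butter"),  # Vcp = Vpr + P
    "drink":  Decomposition(primitive="move", feature="liquid"),     # Vcf = Vpr + F
}

def canonicalize(verb: str, roles: dict) -> tuple:
    """Unfold a captured participant into an explicit semantic role."""
    d = LEXICON.get(verb)
    if d is None:                      # already a primitive verb
        return verb, roles
    if d.participant is not None:
        roles = {**roles, "Patient": d.participant}
    return d.primitive, roles

# 'John buttered the bread' unfolds to the same structure as
# 'John put butter on the bread': [Agent + Patient + Goal]
print(canonicalize("butter", {"Agent": "John", "Goal": "bread"}))
```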

Keywords: decomposition, labeling, primitive verbs, semantic roles

Procedia PDF Downloads 366
1876 Variables, Annotation, and Metadata Schemas for Early Modern Greek

Authors: Eleni Karantzola, Athanasios Karasimos, Vasiliki Makri, Ioanna Skouvara

Abstract:

Historical linguistics unveils the historical depth of languages and traces variation and change by analyzing linguistic variables over time. This field of linguistics usually deals with a closed data set that can only be expanded by the (re)discovery of previously unknown manuscripts or editions. In some cases, it is possible to use (almost) the entire closed corpus of a language for research, as is the case with the Thesaurus Linguae Graecae digital library for Ancient Greek, which contains most of the extant ancient Greek literature. However, concerning ‘dynamic’ periods when the production and circulation of texts in printed as well as manuscript form have not been fully mapped, representative samples and corpora of texts are needed. Such material and tools are utterly lacking for Early Modern Greek (16th-18th c.). In this study, the principles of the creation of EMoGReC, a pilot representative corpus of Early Modern Greek (16th-18th c.) are presented. Its design follows the fundamental principles of historical corpora. The selection of texts aims to create a representative and balanced corpus that gives insight into diachronic, diatopic and diaphasic variation. The pilot sample includes data derived from fully machine-readable vernacular texts, which belong to 4-5 different textual genres and come from different geographical areas. We develop a hierarchical linguistic annotation scheme, further customized to fit the characteristics of our text corpus. Regarding variables and their variants, we use as a point of departure the bundle of twenty-four features (or categories of features) for prose demotic texts of the 16th c. Tags are introduced bearing the variants [+old/archaic] or [+novel/vernacular]. On the other hand, further phenomena that are underway (cf. The Cambridge Grammar of Medieval and Early Modern Greek) are selected for tagging. The annotated texts are enriched with metalinguistic and sociolinguistic metadata to provide a testbed for the development of the first comprehensive set of tools for the Greek language of that period. Based on a relational management system with interconnection of data, annotations, and their metadata, the EMoGReC database aspires to join a state-of-the-art technological ecosystem for the research of observed language variation and change using advanced computational approaches.
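To illustrate the kind of record such an annotation scheme produces, here is one possible shape for a single annotated token (our illustration, not the project's actual schema; the token, variable name and provenance values are hypothetical).

```python
# One annotated token: the linguistic variable, its tagged variant, and the
# sociolinguistic metadata needed for diachronic, diatopic and diaphasic queries.
token_record = {
    "token": "egrapsasin",               # hypothetical attested form
    "variable": "3pl_past_ending",       # hypothetical variable name
    "variant_tag": "+novel/vernacular",  # vs. "+old/archaic"
    "metadata": {"date": "17th c.", "region": "Crete", "genre": "prose"},
}
print(token_record["variant_tag"])
```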

Keywords: early modern Greek, variation and change, representative corpus, diachronic variables

Procedia PDF Downloads 67
1875 Production of Oral Vowels by Chinese Learners of Portuguese: Problems and Didactic Implications

Authors: Adelina Castelo

Abstract:

The increasing number of learners of Portuguese as a Foreign Language in China justifies the need to define the phonetic profile of these learners and to design didactic materials adjusted to their specific pronunciation problems. Different aspects of this topic have been studied, but the production of oral vowels still needs to be investigated. This study aims: (i) to identify the problems Chinese learners of Portuguese experience in the pronunciation of oral vowels; (ii) to discuss the didactic implications drawn from those problems. The participants were eight native speakers of Mandarin Chinese who had been learning Portuguese in college for almost a year. They named pictured objects, and their oral productions were recorded and phonetically transcribed. The selection of the objects to name took into account several linguistic variables (e.g. stress pattern, syllable structure, presence of the Portuguese oral vowels in different word positions according to stress location). The results are analysed in two ways: the impact of the linguistic variables on the success rate of vowel production, and the replacement strategies used in non-target productions. Both analyses show that the Chinese learners of Portuguese (i) have significantly more difficulties with the mid vowels as well as the high central vowel and (ii) do not master the vowel height feature. These findings contribute to defining the phonetic profile of these learners in terms of oral vowel production. Besides, they have important didactic implications for teaching pronunciation to these specific learners. Those implications are discussed and exemplified.

Keywords: Chinese learners, learners’ phonetic profile, linguistic variables, Portuguese as foreign language, production data, pronunciation teaching, oral vowels

Procedia PDF Downloads 223
1874 Navigating Complex Communication Dynamics in Qualitative Research

Authors: Kimberly M. Cacciato, Steven J. Singer, Allison R. Shapiro, Julianna F. Kamenakis

Abstract:

This study examines the dynamics of communication among researchers and participants who have various levels of hearing, use multiple languages, have various disabilities, and who come from different social strata. This qualitative methodological study focuses on the strategies employed in an ethnographic research study examining the communication choices of six sets of parents who have Deaf-Disabled children. The participating families varied in their communication strategies and preferences, including the use of American Sign Language (ASL), visual-gestural communication, multiple spoken languages, and pidgin forms of each of these. The research team consisted of two undergraduate students proficient in ASL and a Deaf principal investigator (PI) who uses ASL and speech as his main modes of communication. A third Hard-of-Hearing undergraduate student fluent in ASL served as an objective facilitator of the data analysis. The team created reflexive journals by audio recording, free writing, and responding to team-generated prompts. They discussed interactions between the members of the research team, their evolving relationships, and various social and linguistic power differentials. The researchers reflected on communication during data collection, their experiences with one another, and their experiences with the participating families. Reflexive journals totaled over 150 pages. The outside research assistant reviewed the journals and developed follow-up open-ended questions and prompts to further enrich the data. The PI and outside research assistant used NVivo qualitative research software to conduct open inductive coding of the data. They chunked the data individually into broad categories through multiple readings and recognized recurring concepts. They compared their categories, discussed them, and decided which they would develop. The researchers continued to read, reduce, and define the categories until they were able to develop themes from the data. The research team found that the various communication backgrounds and skills present greatly influenced the dynamics between the members of the research team and with the participants of the study. Specifically, the following themes emerged: (1) students as communication facilitators and interpreters as barriers to natural interaction, (2) varied language use simultaneously complicated and enriched data collection, and (3) ASL proficiency and professional position resulted in a social hierarchy among researchers and participants. In the discussion, the researchers reflected on their backgrounds and internal biases in analyzing the data and on how social norms or expectations affected their perceptions as they wrote their journals. Through this study, the research team found that communication and language skills require significant consideration when working with multiple and complex communication modes. The researchers had to continually assess and adjust their data collection methods to meet the communication needs of the team members and participants. In doing so, the researchers aimed to create an accessible research setting that yielded rich data but learned that this often required compromises from one or more of the research constituents.

Keywords: American Sign Language, complex communication, deaf-disabled, methodology

Procedia PDF Downloads 118
1873 Reverse Engineering Genius: Through the Lens of World Language Collaborations

Authors: Cynthia Briggs, Kimberly Gerardi

Abstract:

Over the past six years, the authors have been working together on World Language Collaborations in the Middle School French Program at St. Luke's School in New Canaan, Connecticut, USA. Author 2 brings design expertise to the projects, and both teachers have utilized the fabrication lab, emerging technologies, and collaboration with students. Each year, Author 1 proposes a project scope, and her students are challenged to design and engineer a signature project. Both partners have improved the iterative process to ensure deeper learning and sustained student inquiry. The projects range from a 1:32 scale model of the Eiffel Tower that was CNC routed to a fully functional jukebox that plays francophone music, lights up, and can hold up to one thousand songs powered by Raspberry Pi. The most recent project is a Fragrance Marketplace, culminating with a pop-up store for the entire community to discover. Each student will learn the history of fragrance and the chemistry behind making essential oils. Students then create a unique brand, marketing strategy, and concept for their signature fragrance. They are further tasked to use the industrial design process (bottling, packaging, and creating a brand name) to finalize their product for the public Marketplace. Sometimes, these dynamic projects require maintenance and updates. For example, our wall-mounted, three-foot francophone clock is constantly changing. The most recent iteration uses ChatGPT to program the Arduino to reconcile the real-time clock shield and keep perfect time as each hour passes. The lights, motors, and sounds from the clock are authentic to each region, represented with laser-cut embellishments. Inspired by Michel Parmigiani, the history of Swiss watch-making, and the precision of time instruments, we aim for perfection with each passing minute. The authors aim to share exemplary work that is possible with students of all ages. We implemented the reverse engineering process to focus on student outcomes and to refine our collaborative process. The products that our students create are prime examples of how the design engineering process is applicable across disciplines. The authors firmly believe that the past and present of world cultures inspire innovation.

Keywords: collaboration, design thinking, emerging technologies, world language

Procedia PDF Downloads 43
1872 The Digital Divide: Examining the Use and Access to E-Health Based Technologies by Millennials and Older Adults

Authors: Delana Theiventhiran, Wally J. Bartfay

Abstract:

Background and Significance: As the Internet becomes the epitome of modern communications, there are many pragmatic reasons why the digital divide matters for accessing and using e-health based technologies. With the rise of technology usage globally, older adults may not be as familiar and comfortable with technology and are thus put at a disadvantage compared to other generations, such as millennials, when examining and using e-health based platforms and technologies. Currently, little is known about how older adults and millennials access and use e-health based technologies. Methods: A systematic review of the literature was undertaken employing three databases: (i) PubMed, (ii) ERIC, and (iii) CINAHL, with the search term 'digital divide and generations' to identify potential articles. To extract the required data from the studies, a data abstraction tool was created to obtain the following information: (a) author, (b) year of publication, (c) sample size, (d) country of origin, (e) design/methods, (f) major findings/outcomes obtained. Inclusion criteria were publication dates between Jan 2009 and Aug 2018, written in the English language, target populations of older adults aged 65 and above and millennials, and peer-reviewed quantitative studies only. Major Findings: PubMed provided 505 potential articles, of which 23 met the inclusion criteria. ERIC provided 53 potential articles, of which none met the criteria following data extraction. CINAHL provided 14 potential articles, of which eight met the criteria following data extraction. Conclusion: Practically speaking, identifying how newer e-health based technologies can be integrated into society, and why there is a gap with digital technology, will help reduce the impact on generations and individuals who are not as familiar with technology and Internet usage. The largest concern of all is how to prepare older adults for new and emerging e-health technologies. Currently, there is a dearth of literature in this area because it is a newer area of research, and the benefits and consequences of technology integrated into daily living are only beginning to be investigated. Several of the articles (N=11) indicated that age is one of the larger factors contributing to the digital divide. Similarly, many of the examined articles (N=5) identified privacy concerns as one of the main deterrents of technology usage for elderly individuals aged 65 and above. The older adult generation feels that privacy is a major concern, especially in regard to how data are collected, used, and possibly sold to third parties by various websites. Additionally, access to technology, the Internet, and infrastructure also plays a large part in the way individuals are able to receive and use information. Lastly, a change in the way that healthcare is currently used, received, and distributed would also help ensure that no generation is left behind in a technologically advanced society.

Keywords: digital divide, e-health, millennials, older adults

Procedia PDF Downloads 172
1871 Little Retrieval Augmented Generation for Named Entity Recognition: Toward Lightweight, Generative, Named Entity Recognition Through Prompt Engineering, and Multi-Level Retrieval Augmented Generation

Authors: Sean W. T. Bayly, Daniel Glover, Don Horrell, Simon Horrocks, Barnes Callum, Stuart Gibson, Mac Misuira

Abstract:

We assess the suitability of recent ~7B-parameter instruction-tuned language models (Mistral-v0.3, Llama-3, and Phi-3) for Generative Named Entity Recognition (GNER). Our proposed Multi-Level Information Retrieval method achieves notable improvements over fine-tuned entity-level and sentence-level methods. We consider recent developments at the crossroads of prompt engineering and Retrieval Augmented Generation (RAG), such as EmotionPrompt. We conclude that language models directed toward this task are highly capable of distinguishing between positive classes (precision). However, smaller models seem to struggle to find all entities (recall). Poorly defined classes such as "Miscellaneous" exhibit substantial declines in performance, likely due to the ambiguity they introduce into the prompt. This is partially resolved through a self-verification method using engineered prompts that contain knowledge of the stricter class definitions, particularly in areas where class boundaries are in danger of overlapping, such as the conflation between the location "Britain" and the nationality "British". Finally, we explore correlations between model performance on the GNER task and performance on relevant academic benchmarks.
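The self-verification step can be pictured as a second, engineered prompt that asks the model to re-check each extracted entity against stricter class definitions. The sketch below is our illustration of that idea, not the authors' actual prompt wording.

```python
# Hypothetical verification prompt: the definitions encode the stricter class
# boundaries (e.g. LOC vs. MISC for "Britain" vs. "British").
VERIFY_TEMPLATE = """You are verifying named-entity labels.
Definitions:
- LOC: a geographic place (e.g. "Britain").
- MISC: nationalities and adjectival forms of places (e.g. "British"), events, works.
Sentence: {sentence}
Candidate: "{span}" labelled {label}.
Does the label follow the definitions above? Answer Yes or No, then give the corrected label."""

def build_verification_prompt(sentence: str, span: str, label: str) -> str:
    return VERIFY_TEMPLATE.format(sentence=sentence, span=span, label=label)

# The generator's candidate is then accepted, relabelled, or dropped based on
# the verifier's answer.
print(build_verification_prompt("British exports rose.", "British", "LOC"))
```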

Keywords: generative named entity recognition, information retrieval, lightweight artificial intelligence, prompt engineering, personal information identification, retrieval augmented generation, self verification

Procedia PDF Downloads 46
1870 Machine Learning Strategies for Data Extraction from Unstructured Documents in Financial Services

Authors: Delphine Vendryes, Dushyanth Sekhar, Baojia Tong, Matthew Theisen, Chester Curme

Abstract:

Much of the data that inform the decisions of governments, corporations and individuals are harvested from unstructured documents. Data extraction is defined here as a process that turns non-machine-readable information into a machine-readable format that can be stored, for instance, in a database. In financial services, introducing more automation in data extraction pipelines is a major challenge. Information sought by financial data consumers is often buried within vast bodies of unstructured documents, which have historically required thorough manual extraction. Automated solutions provide faster access to non-machine-readable datasets, in a context where untimely information quickly becomes irrelevant. Data quality standards cannot be compromised, so automation requires high data integrity. This multifaceted task is broken down into smaller steps: ingestion, table parsing (detection and structure recognition), text analysis (entity detection and disambiguation), schema-based record extraction, user feedback incorporation. Selected intermediary steps are phrased as machine learning problems. Solutions leveraging cutting-edge approaches from the fields of computer vision (e.g. table detection) and natural language processing (e.g. entity detection and disambiguation) are proposed.
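The staged decomposition described above can be pictured as a chain of separate, testable functions. In the sketch below the bodies are trivial placeholders, whereas in production each stage would wrap a model (a table detector, an entity disambiguator, and so on); the schema fields are hypothetical.

```python
def ingest(path: str) -> bytes:
    with open(path, "rb") as fh:   # load the raw, unstructured document
        return fh.read()

def parse_tables(doc: bytes) -> list:
    return []                      # table detection + structure recognition (computer vision)

def analyze_text(doc: bytes) -> list:
    return []                      # entity detection and disambiguation (NLP)

def extract_records(tables: list, entities: list, schema: tuple) -> list:
    # map detected tables and entities onto the target schema
    return [{field: None for field in schema}]

def run_pipeline(path: str, schema=("issuer", "maturity", "coupon")) -> list:
    doc = ingest(path)
    records = extract_records(parse_tables(doc), analyze_text(doc), schema)
    return records                 # user feedback would be incorporated downstream
```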

Keywords: computer vision, entity recognition, finance, information retrieval, machine learning, natural language processing

Procedia PDF Downloads 113
1869 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark

Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos

Abstract:

This paper presents an investigation of the performance impacts of varying five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: The algorithm's performance depends on multiple factors, and knowing the effects of each factor beforehand becomes especially critical when hardware is priced by time slice in cloud environments. Objectives: To explain the functional relationship between factors and performance, and to develop linear predictor models for time and cost. Methods: We applied the solid statistical principles of Design of Experiments (DoE), particularly a randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measurement of each factor's impact. Results: Our findings include prediction models and show some non-intuitive results: cores have a small influence on total execution time, memory and disks are neutral with respect to it, and data input scale has a non-significant impact on costs even though it notably impacts execution time.
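The modelling step boils down to regressing the measured metric on coded (-1/+1) factor levels from the two-level design; the magnitude of each coefficient then ranks that factor's impact. The design matrix and timings below are made up for illustration, not the paper's data.

```python
import numpy as np

# Columns: data_size, nodes, cores, memory, disks (coded -1/+1 levels)
X = np.array([
    [-1, -1, -1, -1, -1],
    [ 1, -1, -1,  1,  1],
    [-1,  1,  1, -1,  1],
    [ 1,  1,  1,  1, -1],
    [-1, -1,  1,  1, -1],
    [ 1,  1, -1, -1,  1],
])
y = np.array([420.0, 610.0, 260.0, 390.0, 330.0, 540.0])  # execution times (s), illustrative

A = np.column_stack([np.ones(len(X)), X])        # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
names = ["intercept", "data_size", "nodes", "cores", "memory", "disks"]
for name, c in zip(names, coef):
    print(f"{name:10s} {c:8.1f}")                # larger |coefficient| = larger impact
```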

Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark

Procedia PDF Downloads 120
1868 Hand Gesture Recognition for Sign Language: A New Higher Order Fuzzy HMM Approach

Authors: Saad M. Darwish, Magda M. Madbouly, Murad B. Khorsheed

Abstract:

Sign Languages (SL) are the most accomplished forms of gestural communication, so their automatic analysis is a real challenge, one that interestingly extends to their lexical and syntactic levels of organization. Hidden Markov Models (HMMs) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for the visual recognition of complex, structured hand gestures such as those found in sign language. In this paper, several results concerning static hand gesture recognition using an algorithm based on Type-2 Fuzzy HMMs (T2FHMM) are presented. The features used as observables in the training as well as in the recognition phases are based on Singular Value Decomposition (SVD). SVD extends eigendecomposition to non-square matrices and is used here to reduce multi-attribute hand gesture data to feature vectors; it optimally exposes the geometric structure of a matrix. In our approach, we replace the basic HMM arithmetic operators with adequate Type-2 fuzzy operators that permit us to relax the additive constraint of probability measures. T2FHMMs are therefore able to handle both the random and the fuzzy uncertainties that exist universally in sequential data. Experimental results show that T2FHMMs can effectively handle noise and dialect uncertainties in hand signals and achieve better classification performance than classical HMMs. The recognition rate of the proposed system is 100% for uniform hand images and 86.21% for cluttered hand images.
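The SVD feature-extraction step can be sketched in a few lines: a hand image, treated as a non-square matrix, is reduced to a fixed-length descriptor of its leading singular values. The image here is synthetic and the normalization choice is our assumption, not necessarily the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((120, 160))          # stand-in for a segmented hand image

def svd_features(img, k=10):
    # Singular values expose the geometric structure of the matrix;
    # the top-k values form a compact descriptor of the gesture.
    s = np.linalg.svd(img, compute_uv=False)   # values sorted in descending order
    return s[:k] / s[0]                        # normalize by the largest for scale invariance

print(svd_features(image))   # feature vector fed to the (fuzzy) HMM as observables
```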

Keywords: hand gesture recognition, hand detection, type-2 fuzzy logic, hidden Markov Model

Procedia PDF Downloads 462
1867 Coherence and Cohesion in IELTS Academic Writing: Helping Students to Improve

Authors: Rory Patrick O'Kane

Abstract:

More universities and third level institutions now require at least an IELTS Band 6 for entry into courses of study for non-native speakers of English. This presentation focuses on IELTS Academic Writing Tasks 1 and 2 and in particular on the marking criterion of Coherence and Cohesion. A requirement for candidates aiming at Band 6 and above is that they produce answers which show a clear, overall progression of information and ideas and which use cohesive devices effectively. With this in mind, the presenter will examine what exactly is meant by coherence and cohesion and various strategies which can be used to assist students in improving their scores in this area. A number of classroom teaching ideas will be introduced, and participants will have the opportunity to compare and discuss sample answers written by candidates for this examination with a specific focus on coherence and cohesion. Intended audience: Teachers of IELTS Academic Writing.

Keywords: coherence, cohesion, IELTS, strategies

Procedia PDF Downloads 270
1866 Comparing Stability Index MAPping (SINMAP) Landslide Susceptibility Models in the Río La Carbonera, Southeast Flank of Pico de Orizaba Volcano, Mexico

Authors: Gabriel Legorreta Paulin, Marcus I. Bursik, Lilia Arana Salinas, Fernando Aceves Quesada

Abstract:

In volcanic environments, landslides and debris flows occur continually along the stream systems of large stratovolcanoes. This is the case on Pico de Orizaba volcano, the highest mountain in Mexico. The volcano has great potential to impact and damage human settlements and economic activities through landslides. People living along the lower valleys of Pico de Orizaba volcano are under continuous threat from the coalescence of upstream landslide sediments, which increases the destructive power of debris flows. These debris flows not only produce floods but also cause the loss of lives and property. Despite the importance of assessing such processes, there are few landslide inventory maps and landslide susceptibility assessments. As a result, no assessment of landslide susceptibility models has been conducted in Mexico to evaluate their advantages and disadvantages. In this study, a comprehensive assessment of landslide susceptibility models using GIS technology is carried out on the SE flank of Pico de Orizaba volcano. A detailed multi-temporal landslide inventory map of the watershed is used as the framework for the quantitative comparison of two landslide susceptibility maps. The maps are created 1) with the Stability Index MAPping (SINMAP) model using default geotechnical parameters and 2) using the geotechnical properties of volcanic soils obtained in the field. SINMAP combines the factor of safety derived from the infinite slope stability model with a hydrologic model to produce the susceptibility map. It has been claimed that SINMAP analysis is reasonably successful in defining areas that intuitively appear to be susceptible to landsliding in regions with sparse information. The resulting susceptibility maps are validated by comparing them with the inventory map under the LOGISNET system, which provides comparison tools based on a histogram and a contingency table. The results of the experiment establish how the individual models predict landslide locations, along with their advantages and limitations. They also show that although the model tends to improve with the use of calibrated field data, the landslide susceptibility map does not perfectly represent existing landslides.
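The deterministic core that SINMAP builds on is the infinite-slope factor of safety. The sketch below uses a common dimensionless form of that model (a textbook formulation; SINMAP's exact parameterization and the parameter values here are illustrative, which is precisely where the default-versus-field-calibrated geotechnical inputs enter).

```python
import numpy as np

def factor_of_safety(slope_rad, wetness, C=0.25, phi_deg=35.0, r=0.5):
    """Infinite-slope FS. C: dimensionless combined cohesion; phi: friction
    angle; wetness: relative saturation h/z; r: water-to-soil density ratio."""
    tan_phi = np.tan(np.radians(phi_deg))
    return (C + np.cos(slope_rad) * (1.0 - wetness * r) * tan_phi) / np.sin(slope_rad)

for deg in (20, 30, 40):
    fs = factor_of_safety(np.radians(deg), wetness=0.8)
    print(f"slope {deg} deg: FS = {fs:.2f}  ({'stable' if fs > 1 else 'unstable'})")
```

SINMAP couples this FS with a steady-state hydrologic model of the wetness term and maps the result to a stability index over the DEM, which is the surface the inventory map is compared against.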

Keywords: GIS, landslide, modeling, LOGISNET, SINMAP

Procedia PDF Downloads 313
1865 How Do L1 Teachers Assess Haitian Immigrant High School Students in Chile?

Authors: Gloria Toledo, Andrea Lizasoain, Leonardo Mena

Abstract:

Immigration has increased greatly in Chile in the last 20 years. About 6.6% of our population is foreign-born, of which 14.3% is Haitian. Haitian immigrants are between 15 and 29 years old and have come to Chile escaping a social crisis, believing that education and work will help them do better in life. Rates of Haitian students in the Chilean school system have therefore also increased: 3,121 Haitian students were enrolled in 2017. This is a challenge for the public school, which takes in young people who must face schooling, social immersion, and the learning of a second language simultaneously. The linguistic barrier affects both the students’ and the teachers’ adaptation process, which has an impact on the students’ academic performance and their consequent acquisition of Spanish. In order to explore students’ academic performance and interlanguage development, we examined how L1 teachers assess Haitian high school students’ written production in Spanish. For this purpose, teachers were asked to use a specially designed grid to assess the correctness, accommodation, lexical and analytical complexity, organization, and fluency of both Haitian and Chilean students’ texts. In parallel, the texts were approached from an error analysis perspective. Results from the grids and the error analysis were then compared. On the one hand, we found that teachers give very little feedback to students apart from scores and grades, which does not contribute to the development of the second language. On the other hand, the error analysis showed that Haitian students are in a dynamic process of acquiring Spanish, which could be enhanced if L1 teachers were aware of the process of interlanguage development.

Keywords: assessment, error analysis, grid, immigration, Spanish acquisition, writing

Procedia PDF Downloads 136
1864 How Validated Nursing Workload and Patient Acuity Data Can Promote Sustained Change and Improvements within District Health Boards: The New Zealand Experience

Authors: Rebecca Oakes

Abstract:

In the New Zealand public health system, work has been taking place to use electronic systems to convey data from the ‘floor to the board’ that make patient needs, and therefore nursing work, visible. For nurses, these developments in health information technology put us in a very new and exciting position of being able to articulate the work of nursing through a language understood at all levels of an organisation: the language of acuity. Nurses increasingly have a considerable stake in patient acuity data. Patient acuity systems, when used well, can assist greatly in demonstrating how much work is required, the type of work, and when it will be required. The New Zealand Safe Staffing Unit is supporting New Zealand nurses to create a culture of shared governance in which nursing data inform policies, staffing methodologies and forecasting within their organisations. Assisting organisations to understand their acuity data, strengthening user confidence in electronic patient acuity systems, and ensuring nursing and midwifery workload is accurately reflected are critical to the success of the safe staffing programme. Through an acuity tool, nurses and midwives have the capacity to become key informants for organisational planning. Quality patient care, best use of health resources and a quality work environment are essential components of a safe, resilient and well-resourced organisation, and nurses are the key source of this information. In New Zealand, a national-level approach is paving the way for significant changes in the understanding and use of patient acuity and nursing workload information.

Keywords: nursing workload, patient acuity, safe staffing, New Zealand

Procedia PDF Downloads 382
1863 Programming without Code: An Approach and Environment to Conditions-On-Data Programming

Authors: Philippe Larvet

Abstract:

This paper presents the concept of an object-based programming language in which tests (if... then... else) and control structures (while, repeat, for...) disappear and are replaced by conditions on data. In keeping with the object paradigm, data are still embedded inside objects as variable-value couples, but object methods are expressed in the form of logical propositions (‘conditions on data’, or CODs). For instance: variable1 = value1 AND variable2 > value2 => variable3 = value3. To implement this approach, a central inference engine examines objects one after another, collecting all the CODs of each object. CODs are treated as rules in a rule-based system: the left part of each proposition (the left side of the ‘=>’ sign) is the premise, and the right part is the conclusion. Premises are evaluated and conclusions are fired. A fired conclusion modifies the variable-value couples of the object, and the engine moves on to examine the next object. The paper develops the principles of writing CODs instead of complex algorithms. Through samples, it also presents several hints for implementing a simple mechanism able to process this ‘COD language’. The proposed approach can be used in the context of simulation, process control, industrial systems validation, etc. By writing simple and rigorous conditions on data instead of using classical, long-to-learn languages, engineers and specialists can easily simulate and validate the functioning of complex systems.
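A minimal sketch of such an engine follows: objects carry variable-value couples plus CODs (premise/conclusion pairs), and the engine loops over the objects, firing conclusions until nothing changes. This is our illustration of the mechanism the paper describes, not the author's actual implementation.

```python
class CodObject:
    def __init__(self, data, rules):
        self.data = data      # variable-value couples
        self.rules = rules    # list of (premise, conclusion) callables on data

def run_engine(objects, max_passes=10):
    for _ in range(max_passes):
        fired = False
        for obj in objects:
            for premise, conclusion in obj.rules:
                if premise(obj.data):
                    fired |= conclusion(obj.data)  # conclusion reports whether data changed
        if not fired:
            return                                 # fixpoint: no rule changed anything

def set_value(d, var, val):
    """Fire a conclusion: assign and report whether anything changed."""
    changed = d.get(var) != val
    d[var] = val
    return changed

# Example COD: state = "open" AND level > 90 => command = "close"
tank = CodObject({"level": 95, "state": "open", "command": None}, rules=[
    (lambda d: d["state"] == "open" and d["level"] > 90,
     lambda d: set_value(d, "command", "close")),
])
run_engine([tank])
print(tank.data)   # {'level': 95, 'state': 'open', 'command': 'close'}
```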

Keywords: conditions on data, logical proposition, programming without code, object-oriented programming, system simulation, system validation

Procedia PDF Downloads 221
1862 Finding the Right Regulatory Path for Islamic Banking

Authors: Meysam Saidi

Abstract:

While the specific externalities of Islamic banking and the regulatory measures it requires are fairly uncertain, the business is growing across the world. Unofficial data indicate that the Islamic finance market is growing at an annual rate of 15% and has reached $1.3 trillion in size. This trend is associated with the inherent systemic connection of Islamic financial institutions to other entities and different sectors of economies. Islamic banking has been the subject of market development policies in major economies, most notably the UK. This trend highlights the need to identify the distinct risk features of Islamic banking and to craft customized regulatory measures. So far there has not been a significant systemic crisis in this market, which can be attributed to its distinct nature. However, the significant growth and spread of its products worldwide necessitate an in-depth study of its nature as the basis for congruent, customized regulatory measures. In the post-financial-crisis era, some market analyses and reports suggested that Islamic banks weathered the crisis fairly well. As far as heavily blamed conventional financial products such as subprime mortgage-backed securities and speculative credit default swaps are concerned, the immunity claim can be considered true, as Islamic financial institutions were not directly exposed to such products. Nevertheless, similar to the experience of the conventional banking industry, it may be only a matter of time before Islamic banks face failures specific to the nature of their business. Drawing on the experience of conventional banking regulation and identifying those peculiarities of Islamic banking that need a customized regulatory approach can help prevent major failures. Frank Knight stated that “We perceive the world before we react to it, and we react not to what we perceive, but always to what we infer”. The debate over congruent Islamic banking regulations might not be an exception to Frank Knight’s statement, but I will try to base my discussion on concrete evidence. This paper first analyzes both the theoretical and the actual features of Islamic banking in order to ascertain its peculiarities in terms of market stability and other externalities. Next, the paper discusses distinct features of Islamic financial transactions and banking that might require customized regulatory measures. Finally, the paper explores how a more transparent path for Islamic banking regulation can be drawn.

Keywords: Islamic banking, regulation, risks, capital requirements, customer protection, financial stability

Procedia PDF Downloads 409
1861 Pros and Cons of Teaching/Learning Online during COVID-19: English Department at Tahri Muhammed University of Bechar as a Case Study

Authors: Fatiha Guessabi

Abstract:

Students of the Tahri Muhammed University of Bechar shifted to virtual instruction on e-learning platforms when the lockdown started due to the coronavirus. This paper aims to explore the advantages and inconveniences of online learning and teaching in EFL classes at Tahri Mohammed University. For this investigation, a questionnaire was addressed to EFL students, and an interview was arranged with EFL teachers. Data were obtained from nine teachers and 70 students. The investigation shows that some of the most widely applied educational technologies and applications can make online EFL classes genuinely engaging, and that EFL classes became more interactive as a result. Although learners express positive viewpoints about online learning and teaching, they prefer to learn in the classroom.

Keywords: advantages, disadvantages, COVID19, EFL, online learning/teaching, university of Bechar

Procedia PDF Downloads 164