Search results for: lexical retrieval
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 538

148 Arabic Light Word Analyser: Roles with Deep Learning Approach

Authors: Mohammed Abu Shquier

Abstract:

This paper introduces a word segmentation method using the novel BP-LSTM-CRF architecture for processing semantic output training. The objective of web morphological analysis tools is to link a formal morpho-syntactic description to a lemma, along with morpho-syntactic information, a vocalized form, a vocalized analysis with morpho-syntactic information, and a list of paradigms. A key objective is to continuously enhance the proposed system through an inductive learning approach that considers semantic influences. The system is currently under construction and development based on data-driven learning. To evaluate the tool, an experiment on homograph analysis was conducted. The tool also addresses the assumption of deep binary segmentation hypotheses, the arbitrary choice of trigram or n-gram continuation probabilities, language limitations, and morphology for both Modern Standard Arabic (MSA) and Dialectal Arabic (DA), all of which justify updating this system. Most Arabic word analysis systems are based on the phonotactic morpho-syntactic analysis of a word transmitted using lexical rules, which are mainly used in MENA language technology tools, without taking into account contextual or semantic morphological implications. Therefore, it is necessary to have an automatic analysis tool that takes into account the word sense and not only the morpho-syntactic category. Moreover, most existing systems are based on statistical/stochastic models. These stochastic models, such as HMMs, have shown their effectiveness in different NLP applications: part-of-speech tagging, machine translation, speech recognition, etc. As an extension, we focus on language modeling using a Recurrent Neural Network (RNN); given that morphological analysis coverage was very low for dialectal Arabic, it is important to investigate how dialectal data influence the accuracy of these approaches by developing dialectal morphological processing tools, and to show that accounting for dialectal variability can improve analysis.
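As a concrete illustration of the sequence-labelling setup described above, the sketch below builds a minimal bidirectional LSTM tagger that assigns a binary segmentation tag to each character of a word. It is only a simplified stand-in for the paper's BP-LSTM-CRF architecture: the CRF decoding layer is omitted, and the vocabulary size, dimensions, and toy batch are illustrative assumptions.

```python
# A minimal sketch (not the authors' BP-LSTM-CRF): a BiLSTM tagger that labels each
# character of a word with a binary segmentation tag (B = begins a segment, I = inside).
# The CRF layer and real training data are omitted; sizes below are assumptions.
import torch
import torch.nn as nn

class BiLSTMSegmenter(nn.Module):
    def __init__(self, vocab_size, n_tags=2, emb_dim=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)     # per-character tag scores

    def forward(self, char_ids):                     # char_ids: (batch, seq_len)
        h, _ = self.lstm(self.emb(char_ids))
        return self.out(h)                           # (batch, seq_len, n_tags)

model = BiLSTMSegmenter(vocab_size=100)
chars = torch.randint(1, 100, (8, 12))               # toy batch: 8 "words", 12 characters each
tags = torch.randint(0, 2, (8, 12))                  # toy gold B/I tags
logits = model(chars)
loss = nn.functional.cross_entropy(logits.reshape(-1, 2), tags.reshape(-1))
loss.backward()                                      # one backpropagation step of a training loop
```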

Keywords: NLP, DL, ML, analyser, MSA, RNN, CNN

Procedia PDF Downloads 8
147 Deep Well-Grounded Magnetite Anode Chains Retrieval and Installation for Raslanuf Complex Impressed Current Cathodic Protection System Rectification

Authors: Mohamed Ahmed Khalil

Abstract:

A number of deep well anode ground beds (GBs) were retrieved because their anode chains were no longer operating. New identical magnetite anode chains (MACs) were installed in the Raslanuf complex impressed current cathodic protection (ICCP) system, distributed across different plants (utility, ethylene and polyethylene). All problems associated with the retrieval and installation of MACs have been discussed, rectified and presented. All severely corroded GB-associated wellhead casings were maintained and/or replaced by newly fabricated and modified ones. The main cause of the wellhead casings' severe internal corrosion is discussed, and the remedial action taken to prevent future corrosion problems is presented. All GB-connected anode junction boxes (AJBs) and shunts were closely inspected and maintained, and necessary replacements and/or modifications were carried out on the shunts. All damaged GB concrete foundations (CF) have been inspected and completely replaced. All GB-associated Transformer-Rectifier Units (TRUs) were subjected to thorough inspection, and necessary maintenance was performed on each individual TRU. After completion of all MAC and TRU maintenance activities, each cathodic protection station (CPS) was put back into operation; alternating current (AC), direct current (DC), voltage and structure-to-soil potential (S/P) measurements were conducted and recorded, and all obtained test results are presented. DC current outputs were adjusted, and the DC current output of each MAC was recorded at each GB AJB.

Keywords: magnetite anodes, deep well, ground beds, cathodic protection, transformer rectifier, impressed current, junction boxes

Procedia PDF Downloads 89
146 Semantic Processing in Chinese: Category Effects, Task Effects and Age Effects

Authors: Yi-Hsiu Lai

Abstract:

The present study aimed to elucidate the nature of semantic processing in Chinese. Language and cognition related to the issue of aging are examined from the perspective of picture naming and category fluency tasks. Twenty Chinese-speaking adults (ranging from 25 to 45 years old) and twenty Chinese-speaking seniors (ranging from 65 to 75 years old) in Taiwan participated in this study. Each of them individually completed two tasks: a picture naming task and a category fluency task. Instruments for the naming task were sixty black-and-white pictures: thirty-five object and twenty-five action pictures. The category fluency task also consisted of two semantic categories – objects (or nouns) and actions (or verbs). Participants were asked to report as many items within a category as possible in one minute. Scores of action fluency and object fluency were the summation of correct responses in these two categories. Category effects (actions vs. objects) and age effects were examined in these tasks. Objects were further divided into two major types: living objects and non-living objects. Actions were also categorized into two major types: action verbs and process verbs. Reaction time to each picture/question was additionally calculated and analyzed. Results of the category fluency task indicated that the content of information in Chinese seniors was comparatively deteriorated, thus producing a smaller number of semantic-lexical items. A significant group difference was also found in the reaction time results. The category effect was significant for both Chinese adults and seniors in the semantic fluency task. Findings in the present study helped characterize the nature of semantic processing in Chinese-speaking adults and seniors and contributed to the issue of language and aging.

Keywords: semantic processing, aging, Chinese, category effects

Procedia PDF Downloads 335
145 A Prototype of an Information and Communication Technology Based Intervention Tool for Children with Dyslexia

Authors: Rajlakshmi Guha, Sajjad Ansari, Shazia Nasreen, Hirak Banerjee, Jiaul Paik

Abstract:

Dyslexia is a neurocognitive disorder, affecting around fifteen percent of the Indian population. The symptoms include difficulty in reading the alphabet, words, and sentences. This can occur at the phonemic or recognition level and may further affect lexical structures. Therapeutic intervention of dyslexic children post assessment is generally done by special educators and psychologists through one-on-one interaction. Considering the large number of children affected and the scarcity of experts, access to care is limited in India. Moreover, the unavailability of resources and of timely communication with caregivers adds to the problem of proper intervention. With the development of Educational Technology and its use in India, access to information and care has been improved in such a large and diverse country. In this context, this paper proposes an ICT-enabled home-based intervention program for dyslexic children which would support the child and provide an interactive interface between experts, parents, and students. The paper discusses the details of the database design and system layout of the program. In addition, it highlights the development of different technical aids required to build personalized Android applications for the Indian dyslexic population. These technical aids include speech database creation for children, an automatic speech recognition system, serious game development, and color-coded fonts. The paper also emphasizes the games developed to assist the dyslexic child with cognitive training, primarily for attention, working memory, and spatial reasoning. In addition, it discusses the specific elements of the interactive intervention tool that make it effective for home-based intervention of dyslexia.

Keywords: Android applications, cognitive training, dyslexia, intervention

Procedia PDF Downloads 271
144 Designing a Syllabus for an Academic Writing Course Instruction Based on Students' Needs

Authors: Nuur Insan Tangkelangi

Abstract:

The need for academic writing competence as a primary focus in higher education encourages universities around the world to provide academic writing courses that support students in the tasks requiring this competence. However, a pilot study conducted previously in one of the universities in Palopo, a city in South Sulawesi, revealed that even though the institution has provided academic writing courses, supported by some workshops related to academic writing and some supporting facilities on campus, the students still face difficulties in completing their assignments related to academic writing, particularly in writing their theses. The present study focuses on investigating the specific needs of the students in the same institution in terms of the competences required in academic writing. It also examines whether a syllabus exists and whether it accommodates the students' needs. A questionnaire and interviews were used to collect data from sixty sixth-semester students and two lecturers of the academic writing courses. The results reveal that the students need to learn all aspects of linguistic competence (language features, lexical phrases, academic language and vocabulary, and proper language) and some aspects of discourse competence (how to write an introduction, search for appropriate literature, design a research method, write coherent paragraphs, refer to sources, summarize and display data, and link sentences smoothly). Regarding the syllabus, it is found that the academic writing courses provided in the institution where this study takes place do not have a syllabus. This condition is different from other institutions, which provide syllabi for all courses. However, at the commencement of the course, the students and the lecturers negotiated their learning goals, the topics to be discussed, learning activities, and assessment criteria for the course. Therefore, even though a formal syllabus does not exist, the elements of a syllabus are there. The negotiation between the students and the lecturers contributes to the students' attitude toward the courses. The students are contented with the course, and they feel that their needs in academic writing have been accommodated. However, the students offer some suggestions for the next academic writing courses. Considering the results of this study, a syllabus is then proposed which is expected to accommodate the specific needs of students in that institution.

Keywords: Students' needs, academic writing, syllabus design for instruction, case study

Procedia PDF Downloads 182
143 Semantic Search Engine Based on Query Expansion with Google Ranking and Similarity Measures

Authors: Ahmad Shahin, Fadi Chakik, Walid Moudani

Abstract:

Our study elaborates a potential solution for a search engine that uses semantic technology to retrieve information and present it meaningfully. Semantic search engines are not used widely over the web, as the majority are still in beta or under construction. Current semantic search applications face many problems; the major one is analyzing and calculating the meaning of a query in order to retrieve relevant information. Another is the ontology-based index and its updates. Ranking results according to concept meaning and its relation to the query is another challenge. In this paper, we offer a light meta-engine (QESM) which uses Google search, and therefore Google’s index, adapting the returned results by adding multi-query expansion. The mission was to find a reliable ranking algorithm that involves semantics and uses concepts and meanings to rank results. At the beginning, the engine finds synonyms of each query term entered by the user based on a lexical database. Then, query expansion is applied to generate different semantically analogous sentences. These are generated randomly by combining the found synonyms and the original query terms. Our model suggests the use of semantic similarity measures between two sentences. Practically, we used this method to calculate the semantic similarity between each query and the description of each page’s content generated by Google. The generated sentences are sent to the Google engine one by one, and the results are re-ranked all together with the adapted ranking method (QESM). Finally, our system places Google pages with higher similarities at the top of the results. We conducted experiments with 6 different queries and observed that the QESM ranking altered the order of most of the pages originally returned by Google. For the tested queries, QESM frequently achieves better accuracy than Google; in the worst cases, it behaves like Google.
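The sketch below illustrates the QESM idea in miniature under simplifying assumptions: synonyms come from a small hand-made dictionary (the paper uses a lexical database), expanded queries are synonym/term combinations, and pages are re-ranked by the cosine similarity between their descriptions and the expanded queries.

```python
# A minimal sketch of query expansion plus similarity-based re-ranking.
# The synonym dictionary and the example pages are illustrative assumptions.
import itertools, math
from collections import Counter

SYNONYMS = {"car": ["automobile", "vehicle"], "cheap": ["inexpensive", "affordable"]}  # assumption

def expand(query):
    """Generate semantically analogous variants of the query."""
    options = [[t] + SYNONYMS.get(t, []) for t in query.split()]
    return [" ".join(combo) for combo in itertools.product(*options)]

def cosine(a, b):
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def rerank(query, pages):
    """pages: list of (url, description) as returned by the underlying engine."""
    variants = expand(query)
    scored = [(max(cosine(v, desc) for v in variants), url) for url, desc in pages]
    return [url for score, url in sorted(scored, reverse=True)]

print(rerank("cheap car", [("a.com", "affordable vehicle deals"), ("b.com", "fast cars review")]))
```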

Keywords: semantic search engine, Google indexing, query expansion, similarity measures

Procedia PDF Downloads 401
142 The Experience of Head Nurse: Phenomenological Research of Implementing Islamic Leadership Style in Syarif Hidayatullah Hospital

Authors: Jamaludin Tarkim, Yoga Teguh Guntara, Maftuhah

Abstract:

The Islamic leadership style is the model of leadership applied by the Prophet Muhammad SAW. It comprises Syura (deliberation), ‘Adl bil qisth (justice, with equality), and Hurriyyah al-kalam (freedom of expression), together with the values of Islam. This research aims to gain an overview of head nurses’ experience in implementing the Islamic leadership style. It is a qualitative study with a descriptive phenomenological design based on in-depth interviews. Participants were ward head nurses at Syarif Hidayatullah Hospital, selected purposively according to the principles of suitability (appropriateness) and sufficiency (adequacy). Data collection was conducted during June 2014. Data were collected as recorded in-depth interviews and analyzed with the Colaizzi method. This research identified four themes: Syura (deliberation); ‘Adl bil qisth (justice, with equality); Hurriyyah al-kalam (freedom of expression); and the values of Islam in the Islamic leadership style. The results provide a review of the head nurses’ experience in applying the Islamic leadership style at Syarif Hidayatullah Hospital: leadership during the process was already skilled, but its application is still not optimal. Further in-depth research is required to obtain more comprehensive results on head nurses’ experience of applying the Islamic leadership style; subsequent researchers may choose a wider and more complex scope in order to obtain more complete data.

Keywords: experience, Islamic leadership style, head nurse, nursing management

Procedia PDF Downloads 148
141 Detecting Hate Speech And Cyberbullying Using Natural Language Processing

Authors: Nádia Pereira, Paula Ferreira, Sofia Francisco, Sofia Oliveira, Sidclay Souza, Paula Paulino, Ana Margarida Veiga Simão

Abstract:

Social media has progressed into a platform for hate speech among its users, and thus, there is an increasing need to develop automatic detection classifiers of offense and conflicts to help decrease the prevalence of such incidents. Online communication can be used to intentionally harm someone, which is why such classifiers could be essential in social networks. A possible application of these classifiers is the automatic detection of cyberbullying. Even though identifying the aggressive language used in online interactions could be important to build cyberbullying datasets, there are other criteria that must be considered. Being able to capture language that is indicative of the intent to harm others in a specific context of online interaction is fundamental. Offense and hate speech may be the foundation of online conflicts, which have become common on social media and are an emergent research focus in machine learning and natural language processing. This study presents two Portuguese language offense-related datasets which serve as examples for future research and extend the study of the topic. The first is similar to other offense detection related datasets and is entitled the Aggressiveness dataset. The second is a novelty because of the use of the history of the interaction between users and is entitled the Conflicts/Attacks dataset. Both datasets were developed in different phases. Firstly, we performed a content analysis of verbal aggression witnessed by adolescents in situations of cyberbullying. Secondly, we computed frequency analyses from the previous phase to gather lexical and linguistic cues used to identify potentially aggressive conflicts and attacks which were posted on Twitter. Thirdly, thorough annotation of real tweets was performed by independent postgraduate educational psychologists with experience in cyberbullying research. Lastly, we benchmarked these datasets with several machine learning classifiers.
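As a minimal illustration of the benchmarking step, the sketch below trains and evaluates one baseline classifier on an annotated offense dataset. The file name, column names, and the TF-IDF plus logistic-regression pipeline are illustrative assumptions, not the classifiers actually benchmarked in the study.

```python
# A baseline benchmark sketch: TF-IDF features + logistic regression on labelled posts.
# "aggressiveness_dataset.csv" with columns "text" and "label" is an assumed layout.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.pipeline import make_pipeline

df = pd.read_csv("aggressiveness_dataset.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, stratify=df["label"], random_state=42)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))   # per-class precision/recall/F1
```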

Keywords: aggression, classifiers, cyberbullying, datasets, hate speech, machine learning

Procedia PDF Downloads 198
140 Problem Gambling in the Conceptualization of Health Professionals: A Qualitative Analysis of the Discourses Produced by Psychologists, Psychiatrists and General Practitioners

Authors: T. Marinaci, C. Venuleo

Abstract:

Different conceptualizations of a disorder affect patient care, yet how health professionals conceptualize problem gambling has received little attention. This study aims to address this gap. It explores how health professionals conceptualize the gambling problem, addiction, and the goals of the recovery process. In-depth, semi-structured, open-ended interviews were conducted with Italian psychologists, psychiatrists, general practitioners, and support staff (N = 114), working within health centres for the treatment of addiction (public health services or therapeutic communities) or medical offices. A Lexical Correspondence Analysis (LCA) was applied to the verbatim transcripts. LCA allowed the identification of two main factorial dimensions, which organize similarity and dissimilarity in the discourses of the interviewees. The first dimension, labelled 'Models of relationship with the problem', concerns two different models of relationship with the health problem: one related to the request for help and the process of taking charge, and the other related to the identification of the psychopathology underlying the disorder. The second dimension, labelled 'Organisers of the intervention', reflects the dialectic between two ways of addressing the problem: on the one hand, the intervention is organized around the gambling dynamics and their immediate life consequences (whatever the user's request is); on the other hand, it is organized around the procedures and tools which characterize the health service (whatever the user's problem is and regardless of the specificity of the user's request). The results highlight how, despite the differences, the respondents share a central assumption: understanding the gambling problem implies reference to the gambler's identity more than, for instance, to the relational, social, cultural or political context where the gambler lives. A passive stance is attributed to the user, who does not play any role in the definition of the goal of the intervention. The results will be discussed to highlight the relationship between professional models and users' ways of understanding and dealing with the problems related to gambling.
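For readers unfamiliar with the technique, the sketch below shows the numerical core of a correspondence analysis: a word-by-group contingency table is decomposed into factorial dimensions via an SVD of the standardized residuals. The toy table is an illustrative assumption, not the study's lexical data.

```python
# A minimal numpy sketch of correspondence analysis on a lexical contingency table.
import numpy as np

N = np.array([[30, 5], [10, 25], [20, 20]], dtype=float)   # rows: lexical forms, cols: professional groups (toy)
P = N / N.sum()
r, c = P.sum(axis=1), P.sum(axis=0)                         # row and column masses
S = np.diag(1 / np.sqrt(r)) @ (P - np.outer(r, c)) @ np.diag(1 / np.sqrt(c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
row_coords = np.diag(1 / np.sqrt(r)) @ U * sv               # principal coordinates of the lexical forms
print("inertia per dimension:", sv**2)
print(row_coords)
```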

Keywords: cultural models, health professionals, intervention models, problem gambling

Procedia PDF Downloads 122
139 Argumentation Frameworks and Theories of Judging

Authors: Sonia Anand Knowlton

Abstract:

With the rise of artificial intelligence, computer science is becoming increasingly integrated into virtually every area of life. Of course, the law is no exception. Through argumentation frameworks (AFs), computer scientists have used abstract algebra to structure the legal reasoning process in a way that allows conclusions to be drawn from a formalized system of arguments. In AFs, arguments compete against each other for logical success and are related to one another through the binary attack relation. The prevailing arguments make up the preferred extension of the given argumentation framework, telling us what set of arguments must be accepted from a logical standpoint. There have been several developments of AFs since their original conception in the early 1990s in efforts to make them more aligned with the human reasoning process. Generally, these developments have sought to add nuance to the factors that influence the logical success of competing arguments (e.g., giving an argument more logical strength based on the underlying value it promotes). The most cogent development was that of the Extended Argumentation Framework (EAF), in which attacks can themselves be attacked by other arguments, and the promotion of different competing values can be formalized within the system. This article applies the logical structure of EAFs to current theoretical understandings of judicial reasoning to contribute to theories of judging and to the evolution of AFs simultaneously. The argument is that the main limitation of EAFs, when applied to judicial reasoning, is that they require judges to themselves assign values to different arguments and then lexically order these values to determine the given framework's preferred extension. Drawing on John Rawls' Theory of Justice, the examination that follows asks whether values are lexical and commensurable to this extent. The analysis then suggests a potential extension of the EAF system with an approach that formalizes different "planes of attack" for competing arguments that promote lexically ordered values. This article concludes with a summary of how these insights contribute to theories of judging and of legal reasoning more broadly, specifically in indeterminate cases where judges must turn to value-based approaches.
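To make the notion of a preferred extension concrete, the sketch below enumerates the preferred extensions (maximal admissible sets) of a tiny basic argumentation framework by brute force. The example framework is an illustrative assumption and does not include the value-based machinery of EAFs.

```python
# A minimal sketch: preferred extensions of an abstract argumentation framework.
from itertools import combinations

ARGS = {"a", "b", "c"}
ATTACKS = {("a", "b"), ("b", "c")}            # a attacks b, b attacks c

def conflict_free(S):
    return not any((x, y) in ATTACKS for x in S for y in S)

def defends(S, arg):
    attackers = {x for (x, y) in ATTACKS if y == arg}
    return all(any((d, x) in ATTACKS for d in S) for x in attackers)

def admissible(S):
    return conflict_free(S) and all(defends(S, a) for a in S)

admissible_sets = [set(S) for k in range(len(ARGS) + 1)
                   for S in combinations(sorted(ARGS), k) if admissible(set(S))]
preferred = [S for S in admissible_sets
             if not any(S < T for T in admissible_sets)]   # maximal w.r.t. set inclusion
print(preferred)                                            # -> [{'a', 'c'}]
```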

Keywords: computer science, mathematics, law, legal theory, judging

Procedia PDF Downloads 38
138 The Contribution of Corpora to the Investigation of Cross-Linguistic Equivalence in Phraseology: A Contrastive Analysis of Russian and Italian Idioms

Authors: Federica Floridi

Abstract:

The long tradition of contrastive idiom research has essentially been focusing on three domains: the comparison of structural types of idioms (e.g. verbal idioms, idioms with noun-phrase structure, etc.), the description of idioms belonging to the same thematic groups (Sachgruppen), and the identification of different types of cross-linguistic equivalents (i.e. full equivalents, partial equivalents, phraseological parallels, non-equivalents). The diastratic, diachronic and diatopic aspects of the compared idioms, as well as their syntactic, pragmatic and semantic properties, have been rather ignored. Corpora (both monolingual and parallel) give the opportunity to investigate the actual use of correlating idioms in authentic texts of L1 and L2. Adopting the corpus-based approach, it is possible to draw attention to the frequency of occurrence of idioms, their syntactic embedding, their potential syntactic transformations (e.g., nominalization, passivization, relativization, etc.), their combinatorial possibilities, the variations of their lexical structure, and their connotations in terms of stylistic markedness or register. This paper aims to present the results of a contrastive analysis of Russian and Italian idioms referring to the concepts of ‘beginning’ and ‘end’, which has been carried out using the Russian National Corpus and the ‘La Repubblica’ corpus. Beyond the digital corpora, bilingual dictionaries, like Skvorcova - Majzel’, Dobrovol’skaja, Kovalev, Čerdanceva, as well as monolingual resources, have been consulted. The study has shown that many of the idioms that have been traditionally indicated as cross-linguistic equivalents in bilingual dictionaries cannot be considered correspondents. The findings demonstrate that even those idioms that are formally identical in Russian and Italian and are presumably derived from the same source (e.g., conceptual metaphor, the Bible, classical mythology, world literature) exhibit differences regarding usage. The ultimate purpose of this article is to highlight that it is necessary to review and improve the existing bilingual dictionaries considering the empirical data collected in corpora. The materials gathered in this research can contribute to this effort.

Keywords: corpora, cross-linguistic equivalence, idioms, Italian, Russian

Procedia PDF Downloads 118
137 Lithuanian Sign Language Literature: Metaphors at the Phonological Level

Authors: Anželika Teresė

Abstract:

In order to solve issues in sign language linguistics, address matters pertaining to maintaining high quality of sign language (SL) translation, contribute to dispelling misconceptions about SL and deaf people, and raise awareness and understanding of the deaf community heritage, this presentation discusses literature in Lithuanian Sign Language (LSL) and the inherent metaphors that are created by using the phonological parameters: handshape, location, movement, palm orientation and nonmanual features. The study covered in this presentation is twofold, involving both the micro-level analysis of metaphors in terms of phonological parameters as a sub-lexical feature and the macro-level analysis of the poetic context. Cognitive theories underlie research on metaphors in sign language literature across a range of SLs. The study follows this practice. The presentation covers the qualitative analysis of 34 pieces of LSL literature. The analysis employs ELAN software, which is widely used in SL research. The target is to examine how specific types of each phonological parameter are used for the creation of metaphors in LSL literature and what metaphors are created. The results of the study show that LSL literature employs a range of metaphors created by using classifier signs and by modifying established signs. The study also reveals that LSL literature tends to create reference metaphors indicating status and power. As the study shows, LSL poets metaphorically encode status by encoding another meaning in the same sign, which results in creating double metaphors. A metaphor of identity has also been identified. Notably, the poetic context has revealed that the latter metaphor can also be identified as a metaphor for life. The study goes on to note that deaf poets create metaphors related to the significance of various phenomena for the lyrical subject. Notably, the study detected locations, nonmanual features, etc. never mentioned in previous SL research as being used for the creation of metaphors.

Keywords: Lithuanian sign language, sign language literature, sign language metaphor, metaphor at the phonological level, cognitive linguistics

Procedia PDF Downloads 106
136 An Intelligent Search and Retrieval System for Mining Clinical Data Repositories Based on Computational Imaging Markers and Genomic Expression Signatures for Investigative Research and Decision Support

Authors: David J. Foran, Nhan Do, Samuel Ajjarapu, Wenjin Chen, Tahsin Kurc, Joel H. Saltz

Abstract:

The large-scale data and computational requirements of investigators throughout the clinical and research communities demand an informatics infrastructure that supports both existing and new investigative and translational projects in a robust, secure environment. In some subspecialties of medicine and research, the capacity to generate data has outpaced the methods and technology used to aggregate, organize, access, and reliably retrieve this information. Leading health care centers now recognize the utility of establishing an enterprise-wide, clinical data warehouse. The primary benefits that can be realized through such efforts include cost savings, efficient tracking of outcomes, advanced clinical decision support, improved prognostic accuracy, and more reliable clinical trials matching. The overarching objective of the work presented here is the development and implementation of a flexible Intelligent Retrieval and Interrogation System (IRIS) that exploits the combined use of computational imaging, genomics, and data-mining capabilities to facilitate clinical assessments and translational research in oncology. The proposed System includes a multi-modal Clinical & Research Data Warehouse (CRDW) that is tightly integrated with a suite of computational and machine-learning tools to provide insight into underlying tumor characteristics that are not apparent from human inspection alone. A key distinguishing feature of the System is a configurable Extract, Transform and Load (ETL) interface that enables it to adapt to different clinical and research data environments. This project is motivated by the growing emphasis on establishing Learning Health Systems in which cyclical hypothesis generation and evidence evaluation become integral to improving the quality of patient care. To facilitate iterative prototyping and optimization of the algorithms and workflows for the System, the team has already implemented a fully functional Warehouse that can reliably aggregate information originating from multiple data sources including EHRs, Clinical Trial Management Systems, Tumor Registries, Biospecimen Repositories, Radiology PACS, Digital Pathology archives, Unstructured Clinical Documents, and Next Generation Sequencing services. The System enables physicians to systematically mine and review the molecular, genomic, image-based, and correlated clinical information about patient tumors individually or as part of large cohorts to identify patterns that may influence treatment decisions and outcomes. The CRDW core system has facilitated peer-reviewed publications and funded projects, including an NIH-sponsored collaboration to enhance the cancer registries in Georgia, Kentucky, New Jersey, and New York with machine-learning-based classifications and quantitative pathomics feature sets. The CRDW has also resulted in a collaboration with the Massachusetts Veterans Epidemiology Research and Information Center (MAVERIC) at the U.S. Department of Veterans Affairs to develop algorithms and workflows to automate the analysis of lung adenocarcinoma. Those studies showed that combining computational nuclear signatures with traditional WHO criteria through the use of deep convolutional neural networks (CNNs) led to improved discrimination among tumor growth patterns. The team has also leveraged the Warehouse to support studies to investigate the potential of utilizing a combination of genomic and computational imaging signatures to characterize prostate cancer. The results of those studies show that integrating image biomarkers with genomic pathway scores is more strongly correlated with disease recurrence than using standard clinical markers.
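To illustrate the configurable ETL idea in miniature, the sketch below uses a field-mapping dictionary (the configurable part) to translate source-specific column names into warehouse fields before loading them into a local SQLite table. The file names, column names, and schema are illustrative assumptions, not the actual CRDW design.

```python
# A minimal extract-transform-load sketch driven by a configurable field map.
import csv, sqlite3

FIELD_MAP = {"PatientID": "patient_id", "GleasonScore": "gleason", "PathwayScore": "pathway_score"}  # assumed

def extract(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Keep only mapped columns and rename them to the warehouse schema.
    return [{FIELD_MAP[k]: v for k, v in row.items() if k in FIELD_MAP} for row in rows]

def load(rows, db="crdw.sqlite"):
    con = sqlite3.connect(db)
    con.execute("CREATE TABLE IF NOT EXISTS tumor_features (patient_id TEXT, gleason TEXT, pathway_score REAL)")
    con.executemany("INSERT INTO tumor_features VALUES (:patient_id, :gleason, :pathway_score)", rows)
    con.commit()
    con.close()

load(transform(extract("genomics_export.csv")))   # assumed source file
```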

Keywords: clinical data warehouse, decision support, data-mining, intelligent databases, machine-learning

Procedia PDF Downloads 90
135 First Formaldehyde Retrieval Using the Raw Data Obtained from Pandora in Seoul: Investigation of the Temporal Characteristics and Comparison with Ozone Monitoring Instrument Measurement

Authors: H. Lee, J. Park

Abstract:

In the present study, for the first time, we retrieved the Formaldehyde (HCHO) Vertical Column Density (HCHOVCD) using Pandora instruments in Seoul, a megacity in northeast Asia, for the period between 2012 and 2014 and investigated the temporal characteristics of HCHOVCD. The HCHO Slant Column Density (HCHOSCD) was obtained using the Differential Optical Absorption Spectroscopy (DOAS) method. HCHOSCD was converted to HCHOVCD using the geometric Air Mass Factor (AMFG), as Pandora is a direct-sun measurement. The HCHOVCD is low at 12:00 Local Time (LT) and high in the morning (10:00 LT) and late afternoon (16:00 LT), except in winter. The maximum (minimum) values of Pandora HCHOVCD are 2.68×10¹⁶ (1.63×10¹⁶), 3.19×10¹⁶ (2.23×10¹⁶), 2.00×10¹⁶ (1.26×10¹⁶), and 1.63×10¹⁶ (0.82×10¹⁶) molecules cm⁻² in spring, summer, autumn, and winter, respectively. In terms of seasonal variations, HCHOVCD was high in summer and low in winter, which implies that photo-oxidation plays an important role in HCHO production in Seoul. In comparison with the Ozone Monitoring Instrument (OMI) measurements, the HCHOVCDs from the OMI are lower than those from Pandora. The correlation coefficient (R) between monthly HCHOVCD values from Pandora and OMI is 0.61, with a slope of 0.35. Furthermore, to understand the HCHO mixing ratio within the Planetary Boundary Layer (PBL) in Seoul, we converted Pandora HCHOVCDs to the HCHO mixing ratio in the PBL using meteorological input data from the Atmospheric InfraRed Sounder (AIRS). Seasonal HCHO mixing ratios in the PBL converted from Pandora (OMI) HCHOVCDs are estimated to be 6.57 (5.17), 7.08 (6.68), 7.60 (4.70), and 5.00 (4.76) ppbv in spring, summer, autumn, and winter, respectively.
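The slant-to-vertical conversion mentioned above reduces, for a direct-sun instrument, to dividing by the geometric air mass factor AMFG = 1/cos(SZA). The sketch below shows that arithmetic; the slant column and solar zenith angle values are illustrative assumptions.

```python
# A minimal sketch of the geometric AMF conversion for a direct-sun measurement:
# VCD = SCD / AMF_G, with AMF_G = 1 / cos(solar zenith angle).
import math

def geometric_amf(sza_deg):
    return 1.0 / math.cos(math.radians(sza_deg))

def slant_to_vertical(scd, sza_deg):
    return scd / geometric_amf(sza_deg)

scd = 5.0e16                     # HCHO slant column density, molecules cm^-2 (assumed)
sza = 60.0                       # solar zenith angle, degrees (assumed)
print(f"VCD = {slant_to_vertical(scd, sza):.2e} molecules cm^-2")   # -> 2.50e+16
```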

Keywords: formaldehyde, OMI, Pandora, remote sensing

Procedia PDF Downloads 129
134 Lexical Knowledge of Verb Particle Constructions with the Particle on by Mexican English Learners

Authors: Sarai Alvarado Pineda, Ricardo Maldonado Soto

Abstract:

The acquisition of Verb Particle Constructions is a challenge for Spanish speakers learning English. The acquisition is particularly difficult for speakers of languages with no verb particle constructions. The purpose of the current study is to define the procedural steps in the acquisition of constructions with the particle on. There are three prominent meanings of the particle on: Surface ('The movie is based on a true story'), Activation ('John turned on the light'), and Continuity ('The band played on all night'). The central aim of this study is to measure how Mexican Spanish participants respond both to the three meanings mentioned above and to the degree of meaning transparency/opacity of on verb particle constructions. Forty Mexican Spanish learners of English (20 basic and 20 advanced) are compared against a control group of 20 American native English speakers through a reaction time test (PsychoPy2 2015). The participants were asked to discriminate 90 items based on their knowledge of these constructions. There are 30 items per meaning, divided into two groups of transparent and opaque meaning. Results revealed three major findings: Advanced students have a reaction time similar to that of native speakers (advanced 4.5 s versus native 3.7 s), while students with a lower level of English proficiency show a high reaction time (7 s). Likewise, there is a shorter reaction time for constructions with lower opacity in all three groups of participants, with differences between each level (basic 6.7 s, advanced 4.3 s, and native 3.4 s). Finally, a difference in reaction time can be identified according to the meaning provided by the construction: the reaction time for the activation category (5.27 s) is greater than for continuity (5.04 s), which in turn is slower than for surface (4.94 s). The study shows that the level of sensitivity of English learners increases significantly toward native-speaker patterns, as determined by the level of meaning transparency of each construction as well as the degree of entrenchment of each constructional meaning.
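A simple way to summarize such reaction-time data is to average response latencies by proficiency group and transparency condition, as in the sketch below. The CSV layout (columns "group", "transparency", "rt_seconds") is an illustrative assumption; the experiment itself was run in PsychoPy.

```python
# A minimal sketch of aggregating reaction times by group and transparency condition.
import csv
from collections import defaultdict
from statistics import mean

times = defaultdict(list)
with open("vpc_reaction_times.csv", newline="") as f:        # assumed file name and layout
    for row in csv.DictReader(f):
        times[(row["group"], row["transparency"])].append(float(row["rt_seconds"]))

for (group, transparency), values in sorted(times.items()):
    print(f"{group:>8} / {transparency:<11}: mean RT = {mean(values):.2f} s (n={len(values)})")
```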

Keywords: meaning of the particle, opacity, reaction time, verb particle constructions

Procedia PDF Downloads 242
133 Land Cover, Land Surface Temperature, and Urban Heat Island Effects in Tropical Sub Saharan City of Accra

Authors: Eric Mensah

Abstract:

The effects of rapid urbanisation of tropical sub-Saharan developing cities on local and global climate are of great concern due to the negative impacts of Urban Heat Island (UHI) effects. The importance of urban parks, vegetative cover and forest reserves in these tropical cities has been undervalued, with rapid degradation and loss of these vegetative covers to urban development, which continues to cause an increase in daily mean temperatures and changes to local climatic conditions. Using Landsat data of the same months and period intervals, the spatial variations of land cover changes, temperature, and vegetation were examined to determine how vegetation improves local temperature and the effects of urbanisation on daily mean temperatures over the past 12 years. The remote sensing techniques of maximum likelihood supervised classification, land surface temperature retrieval, and the normalised difference vegetation index were used to create the land use land cover (LULC), land surface temperature (LST), and vegetation and non-vegetation cover maps, respectively. Results from the study showed an increase in daily mean temperature of 0.80 °C as a result of a rapid increase in urban area of 46.13 sq. km and a loss of vegetative cover of 46.24 sq. km between 2005 and 2017. The LST map also shows the existence of UHI within the urban areas of Accra and the potential mitigating effects offered by forest and vegetative cover, as demonstrated by the cool islands around the Achimota ecological forest and the University of Ghana botanical gardens.
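Two of the per-pixel computations behind the maps described above are shown in the sketch below: NDVI from the red and near-infrared bands, and at-sensor brightness temperature from the thermal band, a common first step of single-channel LST retrieval. The calibration constants are placeholders standing in for the values read from the Landsat scene metadata.

```python
# A minimal numpy sketch: NDVI and thermal-band brightness temperature.
import numpy as np

red = np.array([[0.10, 0.25], [0.05, 0.30]])           # red reflectance (toy pixels)
nir = np.array([[0.40, 0.30], [0.45, 0.32]])           # NIR reflectance

ndvi = (nir - red) / (nir + red + 1e-10)                # vegetation index in [-1, 1]

dn = np.array([[22000, 26000], [21000, 30000]])         # thermal band digital numbers (toy)
ML, AL = 3.342e-4, 0.1                                   # radiance rescaling factors (from scene metadata, assumed)
K1, K2 = 774.89, 1321.08                                 # thermal conversion constants (from scene metadata, assumed)
radiance = ML * dn + AL
bt_kelvin = K2 / np.log(K1 / radiance + 1.0)             # at-sensor brightness temperature

print(np.round(ndvi, 2))
print(np.round(bt_kelvin - 273.15, 1))                   # degrees Celsius
```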

Keywords: land surface temperature, climate, remote sensing, urbanisation

Procedia PDF Downloads 296
132 Development of a Web-Based Application for Intelligent Fertilizer Management in Rice Cultivation

Authors: Hao-Wei Fu, Chung-Feng Kao

Abstract:

In the era of rapid technological advancement, information technology (IT) has become integral to modern life, exerting significant influence across diverse sectors and serving as a catalyst for development in various industries. Within agriculture, the integration of IT offers substantial benefits, notably enhancing operational efficiency. Real-time monitoring systems, for instance, have been widely embraced in agriculture, effectively improving crop management practices. This study specifically addresses the management of rice panicle fertilizer, presenting the development of a web application tailored to handle data associated with rice panicle fertilizer management. Leveraging the normalized difference red edge index, this application optimizes the quantity of rice panicle fertilizer used, providing recommendations to agricultural stakeholders and service providers in the agricultural information sector. The overarching objective is to minimize costs while maximizing yields. Furthermore, a robust database system has been established to store and manage relevant data for future reference in rice cultivation management. Additionally, the study utilizes the Representational State Transfer software architectural style to construct an application programming interface (API), facilitating data creation, retrieval, updating, and deletion for users via the HyperText Transfer Protocol methods. Future plans involve integrating this API with third-party services to incorporate it into larger frameworks, thus catering to the diverse requirements of various third-party services.
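The sketch below illustrates, in the spirit of the application described above, a REST-style API exposing create, retrieve, update, and delete operations over HTTP methods. The route names, record fields, and in-memory store are illustrative assumptions, not the study's actual implementation.

```python
# A minimal Flask sketch of a REST-style CRUD API for fertilizer recommendation records.
from flask import Flask, request, jsonify, abort

app = Flask(__name__)
records = {}            # in-memory store: field_id -> record (a real system would use a database)
next_id = 1

@app.route("/fields", methods=["POST"])
def create():
    global next_id
    body = request.get_json()                    # e.g. {"ndre": 0.41, "n_rate_kg_ha": 60} (assumed fields)
    records[next_id] = body
    next_id += 1
    return jsonify(id=next_id - 1), 201

@app.route("/fields/<int:fid>", methods=["GET", "PUT", "DELETE"])
def field(fid):
    if fid not in records:
        abort(404)
    if request.method == "GET":
        return jsonify(records[fid])
    if request.method == "PUT":
        records[fid] = request.get_json()
        return jsonify(records[fid])
    records.pop(fid)                              # DELETE
    return "", 204

if __name__ == "__main__":
    app.run(debug=True)
```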

Keywords: application programming interface, HyperText Transfer Protocol, nitrogen fertilizer intelligent management, web-based application

Procedia PDF Downloads 31
131 An Analysis on Aid for Migrants: A Descriptive Analysis on Official Development Assistance During the Migration Crisis

Authors: Elena Masi, Adolfo Morrone

Abstract:

Migration has recently become a mainstream development sector and is currently at the forefront in institutional and civil society contexts. However, no consensus exists on how the link between migration and development operates, that is, how development is related to migration and how migration can promote development. On one hand, Official Development Assistance is recognized to be one of the levers of development. On the other hand, the debate is focusing on what the scope of aid programs targeting migrant groups and, in general, the migration process should be. This paper provides a descriptive analysis of how development aid for migration was allocated in the recent past, focusing on the actions that were funded and implemented by the international donor community. In the absence of an internationally shared methodology for defining the boundaries of development aid on migration, the analysis is based on lexical hypotheses applied to the title or the short description of initiatives funded by several Organization for Economic Co-operation and Development (OECD) countries. Moreover, the research describes and quantifies aid flows for each country according to different criteria. The terms migrant and refugee are used to identify the projects in accordance with the most widely agreed international definitions, and only actions in countries of transit or of origin are considered eligible, thus excluding the amounts spent on refugees in donor countries. The results show that the percentage of projects targeting migrants, in terms of amount, has followed a growing trend from 2009 to 2016 in several European countries, and is positively correlated with the flows of migrants. Distinguishing between programs targeting migrants and programs targeting refugees, some specific national features emerge more clearly. A focus is devoted to actions targeting the root causes of migration, showing an inter-sectoral approach in international aid allocation. The analysis gives some tentative solutions to the lack of consensus on language on migration and development aid, and emphasizes the need for an internationally agreed criterion for identifying programs targeting migrants and refugees, to make action more transparent and to develop effective strategies at the global level.
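The lexical-hypothesis step can be illustrated as simple keyword matching over project titles and short descriptions, as in the sketch below. The term lists and record layout are illustrative assumptions, not the study's actual coding scheme.

```python
# A minimal sketch of flagging aid projects as migrant- or refugee-targeted by lexical cues.
MIGRANT_TERMS = ("migrant", "migration", "diaspora", "remittance")
REFUGEE_TERMS = ("refugee", "asylum", "displaced")

def classify(project):
    text = (project["title"] + " " + project["short_description"]).lower()
    return {
        "migrants": any(t in text for t in MIGRANT_TERMS),
        "refugees": any(t in text for t in REFUGEE_TERMS),
    }

sample = {"title": "Support to refugee camps in transit countries",
          "short_description": "Shelter and protection for displaced persons"}
print(classify(sample))        # -> {'migrants': False, 'refugees': True}
```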

Keywords: migration, official development assistance, ODA, refugees, time series

Procedia PDF Downloads 111
130 Feasibility Study of MongoDB and Radio Frequency Identification Technology in Asset Tracking System

Authors: Mohd Noah A. Rahman, Afzaal H. Seyal, Sharul T. Tajuddin, Hartiny Md Azmi

Abstract:

In real-world settings, higher academic institutions, small to large companies, and public and private sector organizations alike experience inventory or asset shrinkage due to theft, loss or even inventory tracking errors. This happens because of absent or poor security systems and measures in their organizations. Hence, integrating Radio Frequency Identification (RFID) technology into any manual or existing web-based system or application can deter and eventually solve certain major issues, serving better data retrieval and data access. Moreover, such a manual or existing system can be enhanced into a mobile-based system or application. In addition, the availability of internet connections enables better services from the system. Such a combination of technologies offers individuals and organizations benefits in terms of accessibility, availability, mobility, efficiency, effectiveness, real-time information and security. This paper looks deeper into the integration of mobile devices with RFID technologies for the purpose of asset tracking and control. This is followed by the development and use of MongoDB as the main database to store data and its association with RFID technology. Finally, a web-based system that can be viewed in a mobile-based format is developed with the aid of Hypertext Preprocessor (PHP), MongoDB, HyperText Markup Language 5 (HTML5), Android, JavaScript and AJAX.
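The sketch below shows how RFID tag reads could be stored in and retrieved from MongoDB. The connection string, database, collection, and document fields are illustrative assumptions, not the system's actual schema (which is fronted by a PHP web layer rather than Python).

```python
# A minimal pymongo sketch: store an RFID tag sighting and query the latest one.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")        # assumed local MongoDB instance
reads = client["asset_tracking"]["tag_reads"]

# A reader posts a tag sighting (e.g. forwarded from a mobile device over the web API).
reads.insert_one({
    "tag_id": "E200-3412-0123",
    "asset": "Projector, Lab 3",
    "reader": "gate-02",
    "seen_at": datetime.now(timezone.utc),
})

# Retrieve the most recent sighting of the asset.
last = reads.find_one({"tag_id": "E200-3412-0123"}, sort=[("seen_at", -1)])
print(last["reader"], last["seen_at"])
```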

Keywords: RFID, asset tracking system, MongoDB, NoSQL

Procedia PDF Downloads 279
129 Efficient Chess Board Representation: A Space-Efficient Protocol

Authors: Raghava Dhanya, Shashank S.

Abstract:

This paper delves into the intersection of chess and computer science, specifically focusing on the efficient representation of chess game states. We propose two methods: the Static Method and the Dynamic Method, each offering unique advantages in terms of space efficiency and computational complexity. The Static Method aims to represent the game state using a fixed-length encoding, allocating 192 bits to capture the positions of all pieces on the board. This method introduces a protocol for ordering and encoding piece positions, ensuring efficient storage and retrieval. However, it faces challenges in representing pieces no longer in play. In contrast, the Dynamic Method adapts to the evolving game state by dynamically adjusting the encoding length based on the number of pieces in play. By incorporating Alive Bits for each piece kind, this method achieves greater flexibility and space efficiency. Additionally, it includes provisions for encoding additional game state information such as castling rights and en passant squares. Our findings demonstrate that the Dynamic Method offers superior space efficiency compared to traditional Forsyth-Edwards Notation (FEN), particularly as the game progresses and pieces are captured. However, it comes with increased complexity in encoding and decoding processes. In conclusion, this study provides insights into optimizing the representation of chess game states, offering potential applications in chess engines, game databases, and artificial intelligence research. The proposed methods offer a balance between space efficiency and computational overhead, paving the way for further advancements in the field.
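The fixed-length idea can be illustrated directly: with an agreed ordering of the 32 pieces, each piece's square (0-63) fits in 6 bits, giving 32 × 6 = 192 bits for the whole position. In the sketch below, captured pieces, which are the unresolved case noted above, are parked on a sentinel value; that is one possible workaround and an assumption, not the authors' protocol.

```python
# A minimal sketch of a 192-bit fixed-length position encoding via bit manipulation.
PIECE_ORDER = [  # agreed, fixed ordering of the 32 pieces (assumption)
    "WK", "WQ", "WR1", "WR2", "WB1", "WB2", "WN1", "WN2",
    "WP1", "WP2", "WP3", "WP4", "WP5", "WP6", "WP7", "WP8",
    "BK", "BQ", "BR1", "BR2", "BB1", "BB2", "BN1", "BN2",
    "BP1", "BP2", "BP3", "BP4", "BP5", "BP6", "BP7", "BP8",
]
CAPTURED = 0  # sentinel square for a captured piece (would need disambiguation in practice)

def encode(squares):
    """squares: dict piece -> square index 0-63; returns a 192-bit integer."""
    state = 0
    for piece in PIECE_ORDER:
        state = (state << 6) | (squares.get(piece, CAPTURED) & 0x3F)
    return state

def decode(state):
    squares = {}
    for piece in reversed(PIECE_ORDER):
        squares[piece] = state & 0x3F
        state >>= 6
    return squares

toy = {p: i for i, p in enumerate(PIECE_ORDER)}   # toy placement, not the real starting position
packed = encode(toy)
assert decode(packed) == toy
print(packed.bit_length(), "bits")                # <= 192
```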

Keywords: chess, optimisation, encoding, bit manipulation

Procedia PDF Downloads 19
128 Procedures and Strategies in Translation: Two Marathi Translations of Train to Pakistan by Khushwant Singh

Authors: Manoj Gujar

Abstract:

The present paper is an attempt to interpret two Marathi translations of Khushwant Singh’s (1915-2014) novel Train to Pakistan (1956). The 20th century was branded as an era of Liberalization, Privatization and Globalization. Different countries and cultures have engaged in interaction with one another in an unprecedented manner. The world is becoming multilingual and multicultural. Democratic countries such as the U.S.A., the U.K., and India have become pivotal centers of interlingual and cross-cultural exchange. People belonging to different nationalities have shown keen interest in knowing the characteristic features of different languages and of their cultures. Here, ‘Translation’ plays an important role in such multilingual and multicultural contexts. Translation is not only the translation of a language but also the translation of a culture. In the act of translation, a translator makes use of such procedures as borrowing, definition, literal translation, substitution, lexical creation, omission and addition, as well as their various combinations. Through these processes of translation, a text produced in one linguistic and cultural context can reach other linguistic and cultural contexts. A worthy work of art appeals to many readers. India being a multilingual country, we find multiple translations of the same text in different Indian languages. Sometimes the same text appeals to different ages and gets translated into the same language by two or more translators. In this regard, the present paper is an attempt to study how different translations of the same text differ in terms of procedures and strategies during the process of the translation of culture. The source text is Khushwant Singh’s historical novel Train to Pakistan (1956). The novel was widely appreciated and so translated into different regional languages in India. The novel has two Marathi translations: Agniratha (1972) by Hidayatkhan and Train to Pakistan (1980) by Anil Kinikar. This paper is an attempt to evaluate the strategies and procedures in translation by analyzing these two Marathi translations. Hidayat Khan omitted many significant details and distorted the original text to a large extent, whereas Anil Kinikar has done justice to the source text by rendering it in Marathi as faithfully as possible.

Keywords: culture, multilingual, procedures and strategies, translation

Procedia PDF Downloads 348
127 The Organizational Structure, Development Features, and Metadiscoursal Elements in the Expository Writing of College Freshman Students

Authors: Lota Largavista

Abstract:

This study, entitled ‘The Organizational Structure, Development Features, and Metadiscoursal Elements in the Expository Writing of Freshman College Writers’, examines essays written by college students. It seeks to examine the organizational structure and development features of the essays and describe their defining characteristics, the linguistic elements at both macrostructural and microstructural discourse levels, and the types of textual and interpersonal metadiscourse markers that are employed in order to negotiate meanings with their prospective readers. The different frameworks used to analyze the essays include Toulmin’s (1984) model of argument structure; Olson’s (2003) three-part essay structure; Halliday and Matthiessen’s (2004) notions of thematic structure, as cited in Herriman (2011); Danes’ (1974) thematic progression or method of development; Halliday’s (2004) concept of grammatical and lexical cohesion; Hyland’s (2005) metadiscourse strategies; and Chung and Nation’s (2003) four-step scale for technical vocabulary. This descriptive study analyzes qualitatively and quantitatively how freshman students generally develop their written compositions. Coding of units is done to determine what linguistic features are present in the essays. Findings revealed that students’ expository essays observe a three-part structure containing all three moves: the Introduction, the Body, and the Conclusion. Stance assertion, stance support, and emerging moves/strategies are found to be employed in the essays. Students use more marked themes in the essays and prefer constant theme progression as their method of development. The analysis of salient linguistic elements reveals frequently used cohesive devices and metadiscoursal strategies. Based on the findings, an instructional learning plan is proposed. This plan is characterized by a genre approach that focuses on expository and linguistic conventions.

Keywords: metadiscourse, organization, theme progression, structure

Procedia PDF Downloads 201
126 A Critical Discourse Analysis of Citizenship Education Textbook for Primary School Students in Singapore

Authors: Ren Boyuan

Abstract:

This study focuses on how the Character and Citizenship Education textbook in Singapore primary schools delivers preferred and desired qualities to students and thereby reveals how discourse in textbooks can facilitate and perpetuate certain social practices. In this way, the study also serves to encourage the critical thinking of textbook writers and school educators by unveiling the nuanced messages conveyed through language use that facilitate the perpetuation of social practices in a society. In Singapore, Character and Citizenship Education is a compulsory subject for primary school students. Under the framework of 21st Century Competencies, Character and Citizenship Education in Singapore aims to help students thrive in this fast-changing world. The Singapore government is involved in the development of the CCE curriculum in schools from primary to pre-university level. Inevitably, the CCE curriculum is not free from ideological influences. This qualitative study utilizes Fairclough’s three-dimensional theory and his framework of three assumptions to analyze the Character and Citizenship Education textbook for Primary 1 and to reveal the ideologies in this textbook. Data for the analysis are the textual parts of the whole textbook for Primary 1 students, as this book is used at the beginning of citizenship education in primary schools. It is significant because it conveys messages about CCE during the foundation years of a child's education. The findings of this study show that the four revealed ideologies, namely pragmatism, communitarianism, nationalism, and multiculturalism, are not only rooted in the national history but also updated and justified by current demands for Singapore’s sustainable thriving and prosperity. The paper ends with a discussion of its implications. By pointing out the ideologies in this textbook and how they are embedded in the discourse, this study may help teachers and textbook writers realize the possible political involvement in the book and therefore develop their recognition of the implicit influence of lexical choice on their teaching and writing. In addition, by exploring the ideologies in this book and comparing them with ideologies in past textbooks, this study offers researchers in this area insight into how language influences readers and reflects certain social demands.

Keywords: citizenship education, critical discourse analysis, sociolinguistics, textbook analysis

Procedia PDF Downloads 33
125 Research on Construction of Subject Knowledge Base Based on Literature Knowledge Extraction

Authors: Yumeng Ma, Fang Wang, Jinxia Huang

Abstract:

In the big data era, researchers have higher requirements for the efficient acquisition and utilization of domain knowledge. As literature is an effective way for researchers to quickly and accurately understand the research situation in their field, knowledge discovery based on literature has become a new research method. As a tool to organize and manage knowledge in a specific domain, the subject knowledge base can be used to mine and present the knowledge behind the literature to meet users' personalized needs. This study designs the construction route of a subject knowledge base for specific research problems. An information extraction method based on knowledge engineering is adopted. Firstly, the subject knowledge model is built through the abstraction of the research elements. Then, under the guidance of the knowledge model, extraction rules for knowledge points are compiled to analyze, extract and correlate entities, relations, and attributes in the literature. Finally, a database platform based on this structured knowledge is developed that can provide a variety of services such as knowledge retrieval, knowledge browsing, knowledge Q&A, and correlation visualization. Taking the construction practices in the field of activating blood circulation and removing stasis as an example, this study analyzes how to construct a subject knowledge base based on literature knowledge extraction. As the system functional test shows, this subject knowledge base can realize the expected service scenarios, such as quick knowledge queries, related discovery of knowledge and literature, and knowledge organization. As this study enables the subject knowledge base to help researchers locate and acquire deep domain knowledge quickly and accurately, it provides a transformation model for knowledge resource construction and personalized precision knowledge services in the data-intensive research environment.
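The rule-based extraction step can be pictured as hand-written patterns that pull entity-relation-attribute triples out of abstract sentences, as in the sketch below. The pattern and example sentence are illustrative assumptions and are far simpler than the compiled extraction rules of the study.

```python
# A minimal sketch of rule-based extraction of (entity, relation, attribute) triples.
import re

RULES = [
    # (pattern, relation type): the named groups capture the entity and its attribute
    (re.compile(r"(?P<herb>[A-Z][a-z]+ [a-z]+) (promotes|inhibits) (?P<target>[a-z ]+)"),
     "pharmacological_action"),
]

def extract(sentence):
    triples = []
    for pattern, relation_type in RULES:
        for m in pattern.finditer(sentence):
            triples.append({"entity": m.group("herb"),
                            "relation": m.group(2),
                            "attribute": m.group("target").strip(),
                            "type": relation_type})
    return triples

print(extract("Salvia miltiorrhiza promotes blood circulation and removal of stasis"))
```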

Keywords: knowledge model, literature knowledge extraction, precision knowledge services, subject knowledge base

Procedia PDF Downloads 134
124 Social-Cognitive Aspects of Interpretation: Didactic Approaches in Language Processing and English as a Second Language Difficulties in Dyslexia

Authors: Schnell Zsuzsanna

Abstract:

Background: Atypical reading ability, that is, difficulty in the interpretation of written texts and in language processing in the visual domain, also known as dyslexia, is an ever-growing phenomenon in today’s societies and educational communities. The much-researched problem affects cognitive abilities and, despite normal intelligence, typically manifests as difficulties in the differentiation of sounds and orthography and in the holistic processing of written words. The factors of susceptibility are varied: social, cognitive-psychological, and linguistic factors interact with each other. Methods: The research explains the psycholinguistics of dyslexia on the basis of several empirical experiments and demonstrates how the domain-general abilities of inhibition, retrieval from the mental lexicon, priming, phonological processing, and visual modality transfer affect successful language processing and interpretation. Interpretation of visual stimuli is hindered, and the problem seems to be embedded in a sociocultural, psycholinguistic, and cognitive background. This makes the picture even more complex, suggesting that understanding and resolving the issues of dyslexia has to be interdisciplinary, aided by several disciplines in the humanities and social sciences, and should be researched empirically, so that the practical, educational corollaries can be analyzed on an applied basis. Aim and applicability: The lecture sheds light on the applied, cognitive aspects of interpretation, the social-cognitive traits of language processing, and the mental underpinnings of cognitive interpretation strategies in different languages (namely, Hungarian and English), offering solutions through a few applied techniques for success in foreign language learning that can also serve as useful advice for the developers of testing methodologies and measures across ESL teaching and testing platforms.

Keywords: dyslexia, social cognition, transparency, modalities

Procedia PDF Downloads 56
123 Band Characterization and Development of Hyperspectral Indices for Retrieving Chlorophyll Content

Authors: Ramandeep Kaur M. Malhi, Prashant K. Srivastava, G. Sandhya Kiran

Abstract:

Quantitative estimates of foliar biochemicals, namely chlorophyll content (CC), serve as key information for assessing plant productivity, stress, and nutrient availability. CC also plays a critical role in predicting the dynamic response of vegetation to changing climate conditions. The advent of hyperspectral data, with its greatly increased number of available wavelengths, has improved the prospects of acquiring better information on CC. Retrieval of CC is widely carried out through well-known spectral indices derived from hyperspectral data. In the present study, an attempt is made to develop hyperspectral indices by identifying the optimum bands for CC estimation in Butea monosperma (Lam.) Taub. growing in the forests of Shoolpaneshwar Wildlife Sanctuary, Narmada district, Gujarat State, India. 196 narrow bands of EO-1 Hyperion images were screened, and the optimum wavelengths in the blue, green, red, and near-infrared (NIR) regions were identified on the basis of the coefficient of determination (R²) between band reflectance and laboratory-estimated CC. The identified optimum wavelengths were then used to develop 12 hyperspectral indices, and these index values were correlated with the laboratory-measured CC. Band 15 in the blue range, Band 22 in the green range, Band 40 in the red region, and Band 79 in the NIR region were found to be the optimum bands for estimating CC. Combinations based on these optimum bands proved to be the most effective indices for quantifying Butea CC, with NDVI and TVI identified as the best (R² > 0.7, p < 0.01). The study demonstrates the importance of band characterization in developing the best hyperspectral indices for chlorophyll estimation, which can aid in monitoring the vitality of forests.
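As a rough illustration of the band-screening and index-construction workflow described above, the following sketch ranks bands by R² against measured CC and computes NDVI and one common TVI formulation; the band positions, index formulas, and data are illustrative assumptions rather than the study's exact procedure.

```python
# Sketch of optimum-band screening and index construction for chlorophyll content (CC).
# Assumes `reflectance` is an (n_samples, n_bands) array of band reflectances and `cc`
# the matching laboratory CC values; all values below are synthetic.
import numpy as np

def r_squared(x: np.ndarray, y: np.ndarray) -> float:
    """Coefficient of determination of a simple linear relation between x and y."""
    r = np.corrcoef(x, y)[0, 1]
    return float(r ** 2)

def screen_bands(reflectance: np.ndarray, cc: np.ndarray) -> np.ndarray:
    """Rank bands by R^2 between band reflectance and measured CC (best first)."""
    scores = np.array([r_squared(reflectance[:, b], cc) for b in range(reflectance.shape[1])])
    return np.argsort(scores)[::-1]

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def tvi(nir, red):
    """Transformed Vegetation Index: sqrt(NDVI + 0.5), one common formulation."""
    return np.sqrt(ndvi(nir, red) + 0.5)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reflectance = rng.uniform(0.05, 0.6, size=(30, 196))  # synthetic Hyperion-like spectra
    cc = rng.uniform(20, 60, size=30)                      # synthetic CC values
    best = screen_bands(reflectance, cc)[:4]
    print("candidate optimum bands:", best)
    print("NDVI of first sample:", ndvi(reflectance[0, 79], reflectance[0, 40]))
    print("TVI of first sample:", tvi(reflectance[0, 79], reflectance[0, 40]))
```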

Keywords: band, characterization, chlorophyll, hyperspectral, indices

Procedia PDF Downloads 123
122 Crop Leaf Area Index (LAI) Inversion and Scale Effect Analysis from Unmanned Aerial Vehicle (UAV)-Based Hyperspectral Data

Authors: Xiaohua Zhu, Lingling Ma, Yongguang Zhao

Abstract:

Leaf Area Index (LAI) is a key structural characteristic of crops and plays a significant role in precision agricultural management and farmland ecosystem modeling. However, LAI retrieved from data of different resolutions contains a scaling bias caused by spatial heterogeneity and model non-linearity; that is, a scale effect arises in multi-scale LAI estimation. In this article, typical farmland in the semi-arid regions of Chinese Inner Mongolia is taken as the study area. Based on the coupled PROSPECT and SAIL models, a multi-dimensional look-up table (LUT) is generated for LAI estimation of multiple crops from unmanned aerial vehicle (UAV) hyperspectral data. Based on the Taylor expansion method and a computational geometry model, a scale transfer model that accounts for both inter-class and intra-class differences is constructed for analyzing the scale effect of LAI inversion over an inhomogeneous surface. The results indicate that (1) the LUT method, based on classification and parameter sensitivity analysis, is useful for LAI retrieval of corn, potato, sunflower, and melon on the typical farmland, with a correlation coefficient R² of 0.82 and a root mean square error (RMSE) of 0.43 m²/m²; (2) the scale effect of LAI becomes more obvious as image resolution decreases, with a maximum scale bias of more than 45%; and (3) the inter-class scale effect is larger than the intra-class one and can be corrected efficiently by the scale transfer model based on Taylor expansion and computational geometry, after which the maximum scale bias is reduced to 1.2%.
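A minimal sketch of the LUT-based inversion step is given below; the LUT here is filled with synthetic entries rather than PROSPECT+SAIL simulations, and the matching criterion (spectral RMSE, averaged over the k best entries) is an assumption, not necessarily the cost function used in the study.

```python
# Sketch of LUT-based LAI inversion: find the LUT entries whose simulated spectra are
# closest (in RMSE) to an observed UAV spectrum and average their LAI values. In the
# study the LUT would be generated with coupled PROSPECT+SAIL runs per crop class.
import numpy as np

def build_synthetic_lut(n_entries: int, n_bands: int, seed: int = 0):
    """Stand-in for a PROSPECT+SAIL LUT: (LAI values, simulated spectra)."""
    rng = np.random.default_rng(seed)
    lai = rng.uniform(0.1, 6.0, n_entries)
    spectra = rng.uniform(0.0, 0.6, (n_entries, n_bands))
    return lai, spectra

def invert_lai(observed: np.ndarray, lut_lai: np.ndarray, lut_spectra: np.ndarray, k: int = 10) -> float:
    """Return the mean LAI of the k LUT entries with the lowest spectral RMSE."""
    rmse = np.sqrt(np.mean((lut_spectra - observed) ** 2, axis=1))
    best = np.argsort(rmse)[:k]
    return float(lut_lai[best].mean())

if __name__ == "__main__":
    lut_lai, lut_spectra = build_synthetic_lut(n_entries=5000, n_bands=120)
    # A "observed" spectrum: one LUT entry plus noise, standing in for a UAV pixel.
    observed = lut_spectra[42] + np.random.default_rng(1).normal(0, 0.01, 120)
    print("inverted LAI:", invert_lai(observed, lut_lai, lut_spectra))
```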

Keywords: leaf area index (LAI), scale effect, UAV-based hyperspectral data, look-up-table (LUT), remote sensing

Procedia PDF Downloads 422
121 A Use Case-Oriented Performance Measurement Framework for AI and Big Data Solutions in the Banking Sector

Authors: Yassine Bouzouita, Oumaima Belghith, Cyrine Zitoun, Charles Bonneau

Abstract:

A performance measurement framework (PMF) is an essential tool for any organization to assess the performance of its processes. It guides businesses in staying on track with their objectives and benchmarking themselves against the market. With the growing digital transformation of business processes, led by innovations in artificial intelligence (AI) and Big Data applications, developing a mature system capable of capturing the impact of digital solutions across different industries has become a necessity. Based on the research conducted, no such system has been developed in either academia or industry. In this context, this paper covers a variety of performance measurement methodologies, overviews the major AI and Big Data applications in the banking sector, and compiles an exhaustive list of relevant metrics. Consequently, the paper is of interest to both researchers and practitioners. From an academic perspective, it offers a comparative analysis of the reviewed performance measurement frameworks. From an industry perspective, it offers an exhaustive survey, drawn from market leaders, of the major applications of AI and Big Data technologies across the different departments of an organization. Moreover, it proposes a standardized classification model with a well-defined structure for intelligent digital solutions. This classification is mapped to a centralized library containing an indexed collection of potential metrics for each application, arranged to facilitate rapid search and retrieval of relevant metrics. The proposed framework is meant to guide professionals in identifying the most appropriate AI and Big Data applications to adopt and to help them meet their business objectives by understanding the potential impact of such solutions on the entire organization.
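The centralized, indexed metrics library could be organized along the following lines; the departments, applications, and metric names are invented examples, not the classification or KPIs proposed in the paper.

```python
# Illustrative sketch of a centralized metrics library: AI/Big Data use cases are
# classified by (department, application), and each class maps to candidate metrics.
from collections import defaultdict

metrics_library: dict[tuple[str, str], list[str]] = defaultdict(list)

def register(department: str, application: str, metrics: list[str]) -> None:
    """Add candidate metrics under a classified use case."""
    metrics_library[(department, application)].extend(metrics)

def lookup(department: str, application: str) -> list[str]:
    """Rapid retrieval of candidate metrics for a given use case."""
    return metrics_library.get((department, application), [])

# Example entries (hypothetical):
register("risk", "credit scoring model", ["AUC", "default rate delta", "time to decision"])
register("operations", "document processing automation", ["straight-through rate", "cost per document"])

print(lookup("risk", "credit scoring model"))
```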

Keywords: AI and Big Data applications, impact assessment, metrics, performance measurement

Procedia PDF Downloads 173
120 The Noun-Phrase Elements on the Usage of the Zero Article

Authors: Wen Zhen

Abstract:

Compared to content words, function words, especially articles, have been relatively overlooked by English learners. The article system, to a certain extent, becomes an obstacle to knowing English better, and this difficulty is driven by several elements. Three principal factors related to the nature of articles can be summarized when accounting for the difficulty of the English article system. The second-language acquisition process makes the article system even more complex, since learners from article-less ([-ART]) languages have to create a new category, so that even most non-native speakers at a proficient level make errors. Studies of the acquisition sequence of English articles show that the zero article is acquired first but with low accuracy. The zero article is often overused in the early stages of L2 acquisition. Although learners at the intermediate level shift toward underusing the zero article once they realize that it does not cover every case, overproduction of the zero article still occurs even among advanced L2 learners. The aim of this study is to investigate the noun-phrase factors that give rise to incorrect usage or overuse of the zero article, thereby providing suggestions for L2 English acquisition and enabling teachers to carry out effective instruction that activates conscious learning in students. The research question is answered through a corpus-based, data-driven approach that analyzes noun-phrase elements in terms of semantic context and countability. Based on an analysis of the International Thurber Thesis corpus, the results show that: (1) although the [-definite,-specific] context favored the zero article, both the [-definite,+specific] and the [+definite,-specific] contexts showed less influence; when we consider the frequency order of the zero article, prototypicality plays a vital role. (2) The EFL learners in this study have trouble classifying abstract nouns as countable; overuse of the zero article arises when learners cannot make clear judgements on countability as the context shifts from (+definite) to (-definite), and once a noun is perceived as uncountable, the choice falls back on the zero article. These findings suggest that learners should be engaged in recognizing the countability of new vocabulary by explaining nouns in lexical phrases and should explore more complex aspects such as discourse-dependent analysis.
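A toy sketch of the kind of corpus tally underlying such an analysis is shown below; the annotation scheme and example noun phrases are invented for illustration and do not reflect the actual corpus or coding used in the study.

```python
# Toy tally of article use by semantic context and perceived countability.
# Each annotated noun phrase is (article_used, definiteness/specificity, countability).
from collections import Counter

annotated_nps = [
    ("zero", "[-definite,-specific]", "uncountable"),
    ("zero", "[-definite,+specific]", "countable"),   # candidate overuse of the zero article
    ("a",    "[-definite,-specific]", "countable"),
    ("zero", "[+definite,-specific]", "uncountable"),
    ("the",  "[+definite,+specific]", "countable"),
]

# Frequency of each article within each semantic context.
by_context = Counter((ctx, art) for art, ctx, _ in annotated_nps)

# Zero-article tokens on countable nouns outside the favoured [-definite,-specific] context.
zero_overuse = sum(1 for art, ctx, countability in annotated_nps
                   if art == "zero" and countability == "countable"
                   and ctx != "[-definite,-specific]")

for (ctx, art), n in sorted(by_context.items()):
    print(f"{ctx:25s} {art:5s} {n}")
print("candidate zero-article overuse tokens:", zero_overuse)
```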

Keywords: noun phrase, zero article, corpus, second language acquisition

Procedia PDF Downloads 228
119 Towards Improved Public Information on Industrial Emissions in Italy: Concepts and Specific Issues Associated to the Italian Experience in IPPC Permit Licensing

Authors: C. Mazziotti Gomez de Teran, D. Fiore, B. Cola, A. Fardelli

Abstract:

The present paper summarizes an analysis of the demand for consultation of information and data on industrial emissions from large industrial installations made publicly available on the website of the Ministry of Environment, Land and Sea for integrated pollution prevention and control, the so-called “AIA Portal”. Since local Competent Authorities have also been organizing their own websites on IPPC permit-releasing procedures for public consultation purposes, a huge amount of information on national industrial plants is already available on the internet, although it is usually offered as textual documentation or images. Thus, it is not possible to access all the relevant information through interoperability systems, nor to retrieve relevant information for decision-making purposes or for raising awareness of environmental issues. Moreover, since the number of institutional and private subjects involved in managing public information on industrial emissions in Italy is substantial, access to the information is provided on websites according to different criteria; at present, it is therefore not structurally homogeneous or comparable. To overcome these difficulties, in the case of the Coordinating Committee for the implementation of the Agreement for the industrial area in Taranto and Statte, which operated before the IPPC permit-granting procedures for the relevant installation located in the area, a substantial effort was devoted to elaborating and validating the data and information on the characterization of soil, groundwater aquifer, and coastal sea held by different subjects, in order to derive a global perspective for decision-making purposes. Thus, the present paper also focuses on the main outcomes of that experience.

Keywords: public information, emissions into atmosphere, IPPC permits, territorial information systems

Procedia PDF Downloads 245