Search results for: word retrieval
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1058

158 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System

Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee

Abstract:

This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text. Analysing the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage-ranking process, using a passage-ranking model trained on the 500K queries of the MS-Marco dataset, to extract the most relevant text passage and thereby shorten the lengthy documents. Finally, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. For evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. However, automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date, so correct answers predicted by the system are often judged incorrect by the automated metrics. One such scenario arises from the original Google Natural Question (GNQ) dataset, which was collected and made available in 2016. Any such dataset proves inefficient with respect to questions that have time-varying answers. For illustration, consider the query "Where will the next Olympics be?" The gold answer given in the GNQ dataset is "Tokyo". Since the dataset was collected in 2016, and the next Olympics after 2016 were held in Tokyo in 2020, this was correct at the time. But if the same question is asked in 2022, the answer is "Paris, 2024". Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually passed to human evaluators for further validation, which is expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric that uses the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test set of 100 QA pairs. This test data was automatically extracted, using an analysis-based approach, from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the potential to develop into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
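
As a rough illustration of the timestamp-aware metric described above, a minimal Python sketch follows. The function name, the year-interval representation of gold answers, and the matching rule are assumptions for this sketch, not the authors' implementation:

```python
from datetime import datetime

def time_aware_match(predictions, dated_gold_answers, year=None):
    """A prediction counts as correct if it matches the gold answer that is
    valid at evaluation time, not the (possibly stale) answer recorded when
    the dataset was collected.

    dated_gold_answers: list of (valid_from, valid_until, answer) tuples, e.g.
    [(2016, 2020, "Tokyo"), (2021, 2024, "Paris")] for the Olympics query.
    """
    year = year or datetime.utcnow().year
    valid_now = [a.lower() for start, end, a in dated_gold_answers
                 if start <= year <= end]
    return any(p.lower().strip() in valid_now for p in predictions)

# Top-3 answers from the QA system, evaluated as of 2022:
print(time_aware_match(["Paris", "Tokyo", "Beijing"],
                       [(2016, 2020, "Tokyo"), (2021, 2024, "Paris")],
                       year=2022))  # True
```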

Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation

Procedia PDF Downloads 76
157 A Corpus Output Error Analysis of Chinese L2 Learners From America, Myanmar, and Singapore

Authors: Qiao-Yu Warren Cai

Abstract:

Due to the rise of big data, building corpora and using them to analyze Chinese L2 learners' language output has become a trend. Various empirical research has been conducted using Chinese corpora built by different academic institutes. However, most of this research analyzed the data in the Chinese corpora using corpus-based qualitative content analysis with descriptive statistics. Descriptive statistics can summarize the numerical data for the subjects or samples a study has actually measured, but the collected data cannot be generalized to the population. Comte, a French positivist, argued as early as the 19th century that human knowledge, whether in the humanities and social sciences or the natural sciences, should be verified in a scientific way to construct a universal theory that explains the truth and human behavior. Inferential statistics, which can judge the probability that a difference observed between groups is dependable rather than caused by chance (Free Geography Notes, 2015) and can infer from the subjects or samples what the population might think or how it might behave, is just the right method to support Comte's argument in the field of TCSOL. Inferential statistics is also a core of quantitative research, but little research has been conducted combining corpora with inferential statistics. Little research analyzes the differences in Chinese L2 learners' corpus output errors using one-way ANOVA, so the findings of previous research are limited to inferring the population's Chinese errors from the given samples' Chinese corpora. To fill this knowledge gap in the professional development of Taiwanese TCSOL, the present study utilizes one-way ANOVA to analyze corpus output errors of Chinese L2 learners from America, Myanmar, and Singapore. The results show that no significant difference exists in 'shì (是) sentence' and word-order errors, but, compared with American and Singaporean learners, it is significantly easier for Myanmar learners to produce 'sentence blends.' Based on the above results, the present study provides an instructional approach and contributes to further exploration of how Chinese L2 learners can adopt learning strategies to reduce errors.
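
For readers unfamiliar with the statistical machinery, a one-way ANOVA over per-learner error counts takes only a few lines; the numbers below are illustrative stand-ins, not the study's data:

```python
from scipy import stats

# Hypothetical per-learner counts of one error type (e.g., 'sentence blends')
# for three L1 groups; values are invented for illustration only.
america   = [2, 1, 3, 2, 4, 1]
myanmar   = [6, 7, 5, 8, 6, 7]
singapore = [2, 3, 1, 2, 3, 2]

f_stat, p_value = stats.f_oneway(america, myanmar, singapore)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < .05 -> group means differ
```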

Keywords: Chinese corpus, error analysis, one-way analysis of variance, Chinese L2 learners, Americans, Myanmar, Singaporeans

Procedia PDF Downloads 81
156 Engagement as a Predictor of Student Flourishing in the Online Classroom

Authors: Theresa Veach, Erin Crisp

Abstract:

It has been shown that traditional students flourish as a function of several factors, including level of academic challenge, student/faculty interactions, active/collaborative learning, enriching educational experiences, and a supportive campus environment. With the increase in demand for remote or online courses, the factors that produce academic flourishing in the virtual classroom have become more crucial to understand than ever before. This study seeks to give insight into the factors that impact student learning, overall student wellbeing, and flourishing among college students enrolled in an online program. In total, 4,160 unique students completed the End of Course Survey (EOC) before final grades were released. Quantitative results from the survey are used by program directors as a measure of student satisfaction with both the curriculum and the faculty. In addition, students also submitted narrative comments in an open comment field; no prompts were given for this field. The purpose of this analysis was to report on the qualitative data available, with the goal of gaining insight into what matters to students. Survey results from July 1, 2016 to December 1, 2016 were compiled into spreadsheet data sets. The analysis approach involved both keyword and phrase searches and reading the results, to identify patterns in responses and to tally the frequency of those patterns. In total, just over 25,000 comments were included in the analysis. Preliminary results indicate that it is the professor-student relationship, the frequency of feedback, and the overall engagement of both instructors and students that are indicators of flourishing in college programs offered in an online format. This qualitative study supports the notion that college students flourish with regard to 1) education, 2) overall student well-being, and 3) program satisfaction when the overall engagement of both the instructor and the student is high. Ways to increase engagement in the online college environment were also explored. These include 1) increasing student participation by providing more project-based assignments, 2) interacting with students in meaningful ways that are both high in frequency and in personal content, and 3) allowing students to apply newly acquired knowledge in ways that are meaningful to current life circumstances and future goals.
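
A minimal sketch of the keyword-and-phrase tallying step described above; the theme patterns are invented for illustration, since the study's actual coding scheme is not given in the abstract:

```python
import re
from collections import Counter

# Illustrative regex patterns for engagement-related themes (assumptions,
# not the study's codes); the real analysis covered ~25,000 comments.
themes = {
    "instructor_feedback": r"\bfeedback\b",
    "professor_relationship": r"\b(professor|instructor)\b.*\b(helpful|caring|responsive)\b",
    "engagement": r"\bengag(ed|ing|ement)\b",
}

def tally(comments):
    counts = Counter()
    for comment in comments:
        for theme, pattern in themes.items():
            if re.search(pattern, comment, re.IGNORECASE):
                counts[theme] += 1
    return counts

print(tally(["My professor was very responsive and caring.",
             "Weekly feedback kept me engaged."]))
```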

Keywords: college, engagement, flourishing, online

Procedia PDF Downloads 238
155 An Improved Atmospheric Correction Method with Diurnal Temperature Cycle Model for MSG-SEVIRI TIR Data under Clear Sky Condition

Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yonggang Qian, Ning Wang

Abstract:

Knowledge of land surface temperature (LST) is of crucial importance in energy-balance studies and environmental modeling. Satellite thermal infrared (TIR) imagery is the primary source for retrieving LST at regional and global scales. Because the radiance received by TIR sensors combines contributions from the atmosphere and the land surface, atmospheric correction has to be performed to remove the atmospheric transmittance and upwelling radiance. The Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG) provides measurements every 15 minutes in 12 spectral channels covering the visible to infrared spectrum, at fixed view angles with a 3 km pixel size at nadir, offering new and unique capabilities for LST and LSE measurement. However, owing to its high temporal resolution, the atmospheric correction cannot be performed with radiosonde profiles or reanalysis data, since these profiles are not available at all SEVIRI TIR image acquisition times. To solve this problem, a two-part, six-parameter semi-empirical diurnal temperature cycle (DTC) model has been applied to the temporal interpolation of ECMWF reanalysis data. Because the DTC model is underdetermined with ECMWF data at only four synoptic times per day (UTC 00:00, 06:00, 12:00, 18:00) for each location, several approaches are adopted in this study. It is well known that the atmospheric transmittance and upwelling radiance are related to water vapour content (WVC). With the aid of simulated data, this relationship can be determined under each viewing zenith angle for each SEVIRI TIR channel. Thus, the atmospheric transmittance and upwelling radiance are preliminarily removed with the aid of the instantaneous WVC, retrieved from the brightness temperatures in SEVIRI channels 5, 9 and 10, and a set of brightness temperatures for the surface-leaving radiance (Tg) is acquired. Subsequently, a set of the six DTC-model parameters is fitted to these Tg by a Levenberg-Marquardt least-squares algorithm (denoted DTC model 1). Although the retrieval error of WVC and the approximate relationships between WVC and the atmospheric parameters induce some uncertainties, these do not significantly affect the determination of the three parameters td, ts and β in the DTC model (β is the angular frequency, td is the time at which Tg reaches its maximum, and ts is the starting time of attenuation). Furthermore, owing to the large fluctuation in temperature and the inaccuracy of the DTC model around sunrise, SEVIRI measurements from two hours before sunrise to two hours after sunrise are excluded. With td, ts, and β known, a new DTC model (denoted DTC model 2) is fitted again to the Tg at UTC times 05:57, 11:57, 17:57 and 23:57, atmospherically corrected with ECMWF data. A new set of the six DTC-model parameters is thus generated, and subsequently the Tg at any given time is acquired. Finally, this method is applied successfully to SEVIRI data in channel 9. The results show that the proposed method performs reasonably without additional assumptions, and the Tg derived with the improved method is much more consistent with that from radiosonde measurements.
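
To make the fitting step concrete, the sketch below fits a generic two-part (daytime cosine, night-time exponential decay) DTC curve to simulated Tg samples with SciPy's Levenberg-Marquardt solver. The exact parameterization of the six-parameter model used in the paper may differ; this is only a sketch of the fitting machinery:

```python
import numpy as np
from scipy.optimize import least_squares

# Generic two-part DTC form with six parameters (T0, Ta, beta, td, ts, k);
# the night branch decays from the value at ts toward the baseline T0.
def dtc(t, T0, Ta, beta, td, ts, k):
    day = T0 + Ta * np.cos((np.pi / beta) * (t - td))
    t_s = T0 + Ta * np.cos((np.pi / beta) * (ts - td))      # value at ts
    night = T0 + (t_s - T0) * np.exp(-(t - ts) / k)
    return np.where(t < ts, day, night)

def residuals(params, t, tg):
    return dtc(t, *params) - tg

t_obs = np.arange(0.0, 24.0, 0.25)          # 15-min SEVIRI slots (hours UTC)
tg_obs = dtc(t_obs, 290, 15, 12, 13, 17.5, 4) \
         + np.random.normal(0, 0.3, t_obs.size)   # synthetic noisy Tg

fit = least_squares(residuals, x0=[285, 10, 10, 12, 17, 3],
                    args=(t_obs, tg_obs), method="lm")  # Levenberg-Marquardt
print(fit.x)   # recovered [T0, Ta, beta, td, ts, k]
```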

Keywords: atmosphere correction, diurnal temperature cycle model, land surface temperature, SEVIRI

Procedia PDF Downloads 248
154 Integrating Natural Language Processing (NLP) and Machine Learning in Lung Cancer Diagnosis

Authors: Mehrnaz Mostafavi

Abstract:

The assessment and categorization of incidental lung nodules present a considerable challenge in healthcare, often necessitating resource-intensive multiple computed tomography (CT) scans for growth confirmation. This research addresses this issue by introducing a distinct computational approach leveraging radiomics and deep-learning methods. However, understanding local services is essential before implementing these advancements. With diverse tracking methods in place, there is a need for efficient and accurate identification approaches, especially in the context of managing lung nodules alongside pre-existing cancer scenarios. This study explores the integration of text-based algorithms in medical data curation, indicating their efficacy in conjunction with machine learning and deep-learning models for identifying lung nodules. Combining medical images with text data has demonstrated superior data retrieval compared to using each modality independently. While deep learning and text analysis show potential in detecting previously missed nodules, challenges persist, such as increased false positives. The presented research introduces a Structured-Query-Language (SQL) algorithm designed for identifying pulmonary nodules in a tertiary cancer center, externally validated at another hospital. Leveraging natural language processing (NLP) and machine learning, the algorithm categorizes lung nodule reports based on sentence features, aiming to facilitate research and assess clinical pathways. The hypothesis posits that the algorithm can accurately identify lung nodule CT scans and predict concerning nodule features using machine-learning classifiers. Through a retrospective observational study spanning a decade, CT scan reports were collected, and an algorithm was developed to extract and classify data. Results underscore the complexity of lung nodule cohorts in cancer centers, emphasizing the importance of careful evaluation before assuming a metastatic origin. The SQL and NLP algorithms demonstrated high accuracy in identifying lung nodule sentences, indicating potential for local service evaluation and research dataset creation. Machine-learning models exhibited strong accuracy in predicting concerning changes in lung nodule scan reports. While limitations include variability in disease group attribution, the potential for correlation rather than causality in clinical findings, and the need for further external validation, the algorithm's accuracy and potential to support clinical decision-making and healthcare automation represent a significant stride in lung nodule management and research.
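
As an illustration of the sentence-level classification idea, the toy pipeline below classifies report sentences as concerning or not. The sentences, labels, and feature choices are invented; the study's actual SQL filters and classifiers are not reproduced here:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy sentences of the kind an SQL pre-filter (e.g. WHERE report LIKE
# '%nodule%') would pull from a radiology report table; 1 = concerning.
sentences = [
    "Stable 4 mm pulmonary nodule, no change from prior CT.",
    "New 9 mm spiculated nodule in the right upper lobe.",
    "No pulmonary nodules identified.",
    "Nodule has increased in size, now 12 mm; follow-up advised.",
]
labels = [0, 1, 0, 1]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)
print(clf.predict(["Enlarging left lower lobe nodule measuring 10 mm."]))  # [1]
```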

Keywords: lung cancer diagnosis, structured-query-language (SQL), natural language processing (NLP), machine learning, CT scans

Procedia PDF Downloads 45
153 Predicting Success and Failure in Drug Development Using Text Analysis

Authors: Zhi Hao Chow, Cian Mulligan, Jack Walsh, Antonio Garzon Vico, Dimitar Krastev

Abstract:

Drug development is resource-intensive, time-consuming, and increasingly expensive with each developmental stage. The success rates of drug development are also relatively low, and the resources committed are wasted with each failed candidate. As such, a reliable method of predicting the success of drug development is in demand. The hypothesis was that some failed drug candidates are pushed through developmental pipelines on the basis of false confidence and may possess common linguistic features identifiable through sentiment analysis. Here, the concept of using text analysis to discover such features in research publications and investor reports as predictors of success was explored. RStudio was used to perform text mining and lexicon-based sentiment analysis to identify affective phrases and determine their frequency in each document, and SPSS was then used to determine the relationship between the defined variables and the accuracy of predicting outcomes. A total of 161 publications were collected and categorised into 4 groups: (i) cancer treatment, (ii) neurodegenerative disease treatment, (iii) vaccines, and (iv) others (containing all other drugs that do not fit into the first 3 categories). Text analysis was then performed on each document, within each drug category, using 2 separate sentiment lexicons (BING and AFINN) to determine the frequency of positive or negative phrases in each document. Relative positivity and negativity values were then calculated by dividing the frequency of affective phrases by the word count of each document. Regression analysis was then performed with SPSS statistical software on each dataset (values obtained using the BING or AFINN lexicon during text analysis), using a random selection of 61 documents to construct a model. The remaining documents were then used to determine the predictive power of the models. The model constructed from BING predicted the outcome of drug performance in clinical trials with an overall accuracy of 65.3%. The AFINN model had a lower accuracy at predicting outcomes than the BING model, at 62.5%, and was not effective at predicting the failure of drugs in clinical trials. Overall, the study did not show significant efficacy of the model at predicting the outcomes of drugs in development, and many improvements may need to be made to later iterations of the model to sufficiently increase the accuracy.
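
Although the study worked in R, the lexicon-scoring step it describes is easy to sketch in a few lines; the mini-lexicons below are stand-ins for the full BING and AFINN word lists, and the scores are the relative positivity/negativity described above:

```python
# Tiny stand-in lexicons (assumptions, not the real BING/AFINN lists).
positive = {"effective", "significant", "improved", "promising", "safe"}
negative = {"failed", "toxicity", "adverse", "discontinued", "risk"}

def relative_scores(document):
    words = [w.strip(".,;:") for w in document.lower().split()]
    n = len(words)
    pos = sum(w in positive for w in words)
    neg = sum(w in negative for w in words)
    return pos / n, neg / n    # relative positivity, relative negativity

print(relative_scores(
    "The candidate showed promising efficacy but adverse events led to risk review."))
```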

Keywords: data analysis, drug development, sentiment analysis, text-mining

Procedia PDF Downloads 126
152 A Consideration of Dialectal and Stylistic Shifts in Literary Translation

Authors: Pushpinder Syal

Abstract:

Literary writing carries the stamp of the current language of its time. In translating such texts, it becomes a challenge to capture such reflections, which may be evident at several levels: the dialectal use of language by characters in stories, alterations in syntax as tools of writers' individual stylistic choices, the insertion of quasi-proverbial and gnomic utterances, and even the pragmatics of narrative discourse. Discourse strategies may differ between earlier and later texts, reflecting changing relationships between narrators and readers in changed cultural and social contexts. This paper is a consideration of these features through an approach that combines historicity with description, contextualizing language change within a discourse framework. The process of translating a collection of writings of Punjabi literature spanning 100 years was undertaken for this study, and the historicity of language was observed to play a role. While intended for contemporary readers, the translation of literature spanning a century poses the dual challenge of possessing both accessibility and immediacy and adhering to the 'old world' styles of communicating and narrating. The linguistic changes may be observed most obviously in differences of diction and word formation, with evidence of more hybridized and borrowed forms in modern and contemporary writings as compared to the older writings. The latter not only contain vestiges of proverbs and folk sayings but are also closer to oral speech styles. These will be presented and analysed in the form of a chronological listing, and by these means the social process of translation from orality to written text can be traced in the above-mentioned works. More subtle, underlying shifts can be seen through the analysis of speech acts and implicatures in the same literature, in which the social relationships underlying language use are evident as discourse systems of belief and understanding. They present distinct shifts in worldview as seen at different points in time. However, some continuities of language and style are also clearly visible, and these aid the translator in putting together a set of thematic links which identify the literature of a region and community, and constitute essential outcomes in the effort to preserve its distinctive nature.

Keywords: cultural change, dialect, historicity, stylistic variation

Procedia PDF Downloads 107
151 [Keynote Speech]: Risk Management during the Rendition Process: Use of Screen-Voice Recordings in Translator Training

Authors: Maggie Hui

Abstract:

Risk management is not a new concept; however, it is an uncharted area as applied to the translation process and translator training. Serving as one of the self-discovery activities in their practicum course, a two-cycle experiment was carried out with a class of 13 MA translation students in an attempt to explore their risk management while translating in a simulated setting involving translator-client relations. To test the effects of the main variable of the translators' interaction with the simulated clients, the researcher employed control-group translators and two experiment groups (Group A acting as the translator in Cycle 1 and the client in Cycle 2, and Group B taking the client position in Cycle 1 and the translator position in Cycle 2). Experiment Cycle 1 aims to explore whether there is any behavioral difference in risk management between translators who interact with the simulated clients, i.e. experiment group A, and their counterparts without such interaction, i.e. the control group. The design of Cycle 2 concerns the order in which the roles of translator and client are played in the experiment, and provides information for comparing the behavior of the translators in the two experiment groups. Since this is process-oriented research, it is necessary to hypothesize what was happening in the translators' minds. The researcher made use of user-friendly screen-voice recording freeware to record the subjects' screen activities, including every word the translator typed and every change they made to the rendition, the websites they browsed and the reference tools they used, in addition to the verbalization of their thoughts throughout the process. The research observes the translation procedures the subjects considered and finally adopted, and looks into the justifications for their procedures, in order to interpret their risk management. The qualitative and quantitative results of this study have some implications for translator training: (a) the experience of being a client seems to reinforce the translator's risk aversion; (b) the use of role-playing simulation can empower students' learning by enhancing their attitudinal or psycho-physiological competence, interpersonal competence and strategic competence; and (c) the screen-voice recordings serve as a helpful tool for learners to reflect on their rendition processes, i.e. what they performed satisfactorily and unsatisfactorily while translating and what they could do to improve in future translation tasks.

Keywords: risk management, screen-voice recordings, simulated translator-client relations, translation pedagogy, translation process-oriented research

Procedia PDF Downloads 245
150 Problems in Computational Phylogenetics: The Germano-Italo-Celtic Clade

Authors: Laura Mclean

Abstract:

A recurring point of interest in computational phylogenetic analysis of Indo-European family trees is the inference of a Germano-Italo-Celtic clade in some versions of the trees produced. The presence of this clade in the models is intriguing, as there is little evidence for innovations shared among Germanic, Italic, and Celtic, the evidence generally used in the traditional method to construct a subgroup. One source of this unexpected outcome could be the input to the models. The datasets in the various models used so far, for the most part, take as their basis the Swadesh list, a list compiled by Morris Swadesh and then revised several times, containing up to 207 words that he believed were resistant to change among languages. The judgments Swadesh made for this list, however, were subjective and based on his intuition rather than rigorous analysis. Some scholars used the Swadesh 200 list as the basis for their Indo-European dataset and made cognacy judgments for each of the words on the list. Another dataset is also largely based on the Swadesh 207 list, although its authors include additional lexical and non-lexical data and implement 'split coding' to deal with cases of polymorphic characters. A different team of scholars uses a different dataset, IECoR, which combines several different lists, one of which is the Swadesh 200 list. In fact, the Swadesh list is used in some form in every study surveyed, and each dataset has three words that, when coded as cognates, seemingly contribute to the inference of a Germano-Italo-Celtic clade, an inference that could arise because these three branches share these words among only themselves. The three words are 'fish', 'flower', and 'man' (in the case of 'man', one dataset includes Lithuanian in the cognacy coding and removes the word 'man' from the screened data). This collection of cognates shared among Germanic, Italic, and Celtic, deemed important enough to be included on the Swadesh list, without the ability to account for possible reasons for shared cognates that are not shared innovations, gives an impression of affinity between the Germanic, Celtic, and Italic branches without adequate methodological support. However, by changing how cognacy is defined (i.e., root cognates, borrowings vs. inherited cognates, etc.), we will be able to identify whether these three cognates are significant enough to infer a clade for Germanic, Celtic, and Italic. This paper examines the question of what definition of cognacy should be used for phylogenetic datasets by examining the Germano-Italo-Celtic clade as a case study, and offers insights into the reconstruction of a Germano-Italo-Celtic clade.
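
To see how a single shared cognate set can create phylogenetic signal, consider the standard binary (presence/absence) coding of one Swadesh item; the cognate-set assignments below are purely illustrative, not taken from any of the surveyed datasets:

```python
# One Swadesh item; each language is assigned a cognate set (A, B, ...).
# Illustrative assignments only.
cognate_sets = {
    "fish": {"Germanic": "A", "Italic": "A", "Celtic": "A", "Greek": "B"},
}

def binary_coding(assignments):
    """Turn cognate-set labels into one presence/absence column per set,
    the usual input format for phylogenetic inference software."""
    sets = sorted(set(assignments.values()))
    return {lang: [int(label == s) for s in sets]
            for lang, label in assignments.items()}

print(binary_coding(cognate_sets["fish"]))
# Germanic, Italic and Celtic share column 'A' -> apparent clade signal,
# whether or not the shared form is actually a shared innovation.
```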

Keywords: historical, computational, Italo-Celtic, Germanic

Procedia PDF Downloads 23
149 Nanotechnology for Flame Retardancy of Thermoset Resins

Authors: Ewa Kicko Walczak, Grazyna Rymarz

Abstract:

In recent years, nanotechnology has been successfully applied to the flame retardancy of polymers, in particular for construction materials. The consumption of thermoset resins as construction polymer materials is over one million tonnes worldwide. The excellent mechanical properties and relatively high heat and thermal stability of this type of polymer are proven for a variety of applications, e.g. the transportation, electrical, electronic, and building industries. In addition to strength and thermal properties, these applications also require - following legal regulations or recommendations - an adequately low flammability of the materials. This publication presents an evaluation of the effectiveness of halogen-free hybrid flame retardants (FR), compound nitrogen/phosphorus modifiers acting together with nanofillers (nanocarbons, organo-modified montmorillonite, nanosilica, microspheres), in unsaturated polyester/epoxy resins and in glass-reinforced laminates (GRP) based on these resins as final products. The analysis of the fire properties provided proof of effective flame retardancy of the tested composites by determining limiting oxygen index (LOI) values and by using thermogravimetric methods (TGA) and a combustion head (CH). An analysis of the combustion process with the cone calorimeter (CC) method covered in the first place the N/P units and nanofillers, with a synergistic action of the compounds being observed. The fine-plate structure, phase morphology, and rheology of the composites were assessed by SEM/TEM analysis. Polymer-matrix glass-reinforced laminates with the modified resins achieve an LOI over 30% and a 70% reduction in HRR (according to the CC analysis), favourable TGA curves and CH values, and no adverse impact on mechanical properties. The main objective of our current project is to contribute to the general understanding of the flame-retardancy mechanism and to investigate the corresponding structure/property relationships. We confirm that nanotechnology systems are a successful concept for commercialized forms of non-flammable GRP pipes, concrete composites, and flame-retardant tunnel constructions.

Keywords: fire retardants, FR, halogen-free FR nanofillers, non-flammable pipe/concrete, thermoset resins

Procedia PDF Downloads 256
148 Modern Wars: States' Responsibility

Authors: Lakshmi Chebolu

Abstract:

'War', the word itself, is vibrant and handcuffs entire societies. Since the beginning of mankind, the world has witnessed constant struggles. As communities have grown, relations on the one hand, and disputes on the other, have increased without limit. When states cannot or will not settle their disputes or differences by means of peaceful agreement, weapons are suddenly made to speak. This does not mean states can engage in war whenever they desire. At the international level, there was vast development of the law of war in the 20th century. Whether a war is internal or international, belligerent actors should in all situations follow the principles of warfare. With the advent of technology, the shape of war has changed, and it violates fundamental principles without observing basic norms. Conversely, states' attitudes towards international relationships are also undermined to some extent, as state parties prioritize political or individual interests over the communal interest. In spite of the persistent development of communities, many people are still innocent victims of modern wars. War takes a toll on many lives, liberties, and properties and remains a major obstacle to nations' development. Recent events in Afghanistan are a live example for the world's nations. We know that the principles of international law cannot be implemented very strictly against perpetrators due to lacunae in the international legal system. However, the rules of war are universal in nature. The Geneva Conventions of 1949, which are the core element of IHL, have been ratified by all 196 states; in fact, very few international treaties have received such broad support from nations. States' approach towards modern international law places a heavy burden on state practice in the implementation of the law. Although the United Nations Security Council possesses certain powers under 'Pacific Settlement of Disputes' (Chapter VI of the United Nations Charter) to resolve disputes in a peaceful manner, this practice has been overlooked for many years due to political interests, favor, etc. Despite international consensus on the prohibition of war and the protection of fundamental freedoms and human dignity, the law has still often been misused by states. These recent tendencies trigger questions about states' willingness to implement the law. In view of the existing practices of nations, this paper aims to elevate the legal obligations of the international community to save succeeding generations from the scourge of modern war practices.

Keywords: modern wars, weapons, prohibition and suspension of war activities, states’ obligations

Procedia PDF Downloads 53
147 Simulated Translator-Client Relations in Translator Training: Translator Behavior around Risk Management

Authors: Maggie Hui

Abstract:

Risk management is not a new concept; however, it is an uncharted area as applied to the translation process and translator training. Risk managers are responsible for managing risk, i.e. adopting strategies with the intention of minimizing loss and maximizing gains in spite of uncertainty. Which risk strategy to use often depends on the frequency of an event (i.e. probability) and the severity of its outcomes (i.e. impact). This is basically the way translation/localization project managers handle risk management. Although risk management could involve both positive and negative impacts, impact seems to be always negative in professional translators' management models, e.g. how many days of project time are lost or how many clients are lost. However, for the analysis of translation performance, the impact may be positive (e.g. increased readability of the translation) or negative (e.g. loss of source-text information). In other words, the straight business model of risk management is not directly applicable to the study of risk management in the rendition process. This research aims to explore trainee translators' risk management while translating in a simulated setting that involves translator-client relations. A two-cycle experiment involving two roles, the translator and the simulated client, was carried out with a class of translation students to test the effects of the main variable of peer-group interaction. The researcher made use of user-friendly screen-voice recording freeware to record the subjects' screen activities, including every word the translator typed and every change they made to the rendition, the websites they browsed and the reference tools they used, in addition to the verbalization of their thoughts throughout the process. The research observes the translation procedures the subjects considered and finally adopted, and looks into the justifications for their procedures, in order to interpret their risk management. The qualitative and quantitative results of this study have some implications for translator training: (a) the experience of being a client seems to reinforce the translator's risk aversion; (b) there is a wide gap between the translator's internal risk management and their external presentation of risk; and (c) the use of role-playing simulation can empower students' learning by enhancing their attitudinal or psycho-physiological competence, interpersonal competence and strategic competence.

Keywords: risk management, role-playing simulation, translation pedagogy, translator-client relations

Procedia PDF Downloads 238
146 Artificial Neural Network and Satellite Derived Chlorophyll Indices for Estimation of Wheat Chlorophyll Content under Rainfed Condition

Authors: Muhammad Naveed Tahir, Wang Yingkuan, Huang Wenjiang, Raheel Osman

Abstract:

Numerous models are used in prediction and decision-making processes, but most of them are linear, and linear models reach their limitations when faced with non-linearity in the data, so accurate estimation is difficult. Artificial Neural Networks (ANNs) have found extensive acceptance for modeling the complex, non-linear real world, as they have more general and flexible functional forms than traditional statistical methods. The link between information technology and agriculture will become firmer in the near future. Monitoring crop biophysical properties non-destructively can provide a rapid and accurate understanding of a crop's response to various environmental influences. Crop chlorophyll content is an important indicator of crop health and therefore of the expected crop yield. In recent years, remote sensing has been accepted as a robust tool for site-specific management, detecting crop parameters at both local and large scales. The present research combined an ANN model with satellite-derived chlorophyll indices from LANDSAT 8 imagery for predicting real-time wheat chlorophyll content. Cloud-free LANDSAT 8 scenes were acquired (February-March 2016-17) at the same times as a ground-truthing campaign in which chlorophyll was estimated with a SPAD-502 meter. Different vegetation indices were derived from the LANDSAT 8 imagery using ERDAS Imagine (v. 2014) software for chlorophyll determination, including the Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Chlorophyll Absorption Ratio Index (CARI), Modified Chlorophyll Absorption Ratio Index (MCARI) and Transformed Chlorophyll Absorption Ratio Index (TCARI). For ANN modeling, MATLAB and SPSS (ANN) tools were used; the Multilayer Perceptron (MLP) in MATLAB provided very satisfactory results. For the MLP, 61.7% of the data were used for training, 28.3% for validation, and the remaining 10% to evaluate and validate the ANN model results. For error evaluation, the sum of squares error and the relative error were used. The ANN model summary showed a sum of squares error of 10.786 and an average overall relative error of 0.099. MCARI and NDVI were revealed to be the more sensitive indices for assessing wheat chlorophyll content, with the highest coefficients of determination, R² = 0.93 and 0.90, respectively. The results suggest that the use of high-spatial-resolution satellite imagery for the retrieval of crop chlorophyll content with an ANN model provides an accurate, reliable assessment of crop health status at a larger scale, which can help in managing crop nutrition requirements in real time.
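
A hedged sketch of the index-plus-MLP pipeline: the band values and SPAD targets below are synthetic, and scikit-learn's MLP stands in for the MATLAB implementation used in the study:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic reflectance bands standing in for Landsat 8 pixel values.
rng = np.random.default_rng(0)
red, nir, green = (rng.random(200) for _ in range(3))
ndvi = (nir - red) / (nir + red + 1e-9)
gndvi = (nir - green) / (nir + green + 1e-9)
X = np.column_stack([ndvi, gndvi])
y = 20 + 35 * ndvi + rng.normal(0, 1.5, 200)    # synthetic SPAD readings

# Roughly mirrors the paper's 61.7 / 28.3 / 10 train/validation/test split.
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, train_size=0.617,
                                              random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.26,
                                            random_state=0)

mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
print(f"R2 on held-out test set: {mlp.score(X_te, y_te):.2f}")
```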

Keywords: ANN, chlorophyll content, chlorophyll indices, satellite images, wheat

Procedia PDF Downloads 121
145 A Comparative Study on Vowel Articulation in Malayalam Speaking Children Using Cochlear Implant

Authors: Deepthy Ann Joy, N. Sreedevi

Abstract:

Hearing impairment (HI) identified at an early age, before the onset of language development, allows its negative effect on children's speech and language development to be reduced. Early rehabilitation is very important for improving speech production in children with HI. Besides conventional hearing aids, cochlear implants are used in the rehabilitation of children with HI. However, delays in the acquisition of speech and language milestones persist in children with cochlear implants (CI). Delays in speech milestones are reflected in speech sound errors, and these errors reflect the temporal and spectral characteristics of speech. Hence, acoustic analysis of the speech sounds provides a better representation of the speech production skills of children with CI. The present study aimed to investigate the acoustic characteristics of vowels in Malayalam-speaking children with cochlear implants. The participants were 20 Malayalam-speaking children in the age range of four to seven years: an experimental group of 10 children with CI and a control group of 10 typically developing children. Acoustic analysis was carried out for 5 short (/a/, /i/, /u/, /e/, /o/) and 5 long vowels (/a:/, /i:/, /u:/, /e:/, /o:/) in word-initial position. The responses were recorded and analyzed for acoustic parameters such as vowel duration, the ratio of short to long vowel duration, formant frequencies (F₁ and F₂), and the Formant Centralization Ratio (FCR), computed using the formula (F₂u+F₂a+F₁i+F₁u)/(F₂i+F₁a). Findings indicated that vowel durations were higher in the experimental group than in the control group for all vowels except /u/. The ratio of short to long vowel duration was also higher in the experimental group, except for /i/. Further, F₁ for all vowels was higher in the experimental group, with variability noticed in the F₂ values. FCR was higher in the experimental group, indicating vowel centralization. However, the results of an independent t-test revealed no significant difference between the two groups on any of the parameters, suggesting that the spectral and temporal measures in children with CI moved towards the normal range. This result emphasizes the significance of early rehabilitation in children with hearing impairment. Rehabilitation-related aspects that can be clinically incorporated for the betterment of speech therapy services for children with CI are also discussed in detail.
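
The FCR formula quoted above translates directly into code; the formant values (Hz) and durations (ms) below are merely example numbers, not measurements from the study:

```python
# Formant Centralization Ratio exactly as defined in the abstract:
# FCR = (F2u + F2a + F1i + F1u) / (F2i + F1a); higher values indicate
# more centralized (less distinct) vowels.
def fcr(f2u, f2a, f1i, f1u, f2i, f1a):
    return (f2u + f2a + f1i + f1u) / (f2i + f1a)

def duration_ratio(short_ms, long_ms):
    """Ratio of short- to long-vowel duration for the same vowel quality."""
    return short_ms / long_ms

print(f"FCR = {fcr(f2u=900, f2a=1300, f1i=320, f1u=380, f2i=2300, f1a=850):.2f}")
print(f"short/long duration ratio = {duration_ratio(95, 180):.2f}")
```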

Keywords: acoustics, cochlear implant, Malayalam, vowels

Procedia PDF Downloads 119
144 Artificial Intelligence Models for Detecting Spatiotemporal Crop Water Stress in Automating Irrigation Scheduling: A Review

Authors: Elham Koohi, Silvio Jose Gumiere, Hossein Bonakdari, Saeid Homayouni

Abstract:

Water used for agricultural crops can be managed by irrigation scheduling based on soil moisture levels and plant water stress thresholds. Automated irrigation scheduling limits crop physiological damage and yield reduction, and knowledge of crop water stress monitoring approaches can be effective in optimizing the use of agricultural water. Understanding the physiological mechanisms by which crops respond and adapt to water deficit ensures sustainable agricultural management and food supply. This aim can be achieved by analyzing and diagnosing crop characteristics and their interlinkage with the surrounding environment: assessing plant functional types (e.g., leaf area and structure, tree height, rate of evapotranspiration, rate of photosynthesis), monitoring change, and mapping irrigated areas. Calculating thresholds for soil water content parameters, crop water use efficiency, and nitrogen status makes irrigation scheduling decisions more accurate by preventing water limitation between irrigations. Combining remote sensing (RS), the Internet of Things (IoT), artificial intelligence (AI), and machine learning algorithms (MLAs) can improve measurement accuracy and automate irrigation scheduling. This paper is a review, structured by surveying about 100 recent research studies, that analyzes the varied approaches in terms of providing high-spatial- and temporal-resolution mapping, sensor-based Variable Rate Application (VRA) mapping, and the relation between spectral and thermal reflectance and different features of crop and soil. A further objective is to assess RS indices formed by choosing specific reflectance bands, to identify the correct spectral band to optimize classification techniques, and to analyze Proximal Optical Sensors (POSs) for monitoring change. The contribution of this paper is to categorize the evaluation methodologies of precision irrigation (applying the right practice, at the right place, at the right time, with the right quantity), as controlled by soil moisture levels and the sensitivity of crops to water stress, into pre-processing, processing (retrieval algorithms), and post-processing parts. The main idea is then to analyze the reasons for, and/or the magnitudes of, the errors incurred by the different approaches in the three proposed parts, as reported by recent studies. Finally, as an overall conclusion, the review attempts to decompose the different approaches into optimized indices, calibration methods for the sensors, thresholding and prediction models prone to error, and improvements in classification accuracy for mapping change.
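
A minimal sketch of the threshold-driven scheduling logic that such systems automate; the sensor inputs and threshold values are assumptions for illustration, not recommendations from the review:

```python
# Trigger irrigation when volumetric soil moisture drops below a threshold
# or when the Crop Water Stress Index (CWSI, 0 = no stress, 1 = max stress)
# indicates stress. Thresholds are illustrative and crop-specific in practice.
def irrigation_decision(soil_moisture, cwsi,
                        moisture_threshold=0.22, cwsi_threshold=0.4):
    if soil_moisture < moisture_threshold or cwsi > cwsi_threshold:
        return "irrigate"
    return "wait"

print(irrigation_decision(soil_moisture=0.18, cwsi=0.35))  # irrigate
print(irrigation_decision(soil_moisture=0.30, cwsi=0.25))  # wait
```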

Keywords: agricultural crops, crop water stress detection, irrigation scheduling, precision agriculture, remote sensing

Procedia PDF Downloads 44
143 Prosodic Realization of Focus in the Public Speeches Delivered by Spanish Learners of English and English Native Speakers

Authors: Raúl Jiménez Vilches

Abstract:

Native (L1) speakers can prosodically mark one part of an utterance and make it more relevant than the rest of the constituents. Conversely, non-native (L2) speakers encounter problems when it comes to marking information structure prosodically in English. In fact, the L2 speaker's choice of the prosodic realization of focus is often unclear and obscures the intended pragmatic meaning and the communicative value in general. This paper reports some of the findings obtained in an L2 prosodic training course for Spanish learners of English within the context of public speaking. More specifically, it analyses the effects of the course experiment on the non-native production of the tonic syllable to mark focus and compares it with public speeches delivered by native English speakers. The whole experimental training was executed over eighteen input sessions (1,440 minutes of total time), all of which took place in the classroom. In particular, the first part of the course provided explicit instruction on the recognition and production of the tonic syllable and on how the tonic syllable is used to express focus. The non-native and native oral presentations were acoustically analyzed using the Praat speech analysis software (7,356 words in total). The investigation adopted mixed and embedded methodologies. Quantitative information is needed to measure the phonetic realization of focus acoustically; qualitative data such as questionnaires, interviews, and observations were also used to interpret the quantitative data. The embedded experimental design was implemented through the analysis of the public speeches before and after the intervention. Results indicate that, even after the L2 prosodic training course, Spanish learners of English still show some major inconsistencies in marking focus effectively. Although there was occasional improvement regarding the choice of location and word classes, Spanish learners were, in general, far from achieving results similar to those obtained by the English native speakers for the two types of focus. The prosodic realization of focus seems to be one of the hardest areas of the English prosodic system for Spanish learners to master. A funded research project is in the process of moving the present classroom-based experiment to an online environment (a mobile app) and determining whether focus can be used more effectively through CAPT (Computer-Assisted Pronunciation Training) tools.
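
On the acoustic side, the kind of Praat-based measurement described above can be scripted from Python via the parselmouth library; the file name and the tonic-syllable window below are assumptions for this sketch, not details from the study:

```python
import math
import parselmouth  # Python interface to Praat (praat-parselmouth package)

# Extract the F0 peak inside a candidate tonic-syllable window.
snd = parselmouth.Sound("speech_sample.wav")   # hypothetical recording
pitch = snd.to_pitch()

t0, t1 = 1.20, 1.45          # hypothesized tonic-syllable window (seconds)
times = [t0 + i * 0.01 for i in range(int((t1 - t0) / 0.01) + 1)]
f0_values = [pitch.get_value_at_time(t) for t in times]
voiced = [f for f in f0_values if not math.isnan(f)]   # drop unvoiced frames

if voiced:
    print(f"F0 peak in tonic window: {max(voiced):.1f} Hz")
```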

Keywords: focus, prosody, public speaking, Spanish learners of English

Procedia PDF Downloads 68
142 Evidence of a Negativity Bias in the Keywords of Scientific Papers

Authors: Kseniia Zviagintseva, Brett Buttliere

Abstract:

Science is fundamentally a problem-solving enterprise, and scientists pay more attention to negative things, which cause them dissonance and a negative affective state of uncertainty or contradiction. While this is agreed upon by philosophers of science, there are few empirical demonstrations. Here we examine the keywords from the papers published by PLOS in 2014 and show, with several sentiment analyzers, that negative keywords are studied more than positive keywords. Our dataset is the 927,406 keywords of 32,870 scientific articles in all fields published in 2014 by the journal PLOS ONE (collected from Altmetric.com). By counting how often the 47,415 unique keywords are used, we can examine whether negative topics are studied more than positive ones. To find the sentiment of the keywords, we utilized two sentiment analysis tools, Hu and Liu (2004) and SentiStrength (2014). The results below are for Hu and Liu, as these are the less convincing results. The average keyword was utilized 19.56 times, with half of the keywords being utilized only once and the maximum number of uses being 18,589. The keywords identified as negative were utilized 37.39 times on average, the positive keywords 14.72 times, and the neutral keywords 19.29 times. This difference is only marginally significant, with an F value of 2.82 and a p of .05, but one must keep in mind that more than half of the keywords are utilized only once, artificially increasing the variance and driving the effect size down. To examine this more closely, we looked at the top 25 most utilized keywords that have a sentiment. Among the top 25, there are only two positive words, 'care' and 'dynamics', in positions 5 and 13 respectively, with all the rest identified as negative. 'Diseases' is the most studied keyword, with 8,790 uses, and 'cancer' and 'infectious' are the second and fourth most utilized sentiment-laden keywords. The sentiment analysis is not perfect, though: the words 'diseases' and 'disease' are split, taking the 1st and 3rd positions; combining them, they remain the most common sentiment-laden keyword, utilized 13,236 times. Beyond splitting words, the sentiment analyzer also logs 'regression' and 'rat' as negative, and these should probably be considered false positives. Despite these potential problems, the effect is apparent, as even positive keywords like 'care' could or should be considered negative, since this word is most commonly utilized as part of 'health care', 'critical care' or 'quality of care' and is generally associated with how to improve it. All in all, the results suggest that negative concepts are studied more, providing support for the notion that science is most generally a problem-solving enterprise. The results also provide evidence that negativity and contradiction are related to greater productivity and positive outcomes.
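
The core comparison reduces to grouping keyword-usage counts by sentiment class and comparing means. A toy reconstruction follows; apart from the 8,790 uses of 'diseases' quoted above, the counts and the mini-lexicon are invented for illustration:

```python
from statistics import mean

keyword_counts = {"diseases": 8790, "cancer": 4200, "infectious": 3100,
                  "care": 2900, "dynamics": 1500, "growth": 900}
negative = {"diseases", "cancer", "infectious"}
positive = {"care", "dynamics", "growth"}

neg_mean = mean(c for k, c in keyword_counts.items() if k in negative)
pos_mean = mean(c for k, c in keyword_counts.items() if k in positive)
print(f"negative keywords: {neg_mean:.1f} mean uses; "
      f"positive keywords: {pos_mean:.1f} mean uses")
```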

Keywords: bibliometrics, keywords analysis, negativity bias, positive and negative words, scientific papers, scientometrics

Procedia PDF Downloads 163
141 Avoidance and Selectivity in the Acquisition of Arabic as a Second/Foreign Language

Authors: Abeer Heider

Abstract:

This paper explores and classifies the different kinds of avoidance that students commonly exhibit in the acquisition of Arabic as a second/foreign language, and suggests specific strategies to help students lessen their avoidance tendencies in the hope of streamlining the learning process. Students most commonly use avoidance strategies in grammar and word choice. These different types of strategies have different implications and naturally require different approaches. Thus the question remains as to the most effective way to help students improve their Arabic and how teachers can efficiently utilize these techniques. It is hoped that this research will contribute to understanding the role of avoidance in the field of second language acquisition in general, and as a type of input. Some researchers also note that similarity between L1 and L2 may be problematic, since the learner may doubt that such similarity indeed exists and consequently avoid the identical constructions or elements (Jordens, 1977; Kellermann, 1977, 1978, 1986). In an effort to resolve this issue, a case study is being conducted. The present case study attempts to provide a broader analysis of what is acquired than is usually the case, analyzing the learners' accomplishments in terms of the three-part framework of the components of communicative competence suggested by Michael Canale: grammatical competence, sociolinguistic competence and discourse competence. The subjects of this study are 15 students, aged 22, who came to study Arabic at Qatar University of Cairo. The 15 students are at the advanced level; they had completed the intermediate level in Arabic when they arrived in Qatar for the first time. The study uses a discourse-analytic method to examine how the first language affects students' production and output in the second language, and how and when students use avoidance methods in their learning. The study will be conducted through Fall 2015 by analyzing audio recordings made throughout the entire semester; the recordings will comprise around 30 clips. The students are using supplementary listening and speaking materials, and the group will be tested at the end of the term to assess any measurable difference between the techniques. Questionnaires will be administered to teachers and students before and after the semester to assess any change in attitude toward avoidance and selectivity methods. Responses to these questionnaires are analyzed and discussed to assess the relative merits of the aforementioned strategies of avoidance and selectivity. Implications and recommendations for teacher training are proposed.

Keywords: second language acquisition, language learning, selectivity, avoidance

Procedia PDF Downloads 261
140 Revolution and Political Opposition in Contemporary Arabic Poetry: A Thematic Study of Two Poems by Muzaffar Al-Nawwab

Authors: Nasser Y. Athamneh

Abstract:

Muzaffar al-Nawwab (1934--) is a modern Iraqi poet, critic, and painter, well-known to Arab youth of the second half of the 20th century for his revolutionary spirit and political activism. For the greater part of his relatively long life, al-Nawwab was wanted 'dead or alive,' so to speak, by most of the Arab regimes and authorities due to his scathing, and at times unsparingly obscene attacks on them. Hence it is that the Arab masses found in his poetry the rebellious expression of their own anger and frustration, stifled by fear for their physical safety. Thus, al-Nawwab’s contemporary Arab audience loved and embraced him both as an Arab exile and as a poet. They memorized and celebrated his poems and transmitted them secretly by word of mouth and on compact cassette tapes. He himself recited his own poetry and had it recorded on compact cassette tapes for fans to smuggle from one Arab country to the other. The themes of al-Nawwab’s poems are varied, but the most predominant among them is political opposition. In most of his poems, al-Nawwab takes up politics as the major theme. Yet, he often represents it coupled with the leitmotifs of women and wine. Indeed he oscillates almost systematically between political commitment to the revolutionary cause of the masses of his nation and homeland on the one hand and love for women and wine on the other. For the persona in al-Nawwab’s poetry, love-longing for the woman and devotion to the cause of revolution and Pan-Arabism are interrelated; each of them readily evokes the other. In this paper, an attempt is made at investigating the treatment and representation of the theme of revolution and political opposition in some of al-Nawwab’s poems. This investigation will be conducted through close reading and textual analysis of representative sections of the poetic texts under consideration in the paper. The primary texts for the study are selected passages from two representative poems, namely, 'The Night Song of the Bow Strings' (Watariyyaat Layliyyah) and 'In Wine and Sorrow My Heart [Is Immersed]' (bil-khamri wa bil-huzni fu’aady). Other poems and extracts from al-Nawwab’s poetic works will be drawn upon as secondary texts to clarify the arguments in the paper and support its thesis. The discussions and textual analysis of the texts under consideration are meant to show that revolution and undaunted political opposition is a predominant theme in al-Nawwab’s poetry, often represented through the use of the leitmotifs of women and wine.

Keywords: Arabic poetry, Muzaffar al-Nawwab, politics, revolution

Procedia PDF Downloads 115
139 GIS Technology for Environmentally Polluted Sites with an Innovative Process to Improve and Assess the Environmental Impact Assessment (EIA)

Authors: Hamad Almebayedh, Chuxia Lin, Yu wang

Abstract:

The environmental impact assessment (EIA) must be improved, assessed, and quality-checked for human and environmental health and safety. Soil contamination is expanding, and site and soil remediation activities are proceeding around the world. Put simply, quality soil characterization leads to a quality EIA, illuminating the level and extent of contamination and revealing the unknowns needed to move forward with remediating, quantifying, containing, minimizing, and eliminating the environmental damage. Spatial interpolation methods play a significant role in decision making, planning remediation strategies, environmental management, and risk assessment, as they provide essential elements of site characterization that need to be fed into the EIA. The innovative 3D soil mapping and soil characterization technology presented in this research paper reveals unknown information and the extent of the contaminated soil in particular, and enhances soil characterization information in general, which is reflected in improved information for developing site-specific EIAs. The foremost aims of this research paper are to present a novel 3D mapping technology for characterizing and estimating, in a high-quality and cost-effective way, the distribution of key soil characteristics in contaminated sites, and to develop an innovative process/procedure of 'assessment measures' for EIA quality and assessment. The contaminated-site field investigation was conducted with the innovative 3D mapping technology to characterize the composition of petroleum-hydrocarbon-contaminated soils in a decommissioned oilfield waste pit in Kuwait. The results show the depth and extent of the contamination, which has been entered into a developed assessment process and procedure for the EIA quality review checklist, to enhance the EIA and drive remediation and risk assessment strategies. We conclude that, to minimize the possible adverse environmental impacts on the investigated site in Kuwait, a soil-capping approach may be sufficient and may represent a cost-effective management option, as the environmental risk from the contaminated soils is considered to be relatively low. This research paper adopts a multi-method approach involving a review of the existing literature related to the research area, case studies, and computer simulation.
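
As one concrete example of the spatial interpolation methods the paper highlights, an inverse-distance-weighted (IDW) estimate of total petroleum hydrocarbons (TPH) between boreholes can be written in a few lines; the sample coordinates and concentrations are invented for illustration:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2):
    """Inverse-distance-weighted interpolation at a single query point."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d < 1e-12):              # query coincides with a sample point
        return values[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * values) / np.sum(w)

# Hypothetical TPH concentrations (mg/kg) at four boreholes around a pit:
pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
tph = np.array([5200.0, 800.0, 1100.0, 300.0])
print(f"Estimated TPH at (3, 4): {idw(pts, tph, np.array([3.0, 4.0])):.0f} mg/kg")
```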

Keywords: quality EIA, spatial interpolation, soil characterization, contaminated site

Procedia PDF Downloads 63
138 Literary Theatre and Embodied Theatre: A Practice-Based Research in Exploring the Authorship of a Performance

Authors: Rahul Bishnoi

Abstract:

Theatre, as Anne Ubersfeld calls it, is a paradox. At once, it is both a literary work and a physical representation. Theatre as a text is eternal, reproducible, and identical, while as a performance, theatre is momentary and never identical to previous performances. In this dual existence of theatre, who is the author? Is the author the playwright who writes the dramatic text, the director who orchestrates the performance, or the actor who embodies the text? From the poststructuralist lens of Barthes, the author is dead. Barthes' argument of discrete temporality, i.e., that the author is the before and the text is the after, does not hold true for theatre. A published literary work is written, edited, printed, distributed, and then consumed by the reader. On the other hand, theatrical production is immediate; an actor performs and the audience witnesses it instantaneously. Time, so to speak, no longer separates the author, the text, and the reader. The question of authorship gets further complicated in Augusto Boal's 'Theatre of the Oppressed' movement, where the audience is a direct participant in the performance, like the actors. In this research, through an experimental performance, the duality of theatre is explored alongside the authorship discourse, and the conventional definition of authorship is subjected to additional complexity by erasing the distinction between actor and audience. The design/methodology of the experimental performance is as follows: the audience is asked to produce a text under an anonymous virtual alias. The text, as it is being produced, is read and performed by the actor. The audience, who are also collectively 'authoring' the text, watch this performance and write further until everyone has contributed one input each. The cycle of writing, reading, performing, witnessing, and writing continues until the end. The intention is to create a dynamic system of writing/reading with the embodiment of the text through the actor. The actor gives up to the audience the power to write the spoken word, stage instructions, and direction, while still keeping the agency of interpreting that input and performing it in the chosen manner. This rapid conversation between the actor and the audience also creates a conversion of authorship. The main conclusion of this study is a perspective on the nature of the dynamic authorship of theatre, containing a critical enquiry into a collaboratively produced text, an individually performed act, and a collectively witnessed event. Using practice as a methodology, this paper contests the poststructuralist notion of the author as merely a 'scriptor' and breaks it down further by involving the audience in the authorship as well.

Keywords: practice based research, performance studies, post-humanism, Avant-garde art, theatre

Procedia PDF Downloads 77
137 Multimodal Biometric Cryptography Based Authentication in Cloud Environment to Enhance Information Security

Authors: D. Pugazhenthi, B. Sree Vidya

Abstract:

Cloud computing is one of the emerging technologies that enables end users to use cloud services on a 'pay per usage' basis. This technology is growing at a fast pace, and so is its security threat. Among the various services provided by the cloud is storage, in which security is a vital factor both for authenticating legitimate users and for protecting information. This paper brings in efficient ways of authenticating users as well as securing information on the cloud. The initial phase proposed in this paper deals with an authentication technique using a multi-factor, multi-dimensional authentication system with multi-level security. User-behaviour-based biometrics offer more reliable, less intrusive identification than conventional password authentication. With biometric systems, accounts are accessed only by legitimate users and not by impostors. The biometric templates employed here do not rely on a single trait but on multiple traits, viz., iris and fingerprints. The matching stage of the authentication system is based on an ensemble of Support Vector Machines (SVMs): after each individual SVM of the ensemble is trained, the weights of the base SVMs are optimized for the ensemble using the Artificial Fish Swarm Algorithm (AFSA). This helps in generating a user-specific secure cryptographic key from the multimodal biometric template by a fusion process. The data security problem is averted, and an enhanced security architecture is proposed using an encryption and decryption system with double-key cryptography based on a Fuzzy Neural Network (FNN) for data storage and retrieval in cloud computing. The proposed scheme aims to protect records from hackers by preventing recovery of the original text from the ciphertext. This improves authentication performance: the proposed double cryptographic key scheme is capable of providing better user authentication and better security, distinguishing between genuine and fake users. There are thus three important modules in this proposed work: 1) feature extraction, 2) multimodal biometric template generation, and 3) cryptographic key generation. Feature and texture properties are first extracted from the respective fingerprint and iris images. Finally, with the help of the fuzzy neural network and a symmetric cryptography algorithm, the double-key encryption technique is developed. As the proposed approach is based on neural networks, it has the advantage that the data cannot be decrypted by a hacker even if the data have already been stolen. The results prove that the authentication process is optimal and the stored information is secured.
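To make the weighted-ensemble idea concrete, here is a minimal sketch of soft voting over two per-modality SVMs; in the paper the ensemble weights would be optimized by AFSA, whereas the fixed weights, synthetic data, and modality split below are purely illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic stand-ins for fingerprint and iris feature matrices
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_fp, X_iris = X[:, :10], X[:, 10:]   # pretend split by modality

# Train one base SVM per modality (probability outputs enable soft voting)
svm_fp = SVC(probability=True, random_state=0).fit(X_fp, y)
svm_iris = SVC(probability=True, random_state=0).fit(X_iris, y)

# Ensemble weights: in the paper these would be AFSA-optimized;
# fixed values are used here purely for illustration.
w = np.array([0.6, 0.4])

def ensemble_predict(x_fp, x_iris):
    # Stack per-modality class probabilities: shape (2, n, n_classes)
    p = np.stack([svm_fp.predict_proba(x_fp),
                  svm_iris.predict_proba(x_iris)])
    fused = np.tensordot(w, p, axes=1)   # weighted soft vote -> (n, n_classes)
    return fused.argmax(axis=1)

print(ensemble_predict(X_fp[:5], X_iris[:5]))
```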

Keywords: artificial fish swarm algorithm (AFSA), biometric authentication, decryption, encryption, fingerprint, fusion, fuzzy neural network (FNN), iris, multi-modal, support vector machine classification

Procedia PDF Downloads 233
136 The Effects of English Contractions on the Application of Syntactic Theories

Authors: Wakkai Hosanna Hussaini

Abstract:

A formal structure of the English clause is composed of at least two elements, subject and verb, in structural grammar, and at least one element, the predicate, in systemic (functional) and generative grammars. Each of the elements can be represented by a word or a group (of words). In modern English structure, speakers very often merge two words into one with the use of an apostrophe. The two words may come from different elements or belong to the same element. In either case, the result of the merger is called a contraction. Although contractions constitute a part of modern English structure, they are considered informal in nature (more frequently used in spoken than in written English), which is why they were initially viewed as evidence of language deterioration. To our knowledge, no formal syntactic theory has yet dealt specifically with contractions, because of their deviation from the formal rules of syntax that seek to identify the elements that form a clause in English. The inconsistency between the formal rules and a contraction arises when two words representing two elements in a non-contracted form are merged into one element to form a contraction. Thus, the paper presents the various syntactic issues that arise as effects of converting non-contracted to contracted forms. It categorizes English contractions and describes each category according to its syntactic relations (position and relationship) and morphological formation (form and content) as an integral part of the modern structure of English. This is a position paper; as such, the methodology is observational, descriptive, and explanatory/analytical, based on existing related literature. The inventory of English contractions contained in books on syntax forms the data from which specific examples are drawn. It is noted in conclusion that the existing syntactic theories were not originally established to account for English contractions. The paper, when published, will further expose the inadequacies of the existing syntactic theories by giving more reasons for the establishment of a more comprehensive syntactic theory for analyzing English clause/sentence structure involving contractions. The method used reveals the extent of the inadequacies in applying the three major syntactic theories, structural, systemic (functional), and generative, to English contractions. Although no theory is unlimited in scope, the reluctance of the three major theories to recognize English contractions needs to be overcome because of the increasing popularity of their use in modern English structure. The paper therefore recommends that, as the use of contractions gains popularity even in formal speech today, a syntactic theory be established to handle their patterns of syntactic relations and morphological formation.

Keywords: application, effects, English contractions, syntactic theories

Procedia PDF Downloads 230
135 Developing Commitment to Change in Egyptian Modern Bureaucracies

Authors: Nada Basset

Abstract:

Purpose: To examine the nature of the civil service sector as an employer by identifying likely ways to develop employees' commitment towards change in the civil service sector. Design/Methodology/Approach: A qualitative research approach was followed. Data were collected via a triangulation of interviews, non-participant observation, and archival document analysis. Non-probability sampling took place, with a case-study method applied to a sample of 33 civil servants working in the Egyptian Ministry of State for Administrative Development (MSAD), the civil service entity acting as the change agent responsible for managing the government's administrative reform plan in the civil service sector. All study participants were actually working on one of the change projects/programmes and had a minimum of 12 months of service in the civil service. Interviews were digitally recorded and transcribed as MS Word documents, and the transcripts were analyzed manually using MS Excel worksheets, from which the main research themes were developed and statistics drawn. Findings: The results demonstrate that developing civil servants' commitment towards change may require a number of suggested solutions, such as (1) employee involvement and participation in the planning and implementation processes, (2) linking employee support for change to tangible rewards and incentives, (3) appointing inspirational change leaders to act as role models, and (4) as a last resort, enforcing employees' commitment towards change by coercion and authoritarianism. Practical Implications: It is clear that civil servants' lack of organizational commitment is not directly related to their level of commitment towards change. The research findings showed that civil servants' commitment towards change can be raised and promoted by getting them involved in the planning and implementation processes, as this develops a sense of belonging and ownership; thus, there is a fair chance that civil servants with low organizational commitment can develop high commitment towards change, provided they are given a favorable environment in which they are invited to participate and get involved in the movement for change. Originality/Value: The research addresses a relatively new area, developing organizational commitment in modern bureaucracies, by investigating the levels of civil servants' commitment towards their jobs and/or organizations on the one hand, and suggesting different ways of developing their commitment towards administrative reform and change initiatives in the Egyptian civil service sector on the other.

Keywords: change, commitment, Egypt, bureaucracy

Procedia PDF Downloads 455
134 Between the ‘Principle of Hope’ and ‘Spiritual Booze’: An Analysis of Religious Themes in the Language Used by the Russian Marxists

Authors: George Bocean

Abstract:

In mainstream academic thought, there is a tendency to regard the writings of Russian Marxists as constantly set against the practice of religion itself. Such arguments mainly stem from the assumption that the attitude of the Russian Marxists, specifically the Bolsheviks, towards religion originates in Marxist ideology itself. Although Marxism is critical of religion as an institution, the approach Marxism would take on the question of religion is not as clear. This aspect is specifically observed in the language of leading Russian Marxist figures, such as Lenin and Trotsky, throughout the early 20th century, where religious metaphors were widely used in their philosophical writings and speeches, as well as in propaganda posters of left-wing movements in Russia as a whole. The methodology of the research consists of a sociolinguistic and sociology-of-language approach within a sociohistorical framework of late Tsarist and early Soviet Russia, 1905-1926. The purpose of these approaches is not simply to point out the religious metaphors used in the writings and speeches of Marxists in Russia, but rather to analyse how the use of such metaphors represents an important socio-political connection with the context of Russia at the time. In other words, the use of religious metaphors was not only more akin to Russian culture at the time, but also resonated with, and was more familiar to, the working class and peasantry in their conditions. An example in this study can be observed in the writings of Lenin, where the theme of chudo (miracle) is often mentioned; such a word is commonly associated with an idealist philosophy rather than a materialist one, and represents a common theme in Russian culture with regard to the principle of hope for a better life. A further and even more obvious example is Trotsky's writing about how the revolution of 1905 'would be revived', which not only resonates with the theme of resurrection but also prophesies the 'second coming' of a future revolution. Such metaphors are important in the writings of these authors, as they simultaneously contain Marxist ideas and religious themes. In doing this research, this paper will demonstrate two aspects. Firstly, it will analyse the use of these metaphors by Russian Marxists as a whole from socio-political and ideological perspectives akin to those of Marxism. Secondly, it will demonstrate the role such metaphors played in the left-wing movements within Russia itself, as well as their relation to the working class and peasantry of Russia within the historical context.

Keywords: language and politics, Marxism, Russian history, social history, sociology of language

Procedia PDF Downloads 112
133 Reading and Writing Memories in Artificial and Human Reasoning

Authors: Ian O'Loughlin

Abstract:

Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order to perform, for example, question-and-answer tasks parsing real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains; wide-context cues remain elusive in parsing words and sentences; and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable 'memory' elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons. First, it addresses one of the difficulties that standard machine learning techniques face by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion. In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory, as well as considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science, researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is avoided entirely by modeling memory with a recurrent neural network designed to fit a preconceived energy function that attains its minima only at desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the arrays of long-term memory elements in memory networks seem psychologically appropriate for reasoning systems, they may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.
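As a concrete illustration of the attractor-network alternative described above, the following minimal Hopfield-style sketch embeds desired patterns as stable equilibrium points of an energy function rather than as stored array elements; the patterns and sizes are invented for illustration and do not come from the paper.

```python
import numpy as np

np.random.seed(0)

# Two hypothetical bipolar (+1/-1) memory patterns of length 8
patterns = np.array([[ 1, -1,  1, -1,  1, -1,  1, -1],
                     [ 1,  1, -1, -1,  1,  1, -1, -1]])
n = patterns.shape[1]

# Hebbian weights: each stored pattern sits at a minimum of the energy
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)  # no self-connections

def energy(s):
    # Energy function whose stable equilibria are the stored patterns
    return -0.5 * s @ W @ s

def recall(s, sweeps=10):
    # Asynchronous updates descend the energy surface to an attractor
    s = s.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(n):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

noisy = patterns[0].copy()
noisy[:2] *= -1                     # corrupt two bits
restored = recall(noisy)
print(restored, energy(restored))   # settles back onto the first pattern
```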

Keywords: artificial reasoning, human memory, machine learning, neural networks

Procedia PDF Downloads 237
132 Resilience and Urban Transformation: A Review of Recent Interventions in Europe and Turkey

Authors: Bilge Ozel

Abstract:

Cities are highly complex living organisms, subject to continuous transformations produced by the stress that derives from changing conditions. Today, metropolises are seen as the 'development engines' of their countries, and accordingly they become centres of better living conditions that encourage demographic growth, which constitutes the main driver of change. Indeed, the potential for economic advancement of cities directly represents the economic status of their countries. The term 'resilience', which sees change as a natural process and denotes the flexibility and adaptability of systems in the face of changing conditions, becomes a key concept for the development of urban transformation policies. The term derives from the Latin word 'resilire', meaning 'to bounce' or 'to jump back', and refers to the ability of a system to withstand shocks and still maintain its basic characteristics. A resilient system does not merely survive potential risks and threats but also takes advantage of the positive outcomes of perturbations and adapts to new external conditions. When this understanding is carried into the urban context, as 'urban resilience', it delineates the capacity of cities to anticipate upcoming shocks and changes without undergoing major alterations in their functional, physical, and socio-economic systems. Undoubtedly, coordinating urban systems in a 'resilient' form is a multidisciplinary and complex process, as cities are multi-layered and dynamic structures. The concept of 'urban transformation' was first launched in Europe just after World War II. It has been applied through different methods, such as renovation, revitalization, improvement, and gentrification. These methods have continuously advanced, acquiring new meanings and trends over the years. With the effects of neoliberal policies in the 1980s, the concept of urban transformation became associated with economic objectives. Subsequently, this understanding improved over time and took on new orientations, such as providing more social justice and environmental sustainability. The aim of this research is to identify the most widely applied urban transformation methods in Turkey and the main reasons they were selected, and moreover, to investigate the gaps and limitations of urban transformation policies in the context of 'urban resilience', in comparison with European interventions. Emblematic examples, which symbolize the turning points in the recent evolution of urban transformation concepts in Europe and Turkey, are chosen and critically reviewed.

Keywords: resilience, urban dynamics, urban resilience, urban transformation

Procedia PDF Downloads 244
131 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA sequences, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use and time consuming, and they rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly from raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings to create a meaningful numerical representation of DNA sequences, while extracting features and reducing the dimensionality of the data. In this paper, we present an end-to-end approach, metagenome2vec, that classifies patients into disease groups directly from raw metagenomic reads. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction. Using two public real-life datasets as well as a simulated one, we demonstrate that this original approach reaches high performance, comparable with state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
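A minimal sketch of steps (i)-(ii), tokenizing reads into k-mers and learning word2vec-style embeddings, is given below; the toy reads, the use of gensim, and the hyperparameters are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np
from gensim.models import Word2Vec

def kmers(read, k=4):
    """Tokenize a DNA read into overlapping k-mers (the 'words')."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

# Toy reads standing in for fastq sequences; real data has millions of reads
reads = ["ACGTACGTGACC", "TTGACCACGTAC", "GACCTTGAACGT"]
corpus = [kmers(r) for r in reads]   # each read is a 'sentence' of k-mers

# Learn k-mer embeddings with skip-gram; sizes are illustrative only
model = Word2Vec(corpus, vector_size=16, window=5, min_count=1, sg=1, epochs=50)

# A simple read embedding: mean of its k-mer vectors
read_vec = np.mean([model.wv[km] for km in kmers(reads[0])], axis=0)
print(read_vec.shape)  # (16,)
```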

Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine

Procedia PDF Downloads 100
130 Profiling of the Cell-Cycle Related Genes in Response to Efavirenz, a Non-Nucleoside Reverse Transcriptase Inhibitor in Human Lung Cancer

Authors: Rahaba Marima, Clement Penny

Abstract:

Health-related quality of life (HRQoL) for HIV-positive patients has improved since the introduction of highly active antiretroviral treatment (HAART). However, in the present HAART era, HIV co-morbidities such as lung cancer, a non-AIDS-defining cancer, have been documented to be on the rise. Under normal physiological conditions, cells grow, repair, and proliferate through the cell-cycle, as cellular homeostasis is important in the maintenance and proper regulation of tissues and organs. Conversely, deregulation of the cell-cycle is a hallmark of cancer, including lung cancer. The association between lung cancer and the use of HAART components such as Efavirenz (EFV) is poorly understood. This study aimed at elucidating the effects of EFV on cell-cycle gene expression in lung cancer. For this purpose, a human cell-cycle gene array comprising 84 genes was evaluated on both normal lung fibroblast (MRC-5) cells and lung adenocarcinoma (A549) cells in response to 13 µM EFV or 0.01% vehicle. A ±2-fold change (up or down) was used as the basis for target selection, with p < 0.05. Additionally, RT-qPCR was performed to validate the gene array results. Next, the in-silico bioinformatics tools Search Tool for the Retrieval of Interacting Genes/Proteins (STRING), Reactome, the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway, and Ingenuity Pathway Analysis (IPA) were used for gene-gene interaction studies as well as to map the molecular and biological pathways influenced by the identified targets. Interestingly, DNA damage response (DDR) pathway genes such as p53, Ataxia telangiectasia and Rad3 related (ATR), Growth arrest and DNA damage inducible alpha (GADD45A), HUS1 checkpoint homolog (HUS1), and RAD genes were shown to be upregulated following EFV treatment, as revealed by STRING analysis. Additionally, functional enrichment analysis with the KEGG pathway revealed that most of the differentially expressed gene targets function at cell-cycle checkpoints, such as p21, Aurora kinase B (AURKB), and Mitotic Arrest Deficient-Like 2 (MAD2L2). Core analysis by IPA revealed that p53 downstream targets such as survivin, Bcl2, and cyclin/cyclin-dependent kinase (CDK) complexes are down-regulated following exposure to EFV. Furthermore, Reactome analysis showed a significant increase in cellular stress response genes, DNA repair genes, and apoptosis genes, as observed in both normal and cancerous cells. These findings implicate the genotoxic effects of EFV on lung cells, provoking the DDR pathway. Notably, constitutive expression of this pathway often leads to uncontrolled cell proliferation and eventually tumourigenesis, which could be attributable to the effect of HAART components (such as EFV) on human cancers. Targeting the cell-cycle and its regulation holds promise as a therapeutic intervention against potential HAART-associated carcinogenesis, particularly lung cancer.
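For illustration, the ±2-fold-change and p < 0.05 selection rule can be expressed as a simple filter; the gene names and numbers in this sketch are invented and do not reproduce the study's data.

```python
import pandas as pd

# Hypothetical gene-array results (fold change of EFV-treated vs. vehicle)
df = pd.DataFrame({
    "gene": ["TP53", "ATR", "GADD45A", "CCNB1", "AURKB"],
    "fold_change": [2.8, 2.3, 3.1, -2.5, -2.2],   # negative = down-regulated
    "p_value": [0.01, 0.03, 0.004, 0.02, 0.20],
})

# Keep genes with |fold change| >= 2 and p < 0.05
targets = df[(df["fold_change"].abs() >= 2) & (df["p_value"] < 0.05)]
print(targets)
```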

Keywords: cell-cycle, DNA damage response, Efavirenz, lung cancer

Procedia PDF Downloads 123
129 Effect of Organics on Radionuclide Partitioning in Nuclear Fuel Storage Ponds

Authors: Hollie Ashworth, Sarah Heath, Nick Bryan, Liam Abrahamsen, Simon Kellet

Abstract:

Sellafield has a number of fuel storage ponds, some of which have been open to the air for decades. This has caused corrosion of the fuel, resulting in the release of some activity into solution, reduced water clarity, and the accumulation of sludge at the bottom of the pond consisting of brucite (Mg(OH)2) and other uranium corrosion products. Both of these phases are also present as colloidal material. 90Sr and 137Cs are known to constitute a small proportion of the radionuclides present in the pond but a large fraction of the activity; thus they are most at risk of challenging effluent discharge limits. Organic molecules are also known to be present, since the ponds are open to the air, with occasional algal blooms restricting visibility further. The contents of the pond need to be retrieved and safely stored, but dealing with such a complex, undefined inventory poses a unique challenge. This work aims to determine and understand the sorption-desorption interactions of 90Sr and 137Cs with brucite and uranium phases, with and without the presence of organic molecules from chemical degradation and bio-organisms. The influence of organics on these interactions has not been widely studied. Partitioning of these radionuclides and organic molecules has been determined through LSC, ICP-AES/MS, and UV-vis spectrophotometry coupled with ultrafiltration, in both binary and ternary systems. Further detailed analysis of the surface and bonding environment of these components is being conducted through XAS techniques and PHREEQC modelling. Experiments were conducted in a CO2-free or N2 atmosphere across a high pH range in order to best simulate conditions in the pond. Humic acid used in brucite systems demonstrated strong competition against 90Sr for the brucite surface, regardless of the order of addition of components. Varying the pH did have a small effect; however, this range (10.5-11.5) is close to the pHpzc of brucite, causing the surface to buffer the solution pH towards that value over the course of the experiment. Sorption of 90Sr to UO2 obeyed Ho's pseudo-second-order rate equation, consistent with chemisorption involving the sharing of valence electrons from the strontium atom, with the initial rate clearly dependent on pH and the calculated equilibrium concentration corresponding to close to 100% sorption. No influence of humic acid was seen when it was introduced into these systems. Sorption of 137Cs to UO3 was significant, with more than 95% sorbed in just over 24 hours. Again, humic acid showed no influence when introduced into this system. Both brucite- and uranium-based systems will be studied with the incorporation of cyanobacterial cultures harvested at different stages of growth. Investigation of these systems provides insight into, and understanding of, the effect of organics on radionuclide partitioning to brucite and uranium phases at high pH. The majority of sorption-desorption work for radionuclides has been conducted at neutral to acidic pH values, and mostly without organics. These studies are particularly important for the characterisation of legacy wastes at Sellafield, with a view to their safe retrieval and storage.
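For reference, Ho's pseudo-second-order rate model mentioned above is commonly written as follows, where q_t is the sorbed amount at time t, q_e the amount at equilibrium, and k_2 the rate constant; this is the standard textbook form of the model, not a fit reported in the abstract:

```latex
\frac{dq_t}{dt} = k_2\,(q_e - q_t)^2
\qquad\Longrightarrow\qquad
\frac{t}{q_t} = \frac{1}{k_2\,q_e^{2}} + \frac{t}{q_e}
```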

Keywords: caesium, legacy wastes, organics, sorption-desorption, strontium, uranium

Procedia PDF Downloads 251