Search results for: text similarity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1891

511 Improving Second Language Speaking Skills via Video Exchange

Authors: Nami Takase

Abstract:

Computer-mediated communication allows people to connect and interact with each other as if they were sharing the same space. The current study examined the effects of using video letters (VLs) on the development of second language speaking skills of Common European Framework of Reference for Languages (CEFR) A1 and CEFR B2 level learners of English as a foreign language. Two groups were formed to measure the impact of VLs. The experimental and control groups were given the same topic, and both groups worked with a native English-speaking university student from the United States of America. Students in the experimental group exchanged VLs, and students in the control group used video conferencing. Pre- and post-tests were conducted to examine the effects of each practice mode. The transcribed speech-text data showed that the VL group had improved speech accuracy scores, while the video conferencing group had increased sentence complexity scores. The use of VLs may be more effective for beginner-level learners because they are able to notice their own errors and replay videos to better understand the native speaker’s speech at their own pace. Both the VL and video conferencing groups provided positive feedback regarding their interactions with native speakers. The results showed how different types of computer-mediated communication impact different areas of language learning and speaking practice, and how each of these types of online communication tools is suited to different teaching objectives.

Keywords: computer-assisted-language-learning, computer-mediated-communication, English as a foreign language, speaking

Procedia PDF Downloads 72
510 MIMIC: A Multi Input Micro-Influencers Classifier

Authors: Simone Leonardi, Luca Ardito

Abstract:

Micro-influencers are effective elements in the marketing strategies of companies and institutions because of their capability to create a hyper-engaged audience around a specific topic of interest. In recent years, many scientific approaches and commercial tools have handled the task of detecting this type of social media user. These strategies adopt solutions ranging from rule-based machine learning models to deep neural networks and graph analysis on text, images, and account information. This work compares the existing solutions and proposes an ensemble method to generalize them across different input data and social media platforms. The deployed solution combines deep learning models on unstructured data with statistical machine learning models on structured data. We retrieve both social media account information and multimedia posts on Twitter and Instagram. These data are mapped into feature vectors for an eXtreme Gradient Boosting (XGBoost) classifier. Sixty different topics have been analyzed to build a rule-based gold standard dataset and to compare the performance of our approach against baseline classifiers. We prove the effectiveness of our work by comparing the accuracy, precision, recall, and F1 score of our model with different configurations and architectures. We obtained an accuracy of 0.91 with our best performing model.
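
As an illustration of the final stage described above, the following minimal sketch fuses deep-learning text/image embeddings with structured account features and feeds the combined vectors to an XGBoost classifier. All variable names, shapes, and hyperparameters are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch: fuse unstructured-data embeddings with structured account
# features and feed them to an XGBoost classifier, as the abstract describes.
# Data here is synthetic; shapes and hyperparameters are illustrative only.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_accounts = 500
text_embeddings = rng.normal(size=(n_accounts, 64))   # e.g. pooled post/image embeddings
account_features = rng.normal(size=(n_accounts, 8))   # followers, engagement rate, ...
labels = rng.integers(0, 2, size=n_accounts)          # 1 = micro-influencer

# Map everything into a single feature vector per account
X = np.hstack([text_embeddings, account_features])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2, random_state=0)

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```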

Keywords: deep learning, gradient boosting, image processing, micro-influencers, NLP, social media

Procedia PDF Downloads 142
509 Unpacking Chilean Preservice Teachers’ Beliefs on Practicum Experiences through Digital Stories

Authors: Claudio Díaz, Mabel Ortiz

Abstract:

An EFL teacher education programme in Chile takes five years to train a future teacher of English. Preservice teachers are prepared to learn an advanced level of English and teach the language from 5th to 12th grade in the Chilean educational system. In the context of their first EFL Methodology course in year four, preservice teachers have to create a five-minute digital story that starts from a critical incident they have experienced as teachers-to-be during their observations or interventions in the schools. A critical incident can be defined as a happening, a specific incident or event either observed by them or involving them. The happening sparks their thinking and may make them subsequently think differently about the particular event. When they create their digital stories, preservice teachers put technology, teaching practice and theory together to narrate a story that is complemented by still images, moving images, text, sound effects and music. The story should be told as a personal narrative, which explains the critical incident. This presentation will focus on the creation process of 50 Chilean preservice teachers’ digital stories, highlighting the critical incidents from which they started their stories. It will also unpack preservice teachers’ beliefs and reflections when approaching their teaching practices in schools. These beliefs will be coded and categorized through content analysis to evidence preservice teachers’ most rooted conceptions about English teaching and learning in Chilean schools. The findings seem to indicate that preservice teachers’ beliefs are strongly mediated by contextual and affective factors.

Keywords: beliefs, digital stories, preservice teachers, practicum

Procedia PDF Downloads 410
508 Isolation and Characterization of the First Known Inhibitor Cystine Knot Peptide in Sea Anemone: Inhibitory Activity on Acid-Sensing Ion Channels

Authors: Armando A. Rodríguez, Emilio Salceda, Anoland Garateix, André J. Zaharenko, Steve Peigneur, Omar López, Tirso Pons, Michael Richardson, Maylín Díaz, Yasnay Hernández, Ludger Ständker, Jan Tytgat, Enrique Soto

Abstract:

Acid-sensing ion channels are cation (Na+) channels activated by a pH drop. These proteins belong to the ENaC/degenerin superfamily of sodium channels. ASICs are involved in sensory perception, synaptic plasticity, learning, memory formation, cell migration and proliferation, nociception, and neurodegenerative disorders, among other processes; therefore, molecules that specifically target these channels are of growing pharmacological and biomedical interest. Sea anemones produce a large variety of ion channel peptide toxins; however, those acting on ligand-gated ion channels, such as Glu-gated and ACh-gated ion channels, and acid-sensing ion channels (ASICs), remain barely explored. The peptide PhcrTx1 is the first compound characterized from the sea anemone Phymanthus crucifer, and it constitutes a novel ASIC inhibitor. This peptide was purified by chromatographic techniques and pharmacologically characterized on acid-sensing ion channels of mammalian neurons using patch-clamp techniques. PhcrTx1 inhibited ASIC currents with an IC50 of 100 nM. Edman degradation yielded a sequence of 32 amino acid residues, with a molecular mass of 3477 Da by MALDI-TOF. No similarity to known sea anemone peptides was found in protein databases. The computational analysis of the Cys pattern and secondary structure arrangement suggested that this is structurally an ICK (inhibitor cystine knot)-type peptide, a scaffold that had not previously been found in sea anemones but in other venomous organisms. These results show that PhcrTx1 represents the first member of a new structural group of sea anemone toxins acting on ASICs. Also, this peptide constitutes a novel template for the development of drugs against pathologies related to ASIC function.

Keywords: animal toxin, inhibitor cystine knot, ion channel, sea anemone

Procedia PDF Downloads 271
507 Information Technology Approaches to Literature Text Analysis

Authors: Ayse Tarhan, Mustafa Ilkan, Mohammad Karimzadeh

Abstract:

Science was considered as part of philosophy in ancient Greece. By the nineteenth century, it was understood that philosophy was very inclusive and that social and human sciences such as literature, history, and psychology should be separated and perceived as autonomous branches of science. The computer was also first seen as a tool of mathematical science. Over time, computer science has grown by encompassing every area in which technology exists, and its growth compelled the division of computer science into different disciplines, just as philosophy had been divided into different branches of science. Now there is almost no branch of science in which computers are not used. One of the newer autonomous disciplines of computer science is digital humanities, and one of the areas of digital humanities is literature. The material of literature is words, and thanks to software tools created using computer programming languages, analyses that would take a literature researcher months to complete can be carried out quickly and objectively. In this article, three different tools that literary researchers can use in their work will be introduced. These tools were created with the computer programming languages Python and R and brought to the world of literature. The purpose of introducing the aforementioned tools is to set an example for the development of special tools or programs on Ottoman language and literature in the future and to support such initiatives. The first example to be introduced is the Stylometry tool developed with the R language. The other is The Metrical Tool, which is used to measure data in poems and was developed with Python. The latest literature analysis tool in this article is Voyant Tools, which is a multifunctional and easy-to-use tool.
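
To make the kind of computation such tools perform more concrete, here is a minimal, standard-library Python sketch of a basic stylometric comparison: relative frequencies of the most frequent words and a cosine distance between two texts. It is not the Stylometry tool, The Metrical Tool, or Voyant Tools themselves, and the sample sentences are invented.

```python
# Minimal stylometry-style sketch (not the tools described above): compare two
# texts by the relative frequencies of their most frequent words.
from collections import Counter
import math
import re

def word_freqs(text, top_n=50):
    words = re.findall(r"[a-zçğıöşü']+", text.lower())  # crude tokenizer
    counts = Counter(words)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.most_common(top_n)}

def cosine_distance(freqs_a, freqs_b):
    vocab = set(freqs_a) | set(freqs_b)
    dot = sum(freqs_a.get(w, 0) * freqs_b.get(w, 0) for w in vocab)
    norm_a = math.sqrt(sum(v * v for v in freqs_a.values()))
    norm_b = math.sqrt(sum(v * v for v in freqs_b.values()))
    return 1 - dot / (norm_a * norm_b)

text_1 = "Ottoman poetry often weaves the rose and the nightingale into one image."
text_2 = "The nightingale sings to the rose in many Ottoman divan poems."
print(cosine_distance(word_freqs(text_1), word_freqs(text_2)))
```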

Keywords: DH, literature, information technologies, stylometry, the metrical tool, voyant tools

Procedia PDF Downloads 124
506 Ideology Shift in Political Translation

Authors: Jingsong Ma

Abstract:

In political translation, ideology plays an important role in conveying implications accurately. Ideological collisions can occur in political translation when there exist differences between the political environments embedded in the translingual political texts in both source and target languages. Reaching an accurate translation requires the translator to understand the ideologies implied in (and often transcending) the texts. This paper explores the conditions, procedure, and purpose of processing ideological collision and the resolution of such issues in political translation. These points will be elucidated by case studies of translating English and Chinese political texts. First, there are specific political terminologies in certain political environments. These terminological peculiarities in one language are often determined by ideological elements rather than by syntactical and semantical understanding. The translation of these ideologically loaded terminologies is a process and operation consisting of understanding the ideological context, including cultural, historical, and political situations. This will be explained with characteristic Chinese political terminologies and their renderings in English. Second, when the ideology in the source language fails to match the ideology in the target language, the decisions to highlight or disregard these conflicts are shaped by power relations, political engagement, social context, etc. It is thus necessary to go beyond linguistic analysis of the context by deciphering the ideology in political documents to provide a faithful or equivalent rendering of certain messages. Finally, one of the practical issues concerns equivalence in political translation, redefining the notion of faithfulness and the retention of ideological messages of the source language in translations of political texts. To avoid distortion, the translator should be liberated from the grip of the literal meaning and instead dive into the functional meanings of the text.

Keywords: translation, ideology, politics, society

Procedia PDF Downloads 87
505 South Africa’s Post-Apartheid Film Narratives of HIV/AIDS: A Case of ‘Yesterday’

Authors: Moyahabo Molefe

Abstract:

The persistence of HIV/AIDS infection rates in SA has not only been a subject of academic debate but a mediated narrative that has dominated SA’s post-apartheid film space over the last two decades. SA’s colonial geo-spatial architecture still influences migrant labour patterns, which the Oscar-nominated (2003) SA film ‘Yesterday’ has reflected upon, and continues to account for the spread of HIV/AIDS in SA society. Accordingly, men who had left their homes in the rural areas to work in the mines in the cities became infected with HIV/AIDS, only to return home and infect their wives or partners in the rural areas. This paper analyses, through Social Semiotic theory, how SA’s geo-spatial arrangement has ruptured family structures, with both men and women taking up new residences in the urban areas where they work, away from their homes. By using Social Semiotic theory, this paper seeks to understand how images and discourses have been deployed in the film ‘Yesterday’ to demonstrate how HIV/AIDS is embedded in the socio-cultural, economic and political architecture of SA society. The study uses a qualitative approach and content/text/visual semiotic analysis to decipher meanings from an array of imagery and discourses/dialogues that are used to mythologise the relationship between the spread of HIV/AIDS and SA migrant labour patterns. The findings of the study are significant for proposing a conceptual framework that can be used to mitigate the spread of HIV/AIDS among the SA populace, against the backdrop of changing migrant labour patterns and other related factors.

Keywords: colonialism, decoloniality, HIV/AIDS, labour migration patterns, social semiotics

Procedia PDF Downloads 40
504 A Qualitative Study of Health-Related Beliefs and Practices among Vegetarians

Authors: Lorena Antonovici, Maria Nicoleta Turliuc

Abstract:

The process of becoming a vegetarian involves changes in several life aspects, including health. Despite its relevance, however, little research has been carried out to analyze vegetarians' self-perceived health, and even less empirical attention has been paid to the Romanian population. This study aimed to assess health-related beliefs and practices among vegetarian adults in a Romanian sample. We undertook 20 semi-structured interviews (10 males, 10 females) based on a snowball sample with a mean age of 31 years. The interview guide was divided into three sections: causes of adopting the diet, general aspects (beliefs, practices, tensions, and conflicts) and consequences of adopting the diet (significant changes, positive aspects and difficulties, physical and mental health). Additional anamnestic data were reported by means of a questionnaire. Data analyses were performed using Tropes text analysis software (v. 8.2) and SPSS software (v. 24.0). Findings showed that most of the participants considered a vegetarian diet a natural and healthy choice, as opposed to meat-eating, which they saw as unhealthy and whose consumption should be moderated among omnivores. A higher proportion of participants (65%) had an average body mass index (BMI), and several women even reported certain ailments that no longer occurred after following a vegetarian diet. Moreover, participants admitted having better moods and mental health status, given their self-contentment with the dietary choice. Relatives were perceived as more skeptical about their practices than others, and women especially held this view. This study provides valuable insight into health-related beliefs and practices and how a vegetarian diet might interact with them.

Keywords: beliefs, health, practices, vegetarians

Procedia PDF Downloads 95
503 Python Implementation for S1000D Applicability Depended Processing Model - SALERNO

Authors: Theresia El Khoury, Georges Badr, Amir Hajjam El Hassani, Stéphane N’Guyen Van Ky

Abstract:

The widespread adoption of machine learning and artificial intelligence across different domains can be attributed to the digitization of data over several decades, resulting in vast amounts of data, types, and structures. Thus, data processing and preparation turn out to be a crucial stage. However, applying these techniques to S1000D standard-based data poses a challenge due to its complexity and the need to preserve logical information. This paper describes SALERNO, an S1000D AppLicability dEpended pRocessiNg mOdel. This Python-based model analyzes and converts XML S1000D-based files into an easier data format that can be used in machine learning techniques while preserving the different logic and relationships in the files. The model parses the files in a given folder, filters them, and extracts the required information to be saved in appropriate data frames and Excel sheets. Its main idea is to group the extracted information by applicability. In addition, it extracts the full text by replacing internal and external references while maintaining the relationships between files, as well as the necessary requirements. The resulting files can then be saved in databases and used in different models. Documents in both English and French were tested, and special characters were decoded. Updates to the technical manuals were taken into consideration as well. The model was tested on different versions of the S1000D standard, and the results demonstrated its ability to effectively handle the applicability, requirements, references, and relationships across all files and on different levels.
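
The following is a minimal sketch of the general workflow described above: walk a folder of S1000D-like XML files, group extracted text by an applicability identifier, and save the result for downstream use. The element and attribute names ("applic", "id", "para"), the folder name and the output file are illustrative assumptions, not the S1000D schema or the SALERNO implementation itself.

```python
# Minimal sketch of the general idea only: walk a folder of S1000D-like XML
# files, extract text grouped by an applicability identifier, and store it in
# a DataFrame. Element/attribute names and paths are illustrative assumptions.
from pathlib import Path
import xml.etree.ElementTree as ET
import pandas as pd

def extract_by_applicability(folder):
    rows = []
    for xml_file in Path(folder).glob("*.xml"):
        root = ET.parse(xml_file).getroot()
        for applic in root.iter("applic"):
            applic_id = applic.get("id", "unknown")
            text = " ".join(p.text.strip() for p in applic.iter("para") if p.text)
            rows.append({"file": xml_file.name, "applicability": applic_id, "text": text})
    return pd.DataFrame(rows)

df = extract_by_applicability("data_modules")
# Group the extracted information by applicability, as described above
grouped = df.groupby("applicability")["text"].apply(" ".join)
grouped.to_frame().to_excel("by_applicability.xlsx")  # needs openpyxl installed
```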

Keywords: aeronautics, big data, data processing, machine learning, S1000D

Procedia PDF Downloads 87
502 A Corpus-Based Approach to Understanding Market Access in Fisheries and Aquaculture: A Systematic Literature Review

Authors: Cheryl Marie Cordeiro

Abstract:

Although fisheries and aquaculture studies might seem marginal to international business (IB) studies in general, fisheries and aquaculture IB (FAIB) management is currently facing increasing pressure to meet global demand and consumption for fish in the coming decades. In part to address this challenge, the purpose of this systematic literature review (SLR) study is to investigate the use of the term ‘market access’ in its context of use in the generic literature and business sector discourse, in comparison to the more specific literature and discourse in fisheries, aquaculture and seafood. This SLR aims to uncover the knowledge/interest gaps between the academic subject discourses and business sector practices. Corpus-driven in methodology and using a triangulation method of three different text analysis software tools, including AntConc, VOSviewer and Web of Science (WoS) analytics, the SLR results indicate a gap in conceptual knowledge and business practices in how ‘market access’ is conceived and used in the context of the pharmaceutical healthcare industry and FAIB research and practice. While it is acknowledged that the product orientation of different business sectors might differ, this SLR study works with the assumption that both business sectors are global in orientation. These business sectors are complex in their operations from product to market. This SLR suggests a conceptual model for understanding the challenges, the potential barriers as well as avenues for solutions to developing market access for FAIB.
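
To illustrate the kind of context-of-use evidence such a corpus-driven review inspects, the sketch below extracts simple key-word-in-context (KWIC) lines for the term ‘market access’; it is a toy stand-in with an invented sample sentence, not the AntConc, VOSviewer, or WoS analytics workflow used in the study.

```python
# Toy key-word-in-context (KWIC) extraction for the term "market access";
# a stand-in illustration only, not the AntConc / VOSviewer / WoS workflow.
def kwic(text, term="market access", window=6):
    tokens = text.split()
    term_tokens = term.split()
    hits = []
    for i in range(len(tokens) - len(term_tokens) + 1):
        candidate = [t.lower().strip(".,;") for t in tokens[i:i + len(term_tokens)]]
        if candidate == term_tokens:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + len(term_tokens):i + len(term_tokens) + window])
            hits.append(f"... {left} [{term}] {right} ...")
    return hits

sample = ("Improving market access for small-scale fisheries requires traceability, "
          "while in pharmaceuticals market access is negotiated with payers.")
for line in kwic(sample):
    print(line)
```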

Keywords: market access, fisheries and aquaculture, international business, systematic literature review

Procedia PDF Downloads 122
501 Comparing the Sequence and Effectiveness of Teaching the Four Basic Operations and Mathematics in Primary Schools

Authors: Abubakar Sadiq Mensah, Hassan Usman

Abstract:

The study compared the effectiveness of the Addition, Multiplication, Subtraction and Division (AMSD) and Addition, Subtraction, Multiplication and Division (ASMD) sequences of teaching these four basic operations in mathematics to primary one pupils in Katsina Local Government, Katsina State. The study determined the sequence that was more effective and most commonly adopted by teachers of the operations. One hundred (100) teachers and sixty (60) pupils from primary one were used for the study. The pupils were divided into two equal groups. The researcher taught these operations to each group separately for four weeks (4 weeks). Group one was taught using the AMSD sequence, while group two was taught using the ASMD sequence. In order to generate the needed data for the study, questionnaires and tests were administered on the samples. Data collected were analyzed and major findings were arrived at: (i) Two primary mathematics textbooks were used in all the primary schools in the area; (ii) Each of the textbooks contained the ASMD sequence; (iii) 73% of the teachers sampled adopted the ASMD sequence of teaching these operations; and (iv) Group one of the pupils (taught using the AMSD sequence) performed significantly better than their counterparts in group two (taught using the ASMD sequence). On the basis of this, the researcher concluded that the AMSD sequence was more effective in teaching the operations than the ASMD sequence. Consequently, the researcher recommended that primary school teachers, authors of primary mathematics textbooks, and curriculum planners adopt the AMSD sequence of teaching these operations.

Keywords: mathematics, high school, four basic operations, effectiveness of teaching

Procedia PDF Downloads 228
500 An Exploration of The Patterns of Transcendence in Indian and Hopkins’s Aesthetics

Authors: Lima Antony

Abstract:

In G. M. Hopkins’s poetics and aesthetics there is scope for a comparative study with Indian discourses on aesthetics, an area not adequately explored so far. This exploration will enrich the field of comparative study of diverse cultural expressions and their areas of similarity. A comparative study of aesthetic and religious experiences in diverse cultures will open up avenues for the discovery of similarities in self-experiences and their transcendence. Such explorations will reveal similar patterns in aesthetic and religious experiences. The present paper intends to prove this in the theories of Hopkins and Indian aesthetics. From the time of the Vedas, Indian sages have believed that aesthetic enjoyment could develop into a spiritual realm. From the Natyasastra of Bharata, Indian aesthetics develops and reaches its culmination in later centuries into a consciousness of union with the mystery of the Ultimate Being, especially in the Dhvanyāloka of Anandavardhana and the Locana of Abhinavagupta. The Dhvanyāloka elaborates the original ideas of rasa (mood or flavor) and dhvani (power of suggestion) in Indian literary theory and aesthetics. Hopkins was successful, like the ancient Indian alankarikas, in creating aesthetically superb patterns at various levels of sound and sense, for which he coined the term ‘inscape’. So Hopkins’s aesthetic theory becomes suitable for transcultural comparative study with Indian aesthetics, especially the dhvani theories of Anandavardhana and Abhinavagupta. Hopkins’s innovative approach to poetics and his selection of themes are quite suitable for analysis in the light of Indian literary theories. Indian philosophy views the ultimate reality, called Brahman, as the 'soul,' or inner essence, of all reality. We see in Hopkins also a search for the essence of things and the chiming of their individuality with the Ultimate Being in multidimensional patterns of sound, sense and ecstatic experience. This search culminates in the realization of a synthesis of the individual self with the Ultimate Being. This is achieved through an act of surrender of the individuality of the self before the Supreme Being. Attempts to reconcile the immanent and transcendent aspects of the Ultimate Being can be traced in Indian as well as Hopkins’s aesthetics, which can contribute to greater understanding and harmony between cultures.

Keywords: Dhvani, Indian aesthetics, transcultural studies, Rasa

Procedia PDF Downloads 123
499 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering

Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott

Abstract:

Advances in technology make it now possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample to better understand the tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. In-depth analysis of the state-of-the-art filtering methods for single cell data showed that they do, in some cases, not separate noisy and normal cells sufficiently. We introduced an algorithm that filters and clusters single cell data simultaneously without relying on certain genes or thresholds chosen by eye. It detects communities in a Shared Nearest Neighbor similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices that belong only weakly to their cluster. This strategy is based on the fact that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a Peripheral Blood Mononuclear Cells dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the ovarian post-chemotherapy data with interesting marker genes that are potentially relevant for medical research.
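
The following compact sketch illustrates the core idea only (not the authors' implementation): build a shared-nearest-neighbour graph over cells, detect communities by modularity optimization, and flag cells that are only weakly attached to their own community. The neighbourhood size, edge weighting, and the attachment threshold are illustrative assumptions, and the expression matrix here is random placeholder data.

```python
# Compact sketch of the core idea: shared-nearest-neighbour graph, modularity
# communities, and flagging of weakly attached (potentially noisy) cells.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.neighbors import NearestNeighbors

def snn_filter(expression, k=15, weak_threshold=0.5):
    nn = NearestNeighbors(n_neighbors=k).fit(expression)
    _, idx = nn.kneighbors(expression)
    neighbour_sets = [set(int(j) for j in row) for row in idx]

    g = nx.Graph()
    g.add_nodes_from(range(len(expression)))
    for i, neighbours in enumerate(neighbour_sets):
        for j in neighbours:
            if j == i:
                continue
            shared = len(neighbour_sets[i] & neighbour_sets[j])
            if shared > 0:
                g.add_edge(i, j, weight=shared / k)  # SNN similarity

    communities = greedy_modularity_communities(g, weight="weight")
    noisy = []
    for community in communities:
        for cell in community:
            inside = sum(g[cell][nb]["weight"] for nb in g[cell] if nb in community)
            total = sum(g[cell][nb]["weight"] for nb in g[cell]) or 1.0
            if inside / total < weak_threshold:  # weak clustering belonging
                noisy.append(cell)
    return communities, noisy

cells = np.random.default_rng(1).normal(size=(300, 50))  # placeholder expression matrix
clusters, flagged = snn_filter(cells)
print(len(clusters), "clusters,", len(flagged), "cells flagged as noise")
```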

Keywords: cancer research, graph theory, machine learning, single cell analysis

Procedia PDF Downloads 79
498 Value Engineering Change Proposal Application in Construction of Road-Building Projects

Authors: Mohammad Mahdi Hajiali

Abstract:

Many construction projects in Iran have been affected by limited financial resources. For a developing country such as Iran, where a large number of projects are launched every year, a method that reduces project costs would greatly help to minimize the cost of major construction projects, allowing them to be finished faster and more efficiently. Roads are a component of transportation infrastructure that is considered to have a considerable share of the country's budget; in addition, a major part of the related ministry's budget is spent on repairing, improving and maintaining roads. Value engineering is a simple and powerful methodology that over the past six decades has been successful in reducing the cost of many projects. The specific solution for applying value engineering at the project implementation stage is called the value engineering change proposal (VECP). This research applied VECP to one of the road-building projects in Iran in order to enhance the value of this kind of project and reduce its cost. In this case study, applying VECP raised the idea of using concrete pavement instead of hot-mix asphalt (HMA), together with fibers, in order to improve concrete pavement performance. The VE team decided that, to choose the best alternative, it would obtain the opinions of experts in pavement systems and use Fuzzy TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) to rank those opinions. Finally, Jointed Plain Concrete Pavement (JPCP) was selected. The group also tested concrete samples with fibers available in Iran, and the results of the experiments showed a significant increase in concrete properties such as flexural strength. In the end, it was shown that by using fiber-reinforced concrete pavement instead of asphalt pavement, a significant saving in cost and time can be achieved, as well as an increase in quality, durability, and longevity.
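
For readers unfamiliar with the ranking step, the sketch below shows classical (crisp) TOPSIS with made-up alternatives, criteria scores, and weights; the study itself used the fuzzy variant with expert-derived judgments, so this is only an illustration of how closeness to the ideal solution orders the alternatives.

```python
# Classical (crisp) TOPSIS sketch with invented scores and weights, only to
# illustrate the ranking idea; the study itself used the fuzzy variant.
import numpy as np

# rows = alternatives (e.g. HMA, JPCP, fiber-reinforced JPCP), columns = criteria
scores = np.array([[7.0, 5.0, 8.0],   # cost, durability, construction time (illustrative)
                   [5.0, 8.0, 6.0],
                   [4.0, 9.0, 5.0]])
weights = np.array([0.4, 0.4, 0.2])
benefit = np.array([False, True, False])  # cost and time: lower is better

norm = scores / np.sqrt((scores ** 2).sum(axis=0))    # vector normalisation
weighted = norm * weights
ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
anti_ideal = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
d_plus = np.sqrt(((weighted - ideal) ** 2).sum(axis=1))
d_minus = np.sqrt(((weighted - anti_ideal) ** 2).sum(axis=1))
closeness = d_minus / (d_plus + d_minus)              # higher = closer to ideal
print("ranking (best first):", np.argsort(-closeness))
```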

Keywords: road-building projects, value engineering change proposal (VECP), Jointed Plain Concrete Pavement (JPCP), Fuzzy TOPSIS, fiber-reinforced concrete

Procedia PDF Downloads 160
497 Machine Learning Strategies for Data Extraction from Unstructured Documents in Financial Services

Authors: Delphine Vendryes, Dushyanth Sekhar, Baojia Tong, Matthew Theisen, Chester Curme

Abstract:

Much of the data that inform the decisions of governments, corporations and individuals are harvested from unstructured documents. Data extraction is defined here as a process that turns non-machine-readable information into a machine-readable format that can be stored, for instance, in a database. In financial services, introducing more automation in data extraction pipelines is a major challenge. Information sought by financial data consumers is often buried within vast bodies of unstructured documents, which have historically required thorough manual extraction. Automated solutions provide faster access to non-machine-readable datasets, in a context where untimely information quickly becomes irrelevant. Data quality standards cannot be compromised, so automation requires high data integrity. This multifaceted task is broken down into smaller steps: ingestion, table parsing (detection and structure recognition), text analysis (entity detection and disambiguation), schema-based record extraction, user feedback incorporation. Selected intermediary steps are phrased as machine learning problems. Solutions leveraging cutting-edge approaches from the fields of computer vision (e.g. table detection) and natural language processing (e.g. entity detection and disambiguation) are proposed.
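
As an illustration of one intermediary step named above, entity detection, the following minimal sketch runs an off-the-shelf spaCy pipeline over an invented filing snippet; it is not the authors' models, and the downstream disambiguation and schema-based record extraction are omitted.

```python
# Minimal sketch of one intermediary step named above (entity detection),
# using spaCy's off-the-shelf pipeline rather than the authors' models.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
snippet = ("Acme Holdings reported revenue of $4.2 billion for the quarter "
           "ended March 31, 2021, according to its filing.")
doc = nlp(snippet)
for ent in doc.ents:
    # Detected entities would next be disambiguated and mapped to a schema
    print(ent.text, ent.label_)
```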

Keywords: computer vision, entity recognition, finance, information retrieval, machine learning, natural language processing

Procedia PDF Downloads 86
496 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark

Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos

Abstract:

This paper presents an investigation of the performance impacts regarding the variation of five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: The algorithm's performance depends on multiple factors, and knowing beforehand the effects of each factor becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: To explain the functional relationship between factors and performance and to develop linear predictor models for time and cost. Methods: The solid statistical principles of Design of Experiments (DoE), particularly the randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measurement of each factor's impact. Results: Our findings include prediction models and show some non-intuitive results about the small influence of cores and the neutrality of memory and disks on total execution time, and the non-significant impact of data input scale on costs, although it notably impacts the execution time.
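
For context, the sketch below shows a minimal PySpark pipeline for the kind of workload being benchmarked: distributed Naïve Bayes text classification over a large corpus. The input path, column names, and feature settings are assumptions; the cluster sizing and Design of Experiments factors studied in the paper lie outside this sketch.

```python
# Minimal PySpark sketch of the benchmarked workload: Naive Bayes text
# (language) classification. Input schema ("text", "label") is an assumption.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF, IDF
from pyspark.ml.classification import NaiveBayes

spark = SparkSession.builder.appName("language-classification").getOrCreate()
corpus = spark.read.parquet("corpus.parquet")  # columns: text (string), label (double)

pipeline = Pipeline(stages=[
    Tokenizer(inputCol="text", outputCol="tokens"),
    HashingTF(inputCol="tokens", outputCol="tf", numFeatures=1 << 18),
    IDF(inputCol="tf", outputCol="features"),
    NaiveBayes(featuresCol="features", labelCol="label"),
])
train, test = corpus.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train)
model.transform(test).select("label", "prediction").show(5)
```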

Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark

Procedia PDF Downloads 85
495 An Automatic Bayesian Classification System for File Format Selection

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach for the classification of unstructured format descriptions for the identification of file formats. The main contribution of this work is the employment of data mining techniques to support file format selection with just the unstructured text description that comprises the most important format features for a particular organisation. Subsequently, the file format identification method employs a file format classifier and associated configurations to support digital preservation experts with an estimation of the required file format. Our goal is to make use of a format specification knowledge base aggregated from different Web sources in order to select a file format for a particular institution. Using the naive Bayes method, the decision support system recommends to an expert the file format for their institution. The proposed methods facilitate the selection of file formats and improve the quality of the digital preservation process. The presented approach is meant to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and specifications of file formats. To facilitate decision-making, the aggregated information about the file formats is presented as a file format vocabulary that comprises the most common terms that are characteristic of all researched formats. The goal is to suggest a particular file format based on this vocabulary for analysis by an expert. A sample file format calculation and the calculation results, including probabilities, are presented in the evaluation section.
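
The sketch below illustrates the naive Bayes idea on which the approach rests: classify a free-text format description into a file format from a small set of labelled descriptions. The training snippets are toy examples, not the aggregated format-specification knowledge base described above.

```python
# Minimal sketch of the naive Bayes idea above: classify a free-text format
# description into a file format. Training snippets are toy examples only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

descriptions = [
    "lossless raster image with alpha channel and chunks",
    "lossy compressed photographic image, 8x8 DCT blocks",
    "page description language, fixed layout, embedded fonts",
    "plain text markup, tags for headings links and tables",
]
formats = ["PNG", "JPEG", "PDF", "HTML"]

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(descriptions, formats)

query = "fixed layout document with embedded fonts suitable for archiving"
print(classifier.predict([query])[0], classifier.predict_proba([query]).max())
```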

Keywords: data mining, digital libraries, digital preservation, file format

Procedia PDF Downloads 472
494 Lignin Phenol Formaldehyde Resole Resin: Synthesis and Characteristics

Authors: Masoumeh Ghorbani, Falk Liebner, Hendrikus W.G. van Herwijnen, Johannes Konnerth

Abstract:

Phenol formaldehyde (PF) resins are widely used as wood adhesives for a variety of industrial products such as plywood, laminated veneer lumber and others. Lignin, as a main constituent of wood, has become well-known as a potential substitute for phenol in PF adhesives because of their structural similarity. During the last decades, numerous research approaches have been carried out to substitute phenol with pulping-derived lignin, whereby the lower reactivity of resins synthesized with shares of lignin seems to be one of the major challenges. This work reports on a systematic screening of different types of lignin (plant origin and pulping process) for their suitability to replace phenol in phenolic resins. Lignins from different plant sources (softwood, hardwood and grass) were used, as the reactivity of their phenolic core units towards formaldehyde should differ significantly. Additionally, a possible influence of the pulping process was addressed by using the different types of lignin from soda, kraft and organosolv processes and various lignosulfonates (sodium, ammonium, calcium, magnesium). To determine the influence of lignin on adhesive performance, the rate of viscosity development, the bond strength development over varying hot-pressing times and other thermal properties, among others, were investigated. To evaluate the performance of the cured end product, a few selected properties were studied using the example of solid wood-adhesive bond joints, compact panels and plywood. As a main result, it was found that lignin significantly accelerates the viscosity development in adhesive synthesis. Bonding strength development during curing of the adhesives decelerated for all lignin types, while this trend was least pronounced for pine kraft lignin and spruce sodium lignosulfonate. However, the overall performance of the products prepared with the latter adhesives was able to fulfill the main standard requirements, even after exposing the products to harsh environmental conditions. Thus, a potential application can be considered for processes where reactivity is less critical but adhesive cost and product performance are essential.

Keywords: phenol formaldehyde resin, lignin phenol formaldehyde resin, ABES, DSC

Procedia PDF Downloads 210
493 Valence and Arousal-Based Sentiment Analysis: A Comparative Study

Authors: Usama Shahid, Muhammad Zunnurain Hussain

Abstract:

This research paper presents a comprehensive analysis of a sentiment analysis approach that employs valence and arousal as its foundational pillars, in comparison to traditional techniques. Sentiment analysis is an indispensable task in natural language processing that involves the extraction of opinions and emotions from textual data. The valence and arousal dimensions, representing the positivity/negativity and intensity of emotions, respectively, enable the creation of four quadrants, each representing a specific emotional state. The study seeks to determine the impact of utilizing these quadrants to identify distinct emotional states on the accuracy and efficiency of sentiment analysis, in comparison to traditional techniques. The results reveal that the valence and arousal-based approach outperforms other approaches, particularly in identifying nuanced emotions that may be missed by conventional methods. The study's findings are crucial for applications such as social media monitoring and market research, where the accurate classification of emotions and opinions is paramount. Overall, this research highlights the potential of using valence and arousal as a framework for sentiment analysis and offers invaluable insights into the benefits of incorporating specific types of emotions into the analysis. These findings have significant implications for researchers and practitioners in the field of natural language processing, as they provide a basis for the development of more accurate and effective sentiment analysis tools.
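
As a minimal illustration of the quadrant scheme described above, the following function maps a (valence, arousal) pair to one of four emotional states; the [-1, 1] scales, thresholds, and example labels are illustrative conventions rather than the paper's model.

```python
# Minimal illustration of the valence-arousal quadrants discussed above.
# Scales, thresholds and labels are illustrative conventions only.
def emotion_quadrant(valence: float, arousal: float) -> str:
    if valence >= 0 and arousal >= 0:
        return "high-arousal positive (e.g. excitement)"
    if valence < 0 and arousal >= 0:
        return "high-arousal negative (e.g. anger)"
    if valence < 0 and arousal < 0:
        return "low-arousal negative (e.g. sadness)"
    return "low-arousal positive (e.g. calm)"

print(emotion_quadrant(0.7, 0.6))    # enthusiastic review
print(emotion_quadrant(-0.5, -0.4))  # disappointed but subdued review
```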

Keywords: sentiment analysis, valence and arousal, emotional states, natural language processing, machine learning, text analysis, sentiment classification, opinion mining

Procedia PDF Downloads 56
492 Patient Safety Culture in Brazilian Hospitals from Nurse's Team Perspective

Authors: Carmen Silvia Gabriel, Daniele Bernardi da Costa, Andrea Bernardes, Sabrina Elias Mikael, Daniele da Silva Ramos

Abstract:

The goal of this quantitative study is to investigate patient safety culture from the perspective of professionals from the hospital nursing team. It was conducted in two Brazilian hospitals. The sample included 282 nurses, and data collection occurred in 2013, through the questionnaire Hospital Survey on Patient Safety Culture. Based on the assessment of the dimensions, it is stressed that, in the dimension teamwork within hospital units, 69.4% of professionals agree that when a lot of work needs to be done quickly, they work together as a team; regarding the dimension supervisor/manager expectations and actions promoting safety, 70.2% agree that their supervisor overlooks patient safety problems. Related to organizational learning and continuous improvement, 56.5% agree that there is evaluation of the effectiveness of changes after their implementation. On hospital management support for patient safety, 52.8% report that the actions of hospital management show that patient safety is a top priority. On the overall perception of patient safety, 57.2% disagree that patient safety is never compromised due to a higher amount of work to be completed. With regard to feedback and communication about error, 57.7% report that they always or usually receive such information. Relative to communication openness, 42.9% said they never or rarely feel free to question the decisions/actions of their superiors. On frequency of event reporting, 64.7% said they often or always notify events with no damage to patients. Regarding teamwork across hospital units, a similarity is noted between the percentages of agreement and disagreement on the item stating that there is good cooperation among hospital units that need to work together, at 41.4% and 40.5%, respectively. Related to the adequacy of professionals, 77.8% disagree that there is a sufficient number of employees to do the job, and 52.4% agree that shift changes are problematic for patients. On nonpunitive response to errors, 71.7% indicate that when an event is reported, it seems that the focus is on the person. On the patient safety grade of the institution, 41.6% classified it as very good. It is concluded that there are positive points in the safety culture, as well as some weaknesses, such as a punitive culture and patient safety impaired by work overload.

Keywords: quality of health care, health services evaluation, safety culture, patient safety, nursing team

Procedia PDF Downloads 277
491 The Role of Ideophones: Phonological and Morphological Characteristics in Literature

Authors: Cristina Bahón Arnaiz

Abstract:

Many Asian languages, such as Korean and Japanese, are well known for their wide use of sound symbolic words, or ideophones. This is a very particular characteristic which enriches their lexicons hugely. Ideophones are a class of sound symbolic words that utilize sound symbolism to express aspects, states, emotions, or conditions that can be experienced through the senses, such as shape, color, smell, action or movement. Ideophones have very particular characteristics in terms of sound symbolism and morphology, which distinguish them from other words. The phonological characteristics of ideophones are vowel ablaut or vowel gradation and consonant mutation. In the case of Korean, there are light vowels and dark vowels. Depending on the type of vowel that is used, the meaning will slightly change. Consonant mutation, also known as consonant ablaut, contributes to the level of intensity, emphasis, and volume of an expression. In addition to these phonological characteristics, there is one main morphological singularity, which is reduplication, and it carries the meaning of continuity, repetition, intensity, emphasis, and plurality. All these characteristics play an important role in both linguistics and literature, as they enhance the meaning of what is being expressed with incredible semantic detail, expressiveness, and rhythm. The following study will analyze the ideophones used in a single paragraph of a Korean novel, which add incredible yet subtle detail to the meaning of the words and advance the expressiveness and rhythm of the text. The results from analyzing one paragraph from a novel, after presenting the phonological and morphological characteristics of Korean ideophones, will evidence the important role that ideophones play in literature.

Keywords: ideophones, mimetic words, phonomimes, phenomimes, psychomimes, sound symbolism

Procedia PDF Downloads 119
490 Instructional Immediacy Practices in Asynchronous Learning Environment: Tutors' Perspectives

Authors: Samar Alharbi, Yota Dimitriadi

Abstract:

With the exponential growth of information and communication technologies in higher education, new online teaching strategies have become increasingly important for student engagement and learning. In particular, some institutions depend solely on asynchronous e-learning to provide courses for their students. The major challenge facing these institutions is how to improve the quality of teaching and learning in their asynchronous tools. One of the most important methods that can help e-learners enhance their social learning and social presence in an asynchronous learning setting is immediacy. This study explores tutors’ perceptions of their instructional immediacy practices as part of their communication actions in online learning environments. A mixed-methods design was used under the umbrella of a pragmatic philosophical assumption. The participants included tutors at an educational institution, a Saudi university. The participants were selected with a purposive sampling approach, choosing an institution that offered fully online courses to students. The findings of the quantitative data show the importance of teachers’ immediacy practices in an online text-based learning environment. The qualitative data contained three main themes: the tutors’ encouragement of student interaction; their promotion of class participation; and their addressing of the needs of the students. The findings from these mixed methods can provide teachers with insights into instructional designs and strategies that they can adopt in order to use e-immediacy in effective ways, thus improving their students’ online learning experiences.

Keywords: asynchronous e-learning, higher education, immediacy, tutor

Procedia PDF Downloads 177
489 Identifying Enablers and Barriers of Healthcare Knowledge Transfer: A Systematic Review

Authors: Yousuf Nasser Al Khamisi

Abstract:

Purpose: This paper presents a Knowledge Transfer (KT) framework for healthcare sectors by applying a systematic literature review process to the healthcare organizations domain to identify enablers and barriers of KT in healthcare. Methods: The paper conducted a systematic literature search of peer-reviewed papers that described key elements of KT using four databases (Medline, Cinahl, Scopus, and Proquest) for a 10-year period (1/1/2008–16/10/2017). The results of the literature review were used to build a conceptual framework of KT in healthcare organizations. The author used a systematic review of the literature, as described by Barbara Kitchenham in Procedures for Performing Systematic Reviews. Findings: The paper highlighted the impacts of using the Knowledge Management (KM) concept at a healthcare organization in controlling infectious diseases in hospitals, improving family medicine performance and enhancing quality improvement practices. Moreover, it found that good coding performance is analytically linked with a knowledge sharing network structure rich in brokerage and hierarchy rather than in density. The unavailability, or disregard, of the latest evidence on more cost-effective or more efficient delivery approaches increases healthcare costs and may lead to unintended results. Originality: The search procedure produced 12,093 results, of which 3,523 were general articles about KM and KT. The titles and abstracts of these articles were screened to segregate what is related and what is not, and 94 articles were identified by the researchers for full-text assessment. The total number of eligible articles after removing unrelated articles was 22.

Keywords: healthcare organisation, knowledge management, knowledge transfer, KT framework

Procedia PDF Downloads 114
488 Epigenetic Modifying Potential of Dietary Spices: Link to Cure Complex Diseases

Authors: Jeena Gupta

Abstract:

In today’s world of pharmaceutical products, one should not forget the healing properties of inexpensive food materials, especially spices. They are known to possess hidden pharmaceutical ingredients, imparting them the qualities of being anti-microbial, anti-oxidant, anti-inflammatory and anti-carcinogenic. Furthermore, aberrant epigenetic regulatory mechanisms like DNA methylation, histone modifications or altered microRNA expression patterns, which regulate gene expression without changing the DNA sequence, contribute significantly to the development of various diseases. Changing lifestyles and diets exert their effect by influencing these epigenetic mechanisms, which are thus the target of dietary phytochemicals. Bioactive components of plants have been in use since ages, but their potential to reverse epigenetic alterations and prevent diseases is yet to be explored. Spices are rich repositories of many bioactive constituents, which are responsible for providing their unique aroma and taste. Some spices like curcuma and garlic have been well evaluated for their epigenetic regulatory potential, but for others, it is largely unknown. We have evaluated the biological activity of phyto-active components of fennel, cardamom and fenugreek by in silico molecular modeling, in vitro and in vivo studies. Ligand-based similarity studies were conducted to identify structurally similar compounds in order to understand their biological phenomenon. The database searching was done by using fenchone from fennel, sabinene from cardamom and protodioscin from fenugreek as query molecules in different small molecule databases. Moreover, the results of the database searching showed that these compounds have potential binding with different targets found in the Protein Data Bank. Further, in addition to their role as epigenetic modifiers, in vitro studies demonstrated the antimicrobial, antifungal, antioxidant and cytotoxicity-protective effects of fenchone, sabinene and protodioscin. To the best of our knowledge, such types of studies facilitate target fishing as well as mapping the road in the drug design and discovery process for the identification of novel therapeutics.

Keywords: epigenetics, spices, phytochemicals, fenchone

Procedia PDF Downloads 126
487 Loss of Function of Only One of Two CPR5 Paralogs Causes Resistance Against Rice Yellow Mottle Virus

Authors: Yugander Arra, Florence Auguy, Melissa Stiebner, Sophie Chéron, Michael M. Wudick, Van Schepler-Luu, Sébastien Cunnac, Wolf B. Frommer, Laurence Albar

Abstract:

Rice yellow mottle virus (RYMV) causes one of the most important diseases affecting rice in Africa. The most promising strategy to reduce yield losses is the use of highly resistant varieties. The resistance gene RYMV2 is a homolog of the Arabidopsis constitutive expression of pathogenesis-related protein-5 (AtCPR5) nucleoporin gene. Resistance alleles originate from the African cultivated rice Oryza glaberrima, which is rarely cultivated, and are characterized by frameshifts or early stop codons, leading to a non-functional or truncated protein. Rice possesses two paralogs of CPR5, and the function of these genes is unclear. Here, we evaluated the role of the two rice candidate nucleoporin paralogs OsCPR5.1 (pathogenesis-related gene 5; RYMV2) and OsCPR5.2 by CRISPR/Cas9 genome editing. Despite striking sequence and structural similarity, only loss of function of OsCPR5.1 led to full resistance, while loss-of-function oscpr5.2 mutants remained susceptible. Short N-terminal deletions in OsCPR5.1 also did not lead to resistance. In contrast to Atcpr5 mutants, neither OsCPR5.1 nor OsCPR5.2 knockout mutants showed substantial growth defects. Taken together, the candidate nucleoporin OsCPR5.1, but not its close homolog OsCPR5.2, plays a specific role in susceptibility to RYMV, possibly by impairing the import of viral RNA or protein into the nucleus. Whereas gene introgression from O. glaberrima to high-yielding O. sativa varieties is impaired by strong sterility barriers and the negative impact of linkage drag, genome editing of OsCPR5.1, while maintaining OsCPR5.2 activity, provides a promising strategy to generate O. sativa elite lines that are resistant to RYMV.

Keywords: CRISPR Cas9, genome editing, knock out mutant, recessive resistance, rice yellow mottle virus

Procedia PDF Downloads 86
486 Writing Hybridized Narratives to Enact Scientific Literacy and the Myth of the Scientific Method

Authors: Ajaz Shaheen, Jawaid Ahmed Siddqui

Abstract:

The world has become thoroughly scientific and technological, and it therefore demands more from our young learners, requiring them to be more intellectually engaged in learning the sciences. A point of concern that is drawing the attention of educationists is that young learners are gradually detaching from science and scientific theory. To deal with this matter, we must arrange engaging activities that may improve the imaginative skills of our young learners. Our ongoing research program highlights the effects of such activities, which require the learners to reinterpret the scientific information they possess in the form of text. These mixed stories are what we call BioStories. Learners upload their narratives on different websites to let their peers go through their manuscripts, which, as a result, brings more refinement to their work. Moreover, stories allow the learners to read, understand and learn on a broader spectrum. We have conducted separate studies with learners from Grades 6, 9, and 12 that involve case studies and quasi-experimental designs. The conclusion we drew from the analysis of Grade 6 learners was that the alignment of stories helped them become more familiar with the scientific issue. Not only this, but the learners of the respective grade also built up their interest in the subject and developed a clear understanding of related subject topics. On the other hand, results from the study with the 8th and 9th grades support the argument that learners reflected a positive attitude toward writing scientific information. Lastly, we concluded from the 12th-grade learners that they took pride in their writing skills and built up their strength, determination, and interest. The students became more self-aware as they wrote hybridized scientific narratives in science.

Keywords: BioStories, hybridized writing, scientific literacy, scientific method

Procedia PDF Downloads 53
485 Implementation of Lean Production in Business Enterprises: A Literature-Based Content Analysis of Implementation Procedures

Authors: P. Pötters, A. Marquet, B. Leyendecker

Abstract:

The objective of this paper is to investigate different implementation approaches for the implementation of Lean production in companies. Furthermore, a structured overview of those different approaches is to be made. Therefore, the present work is intended to answer the following research question: What differences and similarities exist between the various systematic approaches and phase models for the implementation of Lean Production? To present various approaches for the implementation of Lean Production discussed in the literature, a qualitative content analysis was conducted. Within the framework of a qualitative survey, a selection of texts dealing with lean production and its introduction was examined. The analysis presents different implementation approaches from the literature, covering the descriptive aspect of the study. The study also provides insights into similarities and differences among the implementation approaches, which are drawn from the analysis of latent text contents and author interpretations. In this study, the focus is on identifying differences and similarities among systemic approaches for implementing Lean Production. The research question takes into account the main object of consideration, objectives pursued, starting point, procedure, and endpoint of the implementation approach. The study defines the concept of Lean Production and presents various approaches described in literature that companies can use to implement Lean Production successfully. The study distinguishes between five systemic implementation approaches and seven phase models to help companies choose the most suitable approach for their implementation project. The findings of this study can contribute to enhancing transparency regarding the existing approaches for implementing Lean Production. This can enable companies to compare and contrast the available implementation approaches and choose the most suitable one for their specific project.

Keywords: implementation, lean production, phase models, systematic approaches

Procedia PDF Downloads 62
484 Identification of Flooding Attack (Zero Day Attack) at Application Layer Using Mathematical Model and Detection Using Correlations

Authors: Hamsini Pulugurtha, V.S. Lakshmi Jagadmaba Paluri

Abstract:

A distributed denial of service (DDoS) attack is one of the top-rated cyber threats at present. It runs down the victim server's resources, such as bandwidth and buffer size, by preventing the server from supplying resources to legitimate clients. In this article, we propose a mathematical model of a DDoS attack; we discuss its relevance to features such as the inter-arrival time or rate of arrival of the attacking clients accessing the server. We further analyze the attack model in the context of the exhaustion of the bandwidth and buffer size of the victim server. The proposed technique uses an unsupervised learning technique, the self-organizing map, to build clusters of identical features. Lastly, the approach applies mathematical correlation and the normal probability distribution on the clusters and analyses their behavior to detect a DDoS attack. Modern interconnected systems not only link small devices exchanging personal data but also critical infrastructures reporting the status of nuclear facilities. Although this interconnection brings many benefits and advantages, it also creates new vulnerabilities and threats which can be used to mount attacks. In such sophisticated interconnected systems, the ability to detect attacks as early as possible is of paramount importance.
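
The sketch below illustrates the two stages named above: clustering per-client traffic features with a self-organizing map and then inspecting correlations within each cluster. The choice of features, the SOM size, the synthetic data, and the use of the third-party MiniSom package are assumptions made for illustration, not the authors' model.

```python
# Minimal sketch: cluster per-client traffic features with a self-organizing
# map, then inspect correlations within each cluster. Synthetic data only.
# Requires: pip install minisom
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(7)
# Illustrative per-client features: mean inter-arrival time, request rate, payload size
normal = rng.normal(loc=[0.8, 5.0, 300.0], scale=[0.2, 1.0, 50.0], size=(200, 3))
flood = rng.normal(loc=[0.01, 400.0, 40.0], scale=[0.005, 30.0, 5.0], size=(50, 3))
features = np.vstack([normal, flood])
features = (features - features.mean(axis=0)) / features.std(axis=0)  # standardise

som = MiniSom(4, 4, features.shape[1], sigma=1.0, learning_rate=0.5, random_seed=7)
som.train_random(features, 1000)

clusters = {}
for row in features:
    clusters.setdefault(som.winner(row), []).append(row)

for node, members in clusters.items():
    members = np.array(members)
    if len(members) < 5:
        continue
    corr = np.corrcoef(members, rowvar=False)
    # Strong correlation between inter-arrival time and request rate inside a
    # dense cluster is treated here as a flooding indicator
    print(node, len(members), round(float(corr[0, 1]), 2))
```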

Keywords: application attack, bandwidth, buffer correlation, DDoS distribution flooding intrusion layer, normal prevention probability size

Procedia PDF Downloads 192
483 Hash Based Block Matching for Digital Evidence Image Files from Forensic Software Tools

Authors: M. Kaya, M. Eris

Abstract:

Internet use, intelligent communication tools, and social media have all become an integral part of our daily life as a result of rapid developments in information technology. However, this widespread use increases crimes committed in the digital environment. Therefore, digital forensics, dealing with the various crimes committed in the digital environment, has become an important research topic. It is in the research scope of digital forensics to investigate digital evidence such as computers, cell phones, hard disks, DVDs, etc., and to report whether it contains any crime-related elements. There are many software and hardware tools developed for use in the digital evidence acquisition process. Today, the most widely used digital evidence investigation tools are based on the principle of finding all the data in the digital evidence that match specified criteria and presenting them to the investigator (e.g. text files, files starting with the letter A, etc.). Then, digital forensics experts carry out data analysis to figure out whether these data are related to a potential crime. Examination of a 1 TB hard disk may take hours or even days, depending on the expertise and experience of the examiner. In addition, because the process depends on the examiner’s experience, the overall result may vary, and evidence may be overlooked in different cases. In this study, a hash-based matching and digital evidence evaluation method is proposed, and it is aimed to automatically classify the evidence containing criminal elements, thereby shortening the time of the digital evidence examination process and preventing human errors.
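
A minimal sketch of the general idea, hash-based block matching, is shown below using only the Python standard library: fixed-size blocks of a forensic image are hashed and compared against a known hash list. The block size, file names, and hash-list format are illustrative assumptions, not the authors' tool.

```python
# Minimal sketch of hash-based block matching with the standard library:
# hash fixed-size blocks of a forensic image and flag blocks whose hashes
# appear in a known hash list. Block size and file names are illustrative.
import hashlib

BLOCK_SIZE = 4096  # bytes

def block_hashes(image_path, block_size=BLOCK_SIZE):
    hashes = []
    with open(image_path, "rb") as image:
        offset = 0
        while True:
            block = image.read(block_size)
            if not block:
                break
            hashes.append((offset, hashlib.sha256(block).hexdigest()))
            offset += len(block)
    return hashes

def match_against_list(image_path, known_hashes):
    """Return offsets of evidence blocks whose hashes are in the known list."""
    return [offset for offset, digest in block_hashes(image_path)
            if digest in known_hashes]

known = set(line.strip() for line in open("crime_related_hashes.txt"))
hits = match_against_list("evidence.dd", known)
print(f"{len(hits)} matching blocks, first offsets: {hits[:5]}")
```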

Keywords: block matching, digital evidence, hash list, evaluation of digital evidence

Procedia PDF Downloads 228
482 Functionality of Promotional and Advertising Texts: Pragmatic Implications for English-Arabic Translation

Authors: Jamal Gaber Abdalla

Abstract:

In business promotion and advertising, language is used intentionally to create a powerful influence over people and their behavior. In commercial and marketing activities, the choice of language to convey specific messages with the intention of influencing people is pragmatically important. Design and visual content in promotional and advertising texts also have a great persuasive impact on consumers. It is the functional combination of design, language and visual content that helps people to identify a product or service and remember it. Translating promotional and advertising texts between structurally and culturally different languages, such as English and Arabic, usually involves pragmatic/functional shifts that decide the quality of the translation. This study explores some of these shifts in translating promotional and advertising texts between English and Arabic and their implications for translation quality. The study is based on a contrastive analysis of data collected from real samples of English-Arabic translations of promotional and advertising texts. The samples cover different promotional and advertising text types and different business domains. The aim is to identify the most recurrent translation shifts and the most used translation approaches/strategies that achieve quality in view of the functional nature of promotional and advertising texts and target language culture conventions. The study shows that linguistic shifts and visual shifts are recurrent in English-Arabic translations of promotional and advertising texts. The study also shows that the most commonly used translation approaches/strategies are functional translation, domestication, and communicative translation.

Keywords: advertising, Arabic, English, functional translation, promotion

Procedia PDF Downloads 329