Search results for: text processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4660

4330 Autism Disease Detection Using Transfer Learning Techniques: Performance Comparison between Central Processing Unit vs. Graphics Processing Unit Functions for Neural Networks

Authors: Mst Shapna Akter, Hossain Shahriar

Abstract:

Neural network approaches are machine learning methods used in many domains, such as healthcare and cyber security. Neural networks are best known for dealing with image datasets. While training with images, several fundamental mathematical operations are carried out in the neural network, including derivatives, convolutions, and matrix inversion and transposition. Such operations require far more processing power than typical computer usage. The Central Processing Unit (CPU), built around serial processing, is not well suited to datasets of large images, whereas the Graphics Processing Unit (GPU) has parallel processing capabilities and therefore higher speed. This paper uses advanced neural network techniques such as VGG16, Resnet50, Densenet, Inceptionv3, Xception, Mobilenet, XGBOOST-VGG16, and our proposed models to compare CPU and GPU resources. A system for classifying autism disease using face images of autistic and non-autistic children was used to compare performance during testing. We used evaluation metrics such as accuracy, F1 score, precision, recall, and execution time. It was observed that the GPU ran faster than the CPU in all tests performed. Moreover, the accuracy of the neural network models increased on the GPU compared to the CPU.
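As an aside on the evaluation metrics named above, here is a minimal sketch (with illustrative counts, not the paper's results) of how accuracy, precision, recall, and F1 follow from a binary confusion matrix:

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute the evaluation metrics named in the abstract
    from the cells of a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative counts for an autistic / non-autistic face classifier.
acc, prec, rec, f1 = classification_metrics(tp=45, fp=5, fn=10, tn=40)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
```

Execution time, the remaining metric, is simply wall-clock duration of the same training/inference run measured on each device.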

Keywords: autism disease, neural network, CPU, GPU, transfer learning

Procedia PDF Downloads 88
4329 Using Artificial Intelligence Technology to Build the User-Oriented Platform for Integrated Archival Service

Authors: Lai Wenfang

Abstract:

This study describes how to use artificial intelligence (AI) technology to build a user-oriented platform for integrated archival service. The platform will be launched in 2020 by the National Archives Administration (NAA) in Taiwan. With the progression of information and communication technology (ICT), the NAA has built many systems to provide archival services. In order to cope with new challenges such as new ICT, artificial intelligence, and blockchain, the NAA will use natural language processing (NLP) and machine learning (ML) to build a training model and propose suggestions based on the data sent to the platform. The NAA expects that the platform will not only automatically inform the sending agencies’ staff which record catalogues violate the transfer or destruction rules, but also use the model to find details hidden in the catalogues and suggest to NAA staff whether records should be retained, shortening the auditing time. The platform keeps all users’ browsing trails, so that it can predict which kinds of archives a user may be interested in, recommend search terms through visualization, and inform users of newly arrived archives. In addition, according to the Archives Act, NAA staff must spend a great deal of time marking or removing personal data, classified data, etc., before archives are provided. To upgrade the archival access service process, the platform will use text recognition patterns to black out such data automatically; staff only need to correct errors and upload the corrected version, and as the platform learns, its accuracy will improve. In short, the purpose of the platform is to advance government digital transformation and implement the vision of a service-oriented smart government.
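The automatic blackout step described above can be sketched with simple pattern matching; the patterns below are hypothetical stand-ins, since the actual platform would learn far richer redaction rules:

```python
import re

# Hypothetical redaction patterns; real archival rules would be far richer.
PATTERNS = [
    re.compile(r"\b[A-Z]\d{9}\b"),         # national-ID-like tokens
    re.compile(r"\b\d{2,4}-\d{6,8}\b"),    # phone-like numbers
    re.compile(r"\b[\w.]+@[\w.]+\.\w+\b"), # e-mail addresses
]

def black_out(text, mask="█"):
    """Replace each match with a run of mask characters of equal length,
    mimicking the platform's automatic blackout step."""
    for pat in PATTERNS:
        text = pat.sub(lambda m: mask * len(m.group()), text)
    return text

print(black_out("Contact A123456789 at lai@naa.gov.tw or 02-12345678."))
```

A learning-based system would replace this fixed pattern list with a model retrained on staff corrections, which is how accuracy improves over time.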

Keywords: artificial intelligence, natural language processing, machine learning, visualization

Procedia PDF Downloads 149
4328 Information Retrieval for Kafficho Language

Authors: Mareye Zeleke Mekonen

Abstract:

The Kafficho language poses distinct issues for information retrieval because of its restricted resources and dearth of standardized methods. In this endeavor, with the cooperation and support of linguists and native speakers, we investigate the creation of information retrieval systems specifically designed for the Kafficho language. The Kafficho information retrieval system allows Kafficho speakers to access information easily in an efficient and effective way. Our objective is to conduct an information retrieval experiment using 220 Kafficho text files and fifteen sample questions. Tokenization, normalization, stop word removal, stemming, and other data pre-processing tasks, together with term weighting, were prerequisites for the vector space model to represent each document and a particular query. The three well-known measurement metrics we used for our work were precision, recall, and F-measure, with values of 87%, 28%, and 35%, respectively. This demonstrates how the Kafficho information retrieval system performed while utilizing the vector space paradigm.
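The retrieval pipeline described above (term weighting plus the vector space model) can be sketched as follows; the toy documents are hypothetical stand-ins for the 220 Kafficho files, and pre-processing steps such as stemming are assumed to have already been applied:

```python
import math
from collections import Counter

docs = {  # toy stand-ins for the pre-processed Kafficho text files
    "d1": "kafficho language history text",
    "d2": "language retrieval system text text",
    "d3": "coffee ceremony culture",
}

N = len(docs)
df = Counter(t for d in docs.values() for t in set(d.split()))
idf = {t: math.log(N / df[t]) for t in df}

def vectorize(text):
    """tf * idf weights over the corpus vocabulary (unknown terms get 0)."""
    tf = Counter(text.split())
    return {t: tf[t] * idf[t] for t in tf if t in idf}

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

doc_vecs = {name: vectorize(text) for name, text in docs.items()}
q = vectorize("kafficho history")
ranking = sorted(doc_vecs, key=lambda n: cosine(q, doc_vecs[n]), reverse=True)
print(ranking)  # d1 should rank first for this query
```

Precision, recall, and F-measure are then computed by comparing the ranked results against relevance judgments for each of the sample questions.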

Keywords: Kafficho, information retrieval, stemming, vector space

Procedia PDF Downloads 24
4327 Analysis and Improvement of Efficiency for Food Processing Assembly Lines

Authors: Mehmet Savsar

Abstract:

Several factors affect the productivity of Food Processing Assembly Lines (FPAL). Engineers and line managers often do not recognize some of these factors and underutilize their production/assembly lines. In this paper, a particular food processing assembly line is studied in detail, and procedures are presented to illustrate how the productivity and efficiency of such lines can be increased. The assembly line considered produces ten different types of freshly prepared salads on the same line, a configuration known as a mixed-model assembly line. Problems causing delays and inefficiencies on the line are identified. Line balancing and related tools are used to increase line efficiency and minimize balance delays. The procedure and the approach presented in this paper can be useful for operations managers and industrial engineers dealing with similar assembly lines in the food processing industry.
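The line-efficiency and balance-delay calculations underlying such an analysis can be sketched as follows; the station times and cycle time are illustrative assumptions, not the paper's data:

```python
def line_metrics(station_times, cycle_time):
    """Line efficiency and balance delay for an assembly line,
    given the total task time assigned to each workstation."""
    n = len(station_times)
    total_work = sum(station_times)
    efficiency = total_work / (n * cycle_time)
    balance_delay = 1.0 - efficiency
    return efficiency, balance_delay

# Hypothetical five-station salad line, cycle time 60 s per unit.
eff, delay = line_metrics([55, 48, 60, 50, 42], cycle_time=60)
print(f"efficiency={eff:.1%}, balance delay={delay:.1%}")
```

Line balancing then reassigns tasks between stations (subject to precedence constraints) so that station times approach the cycle time and the balance delay shrinks.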

Keywords: assembly lines, line balancing, production efficiency, bottleneck

Procedia PDF Downloads 357
4326 Corpus-Based Neural Machine Translation: Empirical Study Multilingual Corpus for Machine Translation of Opaque Idioms - Cloud AutoML Platform

Authors: Khadija Refouh

Abstract:

Culture-bound expressions have been a bottleneck for Natural Language Processing (NLP) and comprehension, especially in the case of machine translation (MT). In the last decade, the field of machine translation has greatly advanced. Neural machine translation (NMT) has recently achieved considerable improvements in translation quality, outperforming previous traditional translation systems in many language pairs. NMT applies artificial intelligence (AI) and deep neural networks to language processing. Despite this development, serious challenges remain when NMT translates culture-bound expressions, especially for low-resource language pairs such as Arabic-English and Arabic-French, which is not the case with well-established language pairs such as English-French. Machine translation of opaque idioms from English into French is likely to be more accurate than translating them from English into Arabic. For example, the Google Translate application translated the sentence “What a bad weather! It rains cats and dogs.” to “يا له من طقس سيء! تمطر القطط والكلاب” in the target language Arabic, which is an inaccurate literal translation. The translation of the same sentence into the target language French was “Quel mauvais temps! Il pleut des cordes.”, where the Google Translate application used the accurate corresponding French idiom. This paper aims to perform NMT experiments towards better translation of opaque idioms using a high-quality, clean multilingual corpus. This corpus will be collected analytically from human-generated idiom translations. AutoML Translation, a Google neural machine translation platform, is used as a custom translation model to improve the translation of opaque idioms. The automatic evaluation of the custom model will be compared to Google NMT using the Bilingual Evaluation Understudy (BLEU) score. BLEU is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Human evaluation is integrated to test the reliability of the BLEU score. The researcher will examine syntactical, lexical, and semantic features using Halliday's functional theory.
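A simplified sketch of the BLEU metric mentioned above (modified n-gram precision with a brevity penalty, single reference, up to bigrams); production evaluations use smoothed, corpus-level BLEU rather than this toy version:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    """Simplified sentence-level BLEU: geometric mean of modified n-gram
    precisions up to max_n, times a brevity penalty (single reference)."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        c, r = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(cnt, r[g]) for g, cnt in c.items())
        total = max(sum(c.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

score = bleu("il pleut des cordes", "il pleut des cordes")
print(round(score, 2))  # identical sentences score 1.0
```

Because BLEU only counts surface n-gram overlap, an idiomatically correct but lexically different translation scores low, which is exactly why the paper pairs it with human evaluation.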

Keywords: multilingual corpora, natural language processing (NLP), neural machine translation (NMT), opaque idioms

Procedia PDF Downloads 111
4325 Linguistic Analysis of Holy Scriptures: A Comparative Study of Islamic Jurisprudence and the Western Hermeneutical Tradition

Authors: Sana Ammad

Abstract:

The traditions of linguistic analysis in Islam and Christianity have developed independently of each other, in light of the social developments specific to their historical contexts. Recently, however, an increasing number of Muslim academics educated in the West have tried to apply the Western tradition of linguistic interpretation to the Qur’anic text while completely disregarding the Islamic linguistic tradition used and developed by traditional scholars over the centuries. The aim of the paper is to outline the linguistic tools and methods used by traditional Islamic scholars for the purpose of interpreting the Holy Qur’an and to shed light on how they contribute to a better understanding of the text compared to their Western counterparts. This paper carries out a descriptive-comparative study of the linguistic tools developed and perfected by traditional scholars in Islam for the textual analysis of the Qur’an, as described in the authentic works of Usul Al Fiqh (jurisprudence), and the principles of textual analysis employed by the Western hermeneutical tradition for the study of the Bible. First, it briefly outlines the independent historical development of the two traditions, emphasizing the final normative shape each has taken. Then it draws a comparison of the two traditions, highlighting the similarities and differences between them. In the end, the paper demonstrates the level of academic excellence achieved by traditional linguistic scholars in their efforts to develop appropriate tools of textual interpretation and how these tools are more suitable for interpreting the Qur’an compared to the Western principles. Since interpreters in both traditions aim to attain an objective understanding of the Scriptures, the emphasis of the paper is on how well the Islamic method of linguistic interpretation contributes to an objective understanding of the Qur’anic text. The paper concludes with the following findings: the Western hermeneutical tradition of linguistic analysis developed within the Western historical context, whereas the Islamic method of linguistic analysis is more highly developed and complex and better serves the purpose of an objective understanding of the Holy text.

Keywords: Islamic jurisprudence, linguistic analysis, textual interpretation, western hermeneutics

Procedia PDF Downloads 295
4324 A Survey on Smart Security Mechanism Using Graphical Passwords

Authors: Aboli Dhanavade, Shweta Bhimnath, Rutuja Jumale, Ajay Nadargi

Abstract:

Security for our personal belongings and information is a basic need. Standard human-computer interaction approaches cannot be applied directly to authentication. An important usability goal for an authentication system is to support users in selecting strong passwords. Users often select text passwords that are easy to remember, but such passwords are easier for attackers to guess. The human brain is better at remembering pictures than textual characters, so graphical passwords have been designed as an alternative. However, graphical passwords are still immature, and conventional password schemes are also vulnerable to shoulder-surfing attacks; many shoulder-surfing-resistant graphical password schemes have therefore been proposed. We analyze the security and usability of the proposed schemes and show their resistance to shoulder-surfing and different accidental logins.

Keywords: shoulder-surfing, security, authentication, text-passwords

Procedia PDF Downloads 333
4323 An Ideational Grammatical Metaphor of Narrative History in Chinua Achebe's 'There Was a Country'

Authors: Muhammed-Badar Salihu Jibrin, Chibabi Makedono Darlington

Abstract:

This paper studies the Ideational Grammatical Metaphor (IGM) of narrative history in Chinua Achebe’s There Was a Country. It begins with narrative history as a recent genre that has grown out of conventional historical writing. In order to explore this linguistic phenomenon using the particular lexico-grammatical tool of IGM, the theoretical background is examined based on Hallidayan Systemic Functional Linguistics. Furthermore, the study considers the applicability of IGM to Part 4 of Achebe’s historical text, with recourse to the concept of congruence in IGM and the research questions, before formulating a working methodology. The analysis of Achebe’s memoir is presented in tabular form, combining quantitative content analysis with a qualitative research technique, and sampling the metaphorical and congruent wordings realized through nominalization and process types. Frequencies and percentages are given for each subheading of the text. The findings show that material and relational process types are dominant. This confirms M.A.K. Halliday and C.M.I.M. Matthiessen’s suggestion that IGM should show dominance of the material process type, and implies that IGM can be an effective tool for the analysis of a narrative historical text. In conclusion, it is observed that IGM carries not only a grammatical function but also an ideological role in shaping historical discourse within the narrative mode between writers and readers.

Keywords: ideational grammatical metaphor, nominalization, narrative history, memoir, dominance

Procedia PDF Downloads 194
4322 Clicking Based Graphical Password Scheme Resistant to Spyware

Authors: Bandar Alahmadi

Abstract:

The fact that people tend to remember pictures better than text motivates researchers to develop graphical passwords as an alternative to textual passwords, in which users prove their identity by clicking on pictures rather than typing alphanumeric text. In this paper, we present a graphical password scheme that is resistant to shoulder-surfing and spyware attacks. The proposed scheme introduces a clicking technique applied to chosen images. First, the users choose a set of images; the images are then included in a grid where users can click in the cells around each image, and the location of the clicks and the number of clicks are saved. As a result, the proposed scheme is safe from shoulder-surfing and spyware attacks.
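The enrolment and verification steps described above might be sketched as follows; the salted-hash storage and the image/cell encoding are illustrative assumptions, not the authors' exact scheme:

```python
import hashlib
import os

def enroll(clicks, salt=None):
    """Store a salted hash of the click sequence (cell indices around
    each chosen image), so the raw pattern is never kept in clear."""
    salt = salt if salt is not None else os.urandom(16)
    encoded = ",".join(f"{img}:{cell}" for img, cell in clicks).encode()
    digest = hashlib.sha256(salt + encoded).hexdigest()
    return salt, digest

def verify(clicks, salt, digest):
    return enroll(clicks, salt)[1] == digest

secret = [("img3", 5), ("img1", 2), ("img7", 0)]  # hypothetical click sequence
salt, digest = enroll(secret)
print(verify(secret, salt, digest))          # True
print(verify([("img3", 5)], salt, digest))   # False
```

Hashing the sequence means a spyware capture of the stored credential alone does not reveal which cells the user clicks, though resisting click-logging itself requires the scheme's on-screen indirection.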

Keywords: security, password, authentication, attack, applications

Procedia PDF Downloads 141
4321 Readability Facing the Irreducible Otherness: Translation as a Third Dimension toward a Multilingual Higher Education

Authors: Noury Bakrim

Abstract:

From the point of view of language morphodynamics, the interpretative Readability of the text-result (the stasis) is not the external hermeneutics of its various potential reading events but the paradigmatic, semantic immanence of its dynamics. In other words, interpretative Readability articulates the potential tension between projection (the intentionality of the discursive event) and the result (Readability within the syntagmatic stasis). We then consider that translation represents much more a metalinguistic conversion of neurocognitive bilingual sub-routines and modular relations than a semantic equivalence. Furthermore, actualizing Readability (the process of rewriting a target text within a target language/genre) builds upon the descriptive level between the generative syntax/semantic form and its paradigmatic potential translatability. Translation corpora reveal evidence of a certain focus on the positivist stasis of the source text at the expense of its interpretative Readability. For instance, Fluchère's brilliant translation of Miller's Tropic of Cancer into French unconsciously realizes an inversion of the hierarchical relations between Life Thought and Fable: from Life Thought (fable) into Fable (Life Thought). We could likewise regard Bernard Kreiss's translation of Canetti's Die englischen Jahre (Les années anglaises) as an inversion of the historical scale, from individual history into Hegelian history. In order to describe and test both the translation process and its result, we focus on pedagogical practice, which enables various principles grounded in interpretative/actualizing Readability. Henceforth, establishing the analytical uttering dynamics of the source text can be widened by other practices. The reversibility test (target text back to source text) or comparison with a second translation in a third language (tertium comparationis A/B and A/C) points out the evidence of an impossible event. Therefore, it does not imply an idealistic/absolute uttering source but the irreducible/non-reproducible intentionality of its production event within the experience of world/discourse. The aim of this paper is to conceptualize translation as the tension between interpretative and actualizing Readability in a new approach grounded in the morphodynamics of language and Translatability (mainly into French) within literary and non-literary texts, articulating theoretical and described pedagogical corpora.

Keywords: readability, translation as deverbalization, translation as conversion, Tertium Comparationis, uttering actualization, translation pedagogy

Procedia PDF Downloads 144
4320 High Level Synthesis of Canny Edge Detection Algorithm on Zynq Platform

Authors: Hanaa M. Abdelgawad, Mona Safar, Ayman M. Wahba

Abstract:

Real-time image and video processing is in demand in many computer vision applications, e.g., video surveillance, traffic management, and medical imaging. Processing these video applications requires high computational power; therefore, the optimal solution is the collaboration of the CPU and hardware accelerators. In this paper, a Canny edge detection hardware accelerator is proposed. Canny edge detection is one of the common blocks in the pre-processing phase of image and video processing pipelines. Our approach targets offloading the Canny edge detection algorithm from the processing system (PS) to programmable logic (PL), taking advantage of the High Level Synthesis (HLS) tool flow to accelerate the implementation on the Zynq platform. The resulting implementation enables up to a 100x performance improvement through hardware acceleration. CPU utilization drops, and the frame rate reaches 60 fps for a 1080p full HD input video stream.
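For reference, the gradient-magnitude stage at the heart of Canny edge detection can be sketched in pure Python; a full Canny pipeline adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding, and an HLS implementation would map these loops onto hardware:

```python
def sobel_magnitude(img):
    """Gradient-magnitude stage of Canny edge detection on a 2-D
    grayscale image given as a list of lists (pure-Python sketch)."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal Sobel kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical Sobel kernel
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: dark left half, bright right half.
img = [[0, 0, 255, 255]] * 4
mag = sobel_magnitude(img)
print(mag[1])  # response peaks at the step, zero elsewhere
```

The per-pixel independence of this stencil computation is precisely what makes the algorithm attractive for pipelined, parallel hardware.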

Keywords: high level synthesis, canny edge detection, hardware accelerators, computer vision

Procedia PDF Downloads 455
4319 Adaptation in Translation of 'Christmas Every Day' Short Story by William Dean Howells

Authors: Mohsine Khazrouni

Abstract:

The present study is an attempt to highlight the importance of adaptation in translation. To convey the message, the translator needs to take into account not only the text but also extra-linguistic factors such as the target audience. The present paper claims that adaptation is an unavoidable translation strategy when dealing with texts that are heavy with religious and cultural themes. The translation task becomes even more challenging when dealing with children’s literature, as the audience is children, whose comprehension, experience, and world knowledge are limited. The study uses the Arabic translation of the short story ‘Christmas Every Day’ as a case study. The short story is translated, and the pragmatic problems involved are discussed, with a focus on the issue of adaptation, i.e., adapting the source text to the target-language audience’s social and cultural environment.

Keywords: pragmatic adaptation, Arabic translation, children's literature, equivalence

Procedia PDF Downloads 183
4318 Quantum Entangled States and Image Processing

Authors: Sanjay Singh, Sushil Kumar, Rashmi Jain

Abstract:

Quantum computing is a new trend in computational theory, and a quantum mechanical system has several useful properties, such as entanglement. We aim to store data concerning the structure and content of a simple image in a quantum system. Consider an array of n qubits which we propose to use as our memory storage. In recent years, classical image processing has begun to shift toward quantum image processing, which is an elegant approach to overcoming the problems of its classical counterpart. Image storage, retrieval, and processing on quantum machines is an emerging area. Although quantum machines do not yet exist in physical reality, theoretical algorithms developed on the basis of quantum entangled states give new insights into processing classical images in the quantum domain. In the present work, we give a brief overview of how entangled states can be useful for quantum image storage and retrieval. We discuss the properties of the tripartite Greenberger-Horne-Zeilinger (GHZ) and W states and their usefulness for storing shapes that consist of three vertices. We also propose techniques to store shapes having more than three vertices.
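The GHZ and W states discussed above can be written down directly as amplitude maps; the sketch below is a toy classical simulation illustrating why the two states behave differently under measurement (it is not the authors' storage encoding):

```python
from math import sqrt, isclose

# Three-qubit GHZ and W states as amplitude maps over basis strings.
GHZ = {"000": 1 / sqrt(2), "111": 1 / sqrt(2)}
W   = {"001": 1 / sqrt(3), "010": 1 / sqrt(3), "100": 1 / sqrt(3)}

def norm(state):
    return sqrt(sum(a * a for a in state.values()))

def measure_qubit0(state, outcome):
    """Post-measurement (unnormalised) amplitudes after observing
    qubit 0 in the given outcome: GHZ collapses fully, W does not."""
    return {b: a for b, a in state.items() if b[0] == outcome}

assert isclose(norm(GHZ), 1.0) and isclose(norm(W), 1.0)
print(measure_qubit0(GHZ, "0"))  # only |000> survives
print(measure_qubit0(W, "0"))    # |001> and |010> both survive
```

This robustness of W-state entanglement under the loss of one qubit is one reason the two state families suit different storage roles.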

Keywords: Greenberger-Horne-Zeilinger, image storage and retrieval, quantum entanglement, W states

Procedia PDF Downloads 279
4317 Vibroacoustic Modulation with Chirp Signal

Authors: Dong Liu

Abstract:

By sending a high-frequency probe wave and a low-frequency pump wave into a specimen, the vibroacoustic modulation (VAM) method evaluates a defect’s severity according to the modulation index of the received signal. Many studies have experimentally proved the significant sensitivity of the modulation index to tiny contact-type defects. However, it has also been found that the modulation index is highly affected by the frequency of the probe or pump waves. Therefore, the chirp signal has been introduced to the VAM method, since it can assess multiple frequencies in a relatively short time, enhancing the robustness of the method. Consequently, the signal processing method needs to be modified accordingly. Various studies have utilized different algorithms, or combinations of algorithms, for processing the VAM signal under chirp excitation. These signal processing methods were compared and used for processing VAM signals acquired from steel samples.
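A minimal sketch of chirp excitation and modulation-index estimation; the sweep range, pump frequency, and envelope-from-peaks method below are illustrative assumptions, not the study's actual processing chain:

```python
import math

fs, T = 100_000, 0.1          # sample rate (Hz) and duration (s)
f0, f1 = 800.0, 1200.0        # chirped probe sweep (Hz), an assumption
f_pump, m_true = 10.0, 0.3    # pump frequency and modulation depth

def sample(i):
    t = i / fs
    k = (f1 - f0) / T                       # linear chirp rate
    probe = math.sin(2 * math.pi * (f0 * t + 0.5 * k * t * t))
    return (1.0 + m_true * math.sin(2 * math.pi * f_pump * t)) * probe

x = [sample(i) for i in range(int(fs * T))]

# Envelope via per-window peaks (window ~ one probe cycle), then the
# modulation index as (Emax - Emin) / (Emax + Emin).
win = 100
env = [max(abs(v) for v in x[i:i + win]) for i in range(0, len(x), win)]
m_est = (max(env) - min(env)) / (max(env) + min(env))
print(round(m_est, 3))
```

In practice the modulation index is usually estimated in the frequency domain from the pump-induced sidebands around the probe component, which the chirp sweeps across the band of interest.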

Keywords: vibroacoustic modulation, nonlinear acoustic modulation, nonlinear acoustic NDT&E, signal processing, structural health monitoring

Procedia PDF Downloads 73
4316 Performing a Chamber Theatre Adaptation of Nick Joaquin's 'the Summer Solstice'

Authors: Allen B. Baylosis

Abstract:

Chamber Theatre has been one of the least articulated staging devices in the field of theatre and performance studies. This creative exploratory-descriptive study responds to this gap by employing the staging technique in a Chamber Theatre production based on Nick Joaquin’s The Summer Solstice. Specifically, this study seeks to understand three processes involved in the Chamber Theatre creative thesis production of The Summer Solstice as performance: the performance of the theatre-maker, the performance of the spect-actors, and the performance of the spectators. For this purpose, the theatre-maker describes the creative process of transforming The Summer Solstice text into a Chamber Theatre production, from text to staging. The theatre-maker also analyzes the performers’ experiences and the spectators’ responses as they participate in a Chamber Theatre performance. In doing so, the theatre-maker collects qualitative data from seventeen (17) performers and qualitative feedback from twenty (20) spectators. For the mode of data analysis, this study employs Rancière’s concept of the Emancipated Spectator (2008) and Schechner’s Performance Theory (1988). The study’s findings examine how the theatre-maker, the performers, and the spectators become distant viewers of their respective restored-behavior performances. Through these viewed performances, this study suggests that it is possible to ascertain a reasonable definition of purpose for Chamber Theatre. Hence, despite the existence of other modern staging devices in the field of theatre and performance studies, this study concludes that Chamber Theatre remains a relevant staging technique.

Keywords: adaptation of text, chamber theatre, experimental theater, oral interpretation

Procedia PDF Downloads 129
4315 Kannada Handwritten Character Recognition by Edge Hinge and Edge Distribution Techniques Using Manhattan and Minimum Distance Classifiers

Authors: C. V. Aravinda, H. N. Prakash

Abstract:

In this paper, we convey a fusion approach and the state of the art pertaining to South Indian language (SIL) character recognition systems. In the first step, the text is preprocessed and normalized to perform the text identification correctly. The second step involves extracting relevant and informative features. The third step implements the classification decision. The three stages involved are data acquisition and preprocessing, feature extraction, and classification. We concentrate on two techniques to obtain features: feature extraction and feature selection. The edge-hinge distribution is a feature that characterizes the changes in direction of a script stroke in handwritten text. It is extracted by means of a window that is slid over an edge-detected binary handwriting image. Whenever the mid pixel of the window is on, the two edge fragments (i.e., connected sequences of pixels) emerging from this mid pixel are considered; their directions are measured and stored as pairs, and a joint probability distribution is obtained from a large sample of such pairs. Despite continuous effort, handwriting identification remains a challenging issue, because different approaches use different varieties of features, with differing results. Therefore, our study focuses on handwriting recognition based on feature selection to simplify the feature extraction task, optimize classification system complexity, reduce running time, and improve classification accuracy.
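The edge-hinge extraction described above can be sketched as follows; this toy version takes each hinge leg to be a single neighbouring pixel rather than a traced edge fragment of several pixels:

```python
from collections import Counter

# 8-neighbour offsets indexed by direction 0..7 (E, NE, N, NW, W, SW, S, SE).
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def edge_hinge(img):
    """Joint distribution of the direction pairs of edge fragments
    meeting at each 'on' pixel of a binary edge image (simplified:
    hinge legs are single neighbouring pixels, not traced runs)."""
    hist = Counter()
    h, w = len(img), len(img[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not img[y][x]:
                continue
            legs = [d for d, (dy, dx) in enumerate(DIRS) if img[y + dy][x + dx]]
            for i in range(len(legs)):
                for j in range(i + 1, len(legs)):
                    hist[(legs[i], legs[j])] += 1
    total = sum(hist.values())
    return {pair: c / total for pair, c in hist.items()} if total else {}

# An L-shaped stroke: its corner pixel hinges between the E and S directions.
img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 0, 0],
       [0, 0, 0, 0]]
print(edge_hinge(img))
```

The normalized histogram serves as the feature vector that Manhattan or minimum-distance classifiers then compare between writers.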

Keywords: word segmentation and recognition, character recognition, optical character recognition, hand written character recognition, South Indian languages

Procedia PDF Downloads 473
4314 COVID_ICU_BERT: A Fine-Tuned Language Model for COVID-19 Intensive Care Unit Clinical Notes

Authors: Shahad Nagoor, Lucy Hederman, Kevin Koidl, Annalina Caputo

Abstract:

Doctors’ notes reflect their impressions, attitudes, clinical sense, and opinions about patients’ conditions and progress, as well as other information that is essential for doctors’ daily clinical decisions. Despite their value, clinical notes are insufficiently researched within the language processing community. Automatically extracting information from unstructured text data is known to be a difficult task, as opposed to dealing with structured information such as vital physiological signs, images, and laboratory results. The aim of this research is to investigate how Natural Language Processing (NLP) and machine learning techniques applied to clinician notes can assist doctors’ decision-making in the Intensive Care Unit (ICU) for coronavirus disease 2019 (COVID-19) patients. The hypothesis is that clinical outcomes like survival or mortality can be useful in influencing the judgement of clinical sentiment in ICU clinical notes. This paper introduces two contributions. First, we introduce COVID_ICU_BERT, a fine-tuned version of clinical transformer models that can reliably predict clinical sentiment for notes of COVID-19 patients in the ICU. We train the model on clinical notes for COVID-19 patients, a type of note not previously seen by clinicalBERT or Bio_Discharge_Summary_BERT. The model, which is based on clinicalBERT, achieves higher predictive accuracy (Acc 93.33%, AUC 0.98, and precision 0.96). Second, we perform data augmentation using clinical contextual word embeddings based on a pre-trained clinical model to balance the samples in each class of the data (survived vs. deceased patients). Data augmentation improves the accuracy of prediction slightly (Acc 96.67%, AUC 0.98, and precision 0.92).
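The class-balancing augmentation described above can be sketched in miniature; the synonym table below is a hypothetical stand-in for the contextual-embedding substitutions the paper derives from a pre-trained clinical model:

```python
import random
from collections import Counter

# Hypothetical stand-in for contextual-embedding neighbours; the paper
# derives substitutes from a pre-trained clinical model instead.
NEIGHBOURS = {
    "fever": ["pyrexia"],
    "breathing": ["respiration"],
    "severe": ["acute"],
}

def augment(note, rng):
    """Produce a variant note by randomly substituting known tokens."""
    tokens = [rng.choice(NEIGHBOURS[t])
              if t in NEIGHBOURS and rng.random() < 0.5 else t
              for t in note.split()]
    return " ".join(tokens)

def balance(notes, labels, rng=None):
    """Oversample the minority class with substituted copies until
    both classes (survived / deceased) have equal counts."""
    rng = rng or random.Random(0)
    counts = Counter(labels)
    minority = min(counts, key=counts.get)
    pool = [n for n, l in zip(notes, labels) if l == minority]
    while counts[minority] < max(counts.values()):
        notes.append(augment(rng.choice(pool), rng))
        labels.append(minority)
        counts[minority] += 1
    return notes, labels

notes = ["severe fever noted", "breathing stable", "fever resolved"]
labels = ["deceased", "survived", "survived"]
notes, labels = balance(notes, labels)
print(Counter(labels))  # both classes now equal
```

Balancing before fine-tuning keeps the cross-entropy loss from being dominated by the majority outcome class.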

Keywords: BERT fine-tuning, clinical sentiment, COVID-19, data augmentation

Procedia PDF Downloads 175
4313 Reading against the Grain: Transcodifying Stimulus Meaning

Authors: Aba-Carina Pârlog

Abstract:

On translating, reading against the grain results in a wrong effect in the TL. Quine’s ocular irradiation plays an important part in the process of understanding and translating a text. The various types of textual radiation must be rendered by the translator by paying close attention to the types of field that produce them. The literary work must be seen as an indirect cause of an expressive effect in the TL that is supposed to be similar to the effect it has in the SL. If the adaptive transformative codes are so flexible that they encourage the translator to repeatedly leave out parts of the original work, then a subversive pattern emerges which changes the entire book. In this case, the translator is a writer per se who decides what goes in and out of the book, how the style is to be ciphered, and what elements of ideology are to be highlighted. Figurative language must not be flattened for the sake of clarity or naturalness. Missing figurative elements make the translated text less interesting, less challenging, and less vivid, which reflects poorly on the writer. There is a close connection between style and the writer’s person. If the writer’s style is much changed in a translation, the translation is useless, as the original writer and his or her imaginative world can no longer be discovered. A different writer then appears, and his or her creation surfaces. Changing meaning, considered a “negative shift” in translation, defines one of the faulty transformative codes used by some translators. It is a dangerous tool which leads to adaptations that sometimes reflect the original less than the reader would wish. It contradicts the very essence of the process of translation, which is that of making a work available in a foreign language. Employing speculative aesthetics at the level of a text indicates the wish to create manipulative or subversive effects in the translated work. This is generally achieved by adding new words or connotations, creating new figures of speech, or using explicitations. The irradiation patterns of the original work are neglected, and the translator creates new meanings, implications, emphases, and contexts. Again, s/he turns into a new author who enjoys the freedom of expressing his or her ideas without the constraints of the original text. The stimulus meaning of a text is very important for a translator, which is why reading against the grain is inadvisable during the process of translation. By paying attention to the waves of the SL input, a faithful literary work is produced which does not contradict general knowledge about foreign cultures and civilizations. Following personal common sense is essential in the field of translation, as well as everywhere else.

Keywords: stimulus meaning, substance of expression, transformative code, translation

Procedia PDF Downloads 427
4312 A Religious Book Translation by Pragmatic Approach: The Vajrachedika-Prajna-Paramita Sutra

Authors: Yoon-Cheol Park

Abstract:

This research examines the Chinese character-Korean language translation of the Vajrachedika-prajna-paramita sutra through a pragmatic approach. The background of this research is that no previous research has examined the translation of the Vajrachedika-prajna-paramita sutra from a pragmatic perspective until now. Even though the sutra is composed of conversational structures between Buddha and his disciple, unlike other Buddhist sutras, most of its translations show traces of literal translation and still overlook the pragmatic elements in it. Accordingly, it is meaningful to examine its messages through the speaker-hearer relation and the relation between speaker intention and utterance meaning. Practically, the Vajrachedika-prajna-paramita sutra includes pragmatic elements such as speech acts, presupposition, conversational implicature, the cooperative principle and politeness. First, the speech acts in the sutra text require the translation to convey the obvious performative meanings of the language into the target text. Presupposition in the dialogues is conveyed by paraphrasing or substituting abstruse language with easy expressions. Conversational implicature in utterances makes it possible to understand the meanings of holy words by relying on utterance contexts. In particular, relevance increases the readability of the translation owing to previous utterance contexts. Finally, politeness in the target text is conveyed in natural stylistics through the honorific system of the Korean language. These elements mean that the pragmatic approach can function as a useful device for conveying holy words in a specific, practical and direct way depending on utterance contexts. Therefore, we expect that taking a pragmatic approach to translating the Vajrachedika-prajna-paramita sutra will provide a theoretical foundation for seeking better translation methods than the literal translations of the past. It also implies that the translation of a Buddhist sutra needs to convey its messages by translation methods which take into account the characteristics of a sutra text like the Vajrachedika-prajna-paramita.

Keywords: buddhist sutra, Chinese character-Korean language translation, pragmatic approach, utterance context

Procedia PDF Downloads 381
4311 Motion Estimator Architecture with Optimized Number of Processing Elements for High Efficiency Video Coding

Authors: Seongsoo Lee

Abstract:

Motion estimation is the heaviest computation in HEVC (high efficiency video coding). Many fast algorithms, such as TZS (test zone search), have been proposed to reduce this computation. Still, the huge computational load of motion estimation remains a critical issue in the implementation of an HEVC video codec. In this paper, a motion estimator architecture with an optimized number of PEs (processing elements) is presented by exploiting early termination. It also reduces hardware size by exploiting parallel processing. The presented motion estimator architecture has 8 PEs and can efficiently perform TZS with very high utilization of the PEs.
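As a rough illustration of the search strategy the abstract refers to, the sketch below implements a much simplified test zone search: expanding square zones with a doubling stride around the zero-motion predictor, stride-halving refinement, and a simple early-termination rule (stop once a zero-cost match is found). The frame layout, block position, search pattern, and termination rule are illustrative assumptions, not the paper's hardware architecture.

```python
def sad(cur, ref, bx, by, mv, bs):
    """Sum of absolute differences between the current block at (bx, by)
    and the reference block displaced by motion vector mv."""
    mx, my = mv
    return sum(
        abs(cur[by + y][bx + x] - ref[by + my + y][bx + mx + x])
        for y in range(bs) for x in range(bs)
    )

def in_bounds(ref, bx, by, mv, bs):
    """Check that the displaced block lies entirely inside the frame."""
    mx, my = mv
    h, w = len(ref), len(ref[0])
    return 0 <= bx + mx and bx + mx + bs <= w and 0 <= by + my and by + my + bs <= h

def tzs_search(cur, ref, bx, by, bs, sr=8):
    """Simplified TZS: square zones with doubling stride around (0, 0),
    then stride-halving refinement around the best point found.
    Early termination: stop as soon as a zero-cost match is found."""
    best, best_cost = (0, 0), sad(cur, ref, bx, by, (0, 0), bs)
    pattern = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]
    stride = 1
    while stride <= sr and best_cost > 0:        # zonal expansion
        for dx, dy in pattern:
            mv = (dx * stride, dy * stride)
            if in_bounds(ref, bx, by, mv, bs):
                cost = sad(cur, ref, bx, by, mv, bs)
                if cost < best_cost:
                    best, best_cost = mv, cost
        stride *= 2
    stride = sr // 2
    while stride >= 1 and best_cost > 0:         # refinement around best
        for dx, dy in pattern:
            mv = (best[0] + dx * stride, best[1] + dy * stride)
            if in_bounds(ref, bx, by, mv, bs):
                cost = sad(cur, ref, bx, by, mv, bs)
                if cost < best_cost:
                    best, best_cost = mv, cost
        stride //= 2
    return best, best_cost

# Usage: a 16x16 gradient reference frame; the current block at (4, 4)
# is the reference block displaced by the true motion vector (2, 0).
W = H = 16
ref = [[x + 17 * y for x in range(W)] for y in range(H)]
cur = [row[:] for row in ref]
for y in range(4):
    for x in range(4):
        cur[4 + y][4 + x] = ref[4 + y][6 + x]
mv, cost = tzs_search(cur, ref, 4, 4, 4)
```

In a hardware realization, the inner SAD loop is what the parallel processing elements would compute; the early-termination condition is what allows the number of PEs to be reduced without losing utilization.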

Keywords: motion estimation, test zone search, high efficiency video coding, processing element, optimization

Procedia PDF Downloads 336
4310 Image Making: The Spectacle of Photography and Text in Obituary Programs as Contemporary Practice of Social Visibility in Southern Nigeria

Authors: Soiduate Ogoye-Atanga

Abstract:

During funeral ceremonies, it has become common for attendees to jostle for burial programs in some southern Nigerian towns. Beginning as plain typewritten, text-only sheets of paper in the 1980s and evolving into today's digitally formatted, multicolor magazine style, burial programs continue to be collected and kept in homes, where they remain archival documents of family photo histories and a veritable means of leveraging family status and visibility in a social economy through the inclusion of numerous choreographically arranged photographs and text. The biographical texts speak of the idealized, often lofty and aestheticized accomplishments of the deceased, which are corroborated by an accompanying section of tributes, first from immediate family members and then from the affiliations and organizations to which the deceased belonged, in the form of scanned letterheaded corporate tributes. Other programs contain modest biographical texts when the deceased accomplished little. In the majority of cases, the display of photographs and text in these programs follows a trajectory of historical compartmentalization of the deceased, from parentage through youth, occupation, retirement and old age as the case may be, moving from black-and-white historical photographs to the color photography of today. This compartmentalization follows varied models but is designed to show the deceased in varying activities during his or her lifetime. The production of these programs ranges from extremely expensive, lavish full-color editions of nearly fifty to eighty pages to bland, very simplified, low-quality few-page editions in a single color with no photographs except on the cover. Cost and quality therefore become determinants of varying family status and social visibility.
Through a critical selection of photographs and text, family members construct an idealized image of the deceased and of themselves, concentrating on a mutuality based on appropriate sartorial selections, socioeconomic grade, and social temperaments framed to corroborate the public's perception of them. Burial magazines therefore serve purposes beyond their primary use; they constitute an orchestrated social site for image-making and the validation of the social status of families, shaped by prior family histories.

Keywords: biographical texts, burial programs, compartmentalization, magazine, multicolor, photo-histories, social status

Procedia PDF Downloads 166
4309 The Need for Automation in the Domestic Food Processing Sector and its Impact

Authors: Shantam Gupta

Abstract:

The objective of this study is to address the critical need for automation in the domestic food processing sector and to study its impact. Food is one of the most basic physiological needs, essential for the survival of any living being. Some organisms can prepare their own food (like most plants) and are hence designated primary food producers; those who depend on these primary producers for food form the class of primary consumers (herbivores). Organisms relying on the primary consumers are the secondary consumers (carnivores). A third class, the tertiary or apex consumers, feeds on both primary and secondary consumers. Humans are among the apex predators and are generally at the top of the food chain. Yet a closer look at the food habits of the modern human, Homo sapiens, reveals that humans depend on other individuals to prepare their food. The old practice of eating raw food is long gone, and food processing has become entrenched in the life of the modern human. This has increased the dependence on other individuals to 'process' food before it can actually be consumed, and has led to a further shift in humans' place in the consumer classification of the food chain. The effects of this shift are systematically investigated in this paper. The processing of food has a direct impact on the economy of the individual (consumer). Moreover, most individuals depend on others for the preparation of their food. This dependency establishes a vital link in the food web which, when altered, can adversely affect the food web and have dire consequences for the health of the individual. This study investigates the challenges arising from this dependency and the impact of food processing on the economy of the individual.
A comparison of industrial food processing and processing on domestic platforms (households and restaurants) has been made to give an idea of the present state of automation in the food processing sector. A lot of time and energy is also consumed while processing food at home for consumption, and the high frequency of meals (more than two a day) makes it even more laborious. Through this study, a pressing need for the development of an automatic cooking machine is proposed, with the mission of reducing the interdependency and the human effort required for the preparation of food (by automating the food preparation process) and making individuals more self-reliant. The impact of the development of this product is also discussed at length. Assumption used: the individuals who process food also consume the food that they produce (they are also termed 'independent' or 'self-reliant' modern human beings).

Keywords: automation, food processing, impact on economy, processing individual

Procedia PDF Downloads 447
4308 Development of a Tesla Music Coil from Signal Processing

Authors: Samaniego Campoverde José Enrique, Rosero Muñoz Jorge Enrique, Luzcando Narea Lorena Elizabeth

Abstract:

This paper presents a practical and theoretical model for the operation of the Tesla coil using digital signal processing. The research is based on the analysis of ten scientific papers exploring the development and operation of the Tesla coil. Starting from the basic Tesla coil, several modifications were carried out with the aim of amplifying a digital signal by means of digital signal processing. To achieve this, a transistor amplifier and digital filters provided by MATLAB software were used, chosen according to the characteristics of the signals in question.

Keywords: tesla coil, digital signal process, equalizer, graphical environment

Procedia PDF Downloads 84
4307 Text Mining Analysis of the Reconstruction Plans after the Great East Japan Earthquake

Authors: Minami Ito, Akihiro Iijima

Abstract:

On March 11, 2011, the Great East Japan Earthquake occurred off the coast of Sanriku, Japan. It is important to build a sustainable society through the reconstruction process rather than simply restoring the infrastructure. To compare the goals of the reconstruction plans of the quake-stricken municipalities, Japanese-language morphological analysis was performed using text mining techniques. Frequently used nouns were sorted into four main categories: “life”, “disaster prevention”, “economy”, and “harmony with environment”. Because Soma City was affected by the nuclear accident, sentences tagged “harmony with environment” tended to be more frequent there than in the other municipalities. Results from cluster analysis and principal component analysis clearly indicated that the local government reinforces efforts to reduce risks from radiation exposure as a top priority.
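The categorization step described above can be sketched as a simple frequency count of extracted nouns against category vocabularies. The word lists below are hypothetical English stand-ins, since the actual Japanese noun dictionaries used in the study are not given in the abstract:

```python
from collections import Counter

# Hypothetical category vocabularies (illustrative only).
CATEGORIES = {
    "life": {"housing", "health", "community", "welfare"},
    "disaster prevention": {"levee", "evacuation", "earthquake", "tsunami"},
    "economy": {"industry", "employment", "tourism", "fishery"},
    "harmony with environment": {"radiation", "decontamination", "energy", "nature"},
}

def categorize(nouns):
    """Count how often a plan's extracted nouns fall into each category."""
    counts = Counter()
    for noun in nouns:
        for category, vocab in CATEGORIES.items():
            if noun in vocab:
                counts[category] += 1
    return counts

# Nouns as they might come out of morphological analysis of one plan.
plan_nouns = ["radiation", "decontamination", "housing", "radiation", "employment"]
counts = categorize(plan_nouns)
dominant = counts.most_common(1)[0][0]
```

A plan dominated by radiation-related vocabulary, as in this toy input, would be tagged "harmony with environment", mirroring the pattern the authors report for Soma City.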

Keywords: eco-friendly reconstruction, harmony with environment, decontamination, nuclear disaster

Procedia PDF Downloads 197
4306 Synthesis and Characterisation of Bi-Substituted Magnetite Nanoparticles by Mechanochemical Processing (MCP)

Authors: Morteza Mohri Esfahani, Amir S. H. Rozatian, Morteza Mozaffari

Abstract:

Single-phase magnetite nanoparticles and Bi-substituted ones were prepared by mechanochemical processing (MCP). The effects of Bi substitution on the structural and magnetic properties of the nanoparticles were studied by X-ray diffraction (XRD) and magnetometry, respectively. The XRD results showed that all samples have the spinel phase and that, with increasing Bi content, the main diffraction peaks shift to higher angles; the lattice parameter first decreases from 0.843 to 0.838 nm and then increases to 0.841 nm. The results also revealed that increasing the Bi content leads to a decrease in saturation magnetization (Ms) from 74.9 to 48.8 emu/g and an increase in coercivity (Hc) from 96.8 to 137.1 Oe.

Keywords: bi-substituted magnetite nanoparticles, mechanochemical processing, X-ray diffraction, magnetism

Procedia PDF Downloads 511
4305 Online Prediction of Nonlinear Signal Processing Problems Based on Kernel Adaptive Filtering

Authors: Hamza Nejib, Okba Taouali

Abstract:

This paper presents two of the best-known kernel adaptive filtering (KAF) approaches, kernel least mean squares and kernel recursive least squares, used to predict a new output in nonlinear signal processing. Both methods implement a nonlinear transfer function using kernel methods in a particular space named the reproducing kernel Hilbert space (RKHS), where the model is a linear combination of kernel functions applied to transform the observed data from the input space to a high-dimensional feature space of vectors, an idea known as the kernel trick. KAF thus consists of filters developed in the RKHS. We use two nonlinear signal processing problems, Mackey-Glass chaotic time series prediction and nonlinear channel equalization, to assess the performance of the presented approaches and, finally, to determine which of them is better adapted.
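A minimal sketch of the KLMS variant described above, assuming a Gaussian kernel and a toy quadratic target in place of the Mackey-Glass and channel-equalization benchmarks used in the paper:

```python
import math

def gaussian_kernel(a, b, sigma=0.2):
    """Gaussian (RBF) kernel between two scalar inputs."""
    return math.exp(-((a - b) ** 2) / (2 * sigma ** 2))

def klms(inputs, targets, eta=0.1, sigma=0.2):
    """Kernel Least Mean Squares: the filter output is a growing linear
    combination of kernels centred on past inputs (the kernel trick), so
    the LMS update stays linear in the RKHS while the learned
    input-output map is nonlinear."""
    centres, weights, errors = [], [], []
    for x, d in zip(inputs, targets):
        y = sum(w * gaussian_kernel(c, x, sigma)
                for w, c in zip(weights, centres))
        e = d - y                # instantaneous prediction error
        centres.append(x)        # new kernel centre at the current input
        weights.append(eta * e)  # LMS-style coefficient for that centre
        errors.append(abs(e))
    return centres, weights, errors

# Toy online prediction task: learn d = x**2 from inputs cycling over [0, 0.9].
xs = [(i % 10) * 0.1 for i in range(500)]
ds = [x * x for x in xs]
_, _, errs = klms(xs, ds)
# the prediction error shrinks as the filter adapts online
```

KRLS differs in that it updates all coefficients via a recursively maintained inverse Gram matrix rather than assigning one LMS-style weight per new centre, trading computation for faster convergence.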

Keywords: online prediction, KAF, signal processing, RKHS, kernel methods, KRLS, KLMS

Procedia PDF Downloads 373
4304 Efficient Pre-Processing of Single-Cell Assay for Transposase Accessible Chromatin with High-Throughput Sequencing Data

Authors: Fan Gao, Lior Pachter

Abstract:

The primary tool currently used to pre-process 10X Chromium single-cell ATAC-seq data is Cell Ranger, which can take a very long time to run on standard datasets. To facilitate rapid pre-processing that enables reproducible workflows, we present a suite of tools called scATAK for pre-processing single-cell ATAC-seq data that is 15 to 18 times faster than Cell Ranger on mouse and human samples. Our tool can also calculate chromatin interaction potential matrices and generate open chromatin signal and interaction traces for cell groups. We use the scATAK tool to explore the chromatin regulatory landscape of a healthy adult human brain, unveil cell-type-specific features, and show that it provides a convenient and computationally efficient approach for pre-processing single-cell ATAC-seq data.

Keywords: single-cell, ATAC-seq, bioinformatics, open chromatin landscape, chromatin interactome

Procedia PDF Downloads 131
4303 Systemic Functional Grammar Analysis of Barack Obama's Second Term Inaugural Speech

Authors: Sadiq Aminu, Ahmed Lamido

Abstract:

This research studies Barack Obama’s second inaugural speech using Halliday’s Systemic Functional Grammar (SFG). SFG is a text grammar which describes how language is used so that the meaning of a text can be better understood. The primary source of data in this research is Barack Obama’s second inaugural speech, which was obtained from the internet. The analysis of the speech was based on the ideational and textual metafunctions of Systemic Functional Grammar. Specifically, the researcher analyses the process types and participants (ideational) and the theme/rheme structure (textual). It was found that the material process (process of doing) was the most frequently used process type, and ‘We’, which refers to the people of America, was the most frequently used theme. Application of SFG theory therefore gives a better understanding of Barack Obama’s speech.
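The quantitative side of such an analysis reduces to tallying tagged clauses. The sketch below assumes each clause has already been annotated with its process type and Theme; the tags are illustrative stand-ins, not the actual analysis of the speech:

```python
from collections import Counter

# Hypothetical clause annotations (illustrative only): in SFG terms,
# each clause carries a process type (ideational) and a Theme (textual).
clauses = [
    {"process": "material", "theme": "We"},
    {"process": "material", "theme": "We"},
    {"process": "relational", "theme": "America"},
    {"process": "mental", "theme": "We"},
    {"process": "material", "theme": "Together"},
]

process_counts = Counter(c["process"] for c in clauses)
theme_counts = Counter(c["theme"] for c in clauses)
top_process = process_counts.most_common(1)[0][0]
top_theme = theme_counts.most_common(1)[0][0]
```

On this toy input, the dominant process type is "material" and the dominant Theme is "We", matching the pattern the abstract reports.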

Keywords: ideational, metafunction, rheme, textual, theme

Procedia PDF Downloads 130
4302 Design and Development of 5-DOF Color Sorting Manipulator for Industrial Applications

Authors: Atef A. Ata, Sohair F. Rezeka, Ahmed El-Shenawy, Mohammed Diab

Abstract:

Image processing in today’s world attracts massive attention, as it opens up possibilities for broad application in many fields of high technology. The real challenge is how to improve existing sorting system applications, which consist of two integrated stations of processing and handling, with a new image processing feature. Existing color sorting techniques use a set of inductive, capacitive, and optical sensors to differentiate object color. This research presents a mechatronic color sorting system solution based on image processing. A 5-DOF robot arm with pick-and-place operation is designed and developed as the main part of the color sorting system. The image processing procedure senses the circular objects in an image captured in real time by a webcam attached at the end-effector, then extracts color and position information from it. This information is passed as a sequence of sorting commands to the manipulator, which has a pick-and-place mechanism. Performance analysis proves that this color-based object sorting system works very accurately under ideal conditions in terms of adequate illumination and circular object shape and color. The circular objects tested for sorting are red, green and blue. Under non-ideal conditions, such as an unspecified color, the accuracy drops to 80%.
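The color-decision step can be sketched as a simple dominant-channel rule on the average RGB value of a detected object. The margin threshold and bin numbering below are hypothetical assumptions for illustration, not values taken from the paper:

```python
BINS = {"red": 1, "green": 2, "blue": 3}  # hypothetical bin layout

def classify_color(r, g, b, margin=30):
    """Classify an average RGB sample as red, green, or blue. A sample
    with no clearly dominant channel is reported as 'unspecified', which
    mirrors the reduced accuracy on unspecified colors in the abstract."""
    channels = {"red": r, "green": g, "blue": b}
    name = max(channels, key=channels.get)
    others = [v for k, v in channels.items() if k != name]
    if channels[name] - max(others) < margin:
        return "unspecified"
    return name

def sort_command(color):
    """Map a detected color to a pick-and-place bin index; unspecified
    colors get no bin (None), e.g. for manual handling."""
    return BINS.get(color)
```

In the full system, the object's extracted (x, y) position would accompany this bin index in the command sequence sent to the 5-DOF manipulator.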

Keywords: robotics manipulator, 5-DOF manipulator, image processing, color sorting, pick-and-place

Procedia PDF Downloads 343
4301 Segmentation of Korean Words on Korean Road Signs

Authors: Lae-Jeong Park, Kyusoo Chung, Jungho Moon

Abstract:

This paper introduces an effective method of segmenting Korean text (place names in Korean) from a Korean road sign image. A Korean advanced directional road sign is composed of several types of visual information, such as arrows, place names in Korean and English, and route numbers. Automatic classification of this visual information and extraction of the Korean place names from road sign images make it possible to avoid a great deal of manual input to a database system for nationwide management of road signs. We propose a series of problem-specific heuristics that correctly segment Korean place names, which are the most crucial information, from the other information by effectively leaving out non-text information. Experimental results on a dataset of 368 road sign images show a detection rate of 96% per Korean place name and 84% per road sign image.

Keywords: segmentation, road signs, characters, classification

Procedia PDF Downloads 419