Search results for: scene text recognition
Paper Count: 3035

2585 Measuring the Height of a Person in Closed Circuit Television Video Footage Using 3D Human Body Model

Authors: Dojoon Jung, Kiwoong Moon, Joong Lee

Abstract:

The height of a criminal is one of the important clues that can narrow the scope of a suspect search or exclude a suspect altogether. Although measuring a criminal's height from video alone is limited for various reasons, the height can be measured if the 3D data of the scene and the Closed Circuit Television (CCTV) footage are matched. Even so, it remains difficult to measure height from CCTV footage with such a non-contact measurement method because of variables such as the position, posture, and head shape of the criminal. In this paper, we propose a method that matches the CCTV footage with 3D data of the crime scene and measures the height of the person by fitting a 3D human body model to the matched data. In the proposed method, the height is measured with the 3D human model in several scenes of the person in the CCTV footage, and the measurement of the target person is corrected by the measurement error observed in replayed CCTV footage of reference persons of known height. We tested the method on walking CCTV footage of 20 people captured indoors and outdoors and corrected the measurements using 5 reference persons. Experimental results show that the average measurement error (true value minus measured value) is 0.45 cm, indicating that the method is effective for measuring a person's height from CCTV footage.
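The reference-person correction described above can be illustrated with a minimal sketch. The abstract does not give the exact correction formula, so a simple mean-error shift is assumed, and the variable names are hypothetical:

```python
# Minimal sketch of the reference-person correction (assumed mean-error shift;
# the paper's exact formula is not given in the abstract).
import numpy as np

def corrected_height(target_measurements, ref_true_heights, ref_measured_heights):
    """Average the 3D-model measurements of the target over several scenes,
    then shift by the mean error (true - measured) observed for reference persons."""
    mean_target = np.mean(target_measurements)  # cm, over several CCTV scenes
    ref_error = np.mean(np.array(ref_true_heights) - np.array(ref_measured_heights))
    return mean_target + ref_error

# Example: target measured in 3 scenes, corrected with 5 reference persons of known height
print(corrected_height([171.2, 170.6, 171.0],
                       [165.0, 172.0, 178.0, 169.0, 181.0],
                       [164.3, 171.2, 177.5, 168.4, 180.2]))
```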

Keywords: human height, CCTV footage, 2D/3D matching, 3D human body model

Procedia PDF Downloads 225
2584 Algorithm for Path Recognition in-between Tree Rows for Agricultural Wheeled-Mobile Robots

Authors: Anderson Rocha, Pedro Miguel de Figueiredo Dinis Oliveira Gaspar

Abstract:

Machine vision has been widely used in recent years in agriculture, as a tool to promote the automation of processes and increase the levels of productivity. The aim of this work is the development of a path recognition algorithm based on image processing to guide a terrestrial robot in-between tree rows. The proposed algorithm was developed using the software MATLAB, and it uses several image processing operations, such as threshold detection, morphological erosion, histogram equalization and the Hough transform, to find edge lines along tree rows on an image and to create a path to be followed by a mobile robot. To develop the algorithm, a set of images of different types of orchards was used, which made possible the construction of a method capable of identifying paths between trees of different heights and aspects. The algorithm was evaluated using several images with different characteristics of quality and the results showed that the proposed method can successfully detect a path in different types of environments.
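The processing chain named in the abstract (histogram equalization, thresholding, morphological erosion, Hough transform) can be sketched as follows. The original algorithm was implemented in MATLAB, so this OpenCV version, with assumed thresholds and kernel sizes, only illustrates the idea:

```python
# Illustrative OpenCV re-implementation of the abstract's processing chain
# (thresholds and kernel sizes are assumptions, not the authors' parameters).
import cv2
import numpy as np

def detect_row_lines(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.equalizeHist(gray)                                   # histogram equalization
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # threshold detection
    eroded = cv2.erode(binary, np.ones((5, 5), np.uint8))           # morphological erosion
    edges = cv2.Canny(eroded, 50, 150)
    # Hough transform: candidate edge lines along the tree rows
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=20)
    return lines  # each entry is (x1, y1, x2, y2); a path can be fit between the row lines

lines = detect_row_lines("orchard.jpg")  # hypothetical input image
```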

Keywords: agricultural mobile robot, image processing, path recognition, Hough transform

Procedia PDF Downloads 122
2583 Reconstruction of Visual Stimuli Using Stable Diffusion with Text Conditioning

Authors: ShyamKrishna Kirithivasan, Shreyas Battula, Aditi Soori, Richa Ramesh, Ramamoorthy Srinath

Abstract:

The human brain, among the most complex and mysterious aspects of the body, harbors vast potential for extensive exploration. Unraveling these enigmas, especially within neural perception and cognition, delves into the realm of neural decoding. This work harnesses advancements in generative AI, particularly in visual computing, to elucidate how the brain comprehends visual stimuli observed by humans. The paper endeavors to reconstruct human-perceived visual stimuli using Functional Magnetic Resonance Imaging (fMRI). This fMRI data is then processed through pre-trained deep-learning models to recreate the stimuli. Introducing a new architecture named LatentNeuroNet, the aim is to achieve the utmost semantic fidelity in stimuli reconstruction. The approach employs a Latent Diffusion Model (LDM) - Stable Diffusion v1.5 - emphasizing semantic accuracy and generating superior quality outputs. This addresses the limitations of prior methods, such as GANs, which are known for poor semantic performance and inherent instability. Text conditioning within the LDM's denoising process is handled by extracting text from the brain's ventral visual cortex region. This extracted text is processed through a Bootstrapping Language-Image Pre-training (BLIP) encoder before it is injected into the denoising process. In conclusion, a successful architecture is developed that reconstructs the perceived visual stimuli, and the research provides evidence for identifying the most influential regions of the brain responsible for cognition and perception.
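The final, text-conditioned generation step can be sketched with the Hugging Face diffusers library as below. The caption is assumed to come from an upstream decoder that maps ventral-visual-cortex fMRI features through BLIP to text; that decoder, and the authors' LatentNeuroNet architecture, are not reproduced here:

```python
# Minimal sketch of the text-conditioned Stable Diffusion v1.5 generation step.
# The caption is a hypothetical output of an fMRI-to-text decoder (not shown).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

decoded_caption = "a brown dog running across a grassy field"  # hypothetical fMRI-decoded text
image = pipe(decoded_caption, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("reconstructed_stimulus.png")
```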

Keywords: BLIP, fMRI, latent diffusion model, neural perception

Procedia PDF Downloads 43
2582 Deep Learning Application for Object Image Recognition and Robot Automatic Grasping

Authors: Shiuh-Jer Huang, Chen-Zon Yan, C. K. Huang, Chun-Chien Ting

Abstract:

Since vision systems are in strong demand for autonomous applications in industrial environments, image recognition has become an important research topic. Here, a deep learning algorithm is employed in the vision system to recognize industrial objects and is integrated with a 7A6 Series Manipulator for automatic object gripping. A PC and a Graphics Processing Unit (GPU) are chosen to construct the 3D vision recognition system. A depth camera (Intel RealSense SR300) is employed to capture images for object recognition and coordinate derivation. The YOLOv2 scheme is adopted within a convolutional neural network (CNN) structure for object classification and center point prediction. Additionally, an image processing strategy is used to find the object contour for calculating the object orientation angle. The specified object location and orientation information are then sent to the robot controller. Finally, the six-axis manipulator can grasp the specified object in a random environment based on the user command and the extracted image information. Experimental results show that YOLOv2 successfully detects the object location and category with confidence near 0.9 and a 3D position error of less than 0.4 mm. This is useful for future intelligent robotic applications in Industry 4.0 environments.
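The orientation step, finding the object contour inside a detected bounding box and computing its angle, can be sketched as follows. The bounding box is assumed to come from the YOLOv2 detector, and the thresholding is illustrative only:

```python
# Illustrative orientation-angle step: contour extraction inside a detected box,
# followed by a minimum-area rectangle fit. Box format and thresholds are assumptions.
import cv2
import numpy as np

def grasp_angle(gray_frame, box):
    x, y, w, h = box                                      # (x, y, w, h) from the YOLOv2 detector
    roi = gray_frame[y:y + h, x:x + w]
    _, mask = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), (rw, rh), angle = cv2.minAreaRect(largest)  # center, size, rotation angle
    return (x + cx, y + cy), angle                        # pixel center and orientation for the gripper
```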

Keywords: deep learning, image processing, convolutional neural network, YOLOv2, 7A6 series manipulator

Procedia PDF Downloads 214
2581 Developing a Model of Teaching Writing Based On Reading Approach through Reflection Strategy for EFL Students of STKIP YPUP

Authors: Eny Syatriana, Ardiansyah

Abstract:

The purpose of this study was to develop a model for teaching writing based on reading texts, using a reflection strategy. The strategy allows students to read a text, write back its main idea, and then develop the text using their own sentences. Writing practice therefore begins with reading an interesting text, which the students then develop into their own writing. The research questions are: (1) What kind of learning model can develop the students' writing ability? (2) What is the achievement of the STKIP YPUP students taught through the reflection strategy? (3) Is the strategy effective in developing students' competence in writing? (4) To what degree are the students interested in the use of the strategy in the writing subject? This development research consisted of several steps: (1) needs analysis, (2) model design, (3) implementation, and (4) model evaluation. The needs analysis was carried out through discussion among the writing lecturers to create a learning model for the writing subject. To examine the effectiveness of the model, an experiment was conducted with one class. The instruments and learning materials were validated by experts, and every step of material development involved a learning process that was likewise validated by an expert. The study followed a research and development design: needs analysis, prototype creation, content validation, and a limited empirical trial with the sample. At each step, the drafts were assessed and revised before moving to the next step. In the second year, the prototype was tested empirically in four English department classes at STKIP YPUP. The test was implemented through action research, followed by evaluation and validation by the experts.

Keywords: learning model, reflection, strategy, reading, writing, development

Procedia PDF Downloads 343
2580 Translation of Culture-Specific References in the Turkish Translation of Shakespeare's Macbeth

Authors: Feride Sumbul

Abstract:

Drama is a literary genre that mirrors people and society and transfers human nature and life to the reader or the audience within its own social-cultural structure. Each play takes on a new reality in the time and culture of its staging, and each performance actually brings a new interpretation to the play. Similarly, each translation adds a new meaning to the source text. In other words, the translated theatrical text transcends the boundaries of its language and culture and finds a new interpretation. Thus the translation of drama takes place as a transfer from one culture to another, as cross-cultural communication. In this context, translating culture-specific references plays a key role in reflecting the cultural aspects of a target society. This study aims to explore the use of Venuti's translation principles of domestication and foreignization in the transfer of culture-specific references in the Turkish translation of Shakespeare's Macbeth. Macbeth is compared with its Turkish version in terms of the transference of culture-specific references, such as religious, witchcraft, and mythological references, which have no equivalent in the target language and culture. To evaluate Venuti's principles, Davies's translation strategies are also applied. For the most part, the translator, Nutku, uses Davies's strategy of 'addition', supplying extra information in notes. For instance, rather than finding Turkish renderings for the witchcraft references, the translator mostly retains them in the target text and adds explanatory information about them in the notes. The translator therefore mainly follows Venuti's principle of foreignization, preserving the foreignness of the theatrical text.

Keywords: drama translation, theatrical texts, culture specific references, Macbeth

Procedia PDF Downloads 133
2579 'Value-Based Re-Framing' in Identity-Based Conflicts: A Skill for Mediators in Multi-Cultural Societies

Authors: Hami-Ziniman Revital, Ashwall Rachelly

Abstract:

The conflict resolution realm has developed tremendously during the last half-decade. Three main approaches should be mentioned: Alternative Dispute Resolution (ADR), suggesting processes such as arbitration or interest-based negotiation, was developed as an answer to obligation- and rights-based conflicts; the pragmatic mediation approach focuses on the gap between the interests and needs of disputants; and the transformative mediation approach focuses on relations and suits identity-based conflicts. In the current study, we examine the conflictual relations between religious and non-religious Jews in Israel and the impact of three transformative mechanisms (intergroup recognition, in-group empowerment, and value-based reframing) on the relations between the participants. The research was conducted during four facilitated joint mediation classes. A unique finding emerged. Using both the transformative mechanisms and the Contact Hypothesis criteria, we identify a transformation in participants' relations and a considerable change from anger, alienation, and suspiciousness to increased understanding, affection, and interpersonal concern towards out-group members. Intergroup recognition, in-group empowerment, and value-based reframing were the skills found to be the main enablers of the change in relations, and the participants fostered mutual recognition of out-group values and identity-based issues. We conclude that this transformation was possible due to constant intergroup contact based on the Contact Hypothesis criteria. In addition, as interest-based mediation uses "reframing" as a skill to acknowledge both mutual and opposing needs of the disputants, we suggest the use of "value-based reframing" in intergroup identity-based conflicts, as a skill that contributes to empowerment and to the recognition of both shared and differing out-group values. We propose implementing these insights and skills to assist conflict resolution facilitators in various intergroup identity-based conflict resolution efforts and to inform further research and knowledge.

Keywords: empowerment, identity-based conflict, intergroup recognition, intergroup relations, mediation skills, multi-cultural society, reframing, value-based recognition

Procedia PDF Downloads 316
2578 Facial Recognition Technology in Institutions of Higher Learning: Exploring the Use in Kenya

Authors: Samuel Mwangi, Josephine K. Mule

Abstract:

Access control as a security technique regulates who or what can access resources. It is a fundamental concept in security that minimizes risks to the institutions that use access control. Regulating access to institutions of higher learning is key to ensuring that only authorized personnel and students are allowed into the institutions. The use of biometrics has been criticized due to the setup and maintenance costs, hygiene concerns, and trepidations regarding data privacy, among other apprehensions. Facial recognition is arguably a fast and accurate way of validating identity in order to guard protected areas. It guarantees that only authorized individuals gain access to secure locations while requiring far less personal information and providing an additional layer of security beyond keys, fobs, or identity cards. This exploratory study sought to investigate the use of facial recognition in controlling access in institutions of higher learning in Kenya. The sample population was drawn from both private and public higher learning institutions. The data are based on responses from staff and students. Questionnaires were used for data collection, and follow-up interviews were conducted to clarify the questionnaire responses. 80% of the sampled population indicated that there were many security breaches by unauthorized people, with some resulting in terror attacks. These security breaches were attributed to stolen identity cases, where staff or student identity cards were stolen and used by criminals to access the institutions. These unauthorized accesses have resulted in losses to the institutions, including reputational damages. The findings indicate that security breaches are a major problem in institutions of higher learning in Kenya. Consequently, access control would be beneficial if employed to curb security breaches. We suggest the use of facial recognition technology, given its uniqueness in identifying users and its non-repudiation capabilities.

Keywords: facial recognition, access control, technology, learning

Procedia PDF Downloads 99
2577 Face Recognition Using Eigen Faces Algorithm

Authors: Shweta Pinjarkar, Shrutika Yawale, Mayuri Patil, Reshma Adagale

Abstract:

Face recognition is a technique that can be applied to a wide variety of problems, such as image and film processing, human-computer interaction, and criminal identification. This has motivated researchers to develop computational models for identifying faces that are easy and simple to implement. This work demonstrates a face recognition system on an Android device using eigenfaces. The system can be used as the basis for developing recognition of human identity. Test images and training images are taken directly with the camera of the Android device, and the test results show that the system produces high accuracy. The goal is to implement a model for a particular face and distinguish it from a large number of stored faces. The face recognition system detects faces in pictures taken by a web camera or digital camera, and these images are then checked against a training image dataset based on descriptive features. The algorithm can further be extended to recognize a person's facial expressions, and recognition can be carried out under widely varying conditions such as a frontal view, a scaled frontal view, or subjects with spectacles. The algorithm also models varying real-time lighting conditions. The implemented system is able to perform real-time face detection and face recognition and can give feedback by displaying a window with the subject's information from the database and sending an e-mail notification to interested institutions through the Android application.
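The eigenfaces method itself amounts to principal component analysis (PCA) on flattened face images followed by nearest-neighbour matching in the projected space. A minimal sketch, omitting the Android camera and e-mail notification parts, could look like this:

```python
# Minimal eigenfaces sketch: PCA on flattened grayscale face crops of equal size,
# then nearest-neighbour matching in the eigenface space.
import numpy as np
from sklearn.decomposition import PCA

def train_eigenfaces(train_images, labels, n_components=50):
    X = np.array([img.ravel() for img in train_images], dtype=np.float64)
    pca = PCA(n_components=n_components, whiten=True).fit(X)  # eigenfaces = pca.components_
    return pca, pca.transform(X), np.array(labels)

def recognize(pca, train_proj, labels, test_image):
    proj = pca.transform(test_image.ravel().reshape(1, -1))
    distances = np.linalg.norm(train_proj - proj, axis=1)
    return labels[np.argmin(distances)]  # identity of the closest training face
```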

Keywords: face detection, face recognition, eigen faces, algorithm

Procedia PDF Downloads 337
2576 Providing a Secure Hybrid Method for Graphical Password Authentication to Prevent Shoulder Surfing, Smudge and Brute Force Attack

Authors: Faraji Sepideh

Abstract:

Nowadays, the purchase rate of smart devices is increasing, and user authentication is one of the important issues in information security. Strong alphanumeric passwords are difficult to memorize, so owners write them down on paper or save them in a computer file. In addition, text passwords have their own flaws and are vulnerable to attacks. A graphical password, in which users choose images as their password, can be used as an alternative to an alphanumeric password. This type of password is easier to use and memorize and is also more secure than previous password types. In this paper, we have designed a more secure graphical password system to prevent shoulder surfing, smudge, and brute force attacks. The scheme is a combination of two types of graphical passwords: recognition-based and cued recall-based. The usability and security evaluation of the proposed scheme is presented in the conclusion.

Keywords: brute force attack, graphical password, shoulder surfing attack, smudge attack

Procedia PDF Downloads 123
2575 An Experience of Translating an Excerpt from Sophie Adonon’s Echos de Femmes from French to English, Using Reverso.

Authors: Michael Ngongeh Mombe

Abstract:

This paper investigates an assertion made by some colleagues that there is no need to pay a human translator to translate their literary texts, since software such as Reverso can be used to do the translation. The main objective of this study is to examine the veracity of this assertion by using Reverso to translate a literary text without any post-editing by a human translator. The work is based on two theories of translation: the Skopos and Communicative theories. It is a documentary study in which data were collected from published documents in libraries, on the internet, and from the translation produced by Reverso. We carried out a comparative analysis of the source and target texts in a bid to highlight the weaknesses and strengths of the software. The findings reveal that those who advocate the use of machine translation alone do so in ignorance of the translation mistakes usually made by the software. From the review of all 268 segments of the translation, we found that the output produced by Reverso is fraught with errors. We therefore recommend that human translators either do the translation of their literary texts themselves or revise the machine output so that it conforms to the skopos of the work. This paper is based on Reverso; similar work in the near future will examine other translation software to determine their weaknesses and strengths.

Keywords: machine translation, human translator, Reverso, literary text

Procedia PDF Downloads 67
2574 The Influence of Japanese Poetry in Spanish Piano Music: Benet Casablancas and Mercedes Zavala’s Haikus

Authors: Isabel Pérez Dobarro

Abstract:

In the mid-twentieth century, Spanish composers started looking beyond the national folkloric tradition (adopted by Albéniz, Granados, and Falla) and Rodrigo’s neoclassicism, and searched for other sources of inspiration. Japanese Haikus fascinated Spanish musicians, who found in their brevity and imagination a new avenue to develop their creativity. The goal of this research is to study how two renowned Spanish authors, Benet Casablancas and Mercedes Zavala, incorporated Haikus into their piano works. Based on Bruhn’s methodology on text and instrumental music relations, and developing a score and text analysis complemented by interviews with both composers, this study has revealed three possible interactions between the Haikus and these composers’ piano writing: inspiration, transmedialization, and mimesis. Findings also include specific technical gestures to support each of these approaches. Commonalities between their pieces and those by other non-Spanish composers such as Jonathan Harvey, John Cage, and Michael Berkeley have also been explored. According to the author's knowledge, this is the first study on the Japanese influence in Spanish piano music. Thus, it opens a new path for understanding musical exchanges between both countries as well as contemporary piano tools that support the interaction between text and music.

Keywords: Haiku, Spanish piano music, Benet Casablancas, Mercedes Zavala

Procedia PDF Downloads 121
2573 Role of mHealth in Effective Response to Disaster

Authors: Mohammad H. Yarmohamadian, Reza Safdari, Nahid Tavakoli

Abstract:

In recent years, many countries have suffered various natural disasters, and disaster response continues to face challenges in the health care sector in all countries. Information and communication management is a significant challenge at a disaster scene. Over the last decades, rapid advances in information technology have made it possible to manage information effectively and improve communication in health care settings. Information technology is a vital enabler of effective response to disasters and emergencies: if an efficient ICT-based health information system is available, it is highly valuable in such situations. In particular, mobile technology represents a computing infrastructure that is accessible, convenient, inexpensive, and easy to use. Most projects have not yet reached the deployment stage, but evaluation exercises show that mHealth should allow faster processing and transport of patients, improved accuracy of triage, and better monitoring of unattended patients at a disaster scene. Since cell phones are highly prevalent among the world population, health care providers and managers are expected to take measures to apply this technology to improve patient safety and public health in disasters. At present, there are challenges to the utilization of mHealth in disasters, such as structural and financial issues in our country. In this paper, we discuss the benefits and challenges of mHealth technology in disaster settings, considering connectivity, usability, intelligibility, communication, and teaching for implementing this technology in disaster response.

Keywords: information technology, mhealth, disaster, effective response

Procedia PDF Downloads 409
2572 Animated Poetry-Film: Poetry in Action

Authors: Linette van der Merwe

Abstract:

It is known that visual artists, performing artists, and literary artists have inspired each other since time immemorial. The enduring, symbiotic relationship between the various art genres is evident where words, colours, lines, and sounds act as metaphors, a physical separation of the transcendental reality of art. Simonides of Keos (c. 556-468 BC) confirmed this, stating that a poem is a talking picture, or, in a more modern expression, a picture is worth a thousand words. It can be seen as an ancient relationship, originating from the epigram (tombstone or artefact inscriptions), the carmen figuratum (figure poem), and the ekphrasis (a description in the form of a poem of a work of art). Visual artists, including Michelangelo, Leonardo da Vinci, and Goethe, wrote poems and songs. Goya, Degas, and Picasso are famous for their works of art and for trying their hands at poetry. Afrikaans writers whose fine art is often published together with their writing, as in the case of Andries Bezuidenhout, Breyten Breytenbach, Sheila Cussons, Hennie Meyer, Carina Stander, and Johan van Wyk, among others, are not a strange phenomenon either. Imitating one art form into another art form is a form of translation, transposition, contemplation, and discovery of artistic impressions, showing parallel interpretations rather than physical comparison. It is especially about the harmony that exists between the different art genres, i.e., a poem that describes a painting or a visual text that portrays a poem that becomes a translation, interpretation, and rediscovery of the verbal text, or rather, from the word text to the image text. Poetry-film, as a form of such a translation of the word text into an image text, can be considered a hybrid, transdisciplinary art form that connects poetry and film. Poetry-film is regarded as an intertwined entity of word, sound, and visual image. It is an attempt to transpose and transform a poem into a new artwork that makes the poem more accessible to people who are not necessarily open to the written word and will, in effect, attract a larger audience to a genre that usually has a limited market. Poetry-film is considered a creative expression of an inverted ekphrastic inspiration, a visual description, interpretation, and expression of a poem. Research also emphasises that animated poetry-film is not widely regarded as a genre of anything and is thus severely under-theorized. This paper will focus on Afrikaans animated poetry-films as a multimodal transposition of a poem text to an animated poetry film, with specific reference to animated poetry-films in Filmverse I (2014) and Filmverse II (2016).

Keywords: poetry film, animated poetry film, poetic metaphor, conceptual metaphor, monomodal metaphor, multimodal metaphor, semiotic metaphor, multimodality, metaphor analysis, target domain, source domain

Procedia PDF Downloads 37
2571 Applying Different Steganography Techniques in Cloud Computing Technology to Improve Cloud Data Privacy and Security Issues

Authors: Muhammad Muhammad Suleiman

Abstract:

Cloud computing is a versatile concept that refers to a service that allows users to outsource their data without having to worry about local storage issues. However, the most pressing issue to be addressed is maintaining a secure and reliable data repository rather than relying on untrustworthy service providers. In this study, we look at how steganography approaches, in combination with digital watermarking, can greatly improve the system's effectiveness and data security when used for cloud computing. The main requirement of such frameworks, where data is transferred or exchanged between servers and users, is safe data management in cloud environments. Steganography in the cloud is among the most effective methods for safe communication. Steganography is a method of writing coded messages in such a way that only the sender and recipient can safely interpret and display the information hidden in the communication channel. This study presents a new text steganography method for hiding a hidden English text file in a cover English text file to ensure data protection in cloud computing. Data protection, data hiding capability, and time were all improved using the proposed technique.
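The abstract does not describe the embedding scheme itself, so the following is only a generic stand-in showing how a hidden English text can be embedded invisibly in a cover English text (here with zero-width Unicode characters) before the stego file is stored in the cloud:

```python
# Generic text-in-text steganography illustration using zero-width characters.
# This is NOT the paper's scheme, only a stand-in for the hide/extract idea.
ZERO = "\u200b"  # zero-width space      -> bit 0
ONE  = "\u200c"  # zero-width non-joiner -> bit 1

def hide(cover_text: str, secret: str) -> str:
    bits = "".join(f"{byte:08b}" for byte in secret.encode("utf-8"))
    payload = "".join(ONE if b == "1" else ZERO for b in bits)
    return cover_text + payload  # invisible payload appended to the cover text

def extract(stego_text: str) -> str:
    bits = "".join("1" if ch == ONE else "0" for ch in stego_text if ch in (ZERO, ONE))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

stego = hide("Quarterly report attached as discussed.", "meet at 9pm")
assert extract(stego) == "meet at 9pm"
```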

Keywords: cloud computing, steganography, information hiding, cloud storage, security

Procedia PDF Downloads 163
2570 Burnout Recognition for Call Center Agents by Using Skin Color Detection with Hand Poses

Authors: El Sayed A. Sharara, A. Tsuji, K. Terada

Abstract:

Call centers have been expanding, and they increasingly influence activity in various markets. Call center work is known as one of the most demanding and stressful jobs. In this paper, we propose a fatigue detection system to detect burnout of call center agents in cases of neck pain and upper back pain. Our proposed system is based on a computer vision technique that combines skin color detection with the Viola-Jones object detector. To recognize hand poses caused by signs of stress, the YCbCr color space is used to detect the skin color region, including the face and hand poses, around the areas related to neck ache and upper back pain. A Viola-Jones cascade of classifiers is used for face recognition, extracting the face from the skin color region. Hand poses are then detected, and neck pain and upper back pain are evaluated, using the skin color detection and face recognition methods. System performance is evaluated using two groups of datasets created in the laboratory to simulate a call center environment. Our call center agent burnout detection system has been implemented using a web camera and processed in MATLAB. The experimental results show that our system achieved 96.3% accuracy for upper back pain detection and 94.2% for neck pain detection.
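The two building blocks named above, YCbCr skin-color segmentation and Viola-Jones face detection, can be sketched with OpenCV as follows; the Cb/Cr thresholds are common textbook values rather than the authors' exact parameters:

```python
# Illustrative OpenCV version of the two building blocks: YCbCr skin segmentation
# and a Viola-Jones (Haar cascade) face detector. Thresholds are assumptions.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def skin_and_face(frame_bgr):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # skin region (face + hands)
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Hand-pose regions near the neck/upper back can then be located as skin pixels
    # lying outside the detected face rectangles.
    return skin_mask, faces
```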

Keywords: call center agents, fatigue, skin color detection, face recognition

Procedia PDF Downloads 264
2569 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI

Authors: James Rigor Camacho, Wansu Lim

Abstract:

Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms; they are capable of collecting, processing, and storing data on their own, and they can run complicated algorithms such as localization, detection, and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated with the open-source brain-computer interface platform (OpenBCI), is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. To perform graphical spectrogram categorization of EEG signals and to predict emotional states based on input data properties, machine learning-based classifiers were used. The EEG signals were analyzed using the K-Nearest Neighbor (KNN) technique, a supervised learning method, until the emotional state was identified. In EEG signal processing, after each EEG signal has been received in real time and translated from the time domain to the frequency domain, the Fast Fourier Transform (FFT) technique is utilized to observe the frequency bands in each EEG signal. To appropriately show the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed. The next stage is to use the chosen features to predict emotion in the EEG data with the K-Nearest Neighbors (KNN) technique, with arousal and valence datasets used to train the parameters of the KNN model. Because classification and recognition of specific classes, as well as emotion prediction, are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device such as the NVIDIA Jetson Nano, and it shows how EEG-based emotion recognition at the edge can be employed in applications that rapidly expand its research and industrial use.
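A minimal sketch of the described feature-extraction and classification pipeline (FFT band powers plus mean and standard deviation, fed to a KNN classifier) is shown below; the band boundaries are standard EEG bands, and the cEEGrid/Jetson integration is not shown:

```python
# Minimal sketch: FFT band-power features (plus mean/std) and a KNN classifier.
# Band boundaries are standard EEG bands; sampling rate is an assumption.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def features(epoch, fs=250):
    """epoch: (n_channels, n_samples) EEG window; returns one feature vector."""
    freqs = np.fft.rfftfreq(epoch.shape[1], 1.0 / fs)
    power = np.abs(np.fft.rfft(epoch, axis=1)) ** 2
    band_power = [power[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                  for lo, hi in BANDS.values()]
    return np.concatenate(band_power + [epoch.mean(axis=1), epoch.std(axis=1)])

def train_knn(epochs, labels, k=5):
    X = np.array([features(e) for e in epochs])
    return KNeighborsClassifier(n_neighbors=k).fit(X, labels)  # labels: arousal/valence classes
```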

Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors

Procedia PDF Downloads 80
2568 Prosody of Text Communication: Inducing Synchronization and Coherence in Chat Conversations

Authors: Karolina Ziembowicz, Andrzej Nowak

Abstract:

In the current study, we examined the consequences of adding prosodic cues to text communication by allowing users to observe the process of message creation while engaged in dyadic conversations. In the first condition, users interacted through a traditional chat that requires pressing ‘enter’ to make a message visible to an interlocutor. In another, text appeared on the screen simultaneously as the sender was writing it, letter after letter (Synchat condition), so that users could observe the varying rhythm of message production, precise timing of message appearance, typos and their corrections. The results show that the ability to observe the dynamics of message production had a twofold effect on the social interaction process. First, it enhanced the relational aspect of communication – interlocutors synchronized their emotional states during the interaction, their communication included more statements on relationship building, and they evaluated the Synchat medium as more personal and emotionally engaging. Second, it increased the coherence of communication, reflected in greater continuity of the topics raised in Synchat conversations. The results are discussed from the interaction design (IxD) perspective.

Keywords: chat communication, online conversation, prosody, social synchronization, interaction incoherence, relationship building

Procedia PDF Downloads 120
2567 Optimizing the Readability of Orthopaedic Trauma Patient Education Materials Using ChatGPT-4

Authors: Oscar Covarrubias, Diane Ghanem, Christopher Murdock, Babar Shafiq

Abstract:

Introduction: ChatGPT is an advanced language AI tool designed to understand and generate human-like text. The aim of this study is to assess the ability of ChatGPT-4 to rewrite orthopaedic trauma patient education materials at the recommended 6th-grade level. Methods: Two independent reviewers accessed ChatGPT-4 (chat.openai.com) and gave identical instructions to simplify the readability of the provided text to a 6th-grade level. All trauma-related articles by the Orthopaedic Trauma Association (OTA) and American Academy of Orthopaedic Surgeons (AAOS) were provided sequentially. The academic grade level was determined using the Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease (FRE). Paired t-tests and Wilcoxon rank-sum tests were used to compare the FKGL and FRE between the ChatGPT-4-revised and original text. The intraclass correlation coefficient (ICC) was used to assess the variability of ChatGPT-4-generated text between the two reviewers. Results: ChatGPT-4 significantly reduced FKGL and increased FRE scores in the OTA (FKGL: 5.7±0.5 compared to the original 8.2±1.1; FRE: 76.4±5.7 compared to the original 65.5±6.6, p < 0.001) and AAOS articles (FKGL: 5.8±0.8 compared to the original 8.9±0.8; FRE: 76±5.5 compared to the original 56.7±5.9, p < 0.001). On average, 14.6% of OTA and 28.6% of AAOS articles required at least two revisions by ChatGPT-4 to achieve a 6th-grade reading level. The ICC demonstrated poor reliability for FKGL (OTA 0.24, AAOS 0.45) and moderate reliability for FRE (OTA 0.61, AAOS 0.73). Conclusion: This study provides a novel, simple, and efficient method of using language AI to optimize the readability of patient education content, which may require only the surgeon's final proofreading. The method would likely be as effective for other medical specialties.
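The readability scoring can be reproduced approximately with the textstat package, as a stand-in for however the authors computed FKGL and FRE; the file names below are hypothetical:

```python
# Sketch of FKGL/FRE scoring with the textstat package (a stand-in for the
# study's scoring procedure; input file names are hypothetical).
from pathlib import Path
import textstat

def readability(text):
    return {"FKGL": textstat.flesch_kincaid_grade(text),
            "FRE": textstat.flesch_reading_ease(text)}

original = Path("aaos_article.txt").read_text()
revised = Path("aaos_article_chatgpt.txt").read_text()
print(readability(original), readability(revised))  # target: FKGL near the 6th-grade level
```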

Keywords: artificial intelligence, AI, chatGPT, patient education, readability, trauma education

Procedia PDF Downloads 47
2566 Architectural Experience of the Everyday in Phuket Old Town

Authors: Thirayu Jumsai na Ayudhya

Abstract:

Initial attempts in my previous research to understand what architecture means to people as they go about their everyday lives revealed that bodies of knowledge such as environmental psychology, environmental perception, and environmental aesthetics did not adequately address the need for a contextualized and holistic theoretical framework. That previous research found that people's sense-making of their everyday architecture can be described in terms of four superordinate themes: (1) building in urban (text), (2) building in (text), (3) building in human (text), and (4) building in time (text). To understand more comprehensively how people make sense of their everyday architectural experience, Phuket Old Town was selected for this ongoing research as the focal urban context, where the distinctive Chino-Portuguese character is remarkable. It is expected that in a unique urban context like Phuket Old Town, unprecedented superordinate themes will be unveiled through the reflection of people's everyday experiences. The ongoing research on people's architectural experience conducted on Phuket Island, Thailand, will be presented succinctly. The research addresses the question of how people make sense of their everyday architecture and buildings, especially in a unique urban context such as Phuket Old Town, and identifies ways in which people make sense of their everyday architecture. Participant-Produced Photographs (PPP) and Interpretative Phenomenological Analysis (IPA) are adopted as the main methodologies. PPP allows people to express experiences of their everyday urban context freely, without any interference or forced data generation by researchers. With the IPA methodology, a small pool of participants is considered desirable given the detailed level of analysis required and its potential to produce a meaningful outcome.

Keywords: architectural experience, the everyday architecture, Phuket, Thailand

Procedia PDF Downloads 272
2565 Text Analysis to Support Structuring and Modelling a Public Policy Problem-Outline of an Algorithm to Extract Inferences from Textual Data

Authors: Claudia Ehrentraut, Osama Ibrahim, Hercules Dalianis

Abstract:

Policy-making situations are real-world problems that exhibit complexity in that they are composed of many interrelated problems and issues. To be effective, policies must holistically address the complexity of the situation rather than propose solutions to single problems. Formulating and understanding the situation and its complex dynamics is therefore key to finding holistic solutions. Analysis of text-based information on the policy problem, using Natural Language Processing (NLP) and text analysis techniques, can support the modelling of public policy problem situations in a more objective way, based on domain experts' knowledge and scientific evidence. The objective of this study is to support the modelling of public policy problem situations using text analysis of verbal descriptions of the problem. We propose a formal methodology for the analysis of qualitative data from multiple information sources on a policy problem in order to construct a causal diagram of the problem. The analysis process aims at identifying key variables, linking them by cause-effect relationships, and mapping that structure into a graphical representation that is adequate for designing action alternatives, i.e., policy options. This study describes the outline of an algorithm used to automate the initial step of a larger methodological approach, which has so far been done manually. In this initial step, inferences about key variables and their interrelationships are extracted from textual data to support better problem structuring. A small prototype for this step is also presented.
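A very small illustration of the inference-extraction step is given below: simple "X causes/leads to Y" statements are pulled out with spaCy and accumulated as edges of a causal diagram. The full methodology is richer; the verb list and extraction rules here are assumptions:

```python
# Toy illustration of extracting cause-effect edges from text and building a causal graph.
# Verb list and dependency rules are assumptions, not the authors' algorithm.
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")
CAUSAL_VERBS = {"cause", "increase", "decrease", "reduce", "lead"}

def causal_graph(text):
    graph = nx.DiGraph()
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB" and token.lemma_ in CAUSAL_VERBS:
                subj = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
                obj = [c for c in token.children if c.dep_ in ("dobj", "attr")]
                for prep in (c for c in token.children if c.dep_ == "prep"):
                    obj += [c for c in prep.children if c.dep_ == "pobj"]  # "lead to X"
                if subj and obj:
                    graph.add_edge(subj[0].lemma_, obj[0].lemma_, relation=token.lemma_)
    return graph

g = causal_graph("Unemployment increases poverty. Poverty leads to crime.")
print(list(g.edges(data=True)))
```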

Keywords: public policy, problem structuring, qualitative analysis, natural language processing, algorithm, inference extraction

Procedia PDF Downloads 561
2564 National Image in the Age of Mass Self-Communication: An Analysis of Internet Users' Perception of Portugal

Authors: L. Godinho, N. Teixeira

Abstract:

Nowadays, the massification of Internet access represents one of the major challenges to the traditional powers of the State, among which is the power to control its external image. The virtual world has also sparked the interest of the social sciences, which consider it a new field of study, an immense open text where sense is expressed. In this paper, that immense text has been accessed so as to understand the perception that Internet users from all over the world have of Portugal. Ours is a quantitative and qualitative approach, as we have resorted to buzz, thematic, and category analysis. The results confirm the predominance of the sea stereotype in others' vision of the Portuguese people and show that the national image has adapted to network communication through processes of individuation and paganization.

Keywords: national image, internet, self-communication, perception

Procedia PDF Downloads 237
2563 One-Shot Text Classification with Multilingual-BERT

Authors: Hsin-Yang Wang, K. M. A. Salam, Ying-Jia Lin, Daniel Tan, Tzu-Hsuan Chou, Hung-Yu Kao

Abstract:

Detecting user intent from natural language expressions has a wide variety of use cases in different natural language processing applications. Recently, few-shot training has seen a spike in usage in commercial domains. Due to the lack of significant sample features, downstream task performance has been limited or has led to unstable results across different domains. As a state-of-the-art method, the pre-trained BERT model, which gathers sentence-level information from a large text corpus, shows improvements on several NLP benchmarks. In this research, we propose a method that changes multi-class classification tasks into binary classification tasks and then uses the confidence score to rank the results. As a language model, BERT performs well on sequence data. In our experiment, we change the objective from predicting labels to finding the relations between words in sequence data. Our proposed method achieved 71.0% accuracy on the internal intent detection dataset and 63.9% accuracy on the HuffPost dataset. Acknowledgment: This work was supported by NCKU-B109-K003, a collaboration between National Cheng Kung University, Taiwan, and SoftBank Corp., Tokyo.
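The reformulation described above turns an N-way intent task into N binary "does this utterance match this label?" decisions that are ranked by confidence. A minimal sketch with multilingual BERT follows; the fine-tuning loop is omitted and the classification head shown is untrained, so this only illustrates the scoring and ranking mechanics:

```python
# Sketch of the binary-reformulation idea: score each (utterance, candidate label) pair
# with a two-class head on multilingual BERT and rank labels by confidence.
# The fine-tuning step is omitted; labels and utterance below are hypothetical.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)  # binary: related / not related

def rank_intents(utterance, candidate_intents):
    scores = {}
    for intent in candidate_intents:
        enc = tokenizer(utterance, intent, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**enc).logits
        scores[intent] = torch.softmax(logits, dim=-1)[0, 1].item()  # confidence of "related"
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_intents("turn the living-room lights off", ["lights_off", "play_music", "set_alarm"]))
```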

Keywords: OSML, BERT, text classification, one shot

Procedia PDF Downloads 80
2562 Freedom of Information and Freedom of Expression

Authors: Amin Pashaye Amiri

Abstract:

Freedom of information, according to which the public has a right to have access to government-held information, is largely considered as a tool for improving transparency and accountability in governments, and as a requirement of self-governance and good governance. So far, more than ninety countries have recognized citizens’ right to have access to public information. This recognition often took place through the adoption of an act referred to as “freedom of information act”, “access to public records act”, and so on. A freedom of information act typically imposes a positive obligation on a government to initially and regularly release certain public information, and also obliges it to provide individuals with information they request. Such an act usually allows governmental bodies to withhold information only when it falls within a limited number of exemptions enumerated in the act such as exemptions for protecting privacy of individuals and protecting national security. Some steps have been taken at the national and international level towards the recognition of freedom of information as a human right. Freedom of information was recognized in a few countries as a part of freedom of expression, and therefore, as a human right. Freedom of information was also recognized by some international bodies as a human right. The Inter-American Court of Human Rights ruled in 2006 that Article 13 of the American Convention on Human Rights, which concerns the human right to freedom of expression, protects the right of all people to request access to government information. The European Court of Human Rights has recently taken a considerable step towards recognizing freedom of information as a human right. However, in spite of the measures that have been taken, public access to government information is not yet widely accepted as an international human right. The paper will consider the degree to which freedom of information has been recognized as a human right, and study the possibility of widespread recognition of such a human right in the future. It will also examine the possible benefits of such recognition for the development of the human right to free expression.

Keywords: freedom of information, freedom of expression, human rights, government information

Procedia PDF Downloads 517
2561 From Text to Data: Sentiment Analysis of Presidential Election Political Forums

Authors: Sergio V Davalos, Alison L. Watkins

Abstract:

User-generated content (UGC) such as a website post has data associated with it: the time of the post, gender, location, type of device, and number of words. The text entered in UGC can provide a valuable dimension for analysis. In this research, each user post is treated as a collection of terms (words). In addition to the number of words per post, the frequency of each term is determined per post and summed over all posts. This research focuses on one specific aspect of UGC: sentiment. Sentiment analysis (SA) was applied to the content (user posts) of two sets of political forums related to the US presidential elections of 2012 and 2016. Sentiment analysis derives data from the text, which enables the subsequent application of data analytic methods. The SASA (SAIL/SAI Sentiment Analyzer) model was used for sentiment analysis, yielding a sentiment score for each post. Based on the sentiment scores of the posts, there are significant differences between the content and sentiment of the 2012 and 2016 presidential election forums. In the 2012 forums, 38% of the forums started with positive sentiment and 16% with negative sentiment. In the 2016 forums, 29% started with positive sentiment and 15% with negative sentiment. There were also changes in sentiment over time: for both elections, as the election drew closer, the cumulative sentiment score became negative. The candidate who won each election appeared in more posts than the losing candidates; in Trump's case, there were more negative posts about him than Clinton's highest number of posts, which were positive. KNIME topic modeling was used to derive topics from the posts. There were also changes in topics and keyword emphasis over time: initially the political parties were the most referenced, and as the election drew closer the emphasis shifted to the candidates. The SASA method predicted sentiment better than four other methods in SentiBench. The research resulted in deriving sentiment data from text; in combination with other data, the sentiment data provided insight and discovery about user sentiment in the US presidential elections of 2012 and 2016.
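As a stand-in for the SASA analyzer (which is not reproduced here), the per-post scoring and cumulative-sentiment tracking can be illustrated with NLTK's VADER:

```python
# Stand-in sketch of per-post sentiment scoring and cumulative tracking using NLTK's VADER
# (the study used the SASA model; the example posts below are invented for illustration).
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

posts = [
    ("2012-10-01", "This candidate actually has a plan I can get behind."),
    ("2012-10-20", "Another debate full of empty promises."),
]

cumulative = 0.0
for date, text in posts:
    score = sia.polarity_scores(text)["compound"]  # -1 (negative) .. +1 (positive)
    cumulative += score
    print(date, round(score, 3), round(cumulative, 3))
```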

Keywords: sentiment analysis, text mining, user generated content, US presidential elections

Procedia PDF Downloads 160
2560 Power Quality Modeling Using Recognition Learning Methods for Waveform Disturbances

Authors: Sang-Keun Moon, Hong-Rok Lim, Jin-O Kim

Abstract:

This paper presents Power Quality (PQ) modeling and filtering processes for distribution system disturbances using recognition learning methods. Typical PQ waveforms, both mathematically generated and gathered from field data, are applied to the proposed models. The objective of this paper is to analyze PQ data with respect to monitoring, discriminating, and evaluating the waveforms of power disturbances, in order to support preventive protection against system failures and the estimation of complex system problems. Signal filtering techniques are examined and used for removing noise from field waveforms and for feature extraction. Using feature extraction and learning classification techniques, the efficiency of recognizing the PQ disturbances is verified, with a focus on interactive modeling methods. The waveforms of 8 selected disturbances are modeled with randomized parameters within the IEEE 1159 PQ ranges, and the ranges, parameters, and weights are updated according to the field waveforms obtained. Current waveforms are processed in the same way as voltage waveforms to obtain waveform features, apart from some ratings and filters. Changing loads cause distortion in the voltage waveform, owing to the different patterns of current variation they draw. In conclusion, PQ disturbances in the voltage and current waveforms exhibit different patterns of variation and disturbance, and a modified technique based on the symmetrical components in the time domain is proposed in this paper for PQ disturbance detection and classification. Our method is based on the fact that waveforms obtained from the suggested trigger conditions contain potential information for abnormality detection. The extracted features are sequentially applied to estimation and recognition learning modules for further study.
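The waveform-modeling idea can be illustrated for one IEEE 1159 disturbance class, a voltage sag, generated with randomized parameters; the parameter ranges below are typical values, not the authors' exact settings:

```python
# Illustration of modeling one IEEE 1159 disturbance class (a voltage sag) with
# randomized parameters; depth/timing ranges are typical values, not the paper's.
import numpy as np

def voltage_sag(fs=6400, f0=60, cycles=10, rng=np.random.default_rng()):
    t = np.arange(0, cycles / f0, 1 / fs)
    depth = rng.uniform(0.1, 0.9)                                  # per-unit sag depth
    start, dur = rng.uniform(0.02, 0.05), rng.uniform(0.03, 0.08)  # seconds
    amplitude = np.ones_like(t)
    amplitude[(t >= start) & (t <= start + dur)] -= depth
    return t, amplitude * np.sin(2 * np.pi * f0 * t)

t, v = voltage_sag()  # one randomized sag waveform for training/testing a classifier
```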

Keywords: power quality recognition, PQ modeling, waveform feature extraction, disturbance trigger condition, PQ signal filtering

Procedia PDF Downloads 163
2559 Automatic Tagging and Accuracy in Assamese Text Data

Authors: Chayanika Hazarika Bordoloi

Abstract:

This paper is an attempt to work on Assamese, a highly inflectional language. Assamese is one of the national languages of India, and very little has been achieved for it in terms of computational research. Building a language processing tool for a natural language is not straightforward, as the standards and language representation change at various levels. This paper presents the inflectional suffixes of Assamese verbs and shows how statistical tools, along with linguistic features, can improve tagging accuracy. A conditional random fields (CRF) tool was used to automatically tag and train the text data, and accuracy improved after linguistic features were fed into the training data. Because Assamese is a highly inflectional language, standardizing its morphology is challenging. Inflectional suffixes are used as features of the text data. In order to analyze the inflections of Assamese word forms, a list of suffixes was prepared, comprising all possible suffixes that the various categories can take. Assamese words can be classified into inflected classes (noun, pronoun, adjective, and verb) and uninflected classes (adverb and particle). The corpus used for this morphological analysis contains a large number of tokens; it is a mixed corpus and has given satisfactory accuracy. The accuracy of the tagger gradually improved with the modified training data.
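The suffix-feature CRF setup can be sketched with sklearn-crfsuite as a stand-in for the CRF tool mentioned above; the feature template is an assumption:

```python
# Minimal CRF tagging sketch with inflectional-suffix features (sklearn-crfsuite
# used as a stand-in; the feature template is an assumption, not the authors' exact one).
import sklearn_crfsuite

def word_features(sentence, i):
    word = sentence[i]
    return {
        "word": word,
        "suffix2": word[-2:],  # inflectional suffixes as features
        "suffix3": word[-3:],
        "prev": sentence[i - 1] if i > 0 else "<S>",
        "next": sentence[i + 1] if i < len(sentence) - 1 else "</S>",
    }

def sent2features(sentence):
    return [word_features(sentence, i) for i in range(len(sentence))]

# train_sents: list of token lists; train_tags: matching lists of POS tags
def train(train_sents, train_tags):
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
    crf.fit([sent2features(s) for s in train_sents], train_tags)
    return crf
```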

Keywords: CRF, morphology, tagging, tagset

Procedia PDF Downloads 171
2558 Combined Automatic Speech Recognition and Machine Translation in Business Correspondence Domain for English-Croatian

Authors: Sanja Seljan, Ivan Dunđer

Abstract:

The paper presents combined automatic speech recognition (ASR) for English and machine translation (MT) for English and Croatian in the domain of business correspondence. The first part presents the results of training a commercial ASR system on two English data sets, enriched by error analysis. The second part presents the results of machine translation performed by the online tool Google Translate for the English-Croatian and Croatian-English language pairs. Human evaluation in terms of usability is conducted, and internal consistency is calculated by Cronbach's alpha coefficient, enriched by error analysis. Automatic evaluation is performed using the WER (Word Error Rate) and PER (Position-independent word Error Rate) metrics, followed by an investigation of Pearson's correlation with the human evaluation.
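WER can be computed with the jiwer package, and a simplified position-independent variant (PER) can be computed over bags of words; the exact PER formula used in the paper may differ from this sketch:

```python
# WER via the jiwer package plus a simplified bag-of-words PER (word order ignored).
# Example sentences are invented; the paper's exact PER definition may differ.
from collections import Counter
import jiwer

def per(reference: str, hypothesis: str) -> float:
    ref, hyp = Counter(reference.split()), Counter(hypothesis.split())
    matches = sum((ref & hyp).values())  # words matched regardless of position
    errors = max(sum(ref.values()), sum(hyp.values())) - matches
    return errors / sum(ref.values())

reference = "please send the invoice by friday"
hypothesis = "please send invoice by next friday"
print("WER:", jiwer.wer(reference, hypothesis))
print("PER:", per(reference, hypothesis))
```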

Keywords: automatic machine translation, integrated language technologies, quality evaluation, speech recognition

Procedia PDF Downloads 454
2557 Development of Innovative Islamic Web Applications

Authors: Farrukh Shahzad

Abstract:

Rich Islamic resources related to religious texts, the Islamic sciences, and history are widely available in print and in electronic format online. However, most of these works are only available in the Arabic language. In this research, an attempt is made to utilize these resources to create interactive web applications in Arabic, English, and other languages. The system utilizes pattern recognition, knowledge management, data mining, information retrieval and management, indexing, storage, and data-analysis techniques to parse, store, convert, and manage the information from authentic Arabic resources. These interactive web apps provide smart multilingual search, tree-based search, and on-demand information matching and linking. In this paper, we provide details of the application architecture, design, implementation, and technologies employed. We also present a summary of the web applications already developed and include some screenshots from the corresponding websites. These web applications provide innovative online learning systems (e-learning and computer-based education).

Keywords: Islamic resources, Muslim scholars, hadith, narrators, history, fiqh

Procedia PDF Downloads 259
2556 Against Language Disorder: A Way of Reading Dialects in Yan Lianke’s Novels

Authors: Thuy Hanh Nguyen Thi

Abstract:

Using the methods of close reading and text analysis, this article analyzes the use and creation of dialects as a way of demonstrating Yan Lianke's creative stance. The article argues that this is the writer's narrative strategy in a fight against aphasia, a language disorder of Chinese people and culture, demonstrating a sense of return to folklore and marking his own linguistic style. In terms of the verbal text, the dialect in Yan Lianke's novels is manifested through the use of words, sentences, and dialect forms. Two types of dialect exist in Yan Lianke's novels: the current dialect system and the particular dialect system of the Pa Lou world created by the writer himself in order to enrich the vocabulary of Han Chinese.

Keywords: Yan Lianke, aphasia, dialect, Pa Lou world

Procedia PDF Downloads 100