Search results for: lexical encoding
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 439

229 Latency-Based Motion Detection in Spiking Neural Networks

Authors: Mohammad Saleh Vahdatpour, Yanqing Zhang

Abstract:

Understanding the neural mechanisms underlying motion detection in the human visual system has long been a fascinating challenge in neuroscience and artificial intelligence. This paper presents a spiking neural network model inspired by the processing of motion information in the primate visual system, particularly focusing on the Middle Temporal (MT) area. In our study, we propose a multi-layer spiking neural network model to perform motion detection tasks, leveraging the idea that synaptic delays in neuronal communication are pivotal in motion perception. Synaptic delay, determined by factors like axon length and myelin insulation, affects the temporal order of input spikes, thereby encoding motion direction and speed. Overall, our spiking neural network model demonstrates the feasibility of capturing motion detection principles observed in the primate visual system. The combination of synaptic delays, learning mechanisms, and shared weights and delays in SMD provides a promising framework for motion perception in artificial systems, with potential applications in computer vision and robotics.
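The delay-based mechanism the abstract describes can be sketched as a Reichardt-style coincidence detector: a synaptic delay re-aligns spikes from two neighbouring photoreceptors only for one direction of motion. This is a minimal illustration of the principle, not the paper's SMD model; the spike times, delay, and coincidence window are invented values.

```python
def coincidence_response(spikes_a, spikes_b, delay, window=1.0):
    """Count coincidences of input A (delayed by `delay` ms) with input B."""
    return sum(1 for ta in spikes_a for tb in spikes_b
               if abs((ta + delay) - tb) <= window)

# A stimulus sweeping left-to-right hits photoreceptor A 5 ms before B.
a_spikes = [10.0]   # ms
b_spikes = [15.0]   # ms

# Delaying A by 5 ms re-aligns the spikes: the rightward detector fires.
rightward = coincidence_response(a_spikes, b_spikes, delay=5.0)
# The mirror-wired leftward detector sees misaligned spikes and stays silent.
leftward = coincidence_response(b_spikes, a_spikes, delay=5.0)
```

In a spiking network, the delay would be realized by axonal conduction time rather than an explicit parameter, but the direction selectivity arises the same way.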

Keywords: neural network, motion detection, signature detection, convolutional neural network

Procedia PDF Downloads 48
228 Probing Syntax Information in Word Representations with Deep Metric Learning

Authors: Bowen Ding, Yihao Kuang

Abstract:

In recent years, with the development of large-scale pre-trained language models, building vector representations of text with deep neural network models has become standard practice for natural language processing tasks. Performance on downstream tasks shows that the text representations constructed by these models contain linguistic information, but how, and to what extent, that information is encoded remains unclear. In this work, a structural probe is proposed to detect whether the vector representation produced by a deep neural network embeds a syntax tree. The probe is trained with deep metric learning, so that the distance between word vectors in the metric space it defines encodes the distance between words on the syntax tree, and the norm of a word vector encodes the depth of the word in the tree. Experimental results on ELMo and BERT show that the syntax tree is encoded in their parameters and in the word representations they produce.
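The shape of the probe's objective can be sketched as follows: a linear map B defines a metric space in which squared distances between projected word vectors should match tree distances. The word vectors, the toy parse tree, and the random B below are illustrative; the paper trains B with deep metric learning rather than evaluating a random one.

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))               # 4 word vectors from some encoder

# Pairwise distances between words on a chain-shaped parse tree w0-w1-w2-w3.
tree_dist = np.array([[0, 1, 2, 3],
                      [1, 0, 1, 2],
                      [2, 1, 0, 1],
                      [3, 2, 1, 0]], dtype=float)

B = 0.1 * rng.normal(size=(8, 8))         # the probe's only parameters

def probe_loss(B):
    proj = h @ B.T                          # project into the probe's metric space
    diff = proj[:, None, :] - proj[None, :, :]
    pred = (diff ** 2).sum(axis=-1)         # squared L2 distances between words
    return np.abs(pred - tree_dist).mean()  # L1 gap to the gold tree distances

loss = probe_loss(B)
```

Training would minimize this loss over B; a well-probed representation is one for which some B drives the gap close to zero.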

Keywords: deep metric learning, syntax tree probing, natural language processing, word representations

Procedia PDF Downloads 32
227 Corporate Cautionary Statement: A Genre of Professional Communication

Authors: Chie Urawa

Abstract:

Cautionary statements or disclaimers in corporate annual reports need to be carefully designed, because clear cautionary statements may protect a company in the case of legal disputes, yet may also undermine positive impressions. This study compares the language of cautionary statements using two corpora, Sony’s cautionary statement corpus (S-corpus) and Panasonic’s cautionary statement corpus (P-corpus), illustrating their differences and similarities in the use of meaningful cautionary statements and critically analyzing why practitioners use language the way they do. The findings describe distinct differences between the two companies in how risk factors are presented and how the statements are made. The word ability is used more for legal protection in the S-corpus, whereas the word possibility is used more to convey a better impression in the P-corpus. The main similarities are identified in the use of lexical words and pronouns, and in almost identical wordings over eight years. The findings show how each company makes its statements unique in the presentation of risk factors, and characterize cautionary statements as a specific genre of professional communication. Important implications of this study are that this comprehensive approach can be applied in other contexts and can be used by companies to reflect upon their own cautionary statements.

Keywords: cautionary statements, corporate annual reports, corpus, risk factors

Procedia PDF Downloads 138
226 The Co-Simulation Interface SystemC/Matlab Applied in JPEG and SDR Application

Authors: Walid Hassairi, Moncef Bousselmi, Mohamed Abid

Abstract:

Functional verification is a major part of today’s system design task. Several approaches are available for verification at a high abstraction level, where designs are often modeled using MATLAB/Simulink. However, the diversity of approaches is a barrier to a unified verification flow. In this paper, we propose a co-simulation interface between SystemC and MATLAB/Simulink to enable functional verification of multi-abstraction-level designs. The resulting verification flow is tested on the JPEG compression algorithm. The required synchronization of both simulation environments, as well as data type conversion, is handled by the proposed co-simulation flow. The JPEG encoder is divided into two parts: the DCT, implemented in SystemC, represents the hardware part; quantization and entropy encoding, implemented in MATLAB, represent the software part. For communication and synchronization between these two parts, we use an S-Function and the MATLAB engine in Simulink. On this premise, the study introduces a new SystemC hardware implementation of the DCT. Compared to a pure software simulation, we observe a reduction in simulation time of 88.15% for JPEG, and a design efficiency of 90% for the SDR application.

Keywords: hardware/software, co-design, co-simulation, systemc, matlab, s-function, communication, synchronization

Procedia PDF Downloads 359
225 A Guide to User-Friendly Bash Prompt: Adding Natural Language Processing Plus Bash Explanation to the Command Interface

Authors: Teh Kean Kheng, Low Soon Yee, Burra Venkata Durga Kumar

Abstract:

In 2022, as the world becomes increasingly computer-related, more individuals are attempting to learn coding on their own or in school, having discovered the value of learning to code and the benefits it will bring them. But learning to code is difficult for most people; even senior programmers with a decade of experience still consult online sources while coding. The reason is that coding is not like talking to other people: it has a specific syntax that must be followed for the computer to understand what we want it to do, so coding is hard for people with no prior exposure to the field. Learning Bash code from the bash prompt is harder still, because the bash prompt is just an empty box waiting for the user to tell the computer what to do; without referring to the internet, a new user will not know what can be done with the prompt. From this, we can conclude that the bash prompt is not user-friendly for new users who are learning Bash code. Our goal in this paper is to propose a user-friendly Bash prompt in Ubuntu OS that uses artificial intelligence (AI) to lower the threshold of learning Bash code, letting users write and learn Bash code with their own words and concepts.
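The core idea, matching a natural-language request to a bash command by similarity, can be sketched with a purely lexical score. The command table, descriptions, and Jaccard scoring below are hypothetical placeholders, not the authors' implementation, which also targets semantic similarity.

```python
# Hypothetical description table mapping bash commands to plain-English glosses.
COMMANDS = {
    "ls -l": "list files in the current directory with details",
    "pwd": "print the current working directory path",
    "mkdir": "create a new directory folder",
}

def jaccard(a, b):
    """Lexical similarity: shared tokens over all tokens."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb)

def suggest(query):
    """Return the command whose gloss best overlaps the user's words."""
    return max(COMMANDS, key=lambda cmd: jaccard(query.lower(), COMMANDS[cmd]))

best = suggest("show me the files in this directory")
```

A real system would replace the token-overlap score with embedding-based semantic similarity, but the prompt-side plumbing is the same: score, rank, and suggest.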

Keywords: user-friendly, bash code, artificial intelligence, threshold, semantic similarity, lexical similarity

Procedia PDF Downloads 90
224 Taguchi Method for Analyzing a Flexible Integrated Logistics Network

Authors: E. Behmanesh, J. Pannek

Abstract:

Logistics network design is known as one of the strategic decision problems. As such problems belong to the category of NP-hard problems, traditional methods fail to find an optimal solution in a short time. In this study, we incorporate reverse flows through an integrated design of a forward/reverse supply chain network, formulated as a mixed integer linear program. This integrated, multi-stage model is enriched by three different delivery paths, which makes the problem more complex. To tackle such an NP-hard problem, a memetic algorithm based on a revised random-path direct encoding method is adopted as the solution methodology. Every algorithm has parameters that need to be investigated to reveal its best performance. In this regard, the Taguchi method is adapted to identify the optimum operating conditions of the proposed memetic algorithm and improve the results. Four factors are considered: population size, crossover rate, local search iterations, and number of iterations. Analyzing these parameters and the resulting improvements is the focus of this research.
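The Taguchi analysis step can be illustrated with a toy example: a small L4(2^3) orthogonal array covers three two-level factors in four runs, and the mean signal-to-noise (S/N) ratio per factor level picks the best setting. The factors echo some of the paper's (population size, crossover rate, iterations), but the levels and run responses are invented, not the study's tuning data.

```python
import math

L4 = [            # factor levels per run: (population size, crossover, iterations)
    (1, 1, 1),
    (1, 2, 2),
    (2, 1, 2),
    (2, 2, 1),
]
response = [70.0, 85.0, 80.0, 90.0]      # e.g. solution quality of each run

def sn_larger_is_better(y):
    """Taguchi 'larger is better' S/N ratio for a single observation."""
    return -10.0 * math.log10(1.0 / y ** 2)

def mean_sn(factor, level):
    """Average S/N ratio over the runs where `factor` was set to `level`."""
    sn = [sn_larger_is_better(r)
          for row, r in zip(L4, response) if row[factor] == level]
    return sum(sn) / len(sn)

# For each factor, keep the level with the higher mean S/N ratio.
best_levels = [max((1, 2), key=lambda lv: mean_sn(f, lv)) for f in range(3)]
```

The orthogonal array is what makes this cheap: four runs suffice to estimate all three main effects, instead of the eight runs of a full factorial.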

Keywords: integrated logistics network, flexible path, memetic algorithm, Taguchi method

Procedia PDF Downloads 163
223 The Use of Hedging Devices in Students’ Oral Presentations

Authors: Siti Navila

Abstract:

Hedging, as a kind of pragmatic competence, is an essential part of achieving the goal of communication, especially in academic discourse, where knowledge is shared among the academic community. Academic discourse demands appropriateness and modesty of an author or speaker in stating arguments: being polite, cautious, and tentative, and differentiating personal opinions from facts, to name but a few aspects, all of which can be achieved through hedging. This study was conducted to find the hedging devices used by students and to analyze how they use them in their oral presentations. Oral presentations by English Department students of the State University of Jakarta, given on the final test of their Academic Presentation course, were recorded and explored formally and functionally. It was found that the hedging devices students used most frequently were shields, which they commonly used when making suggestions, stating claims, offering opinions as possible but still valid answers, and proposing appropriate solutions. The researcher suggests that hedging be made familiar in learning, since potential conflicts that are likely to occur while delivering ideas in academic contexts, such as disagreement, criticism, and personal judgment, can be reduced with the use of hedging. It will also benefit students in achieving academic competence, giving them the ability to present their ideas appropriately and more acceptably in academic discourse.

Keywords: academic discourse, hedging, hedging devices, lexical hedges, Meyer classification

Procedia PDF Downloads 436
222 Use of Computer and Machine Learning in Facial Recognition

Authors: Neha Singh, Ananya Arora

Abstract:

Facial expression measurement plays a crucial role in the identification of emotion. Facial expression plays a key role in psychophysiology, the study of neural bases, and emotional disorders, to name a few areas. The Facial Action Coding System (FACS) has proven to be the most efficient and widely used of the various systems for describing facial expressions. With FACS, coders can manually code facial expressions by viewing video-recorded facial behaviour at a specified frame rate and in slow motion, decomposing it into action units (AUs), the smallest visually discriminable facial movements. FACS explicitly differentiates between facial actions and inferences about what the actions mean. Action units are the fundamental unit of the FACS methodology. It is regarded as the standard measure for facial behaviour and finds application in fields of study well beyond emotion science, including facial neuromuscular disorders, neuroscience, computer vision, computer graphics and animation, and face encoding for digital processing. This paper discusses the conceptual basis of FACS, a numerical listing of the discrete facial movements identified by the system, the system's psychometric evaluation, and the recommended training requirements for its software.

Keywords: facial action, action units, coding, machine learning

Procedia PDF Downloads 75
221 Effect of Radiotherapy/Chemotherapy Protocol on the Gut Microbiome in Pediatric Cancer Patients

Authors: Nourhan G. Sahly, Ahmed Moustafa, Mohamed S. Zaghloul, Tamer Z. Salem

Abstract:

The gut microbiome plays important roles in the human body that include, but are not limited to, digestion, immunity, homeostasis, and the response to some drugs, such as chemotherapy and immunotherapy. Its role has also been linked to radiotherapy and the associated gastrointestinal injuries, where microbial dysbiosis could be the driving force for dose determination or the complete suspension of the treatment protocol. Linking gut microbiota alterations to different cancer treatment protocols is not easy, especially in humans, yet enormous effort has been exerted to understand this complex relationship. In the current study, we describe gut microbiota dysbiosis in pediatric sarcoma patients with tumors in the pelvic region with regard to radiotherapy and antibiotics. Fecal samples were collected as a source of microbial DNA, from which the V3-V5 regions of the 16S rRNA gene were sequenced. Two of the three patients under study experienced an increase in alpha diversity after exposure to 50.4 Gy. Although the overall relative abundance of the phylum Firmicutes generally decreased, six of its taxa increased in all patients. Our results may indicate radiosensitivity, or enrichment due to the antibiotic resistance, of the elevated taxa. Further studies are needed to describe the extent of radiosensitivity with regard to antibiotic resistance.
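An alpha-diversity comparison of the kind reported above can be illustrated with the Shannon index over taxon counts. The counts below are invented for illustration, not the study's sequencing data.

```python
import math

def shannon(counts):
    """Shannon diversity index H = -sum(p * ln p) over taxon proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

pre_treatment = [90, 5, 3, 2]       # one dominant taxon: low evenness
post_treatment = [30, 25, 25, 20]   # more even community: higher diversity

h_pre, h_post = shannon(pre_treatment), shannon(post_treatment)
```

An "increase in alpha diversity post exposure" corresponds to h_post exceeding h_pre, driven here by the more even post-treatment community.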

Keywords: combined radiotherapy and chemotherapy, gut microbiome, pediatric cancer, radiosensitivity

Procedia PDF Downloads 126
220 Italian Speech Vowels Landmark Detection through the Legacy Tool 'xkl' with Integration of Combined CNNs and RNNs

Authors: Kaleem Kashif, Tayyaba Anam, Yizhi Wu

Abstract:

This paper introduces a methodology for advancing Italian speech vowel landmark detection within the distinctive-feature-based speech recognition domain. Building on the legacy tool 'xkl', the study integrates combined convolutional neural networks (CNNs) and recurrent neural networks (RNNs) and presents a comprehensive enhancement to the legacy software. The integration incorporates re-assigned spectrogram methodologies, enabling detailed acoustic analysis and, in particular, improving the precision of vowel formant estimation. The proposed model, combining CNNs and RNNs, demonstrates high precision and robustness in landmark detection, yielding a substantial performance gain over conventional methods and constituting a state-of-the-art solution for distinctive-feature-based speech recognition systems. On the deep learning side, the combined CNN-RNN architecture is endowed with specialized temporal embeddings, self-attention mechanisms, and positional embeddings, allowing it to capture intricate dependencies within Italian speech vowels and making it highly adaptable in the distinctive-feature domain. Furthermore, an advanced temporal modeling approach employs Bayesian temporal encoding to refine the measurement of inter-landmark intervals. Comparative analysis against state-of-the-art models reveals a substantial improvement in accuracy, highlighting the robustness and efficacy of the proposed methodology.
In rigorous testing on the LaMIT database, containing speech recorded in a silent room by four Italian native speakers, the landmark detector demonstrates exceptional performance, achieving a 95% true detection rate and a 10% false detection rate. Most missed landmarks were observed in proximity to reduced vowels. These promising results underscore the robust identifiability of landmarks within the speech waveform and establish the feasibility of employing a landmark detector as the front end of a speech recognition system. The integration of re-assigned spectrogram fusion, CNNs, RNNs, and Bayesian temporal encoding marks a significant advancement in Italian speech vowel landmark detection, offering accuracy, adaptability, and sophistication. This work contributes to the broader scientific community a methodologically rigorous framework for enhancing landmark detection accuracy in Italian speech vowels, and it lays a foundation for future advances in speech signal processing and for practical applications across domains requiring robust speech recognition systems.

Keywords: landmark detection, acoustic analysis, convolutional neural network, recurrent neural network

Procedia PDF Downloads 16
219 The Influence of Screen Translation on Creative Audiovisual Writing: A Corpus-Based Approach

Authors: John D. Sanderson

Abstract:

The popularity of American cinema worldwide has contributed to the development of sociolects related to specific film genres in other cultural contexts by means of screen translation, in many cases eluding norms of usage in the target language, a process whose result has come to be known as 'dubbese'. A consequence for the reception in countries where local audiovisual fiction consumption is far lower than American imported productions is that this linguistic construct is preferred, even though it differs from common everyday speech. The iconography of film genres such as science-fiction, western or sword-and-sandal films, for instance, generates linguistic expectations in international audiences who will accept more easily the sociolects assimilated by the continuous reception of American productions, even if the themes, locations, characters, etc., portrayed on screen may belong in origin to other cultures. And the non-normative language (e.g., calques, semantic loans) used in the preferred mode of linguistic transfer, whether it is translation for dubbing or subtitling, has diachronically evolved in many cases into a status of canonized sociolect, not only accepted but also required, by foreign audiences of American films. However, a remarkable step forward is taken when this typology of artificial linguistic constructs starts being used creatively by nationals of these target cultural contexts. In the case of Spain, the success of American sitcoms such as Friends in the 1990s led Spanish television scriptwriters to include in national productions lexical and syntactical indirect borrowings (Anglicisms not formally identifiable as such because they include elements from their own language) in order to target audiences of the former. However, this commercial strategy had already taken place decades earlier when Spain became a favored location for the shooting of foreign films in the early 1960s. 
The international popularity of the then newly developed sub-genre known as Spaghetti-Western encouraged Spanish investors to produce their own movies, and local scriptwriters made use of the dubbese developed nationally since the advent of sound in film instead of using normative language. As a result, direct Anglicisms, as well as lexical and syntactical borrowings, made up the creative writing of these Spanish productions, which also became commercially successful. Interestingly enough, some of these films were even marketed in English-speaking countries as original westerns (some of the names of actors and directors were anglified to that purpose) dubbed into English. The analysis of these 'back translations' will also foreground some semantic distortions that arose in the process. In order to perform the research on these issues, a wide corpus of American films has been used, which chronologically ranges from Stagecoach (John Ford, 1939) to Django Unchained (Quentin Tarantino, 2012), together with a shorter corpus of Spanish films produced during the golden age of Spaghetti Westerns, from Una tumba para el sheriff (Mario Caiano; in English Lone and Angry Man, William Hawkins) to Tu fosa será la exacta, amigo (Juan Bosch, 1972; in English My Horse, My Gun, Your Widow, John Wood). The methodology of analysis and the conclusions reached could be applied to other genres and other cultural contexts.

Keywords: dubbing, film genre, screen translation, sociolect

Procedia PDF Downloads 134
218 Negativization: A Focus Strategy in Basà Language

Authors: Imoh Philip

Abstract:

Basà is classified as belonging to the Kainji family, under the Western Kainji sub-phylum known as Rubasa (Basa Benue) (Crozier & Blench, 1992:32). Basà is an under-described language spoken in North-Central Nigeria and is characterized by subject-verb-object (henceforth SVO) canonical word order. Data for this work are sourced from the researcher’s native intuition of the language, corroborated by careful observation of native speakers. This paper investigates a syntactic derivational strategy of information-structure encoding in Basà. It focuses on a negative operator as a strategy for focusing the constituent or clause that follows it while negativizing the whole proposition. Items that are not nouns must undergo an obligatory nominalization process, by affixation, modification, or conversion, before they are moved to the preverbal position for these operations. The study provides evidence that different constituents in the sentence, such as the subject, direct object, indirect object, genitive, verb phrase, prepositional phrase, clause, and ideophone, can be focused with the same negativizing operator. The process is characterized by focusing the preverbal NP constituent alone, whereas the whole proposition is negated. The study can stimulate similar studies, or be replicated, in other languages.

Keywords: negation, focus, Basà, nominalization

Procedia PDF Downloads 570
217 A Cognitive Semantic Analysis of the Metaphorical Extensions of Come out and Take Over

Authors: Raquel Rossini, Edelvais Caldeira

Abstract:

The aim of this work is to investigate the motivation for the metaphorical uses of two verb combinations: come out and take over. Drawing on cognitive semantics theories, image schemas, and metaphors, we attempt to demonstrate that: a) the metaphorical senses of both 'come out' and 'take over' extend from the central (spatial) senses of both the verbs and the particles in these verb combinations; and b) the particles 'out' and 'over' also contribute to the overall meaning of the verb combinations. To do so, a random selection of 579 concordance lines for come out and 1,412 for take over was obtained from the Corpus of Contemporary American English (COCA). One of the main procedures adopted in the present work was the establishment of the central senses of the verbs and particles. The research questions addressed in this study are as follows: a) how does the identification of trajector and landmark help reveal patterns that contribute to the identification of the semantic network of these two verb combinations?; b) what is the relationship between the schematic structures attributed to the particles and the metaphorical uses found in the empirical data?; and c) what conceptual metaphors underlie the mappings from the source to the target domains? The results demonstrated that not only the lexical verbs come and take but also the particles out and over play an important role in the different meanings of come out and take over. Moreover, image schemas and conceptual metaphors proved helpful in establishing the motivations for the metaphorical uses of these linguistic structures.

Keywords: cognitive linguistics, English syntax, multi-word verbs, prepositions

Procedia PDF Downloads 123
216 Comprehending the Relationship between the Red Blood Cells of a Protein 4.1 -/- Patient and Those of Healthy Controls: A Comprehensive Analysis of Tandem Mass Spectrometry Data

Authors: Ahmed M. Hjazi, Bader M. Hjazi

Abstract:

Protein 4.1 is a crucial component of the complex interactions between the cytoskeleton and other junctional complex proteins. When the gene encoding this protein is altered, resulting in reduced expression, or when the protein is absent, the red cell undergoes a significant structural change. This research aims at a deeper comprehension of the biochemical effects of red cell protein deficiency. A tandem mass tag mass spectrometry (TMT-MS/MS) analysis of patient cells lacking protein 4.1, compared against three healthy controls, was performed by the Proteomics Institute of the University of Bristol. SDS-PAGE and Western blotting were applied to the original patient sample and the controls to partially confirm the TMT-MS/MS analysis of the protein-4.1-deficient cells. Compared to healthy controls, the samples lacking protein 4.1 had significantly higher concentrations of proteins that probably originated from reticulocytes, which could occur if the patient has an elevated reticulocyte count. The increase in chaperone and reticulocyte-associated proteins was the most notable finding of this study and may result from elevated numbers of reticulocytes in patients with hereditary elliptocytosis.

Keywords: hereditary elliptocytosis, protein 4.1, red cells, tandem mass spectrometry data

Procedia PDF Downloads 51
215 Learning Chinese Suprasegmentals for a Better Communicative Performance

Authors: Qi Wang

Abstract:

Chinese has become a powerful worldwide language, and millions of learners are studying it all over the world. Chinese is a tone language with unique meaningful characters, which makes it harder for foreign learners to master. On the other hand, as with any foreign language, learners of Chinese first learn the basic Chinese sound structure (the initials and finals, tones, the neutral tone, and tone sandhi). It is quite common that, in subsequent studies, teachers make a great effort at drilling and error correction to help students pronounce correctly, but ignore the training of suprasegmental features (e.g., stress, intonation). This paper analyses oral data from our graduating students (two-year program) from 2006-2013 and presents the intonation pattern of our graduates speaking Chinese as a second language: high and flat, with heavy accents, lacking lexical stress, appropriate stop endings, and intonation, which leads to misunderstandings in real contexts of communication and in the official international Chinese tests, e.g., the HSK (Chinese Proficiency Test) and HSKK (HSK Speaking Test). The paper also demonstrates how native Chinese speakers use suprasegmental features strategically in different functions and moods (declarative, interrogative, imperative, exclamatory, and rhetorical intonations), in order to train learners to achieve a better communicative performance.

Keywords: second language learning, suprasegmental, communication, HSK (Chinese Proficiency Test)

Procedia PDF Downloads 414
214 On the Implementation of The Pulse Coupled Neural Network (PCNN) in the Vision of Cognitive Systems

Authors: Hala Zaghloul, Taymoor Nazmy

Abstract:

One of the great challenges of the 21st century is to build a robot that can perceive and act within its environment and communicate with people, while also exhibiting cognitive capabilities that lead to human-like performance. The Pulse Coupled Neural Network (PCNN) is a relatively new ANN model derived from a mammalian neural model, with great potential in the area of image processing as well as target recognition, feature extraction, speech recognition, combinatorial optimization, and compressed encoding. The PCNN has unique features among neural network types that make it a candidate for an important approach to perception in cognitive systems. This work shows and emphasizes the potential of the PCNN to perform different tasks related to image processing. The main obstacle preventing the direct implementation of this technique is the need to find a way to control the PCNN parameters so that they perform a specific task. This paper evaluates the performance of the standard PCNN model in processing images with different properties, selects the important parameters that give significant results, and discusses approaches for adapting the PCNN parameters to perform a specific task.
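The standard PCNN iteration referred to above can be sketched as follows: each neuron combines a feeding input (driven by its pixel) and a linking input (driven by neighbouring pulses), and fires when the internal activity exceeds a decaying dynamic threshold that jumps after every pulse. The decay/gain parameters and the 3x3 linking kernel below are illustrative defaults, not the tuned values the paper searches for.

```python
import numpy as np

def neighbor_sum(Y, K):
    """3x3 weighted sum of neighbouring pulses (zero padding at the borders)."""
    out = np.zeros_like(Y)
    rows, cols = Y.shape
    for i in range(rows):
        for j in range(cols):
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < rows and 0 <= jj < cols:
                        out[i, j] += K[di + 1, dj + 1] * Y[ii, jj]
    return out

def pcnn_first_fire(S, steps=10, beta=0.2, aF=0.1, aL=1.0, aT=0.5,
                    VF=0.5, VL=0.2, VT=20.0):
    """Return, per pixel, the first iteration at which it pulses (inf = never)."""
    K = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    F = np.zeros_like(S)            # feeding input
    L = np.zeros_like(S)            # linking input
    Y = np.zeros_like(S)            # pulse output
    T = np.full_like(S, VT)         # dynamic threshold
    first = np.full(S.shape, np.inf)
    for n in range(1, steps + 1):
        link = neighbor_sum(Y, K)
        F = np.exp(-aF) * F + VF * link + S
        L = np.exp(-aL) * L + VL * link
        U = F * (1.0 + beta * L)    # internal activity
        Y = (U > T).astype(float)
        first = np.where((Y == 1.0) & np.isinf(first), n, first)
        T = np.exp(-aT) * T + VT * Y
    return first

S = np.zeros((4, 4))
S[:2, :2] = 1.0                     # a bright patch on a dark background
first = pcnn_first_fire(S)
```

The pulse-timing map this produces is exactly the kind of output used for segmentation: pixels with similar intensity (and linked neighbours) tend to pulse together, so bright regions fire earlier than dark ones.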

Keywords: cognitive system, image processing, segmentation, PCNN kernels

Procedia PDF Downloads 252
213 Arousal, Encoding, And Intrusive Memories

Authors: Hannah Gutmann, Rick Richardson, Richard Bryant

Abstract:

Intrusive memories following a traumatic event are not uncommon; in some individuals, however, these memories become maladaptive and lead to prolonged stress reactions. A seminal model of PTSD holds that aberrant processing during trauma may lead to prolonged stress reactions and intrusive memories: elevated arousal at the time of the trauma promotes data-driven processing, leading to fragmented and intrusive memories. This study investigated the role of elevated arousal in the development of intrusive memories. We measured salivary markers of arousal and investigated their impact on data-driven processing, memory fragmentation, and, subsequently, the development of intrusive memories. We assessed 100 healthy participants to understand their processing style, arousal, and experience of intrusive memories. Participants were randomised to a control or experimental condition, the latter of which was designed to increase their arousal. Based on current theory, participants in the experimental condition were expected to engage in more data-driven processing and to experience more intrusive memories than participants in the control condition. This research aims to shed light on the mechanisms underlying the development of intrusive memories, in order to illustrate ways in which therapeutic approaches for PTSD may be augmented for greater efficacy.

Keywords: stress, cortisol, SAA, PTSD, intrusive memories

Procedia PDF Downloads 164
212 Hardware Implementation of Local Binary Pattern Based Two-Bit Transform Motion Estimation

Authors: Seda Yavuz, Anıl Çelebi, Aysun Taşyapı Çelebi, Oğuzhan Urhan

Abstract:

Nowadays, the demand for devices capable of real-time video transmission is ever-increasing, and high-resolution video has made efficient video compression techniques an essential component of capturing and transmitting video data. Motion estimation has a critical role in encoding raw video; hence, various motion estimation methods have been introduced to compress video efficiently. Motion estimation methods based on low bit-depth representations simplify the computation of the matching criterion and thus provide a small hardware footprint. In this paper, a hardware implementation of a two-bit-transform-based low-complexity motion estimation method using a local binary pattern approach is proposed. Image frames are represented in two-bit depth instead of full depth by using the local binary pattern as the binarization approach, and the binarization part of the hardware architecture is explained in detail. Experimental results demonstrate the difference between the proposed hardware architecture and the architectures of well-known low-complexity motion estimation methods in terms of important aspects such as resource utilization and energy and power consumption.
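The matching side of such methods can be sketched as follows: blocks are reduced to two bit-planes, and candidate blocks are compared with XOR and a bit count (the number of non-matching points) instead of a full-precision sum of absolute differences. The mean/standard-deviation thresholding below is a generic two-bit-transform stand-in, not the paper's LBP-based binarization.

```python
import numpy as np

def two_bit_transform(block):
    """Reduce a block to two bit-planes via two intensity thresholds."""
    mu, sigma = block.mean(), block.std()
    return block > mu, block > mu + sigma

def matching_cost(ref, cand):
    """Number of non-matching points across both bit-planes (XOR + count)."""
    r0, r1 = two_bit_transform(ref)
    c0, c1 = two_bit_transform(cand)
    return int(np.count_nonzero(r0 ^ c0) + np.count_nonzero(r1 ^ c1))

ref = np.tile(np.arange(8, dtype=float), (8, 1))  # a simple gradient block
shifted = np.roll(ref, 1, axis=1)                 # the same block moved by 1 px

cost_same = matching_cost(ref, ref)               # perfect match
cost_shift = matching_cost(ref, shifted)          # misaligned candidate
```

In hardware, the XOR-and-count step is what makes this cheap: it replaces per-pixel subtractions and absolute values with single-gate operations and a popcount.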

Keywords: binarization, hardware architecture, local binary pattern, motion estimation, two-bit transform

Procedia PDF Downloads 274
211 The Association of COL4A3 Variant rs55703767 With the Susceptibility to Diabetic Kidney Disease in Patients with Type 2 Diabetes Mellitus: Results from a Cohort Study

Authors: Zi-Han Li, Zi-Jun Sun, Dong-Yuan Chang, Li Zhu, Min Chen, Ming-Hui Zhao

Abstract:

Aims: A genome-wide association study (GWAS) reported that patients carrying the rs55703767 minor allele in COL4A3, the gene encoding the collagen type IV α3 chain, showed protection against diabetic kidney disease (DKD) in type 1 diabetes mellitus (T1DM). However, the role of rs55703767 in type 2 DKD has not been elucidated. The aim of the current study was to investigate the association between the COL4A3 variant rs55703767 and DKD risk in Chinese patients with type 2 diabetes mellitus (T2DM). Methods: This nested case-control study was performed on 1,311 patients who had had T2DM for at least 10 years, including 580 with DKD and 731 without. We genotyped all patients using the TaqMan SNP Genotyping Assay and analyzed the association between rs55703767 and DKD risk. Results: Genetic analysis revealed no significant difference between T2DM patients with and without DKD in the allele or genotype frequencies of rs55703767, and the effect of the variant was not hyperglycemia-specific. Conclusion: Our findings suggest that there is no detectable association between the COL4A3 variant rs55703767 and susceptibility to DKD in the Chinese T2DM population.
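The allele-frequency comparison underlying such a case-control analysis can be sketched with a Pearson chi-square test on a 2x2 table. The counts below are hypothetical and not taken from the study; they merely show the shape of the calculation.

```python
# Hedged sketch of a case-control allele association test. Allele
# counts here are hypothetical, not the study's genotype data.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table of allele counts:
                minor allele   major allele
    cases            a              b
    controls         c              d
    """
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical allele counts (not from the study):
chi2 = chi_square_2x2(210, 950, 270, 1192)
# A statistic below ~3.84 means p > 0.05 at 1 degree of freedom,
# i.e. no significant case-control difference in allele frequency.
print(chi2 < 3.84)
```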

Keywords: collagen type IV α3 chain, gene polymorphism, type 2 diabetes, diabetic kidney disease

Procedia PDF Downloads 72
210 Towards Long-Range Pixels Connection for Context-Aware Semantic Segmentation

Authors: Muhammad Zubair Khan, Yugyung Lee

Abstract:

Deep learning has recently attracted enormous attention in semantic image segmentation. Previously developed U-Net-inspired architectures rely on successive stride and pooling operations, leading to spatial data loss, and they fail to establish the long-range pixel connections needed to preserve context and reduce spatial loss in prediction. This article develops an encoder-decoder architecture with bi-directional LSTMs embedded in long skip connections and densely connected convolution blocks. The network non-linearly combines feature maps across the encoder-decoder paths to capture dependency and correlation between image pixels. Additionally, densely connected convolutional blocks are kept in the final encoding layer to reuse features and prevent redundant data sharing. The method applies batch normalization to reduce internal covariate shift in the data distributions. The empirical evidence shows a promising response for our method compared with other semantic segmentation techniques.
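The dense connectivity pattern mentioned above, where every layer receives the concatenation of all preceding feature maps so features are reused rather than recomputed, can be sketched framework-free. The toy "layers" below are stand-ins, not the paper's bi-LSTM network; a real implementation would apply convolutions to tensors.

```python
# Toy sketch of dense connectivity: each layer consumes the
# concatenation of every earlier feature map, and the block's output
# is the concatenation of all maps. "Feature maps" are plain lists.

def concat(feature_maps):
    return [v for fm in feature_maps for v in fm]

def dense_block(x, layers):
    features = [x]
    for layer in layers:
        features.append(layer(concat(features)))
    return concat(features)

# Two stand-in layers that just summarize their (growing) input:
double_sum = lambda fm: [2 * sum(fm)]
neg_sum = lambda fm: [-sum(fm)]

out = dense_block([1, 2], [double_sum, neg_sum])
print(out)  # [1, 2, 6, -9]
```

Note how the second layer sees the first layer's output alongside the raw input, which is exactly the feature-reuse property dense blocks exploit.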

Keywords: deep learning, semantic segmentation, image analysis, pixel connection, convolutional neural network

Procedia PDF Downloads 76
209 Selecting Answers for Questions with Multiple Answer Choices in Arabic Question Answering Based on Textual Entailment Recognition

Authors: Anes Enakoa, Yawei Liang

Abstract:

Question Answering (QA) is one of the most important and demanding tasks in the field of Natural Language Processing (NLP). In QA systems, the answer generation task produces a list of candidate answers to the user's question, of which only one is correct. Answer selection, one of the main components of QA, is concerned with selecting the best answer choice from the candidates suggested by the system. The selection process can be very challenging, especially in Arabic, due to the particularities of the language. To address this challenge, an approach is proposed for answering questions with multiple answer choices in Arabic QA systems based on Textual Entailment (TE) recognition. The developed approach employs a Support Vector Machine that considers lexical, semantic, and syntactic features in order to recognize the entailment between the generated hypotheses (H) and the text (T). A set of experiments was conducted for performance evaluation, and the proposed method reached an overall accuracy of 67.5% with a C@1 score of 80.46%. The obtained results are promising and demonstrate that the proposed method is effective for the TE recognition task.
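The C@1 measure reported above rewards a system for leaving uncertain questions unanswered rather than answering them wrongly. A minimal sketch follows; the 27-of-40 split is a hypothetical example chosen only because it matches the 67.5% accuracy figure, not the study's actual question counts.

```python
# C@1 (Penas & Rodrigo): accuracy that credits unanswered questions in
# proportion to the system's accuracy on the questions it did answer.

def c_at_1(n_correct, n_unanswered, n_total):
    """c@1 = (n_correct + n_unanswered * n_correct / n_total) / n_total"""
    return (n_correct + n_unanswered * n_correct / n_total) / n_total

# With nothing left unanswered, c@1 reduces to plain accuracy:
print(c_at_1(27, 0, 40))   # 0.675
# Abstaining on hard questions can lift the score above raw accuracy:
print(c_at_1(27, 8, 40))   # 0.81
```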

Keywords: information retrieval, machine learning, natural language processing, question answering, textual entailment

Procedia PDF Downloads 120
208 Mutations in the GJB2 Gene Are the Cause of an Important Number of Non-Syndromic Deafness Cases

Authors: Habib Onsori, Somayeh Akrami, Mohammad Rahmati

Abstract:

Deafness is the most common sensory disorder, with a frequency of about 1 in 1,000 in many populations. Mutations in the GJB2 (CX26) gene at the DFNB1 locus on chromosome 13q12 are associated with congenital hearing loss. Approximately 80% of congenital hearing loss cases are recessively inherited and 15% dominantly inherited. Mutations of the GJB2 gene, encoding the gap junction protein Connexin 26 (Cx26), are the most common cause of hereditary congenital hearing loss in many countries. This report presents two cases with different mutations from Iranian patients with bilateral hearing loss. DNA studies of the GJB2 gene were performed by PCR and sequencing. In one patient, direct sequencing of the gene showed a heterozygous T→C transition at nucleotide 604, resulting in a cysteine-to-arginine substitution at codon 202 (C202R) in the fourth transmembrane domain (TM4) of the protein. The analyses indicate that the C202R mutation appeared de novo in the proband, with a possible dominant effect (GenBank: KF638275). In the other patient, DNA sequencing revealed a compound heterozygous mutation (35delG, 363delC) in the Cx26 gene that is strongly associated with congenital non-syndromic hearing loss (NSHL). Screening for these mutations in individuals with hearing loss who attend genetic counseling centers before marriage or pregnancy is therefore recommended.
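The codon arithmetic behind the reported substitution can be checked in a few lines: nucleotide 604 is the first base of codon 202, and the Cys codon TGC becomes the Arg codon CGC. The mini codon table below covers only the two codons involved and is illustrative, not a full translation table.

```python
# Sketch of the c.604T>C -> C202R codon arithmetic.

CODON_TABLE = {"TGC": "Cys", "CGC": "Arg"}  # only the two codons involved

def codon_index(nt_position):
    """1-based nucleotide position -> 1-based codon number."""
    return (nt_position - 1) // 3 + 1

def apply_substitution(codon, offset, new_base):
    """Replace the base at a 0-based offset within a codon."""
    return codon[:offset] + new_base + codon[offset + 1:]

pos = 604
assert codon_index(pos) == 202       # nucleotide 604 falls in codon 202
offset = (pos - 1) % 3               # 0: it is the first base of that codon
mutant = apply_substitution("TGC", offset, "C")
print(CODON_TABLE["TGC"], "->", CODON_TABLE[mutant])  # Cys -> Arg
```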

Keywords: CX26, deafness, GJB2, mutation

Procedia PDF Downloads 458
207 Assessing Language Dominance in Mexican Deaf Signers with the Bilingual Language Profile (BLP)

Authors: E. Mendoza, D. Jackson-Maldonado, G. Avecilla-Ramírez, A. Mondaca

Abstract:

Assessing language proficiency is a major issue in psycholinguistic research. There are multiple tools that measure language dominance and proficiency in hearing bilinguals; however, this is not the case for Deaf bilinguals. Specifically, there are few, if any, assessment tools suited to describing the multilingual abilities of Mexican Deaf signers, and as a result, the linguistic characteristics of the Mexican Deaf population have been poorly described. This paper describes the changes needed to adapt the Bilingual Language Profile (BLP) to Mexican Sign Language (LSM) and written/oral Spanish. The BLP is a self-evaluation tool that has been adapted and translated to several oral languages, but not to sign languages. Lexical, syntactic, cultural, and structural changes were applied to the BLP. Thirty-five Mexican Deaf signers, all enrolled in higher education programs, participated in a pilot study. The BLP was presented online in written Spanish via Google Forms; no additional information in LSM was provided. Results show great heterogeneity, as is expected of Deaf populations, and the BLP seems to be a useful tool for creating a bilingual profile of the Mexican Deaf population. This is a first attempt to adapt a widely tested tool in bilingualism research to a sign language. Further modifications need to be done.

Keywords: deaf bilinguals, assessment tools, bilingual language profile, Mexican Sign Language

Procedia PDF Downloads 122
206 Estimating View-Through Ad Attribution from User Surveys Using Convex Optimization

Authors: Yuhan Lin, Rohan Kekatpure, Cassidy Yeung

Abstract:

In digital marketing, robust quantification of view-through attribution (VTA) is necessary for evaluating channel effectiveness. VTA occurs when a product purchase is aided by an ad but without an explicit click (e.g., a TV ad). The lack of a tracking mechanism makes VTA estimation challenging. The most prevalent VTA estimation techniques rely on post-purchase, in-product user surveys. User surveys enable the calculation of channel multipliers, the ratios of view-attributed to click-attributed purchases for each marketing channel. Channel multipliers thus provide a way to estimate the unknown VTA of a channel from its known click attribution. In this work, we use convex optimization to compute channel multipliers in a way that enables a mathematical encoding of the expected channel behavior. Large fluctuations in channel attributions often result from overfitting the calculations to user surveys; casting channel attribution as a convex optimization problem allows the introduction of constraints that limit such fluctuations. The result of our study is a distribution of channel multipliers across the entire marketing funnel, with important implications for marketing spend optimization. Our technique can be broadly applied to estimate ad effectiveness in a privacy-centric world that increasingly limits user tracking.
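A deliberately simplified stand-in for the paper's convex program: with a separable least-squares objective and box constraints on the multipliers, the optimum is simply the noisy survey ratio clipped to the box, which is enough to show how constraints damp fluctuations. The channel names, bounds, and numbers are illustrative only, not from the study.

```python
# Fit channel multipliers to noisy survey ratios under box constraints.
# For argmin_m sum_i (m_i - r_i)^2  s.t.  lo <= m_i <= hi, the solution
# separates per channel and is the clipped ratio.

def fit_multipliers(survey_ratios, lo=0.5, hi=3.0):
    return {ch: min(max(r, lo), hi) for ch, r in survey_ratios.items()}

def estimate_vta(multipliers, click_purchases):
    """VTA estimate per channel = multiplier * click-attributed purchases."""
    return {ch: multipliers[ch] * click_purchases[ch] for ch in multipliers}

ratios = {"display": 1.8, "video": 7.2, "social": 0.2}  # noisy survey ratios
m = fit_multipliers(ratios)
print(m)  # video and social pulled back inside [0.5, 3.0]
print(estimate_vta(m, {"display": 100, "video": 40, "social": 250}))
```

The full formulation in the paper presumably adds richer constraints (e.g., expected ordering across the funnel), which a generic convex solver would handle; the clipping here is just the closed form of the simplest box-constrained case.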

Keywords: digital marketing, survey analysis, operational research, convex optimization, channel attribution

Procedia PDF Downloads 133
205 An Analysis of Language Borrowing among Algerian University Students Using Online Facebook Conversations

Authors: Messaouda Annab

Abstract:

The rapid development of technology has created a context in which different languages and structures are used within the same conversations. This paper investigates the practice of language borrowing on a social media platform, namely Facebook, among Algerian students who speak Algerian Vernacular Arabic (AVA). In other words, this study explores how Algerian students have incorporated lexical English borrowings in their online conversations. The paper examines the relationships between language, culture, and identity in a multilingual group. The main objective is to determine the cultural and linguistic functions that borrowing fulfills in social media and to explain the possible factors underlying English borrowing. The study uses an online research method based on ten Facebook conversations, in the form of private messages, collected from Bachelor's and Master's students recruited from the English department at the University of Oum El-Bouaghi. The analysis of the data revealed that the social media platform provided users with opportunities to shift from one language to another, a practice noticeable in the students' online conversations. English borrowing was the most salient language practice alongside Arabic, the mother tongue of the chosen sample. The analysis also suggests that the participants are skilled in more than one language.

Keywords: borrowing, language performance, linguistic background, social media

Procedia PDF Downloads 128
204 Efficiency of Google Translate and Bing Translator in Translating Persian-to-English Texts

Authors: Samad Sajjadi

Abstract:

Machine translation is increasingly used by academic writers, especially students and researchers whose native language is not English. Numerous studies have been conducted on machine translation, but few investigations have assessed the accuracy of machine translation from Persian to English at the lexical, semantic, and syntactic levels. Using Groves and Mundt's (2015) model of error taxonomy, the current study evaluated Persian-to-English translations produced by two well-known online translators, Google Translate and Bing Translator. A total of 240 texts were randomly selected from different academic fields (law, literature, medicine, and mass media), with 60 texts per domain. All texts were rendered by the two translation systems and then by four human translators. All statistical analyses were performed using SPSS. The results indicated that Google translations were more accurate than those produced by Bing Translator, especially in the domains of medicine (lexical: 186 vs. 225; semantic: 44 vs. 48; syntactic: 148 vs. 264 errors) and mass media (lexical: 118 vs. 149; semantic: 25 vs. 32; syntactic: 110 vs. 220 errors). Nonetheless, both machines are reasonably accurate in Persian-to-English translation of lexicons and syntactic structures, particularly in mass media and medical texts.

Keywords: machine translations, accuracy, human translation, efficiency

Procedia PDF Downloads 42
203 Role of Endonuclease G in Exogenous DNA Stability in HeLa Cells

Authors: Vanja Misic, Mohamed El-Mogy, Yousef Haj-Ahmad

Abstract:

Endonuclease G (EndoG) is a well-conserved mitochondrio-nuclear nuclease with dual lethal and vital roles in the cell. The aim of our study was to examine whether EndoG exerts its nuclease activity on exogenous DNA substrates such as plasmid DNA (pDNA), given their importance in gene therapy applications. The effects of EndoG knockdown on pDNA stability and on the levels of encoded reporter gene expression were evaluated in cervical carcinoma HeLa cells. Transfection of pDNA vectors encoding short-hairpin RNAs (shRNAs) reduced the levels of EndoG mRNA and nuclease activity in HeLa cells. Under physiological circumstances, EndoG knockdown had no effect on the stability of pDNA or on the levels of encoded transgene expression as measured over a four-day time-course. However, when endogenous expression of EndoG was induced by an extrinsic stimulus, targeting EndoG with shRNA improved the perceived stability and transgene expression of the pDNA vectors. Therefore, EndoG is not a mediator of exogenous DNA clearance, but under non-physiological circumstances, it may non-specifically cleave intracellular DNA regardless of its origin. These findings make it unlikely that targeting EndoG is a viable strategy for improving the duration and level of transgene expression from non-viral DNA vectors in gene therapy.

Keywords: EndoG, silencing, exogenous DNA stability, HeLa cells

Procedia PDF Downloads 434
202 Encryption and Decryption of Nucleic Acid Using Deoxyribonucleic Acid Algorithm

Authors: Iftikhar A. Tayubi, Aabdulrahman Alsubhi, Abdullah Althrwi

Abstract:

This work provides structural biologists with a single source of high-quality cryptography for deoxyribonucleic acid (DNA) sequence text. We present an intuitive, well-organized, and user-friendly web interface that allows users to encrypt and decrypt DNA sequence text with a dedicated algorithm and to store the encrypted information securely. The interfaces created in this project aim to satisfy the demands of the scientific community by fully encrypting DNA sequences within the website. We adopted a methodology based on C# and ASP.NET, which is efficient and secure, and the resulting tool can encrypt large quantities of sequence data efficiently, letting users navigate between encodings and stored texts according to their interests. The algorithm also protects the DNA sequence from change, whether an alteration or an error occurred during data transfer, by checking the integrity of the sequence data when it is accessed.
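The abstract does not specify the cipher, so the following is only a minimal illustration of the idea of encrypting DNA sequence text (in Python rather than the project's C#): bases map to 2-bit codes and are XORed with a repeating key stream, so the ciphertext is itself a DNA string. A real deployment would use a vetted cipher such as AES instead of this toy scheme.

```python
# Toy DNA-text cipher: A/C/G/T -> 2-bit codes, XORed with a repeating
# key stream of bases. XOR is its own inverse, so one routine serves
# for both encryption and decryption. Illustrative only; not the
# (unspecified) algorithm from the abstract.

BASE_TO_BITS = {"A": 0, "C": 1, "G": 2, "T": 3}
BITS_TO_BASE = {v: k for k, v in BASE_TO_BITS.items()}

def encrypt(seq, key):
    return "".join(
        BITS_TO_BASE[BASE_TO_BITS[b] ^ BASE_TO_BITS[key[i % len(key)]]]
        for i, b in enumerate(seq))

def decrypt(cipher, key):
    return encrypt(cipher, key)  # XOR cipher: decryption = encryption

plain = "ATGCGTAC"
key = "GATC"          # hypothetical key
cipher = encrypt(plain, key)
assert decrypt(cipher, key) == plain
print(cipher)
```

An integrity check of the kind the abstract describes would typically accompany this with a message authentication code (e.g., an HMAC over the ciphertext), verified on each access.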

Keywords: algorithm, ASP.NET, DNA, encrypt, decrypt

Procedia PDF Downloads 202
201 Analytical Study of Infidelity in Translation with Reference to Literary Texts

Authors: Ruqaya Sabeeh Al-Taie

Abstract:

The present study strives to answer the question of whether translation is sometimes a betrayal of the original. This question emanates from the Italian phrase traduttore, traditore ('translator, traitor'), which poses a problem for all translators, since lexical items, linguistic structures, and cultural terms sometimes have no literal equivalents across languages. To answer the debated question of fidelity and infidelity in translation, and to ascertain the implication of the Italian phrase, the researcher collected different kinds of parallel texts, which were analyzed to examine the reasons behind translators' infidelity in translation in general, and in translating literary texts in particular, and how infidelity can be intended and/or unintended by the translator. It was found that there are four reasons behind intended infidelity: deliberate adaptation to fit the original, modification for specific purposes, the translator's desire, and unethical translation in favor of government or interest-group monopolization. There are likewise four motives behind unintended infidelity: the translator's misunderstanding, the translator's sectarianism, intralingual translation, and censorship for political, social, and religious purposes. As a result, the inevitable linguistic and cultural dissimilarities between languages, for instance between English and Arabic, make absolute fidelity impossible and infidelity, in both its intended and unintended forms, unavoidable.

Keywords: deliberate adaptation, intended infidelity, literary translation, unintended infidelity

Procedia PDF Downloads 415
200 Pre-Service Teachers’ Reasoning and Sense Making of Variables

Authors: Olteanu Constanta, Olteanu Lucian

Abstract:

Researchers note that algebraic reasoning and sense making are essential for building conceptual knowledge in school mathematics. Consequently, pre-service teachers' own reasoning and sense making are useful for fostering and developing students' algebraic reasoning and sense making. This article explores the forms of reasoning and sense making that pre-service mathematics teachers exhibit and use when analysing problem-posing tasks focused on first-degree equations. Our research questions concern the characteristics of the problem-posing tasks used for reasoning and sense making about first-degree equations, as well as the characteristics of pre-service teachers' reasoning and sense making in problem-posing tasks. The analyses are grounded in a post-structuralist philosophical perspective and variation theory. Sixty-six pre-service primary teachers participated in the study. The results show that the characteristics of reasoning, both in the problem-posing tasks and among the pre-service teachers, are selecting, exploring, reconfiguring, encoding, abstracting, and connecting, while the characteristics of sense making are recognition, relationships, profiling, comparing, laddering, and verifying. Besides this, the connection between reasoning and sense making is rich in lines of flight in the problem-posing tasks, while it is rich in lines of rupture for the pre-service teachers.

Keywords: first-degree equations, problem posing, reasoning, rhizomatic assemblage, sense-making, variation theory

Procedia PDF Downloads 81