Search results for: target hiding
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2711

2351 Bionaut™: A Breakthrough Robotic Microdevice to Treat Non-Communicating Hydrocephalus in Both Adult and Pediatric Patients

Authors: Suehyun Cho, Darrell Harrington, Florent Cros, Olin Palmer, John Caputo, Michael Kardosh, Eran Oren, William Loudon, Alex Kiselyov, Michael Shpigelmacher

Abstract:

Bionaut Labs, LLC is developing a minimally invasive robotic microdevice designed to treat non-communicating hydrocephalus in both adult and pediatric patients. The device utilizes biocompatible microsurgical particles (Bionaut™) that are specifically designed to safely and reliably perform accurate fenestration(s) in the 3rd ventricle, aqueduct of Sylvius, and/or trapped intraventricular cysts of the brain in order to re-establish normal cerebrospinal fluid flow dynamics and thereby balance and/or normalize intra/intercompartmental pressure. The Bionaut™ is navigated to the target via CSF or brain tissue in a minimally invasive fashion with precise control using real-time imaging. Upon reaching the pre-defined anatomical target, the external driver directs the specific microsurgical action required to achieve the surgical goal. Notable features of the proposed protocol are: i) Bionaut™ access to the intraventricular target follows a clinically validated endoscopy trajectory that may not be feasible via ‘traditional’ rigid endoscopy; ii) the treatment is microsurgical, and no foreign materials are left behind post-procedure; iii) the Bionaut™ is an untethered device that is navigated through the subarachnoid and intraventricular compartments of the brain, following pre-designated non-linear trajectories determined by the safest anatomical and physiological path; iv) the overall protocol involves minimally invasive delivery and post-operational retrieval of the surgical Bionaut™. The approach is expected to be suitable for treating pediatric patients 0-12 months old as well as adult patients with obstructive hydrocephalus who fail traditional shunts or are eligible for endoscopy. Current progress, including platform optimization, Bionaut™ control, real-time imaging, and in vivo safety studies of the Bionauts™ in large animals, specifically the spine and brain of ovine models, will be discussed.

Keywords: Bionaut™, cerebrospinal fluid, CSF, fenestration, hydrocephalus, micro-robot, microsurgery

Procedia PDF Downloads 145
2350 Exo-III Assisted Amplification Strategy through Target Recycling of Hg²⁺ Detection in Water: A GNP Based Label-Free Colorimetry Employing T-Rich Hairpin-Loop Metallobase

Authors: Abdul Ghaffar Memon, Xiao Hong Zhou, Yunpeng Xing, Ruoyu Wang, Miao He

Abstract:

Due to the deleterious environmental and health effects of Hg²⁺ ions, researchers have developed various online detection methods apart from traditional analytical tools. Biosensors, especially labeled, label-free, colorimetric, and optical sensors, have advanced sensitive detection. However, a gap remains in ultrasensitive quantification, as noise interferes significantly, especially in AuNP-based label-free colorimetry. This study reports an amplification strategy using the Exo-III enzyme for target recycling of Hg²⁺ ions in a T-rich hairpin-loop metallobase label-free colorimetric nanosensor, with improved sensitivity using unmodified gold nanoparticles (uGNPs) as an indicator. Two T-rich metallobase hairpin-loop structures, 5’-CTT TCA TAC ATA GAA AAT GTA TGT TTG-3’ (HgS1) and 5’-GGC TTT GAG CGC TAA GAA ATA GCG CTC TTT G-3’ (HgS2), were tested in the study. The thermodynamic properties of HgS1 and HgS2 were calculated using online tools (http://biophysics.idtdna.com/cgi-bin/meltCalculator.cgi). Lab-scale synthesized uGNPs were utilized in the analysis. The DNA sequences have T-rich bases at both tail ends, which in the presence of Hg²⁺ form T-Hg²⁺-T mismatches, promoting the formation of dsDNA. Subsequent Exo-III incubation enables the enzyme to cleave mononucleotides stepwise from the 3’ end until the structure becomes single-stranded. These ssDNA fragments then adsorb onto the surface of AuNPs and protect them from salt-induced aggregation. The visible change in color between blue (aggregation state in the absence of Hg²⁺) and pink (dispersion state in the presence of Hg²⁺ and adsorption of ssDNA fragments) can be observed and analyzed through UV spectrometry.
An ultrasensitive quantitative nanosensor employing Exo-III-assisted target recycling of mercury ions through label-free colorimetry, with nanomolar detection using uGNPs, has been achieved and is under further optimization to reach the picomolar range by avoiding the influence of the environmental matrix. The proposed strategy will contribute toward uGNP-based ultrasensitive, rapid, onsite, label-free colorimetric detection.

Keywords: colorimetric, Exo-III, gold nanoparticles, Hg²⁺ detection, label-free, signal amplification

Procedia PDF Downloads 289
2349 In Silico Analysis of Salivary miRNAs to Identify the Diagnostic Biomarkers for Oral Cancer

Authors: Andleeb Zahra, Itrat Rubab, Sumaira Malik, Amina Khan, Muhammad Jawad Khan, M. Qaiser Fatmi

Abstract:

Oral squamous cell carcinoma (OSCC) is one of the most common cancers worldwide. Recent studies have highlighted the role of miRNAs in disease pathology, indicating their potential use as an early diagnostic tool. miRNAs are small, single-stranded, non-coding RNAs that regulate gene expression by downregulating target mRNAs. miRNAs play important roles in modifying various cellular processes such as cell growth, differentiation, apoptosis, and immune response. Dysregulated expression of miRNAs affects cell growth, and miRNAs may function as tumor suppressors or oncogenes in various cancers. Objectives: The main objectives of this study were to characterize the extracellular miRNAs involved in oral cancer (OC) to assist early detection, as well as to propose a list of genes that can potentially be used as biomarkers of OC. We used gene expression data from microarrays already available in the literature. Materials and Methods: In the first step, a total of 318 miRNAs involved in oral carcinoma were shortlisted, followed by the prediction of their target genes. Simultaneously, the differentially expressed genes (DEGs) of oral carcinoma from all experiments were identified. The genes common to the list of experimentally validated DEGs of OC and the target genes of each miRNA were then identified. These common genes are the targets of the specific miRNAs involved in OC. Finally, a list of genes was generated that may be used as biomarkers of OC. Results and Conclusion: In the results, we included several cancer pathways to show the change in gene expression under the control of specific miRNAs. Ingenuity Pathway Analysis (IPA) provided a list of major biomarkers such as CDH2 and CDK7, and functional enrichment analysis identified the role of miRNAs in major pathways affected by cancer, such as the cell adhesion molecules pathway. We observed that at least 25 genes are regulated by the maximum number of miRNAs, and thereby they can be used as biomarkers of OC.
To better understand the role of miRNAs with respect to their target genes, further experiments are required, and our study provides a platform to better understand the miRNA-OC relationship at the genomic level.
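The core intersection step described in this abstract (miRNA target genes ∩ DEGs, then ranking genes by how many miRNAs regulate them) can be sketched as follows; the gene and miRNA names here are illustrative placeholders, not the study's actual lists.

```python
# Sketch of the biomarker-selection logic: intersect predicted miRNA targets
# with differentially expressed genes (DEGs), then count how many miRNAs
# regulate each common gene. All names below are hypothetical toy data.
from collections import Counter

# Predicted target genes for each shortlisted miRNA (hypothetical).
mirna_targets = {
    "miR-21":  {"CDH2", "CDK7", "PTEN"},
    "miR-155": {"CDH2", "TP53"},
    "miR-31":  {"CDK7", "CDH2"},
}

# Differentially expressed genes in OSCC (hypothetical).
degs = {"CDH2", "CDK7", "TP53", "EGFR"}

# Count, over all miRNAs, how often each DEG appears as a predicted target.
counts = Counter(
    gene
    for targets in mirna_targets.values()
    for gene in targets & degs
)

# Genes hit by at least two miRNAs become candidate biomarkers.
biomarkers = sorted(g for g, n in counts.items() if n >= 2)
print(biomarkers)  # ['CDH2', 'CDK7']
```

In the study this threshold is effectively "the maximum number of miRNAs"; the cutoff of two here is only for the toy example.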

Keywords: biomarkers, gene expression, miRNA, oral carcinoma

Procedia PDF Downloads 349
2348 Audio-Visual Co-Data Processing Pipeline

Authors: Rita Chattopadhyay, Vivek Anand Thoutam

Abstract:

Speech is the most natural means of communication, allowing us to quickly exchange feelings and thoughts. Quite often, people can communicate orally but cannot interact or work with computers or devices. It is easier and quicker to give speech commands than to type commands to computers, and likewise easier to listen to audio played from a device than to read output from it. With robotics being an emerging market with applications in warehouses, the hospitality industry, consumer electronics, assistive technology, etc., speech-based human-machine interaction is emerging as a lucrative feature for robot manufacturers. Considering this, the objective of this paper is to design an “Audio-Visual Co-Data Processing Pipeline.” This pipeline integrates automatic speech recognition, a natural language model for text understanding, object detection, and text-to-speech modules. Many deep learning models exist for each of these modules, but OpenVINO Model Zoo models are used because the OpenVINO toolkit covers both computer vision and non-computer vision workloads across Intel hardware, maximizes performance, and accelerates application development. A speech command is given as input that specifies the target objects to be detected and the start and end times of the interval to extract from the video. Speech is converted to text using the QuartzNet automatic speech recognition model. A summary is extracted from the text using the natural language model Generative Pre-Trained Transformer-3 (GPT-3). Based on the summary, essential frames are extracted from the video, and the You Only Look Once (YOLO) object detection model detects objects in these extracted frames. Frame numbers that contain target objects (the objects specified in the speech command) are saved as text.
Finally, this text (the frame numbers) is converted to speech using a text-to-speech model and played from the device. This project is developed for the 80 YOLO labels, and the user can extract frames based on one or two target labels. The pipeline can easily be extended to more than two target labels by making appropriate changes in the object detection module. The project supports four different speech command formats by including sample examples in the prompt used by the GPT-3 model. Based on user preference, one can introduce a new speech command format by including examples of that format in the GPT-3 prompt. This pipeline can be used in many projects, such as human-machine interfaces, human-robot interaction, and surveillance through speech commands. Any object detection project can be upgraded using this pipeline so that one can give speech commands and have the output played from the device.
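The frame-selection step of the pipeline above (keep frames inside the commanded interval that contain at least one target label) can be sketched as follows; the per-frame detection results stand in for real YOLO output and are hypothetical.

```python
# Sketch of the frame-selection stage: given per-frame detected labels and the
# target labels parsed from the speech command, return the matching frame
# numbers within the requested interval. Detection data below is made up.
def frames_with_targets(detections, target_labels, start=0, end=None):
    """detections maps frame number -> set of detected label names."""
    hits = []
    for frame, labels in sorted(detections.items()):
        if frame < start or (end is not None and frame > end):
            continue  # outside the interval given in the speech command
        if set(target_labels) & labels:  # at least one target label detected
            hits.append(frame)
    return hits

# Hypothetical detections for four sampled frames.
dets = {0: {"person"}, 5: {"dog", "person"}, 9: {"car"}, 12: {"dog"}}
print(frames_with_targets(dets, ["dog"], start=0, end=10))  # [5]
```

In the real pipeline these frame numbers would then be passed to the text-to-speech module.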

Keywords: OpenVINO, automatic speech recognition, natural language processing, object detection, text to speech

Procedia PDF Downloads 55
2347 High Motivational Salient Face Distractors Slowed Target Detection: Evidence from Behavioral Studies

Authors: Rashmi Gupta

Abstract:

Rewarding stimuli capture attention involuntarily as a result of an association process that develops quickly during value learning, referred to as reward- or value-driven attentional capture. It is essential to compare reward with punishment processing to get a full picture of value-based modulation in visual attention processing. Hence, the present study manipulated both valence/value (reward as well as punishment) and motivational salience (probability of an outcome: high vs. low) together. A series of experiments was conducted, each with two phases. In phase 1, participants learned to associate specific face stimuli with a high or low probability of winning or losing points. In phase 2, these conditioned stimuli served as distractors or primes in a speeded letter search task. Faces with high versus low outcome probability, regardless of valence, slowed the search for targets (specifically targets in the left visual field), suggesting that the costs to performance on non-emotional cognitive tasks were driven by the motivational salience (high vs. low) associated with the stimuli rather than by their valence (gain vs. loss). It also suggests that the processing of motivationally salient stimuli is right-hemisphere biased. Together, the results of these studies strengthen the notion that our visual attention system is more sensitive to motivational salience than to valence, which is termed here motivation-driven attentional capture.

Keywords: attention, distractors, motivational salience, valence

Procedia PDF Downloads 200
2346 Quantifying the Protein-Protein Interaction between the Ion-Channel-Forming Colicin A and the Tol Proteins by Potassium Efflux in E. coli Cells

Authors: Fadilah Aleanizy

Abstract:

Colicins are a family of bacterial toxins that kill Escherichia coli and other closely related species. The mode of action of colicins involves binding to an outer membrane receptor and translocation across the cell envelope, leading to cytotoxicity through specific targets. The mechanisms of colicin cytotoxicity include non-specific endonuclease activity and depolarization of the cytoplasmic membrane by pore-forming activity. For Group A colicins, translocation requires an interaction between the N-terminal domain of the colicin and a series of membrane-bound and periplasmic proteins known as the Tol system (TolB, TolR, TolA, TolQ, and Pal), and the active domain must be translocated through the outer membrane. Protein-protein interactions are intrinsic to virtually every cellular process. The transient protein-protein interactions of the colicin include interactions with much more complicated assemblies during colicin translocation across the cellular membrane to its target. The potassium release assay detects variation in the K+ content of bacterial cells (K+in). This assay measures the effect of pore-forming colicins such as ColA on an indicator organism by monitoring, with a K+-selective electrode, the changes in K+ concentration in the external medium (K+out) caused by cell killing. One of the goals of this work is to employ a quantifiable in vivo method to identify which Tol proteins are most implicated in the interaction with colicin A as it is translocated to its target.

Keywords: K+ efflux, Colicin A, Tol-proteins, E. coli

Procedia PDF Downloads 383
2345 The Effect of Hydroxyl Ethyl Cellulose (HEC) and Hydrophobically-Modified Alkali Soluble Emulsions (HASE) on the Properties and Quality of Water Based Paints

Authors: Haleden Chiririwa, Sandile S. Gwebu

Abstract:

The coatings industry is a million-dollar business that is easy and inexpensive to set up, yet it is growing very slowly in developing countries; this study developed a paint formulation that gives better quality and good application properties. The effects of rheology modifiers, i.e., the non-ionic polymers hydrophobically-modified ethoxylated urethanes (HEUR), the anionic polymers hydrophobically-modified alkali soluble emulsions (HASE), and hydroxyl ethyl cellulose (HEC), on the quality and properties of water-based paints were investigated. HEC provides the in-can viscosity and increases open working time, HASE improves application properties like spatter resistance and brush loading, and HEUR provides excellent scrub resistance. Four paint recipes were prepared using different thickeners: HEC, HASE (Carbopol), and cellulose nitrate; the fourth formulation was thickened with a combination of HASE and HEC, aiming to improve quality while reducing cost. The four samples underwent quality tests such as viscosity, sag resistance, volatile matter, tinter effect, drying times, hiding power, scrub resistance, and stability on storage. Environmental factors were incorporated in the attempt to formulate an economical and green product. Hydroxyl ethyl cellulose and cellulose nitrate gave high quality and good paint properties. HEC and cellulose nitrate showed stability on storage, whereas the Carbopol thickener was very unstable.

Keywords: properties, thickeners, rheology modifiers, water based paints

Procedia PDF Downloads 246
2344 Anti-Forensic Countermeasure: An Examination and Analysis Extended Procedure for Information Hiding of Android SMS Encryption Applications

Authors: Ariq Bani Hardi

Abstract:

Smartphone technology is advancing very rapidly across various fields. One of the mobile operating systems that dominates the smartphone market today is Android by Google. Unfortunately, the expansion of mobile technology is misused by criminals to hide the information that they store or exchange with each other, making it more difficult for law enforcement to prove crimes in the judicial process (anti-forensics). One technique used to hide information is encryption, such as the use of SMS encryption applications. A mobile forensic examiner or investigator should have a countermeasure technique prepared for when such applications are found during the investigation process. This paper discusses an extended procedure for when the investigator finds unreadable SMS messages in Android evidence because of encryption. To define the extended procedure, we created and analyzed a dataset of Android SMS encryption applications. The dataset was grouped by application characteristics related to communication permissions, as well as the availability of source code and documentation of the encryption scheme. Permissions indicate how applications may exchange data and keys. Availability of the source code and encryption-scheme documentation can show which cryptographic algorithm is used, the key length, how key generation, key exchange, and encryption/decryption are performed, and other related information. The output of this paper is an extended, or alternative, procedure for the examination and analysis process in Android digital forensics. It can help investigators who are stymied by SMS encryption during examination and analysis: what steps should the investigator take so that they still have a chance of recovering the encrypted SMS in Android evidence?
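The dataset grouping described above (triage by communication permissions and source-code availability) can be sketched as a simple decision procedure; the app names, permission strings, and triage outcomes below are illustrative assumptions, not the paper's actual dataset or procedure.

```python
# Sketch of the app-triage idea: characterize each SMS-encryption app by its
# permissions and source availability, then pick an examination route.
# All apps, attributes, and route labels are hypothetical.
apps = [
    {"name": "AppA", "permissions": {"SEND_SMS", "INTERNET"}, "source_open": True},
    {"name": "AppB", "permissions": {"SEND_SMS"}, "source_open": False},
    {"name": "AppC", "permissions": {"SEND_SMS", "INTERNET"}, "source_open": False},
]

def triage(app):
    # Open source: the cipher and key handling can be read directly.
    if app["source_open"]:
        return "analyze published encryption scheme"
    # An INTERNET permission suggests keys or data may cross the network.
    if "INTERNET" in app["permissions"]:
        return "inspect network key exchange"
    # Otherwise keys most likely stay on the device.
    return "attempt local key recovery"

groups = {app["name"]: triage(app) for app in apps}
print(groups)
```

The point is only that permissions and documentation availability partition the dataset into distinct examination routes, as the paper's grouping does.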

Keywords: anti-forensic countermeasure, SMS encryption android, examination and analysis, digital forensic

Procedia PDF Downloads 113
2343 The Processing of Context-Dependent and Context-Independent Scalar Implicatures

Authors: Liu Jia’nan

Abstract:

Default accounts hold that there exists a kind of scalar implicature which can be processed without context and which enjoys a psychological privilege over other scalar implicatures that depend on context. In contrast, Relevance Theorists regard context as indispensable, because all scalar implicatures have to meet the requirement of relevance in discourse. However, in Katsos' experiments the results showed that although adults quantitatively rejected under-informative utterances with lexical scales (context-independent) and ad hoc scales (context-dependent) at almost the same rate, they still regarded violations of utterances with lexical scales as much more severe than those with ad hoc scales. Neither the default account nor Relevance Theory can fully explain this result. This result therefore raises two questions: (1) Is it possible that the strange discrepancy is due to factors other than the generation of the scalar implicature? (2) Are the ad hoc scales truly formed under the possible influence of mental context? Do the participants generate scalar implicatures with ad hoc scales, instead of just comparing semantic differences among target objects in the under-informative utterance? In our Experiment 1, question (1) will be answered by a replication of Katsos' Experiment 1. Test materials will be shown in PowerPoint in the form of pictures, and each procedure will be conducted under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2). The picture materials will be transformed into literal words in DMDX, and the target sentence will be shown word-by-word to participants in the soundproof room of our lab. Reading times for the target parts, i.e., the words containing scalar implicatures, will be recorded.
We presume that in the lexical-scale group, a standardized pragmatic mental context will help generate the scalar implicature once the scalar word occurs, leading participants to expect the upcoming words to be informative. Thus, if the new input after the scalar word is under-informative, more time will be needed for the extra semantic processing. However, in the ad hoc-scale group, the scalar implicature may hardly be generated without the support of a fixed mental context for the scale. Thus, whether the new input is informative or not will not matter, and the reading times of the target parts will be the same in informative and under-informative utterances. The mind may be a dynamic system in which many factors co-occur. If Katsos' experimental result is reliable, will it shed light on the interplay of default accounts and context factors in scalar implicature processing? Based on our experiments, we may be able to assume that no single dominant processing paradigm is plausible. Furthermore, in the processing of scalar implicatures, the semantic and pragmatic interpretations may be made in a dynamic interplay in the mind. For lexical scales, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also let a possible default or standardized paradigm override the role of context. However, the objects in an ad hoc scale are not usually treated as scale members in mental context, and thus the lexical-semantic associations of the objects may prevent the pragmatic reading from generating the scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading gain privilege and generate the scalar implicature.

Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing

Procedia PDF Downloads 293
2342 Readability Facing the Irreducible Otherness: Translation as a Third Dimension toward a Multilingual Higher Education

Authors: Noury Bakrim

Abstract:

From the point of view of language morphodynamics, the interpretative readability of the text-result (the stasis) is not the external hermeneutics of its various potential reading events but the paradigmatic, semantic immanence of its dynamics. In other words, interpretative readability articulates the potential tension between projection (the intentionality of the discursive event) and the result (readability within the syntagmatic stasis). We therefore consider that translation represents much more a metalinguistic conversion of neurocognitive bilingual sub-routines and modular relations than a semantic equivalence. Furthermore, actualizing readability (the process of rewriting a target text within a target language/genre) builds upon the descriptive level between the generative syntax/semantic form and its paradigmatic potential translatability. Translation corpora reveal evidence of a certain focus on the positivist stasis of the source text at the expense of its interpretative readability. For instance, Fluchere's brilliant translation of Miller's Tropic of Cancer into French unconsciously realizes an inversion of the hierarchical relations between Life Thought and Fable: from Life Thought (fable) into Fable (Life Thought). We could regard Bernard Kreiss's translation of Canetti's work die englischen Jahre (les annees anglaises) as another inversion, of the historical scale, from individual history into Hegelian history. In order to describe and test both the translation process and its result, we focus on pedagogical practice, which enables various principles grounded in interpretative/actualizing readability. Henceforth, establishing the analytical uttering dynamics of the source text could be widened by other practices. The reversibility test (target - source text) or comparison with a second translation in a third language (tertium comparationis A/B and A/C) points to the evidence of an impossible event.
Therefore, it does not imply an idealistic/absolute uttering source but the irreducible/non-reproducible intentionality of its production event within the experience of world/discourse. The aim of this paper is to conceptualize translation as the tension between interpretative and actualizing readability in a new approach grounded in the morphodynamics of language and translatability (mainly into French) within literary and non-literary texts, articulating theoretical and described pedagogical corpora.

Keywords: readability, translation as deverbalization, translation as conversion, Tertium Comparationis, uttering actualization, translation pedagogy

Procedia PDF Downloads 144
2341 Using Authentic and Instructional Materials to Support Intercultural Communicative Competence in ELT

Authors: Jana Beresova

Abstract:

The paper presents a study carried out in 2015-2016 within the national research scheme VEGA 1/0106/15, based on theoretical research and empirical verification of the concept of intercultural communicative competence. It focuses on the current conception of target-language teaching compatible with the Common European Framework of Reference for Languages: Learning, teaching, assessment. Our research revealed how the concept of intercultural communicative competence was perceived by secondary-school teachers of English in Slovakia before they were intensively trained. The intensive workshops were based on the use of both authentic and instructional materials, with the goal of supporting interculturally oriented language teaching aimed at challenging thinking. The former conception, which supported the development of students' linguistic knowledge and the use of the target language to obtain information about the culture of the country whose language the learners were studying, was expanded by a meaning-making framework that views language as a typical means by which culture is mediated. The goal of the workshops was to help English teachers better understand the concept of intercultural communicative competence, combining theory and practice optimally. The results of the study will be presented and analysed, providing particular recommendations for language teachers and suggesting changes to the National Educational Programme from which English learners should benefit in their future studies or professional careers.

Keywords: authentic materials, English language teaching, instructional materials, intercultural communicative competence

Procedia PDF Downloads 245
2340 Triple Modulation on Wound Healing in Glaucoma Surgery Using Mitomycin C and Ologen Augmented with Anti-Vascular Endothelial Growth Factor

Authors: Reetika Sharma, Lalit Tejwani, Himanshu Shekhar, Arun Singhvi

Abstract:

Purpose: To describe a novel trabeculectomy technique targeting triple modulation of wound healing to increase the overall success rate. Method: Ten eyes of 10 patients underwent trabeculectomy with subconjunctival mitomycin C (0.4 mg/ml for 4 minutes) application combined with Ologen implantation subconjunctivally and subsclerally. Five of these patients underwent additional phacoemulsification with intraocular lens implantation. The Ologen implant was wetted with 0.1 ml bevacizumab. Results: All eyes achieved target intraocular pressure (IOP), which was maintained through one year of follow-up. Two patients needed anterior chamber reformation on day two post-surgery. One patient needed cataract surgery four months after surgery and achieved target intraocular pressure on two topical antiglaucoma medicines. Conclusion: Vascular endothelial growth factor (VEGF) concentration has been seen to increase in the aqueous humor after filtration surgery. Ologen implantation aids collagen remodelling, provides an antifibroblastic response, and acts as a spacer. Bevacizumab-augmented Ologen, in addition, targets the increased VEGF and helps decrease scarring. Anti-VEGF-augmented Ologen in trabeculectomy with mitomycin C (MMC) hence appears to offer encouraging short-term intraocular pressure control.

Keywords: ologen, anti-VEGF, trabeculectomy, scarring

Procedia PDF Downloads 164
2339 A pH-Activatable Nanoparticle Self-Assembly Triggered by 7-Amino Actinomycin D Demonstrating Superior Tumor Fluorescence Imaging and Anticancer Performance

Authors: Han Xiao

Abstract:

The development of nanomedicines has recently achieved several breakthroughs in the field of cancer treatment; however, the biocompatibility and targeted burst release of these medications remain a limitation, which leads to serious side effects and significantly narrows the scope of their applications. Here, the self-assembly of intermediate filament protein (IFP) peptides was triggered by the hydrophobic cationic drug 7-amino actinomycin D (7-AAD) to synthesize pH-activatable nanoparticles (NPs) that can simultaneously locate tumors and produce antitumor effects. The designed IFP peptide includes a targeting peptide (arginine–glycine–aspartate), a negatively charged region, and an α-helix sequence. It encapsulates 7-AAD molecules through the formation of hydrogen bonds and hydrophobic interactions in a one-step method. 7-AAD molecules, which have excellent near-infrared fluorescence properties, can be delivered into tumor cells by the NPs in a targeted manner and released immediately in the acidic environments of tumors and endosomes/lysosomes, ultimately inducing cytotoxicity by arresting the tumor cell cycle via DNA intercalation. Notably, tail-vein injection of the IFP/7-AAD NPs demonstrated not only high tumor-targeted imaging potential but also strong antitumor therapeutic effects in vivo. The proposed strategy may be used in the delivery of cationic antitumor drugs for precise imaging and cancer therapy.

Keywords: 7-amino actinomycin D, intermediate filament protein, nanoparticle, tumor image

Procedia PDF Downloads 110
2338 A Computational Investigation of Potential Drugs for Cholesterol Regulation to Treat Alzheimer’s Disease

Authors: Marina Passero, Tianhua Zhai, Zuyi (Jacky) Huang

Abstract:

Alzheimer’s disease has become a major public health issue, as indicated by the increasing number of Americans living with it. After decades of extensive research, only seven drugs have been approved by the Food and Drug Administration (FDA) to treat Alzheimer’s disease. Five of these drugs were designed to treat the dementia symptoms, and only two (Aducanumab and Lecanemab) target the progression of Alzheimer’s disease, especially the accumulation of amyloid-β plaques. However, the accelerated approvals of both Aducanumab and Lecanemab drew controversy, especially concerns about the safety and side effects of these two drugs. There is still an urgent need for further drug discovery targeting the biological processes involved in the progression of Alzheimer’s disease. Excessive cholesterol has been found to accumulate in the brains of those with Alzheimer’s disease. Cholesterol can be synthesized in both the blood and the brain, but the majority of biosynthesis in the adult brain takes place in astrocytes; cholesterol is then transported to the neurons via ApoE. The blood-brain barrier separates cholesterol metabolism in the brain from that in the rest of the body. Various proteins contribute to the metabolism of cholesterol in the brain and offer potential targets for Alzheimer’s treatment. In astrocytes, SREBP cleavage-activating protein (SCAP) binds to Sterol Regulatory Element-binding Protein 2 (SREBP2) to transport the complex from the endoplasmic reticulum to the Golgi apparatus. Cholesterol is secreted out of the astrocytes by the ATP-Binding Cassette A1 (ABCA1) transporter. Lipoprotein receptors such as triggering receptor expressed on myeloid cells 2 (TREM2) internalize cholesterol into microglia, while lipoprotein receptors such as low-density lipoprotein receptor-related protein 1 (LRP1) internalize cholesterol into neurons.
Cytochrome P450 Family 46 Subfamily A Member 1 (CYP46A1) converts excess cholesterol to 24S-hydroxycholesterol (24S-OHC). Cholesterol has been shown to directly affect the production of amyloid-beta and tau proteins. The addition of cholesterol to the brain promotes the activity of beta-site amyloid precursor protein cleaving enzyme 1 (BACE1), secretase, and amyloid precursor protein (APP), all of which aid in amyloid-beta production. The reduction of cholesterol esters in the brain has been found to reduce phosphorylated tau levels in mice. In this work, a computational pipeline was developed to identify the protein targets involved in cholesterol regulation in the brain and, further, to identify chemical compounds as inhibitors of a selected protein target. Since extensive evidence shows a strong correlation between brain cholesterol regulation and Alzheimer’s disease, a detailed literature review of genes and pathways related to brain cholesterol synthesis and regulation was first conducted. An interaction network was then built for those genes so that the top gene targets could be identified. The involvement of these genes in Alzheimer’s disease progression was discussed, followed by an investigation of existing clinical trials for those targets. A ligand-protein docking program was finally developed to screen 1.5 million chemical compounds against the selected protein target, and a machine learning program was developed to evaluate and predict the binding interaction between the chemical compounds and the protein target. The results from this work pave the way for further drug discovery to regulate brain cholesterol to combat Alzheimer’s disease.
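The network step of the pipeline above can be sketched as follows. The gene names come from the abstract, but the interaction edges and the simple degree-based ranking are illustrative placeholders, not the study's actual network or scoring method:

```python
from collections import defaultdict

# Illustrative (not the study's actual) interaction edges among the
# cholesterol-regulation genes mentioned in the abstract.
edges = [
    ("SREBP2", "SCAP"), ("SCAP", "ABCA1"), ("SREBP2", "ABCA1"),
    ("ABCA1", "APOE"), ("APOE", "LRP1"), ("APOE", "TREM2"),
    ("CYP46A1", "APOE"), ("LRP1", "APP"), ("APP", "BACE1"),
]

def rank_by_degree(edges):
    """Rank genes by degree in an undirected interaction network."""
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return sorted(degree.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank_by_degree(edges)
top_gene, top_degree = ranking[0]
```

With these placeholder edges, the hub gene is the one with the most interaction partners; a real pipeline would use curated interaction databases and richer centrality measures.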

Keywords: Alzheimer’s disease, drug discovery, ligand-protein docking, gene-network analysis, cholesterol regulation

Procedia PDF Downloads 44
2337 The Molecule Preserve Environment: Effects of Inhibitor of the Angiotensin Converting Enzyme on Reproductive Potential and Composition Contents of the Mediterranean Flour Moth, Ephestia kuehniella Zeller

Authors: Yezli-Touiker Samira, Amrani-Kirane Leila, Soltani Mazouni Nadia

Abstract:

Due to the secondary effects of conventional insecticides on the environment, agrochemical research has turned to the discovery of novel molecules, with the aim of developing a new group of pesticides that are cheaper and less hazardous to the environment and to non-target organisms; that is the main desired outcome of the present work. The angiotensin-converting enzyme (ACE) is one such target for the development of novel insect growth regulators. Captopril, an ACE inhibitor, was tested in vivo by topical application on the reproduction of Ephestia kuehniella Zeller (Lepidoptera: Pyralidae). The compound was diluted in acetone and applied topically to newly emerged pupae (10 µg/2 µl). The effects of this molecule were studied on the biochemistry of the ovary (amounts of nucleic acids and proteins, and the qualitative profile of the ovarian proteins) and on the reproductive potential (duration of the pre-oviposition and oviposition periods, number of eggs laid, and hatching percentage). Captopril significantly reduces the quantity of ovarian proteins and nucleic acids. The electrophoresis profile reveals the absence of three bands in the treated series. This molecule also reduced the duration of the oviposition period, the fecundity, and the egg viability.

Keywords: environment, Ephestia kuehniella, captopril, reproduction, agrochemical research

Procedia PDF Downloads 261
2336 Synthesis and Tribological Properties of the Al-Cr-N/MoS₂ Self-Lubricating Coatings by Hybrid Magnetron Sputtering

Authors: Tie-Gang Wang, De-Qiang Meng, Yan-Mei Liu

Abstract:

Ternary AlCrN coatings are widely used to prolong cutting tool life because of their high hardness and excellent abrasion resistance. However, the friction between the workpiece and the cutter surface increases remarkably when machining difficult-to-cut materials (such as superalloys, titanium, etc.). As a result, a great deal of cutting heat is generated and cutting tool life is shortened. In this work, an appropriate amount of the solid lubricant MoS₂ was added to the AlCrN coating to reduce the friction between the tool and the workpiece. A series of Al-Cr-N/MoS₂ self-lubricating coatings with different MoS₂ contents were prepared by a compound system of high power impulse magnetron sputtering (HiPIMS) and pulsed direct current magnetron sputtering (pulsed DC). The MoS₂ content in the coatings was varied by adjusting the sputtering power of the MoS₂ target. The composition, structure, and mechanical properties of the Al-Cr-N/MoS₂ coatings were systematically evaluated by energy dispersive spectrometry, scanning electron microscopy, X-ray photoelectron spectroscopy, X-ray diffractometry, nano-indentation, scratch testing, and ball-on-disk tribometry. The results indicated that the lubricant content played an important role in the coating properties. When the sputtering power of the MoS₂ target was 0.1 kW, the coating possessed the highest hardness (14.1 GPa), the highest critical load (44.8 N), and the lowest wear rate (4.4×10⁻³ μm²/N).

Keywords: self-lubricating coating, Al-Cr-N/MoS₂ coating, wear rate, friction coefficient

Procedia PDF Downloads 106
2335 Examination of Forged Signatures Printed by Means of Fabrication in Terms of Their Relation to the Perpetrator

Authors: Salim Yaren, Nergis Canturk

Abstract:

Signatures are handwritten signs by which a person confirms elements such as information, amount, meaning, time, and undertaking that bear on a document. By signing a document, the signer is understood to accept and approve the information it contains. Forged signatures are produced by a forger who has not known or seen the original signature of the person being imitated, while hiding the typical characteristics of his or her own signature. Forged signatures are often signed starting with the initials of the first and last name of the person whose signature is being faked, and any similarities to the genuine signature are completely random. Within the scope of the study, both original signatures and forged signatures referring to 5 imaginary people were collected from 100 participants. These signatures were compared against 14 signature-analysis criteria by 2 signature-analysis experts other than the researcher. Expert 1, with 9 years of experience in the field, evaluated the signatures of 39 (39%) people correctly and of 25 (25%) people incorrectly, and made no evaluation for the signatures of 36 (36%) people. Expert 2, with 16 years of experience, evaluated the signatures of 49 (49%) people correctly and of 28 (28%) people incorrectly, and made no evaluation for the signatures of 23 (23%) people. The forged signatures of 24 (24%) people were matched correctly by both experts, those of 8 (8%) people were matched incorrectly, and for 12 (12%) people neither expert could reach a decision. Signature analysis is a subjective topic, so analyses and comparisons take shape according to the education, knowledge, and experience of the expert.
Consequently, given that the expert with 9 years of professional experience achieved 39% success while the expert with 16 years of professional experience achieved 49%, the success rate appears to be directly proportional to the knowledge and experience of the expert.

Keywords: forensic signature, forensic signature analysis, signature analysis criteria, forged signature

Procedia PDF Downloads 104
2334 Ideology Shift in Political Translation

Authors: Jingsong Ma

Abstract:

In political translation, ideology plays an important role in conveying implications accurately. Ideological collisions can occur in political translation when there exist differences between the political environments embedded in the translingual political texts in the source and target languages. Reaching an accurate translation requires the translator to understand the ideologies implied in (and often transcending) the texts. This paper explores the conditions, procedure, and purpose of processing ideological collisions and the resolution of such issues in political translation. These points are elucidated by case studies of translating English and Chinese political texts. First, there are specific political terminologies in certain political environments. These terminological peculiarities in one language are often determined by ideological elements rather than by syntactic and semantic understanding. The translation of these ideologically loaded terminologies is a process and operation consisting of understanding the ideological context, including cultural, historical, and political situations. This is explained with characteristic Chinese political terminologies and their renderings in English. Second, when the ideology in the source language fails to match the ideology in the target language, the decisions to highlight or disregard these conflicts are shaped by power relations, political engagement, social context, etc. It is thus necessary to go beyond linguistic analysis of the context by deciphering the ideology in political documents to provide a faithful or equivalent rendering of certain messages. Finally, one of the practical issues concerns equivalence in political translation: redefining the notion of faithfulness and the retention of the ideological messages of the source language in translations of political texts. To avoid distortion, the translator should be liberated from the grip of the literal meaning and instead dive into the functional meanings of the text.

Keywords: translation, ideology, politics, society

Procedia PDF Downloads 89
2333 Case Analysis of Bamboo Based Social Enterprises in India-Improving Profitability and Sustainability

Authors: Priyal Motwani

Abstract:

The current market for bamboo products in India is about Rs. 21,000 crores and is highly unorganised and fragmented. In this study, we closely analysed the structure and functions of a major bamboo-craft-based organisation in Kerala, India, and elaborated on its value chain, product mix, pricing strategy, supply chain, collaborations, and competitive landscape. We identified six major bottlenecks that are prevalent in such organisations in the Indian context, relating to their product mix, asset management, and supply chain, with the corresponding waste management and retail network. By carrying out secondary and primary research (a sample space of 5,000), the study identified the target customers for bamboo-based products and alternative revenue streams (eco-tourism, microenterprises, training) that can boost the existing revenue by 150%. We then recommended an optimum product mix, covering premium-, medium-, and low-value processing, for medium-sized bamboo-based organisations in accordance with their capacity, to maximize their revenue potential. After studying such organisations and their counterparts, the study established an optimum retail network, considering B2B and B2C physical and online retail, to maximize sales to the target groups. On the basis of the results obtained from the analysis of present and future trends, our study gives recommendations to improve the revenue potential of bamboo-based organisations in India and to promote sustainability.

Keywords: bamboo, bottlenecks, optimization, product mix, retail network, value chain

Procedia PDF Downloads 193
2332 Path Planning for Unmanned Aerial Vehicles in Constrained Environments for Locust Elimination

Authors: Aadiv Shah, Hari Nair, Vedant Mittal, Alice Cheeran

Abstract:

Present-day agricultural practices such as blanket spraying not only lead to excessive usage of pesticides but also harm the overall crop yield. This paper introduces an algorithm to optimize the traversal of an unmanned aerial vehicle (UAV) in constrained environments. The proposed system focuses on the agricultural application of targeted spraying for locust elimination. Given a satellite image of a farm, target zones that are prone to locust swarm formation are detected through the calculation of the normalized difference vegetation index (NDVI). This is followed by determining the optimal path for traversal of a UAV through these target zones using the proposed algorithm in order to perform pesticide spraying in the most efficient manner possible. Unlike the classic travelling salesman problem involving point-to-point optimization, the proposed algorithm determines an optimal path for multiple regions, independent of their geometry. Finally, the paper explores the idea of implementing reinforcement learning to model complex environmental behaviour and make the path planning mechanism for UAVs agnostic to external environment changes. This system not only presents a solution to the enormous losses incurred due to locust attacks but also an efficient way to automate agricultural practices across the globe in order to improve farmer ergonomics.
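The NDVI-based target-zone detection described above can be sketched as a minimal NumPy example. The band values and the 0.4 vegetation threshold are illustrative assumptions, not the paper's calibrated parameters:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index, elementwise over image bands."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)  # eps guards against divide-by-zero

def target_zones(nir, red, threshold=0.4):
    """Boolean mask of pixels whose NDVI exceeds a vegetation threshold."""
    return ndvi(nir, red) > threshold

# Toy 2x2 bands: dense vegetation reflects NIR strongly relative to red.
nir = np.array([[0.8, 0.5], [0.2, 0.9]])
red = np.array([[0.1, 0.4], [0.3, 0.1]])
mask = target_zones(nir, red)
```

In the full system, the resulting mask would be grouped into regions that then become the waypoint set for the path-planning stage.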

Keywords: locust, NDVI, optimization, path planning, reinforcement learning, UAV

Procedia PDF Downloads 226
2331 Immersive and Non-Immersive Virtual Reality Applied to the Cervical Spine Assessment

Authors: Pawel Kiper, Alfonc Baba, Mahmoud Alhelou, Giorgia Pregnolato, Michela Agostini, Andrea Turolla

Abstract:

Impairment of cervical spine mobility is often related to pain triggered by musculoskeletal disorders or direct traumatic injuries of the spine. To date, these disorders are assessed with goniometers and inclinometers, the most popular devices used in clinical settings. Nevertheless, these technologies usually allow measurement of no more than two-dimensional range of motion (ROM) quotes in static conditions. Conversely, motion tracking systems able to measure 3 to 6 degrees of freedom dynamically, while performing standard ROM assessment, see limited use due to the technical complexity of preparing the setup and their high costs. Thus, motion tracking systems are primarily used in research. These systems are an integral part of virtual reality (VR) technologies, which can be used for measuring spine mobility. To our knowledge, the accuracy of VR measurement has not yet been studied within virtual environments. Thus, the aim of this study was to test the reliability of a protocol for the assessment of sensorimotor function of the cervical spine in a population of healthy subjects and to compare whether using immersive or non-immersive VR for visualization affects the performance. Both VR assessments consisted of the same five exercises, and a random sequence determined which of the environments (i.e., immersive or non-immersive) was used first. Subjects were asked to perform head rotation (right and left), flexion, extension, and lateral flexion (right and left side bending). Each movement was executed five times. Moreover, the participants were invited to perform head reaching movements, i.e., head movements toward 8 targets placed along a circular perimeter at 45° intervals, visualized one by one in random order. Finally, head repositioning was assessed by head movement toward the same 8 targets as for reaching, followed by repositioning to the start point. Thus, each participant performed 46 tasks during the assessment.
Main measures were: ROM of rotation, flexion, extension, and lateral flexion, and the complete kinematics of the cervical spine (i.e., number of completed targets, time of execution (seconds), spatial length (cm), angle distance (°), jerk). Thirty-five healthy participants (14 males and 21 females, mean age 28.4±6.47) were recruited for the cervical spine assessment in the immersive and non-immersive VR environments. Comparison analysis demonstrated that head right rotation (p=0.027), extension (p=0.047), flexion (p=0.000), time (p=0.001), spatial length (p=0.004), jerk target (p=0.032), trajectory repositioning (p=0.003), and jerk target repositioning (p=0.007) were significantly better in immersive than in non-immersive VR. A regression model showed that assessment in immersive VR was influenced by height, trajectory repositioning (p<0.05), and handedness (p<0.05), whereas performance in non-immersive VR was influenced by height, jerk target (p=0.002), head extension, jerk target repositioning (p=0.002), and by age, head flexion/extension, trajectory repositioning, and weight (p=0.040). The results of this study showed higher accuracy of cervical spine assessment when executed in immersive VR. The assessment of ROM and kinematics of the cervical spine can be affected by independent and dependent variables in both immersive and non-immersive VR settings.
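Two of the kinematic measures named above, spatial length and jerk, can be sketched for a uniformly sampled head trajectory. The finite-difference estimates and the test trajectory are illustrative assumptions, not the study's implementation:

```python
import numpy as np

def path_length(pos):
    """Total spatial length (same units as pos) of a sampled trajectory."""
    return float(np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1)))

def mean_squared_jerk(pos, dt):
    """Mean squared jerk (third derivative of position) via finite differences."""
    jerk = np.diff(pos, n=3, axis=0) / dt**3
    return float(np.mean(np.sum(jerk**2, axis=1)))

# A perfectly smooth constant-velocity trajectory: 1 unit along x in 1 second.
t = np.linspace(0.0, 1.0, 101)
dt = t[1] - t[0]
straight = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
length = path_length(straight)
jerk_score = mean_squared_jerk(straight, dt)
```

A smoother movement yields a lower jerk score, which is why jerk serves as a movement-quality measure alongside time and spatial length.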

Keywords: virtual reality, cervical spine, motion analysis, range of motion, measurement validity

Procedia PDF Downloads 136
2330 Numerical Studies for Standard Bi-Conjugate Gradient Stabilized Method and the Parallel Variants for Solving Linear Equations

Authors: Kuniyoshi Abe

Abstract:

Bi-conjugate gradient (Bi-CG) is a well-known method for solving linear equations Ax = b for x, where A is a given n-by-n matrix and b is a given n-vector. Typically, the dimension of the linear equation is high and the matrix is sparse. A number of hybrid Bi-CG methods, such as conjugate gradient squared (CGS), Bi-CG stabilized (Bi-CGSTAB), BiCGStab2, and BiCGstab(l), have been developed to improve the convergence of Bi-CG. Bi-CGSTAB has been the method most often used for efficiently solving such linear equations, but its convergence behavior sometimes exhibits a long stagnation phase. In such cases, it is important to have Bi-CG coefficients that are as accurate as possible, and a stabilization strategy, which stabilizes the computation of the Bi-CG coefficients, has been proposed; it may avoid stagnation and lead to faster computation. Motivated by the large number of processors in present petascale high-performance computing hardware, the scalability of Krylov subspace methods on parallel computers has recently become increasingly prominent. The main bottleneck for efficient parallelization is the inner products, which require a global reduction; the resulting global synchronization phases cause communication overhead on parallel computers. Parallel variants of Krylov subspace methods that reduce the number of global communication phases and hide the communication latency have been proposed. However, the numerical stability, specifically the convergence speed, of the parallel variants of Bi-CGSTAB may become worse than that of the standard Bi-CGSTAB. In this paper, therefore, we compare the convergence speed of the standard Bi-CGSTAB and the parallel variants by numerical experiments and show that the convergence speed of the standard Bi-CGSTAB is faster than that of the parallel variants. Moreover, we propose a stabilization strategy for the parallel variants.
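The standard (serial) Bi-CGSTAB iteration discussed above can be sketched as follows. This minimal NumPy version omits preconditioning and the stabilization strategy, and the diagonally dominant test matrix is illustrative, not one of the paper's benchmark problems:

```python
import numpy as np

def bicgstab(A, b, tol=1e-10, max_iter=500):
    """Standard (serial) Bi-CGSTAB for Ax = b, without preconditioning."""
    x = np.zeros_like(b)
    r = b - A @ x
    r_hat = r.copy()                      # shadow residual, fixed throughout
    rho_prev, alpha, omega = 1.0, 1.0, 1.0
    v = np.zeros_like(b)
    p = np.zeros_like(b)
    for _ in range(max_iter):
        rho = r_hat @ r                   # a global reduction on parallel machines
        beta = (rho / rho_prev) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho / (r_hat @ v)         # Bi-CG coefficient
        s = r - alpha * v
        t = A @ s
        omega = (t @ s) / (t @ t)         # stabilization parameter
        x = x + alpha * p + omega * s
        r = s - omega * t
        rho_prev = rho
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
    return x

rng = np.random.default_rng(0)
n = 50
A = rng.random((n, n)) + n * np.eye(n)    # diagonally dominant test matrix
b = rng.random(n)
x = bicgstab(A, b)
residual = np.linalg.norm(A @ x - b)
```

The two inner products per iteration (`r_hat @ r` and `r_hat @ v`, plus those inside `omega`) are exactly the global reductions whose synchronization cost motivates the parallel variants.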

Keywords: bi-conjugate gradient stabilized method, convergence speed, Krylov subspace methods, linear equations, parallel variant

Procedia PDF Downloads 139
2329 An Inverse Docking Approach for Identifying New Potential Anticancer Targets

Authors: Soujanya Pasumarthi

Abstract:

Inverse docking is a relatively new technique that has been used to identify potential receptor targets of small molecules. Our docking software package MDock is well suited for such an application, as it is computationally efficient while showing adequate results in binding affinity predictions and enrichment tests. As a validation study, we present the first-stage results of an inverse-docking study which seeks to identify potential direct targets of PRIMA-1. PRIMA-1 is well known for its ability to restore mutant p53's tumor suppressor function, leading to apoptosis in several types of cancer cells. For this reason, we believe that potential direct targets of PRIMA-1 identified in silico should be experimentally screened for their ability to inhibit cancer cell growth. The highest-ranked human protein in our PRIMA-1 docking results is oxidosqualene cyclase (OSC), which is part of the cholesterol synthetic pathway. The results of two follow-up experiments which treat OSC as a possible anti-cancer target are promising. We show that both PRIMA-1 and Ro 48-8071, a known potent OSC inhibitor, significantly reduce the viability of BT-474 breast cancer cells relative to normal mammary cells. In addition, like PRIMA-1, we find that Ro 48-8071 results in increased binding of mutant p53 to DNA in BT-474 cells (which highly express p53). For the first time, Ro 48-8071 is shown to be a potent agent in killing human breast cancer cells. The potential of OSC as a new target for developing anticancer therapies is worth further investigation.

Keywords: inverse docking, in silico screening, protein-ligand interactions, molecular docking

Procedia PDF Downloads 415
2328 Unified Coordinate System Approach for Swarm Search Algorithms in Global Information Deficit Environments

Authors: Rohit Dey, Sailendra Karra

Abstract:

This paper aims at solving the problem of multi-target searching in a Global Positioning System (GPS) denied environment using swarm robots with limited sensing and communication abilities. Typically, existing swarm-based search algorithms rely on the presence of a global coordinate system (viz., GPS) that is shared by the entire swarm, which, in turn, limits their application in a real-world scenario. This can be attributed to the fact that robots in a swarm need to share information among themselves regarding their locations and signals from targets to decide their future course of action, but this information is only meaningful when they all share the same coordinate frame. The paper addresses this very issue by eliminating any dependency of a search algorithm on a predetermined global coordinate frame, through the unification of the relative coordinates of individual robots when within communication range, thereby making the system more robust in real scenarios. Our algorithm assumes that all the robots in the swarm are equipped with range and bearing sensors and have limited sensing range and communication abilities. Initially, every robot maintains its own relative coordinate frame and follows Lévy-walk random exploration until it comes into range with other robots. When two or more robots are within communication range, they share sensor information and their locations w.r.t. their coordinate frames, based on which we unify their coordinate frames. They can then share information about the areas that were already explored, information about the surroundings, and target signals from their locations to make decisions about their future movement based on the search algorithm.
During the process of exploration, there can be several small groups of robots, each with its own coordinate system, but eventually all the robots are expected to come under one global coordinate frame in which they can communicate information on the exploration area following swarm search techniques. Using the proposed method, swarm-based search algorithms can work in a real-world scenario without GPS and without any initial information about the size and shape of the environment. Initial simulation results show that our modified Particle Swarm Optimization (PSO), running without global information, can still achieve results comparable to basic PSO working with GPS. In the full paper, we plan a comparison study between different strategies for unifying the coordinate system and will implement them on other bio-inspired algorithms to work in GPS-denied environments.
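The frame-unification step can be sketched for two robots with range-and-bearing sensors. The 2-D geometry below assumes exact, noise-free mutual measurements, unlike a real deployment, and is an illustrative reconstruction rather than the authors' algorithm:

```python
import numpy as np

def unify_frames(theta_ab, phi_ba, dist):
    """
    Given robot A's bearing to B (theta_ab, in A's frame), B's bearing to A
    (phi_ba, in B's frame), and their mutual range, return the rotation psi
    and translation t mapping B-frame coordinates into A's frame.
    """
    # B sees A along phi_ba; in A's frame that same line of sight points
    # back at theta_ab + pi, which fixes the relative frame rotation.
    psi = (theta_ab + np.pi) - phi_ba
    t = dist * np.array([np.cos(theta_ab), np.sin(theta_ab)])  # B's origin in A
    return psi, t

def to_a_frame(p_b, psi, t):
    """Map a point expressed in B's frame into A's frame."""
    c, s = np.cos(psi), np.sin(psi)
    R = np.array([[c, -s], [s, c]])
    return R @ p_b + t

# Example: B sits 2 m ahead of A along A's x-axis, rotated 90 degrees.
psi, t = unify_frames(theta_ab=0.0, phi_ba=np.pi / 2, dist=2.0)
p_a = to_a_frame(np.array([1.0, 0.0]), psi, t)  # a point 1 m along B's x-axis
```

Once (psi, t) is shared, both robots can express explored areas and target signals in the same frame; chaining such transforms merges whole subgroups into one coordinate system.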

Keywords: bio-inspired search algorithms, decentralized control, GPS denied environment, swarm robotics, target searching, unifying coordinate systems

Procedia PDF Downloads 113
2327 Rapid, Label-Free, Direct Detection and Quantification of Escherichia coli Bacteria Using Nonlinear Acoustic Aptasensor

Authors: Shilpa Khobragade, Carlos Da Silva Granja, Niklas Sandström, Igor Efimov, Victor P. Ostanin, Wouter van der Wijngaart, David Klenerman, Sourav K. Ghosh

Abstract:

Rapid, label-free and direct detection of pathogenic bacteria is critical for the prevention of disease outbreaks. This paper, for the first time, probes the nonlinear acoustic response of a quartz crystal resonator (QCR) functionalized with specific DNA aptamers for direct detection and quantification of viable E. coli KCTC 2571 bacteria. DNA aptamers were immobilized, through biotin-streptavidin conjugation, onto the gold surface of the QCR to capture the target bacteria, and detection was accomplished by the shift in amplitude of the peak 3f signal (3 times the drive frequency) upon binding, when driven near the fundamental resonance frequency. The developed nonlinear acoustic aptasensor system demonstrated better reliability than the conventional resonance frequency shift and energy dissipation monitoring that were recorded simultaneously. This sensing system could directly detect 10⁵ cells/mL of the target bacteria within 30 min or less and had high specificity towards E. coli KCTC 2571 as compared with the same concentration of S. typhi bacteria. The aptasensor response was observed for bacterial suspensions ranging from 10⁵ to 10⁸ cells/mL. In conclusion, this nonlinear acoustic aptasensor is simple to use, gives real-time output, is cost-effective, and has the potential for rapid, specific, label-free direct detection of bacteria.

Keywords: acoustic, aptasensor, detection, nonlinear

Procedia PDF Downloads 538
2326 Classification of EEG Signals Based on Dynamic Connectivity Analysis

Authors: Zoran Šverko, Saša Vlahinić, Nino Stojković, Ivan Markovinović

Abstract:

In this article, the classification of target letters is performed using data from the EEG P300 Speller paradigm. Neural networks trained with the results of dynamic connectivity analysis between different brain regions are used for classification. Dynamic connectivity analysis is based on an adaptive window size and the imaginary part of the complex Pearson correlation coefficient. Brain dynamics are analysed using the relative intersection of confidence intervals for the imaginary component of the complex Pearson correlation coefficient method (RICI-imCPCC). The RICI-imCPCC method overcomes the shortcomings of currently used dynamic connectivity analysis methods, such as the low reliability and low temporal precision for short connectivity intervals encountered in constant sliding window analysis with a wide window size, and the high susceptibility to noise encountered in constant sliding window analysis with a narrow window size. It overcomes these shortcomings by dynamically adjusting the window size using the RICI rule, and it extracts information about brain connections for each time sample. Seventy percent of the extracted brain connectivity information is used for training and thirty percent for validation. Classification of the target word is also performed based on the same analysis method. As far as we know, through this research we have shown for the first time that dynamic connectivity can be used as a parameter for classifying EEG signals.
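The connectivity measure at the heart of the method, the imaginary part of the complex Pearson correlation coefficient (imCPCC), can be sketched as follows. This fixed-window version omits the RICI-based adaptive window size and is an illustrative reconstruction, not the authors' code:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (equivalent to a Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)          # spectral mask: keep DC, double positive freqs
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def imcpcc(x, y):
    """Imaginary part of the complex Pearson correlation of two signals."""
    zx = analytic_signal(x); zx -= zx.mean()
    zy = analytic_signal(y); zy -= zy.mean()
    c = np.sum(zx * np.conj(zy)) / np.sqrt(
        np.sum(np.abs(zx) ** 2) * np.sum(np.abs(zy) ** 2))
    return float(c.imag)

# A quarter-cycle phase lag gives |imCPCC| near 1; zero lag gives near 0.
n, f = 256, 8
t = 2 * np.pi * f * np.arange(n) / n
lagged = imcpcc(np.cos(t), np.sin(t))
zero_lag = imcpcc(np.cos(t), np.cos(t))
```

Because the imaginary part vanishes for zero-lag coupling, the measure discounts the instantaneous correlations produced by volume conduction, which is a common motivation for imaginary-part connectivity metrics in EEG.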

Keywords: dynamic connectivity analysis, EEG, neural networks, Pearson correlation coefficients

Procedia PDF Downloads 176
2325 Indoor Air Pollution of the Flexographic Printing Environment

Authors: Jelena S. Kiurski, Vesna S. Kecić, Snežana M. Aksentijević

Abstract:

The identification and evaluation of organic and inorganic pollutants were performed in a flexographic facility in Novi Sad, Serbia. Air samples were collected and analyzed in situ during 4 hours of working time at five sampling points, using a mobile gas chromatograph and an ozonometer, during the printing of collagen casing. Experimental results showed that the concentrations of isopropyl alcohol, acetone, total volatile organic compounds, and ozone varied during the sampling times. The highest average concentrations, 94.80 ppm and 102.57 ppm, were reached 200 minutes after the start of production for isopropyl alcohol and total volatile organic compounds, respectively. The mutual dependences between the target hazardous pollutants and the microclimate parameters were confirmed using a multiple linear regression model with the software package STATISTICA 10. The multiple coefficients of determination obtained for ozone and acetone (0.507 and 0.589) with the microclimate parameters indicated a moderate correlation between the observed variables. However, a strong positive correlation with the microclimate parameters was obtained for isopropyl alcohol and total volatile organic compounds (0.760 and 0.852). Values of the parameter F higher than F critical for all examined dependences indicated a statistically significant relationship between the concentration levels of the target pollutants and the microclimate parameters. Given that the microclimate parameters significantly affect the emission of the investigated gases, the application of eco-friendly materials in the production process is a necessity.
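The multiple-linear-regression step can be sketched with ordinary least squares. The synthetic temperature/humidity data below are illustrative, not the study's measurements or its STATISTICA model:

```python
import numpy as np

def fit_multiple_linear(X, y):
    """Ordinary least squares: returns coefficients (intercept first) and R²."""
    Xd = np.column_stack([np.ones(len(y)), X])   # design matrix with intercept
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    y_hat = Xd @ beta
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return beta, 1.0 - ss_res / ss_tot           # R² = coefficient of determination

# Toy data: a pollutant level driven exactly by two microclimate parameters
# (temperature in °C, relative humidity in %); values are synthetic.
rng = np.random.default_rng(1)
temp = rng.uniform(20, 30, 40)
hum = rng.uniform(30, 60, 40)
conc = 1.0 + 2.0 * temp + 0.5 * hum              # exact linear relationship
beta, r2 = fit_multiple_linear(np.column_stack([temp, hum]), conc)
```

With noisy real measurements, R² falls below 1, which is exactly what the reported coefficients of determination (0.507 to 0.852) quantify for each pollutant.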

Keywords: flexographic printing, indoor air, multiple regression analysis, pollution emission

Procedia PDF Downloads 172
2324 The Views of German Preparatory Language Programme Students about German Speaking Activity

Authors: Eda Üstünel, Seval Karacabey

Abstract:

The students who are enrolled in the German Preparatory Language Programme at the School of Foreign Languages, Muğla Sıtkı Koçman University, Turkey, learn German as a foreign language for two semesters in an academic year. Although the language programme is a skills-based one, the students lack German speaking skills due to their fear of making language mistakes while speaking in German. This incompetency in German speaking skills persists into their four-year departmental study at the Faculty of Education. In order to address this problem, we designed German speaking activities as extra-curricular activities. With the help of these activities, we aim to lead Turkish students of German to speak in the target language, to improve their speaking skills in the target language, and to create a stress-free atmosphere and a meaningful learning environment for communicating in the target language. To achieve these aims, an ERASMUS+ exchange staff member (a German trainee teacher of German as a foreign language) from Schwäbisch Gmünd University, Germany, conducted out-of-class German speaking activities once a week for three weeks in total. Each speaking activity lasted one and a half hours per week. Seven volunteer students of the German preparatory language programme attended the speaking activity for three weeks. The activity took place at a cafe on the university campus, which is why we call it an out-of-class activity. The content of the speaking activity was not related to the topics studied in the units of the coursebook, which is why we call it an extra-curricular one. For data collection, three tools were used. A questionnaire, an adapted version of Sabo’s questionnaire, was administered to the seven volunteers. An interview session was then held with each student on an individual basis; the interview questions were developed so as to ask students to expand on the answers given in the questionnaires.
The German trainee teacher wrote field notes, in which she described the activity in the light of her thoughts about what went well and which areas needed to be improved. The results of the questionnaires show that six out of seven students note that such an activity must be conducted by a native speaker of German. Four out of seven students emphasize that they like the way the activities are designed in a learner-centred fashion. All of the students point out that they feel motivated to talk to the trainee teacher in German. Six out of seven students note that the opportunity to communicate in German with the teacher and their peers enables them to improve their speaking skills, their use of grammatical rules, and their use of vocabulary.

Keywords: learning a foreign language, speaking skills, teaching German as a foreign language, Turkish learners of German language

Procedia PDF Downloads 297
2323 Nation Branding as Reframing: From the Perspective of Translation Studies

Authors: Ye Tian

Abstract:

Soft power has replaced hard power and become one of the most attractive means nations pursue to expand their international influence. One way to improve a nation’s soft power is to commercialise the country and brand or rebrand it to an international audience, and thus attract interest or foreign investment. In this process, translation has often been regarded as merely a tool, and research on it focuses either on translating literature as cultural export or on how the (in)accuracy of translation influences the branding campaign. This paper proposes to analyse nation branding campaigns with framing theory, and thus gives translation studies an entry to centre stage in today’s soft power research. To frame information or elements of a text, an event, or, as in this paper, a nation is to put them in a mental structure. This structure can be built by outsiders or by those who create the text or the event, or by citizens of the nation. Framing information in this way can be regarded as a process of translation, as what translation does in its traditional sense of ‘translating a text’ is to put a framework on the text to, deliberately or not, highlight some elements while hiding others. In the discourse of nations, then, people unavoidably simplify a national image and place the nation into their imaginary framework. In this way, problems like stereotype and prejudice come into being. Meanwhile, if nations seek ways to frame or reframe themselves, they make efforts to control what and who they are in the eyes of international audiences, and thus profit, economically or politically, from it. The paper takes African nations, which are usually perceived as a whole, and the United Kingdom as examples to illustrate passive and active framing processes, and assesses both the positive and negative influence framing has on nations.
In conclusion, translation as framing causes problems like prejudice, and the image of a nation is not always in the hands of nation branders; reframing the nation in a positive way, however, has the potential to turn the tide.

Keywords: framing, nation branding, stereotype, translation

Procedia PDF Downloads 131
2322 A Sui Generis Technique to Detect Pathogens in Post-Partum Breast Milk Using Image Processing Techniques

Authors: Yogesh Karunakar, Praveen Kandaswamy

Abstract:

Mother’s milk is the superior source of nutrition for a child; there is no substitute for it. Postpartum secretions like breast milk can be analyzed on the go to test for the presence of any harmful pathogen before a mother feeds the child or donates the milk to a milk bank. Since breastfeeding is one of the main routes of disease transmission to the newborn, it is mandatory to test these secretions. In this paper, we describe the detection of pathogens such as E. coli, Human Immunodeficiency Virus (HIV), Hepatitis B virus (HBV), Hepatitis C virus (HCV), Cytomegalovirus (CMV), Zika virus, and Ebola virus through an innovative method in which we are developing a unique chip for testing a sample of the mother’s milk. The chip will contain an antibody specific to the target pathogen that produces a color change when the fluid contains enough pathogens to be considered dangerous. A smartphone camera will then acquire an image of the strip, and various image processing techniques will detect the color development due to the antigen-antibody interaction within 5 minutes, adding no delay before the newborn is fed or before the milk is collected for the milk bank. If the target pathogen tests positive through this method, the health care provider can administer adequate treatment to bring down the pathogen count. This will reduce the postpartum mortality and morbidity that arise from feeding infectious breast milk to one’s own child.
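The color-detection step described above could, in its simplest form, reduce to comparing the mean color of the strip's reactive region against a known negative reference. The sketch below illustrates that idea in plain Python; the reference color, threshold, and pixel values are hypothetical placeholders, not the authors' actual calibration, and a real implementation would operate on camera frames rather than hand-built pixel lists.

```python
# Minimal sketch of threshold-based strip classification.
# Assumptions (not from the paper): the negative-control color,
# the distance threshold, and the simulated pixel values below.

def mean_rgb(pixels):
    """Average an iterable of (R, G, B) tuples from the strip's region of interest."""
    n = 0
    totals = [0, 0, 0]
    for r, g, b in pixels:
        totals[0] += r
        totals[1] += g
        totals[2] += b
        n += 1
    return tuple(t / n for t in totals)

def color_distance(c1, c2):
    """Euclidean distance in RGB space between two mean colors."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def classify_strip(roi_pixels, negative_reference=(235, 235, 225), threshold=60.0):
    """Flag the sample as positive if the strip color has shifted far enough
    from the negative-control color (threshold chosen purely for illustration)."""
    distance = color_distance(mean_rgb(roi_pixels), negative_reference)
    return ("positive" if distance > threshold else "negative", distance)

# Simulated pixels: a region that turned reddish after antigen-antibody
# binding versus an unreacted region close to the reference color.
reacted = [(190, 80, 90)] * 100
clean = [(233, 236, 224)] * 100
print(classify_strip(reacted)[0])  # positive
print(classify_strip(clean)[0])    # negative
```

A production app would add white-balance correction and ROI detection before this step, since smartphone lighting varies widely between captures.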

Keywords: postpartum, fluids, camera, HIV, HCV, CMV, Zika, Ebola, smart-phones, breast milk, pathogens, image processing techniques

Procedia PDF Downloads 201