Search results for: target localization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2982

432 Clustering-Based Computational Workload Minimization in Ontology Matching

Authors: Mansir Abubakar, Hazlina Hamdan, Norwati Mustapha, Teh Noranis Mohd Aris

Abstract:

To build a matching pattern for each class correspondence between ontologies, a set of attribute correspondences across the two corresponding classes must be specified, which can be done by clustering. Clustering reduces the number of potential attribute correspondences considered in the matching activity and thus significantly reduces the computational workload; otherwise, every attribute of a class would have to be compared with every attribute of the corresponding class. Most existing ontology matching approaches lack scalable attribute discovery methods, such as cluster-based attribute searching, which makes ontology matching computationally expensive. It is therefore vital in ontology matching to design a scalable element or attribute correspondence discovery method that reduces the number of potential element correspondences during mapping and thereby reduces the computational workload of the matching process as a whole. The objectives of this work are 1) to design a clustering method for discovering similar attribute correspondences and relationships between ontologies, and 2) to discover element correspondences by classifying the elements of each class based on their value features using the K-medoids clustering technique. Discovering attribute correspondences is essential for comparing instances when matching two ontologies. During the matching process, any two instances across two different data sets are compared on their attribute values to decide whether they are the same. Intuitively, two instances that come from classes linked by a class correspondence are likely to be identical to each other, and two instances that hold more similar attribute values are more likely to be matched than ones with less similar values. Most of the time, similar attribute values occur in instances linked by an attribute correspondence.
This work presents how to classify the attributes of each class with K-medoids clustering and then map the resulting clusters by their statistical value features. We also show how to map the attributes of one clustered group to the attributes of its mapped counterpart, generating a set of potential attribute correspondences from which a matching pattern is built. The K-medoids clustering phase largely reduces the number of non-corresponding attribute pairs considered when comparing instances, since only attribute pairs whose coverage probability reaches 100% and attributes above a specified threshold are considered potential candidates for a match. Clustering reduces the number of potential element correspondences considered during the mapping activity, which in turn reduces the computational workload significantly; otherwise, every element of a class in the source ontology would have to be compared with every element of the corresponding classes in the target ontology. K-medoids can reliably cluster the attributes of each class so that a proportion of non-corresponding attribute pairs is excluded when constructing the matching pattern.
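The abstract does not give implementation details for the clustering step. Purely as an illustration, a minimal K-medoids routine over hypothetical per-attribute value features (here, an assumed pair of features per attribute: ratio of numeric values and mean value length; neither the features nor the names come from the paper) could be sketched as:

```python
import random

def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def k_medoids(points, k, dist=euclid, iters=100, seed=0):
    """Plain K-medoids (PAM-style alternation): assign each point to its
    nearest medoid, then move each medoid to the cluster member that
    minimizes the total intra-cluster distance."""
    rng = random.Random(seed)
    medoids = rng.sample(range(len(points)), k)
    clusters = {}
    for _ in range(iters):
        # Assignment step: nearest medoid wins.
        clusters = {m: [] for m in medoids}
        for i, p in enumerate(points):
            nearest = min(medoids, key=lambda m: dist(p, points[m]))
            clusters[nearest].append(i)
        # Update step: re-pick the most central member of each cluster.
        new_medoids = [
            min(members, key=lambda c: sum(dist(points[c], points[j]) for j in members))
            for members in clusters.values()
        ]
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    return medoids, clusters

# Hypothetical value features per attribute: (numeric-value ratio, mean value length).
features = [(0.9, 3.0), (0.8, 3.2), (0.1, 12.0), (0.2, 11.5)]
medoids, clusters = k_medoids(features, k=2)
```

With features like these, attributes with similar value statistics land in the same cluster, so only attribute pairs drawn from corresponding clusters need to be compared when building the matching pattern.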

Keywords: attribute correspondence, clustering, computational workload, k-medoids clustering, ontology matching

Procedia PDF Downloads 230
431 Nanoimprinted-Block Copolymer-Based Porous Nanocone Substrate for SERS Enhancement

Authors: Yunha Ryu, Kyoungsik Kim

Abstract:

Raman spectroscopy is one of the most powerful techniques for chemical detection, but the low sensitivity originating from the extremely small cross-section of Raman scattering limits its practical use. To overcome this problem, surface-enhanced Raman scattering (SERS) has been intensively studied for several decades. Because the SERS effect is mainly induced by strong electromagnetic near-field enhancement resulting from the localized surface plasmon resonance of metallic nanostructures, it is important to design plasmonic structures with a high density of electromagnetic hot spots for SERS substrates. One useful fabrication method is to use a porous nanomaterial as a template for the metallic structure. Internal pores on a scale of tens of nanometers can act as strong electromagnetic hot spots by confining the incident light. Also, porous structures can capture more target molecules than non-porous structures in the same detection spot thanks to their large surface area. Herein we report a facile fabrication method for a porous SERS substrate that integrates solvent-assisted nanoimprint lithography and selective etching of a block copolymer. We obtained nanostructures with high porosity via simple selective etching of one microdomain of the diblock copolymer. Furthermore, we imprinted nanocone patterns into the spin-coated flat block copolymer film to make a three-dimensional SERS substrate with a high density of SERS hot spots as well as a large surface area. We used solvent-assisted nanoimprint lithography (SAIL) to reduce the fabrication time and cost of patterning the BCP film by taking advantage of a solvent that dissolves both the polystyrene and poly(methyl methacrylate) domains of the block copolymer; thus, the block copolymer film was molded at low temperature and atmospheric pressure in a short time. After Ag deposition, we measured the Raman intensity of dye molecules adsorbed on the fabricated structure.
Compared to the Raman signals of the Ag-coated solid nanocone, the porous nanocone showed 10 times higher Raman intensity at the 1510 cm⁻¹ band. In conclusion, we fabricated porous metallic nanocone arrays with a high density of electromagnetic hot spots by templating a nanoimprinted diblock copolymer with selective etching, and demonstrated their capability as an effective SERS substrate.

Keywords: block copolymer, porous nanostructure, solvent-assisted nanoimprint, surface-enhanced Raman spectroscopy

Procedia PDF Downloads 600
430 Triazenes: Unearthing Their Hidden Arsenal Against Malaria and Microbial Menace

Authors: Frans J. Smit, Wisdom A. Munzeiwa, Hermanus C. M. Vosloo, Lyn-Marie Birkholtz, Richard K. Haynes

Abstract:

Malaria and antimicrobial infections remain significant global health concerns, necessitating the continuous search for novel therapeutic approaches. This abstract presents an overview of the potential use of triazenes as effective agents against malaria and various antimicrobial pathogens. Triazenes are a class of compounds characterized by a linear arrangement of three nitrogen atoms, rendering them structurally distinct from their cyclic counterparts. This study investigates the efficacy of triazenes against malaria and explores their antimicrobial activity. Preliminary results revealed significant antimalarial activity of the triazenes, as evidenced by in vitro screening against P. falciparum, the causative agent of malaria. Furthermore, the compounds exhibited broad-spectrum antimicrobial activity, indicating their potential as effective antimicrobial agents. These compounds have shown inhibitory effects on various essential enzymes and processes involved in parasite survival, replication, and transmission. The mechanism of action of triazenes against malaria involves interactions with critical molecular targets, such as enzymes involved in the parasite's metabolic pathways and proteins responsible for host cell invasion. The antimicrobial activity of the triazenes against bacteria and fungi was investigated through disc diffusion screening. The antimicrobial efficacy of triazenes has been observed against both Gram-positive and Gram-negative bacteria, as well as multidrug-resistant strains, making them potential candidates for combating drug-resistant infections. Furthermore, triazenes possess favourable physicochemical properties, such as good stability, solubility, and low toxicity, which are essential for drug development. The structural versatility of triazenes allows for the modification of their chemical composition to enhance their potency, selectivity, and pharmacokinetic properties. 
These modifications can be tailored to target specific pathogens, increasing the potential for personalized treatment strategies. In conclusion, this study highlights the potential of triazenes as promising candidates for the development of novel antimalarial and antimicrobial therapeutics. Further investigations are necessary to determine the structure-activity relationships and optimize the pharmacological properties of these compounds. The results warrant additional research, including MIC studies, to further explore the antimicrobial activity of the triazenes. Ultimately, these findings contribute to the development of more effective strategies for combating malaria and microbial infections.

Keywords: malaria, anti-microbials, triazene, resistance

Procedia PDF Downloads 81
429 A User-Directed Approach to Optimization via Metaprogramming

Authors: Eashan Hatti

Abstract:

In software development, programmers often must choose between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions, but the use of these abstractions degrades performance: high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both. The optimizer takes high-level, abstract code as input and produces low-level, performant code as output. However, there is a problem with having the optimizer be a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages. As a language's library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or these abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed; the former demands too much effort from the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer, and in turn program performance, falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot's main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This takes the optimization workload off the compiler developers' hands and gives it to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations.
The language is split into two fragments or “levels”, one for metaprogramming, the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot’s key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features which make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered around logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code which is both high-level and performant.
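Peridot's own syntax is not shown in the abstract. Purely to illustrate the core idea that optimizations can be written as metaprograms which unify rewrite-rule patterns against the program being optimized, here is a toy rewrite engine in Python; the term encoding, rules, and names are all hypothetical, not Peridot code:

```python
class Var:
    """A metavariable that unifies with any subterm."""
    def __init__(self, name):
        self.name = name

def unify(pat, term, subst):
    """Return an extended substitution if `pat` matches `term`, else None."""
    if isinstance(pat, Var):
        if pat.name in subst:
            return unify(subst[pat.name], term, subst)
        out = dict(subst)
        out[pat.name] = term
        return out
    if isinstance(pat, tuple) and isinstance(term, tuple) and len(pat) == len(term):
        for p, t in zip(pat, term):
            subst = unify(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pat == term else None

def substitute(pat, subst):
    if isinstance(pat, Var):
        return subst[pat.name]
    if isinstance(pat, tuple):
        return tuple(substitute(p, subst) for p in pat)
    return pat

def rewrite(term, rules):
    """Normalize bottom-up, applying rules until no rule fires."""
    if isinstance(term, tuple):
        term = tuple(rewrite(t, rules) for t in term)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            s = unify(lhs, term, {})
            if s is not None:
                new = substitute(rhs, s)
                if new != term:
                    term = rewrite(new, rules)
                    changed = True
    return term

X = Var("x")
# Toy "optimizations" expressed as rewrite rules over term trees.
RULES = [(("add", X, 0), X), (("mul", X, 1), X), (("mul", X, 0), 0)]
```

For example, `rewrite(("add", ("mul", ("add", "y", 0), 1), 0), RULES)` reduces the whole expression to `"y"`. A logic-programming host, as in Peridot, additionally gets backtracking over rule choice for free, which is what sidesteps the phase-ordering problem the abstract mentions.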

Keywords: optimization, metaprogramming, logic programming, abstraction

Procedia PDF Downloads 66
428 Evaluation of the Photo Neutron Contamination inside and outside of Treatment Room for High Energy Elekta Synergy® Linear Accelerator

Authors: Sharib Ahmed, Mansoor Rafi, Kamran Ali Awan, Faraz Khaskhali, Amir Maqbool, Altaf Hashmi

Abstract:

Medical linear accelerators (LINACs) used in radiotherapy treatments produce undesired neutrons when operated at energies above 8 MeV, in both electron and photon configurations. Neutrons are produced by high-energy photons and electrons through electronuclear (e, n) and photonuclear giant dipole resonance (GDR) reactions. These reactions occur when incoming photons or electrons pass through the various materials of the target, flattening filter, collimators, and other shielding components in the LINAC structure. These neutrons may reach the patient directly, or they may interact with the surrounding materials until they become thermalized. A study was set up to examine the effect of different parameters on neutron production around the room by photonuclear reactions induced by photons above ~8 MeV. A commercially available neutron detector (Ludlum Model 42-31H Neutron Detector) was used to detect thermal and fast neutrons (0.025 eV to approximately 12 MeV) inside and outside the treatment room. Measurements were performed for different field sizes at 100 cm source-to-surface distance (SSD) of the detector, at different distances from the isocenter, and at the primary and secondary walls. Further measurements were taken at the door and at the treatment console to address the radiation safety of the therapists who must walk in and out of the room during treatments. Exposures were delivered by Elekta Synergy® linear accelerators at two energies (10 MV and 18 MV) for 200 MU at a dose rate of 600 MU per minute.
Results indicate that neutron doses at 100 cm SSD depend on accelerator characteristics, namely the jaw settings: since the jaws are made of high-atomic-number material, they provide significant photon interactions that produce neutrons. Doses at larger distances from the isocenter are strongly influenced by the treatment room geometry, and backscattering from the walls causes greater doses than at 100 cm from the isocenter. In the treatment room, the ambient dose equivalent due to photons produced during the decay of activation nuclei varies from 4.22 mSv·h⁻¹ to 13.2 mSv·h⁻¹ (at the isocenter), 6.21 mSv·h⁻¹ to 29.2 mSv·h⁻¹ (primary wall) and 8.73 mSv·h⁻¹ to 37.2 mSv·h⁻¹ (secondary wall) for 10 and 18 MV, respectively. The ambient dose equivalent for neutrons is 5 μSv·h⁻¹ to 2 μSv·h⁻¹ at the door and 2 μSv·h⁻¹ to 0 μSv·h⁻¹ at the treatment console for 10 and 18 MV, respectively, which shows that a 2 m thick, 5 m long concrete maze provides sufficient neutron shielding at the door as well as at the treatment console for 10 and 18 MV photons.

Keywords: equivalent doses, neutron contamination, neutron detector, photon energy

Procedia PDF Downloads 433
427 Metal Binding Phage Clones in a Quest for Heavy Metal Recovery from Water

Authors: Tomasz Łęga, Marta Sosnowska, Mirosława Panasiuk, Lilit Hovhannisyan, Beata Gromadzka, Marcin Olszewski, Sabina Zoledowska, Dawid Nidzworski

Abstract:

Toxic heavy metal ion contamination of industrial wastewater has recently become a significant environmental concern in many regions of the world. Although the majority of heavy metals are naturally occurring elements found on the earth's surface, anthropogenic activities such as mining and smelting, industrial production, and the agricultural use of metals and metal-containing compounds are responsible for the majority of environmental contamination and human exposure. The permissible limits (ppm) for heavy metals in food, water and soil are frequently exceeded, at levels considered hazardous to humans, other organisms, and the environment as a whole. Human exposure to highly nickel-polluted environments causes a variety of pathologic effects. In 2008, nickel received the dubious title of "Allergen of the Year" (Gillette 2008). According to dermatologists, the frequency of nickel allergy is still growing, and it cannot be explained only by fashionable piercings and the nickel devices used in medicine (such as coronary stents and endoprostheses). Effective remediation methods for removing heavy metal ions from soil and water are becoming increasingly important. Methods such as chemical precipitation, micro- and nanofiltration, membrane separation, conventional coagulation, electrodialysis, ion exchange, reverse and forward osmosis, photocatalysis, and polymer or carbon nanocomposite absorbents have all been investigated so far. The importance of environmentally sustainable industrial production processes and the conservation of dwindling natural resources has highlighted the need for affordable, innovative biosorptive materials capable of recovering specific chemical elements from dilute aqueous solutions.
The use of combinatorial phage display techniques for selecting and recognizing material-binding peptides with a selective affinity for any target, particularly inorganic materials, has gained considerable interest in the development of advanced bio- or nano-materials. However, due to the limitations of phage display libraries and the biopanning process, the accuracy of molecular recognition for inorganic materials remains a challenge. This study presents the isolation, identification and characterisation of metal binding phage clones that preferentially recover nickel.

Keywords: heavy metal recovery, cleaning water, phage display, nickel

Procedia PDF Downloads 74
426 Cardiac Pacemaker in a Patient Undergoing Breast Radiotherapy-Multidisciplinary Approach

Authors: B. Petrović, M. Petrović, L. Rutonjski, I. Djan, V. Ivanović

Abstract:

Objective: Cardiac pacemakers are very sensitive to radiotherapy treatment in two ways: electromagnetic interference from the medical linear accelerator producing ionizing radiation, which affects the electronics within the pacemaker, and the dose absorbed by the device. On the other hand, patients with a cardiac pacemaker near the site of a tumor are rather rare, and a single clinic rarely has experience managing such patients. The widely accepted international guidelines for the management of radiation oncology patients recommend that these patients be closely monitored and examined before, during and after radiotherapy treatment by a cardiologist, with their device and condition followed up. The number of patients having both cancer and a pacemaker grows every year, as both cancer incidence and cardiac disease incidence are inevitably rising. Materials and methods: A female patient, age 69, was diagnosed with valvular cardiomyopathy; she received a prosthetic mitral valve in 1993 and an implanted pacemaker in 2005, and her cancer was diagnosed in 2012. She was cardiologically stable and came to the radiation therapy department with a diagnosis of right breast cancer, with the tumor in the upper lateral quadrant of the right breast. Since all of her lymph nodes were positive (28 in total), the supraclavicular region had to be irradiated, as well as the breast with the tumor bed. She had previously received chemotherapy, approved by the cardiologist. The patient was classified as high risk, as the device was within the irradiation field and she was highly dependent on her pacemaker. The radiation therapy plan was conducted as 3D conformal therapy. The delineated target was the breast with the supraclavicular region, where the pacemaker was actually placed; the pacemaker was added as an organ at risk to estimate the dose to the device and its components, as recommended.
The targets received 50 Gy in 25 fractions (20% of the pacemaker received 50 Gy, and 60% of the device received 40 Gy). The electrode to the heart received between 1 Gy and 50 Gy. Verification of the planned and delivered dose was performed. Results: The patient's status was evaluated according to the guidelines, with particular attention to all associated risks during treatment. The patient was irradiated with the prescribed dose and followed up for a whole year, with no symptoms of pacemaker failure during or after treatment in the follow-up period. The functionality of the device was assessed as unchanged according to its parameters (electrode impedance and battery energy). Conclusion: The patient was closely monitored according to published guidelines during irradiation and afterwards. The pacemaker, irradiated with the full dose, did not show any signs of failure, contrary to the recommendations' data but in line with other published data.

Keywords: cardiac pacemaker, breast cancer, radiotherapy treatment planning, complications of treatment

Procedia PDF Downloads 417
425 An Analytical Systematic Design Approach to Evaluate Ballistic Performance of Armour Grade AA7075 Aluminium Alloy Using Friction Stir Processing

Authors: Lahari Ramya P., Sudhakar I., Madhu V., Madhusudhan Reddy G., Srinivasa Rao E.

Abstract:

Selection of suitable armor materials for defense applications is crucial for increasing the mobility of systems while maintaining safety. Therefore, armor design studies require determining the material with the lowest possible areal density that successfully resists a predefined threat. A number of light metals and alloys have come to the forefront, especially as substitutes for armour-grade steels. AA5083 aluminium alloy, which meets the military standards imposed by the US Army, is the foremost nonferrous alloy considered as a possible replacement for steel to increase the mobility of armoured vehicles and enhance fuel economy. The growing need for AA5083 aluminium alloy motivates the development of supplementary aluminium alloys that maintain the military standards. AA2xxx, AA6xxx and AA7xxx aluminium alloys have been identified as potential materials to supplement AA5083. Among these series, the heat-treatable AA7xxx alloys possess high strength and can compete with armour-grade steels. Earlier investigations revealed that layering AA7xxx aluminium alloy can prevent spalling of the rear portion of the armour during ballistic impacts. Hence, the present investigation deals with the fabrication of a hard boron carbide layer on AA7075 aluminium alloy using friction stir processing, with the intention that the hard layer blunts the projectile on initial impact while the tough backing (the AA7xxx aluminium alloy) dissipates the residual kinetic energy. An analytical approach has been adopted to unfold the ballistic performance of the projectile. Penetration of the projectile into the armour has been resolved using strain energy model analysis, and the perforation shearing area, i.e. the interface between the projectile and the armour, is taken into account when evaluating penetration into the armour.
The fabricated surface composites (targets) were tested as per the military standard (JIS.0108.01) in a ballistic testing tunnel at the Defence Metallurgical Research Laboratory (DMRL), Hyderabad, under standardized testing conditions. The analytical results were well validated against the experimentally obtained ones.
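The authors' strain energy model is not reproduced in the abstract. As a generic illustration of the kind of energy-balance reasoning involved (not the paper's actual model, and with purely hypothetical parameter values), a shear-plugging perforation estimate can be sketched as:

```python
import math

def residual_velocity(mass, v0, shear_strength, diameter, thickness):
    """Generic energy-balance sketch of plate perforation by shear plugging:
    the plate absorbs roughly tau * (perforation shear area pi*d*t) * t of work;
    whatever kinetic energy remains sets the residual projectile velocity.
    All inputs are in SI units and are illustrative, not measured data."""
    e_absorbed = shear_strength * math.pi * diameter * thickness * thickness
    e_kinetic = 0.5 * mass * v0 ** 2
    if e_kinetic <= e_absorbed:
        return 0.0  # projectile stopped by the target
    return math.sqrt(2.0 * (e_kinetic - e_absorbed) / mass)
```

For a 10 g projectile at 800 m/s, the sketch predicts perforation of a 20 mm plate with a reduced residual velocity, and full arrest by a 30 mm plate, which is the qualitative behavior an analytical penetration model is validated against.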

Keywords: AA7075 aluminium alloy, friction stir processing, boron carbide, ballistic performance, target

Procedia PDF Downloads 307
424 A Design Research Methodology for Light and Stretchable Electrical Thermal Warm-Up Sportswear to Enhance the Performance of Athletes against Harsh Environment

Authors: Chenxiao Yang, Li Li

Abstract:

In the past decade, the sportswear market has expanded rapidly, with numerous sports brands competing fiercely to hold their market share and to set trends in professional competitive sports. Various advanced sports equipment is therefore being deeply explored to improve athletes' performance in fierce competitions. Although plenty of protective equipment, such as cuffs and running leggings, is on the market, there is still a gap in sportswear for the pre-race warm-up period, especially for competitions held in cold environments. Because of event logistics or unexpected weather, there are always time gaps between warm-up and race, during which athletes are exposed to chilly conditions for an unpredictably long period of time. As a consequence, the effects of warm-up are negated and competition performance degrades. Reviewing the current market, there is no effective sports equipment to help athletes against this harsh environment, and the rare existing products are too bulky or heavy, restricting movement. Ideal thermal-protective sportswear should be light, flexible, comfortable and aesthetic at the same time. Therefore, this design research adopted a textile circular knitting methodology to integrate soft silver-coated conductive yarns (SCCYs), elastic nylon yarn and polyester yarn to develop the proposed electrical thermal sportswear with the strengths mentioned above. Meanwhile, the relationships between heating performance, stretch load, and energy consumption were investigated. Further, a simulation model was established to ensure sufficient warmth and flexibility at lower energy cost, with optimized production parameters determined.
The proposed circular knitting technology and simulation model can be applied directly to guide prototype development catering to different target consumers' needs and to ensure prototype safety, while saving high R&D investment and time. Two prototypes, a kneecap and an elbow guard, were developed to facilitate the transfer of the research technology into industrial application and to sketch a blueprint for future development.
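The abstract relates heating performance, stretch load, and energy consumption without giving the model. As an illustration only, a minimal Joule-heating sketch under an assumed linear resistance-stretch law (the coefficient `k` and all values are hypothetical, not the paper's simulation model) might look like:

```python
def heating_power(voltage, base_resistance_ohm, stretch_ratio, k=0.8):
    """Illustrative Joule-heating model for a conductive-yarn circuit:
    yarn resistance is assumed to grow linearly with stretch (k is a
    hypothetical stretch-sensitivity constant), and power is P = V^2 / R,
    so stretching the fabric lowers heating power at a fixed voltage."""
    resistance = base_resistance_ohm * (1.0 + k * stretch_ratio)
    return voltage ** 2 / resistance
```

A model of this shape lets a designer trade supply voltage against stretch range to keep the garment warm enough at an acceptable energy cost; for example, 5 V across a 10 Ω unstretched yarn path dissipates 2.5 W, and any stretch reduces that figure.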

Keywords: cold environment, silver-coated conductive yarn, electrical thermal textile, stretchable

Procedia PDF Downloads 251
423 Anti-Arthritic Effect of a Herbal Diet Formula Comprising Fruits of Rosa Multiflora and Flowers of Lonicera Japonica

Authors: Brian Chi Yan Cheng, Hui Guo, Tao Su, Xiu‐qiong Fu, Ting Li, Zhi‐ling Yu

Abstract:

Rheumatoid arthritis (RA) affects around 1% of the global population, yet there is still no cure for RA. Toll-like receptor 4 (TLR4) signalling has been found to be involved in the pathogenesis of RA, making it a potential therapeutic target for RA treatment. A herbal formula (RL) consisting of the fruits of Rosa multiflora (Eijitsu rose) and the flowers of Lonicera japonica (Japanese honeysuckle) has been used to treat various inflammatory disorders for more than a thousand years. Both are rich sources of nutrients and bioactive phytochemicals, which can be used in producing different food products and supplements. In this study, we evaluated the anti-arthritic effect of RL on collagen-induced arthritis (CIA) in rats and investigated the involvement of TLR4 signalling in the mode of action of RL. Anti-arthritic efficacy was evaluated using CIA rats induced by bovine type II collagen. The treatment groups were treated with RL (82.5, 165, and 330 mg/kg bw per day, p.o.) or the positive control indomethacin (0.25 mg/kg bw per day, p.o.) for 35 days. Clinical signs (hind paw volume and arthritis severity scores), changes in serum inflammatory mediators, pro-/antioxidant status, and histological and radiographic changes of the joints were investigated. Spleens and peritoneal macrophages were used to determine the effects of RL on innate and adaptive immune responses in CIA rats. The involvement of TLR4 signalling pathways in the anti-arthritic effect of RL was examined in cartilage tissue of CIA rats, murine RAW264.7 macrophages and human THP-1 monocytic cells. The severity of arthritis in the CIA rats was significantly attenuated by RL. Antioxidant status, histological score and radiographic score were efficiently improved by RL. RL could also dose-dependently inhibit pro-inflammatory cytokines in the serum of CIA rats.
RL significantly inhibited the production of various pro-inflammatory mediators and the expression and/or activity of the components of TLR4 signalling pathways in animal tissue and cell lines. RL possesses an anti-arthritic effect on collagen-induced arthritis in rats. The therapeutic effect of RL may be related to its inhibition of pro-inflammatory cytokines in serum. Inhibition of the TAK1/NF-κB and TAK1/MAPK pathways participates in the anti-arthritic effects of RL. This provides a pharmacological justification for the dietary use of RL in the control of various arthritic diseases. Further investigation should be done to develop RL into anti-arthritic food products and/or supplements.

Keywords: japanese honeysuckle, rheumatoid arthritis, rosa multiflora, rosehip

Procedia PDF Downloads 416
422 Optimization Based Design of Decelerating Duct for Pumpjets

Authors: Mustafa Sengul, Enes Sahin, Sertac Arslan

Abstract:

Pumpjets are one of the marine propulsion systems frequently used in underwater vehicles today. They are used so frequently because they offer higher relative efficiency at high speeds and better cavitation and acoustic performance than their rivals. A pumpjet is composed of a rotor, a stator, and a duct, and there are two pumpjet configurations depending on the desired hydrodynamic characteristics: with an accelerating or a decelerating duct. Pumpjets with an accelerating duct are used on cargo ships, which operate at low speeds and high loading conditions. The working principle of this type is to maximize thrust by reducing the pressure of the fluid through the duct and ejecting the fluid with high momentum. For decelerating ducted pumpjets, on the other hand, the main consideration is to prevent cavitation by increasing the fluid pressure around the rotor region. By postponing cavitation, acoustic noise naturally falls, so decelerating ducted systems are used on noise-sensitive vehicles where acoustic performance is vital. Duct design is therefore a crucial step in pumpjet design. This study aims to optimize the duct geometry of a decelerating ducted pumpjet for a high-speed underwater vehicle using suitable optimization tools. The target of this optimization process is a duct design that maximizes fluid pressure around the rotor region, to prevent cavitation, and minimizes drag force. Two main optimization techniques could be utilized for this process: parameter-based optimization and gradient-based optimization. While parameter-based algorithms allow major changes in the geometry of interest, letting the user approach the desired geometry, gradient-based algorithms deal with minor local changes in the geometry.
In parameter-based optimization, the geometry is parameterized first. Then, by defining upper and lower limits for these parameters, a design space is created. Finally, with suitable optimization code and analysis, the optimum geometry is obtained from this design space. For this duct optimization study, a commercial parameter-based optimization code is used. To parameterize the geometry, the duct is represented with B-spline curves and control points. The control points have limits on their x and y coordinates, and these limits define the design space.
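The commercial code used in the study is not named beyond this description. Purely as an illustration of the parameterization step, a minimal Cox-de Boor B-spline profile with bounded control points and design-space sampling can be sketched in Python (the four-point clamped spline, the coordinate values, and the function names are all assumptions):

```python
import random

def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion for the B-spline basis function N_{i,p}(t)."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (t - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, t, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, t, knots)
    return left + right

def duct_profile(t, ctrl_y, ctrl_x=(0.0, 0.33, 0.66, 1.0), degree=3):
    """Duct meridian profile as a clamped cubic B-spline over four control
    points (clamped knot vector; valid for 0 <= t < 1)."""
    knots = [0.0] * (degree + 1) + [1.0] * (degree + 1)
    basis = [bspline_basis(i, degree, t, knots) for i in range(len(ctrl_y))]
    x = sum(b * cx for b, cx in zip(basis, ctrl_x))
    y = sum(b * cy for b, cy in zip(basis, ctrl_y))
    return x, y

def sample_design(y_bounds, rng):
    """One candidate duct from the design space: each control point's
    y coordinate is drawn between its lower and upper limit."""
    return [rng.uniform(lo, hi) for lo, hi in y_bounds]
```

An optimizer then evaluates each sampled control-point vector with a flow analysis and keeps the candidate that best raises rotor-region pressure while minimizing drag.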

Keywords: pumpjet, decelerating duct design, optimization, underwater vehicles, cavitation, drag minimization

Procedia PDF Downloads 180
421 Oral Microbiota as a Novel Predictive Biomarker of Response To Immune Checkpoint Inhibitors in Advanced Non-small Cell Lung Cancer Patients

Authors: Francesco Pantano, Marta Fogolari, Michele Iuliani, Sonia Simonetti, Silvia Cavaliere, Marco Russano, Fabrizio Citarella, Bruno Vincenzi, Silvia Angeletti, Giuseppe Tonini

Abstract:

Background: Although immune checkpoint inhibitors (ICIs) have changed the treatment paradigm of non-small cell lung cancer (NSCLC), these drugs fail to elicit durable responses in the majority of NSCLC patients. The gut microbiota, able to regulate immune responsiveness, is emerging as a promising, modifiable target to improve ICI response rates. Since the oral microbiome has been demonstrated to be the primary source of the bacterial microbiota in the lungs, we investigated its composition as a potential predictive biomarker to identify and select patients who could benefit from immunotherapy. Methods: Thirty-five patients with stage IV squamous and non-squamous cell NSCLC eligible for an anti-PD-1/PD-L1 agent as monotherapy were enrolled. Saliva samples were collected from patients prior to the start of treatment, bacterial DNA was extracted using the QIAamp® DNA Microbiome Kit (QIAGEN), and the 16S rRNA gene was sequenced on a MiSeq sequencing instrument (Illumina). Results: NSCLC patients were dichotomized as "Responders" (partial or complete response) and "Non-Responders" (progressive disease) after 12 weeks of treatment, based on RECIST criteria. A higher prevalence of the phylum Candidatus Saccharibacteria was found in the 10 responders compared to non-responders (abundance 5% vs 1%, respectively; p-value = 1.46 × 10⁻⁷; False Discovery Rate (FDR) = 1.02 × 10⁻⁶). Moreover, a higher prevalence of the Saccharibacteria Genera Incertae Sedis genus (belonging to the Candidatus Saccharibacteria phylum) was observed in responders (p-value = 6.01 × 10⁻⁷ and FDR = 2.46 × 10⁻⁵). Finally, the patients who benefited from immunotherapy showed a significant abundance of the TM7 Phylum Sp Oral Clone FR058 strain, a member of the Saccharibacteria Genera Incertae Sedis genus (p-value = 6.13 × 10⁻⁷ and FDR = 7.66 × 10⁻⁵). Conclusions: These preliminary results showed a significant association between the oral microbiota and ICI response in NSCLC patients.
In particular, the higher prevalence of the Candidatus Saccharibacteria phylum and the TM7 Phylum Sp Oral Clone FR058 strain in responders suggests a potential immunomodulatory role. The study is still ongoing, and updated data will be presented at the congress.
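The FDR values reported alongside the p-values above come from multiple-testing correction across taxa. The abstract does not state which correction was applied, so the following is only an illustrative sketch of the standard Benjamini-Hochberg adjustment:

```python
def benjamini_hochberg(pvalues):
    # Benjamini-Hochberg adjusted p-values (q-values): sort the raw
    # p-values, scale each by m/rank, then enforce monotonicity from
    # the largest rank downward (and implicitly cap at 1, since the
    # running minimum starts at 1).
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):
        idx = order[rank - 1]
        running_min = min(running_min, pvalues[idx] * m / rank)
        adjusted[idx] = running_min
    return adjusted
```

For example, `benjamini_hochberg([0.01, 0.02, 0.03, 0.04])` returns `[0.04, 0.04, 0.04, 0.04]`: each raw p-value is inflated according to its rank among the m tests, which is why the FDR values quoted above are larger than their raw p-values.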

Keywords: oral microbiota, immune checkpoint inhibitors, non-small cell lung cancer, predictive biomarker

Procedia PDF Downloads 70
420 Structure and Tribological Properties of a Moisture-Insensitive Si-Containing Diamond-Like Carbon Film

Authors: Mingjiang Dai, Qian Shi, Fang Hu, Songsheng Lin, Huijun Hou, Chunbei Wei

Abstract:

Diamond-like carbon (DLC) is considered a promising protective film owing to its high hardness and excellent tribological properties. However, DLC films are very sensitive to environmental conditions: their friction coefficient can change dramatically at high humidity, which limits their further application in aerospace, the watch industry, and micro/nano-electromechanical systems. Consequently, most studies focus on achieving a low friction coefficient for DLC films in highly humid environments. However, this is not sufficient for practical applications. An important point that has been overlooked is that DLC-coated components are usually used in diverse environments, which means their friction coefficient may change markedly under different humidity conditions. As a result, DLC-coated components may fail, sometimes with disastrous consequences. For example, miniature DLC-coated gears are used in the watch industry, and wearers may frequently change location, encountering different weather and humidity even within a single day. If the friction coefficient is not stable under both dry and highly humid conditions, the watch will be inaccurate. Thus, it is necessary to investigate the stable tribological behavior of DLC films in various environments. In this study, a-C:H:Si films were deposited by a multi-function magnetron sputtering system containing one ion source, a pair of SiC dual mid-frequency targets, and two direct-current Ti/C targets. Hydrogenated carbon layers were produced by sputtering the graphite target in argon and methane gases. Silicon was doped into the DLC coatings by sputtering the silicon carbide targets, and the doping content was adjusted via the mid-frequency sputtering current. The microstructure of the films was characterized by Raman spectrometry, X-ray photoelectron spectroscopy, and transmission electron microscopy, while their friction behavior under different humidity conditions was studied using a ball-on-disc tribometer.
The a-C:H films with Si content from 0 to 17 at.% were obtained, and the influence of Si content on the structure and tribological properties under relative humidities of 50% and 85% was investigated. Results show that the a-C:H:Si films have typical diamond-like characteristics, in which Si mainly exists in the form of Si, SiC, and SiO2. As expected, the friction coefficient of the a-C:H films was effectively changed by Si doping, decreasing from 0.302 to 0.176 at RH 50%. Further tests show that the friction coefficient of the a-C:H:Si films at RH 85% first increases and then decreases as a function of Si content. We found that the a-C:H:Si films with a Si content of 3.75 at.% show a stable friction coefficient of 0.13 in different humidity environments. It is suggested that the sp3/sp2 ratio of the a-C:H films with 3.75 at.% Si was higher than that of the others, so these films tend to form silica-gel-like sacrificial layers during friction tests. Therefore, the films deliver a stable, low friction coefficient at controlled RH values of 50% and 85%.

Keywords: diamond-like carbon, Si doping, moisture environment, stable low friction coefficient

Procedia PDF Downloads 344
419 Linkage Disequilibrium and Haplotype Blocks Study from Two High-Density Panels and a Combined Panel in Nelore Beef Cattle

Authors: Priscila A. Bernardes, Marcos E. Buzanskas, Luciana C. A. Regitano, Ricardo V. Ventura, Danisio P. Munari

Abstract:

Genotype imputation has been used to reduce genomic selection costs. In order to increase haplotype detection accuracy in methods that consider linkage disequilibrium, another approach could be used, such as combining genotype data from different panels. Therefore, this study aimed to evaluate the linkage disequilibrium and haplotype blocks in two high-density panels before and after imputation to a combined panel in Nelore beef cattle. A total of 814 animals were genotyped with the Illumina BovineHD BeadChip (IHD), of which 93 animals (23 bulls and 70 progenies) were also genotyped with the Affymetrix Axiom Genome-Wide BOS 1 Array Plate (AHD). After quality control, 809 IHD animals (509,107 SNPs) and 93 AHD animals (427,875 SNPs) remained for analyses. The combined genotype panel (CP) was constructed by merging both panels after quality control, resulting in 880,336 SNPs. Imputation analysis was conducted using the software FImpute v.2.2b. The reference (CP) and target (IHD) populations consisted of 23 bulls and 786 animals, respectively. The linkage disequilibrium and haplotype block analyses were carried out for IHD, AHD, and the imputed CP. Two linkage disequilibrium measures were considered: the correlation coefficient between alleles at two loci (r²) and |D'|. Both measures were calculated using the software PLINK. Haplotype blocks were estimated using the software Haploview. The r² measure presented a different decay when compared to |D'|, wherein AHD and IHD had almost the same decay. For r², even with possible overestimation due to the small AHD sample size (93 animals), IHD presented higher values than AHD at shorter distances, but with increasing distance both panels presented similar values. The r² measure is influenced by the minor allele frequencies of the pair of SNPs, which can explain the difference observed between the r² decay and the |D'| decay.
Since the CP sums the combinations of the Illumina and Affymetrix panels, it presented a decay equivalent to the mean of these combinations. The numbers of haplotype blocks detected for IHD, AHD, and CP were 84,529, 63,967, and 140,336, respectively. The IHD blocks had a mean length of 137.70 ± 219.05 kb, the AHD blocks 102.10 ± 155.47 kb, and the CP blocks 107.10 ± 169.14 kb. The majority of the haplotype blocks in these three panels were composed of fewer than 10 SNPs, with only 3,882 (IHD), 193 (AHD), and 8,462 (CP) haplotype blocks composed of 10 SNPs or more. There was an increase in the number of chromosomes covered by long haplotypes when CP was used, as well as an increase in haplotype coverage for the short chromosomes (23-29), which can contribute to studies that explore haplotype blocks. In general, using the CP could be an alternative to increase density and the number of haplotype blocks, increasing the probability of obtaining a marker close to a quantitative trait locus of interest.
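The two LD measures compared above have standard definitions in terms of haplotype and allele frequencies. A minimal sketch (the toy frequencies in the usage note are illustrative, not taken from the Nelore data):

```python
def ld_measures(p_ab, p_a, p_b):
    # Linkage disequilibrium between alleles A and B at two loci,
    # given the haplotype frequency p_ab = freq(AB) and the allele
    # frequencies p_a = freq(A), p_b = freq(B).
    d = p_ab - p_a * p_b                     # raw disequilibrium D
    r2 = d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))
    if d >= 0:                               # normalization for D'
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = abs(d) / d_max if d_max > 0 else 0.0
    return r2, d_prime
```

Complete LD with equal allele frequencies (e.g. p_ab = 0.5, p_a = p_b = 0.5) gives r² = 1 and |D'| = 1, while |D'| can equal 1 with r² far below 1 when the allele frequencies differ (e.g. p_ab = 0.1, p_a = 0.5, p_b = 0.1 gives |D'| = 1 but r² ≈ 0.11); this dependence of r² on minor allele frequency is one reason the two measures decay differently, as observed above.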

Keywords: Bos taurus indicus, decay, genotype imputation, single nucleotide polymorphism

Procedia PDF Downloads 257
418 Low Cost Webcam Camera and GNSS Integration for Updating Home Data Using AI Principles

Authors: Mohkammad Nur Cahyadi, Hepi Hapsari Handayani, Agus Budi Raharjo, Ronny Mardianto, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan

Abstract:

PDAM (the local water company) determines customer charges by considering the customer's building or house. Charge determination significantly affects PDAM income and customer costs because PDAM applies a subsidy policy for customers classified as small households. Periodic updates are needed so that pricing stays in line with the target, and a thorough customer survey in Surabaya is needed to update customer building data. However, surveys have so far been carried out by deploying officers to visit each PDAM customer one by one, a method that requires considerable effort and cost. For this reason, this research offers a technology called mobile mapping, a mapping method that is more efficient in terms of time and cost. The tool is also quite simple to use: the device is installed on a car so that it can record the surrounding buildings while the car is moving. Mobile mapping technology generally uses lidar sensors together with GNSS, but this technology is expensive. To overcome this problem, this research develops a low-cost mobile mapping technology using webcam camera sensors combined with GNSS and IMU sensors. The cameras used have a 3 MP specification with 720p resolution and a 78° diagonal field of view. The principle of this system is to integrate four webcam camera sensors with a GNSS receiver and an IMU to acquire photo data tagged with location data (latitude, longitude) and orientation data (roll, pitch, yaw). The device is also equipped with a tripod and a vacuum mount to attach it to the car's roof so that it does not fall off while driving. The output data from this technology is analyzed with artificial intelligence to remove near-duplicate images (cosine similarity) and then classify building types. Data reduction is used to eliminate similar images while retaining the image that shows the complete house, so that it can be processed for the subsequent classification of buildings.
The AI method used is transfer learning based on a pre-trained model, VGG-16. The similarity analysis showed that the data reduction reached 50%. Georeferencing is then done using the Google Maps API to obtain address information for the coordinates in the data. After that, a geographic join links the survey data with the customer data already held by PDAM Surya Sembada Surabaya.
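The cosine-similarity reduction step described above can be sketched as follows. This is a minimal illustration assuming each image has already been encoded as a feature vector (e.g. by the VGG-16 backbone); the similarity threshold is a hypothetical parameter, not one reported by the study.

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u))
            * math.sqrt(sum(b * b for b in v)))
    return dot / norm if norm else 0.0

def reduce_similar(features, threshold=0.95):
    # Keep a feature vector only if it is not too similar to any
    # vector already kept; near-duplicate frames of the same
    # building are thereby dropped.
    kept = []
    for f in features:
        if all(cosine_similarity(f, k) < threshold for k in kept):
            kept.append(f)
    return kept
```

With real VGG-16 embeddings, consecutive frames of the same house score near 1.0 and are discarded, which is the mechanism behind the roughly 50% reduction reported above.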

Keywords: mobile mapping, GNSS, IMU, similarity, classification

Procedia PDF Downloads 60
417 Technological Tool-Use as an Online Learner Strategy in a Synchronous Speaking Task

Authors: J. Knight, E. Barberà

Abstract:

Language learning strategies have been defined as thoughts and actions, consciously chosen and operationalized by language learners, to help them carry out a multiplicity of tasks from the very outset of learning to the most advanced levels of target language performance. While research in the field of Second Language Acquisition has focused on 'good' language learners and the effectiveness of strategy use and orchestration by effective learners in face-to-face classrooms, much less research has attended to learner strategies in online contexts, particularly strategies relating to the technological tool use that can be part of a task design. In addition, much research on learner strategies and strategy use has explored cognitive, attitudinal, and metacognitive behaviour, with less research focusing on the social aspect of strategies. This study focuses on how learners mediate with a technological tool designed to support synchronous spoken interaction and how this shapes their spoken interaction in the openings of their talk. A case study approach is used, incorporating notions from communities of practice theory to analyse and understand the learner strategies of dyads carrying out a role-play task. The study employs analysis of transcripts of spoken interaction in the openings of the talk, along with log files of tool use, and draws on the results of previous studies pertaining to the same tool as a form of triangulation. Findings show how learners gain pre-task planning time through control of the technological tool. The strategies involved in learners' choices to enter and exit the tool shape their spoken interaction qualitatively, with some cases demonstrating long silences while others appear to start the pedagogical task immediately. Who or what learners orientate to in the opening moments of the talk (an audience, i.e. the teacher; each other; and/or screen-based signifiers) also becomes a focus.
The study highlights how tool use as a social practice should be considered a learner strategy in online contexts, whereby different usages may be understood in the light of the more usual asynchronous social practices of the online community. The teacher's role in the community as the evaluator of the practices of that community is also problematised. The results are pertinent to the design of synchronous speaking tasks. The use of community of practice theory supports an understanding of strategy use that involves both metacognition and social context, revealing how tool-use strategies may need to be orally (socially) negotiated by learners and may differ from the usual practices of an online language community.

Keywords: learner strategy, tool use, community of practice, speaking task

Procedia PDF Downloads 325
416 Targeting Methionine Metabolism in Gastric Cancer: Promising to Improve Chemosensitivity with Non-Heterogeneity

Authors: Nigatu Tadesse, Li Juan, Liuhong Ming

Abstract:

Gastric cancer (GC) is the fifth most common and fourth deadliest cancer in the world, with limited treatment options at the late advanced stage, at which surgical therapy is not recommended and chemotherapy remains the mainstay of treatment. However, the occurrence of chemoresistance, as well as intra-tumoral and inter-tumoral heterogeneity of response to targeted therapy and immunotherapy, underlines a clear unmet treatment need in gastroenterology. Several molecular and cellular alterations have been ascribed to chemoresistance in GC, including cancer stem cells (CSCs) and tumor microenvironment (TME) remodeling. Cancer cells, including CSCs, bear a higher metabolic demand, and major changes in the TME involve alterations of the gut microbiota interacting with nutrient metabolism. Metabolic upregulation of the lipid, carbohydrate, amino acid, and fatty acid biosynthesis pathways has been identified as a common hallmark of GC. Metabolic addiction to methionine occurs in many cancer cells to promote the biosynthesis of S-adenosylmethionine (SAM), the universal methyl donor molecule for the high rate of transmethylation in GC, and to promote cell proliferation. Targeting methionine metabolism has been found to promote chemosensitivity with treatment non-heterogeneity. Methionine restriction (MR) promoted cell cycle arrest at the S/G2 phase and enhanced the downregulation of GC cell resistance to apoptosis (including ferroptosis), which suggests potential synergy with chemotherapies that act at the S-phase of the cell cycle or induce cell apoptosis. Accumulated evidence shows that both the biogenesis and the intracellular metabolism of exogenous methionine could be safe and effective targets for therapy, either alone or in combination with chemotherapies.
This review article provides an overview of the upregulation of the methionine biosynthesis pathway and of the molecular signaling through the PI3K/Akt/mTORC1-c-MYC axis that promotes metabolic reprogramming by activating the expression of the L-type amino acid transporter 1 (LAT1) and the overexpression of methionine adenosyltransferase 2A (MAT2A) for the intracellular metabolic conversion of exogenous methionine to SAM in GC. It also discusses the potential of targeting this pathway with novel therapeutic agents, such as methioninase (METase) and inhibitors of MAT2A, c-MYC, and methyltransferase-like 16 (METTL16), which are currently at clinical trial development stages, together with future perspectives.

Keywords: gastric cancer, methionine metabolism, pi3k/akt/mtorc1-c-myc axis, gut microbiota, MAT2A, c-MYC, METTL16, methioninase

Procedia PDF Downloads 20
415 Understanding Neuronal and Glial Cell Behaviour in Multi-Layer Nanofibre Systems to Support the Development of an in vitro Model of Spinal Cord Injury and Personalised Prostheses for Repair

Authors: H. Pegram, R. Stevens, L. De Girolamo

Abstract:

Aligned electrospun nanofibres act as effective neuronal and glial cell scaffolds that can be layered to contain multiple sheets harbouring different cell populations. This allows personalised biofunctional prostheses with both acellular and cellularised layers to be manufactured for the treatment of spinal cord injury. Additionally, the manufacturing route may be configured to produce an in-vitro 3D cell-based model of spinal cord injury to aid drug development and enhance prosthesis performance. The goal of this investigation was to optimise the multi-layer scaffold design parameters for prosthesis manufacture, to enable the development of multi-layer patient-specific implant therapies. The work has also focused on fabricating aligned nanofibre scaffolds that promote in-vitro neuronal and glial cell population growth, cell-to-cell interaction, and long-term survival following trauma, to mimic an in-vivo spinal cord lesion. The approach has established reproducible lesions and has identified markers of trauma and regeneration, marked by effective neuronal migration across the lesion with glial support. The investigation has advanced the development of an in-vitro model of traumatic spinal cord injury and has identified a route to manufacture prostheses that target the repair of spinal cord injury. Evidence collated to investigate the multi-layer concept suggests that the physical cues provided by nanofibres both provide a natural extracellular matrix (ECM)-like environment and control cell proliferation and migration. Specifically, aligned nanofibre layers act as a guidance system for migrating and elongating neurons. On a larger scale, the material type in multi-layer systems also influences inter-layer migration, as different cell types favour different materials. Results have shown that layering nanofibre membranes creates a multi-level scaffold system that can enhance or prohibit cell migration between layers.
It is hypothesised that modifying the nanofibre layer material permits control over neuronal/glial cell migration. Using this concept, the layering of neuronal and glial cells has become possible, both in the context of tissue engineering and in modelling in-vitro induced lesions.

Keywords: electrospinning, layering, lesion, modeling, nanofibre

Procedia PDF Downloads 115
414 An Adjoint-Based Method to Compute Derivatives with Respect to Bed Boundary Positions in Resistivity Measurements

Authors: Mostafa Shahriari, Theophile Chaumont-Frelet, David Pardo

Abstract:

Resistivity measurements are used to characterize the Earth’s subsurface. They are categorized into two different groups: (a) those acquired on the Earth’s surface, for instance, controlled source electromagnetic (CSEM) and Magnetotellurics (MT), and (b) those recorded with borehole logging instruments such as Logging-While-Drilling (LWD) devices. LWD instruments are mostly used for geo-steering purposes, i.e., to adjust dip and azimuthal angles of a well trajectory to drill along a particular geological target. Modern LWD tools measure all nine components of the magnetic field corresponding to three orthogonal transmitter and receiver orientations. In order to map the Earth’s subsurface and perform geo-steering, we invert measurements using a gradient-based method that utilizes the derivatives of the recorded measurements with respect to the inversion variables. For resistivity measurements, these inversion variables are usually the constant resistivity value of each layer and the bed boundary positions. It is well-known how to compute derivatives with respect to the constant resistivity value of each layer using semi-analytic or numerical methods. However, similar formulas for computing the derivatives with respect to bed boundary positions are unavailable. The main contribution of this work is to provide an adjoint-based formulation for computing derivatives with respect to the bed boundary positions. The key idea to obtain the aforementioned adjoint state formulations for the derivatives is to separate the tangential and normal components of the field and treat them differently. This formulation allows us to compute the derivatives faster and more accurately than with traditional finite differences approximations. In the presentation, we shall first derive a formula for computing the derivatives with respect to the bed boundary positions for the potential equation. Then, we shall extend our formulation to 3D Maxwell’s equations. 
Finally, by considering a 1D domain and reducing the dimensionality of the problem, which is a common practice in the inversion of resistivity measurements, we shall derive a formulation to compute the derivatives of the measurements with respect to the bed boundary positions using a 1.5D variational formulation. Then, we shall illustrate the accuracy and convergence properties of our formulations by comparing numerical results with the analytical derivatives for the potential equation. For the 1.5D Maxwell's system, we shall compare our numerical results based on the proposed adjoint-based formulation with those obtained with a traditional finite difference approach. The numerical results shall show that our proposed adjoint-based technique produces solutions of enhanced accuracy at negligible cost, as opposed to the finite difference approach, which requires the solution of one additional problem per derivative.
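The cost advantage claimed above can be made concrete with a generic adjoint-state sketch; the notation below is the standard discrete adjoint derivation, not the authors' exact bed-boundary formulas. Let the discretized forward problem be A(p)u = f with a measurement functional J(u) and parameters p (e.g., bed boundary positions). Then

```latex
% Forward problem and measurement functional
A(p)\,u = f, \qquad J = J(u).
% Adjoint problem: a single extra solve, independent of the number of parameters
A(p)^{T}\,\lambda = \left(\frac{\partial J}{\partial u}\right)^{T}.
% Derivative with respect to a parameter p_k (e.g., a bed boundary position)
\frac{\mathrm{d}J}{\mathrm{d}p_k}
  = \lambda^{T}\,\frac{\partial f}{\partial p_k}
    \;-\; \lambda^{T}\,\frac{\partial A(p)}{\partial p_k}\,u .
```

All parameter derivatives are thus obtained from one adjoint solve plus cheap sensitivity products, which is consistent with the negligible cost reported above, whereas finite differences require one extra forward solve per parameter.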

Keywords: inverse problem, bed boundary positions, electromagnetism, potential equation

Procedia PDF Downloads 160
413 Xylanase Impact beyond Performance: A Prebiotic Approach in Laying Hens

Authors: Veerle Van Hoeck, Ingrid Somers, Dany Morisset

Abstract:

Anti-nutritional factors such as non-starch polysaccharides (NSP) are present in the viscous cereals used to feed poultry. Therefore, exogenous carbohydrases are commonly added to monogastric feed to degrade these NSP. Our hypothesis is that xylanase not only improves laying hen performance and digestibility but also induces a significant shift in microbial composition within the intestinal tract and can thereby cause a prebiotic effect. In this context, a better understanding is needed of whether and how the chicken gut flora can be modulated by xylanase. To this end, in the present laying hen study, the effects of dietary supplementation of xylanase on performance, digestibility, and the cecal microbiome were evaluated. A total of 96 HiSex laying hens were used in this experiment (3 diets with 16 replicates of 2 hens). Xylanase was added to the diets at concentrations of 0, 45,000 (15 g/t XygestTM HT), and 90,000 U/kg (30 g/t Xygest HT). The diets were based on wheat (~55%), soybean, and sunflower meal. The lowest dosage, 45,000 U/kg, significantly increased average egg weight and improved feed efficiency compared to the control treatment (p < 0.05). Egg quality parameters were also significantly improved in response to the xylanase addition: for example, during the last 28 days of the trial, the 45,000 U/kg and 90,000 U/kg treatments exhibited increases in Haugh units and albumen heights (p < 0.05). Compared with the control, organic matter digestibility and N retention were markedly improved in the 45,000 U/kg treatment group, which implies better nutrient digestibility at this lowest recommended dosage (p < 0.05). Furthermore, gross energy and crude fat digestibility were significantly improved for birds in the 90,000 U/kg group compared to the control. Importantly, 16S rRNA gene analysis revealed that xylanase at the 45,000 U/kg dosage can exert a prebiotic effect.
This conclusion was drawn from studying sequence variation in the 16S rRNA gene in order to characterize the diverse microbial communities of the cecal content. A significant increase in beneficial bacteria (Lactobacillus spp. and Enterococcus casseliflavus) was documented when 45,000 U/kg xylanase was added to the diet of laying hens. In conclusion, dietary supplementation of xylanase, even at the lowest dose (45,000 U/kg), significantly improved laying hen performance and digestibility. Furthermore, it is generally accepted that a proper balance between the numbers of beneficial and pathogenic bacteria in the intestine is vital for the host. It seems that the xylanase enzyme is able to modulate the laying hen microbiome beneficially and thus exerts a prebiotic effect. This microbiome plasticity in response to xylanase provides an attractive target for stimulating intestinal health.

Keywords: laying hen, prebiotic, XygestTM HT, xylanase

Procedia PDF Downloads 106
412 Envisioning The Future of Language Learning: Virtual Reality, Mobile Learning and Computer-Assisted Language Learning

Authors: Jasmin Cowin, Amany Alkhayat

Abstract:

This paper concentrates on a comparative analysis of the advantages and limitations of digital learning resources (DLRs) in language education: Virtual Reality (VR), Mobile Learning (M-learning), and Computer-Assisted Language Learning (CALL), together with its subset, Mobile-Assisted Language Learning (MALL). In addition, best practices for language teaching and the application of established language teaching methodologies, such as Communicative Language Teaching (CLT), the audio-lingual method, and community language learning, are explored. Education has changed dramatically since the eruption of the pandemic. Traditional face-to-face education was disrupted on a global scale, and the rise of distance learning brought new digital tools to the forefront, especially web conferencing tools, digital storytelling apps, test authoring tools, and VR platforms. Language educators raced to vet, learn, and implement multiple technology resources suited to language acquisition. Yet questions remain on how to harness new technologies, digital tools, and their ubiquitous availability while using established methods and methodologies in language learning paired with best teaching practices. In M-learning, language learners employ portable computing devices such as smartphones or tablets. CALL is a language teaching approach that uses computers and other technologies to present, reinforce, and assess material to be learned, or to create environments where teachers and learners can meaningfully interact. In VR, a computer-generated simulation enables learner interaction with a 3D environment via a screen, smartphone, or head-mounted display. Research supports that VR for language learning is effective in terms of exploration, communication, engagement, and motivation: students are able to engage through role-play activities, interact with 3D objects, and take part in activities such as virtual field trips.
VR lends itself to group language exercises in the classroom, with target language practice in an immersive, virtual environment. Students, teachers, schools, language institutes, and institutions benefit from specialized support to help them acquire second language proficiency and content knowledge that builds on their cultural and linguistic assets. Through the purposeful application of different language methodologies and teaching approaches, language learners can not only make cultural and linguistic connections in DLRs but also practice grammar drills, play memory games, or flourish in authentic settings.

Keywords: language teaching methodologies, computer-assisted language learning, mobile learning, virtual reality

Procedia PDF Downloads 215
411 Effects of Global Validity of Predictive Cues upon L2 Discourse Comprehension: Evidence from Self-paced Reading

Authors: Binger Lu

Abstract:

It remains unclear whether second language (L2) speakers can use discourse context cues to predict upcoming information as native speakers do during online comprehension. Some researchers propose that L2 learners may have a reduced ability to generate predictions during discourse processing. At the same time, there is evidence that discourse-level cues are weighed more heavily in L2 processing than in L1. Previous studies showed that L1 prediction is sensitive to the global validity of predictive cues. The current study explores whether and to what extent L2 learners can dynamically and strategically adjust their prediction according to the global validity of predictive cues in L2 discourse comprehension, as native speakers do. In a self-paced reading experiment, Chinese native speakers (N=128), C-E bilinguals (N=128), and English native speakers (N=128) read high-predictability (e.g., Jimmy felt thirsty after running. He wanted to get some water from the refrigerator.) and low-predictability (e.g., Jimmy felt sick this morning. He wanted to get some water from the refrigerator.) discourses in two-sentence frames. The global validity of the predictive cues was manipulated by varying the ratio of predictable (e.g., Bill stood at the door. He opened it with the key.) to unpredictable fillers (e.g., Bill stood at the door. He opened it with the card.), such that across conditions the predictability of the final word of the fillers ranged from 100% to 0%. The dependent variable was the reading time on the critical region (the target word and the following word), analyzed with linear mixed-effects models in R.
C-E bilinguals showed reliable prediction across all validity conditions (β = -35.6 ms, SE = 7.74, t = -4.601, p < .001), and Chinese native speakers showed a significant effect (β = -93.5 ms, SE = 7.82, t = -11.956, p < .001) in two of the four validity conditions (namely, the High-validity and MedLow conditions, where fillers ended with predictable words in 100% and 25% of cases, respectively), whereas English native speakers did not predict at all (β = -2.78 ms, SE = 7.60, t = -.365, p = .715). There was neither a main effect (χ²(3) = .256, p = .968) nor an interaction (Predictability × Background × Validity, χ²(3) = 1.229, p = .746; Predictability × Validity, χ²(3) = 2.520, p = .472; Background × Validity, χ²(3) = 1.281, p = .734) of Validity with speaker group. The results suggest that prediction occurs in L2 discourse processing but to a much lesser extent in L1, with a significant effect in some conditions for L1 Chinese and a null effect for L1 English processing, consistent with the view that L2 speakers are more sensitive to discourse cues than L1 speakers. Additionally, the pattern of L1 and L2 predictive processing was not affected by the global validity of the predictive cues. C-E bilinguals' predictive processing could be partly transferred from their L1, as prior research showed that discourse information plays a more significant role in L1 Chinese processing.

Keywords: bilingualism, discourse processing, global validity, prediction, self-paced reading

Procedia PDF Downloads 114
410 Barriers and Opportunities in Apprenticeship Training: How to Complete a Vocational Upper Secondary Qualification with Intermediate Finnish Language Skills

Authors: Inkeri Jaaskelainen

Abstract:

The aim of this study is to shed light on what it is like to study in apprenticeship training using intermediate (or even lower-level) Finnish. The aim is to find out and describe these students' experiences and feelings while acquiring a profession in Finnish, as it is important to understand how adult learners with an immigrant background learn and how their needs could be better taken into account. Many students choose apprenticeships and start vocational training while their language skills in Finnish are still very weak. At work, students should be able to simultaneously learn Finnish and do vocational studies in a noisy, demanding, and stressful environment. Learning and understanding new things is very challenging under these circumstances, and sometimes students get exhausted and experience a lot of stress, which makes learning even more difficult. Students are different from each other, and so are their ways of learning. Both duties at work and school assignments require reasonably good general language skills, and, especially at work, language skills are also a safety issue. The empirical target of this study is a group of students with an immigrant background who studied in various fields with intensive L2 support in 2016–2018 and who have since completed a vocational upper secondary qualification. The interview material for this narrative study was collected from those who completed apprenticeship training in 2019–2020. The data collection methods used are a structured thematic interview, a questionnaire, and observational data. The interviewees have varied cultural and educational backgrounds: some have completed an academic degree in their country of origin, while others learned to read and write only in Finland. The analysis of the material utilizes thematic analysis, which is used to examine learning and related experiences. Learning a language at work is very different from traditional classroom teaching.
With language skills still evolving, at an intermediate level at best, rush and stress make it even more difficult to understand and increase the fear of failure. Constant noise, rapidly changing situations, and uncertainty undermine the learning and well-being of apprentices. According to preliminary results, apprenticeship training is well suited to the needs of an adult immigrant student. In apprenticeship training, students need a lot of support for learning and understanding a new communication and working culture. Stress can result in, e.g., fatigue, frustration, and difficulties in remembering and understanding. Apprenticeship training can be seen as a good path to working life. However, L2 support is a very important part of apprenticeship training, and it indeed helps students to believe that one day they will graduate and even get employed in their new country.

Keywords: apprenticeship training, vocational basic degree, Finnish learning, well-being

Procedia PDF Downloads 115
409 Strategic Public Procurement: A Lever for Social Entrepreneurship and Innovation

Authors: B. Orser, A. Riding, Y. Li

Abstract:

To inform government about how gender gaps in SME (small and medium-sized enterprise) contracting might be redressed, the research question was: What are the key obstacles to, and response strategies for, increasing the engagement of women business owners among SME suppliers to the government of Canada? Thirty-five interviews were conducted with senior policymakers, supplier diversity organization executives, and expert witnesses to the Canadian House of Commons Standing Committee on Government Operations and Estimates; the qualitative data were analysed using NVivo 11 software. High-order response categories included: (a) SME risk mitigation strategies, (b) SME procurement program design, and (c) performance measures. The primary obstacles cited were government red tape and long, complicated requests for proposals (RFPs). The majority of 'common' complaints occur when SMEs have questions about the federal procurement process. Witness responses included the use of outcome-based rather than prescriptive procurement practices, more agile procurement, simplified RFPs, and making payment within 30 days a procurement priority. Risk mitigation strategies included the provision of procurement officers to assess risks and opportunities for businesses and the development of more agile procurement procedures and processes. Recommendations to enhance program design included: improved definitional consistency of qualifiers and selection criteria; better co-ordination across agencies; clarification about how SME suppliers benefit from federal contracting; goal setting; specification of categories that are most suitable for women-owned businesses; and increasing primary contractors' awareness of the importance of subcontract relationships. Recommendations also included third-party certification of eligible firms and the need to enhance SMEs' financial literacy to reduce financial errors.
Finally, there remains a need for clear and consistent pre-program statistics to establish baseline performance measures (by sector and issuing department), targets based on the percentage of contracts granted, contract value, the percentage of target employees (women, Indigenous), and community benefits, including the hiring of local employees. The study advances strategies to enhance federal procurement programs to facilitate socio-economic policy objectives.

Keywords: procurement, small business, policy, women

Procedia PDF Downloads 96
408 MicroRNA-1246 Expression Associated with Resistance to Oncogenic BRAF Inhibitors in Mutant BRAF Melanoma Cells

Authors: Jae-Hyeon Kim, Michael Lee

Abstract:

Intrinsic and acquired resistance limits the therapeutic benefits of oncogenic BRAF inhibitors in melanoma. MicroRNAs (miRNAs) regulate the expression of target mRNAs by repressing their translation. Thus, we investigated miRNA expression patterns in melanoma cell lines to identify candidate biomarkers for acquired resistance to BRAF inhibition. Here, we used the Affymetrix miRNA V3.0 microarray profiling platform to compare miRNA expression levels in three cell lines: BRAF inhibitor-sensitive A375P BRAF V600E cells, their BRAF inhibitor-resistant counterparts (A375P/Mdr), and SK-MEL-2 BRAF-WT cells with intrinsic resistance to BRAF inhibition. miRNAs with at least a two-fold change in expression between BRAF inhibitor-sensitive and -resistant cell lines were identified as differentially expressed. Averaged intensity measurements identified 138 and 217 miRNAs that were differentially expressed by 2-fold or more between: 1) A375P and A375P/Mdr; 2) A375P and SK-MEL-2, respectively. Hierarchical clustering revealed differences in miRNA expression profiles between BRAF inhibitor-sensitive and -resistant cell lines for miRNAs involved in intrinsic and acquired resistance to BRAF inhibition. In particular, 43 miRNAs were identified whose expression was consistently altered in both BRAF inhibitor-resistant cell lines, regardless of intrinsic or acquired resistance: 25 miRNAs were consistently upregulated and 18 downregulated more than 2-fold. Although some discrepancies were detected when the miRNA microarray data were compared with qPCR-measured expression levels, qRT-PCR results for five miRNAs (miR-3617, miR-92a1, miR-1246, miR-1936-3p, and miR-17-3p) showed excellent agreement with the microarray experiments. To further investigate the cellular functions of these miRNAs, we examined their effects on cell proliferation. Synthetic oligonucleotide miRNA mimics were transfected into the three cell lines, and proliferation was quantified using a colorimetric assay.
Of the 5 miRNAs tested, only miR-1246 altered the proliferation of A375P/Mdr cells. Transfection of the miR-1246 mimic strongly conferred PLX-4720 resistance to A375P/Mdr cells, implying that miR-1246 upregulation confers acquired resistance to BRAF inhibition. We also found that PLX-4720 caused much greater G2/M arrest in A375P/Mdr cells transfected with the miR-1246 mimic than in scrambled RNA-transfected cells. Additionally, the miR-1246 mimic partially conferred resistance to autophagy induction by PLX-4720. These results indicate that autophagy plays an essential death-promoting role in PLX-4720-induced cell death. Taken together, these results suggest that miRNA expression profiling in melanoma cells can provide valuable information for a network of BRAF inhibitor resistance-associated miRNAs.
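The two-fold differential expression screen described above can be illustrated with a small sketch. The intensity values are entirely hypothetical (only miR-1246 and miR-17-3p are names from the study; "miR-X" and "miR-Y" are placeholders), and the set intersection mirrors how the consistently altered miRNAs were obtained across both resistant lines.

```python
import math

# Hypothetical mean probe intensities (arbitrary units) for a few miRNAs
# in the sensitive line (A375P) and the two resistant lines.
intensities = {
    #             A375P  A375P/Mdr  SK-MEL-2
    "miR-1246":  (120.0,  510.0,   470.0),
    "miR-17-3p": (300.0,  310.0,   920.0),
    "miR-X":     (200.0,   45.0,    60.0),
    "miR-Y":     (150.0,  160.0,   155.0),
}

def fold_change(sensitive: float, resistant: float) -> float:
    """Signed log2 fold change of the resistant line vs the sensitive line."""
    return math.log2(resistant / sensitive)

def differential(col: int, threshold: float = 1.0) -> set:
    """miRNAs with >= 2-fold change (|log2 FC| >= 1) in the given resistant line."""
    return {name for name, row in intensities.items()
            if abs(fold_change(row[0], row[col])) >= threshold}

acquired = differential(1)   # A375P vs A375P/Mdr (acquired resistance)
intrinsic = differential(2)  # A375P vs SK-MEL-2 (intrinsic resistance)
# Consistently altered in both resistant lines (analogous to the 43 miRNAs):
consistent = acquired & intrinsic
print(sorted(consistent))  # → ['miR-1246', 'miR-X']
```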

Keywords: microRNA, BRAF inhibitor, drug resistance, autophagy

Procedia PDF Downloads 302
407 Tracing Sources of Sediment in an Arid River, Southern Iran

Authors: Hesam Gholami

Abstract:

Elevated suspended sediment loads in riverine systems resulting from accelerated erosion due to human activities are a serious threat to the sustainable management of watersheds and ecosystem services therein worldwide. Therefore, mitigation of deleterious sediment effects as a distributed or non-point pollution source in the catchments requires reliable provenance information. Sediment tracing or sediment fingerprinting, as a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or unmixing models, is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources by different statistical techniques, and modification of mixing and unmixing models, have been introduced and modified by many researchers worldwide, and have been applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts exploring the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups comprising Monte Carlo simulation, Bayesian approaches and generalized likelihood uncertainty estimation (GLUE). Given the above background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework in the estimation of sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. 
The accuracy of GLUE predictions generated using four different sets of statistical tests for discriminating three sub-basin spatial sources was evaluated against 10 virtual sediment (VS) samples with known source contributions, using the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central, and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%), and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), our suggested modeling approach is an accurate technique for quantifying the sources of sediment in catchments. Overall, the estimated source proportions can help watershed engineers plan the targeting of conservation programs for soil and water resources.
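The RMSE and MAE used above to score the GLUE predictions against the known virtual-mixture proportions can be computed as in this short sketch; the proportions shown are made-up stand-ins for the study's 10 VS samples and three sub-basin sources.

```python
import math

# Hypothetical known source proportions for virtual sediment (VS) samples
# (west, central, east) and the corresponding contributions modelled by GLUE.
known =    [(0.20, 0.10, 0.70), (0.35, 0.15, 0.50), (0.10, 0.25, 0.65)]
modelled = [(0.24, 0.08, 0.68), (0.30, 0.18, 0.52), (0.14, 0.20, 0.66)]

def rmse(a, b):
    """Root mean square error over all sample/source pairs."""
    errs = [x - y for ka, kb in zip(a, b) for x, y in zip(ka, kb)]
    return math.sqrt(sum(e * e for e in errs) / len(errs))

def mae(a, b):
    """Mean absolute error over all sample/source pairs."""
    errs = [abs(x - y) for ka, kb in zip(a, b) for x, y in zip(ka, kb)]
    return sum(errs) / len(errs)

print(f"RMSE = {rmse(known, modelled):.3f}, MAE = {mae(known, modelled):.3f}")
```

Lower values of both metrics indicate that the modelled contributions track the known mixture proportions more closely.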

Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran

Procedia PDF Downloads 54
406 Pill-Box Dispenser as a Strategy for Therapeutic Management: A Qualitative Evaluation

Authors: Bruno R. Mendes, Francisco J. Caldeira, Rita S. Luís

Abstract:

Population ageing is directly correlated with an increase in medicine consumption. Beyond this, and the polymedicated profile of the elderly, there is a need for pharmacotherapeutic monitoring due to cognitive and physical impairment. In this sense, the tracking, organization, and administration of medicines become a daily challenge, and the pill-box dispenser system a solution. The pill-box dispenser (system) consists of a small compartmentalized container for unit-dose organization, i.e., a container able to correlate the patient's prescribed dose regimen with the time schedule of intake. In many European countries, this system is part of the pharmacist's role in clinical pharmacy. Despite this simple solution, therapy compliance is only possible if the patient adheres to the system, so it is important to establish a qualitative and quantitative analysis of patients' perceptions of the benefits and risks of the pill-box dispenser, as well as to identify the ideal system. The analysis was conducted through an observational study based on a standardized questionnaire structured with a 5-point Likert scale and previously validated on the population. The study was performed during a limited period of time on a randomized sample of 188 participants. The questionnaire consisted of 22 questions: 6 background measures and 16 specific measures. The standards for the final comparative analysis were obtained from the state of the art on the subject. The Likert-scale analysis yielded degrees of agreement and discordance between measures (sample vs. standard) of 56.25% and 43.75%, respectively. It was concluded that the pill-box dispenser has greater acceptance among a younger population, which was not the initial target of the system; however, this bodes well for high adherence in the future.
Additionally, it was noted that the cost associated with this service is not a limiting factor for its use. The pill-box dispenser system, as currently implemented, has an important weakness regarding the quality and effectiveness of the medicines, which is not understood by patients, revealing a significant lack of health literacy where medicines are concerned. The characteristics of an ideal system remain unchanged: the size, appearance, and availability of information in the pill-box continue to be indispensable elements for compliance with the system. The pill-box dispenser remains unsuitable with regard to container size and the type of treatment to which it applies. Despite that, it might become a future standard for clinical pharmacy, allowing a differentiation of the pharmacist's role, as well as a wider range of applications to other age groups and treatments.
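The sample-vs-standard comparison on the Likert data can be illustrated with a small sketch. The measure names, responses, and standard answers below are entirely hypothetical, and matching the sample median against the literature-derived standard is one plausible scoring rule assumed here, not necessarily the one used in the study.

```python
import statistics

# Hypothetical 1-5 Likert responses for four of the specific measures,
# alongside a literature-derived "standard" answer for each measure.
responses = {
    "size adequate":        ([4, 4, 5, 3, 4], 4),
    "info availability":    ([2, 3, 2, 2, 1], 4),
    "cost acceptable":      ([4, 5, 4, 4, 5], 4),
    "suits all treatments": ([2, 2, 1, 3, 2], 4),
}

agree = 0
for measure, (scores, standard) in responses.items():
    # A measure "agrees" when the sample median matches the standard answer.
    if statistics.median(scores) == standard:
        agree += 1

pct_agree = 100 * agree / len(responses)
print(f"agreement: {pct_agree:.2f}%  discordance: {100 - pct_agree:.2f}%")
```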

Keywords: clinical pharmacy, medicines, patient safety, pill-box dispenser

Procedia PDF Downloads 176
405 Risk Assessment of Lead Element in Red Peppers Collected from Marketplaces in Antalya, Southern Turkey

Authors: Serpil Kilic, Ihsan Burak Cam, Murat Kilic, Timur Tongur

Abstract:

Interest in lead (Pb) has increased considerably due to recent knowledge about the potential toxic effects of this element. Exposure to heavy metals above acceptable limits affects human health. Indeed, Pb accumulates through food chains up to toxic concentrations and can therefore pose a potential threat to human health. In the present study, a sensitive and reliable method for the determination of Pb in red pepper was developed. Samples (33 red pepper products of different brands) were purchased from different markets in Turkey. The selected method validation criteria (linearity, limit of detection, limit of quantification, recovery, and trueness) were demonstrated. Recovery values close to 100% showed adequate precision and accuracy for the analysis. According to the results of the red pepper analyses, Pb was determined in all of the tested samples at various concentrations. A Perkin-Elmer ELAN DRC-e model ICP-MS system was used for the detection of Pb. Organic red pepper was used as the matrix for all method validation studies. The certified reference material, FAPAS chili powder, was digested and analyzed together with the different sample batches. Three replicates from each sample were digested and analyzed. The exposure levels of the elements were discussed considering the scientific opinions of the European Food Safety Authority (EFSA), the European Union's (EU) risk assessment source for food safety. The Target Hazard Quotient (THQ) was described by the United States Environmental Protection Agency (USEPA) for the calculation of potential health risks associated with long-term exposure to chemical pollutants. The THQ value incorporates the intake of elements, exposure frequency and duration, body weight, and the oral reference dose (RfD).
If the THQ value is lower than one, the exposed population is assumed to be safe, and 1 < THQ < 5 means that the exposed population falls within a level-of-concern interval. In this study, the THQ of Pb was obtained as < 1. The results of the THQ calculations showed that the values were below one for all the tested samples, meaning the samples did not pose a health risk to the local population. This work was supported by The Scientific Research Projects Coordination Unit of Akdeniz University. Project Number: FBA-2017-2494.
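The THQ computation described above can be sketched as follows. The formula is the standard USEPA one (intake, exposure frequency and duration, body weight, and RfD, with a g-to-kg conversion factor), but all input values here (concentration, ingestion rate, RfD, body weight) are illustrative assumptions, not the study's measured data.

```python
def thq(c_mg_per_kg: float, fir_g_per_day: float, ef_days_per_year: float,
        ed_years: float, rfd_mg_per_kg_day: float, bw_kg: float) -> float:
    """Target Hazard Quotient (USEPA) for non-carcinogenic dietary exposure.
    The 1e-3 factor converts the food ingestion rate from g/day to kg/day."""
    at_days = ef_days_per_year * ed_years  # averaging time, non-carcinogens
    return (ef_days_per_year * ed_years * fir_g_per_day * c_mg_per_kg * 1e-3) / \
           (rfd_mg_per_kg_day * bw_kg * at_days)

# Illustrative (not measured) inputs: 0.2 mg/kg Pb in pepper, 1 g/day intake,
# 365 days/year over 70 years, an assumed RfD of 0.0035 mg/kg/day, 70 kg adult.
value = thq(c_mg_per_kg=0.2, fir_g_per_day=1.0, ef_days_per_year=365,
            ed_years=70, rfd_mg_per_kg_day=0.0035, bw_kg=70)
print(f"THQ = {value:.4f}")
```

With these inputs the quotient is well below one, illustrating the "assumed safe" interpretation used in the abstract.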

Keywords: lead analyses, red pepper, risk assessment, daily exposure

Procedia PDF Downloads 148
404 Scalable UI Test Automation for Large-scale Web Applications

Authors: Kuniaki Kudo, Raviraj Solanki, Kaushal Patel, Yash Virani

Abstract:

This research concerns optimizing UI test automation for large-scale web applications. The test target is the HHAexchange homecare management web application, which seamlessly connects providers, state Medicaid programs, managed care organizations (MCOs), and caregivers through one platform with large-scale functionality. This study focuses on user interface automation testing for the web application. The quality assurance team must execute many manual user interface test cases during the development process to confirm that there are no regression bugs. The team automated 346 test cases, and the UI automation test execution time was over 17 hours. The business requirement was to reduce the execution time in order to release high-quality products quickly, so the quality assurance automation team modernized the test automation framework to optimize the execution time. The base of the web UI automation test environment is Selenium, and the test code is written in Python. Writing test code in a compiled language leads to an inefficient workflow when introducing scalability into a traditional test automation environment, so a scripting language was adopted. The scalability implementation mainly relies on AWS serverless technology, specifically the Elastic Container Service. Scalability here means the ability to automatically provision the computers that run the test automation and to increase or decrease their number. This scalable mechanism allows test cases to run in parallel, dramatically decreasing test execution time. Introducing scalable test automation does more than reduce execution time: because test cases can be executed at the same time, challenging bugs such as race conditions may also be detected.
If API and unit tests are implemented, test strategies can be adopted more efficiently alongside this scalability testing. However, for web applications, as a practical matter, API and unit testing cannot cover 100% of functional testing, since they do not reach front-end code. This study applied a scalable UI automation testing strategy to the large-scale homecare management system and confirmed both the optimization of test case execution time and the detection of a challenging bug. The study first describes the detailed architecture of the scalable test automation environment, then reports the actual reduction in execution time and an example of a challenging issue that was detected.
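The scale-out idea above (run the suite across many workers so wall-clock time drops to roughly the longest shard) can be sketched locally with a thread pool standing in for the ECS tasks; the test-case IDs and sleep-based "tests" are placeholders, not the HHAexchange suite or its real Selenium drivers.

```python
import concurrent.futures
import time

def run_ui_test(case_id: int) -> tuple:
    """Stand-in for one Selenium UI test case; a real run would drive a
    browser inside its own container."""
    time.sleep(0.05)  # simulated per-test duration
    return case_id, "passed"

test_cases = list(range(1, 21))

# Serial baseline: one worker, tests run back to back.
start = time.perf_counter()
serial = [run_ui_test(c) for c in test_cases]
serial_time = time.perf_counter() - start

# "Scaled-out" run: pool workers play the role of ECS tasks.
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
    parallel = list(pool.map(run_ui_test, test_cases))
parallel_time = time.perf_counter() - start

print(f"serial: {serial_time:.2f}s, parallel: {parallel_time:.2f}s")
```

The same fan-out pattern is what makes concurrently executed cases more likely to expose timing-dependent bugs such as race conditions.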

Keywords: aws, elastic container service, scalability, serverless, ui automation test

Procedia PDF Downloads 72
403 A Preliminary Study on the Effects of Lung Impact on Ballistic Thoracic Trauma

Authors: Amy Pullen, Samantha Rodrigues, David Kieser, Brian Shaw

Abstract:

The aim of the study was to determine whether a projectile interacting with the lungs increases the severity of injury in comparison with a projectile interacting with the ribs or intercostal muscle. This comparative study employed a 10% gelatine-based model with either porcine ribs or balloons embedded to represent a lung. Four sample groups containing five samples each were evaluated: control (plain gel), intercostal impact, rib impact, and lung impact. Two ammunition natures were evaluated at a range of 10 m: 5.56x45mm and 7.62x51mm. Aspects of projectile behavior were quantified, including exiting projectile weight, location of yawing, projectile fragmentation and distribution, location and area of the temporary cavity, permanent cavity formation, and overall energy deposition. For the 5.56mm ammunition, the lung group showed a higher percentage of the projectile weight exiting the block than the intercostal and rib groups, but a percentage similar to the control. For the 7.62mm ammunition, the lung showed a higher percentage of the projectile weight exiting the block than the control, intercostal, and ribs. The total weight of projectile fragments as a function of penetration depth revealed large fluctuations and significant intra-group variation for both ammunition natures. Despite the lack of a clear trend, both plots show that the lung leads to a greater weight of projectile fragments exiting the model. The lung was shown to have a later center of the temporary cavity than the control, intercostal, and ribs for both ammunition types. It was also shown to have a temporary cavity volume similar to the control, intercostal, and ribs for the 5.56mm ammunition, and a temporary cavity similar to the intercostal for the 7.62mm ammunition. The lung was shown to leave a projectile tract similar to that of the control, intercostal, and ribs for both ammunition types.
It was also shown to have larger shear planes than the control and the intercostal, but similar to the ribs, for the 5.56mm ammunition, whereas it had smaller shear planes than the control but shear planes similar to the intercostal and ribs for the 7.62mm ammunition. The lung was shown to have less energy deposited than the control, intercostal, and ribs for both ammunition types. This comparative study provides insights into the influence of the lungs on thoracic gunshot trauma. It indicates that the lung limits projectile deformation, causes a later onset of yawing, and consequently limits the energy deposited along the wound tract, creating a deeper and smaller cavity. This suggests that lung impact creates an altered pattern of local energy deposition within the target, which will affect the severity of trauma.

Keywords: ballistics, lung, trauma, wounding

Procedia PDF Downloads 148