Search results for: post-translational modifications
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 486


126 An Exploratory Case Study of the Transference of Skills and Dispositions Used by a Newly Qualified Teacher

Authors: Lynn Machin

Abstract:

Using the lens of a theoretical framework relating to learning to learn, the intention of the case study was to explore how transferable the teaching and learning skills of a newly qualified teacher (post-compulsory education) were when used in an overseas, unfamiliar and challenging post-compulsory educational environment. In particular, the research sought to explore how this newly qualified teacher made use of the skills developed during her teacher training and to ascertain whether, and which, other skills were necessary in order for her to have a positive influence on her learners and to thrive within a different country and learning milieu. The case study looks at the experience of a trainee teacher who recently qualified in the UK to teach in post-compulsory education (i.e., post-16 education). Rather than gaining employment in a UK-based academy or college of further education, this newly qualified teacher secured her first employment as a teacher in a province in China. Moreover, she had limited travel experience and had never travelled to Asia. She was one of the quieter and more reserved members of the one-year teacher training course and seemed the least likely of the group to decide to work abroad. How transferable the pedagogical skills gained during her training would be when used in a culturally different and therefore (to her) challenging environment was a key focus of the study. Another key focus was to explore the dispositions used by the newly qualified teacher in order to teach and to thrive in an overseas educational environment. The methodological approach used for this study was interpretative and qualitative. The associated methods were: Observation: observing the wider and operational practice of the newly qualified teacher over a five-day period, and her need, ability and willingness to be reflective, resilient, reciprocal and resourceful.
Interview: a semi-structured interview with the newly qualified teacher following the observation of her practice. Findings from this case study illuminate the modifications made by the newly qualified teacher to her bank of teaching and learning strategies, as well as the essential dispositions she drew on in order to know how to learn and, crucially, to be ready and willing to do so. Such dispositions include being resilient, resourceful, reciprocal and reflective; these were necessary in order to adapt to the emerging challenges the teacher encountered during her first months of employment in China. It is concluded that developing the skills to teach is essential for good teaching and learning practices. Having dispositions that enable teachers to work in ever-changing conditions and surroundings is, this paper argues, essential for the transferability and longevity of use of those skills.

Keywords: learning, post-compulsory, resilience, transferable

Procedia PDF Downloads 261
125 The Implementation of Inclusive Education in Collaboration between Teachers of Special Education Classes and Regular Classes in a Preschool

Authors: Chiou-Shiue Ko

Abstract:

As is explicitly stipulated in Article 7 of the Enforcement Rules of the Special Education Act as amended in 1998, "in principle, children with disabilities should be integrated with normal children for preschool education". Since then, all cities and counties have been committed to promoting preschool inclusive education. The Education Department, New Taipei City Government, has been actively recruiting advisory groups of professors to assist in the implementation of inclusive education in preschools since 2001. Since 2011, the author of this study has been guiding Preschool Rainbow to implement inclusive education. Through field observations, meetings, and teaching demonstration seminars, this study explored the process of how inclusive education has been successfully implemented in collaboration with teachers of special education classes and regular classes in Preschool Rainbow. The implementation phases for inclusive education in a single academic year include the following: 1) Preparatory stage. Prior to implementation, teachers in special education and regular classes discuss ways of conducting inclusive education and organize reading clubs to read books related to curriculum modifications that integrate the eight education strategies, early treatment and education, and early childhood education programs to enhance their capacity to implement and compose teaching plans for inclusive education. In addition to the general objectives of inclusive education, the objective of inclusive education for special children is also embedded into the Individualized Education Program (IEP). 2) Implementation stage. Initially, a promotional program for special education is implemented for the children to allow all the children in the preschool to understand their own special qualities and those of special children. 
After the implementation of three weeks of reverse inclusion, the children in the special education classes are put into groups and enter the regular classes twice a week, with adjustments made to their inclusion in the learning areas and the curriculum. In 2013, further cooperation was carried out with adjacent hospitals to perform developmental screening activities for the early detection of children with developmental delays. 3) Review and reflection stage. After the implementation of inclusive education, all teachers in the preschool are divided into two groups to record their teaching plans and the lessons learned during implementation. The effectiveness of implementing the objectives of inclusive education is also reviewed. With the collaboration of all its teachers, in 2015 Preschool Rainbow won New Taipei City's "Preschool Light" award as an exceptional model of inclusive education. Its model of implementing inclusive education can serve as a reference for other preschools.

Keywords: collaboration, inclusive education, preschool, teachers, special education classes, regular classes

Procedia PDF Downloads 397
124 Molecular Farming: Plants Producing Vaccine and Diagnostic Reagent

Authors: Katerina H. Takova, Ivan N. Minkov, Gergana G. Zahmanova

Abstract:

Molecular farming is the production of recombinant proteins in plants with the aim of using the protein as a purified product, as a crude extract, or directly in planta. Plants are gaining attention as expression systems compared with other platforms owing to the cost-effective production of pharmaceutically important proteins, appropriate post-translational modifications, the assembly of complex proteins, and the absence of human pathogens, to name a few advantages. In addition, transient expression in plant leaves enables the production of recombinant proteins within a few weeks. Hepatitis E virus (HEV) is a causative agent of acute hepatitis. HEV causes epidemics in developing countries and is primarily transmitted through the fecal-oral route. Presently, all efforts to develop a hepatitis E vaccine are focused on the Open Reading Frame 2 (ORF2) capsid protein, as it contains epitopes that can induce neutralizing antibodies. For our purpose, we used the CPMV-based vector pEAQ-HT for transient expression of HEV ORF2 in Nicotiana benthamiana. Molecular analyses (Western blot and ELISA) showed that the HEV ORF2 capsid protein was expressed in plant tissue at a high yield of up to 1 g/kg of fresh leaf tissue. Electron microscopy showed that the capsid protein spontaneously assembled into low-abundance virus-like particles (VLPs), which are highly immunogenic structures suitable for vaccine development. The expressed protein was recognized by both human and swine HEV-positive sera and can be used as a diagnostic reagent for the detection of HEV infection. Production of the HEV capsid protein in plants is a promising technology for further HEV vaccine investigations. Here, we report rapid, high-yield transient expression of a recombinant protein in plants suitable for vaccine production as well as for use as a diagnostic reagent.
Acknowledgments: The authors' research on HEV is supported by grants from the project PlantaSYST under the H2020 Widening Program, as well as by the UK Biotechnological and Biological Sciences Research Council (BBSRC) Institute Strategic Programme Grant 'Understanding and Exploiting Plant and Microbial Secondary Metabolism' (BB/J004596/1). The authors thank Prof. George Lomonossoff (JIC, Norwich, UK) for his contribution.

Keywords: hepatitis E virus, plant molecular farming, transient expression, vaccines

Procedia PDF Downloads 127
123 Numerical Analysis of Charge Exchange in an Opposed-Piston Engine

Authors: Zbigniew Czyż, Adam Majczak, Lukasz Grabowski

Abstract:

The paper presents a description of the geometric models, computational algorithms, and results of numerical analyses of charge exchange in a two-stroke opposed-piston engine. The research engine was a newly designed Diesel internal combustion engine. The unit is characterized by three cylinders in which three pairs of opposed pistons operate. The engine will generate a power output of 100 kW at a crankshaft rotation speed of 3800-4000 rpm. The numerical investigations were carried out using the ANSYS FLUENT solver. Numerical research, in contrast to experimental research, allows us to validate project assumptions and avoid costly prototype preparation for experimental tests. This makes it possible to optimize the geometrical model in countless variants with no production costs. The geometrical model includes an intake manifold, a cylinder, and an outlet manifold. The study was conducted for a series of modifications of the manifolds and the intake and exhaust ports in order to optimize the charge exchange process in the engine. The calculations specified a swirl coefficient obtained under stationary conditions for a full opening of the intake and exhaust ports, as well as a CA value of 280° for all cylinders. In addition, mass flow rates were identified separately in all of the intake and exhaust ports to achieve the best possible uniformity of flow in the individual cylinders. For the models under consideration, velocity, pressure and streamline contours were generated in the important cross sections. The developed models are designed primarily to minimize the flow drag through the intake and exhaust ports while the mass flow rate increases. To calculate the swirl ratio [-], the tangential velocity v [m/s] and then the angular velocity ω [rad/s] of the charge were first calculated as the mean over each mesh element. The paper contains comparative analyses of all the intake and exhaust manifolds of the designed engine.
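The element-wise swirl computation described above can be sketched as follows. This is a minimal illustration, not the authors' actual post-processing: the per-cell arrays (radial distance, tangential velocity, cell mass) and the mass-weighted averaging are assumptions about what a CFD solution exposes.

```python
import numpy as np

def swirl_ratio(r, v_t, m, engine_speed_rpm):
    """Swirl ratio: mass-averaged angular velocity of the in-cylinder charge
    divided by the crankshaft angular velocity. Inputs are per-cell arrays
    from a CFD solution: radial distance r [m] from the cylinder axis,
    tangential velocity v_t [m/s], and cell mass m [kg]."""
    omega_cells = v_t / r                                # angular velocity per cell [rad/s]
    omega_charge = np.average(omega_cells, weights=m)    # mass-weighted mean [rad/s]
    omega_crank = engine_speed_rpm * 2.0 * np.pi / 60.0  # crankshaft speed [rad/s]
    return omega_charge / omega_crank
```

For a charge in solid-body rotation at twice the crankshaft speed, the function returns 2.0 regardless of the mass weighting.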
Acknowledgement: This work has been realized in the cooperation with The Construction Office of WSK "PZL-KALISZ" S.A." and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.

Keywords: computational fluid dynamics, engine swirl, fluid mechanics, mass flow rates, numerical analysis, opposed-piston engine

Procedia PDF Downloads 180
122 Safety Evaluation of Post-Consumer Recycled PET Materials in Chilean Industry by Overall Migration Tests

Authors: Evelyn Ilabaca, Ximena Valenzuela, Alejandra Torres, María José Galotto, Abel Guarda

Abstract:

One of the biggest problems in the food packaging industry, especially with plastic materials, is the fact that these materials are usually obtained from non-renewable resources and also remain as waste after use, causing environmental issues. This is an international concern, and particular attention is given to reduction, reuse and recycling strategies for decreasing the waste from the plastic packaging industry. PET accounts for a large share of plastic packaging waste, and the recycling process for post-consumer polyethylene terephthalate (PCR-PET) has been studied. The US Food and Drug Administration (FDA), the European Food Safety Authority (EFSA) and the Southern Common Market (MERCOSUR) have generated different legislative documents to control the use of PCR-PET in the production of plastic packaging intended for direct food contact, in order to ensure the capacity of the recycling process to remove possible contaminants that could migrate into food. Consequently, it is necessary to demonstrate by challenge tests that the recycling process is able to remove specific contaminants, yielding a recycled plastic that is safe for human health. These documents establish that the concentration limit for surrogate contaminants in PET is 220 ppb (μg/kg) and that the specific migration limit is 10 ppb (μg/kg) for each contaminant, in addition to ensuring that the sensorial characteristics of the food are not affected. Moreover, under Commission Regulation (EU) No 10/2011 on plastic materials and articles intended to come into contact with food, the overall migration limit is established as 10 mg of substances per 1 dm² of surface area of the plastic material. Thus, the aim of this work is to determine the safety of PCR-PET-containing food packaging materials in Chile by measuring their overall migration and comparing it with the limits established at the international level.
This information will serve as a basis for a regulation to control the use of recycled plastic materials in the manufacture of plastic packaging intended to be in direct contact with food. The methodology follows EN 1186:2002 with some modifications. The food simulants used were ethanol 10% (v/v) and acetic acid 3% (v/v) as aqueous food simulants, and ethanol 95% (v/v) and isooctane as substitutes for fatty food simulants. In this study, preliminary results showed that Chilean food packaging plastics with different PCR-PET percentages comply with the European legislation for foods of aqueous character.
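The pass/fail arithmetic behind the overall migration test is straightforward: the non-volatile residue recovered from the simulant is divided by the contact area and compared with the 10 mg/dm² limit. A minimal sketch (the function and parameter names are illustrative, not from the standard):

```python
OVERALL_MIGRATION_LIMIT = 10.0  # mg/dm², Commission Regulation (EU) No 10/2011

def overall_migration(residue_mg, contact_area_dm2):
    """Overall migration [mg/dm²]: total non-volatile residue recovered from
    the food simulant after contact, divided by the contact surface area."""
    return residue_mg / contact_area_dm2

def complies(residue_mg, contact_area_dm2):
    """True if the sample meets the EU overall migration limit."""
    return overall_migration(residue_mg, contact_area_dm2) <= OVERALL_MIGRATION_LIMIT
```

So a 0.5 dm² specimen leaving 4.9 mg of residue (9.8 mg/dm²) passes, while 12 mg over 1 dm² fails.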

Keywords: contaminants, polyethylene terephthalate, plastic food packaging, recycling

Procedia PDF Downloads 248
121 Morphological and Chemical Characterization of the Surface of Orthopedic Implant Materials

Authors: Bertalan Jillek, Péter Szabó, Judit Kopniczky, István Szabó, Balázs Patczai, Kinga Turzó

Abstract:

Hip and knee prostheses are among the most frequently used medical implants and can significantly improve patients' quality of life. The long-term success and biointegration of these prostheses depend on several factors, such as the bulk and surface characteristics, construction and biocompatibility of the material. The applied surgical technique and the general health condition and quality of life of the patient are also determinant factors. Medical devices used in orthopedic surgery have different surfaces depending on their function inside the human body. The surface roughness of these implants determines their interaction with the surrounding tissues. Numerous modifications have been applied in recent decades to improve specific properties of implants. Our goal was to compare the surface characteristics of typical implant materials used in orthopedic surgery and traumatology. The morphological and chemical structures of the following discs (Sanatmetal Ltd, Hungary) were examined: Vortex plate anodized titanium; cemented THR (total hip replacement) stem high-nitrogen REX steel (SS); uncemented THR stem and cup titanium (Ti) alloy with titanium plasma spray coating (TPS); cemented cup and uncemented acetabular liner HXL and UHMWPE; and TKR (total knee replacement) femoral component CoCrMo alloy. Visualization and elemental analysis were performed by scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS). Surface roughness was determined by atomic force microscopy (AFM) and profilometry. SEM and AFM revealed the morphological and roughness features of the examined materials. TPS Ti presented the highest Ra value (25 ± 2 μm), followed by the CoCrMo alloy (535 ± 19 nm), Ti (227 ± 15 nm) and stainless steel (170 ± 11 nm). The roughness of the HXL and UHMWPE surfaces was in the same range: 147 ± 13 nm and 144 ± 15 nm, respectively.
EDS confirmed the typical elements of the investigated prosthesis materials: Vortex plate Ti (Ti, O, P); TPS Ti (Ti, O, Al); SS (Fe, Cr, Ni, C); CoCrMo (Co, Cr, Mo); HXL (C, Al, Ni); and UHMWPE (C, Al). The results indicate that the surfaces of these prosthesis materials have significantly different features and that the applied investigation methods are suitable for their characterization. Contact angle measurements and in vitro cell culture testing are further planned to assess their surface energy characteristics and biocompatibility.
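The Ra values quoted above are arithmetic mean roughnesses. For reference, a minimal sketch of how Ra is obtained from a measured height profile (the input array is illustrative; real AFM/profilometry software applies filtering first):

```python
import numpy as np

def ra(profile):
    """Arithmetic mean roughness Ra: mean absolute deviation of the surface
    height profile from its mean line (same units as the input, e.g. nm)."""
    z = np.asarray(profile, dtype=float)
    return float(np.mean(np.abs(z - z.mean())))
```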

Keywords: morphology, PE, roughness, titanium

Procedia PDF Downloads 98
120 Leveraging xAPI in a Corporate e-Learning Environment to Facilitate the Tracking, Modelling, and Predictive Analysis of Learner Behaviour

Authors: Libor Zachoval, Daire O Broin, Oisin Cawley

Abstract:

E-learning platforms such as Blackboard have two major shortcomings: limited data capture, a result of the limitations of SCORM (Shareable Content Object Reference Model), and a lack of incorporation of Artificial Intelligence (AI) and machine learning algorithms that could lead to better course adaptations. With the recent development of the Experience Application Programming Interface (xAPI), many additional types of data can be captured, which opens a window of possibilities from which online education can benefit. In a corporate setting, where companies invest billions in the learning and development of their employees, some learner behaviours can be troublesome because they hinder the knowledge development of a learner. Behaviours that hinder knowledge development also raise ambiguity about a learner's knowledge mastery, specifically those related to gaming the system. Furthermore, a company receives little benefit from its investment if employees pass courses without possessing the required knowledge, and potential compliance risks may arise. Using xAPI and rules derived from a state-of-the-art review, we identified three learner behaviours, primarily related to guessing, in a corporate compliance course. The identified behaviours are: trying each option for a question, specifically for multiple-choice questions; selecting a single option for all the questions on the test; and continuously repeating tests upon failing, as opposed to going over the learning material. These behaviours were detected in learners who repeated the test at least 4 times before passing the course. These findings suggest that gauging the mastery of a learner from multiple-choice test scores alone is a naive approach.
Thus, the next steps will consider the incorporation of additional data points, knowledge estimation models to model the knowledge mastery of a learner more accurately, and analysis of the data for correlations between knowledge development and the identified learner behaviours. Additional work could explore how learner behaviours could be used to make changes to a course, for example to its content (certain sections of learning material may prove unhelpful for many learners in mastering the targeted learning outcomes) or to its design (such as the type and duration of feedback).
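Two of the guessing behaviours identified above can be expressed as simple rules over event streams derived from xAPI statements. The event labels, threshold, and function names below are illustrative assumptions, not the authors' implementation:

```python
def same_option_for_all(answers):
    """Flags a test attempt in which one option letter was chosen
    for every multiple-choice question."""
    return len(answers) > 1 and len(set(answers)) == 1

def repeats_without_review(events, threshold=4):
    """Flags a learner who fails `threshold` or more times in a row,
    never revisiting the learning material, before finally passing.
    `events` is a chronological list of 'attempt_failed',
    'viewed_material', or 'attempt_passed' labels."""
    streak = 0
    for event in events:
        if event == 'attempt_failed':
            streak += 1
        elif event == 'viewed_material':
            streak = 0  # the learner went back to the material
        elif event == 'attempt_passed':
            return streak >= threshold
    return False
```

The reset on 'viewed_material' is the key design choice: repeated failures only count as gaming when no study activity occurs between attempts.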

Keywords: artificial intelligence, corporate e-learning environment, knowledge maintenance, xAPI

Procedia PDF Downloads 90
119 Teicoplanin Derivatives with Antiviral Activity: Synthesis and Biological Evaluation

Authors: Zsolt Szucs, Viktor Kelemen, Son Le Thai, Magdolna Csavas, Erzsebet Roth, Gyula Batta, Annelies Stevaert, Evelien Vanderlinden, Aniko Borbas, Lieve Naesens, Pal Herczegh

Abstract:

The approval of modern glycopeptide antibiotics such as dalbavancin and oritavancin, which have excellent activity against Gram-positive bacteria, encouraged our research group to prepare semisynthetic compounds from several members of the glycopeptide family by various chemical methods. Derivatives of the aglycones of ristocetin, eremomycin and vancomycin and of a pseudoaglycone of teicoplanin have been synthesized in a systematic manner. Interestingly, some of the aglycoristocetin derivatives displayed noteworthy anti-influenza activity. More recently, our group has been focusing on modifications of one of the pseudoaglycones of teicoplanin. The reaction of N-ethoxycarbonyl maleimide derivatives with the primary amino function, the copper-catalysed azide-alkyne click reaction, and sulfonylation of the N-terminus were used to obtain systematic series of compounds. All substituents give the new molecules a more lipophilic character compared with the parent antibiotics, which is known to be favourable for activity against resistant bacteria. Lipoglycopeptides are also known to have antiviral properties, which others have studied predominantly on HIV. The structure-activity relationship study of our compounds revealed the influence of a few structural elements on biological activity. In many cases, minimal changes in lipophilicity and structure produced great differences in efficacy and cytotoxicity. In vitro experiments showed that these compounds are not only active against glycopeptide-resistant Gram-positive bacteria but in several cases also prevent the infection of cell cultures by different strains of influenza virus. This is probably related to the inhibition of viral entry into the host cell nucleus, the exact mechanism of which is unknown. In some instances, reasonably low concentrations were sufficient to observe this effect. Several derivatives were highly cytotoxic, but some of them displayed a good selectivity index.
The antiviral properties of the compounds are not restricted to influenza viruses; for example, some of them showed good activity against Human Coronavirus 229E. This work could potentially lead to the development of antiviral drugs that possess the structural motifs crucial for antiviral activity while lacking those that contribute to the antibacterial effect.

Keywords: antiviral, glycopeptide, semisynthetic, teicoplanin

Procedia PDF Downloads 127
118 Broadband Ultrasonic and Rheological Characterization of Liquids Using Longitudinal Waves

Authors: M. Abderrahmane Mograne, Didier Laux, Jean-Yves Ferrandis

Abstract:

Rheological characterization of complex liquids such as polymer solutions is of great scientific interest to researchers in many fields, including biology, the food industry and chemistry. In order to establish master curves (elastic moduli vs. frequency), which can give information about microstructure, classical rheometers or viscometers (such as Couette systems) are used. For broadband characterization of a sample, the temperature is varied over a very large range, leading to equivalent frequency modifications by application of the Time-Temperature Superposition principle. For many liquids undergoing phase transitions, this approach is not applicable. That is why the development of broadband spectroscopic methods around room temperature has become a major concern. In the literature, many solutions have been proposed but, to our knowledge, there is no experimental bench giving the whole rheological characterization from a few Hz (hertz) to many MHz (megahertz). Consequently, our goal is to investigate rheological properties nondestructively over a very broad frequency band (a few Hz to hundreds of MHz) using longitudinal ultrasonic waves (L waves), a unique experimental bench and a specific container for the liquid: a test tube. More specifically, we aim to estimate the three viscosities (longitudinal, shear and bulk) and the complex elastic moduli (M*, G* and K*), i.e., the longitudinal, shear and bulk moduli, respectively. We have decided to use only L waves conditioned in two ways: bulk L waves in the liquid, or guided L waves in the test tube walls. In this paper, we first present results for very low frequencies using the ultrasonic tracking of a ball falling in the test tube. This leads to the estimation of shear viscosities from a few mPa·s to a few Pa·s (pascal-seconds). Corrections due to the small dimensions of the tube are applied and discussed with regard to the size of the falling ball.
Then the use of bulk L-wave propagation in the liquid, together with the development of specific signal processing to assess the longitudinal velocity and attenuation, leads to the evaluation of the longitudinal viscosity in the MHz frequency range. Finally, the first results concerning the propagation, generation and processing of guided compressional waves in the test tube walls are discussed. All these approaches and results are compared with standard methods available and already validated in our laboratory.
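The low-frequency falling-ball estimate can be sketched with Stokes' law plus a Ladenburg-type wall correction. This is a simplified model under stated assumptions (creeping flow, ball falling on the tube axis), not the authors' exact correction scheme:

```python
def shear_viscosity(ball_radius, v_terminal, rho_ball, rho_liquid,
                    tube_radius, g=9.81):
    """Shear viscosity [Pa·s] from the terminal velocity of a falling ball:
    Stokes' law, eta = 2 r^2 g (rho_ball - rho_liquid) / (9 v),
    divided by the Ladenburg wall-correction factor (1 + 2.104 r/R)
    for a ball of radius r on the axis of a tube of radius R."""
    eta_stokes = (2.0 * ball_radius**2 * g * (rho_ball - rho_liquid)
                  / (9.0 * v_terminal))
    wall_factor = 1.0 + 2.104 * (ball_radius / tube_radius)
    return eta_stokes / wall_factor
```

As the tube radius grows, the correction factor tends to 1 and the estimate recovers the plain Stokes value; a narrow tube lowers the inferred viscosity for the same measured terminal velocity.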

Keywords: nondestructive measurement for liquid, piezoelectric transducer, ultrasonic longitudinal waves, viscosities

Procedia PDF Downloads 241
117 Automated, Objective Assessment of Pilot Performance in Simulated Environment

Authors: Maciej Zasuwa, Grzegorz Ptasinski, Antoni Kopyt

Abstract:

Nowadays, flight simulators offer tremendous possibilities for safe and cost-effective pilot training through the utilization of powerful computational tools. Yet, with technology outpacing methodology, the vast majority of training-related work is still done by human instructors. This makes assessment inefficient and vulnerable to instructors' subjectivity. This research presents an objective assessment tool (gOAT) developed at the Warsaw University of Technology and tested on an SW-4 helicopter flight simulator. The tool uses a database of predefined manoeuvres, defined and integrated into the virtual environment. These were implemented based on the Aeronautical Design Standard Performance Specification Handling Qualities Requirements for Military Rotorcraft (ADS-33), with predefined Mission-Task-Elements (MTEs). The core element of gOAT is an enhanced algorithm that provides the instructor with a new set of information: in detail, a set of objective flight parameters fused with a report on the psychophysical state of the pilot. While the pilot performs the task, the gOAT system automatically calculates performance using the embedded algorithms, data registered by the simulator software (position, orientation, velocity, etc.), as well as measurements of physiological changes in the pilot's psychophysiological state (temperature, sweating, heart rate). The complete set of measurements is presented online at the instructor's station and shown in a dedicated graphical interface. The presented tool is based on open-source solutions and is flexible for editing. Additional manoeuvres can easily be added using a guide developed by the authors, and MTEs can be changed by the instructor even during an exercise. The algorithm and measurements used allow not only the implementation of basic stress-level measurements but also a significant reduction of the instructor's workload. The tool can be used for training purposes, as well as for periodic checks of aircrew.
Its flexibility and ease of modification allow wide-ranging further development and customization of the tool. Depending on the simulation purpose, gOAT can be adjusted to support simulators of aircraft, helicopters, or unmanned aerial vehicles (UAVs).
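ADS-33 MTEs define "desired" and "adequate" tolerance bands for each flight parameter, and an automated scorer of the kind described above reduces to checking a recorded trace against those bands. A sketch of such a rule; the function name, band representation, and thresholds are hypothetical, not gOAT's actual algorithm:

```python
def performance_rating(trace, desired_band, adequate_band):
    """Rate a recorded parameter trace (e.g. altitude error during a hover
    MTE) against ADS-33-style tolerance bands. Returns 'desired' if every
    sample stays within the desired band, 'adequate' if every sample stays
    within the wider adequate band, and 'inadequate' otherwise."""
    lo_d, hi_d = desired_band
    lo_a, hi_a = adequate_band
    if all(lo_d <= v <= hi_d for v in trace):
        return 'desired'
    if all(lo_a <= v <= hi_a for v in trace):
        return 'adequate'
    return 'inadequate'
```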

Keywords: automated assessment, flight simulator, human factors, pilot training

Procedia PDF Downloads 126
116 Quantum Chemical Investigation of Hydrogen Isotopes Adsorption on Metal Ion Functionalized Linde Type A and Faujasite Type Zeolites

Authors: Gayathri Devi V, Aravamudan Kannan, Amit Sircar

Abstract:

In the inner fuel cycle system of a nuclear fusion reactor, the Hydrogen Isotopes Removal System (HIRS) plays a pivotal role. It enables the effective extraction of the hydrogen isotopes from the breeder purge gas, which helps to maintain the tritium breeding ratio and sustain the fusion reaction. One of the components of the HIRS, Cryogenic Molecular Sieve Bed (CMSB) columns with zeolite adsorbents, is considered for the physisorption of hydrogen isotopes at 1 bar and 77 K. Even though zeolites have good thermal stability and reduced activation properties, making them ideal for use in nuclear reactor applications, their modest capacity for hydrogen isotope adsorption is a cause for concern. In order to enhance the adsorbent capacity in an informed manner, it is helpful to understand the adsorption phenomena at the quantum electronic structure level. Physicochemical modification of the adsorbent material enhances the adsorption capacity through the incorporation of active sites. This may be accomplished through the incorporation of suitable metal ions into the zeolite framework. In this work, the adsorption of molecular hydrogen isotopes on the active sites of functionalized zeolites is investigated in detail using a Density Functional Theory (DFT) study. This involves the utilization of the hybrid Generalized Gradient Approximation (GGA) with dispersion correction to account for the exchange and correlation functional of DFT. The electronic energies, adsorption enthalpy, adsorption free energy, and Highest Occupied Molecular Orbital (HOMO) and Lowest Unoccupied Molecular Orbital (LUMO) energies are computed on stable 8T zeolite clusters, as well as on the periodic structure functionalized with different active sites. The characteristics of the dihydrogen bond with the active metal sites and the isotopic effects are also studied in detail. Validation studies with DFT are also presented for the adsorption of hydrogen on metal-ion-functionalized zeolites.
The ab initio screening analysis gave insights into the mechanism of hydrogen interaction with the zeolites under study, as well as the effect of the metal ion on adsorption. This detailed study provides guidelines for the selection of appropriate metal ions that may be incorporated into the zeolite framework for effective adsorption of hydrogen isotopes in the HIRS.
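At its simplest, the adsorption energetics referred to above reduce to a total-energy difference between the H2-zeolite complex and its separated fragments. A sketch of that bookkeeping; the input energies are hypothetical DFT totals in hartree, not values from this study:

```python
HARTREE_TO_KJ_PER_MOL = 2625.4996  # 1 hartree in kJ/mol

def adsorption_energy(e_complex, e_cluster, e_h2):
    """Electronic adsorption energy in kJ/mol:
    E_ads = E(cluster + H2) - E(cluster) - E(H2).
    Negative values indicate favourable binding; inputs are in hartree.
    (Enthalpies and free energies additionally require thermal and
    zero-point corrections, which drive the isotope effects.)"""
    return (e_complex - e_cluster - e_h2) * HARTREE_TO_KJ_PER_MOL
```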

Keywords: adsorption enthalpy, functionalized zeolites, hydrogen isotopes, nuclear fusion, physisorption

Procedia PDF Downloads 156
115 Lexico-semantic and Morphosyntactic Analyses of Student-generated Paraphrased Academic Texts

Authors: Hazel P. Atilano

Abstract:

In this age of AI-assisted teaching and learning, there seems to be a dearth of research literature on the linguistic analysis of paraphrased academic texts generated by English as a Second Language (ESL) students. This study sought to examine the lexico-semantic and morphosyntactic features of paraphrased academic texts generated by ESL students. Employing a descriptive qualitative design, specifically linguistic analysis, the study involved a total of 85 students from senior high school, college, and graduate school enrolled in research courses. Data collection consisted of a 60-minute, real-time, on-site paraphrasing exercise using excerpts of 150 to 200 words from discipline-specific literature reviews. A focus group discussion (FGD) was conducted to probe the challenges experienced by the participants. The writing exercise yielded a total of 516 paraphrase pairs, of which 176 paraphrase units (PUs) and 340 non-paraphrase pairs (NPPs) were detected. Findings from the linguistic analysis of the PUs reveal that the modifications made to the original texts are predominantly syntax-based (Diathesis Alterations and Coordination Changes) combined with Miscellaneous Changes (Change of Order, Change of Format, and Addition/Deletion). The analysis of paraphrase extremes (PEs) shows that Identical Structures, resulting from the use of synonymous substitutions with no significant change to the structural features of the original, are the most frequently occurring instance of PE. The analysis of paraphrase errors reveals that synonymous substitutions resulting in identical structures are the most frequent error leading to PE. Another type of paraphrasing error involves semantic and content loss resulting from the deletion or addition of meaning-altering content.
Three major themes emerged from the FGD: (1) The Challenge of Preserving Semantic Content and Fidelity; (2) The Best Words in the Best Order: Grappling with the Lexico-semantic and Morphosyntactic Demands of Paraphrasing; and (3) Contending with Limited Vocabulary, Poor Comprehension, and Lack of Practice. A pedagogical paradigm was designed based on the major findings of the study for a sustainable instructional intervention.

Keywords: academic text, lexico-semantic analysis, linguistic analysis, morphosyntactic analysis, paraphrasing

Procedia PDF Downloads 36
114 Analysis of Lift Force in Hydrodynamic Transport of a Finite-Sized Particle in Inertial Microfluidics with a Rectangular Microchannel

Authors: Xinghui Wu, Chun Yang

Abstract:

Inertial microfluidics is a competitive fluidic method with applications in the separation of particles, cells and bacteria. In contrast to traditional microfluidic devices with low Reynolds numbers, inertial microfluidics works in the intermediate Re range, which brings about several intriguing inertial effects on particle separation/focusing to meet real-world throughput requirements. Geometric modifications that give channels irregular shapes can leverage fluid inertia to create complex secondary flows for adjusting the particle equilibrium positions and thus enhance the separation resolution and throughput. Although inertial microfluidics has been extensively studied by experiments, our current understanding of its mechanisms is poor, making it extremely difficult to build rational design guidelines for the particle focusing locations, especially for irregularly shaped microfluidic channels. Inertial particle microfluidics in irregularly shaped channels was investigated in our group. There are several fundamental issues that we need to address. One of them concerns the balance between the inertial lift forces and the secondary drag forces. It is also critical to quantitatively describe the dependence of the lift forces on particle-particle interactions in irregularly shaped channels, such as a rectangular one. To provide physical insights into inertial microfluidics in channels of irregular shapes, in this work the immersed boundary-lattice Boltzmann method (IB-LBM) was introduced and validated to explore the transport characteristics and the underlying mechanisms of an inertially focusing single particle in a rectangular microchannel. The transport dynamics of a finite-sized particle were investigated over wide ranges of Reynolds number (20 < Re < 500) and particle size. 
The results show that the inner equilibrium positions are less likely to occur in the rectangular channel, which can be explained by the secondary flow caused by the presence of a finite-sized particle. Furthermore, force decoupling analysis was utilized to study the effect of each type of lift force on the inertial migration, and a theoretical model for the lateral lift force of a finite-sized particle in the rectangular channel was established. This theoretical model can be used to provide guidance for the design and operation of inertial microfluidics.
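The idea of an inertial equilibrium position as a balance of opposing lateral lift forces can be illustrated with a toy one-dimensional model. This is a hedged sketch only: the force expressions and coefficients below are invented for illustration and are not the IB-LBM model or the authors' theoretical lift model.

```python
def net_lift(y, h=1.0, b=1.0, c=0.064):
    """Toy net lateral force on a particle at offset y from the centerline:
    a shear-gradient lift pushing toward the wall (the b*y part) minus a
    wall-repulsion lift that grows sharply near the wall (the c term)."""
    return y * (b - c / (h - abs(y)) ** 3)

def equilibrium(lo=1e-6, hi=0.95):
    """Bisection for the off-center position where the two lifts balance."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if net_lift(lo) * net_lift(mid) <= 0.0:
            hi = mid          # sign change lies in [lo, mid]
        else:
            lo = mid          # sign change lies in [mid, hi]
    return 0.5 * (lo + hi)

y_star = equilibrium()  # for these toy numbers, the balance point is y = 0.6
```

With these coefficients the net force vanishes where (h - y)³ = c/b, i.e. at y = 0.6 channel half-widths: the force pushes outward below that offset and inward above it, so the position is a stable equilibrium, which is the qualitative picture behind inertial focusing.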

Keywords: inertial microfluidics, particle focusing, lift force, IB-LBM

Procedia PDF Downloads 44
113 Facial Behavior Modifications Following the Diffusion of the Use of Protective Masks Due to COVID-19

Authors: Andreas Aceranti, Simonetta Vernocchi, Marco Colorato, Daniel Zaccariello

Abstract:

Our study explores the usefulness of implementing facial expression recognition capabilities and using the Facial Action Coding System (FACS) in contexts where the other person is wearing a mask. In the communication process, subjects use a plurality of distinct and autonomous reporting systems. Among them, the system of mimicking facial movements is worthy of attention. Basic emotion theorists have identified the existence of specific and universal patterns of facial expressions related to seven basic emotions (anger, disgust, contempt, fear, sadness, surprise, and happiness) that would distinguish one emotion from another. However, due to the COVID-19 pandemic, we have come up against the problem of having the lower half of the face covered and, therefore, not investigable because of the masks. Facial-emotional behavior is a good starting point for understanding: (1) the affective state (such as emotions), (2) cognitive activity (perplexity, concentration, boredom), (3) temperament and personality traits (hostility, sociability, shyness), (4) psychopathology (such as diagnostic information relevant to depression, mania, schizophrenia, and less severe disorders), and (5) psychopathological processes that occur during social interactions between patient and analyst. There are numerous methods to measure facial movements resulting from the action of muscles: see, for example, the measurement of visible facial actions using coding systems (non-intrusive systems that require the presence of an observer who encodes and categorizes behaviors) and the measurement of electrical "discharges" of contracting muscles (facial electromyography; EMG). However, the measuring system developed by Ekman and Friesen (2002), the "Facial Action Coding System" (FACS), is the most comprehensive, complete, and versatile. 
Our study, carried out on about 1,500 subjects over three years of work, allowed us to highlight how the movements of the hands and the upper part of the face change depending on whether the subject wears a mask or not. We have been able to identify specific alterations to the subjects’ hand movement patterns and their upper-face expressions while wearing masks compared to when not wearing them. We believe that finding correlations between changes in body language and impaired facial expressions can provide a better understanding of the link between facial and bodily non-verbal language.

Keywords: facial action coding system, COVID-19, masks, facial analysis

Procedia PDF Downloads 50
112 Assessment of OTA Contamination in Rice from Fungal Growth Alterations in a Scenario of Climate Changes

Authors: Carolina S. Monteiro, Eugénia Pinto, Miguel A. Faria, Sara C. Cunha

Abstract:

Rice (Oryza sativa) production plays a vital role in reducing hunger and poverty and assumes particular importance in low-income and developing countries. Rice is a sensitive plant, and production occurs strictly where suitable temperature and water conditions are found. Climatic changes are likely to have worldwide effects, and some models have predicted increased temperatures, variations in atmospheric CO₂ concentrations and modifications in precipitation patterns. Therefore, the ongoing climatic changes threaten rice production by increasing biotic and abiotic stress factors, and crops will grow in different environmental conditions in the following years. Around the world, the effects will be regional and can be detrimental or advantageous depending on the region. Mediterranean zones have been identified as possible hot spots, where dramatic temperature changes, modifications of CO₂ levels, and altered rainfall patterns are predicted. The current estimated atmospheric CO₂ concentration is around 400 ppm, and it is predicted that it can reach up to 1000–1200 ppm, which can lead to a temperature increase of 2–4 °C. Alongside this, rainfall patterns are also expected to change, with more extreme wet/dry episodes taking place. As a result, the migration of pathogens could increase, and a shift in the occurrence of mycotoxins, concerning their types and concentrations, is expected. Mycotoxigenic spoilage fungi can colonize the crops and be present throughout the rice food supply chain, especially Penicillium species, mainly resulting in ochratoxin A (OTA) contamination. In this scenario, the objectives of the present study are to evaluate the effect of temperature (20 vs. 25 °C), CO₂ (400 vs. 1000 ppm), and water stress (0.93 vs. 0.95 water activity) on growth and OTA production by a Penicillium nordicum strain in vitro on rice-based media and when colonizing layers of raw rice. 
Results demonstrate the effect of temperature, CO₂ and drought on OTA production in a rice-based environment, thus contributing to the development of predictive models for mycotoxins in climate change scenarios. As a result, improving surveillance and monitoring systems for mycotoxins, whose occurrence may become more frequent due to climatic changes, seems relevant and necessary. The development of prediction models for hazardous contaminants present in foods highly sensitive to climatic changes, such as mycotoxins, in the highly probable new agricultural scenarios is of paramount importance.

Keywords: climate changes, ochratoxin A, penicillium, rice

Procedia PDF Downloads 37
111 Measuring the Embodied Energy of Construction Materials and Their Associated Cost Through Building Information Modelling

Authors: Ahmad Odeh, Ahmad Jrade

Abstract:

Energy assessment is a significant factor when evaluating the sustainability of structures, especially at the early design stage. Today's design practices revolve around the selection of materials that reduce the operational energy and yet meet their disciplinary needs. Operational energy represents a substantial part of the building's lifecycle energy usage, but the fact remains that embodied energy is an important aspect unaccounted for in the carbon footprint. At the moment, little or no consideration is given to embodied energy, mainly due to the complexity of calculation and the various factors involved. The equipment used, the fuel needed, and the electricity required for each material vary with location, and thus the embodied energy will differ for each project. Moreover, the method and the technique used in manufacturing, transporting and putting in place will have a significant influence on the materials’ embodied energy. This anomaly has made it difficult to calculate or even benchmark the usage of such energies. This paper presents a model aimed at helping designers select construction materials based on their embodied energy. Moreover, this paper presents a systematic approach that uses an efficient method of calculation and ultimately provides new insight into construction material selection. The model is developed in a BIM environment, targeting the quantification of embodied energy for construction materials through the three main stages of their life: manufacturing, transportation and placement. The model contains three major databases, each of which contains a set of the most commonly used construction materials. The first dataset holds information about the energy required to manufacture any type of material, the second includes information about the energy required for transporting the materials, while the third stores information about the energy required by tools and cranes needed to place an item in its intended location. 
The model provides designers with sets of all available construction materials and their associated embodied energies to use for the selection during the design process. Through geospatial data and dimensional material analysis, the model will also be able to automatically calculate the distance between the factories and the construction site. To remain within the sustainability criteria set by LEED, a final database is created and used to calculate the overall construction cost based on RSMeans cost data and then automatically recalculate the costs for any modifications. Design criteria including both operational and embodied energies will cause designers to re-evaluate the current material selection for cost, energy, and, most importantly, sustainability.
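The three-stage accounting described above can be sketched in a few lines of code. This is a hedged illustration only: the material names, energy coefficients, and function are invented placeholders and do not reflect the paper's actual databases or values.

```python
# Hypothetical lookup tables standing in for the model's three databases.
MANUFACTURING_MJ_PER_KG = {"concrete": 1.1, "steel": 20.1}  # energy to produce 1 kg
TRANSPORT_MJ_PER_KG_KM = {"truck": 0.0025}                  # energy to move 1 kg by 1 km
PLACEMENT_MJ_PER_KG = {"crane": 0.05}                       # energy to hoist 1 kg into place

def embodied_energy_mj(material, mass_kg, distance_km, mode="truck", tool="crane"):
    """Total embodied energy = manufacturing + transportation + placement."""
    manufacturing = MANUFACTURING_MJ_PER_KG[material] * mass_kg
    transport = TRANSPORT_MJ_PER_KG_KM[mode] * mass_kg * distance_km
    placement = PLACEMENT_MJ_PER_KG[tool] * mass_kg
    return manufacturing + transport + placement

# 10 t of steel hauled 120 km and placed by crane:
total = embodied_energy_mj("steel", 10_000, 120)
```

In the paper's model the transport distance would come from the geospatial analysis rather than being supplied by hand, but the aggregation across the three life stages is the same summation.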

Keywords: building information modelling, energy, life cycle analysis, sustainability

Procedia PDF Downloads 246
110 Polyphenol-Rich Aronia Melanocarpa Juice Consumption and LINE-1 DNA Methylation in a Cohort at Cardiovascular Risk

Authors: Ljiljana Stojković, Manja Zec, Maja Zivkovic, Maja Bundalo, Marija Glibetić, Dragan Alavantić, Aleksandra Stankovic

Abstract:

Cardiovascular disease (CVD) is associated with alterations in DNA methylation, the latter modulated by dietary polyphenols. The present pilot study (part of the original clinical study registered as NCT02800967 at www.clinicaltrials.gov) aimed to investigate the impact of 4-week daily consumption of polyphenol-rich Aronia melanocarpa juice on Long Interspersed Nucleotide Element-1 (LINE-1) methylation in peripheral blood leukocytes, in subjects (n=34, age of 41.1±6.6 years) at moderate CVD risk, including an increased body mass index, central obesity, high normal blood pressure and/or dyslipidemia. The goal was also to examine whether factors known to affect DNA methylation, such as folate intake levels, MTHFR C677T gene variant, as well as the anthropometric and metabolic parameters, modulated the LINE-1 methylation levels upon consumption of polyphenol-rich Aronia juice. The experimental analysis of LINE-1 methylation was done by the MethyLight method. MTHFR C677T genotypes were determined by the polymerase chain reaction-restriction fragment length polymorphism method. Folate intake was assessed by processing the data from the food frequency questionnaire and repeated 24-hour dietary recalls. Serum lipid profile was determined by using Roche Diagnostics kits. The statistical analyses were performed using the Statistica software package. In women, after vs. before the treatment period, a significant decrease in LINE-1 methylation levels was observed (97.54±1.50% vs. 98.39±0.86%, respectively; P=0.01). The change (after vs. before treatment) in LINE-1 methylation correlated directly with MTHFR 677T allele presence, average daily folate intake and the change in serum low-density lipoprotein cholesterol, while inversely with the change in serum triacylglycerols (R=0.72, R2=0.52, adjusted R2=0.36, P=0.03). 
The current results imply potential cardioprotective effects of habitual polyphenol-rich Aronia juice consumption achieved through the modifications of DNA methylation pattern in subjects at CVD risk, which should be further confirmed. Hence, the precision nutrition-driven modulations of DNA methylation may become targets for new approaches in the prevention and treatment of CVD.

Keywords: Aronia melanocarpa, cardiovascular risk, LINE-1, methylation, peripheral blood leukocytes, polyphenol

Procedia PDF Downloads 173
109 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Forensic DNA analysis has received much attention over the last three decades due to its incredible usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that is not an allele of a potential contributor and is considered an artefact presumed to arise from miscopying or slippage during the PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, which is calculated relative to the corresponding parent allele height. Analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts like stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and significantly enhance the use of continuous peak height information, resulting in more efficient and reliable interpretations. Therefore, a sound methodology to distinguish between stutters and real alleles is essential for the accuracy of the interpretation. Sensibly, any such method has to be able to focus on modelling stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in the clustering and classification of data and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is reflected by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in finite mixture models, however, is practically difficult even though the calculations are relatively simple. 
Infinite mixture models, in contrast, do not require the user to specify the number of components. Instead, a Dirichlet process, which is an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the problem of choosing the number of components. The Chinese restaurant process (CRP), the stick-breaking process and the Pólya urn scheme are frequently used as Dirichlet priors in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling the stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
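To make the CRP prior concrete, here is a minimal self-contained sketch of its standard seating scheme, under which the number of clusters grows with the data rather than being fixed in advance. This illustrates the textbook process only, not the authors' regression mixture or their modifications to the CRP.

```python
import random

def crp_assignments(n, alpha, seed=0):
    """Sample cluster assignments for n observations from a Chinese
    restaurant process: observation i joins an existing table with
    probability proportional to its occupancy, or opens a new table
    with probability proportional to the concentration alpha."""
    rng = random.Random(seed)
    counts = []            # occupancy of each existing table
    labels = []
    for i in range(n):
        r = rng.uniform(0.0, i + alpha)
        table = len(counts)          # default: open a new table
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                table = k            # join existing table k
                break
        if table == len(counts):
            counts.append(0)
        counts[table] += 1
        labels.append(table)
    return labels

labels = crp_assignments(100, alpha=1.0)
```

Each distinct table label corresponds to one mixture component; in an infinite mixture of regressions, each table would carry its own slope and intercept, resampled within a Gibbs sweep.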

Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter

Procedia PDF Downloads 306
108 Modification of Carbon-Based Gas Sensors for Boosting Selectivity

Authors: D. Zhao, Y. Wang, G. Chen

Abstract:

Gas sensors that utilize carbonaceous materials as sensing media offer numerous advantages, making them the preferred choice for constructing chemical sensors over those using other sensing materials. Carbonaceous materials, particularly nano-sized ones like carbon nanotubes (CNTs), provide these sensors with high sensitivity. Additionally, carbon-based sensors possess other advantageous properties that enhance their performance, including high stability, low power consumption for operation, and cost-effectiveness in their construction. These properties make carbon-based sensors ideal for a wide range of applications, especially in miniaturized devices created through MEMS or NEMS technologies. To capitalize on these properties, a group of chemoresistance-type carbon-based gas sensors was developed and tested against various volatile organic compounds (VOCs) and volatile inorganic compounds (VICs). The results demonstrated exceptional sensitivity to both VOCs and VICs, along with long-term stability. However, this broad sensitivity also led to poor selectivity towards specific gases. This project aims to address the selectivity issue by modifying the carbon-based sensing materials and enhancing the sensors' specificity to individual gases. Multiple groups of sensors were manufactured and modified using proprietary techniques. To assess their performance, we conducted experiments on representative sensors from each group to detect a range of VOCs and VICs. The VOCs tested included acetone, dimethyl ether, ethanol, formaldehyde, methane, and propane. The VICs comprised carbon monoxide (CO), carbon dioxide (CO2), hydrogen (H2), nitric oxide (NO), and nitrogen dioxide (NO2). The concentrations of the sample gases were all set at 50 parts per million (ppm). Nitrogen (N2) was used as the carrier gas throughout the experiments. The results of the gas sensing experiments are as follows. 
In Group 1, the sensors exhibited selectivity toward CO2, acetone, NO, and NO2, with NO2 showing the highest response. Group 2 primarily responded to NO2. Group 3 displayed responses to the nitrogen oxides, i.e., both NO and NO2, with NO2 slightly surpassing NO in sensitivity. Group 4 demonstrated the highest sensitivity among all the groups toward NO and NO2, with NO2 being more sensitive than NO. In conclusion, by incorporating several modifications using carbon nanotubes (CNTs), sensors can be designed to respond well to NOx gases with great selectivity and without interference from other gases. Because the response levels to NO and NO2 differ between groups, the individual concentrations of NO and NO2 can be deduced.
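Deducing the two concentrations from two sensor groups with different response levels amounts to solving a small linear system, assuming each sensor's response is roughly linear in concentration over the range of interest. A hedged sketch with invented sensitivity coefficients (the real calibration values would have to come from the experiments):

```python
# Hypothetical sensitivities (response per ppm) of two sensor groups,
# where group i's response is r_i = s_i_NO * c_NO + s_i_NO2 * c_NO2.
s = [[0.8, 1.5],    # e.g. a group responding to both NO and NO2
     [1.2, 2.0]]    # e.g. a more sensitive group
r = [115.0, 160.0]  # measured responses (arbitrary units)

# Solve the 2x2 system for the two concentrations by Cramer's rule.
det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
c_no = (r[0] * s[1][1] - s[0][1] * r[1]) / det
c_no2 = (s[0][0] * r[1] - r[0] * s[1][0]) / det
# For these toy numbers, both come out at 50 ppm, matching the test-gas level.
```

The approach only works when the two groups' sensitivity ratios differ (nonzero determinant), which is exactly what the differing NO/NO2 response levels across groups provide.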

Keywords: gas sensors, carbon, CNT, MEMS/NEMS, VOC, VIC, high selectivity, modification of sensing materials

Procedia PDF Downloads 96
107 Effect of Lifestyle Modification for Two Years on Obesity and Metabolic Syndrome Components in Elementary Students: A Community-Based Trial

Authors: Bita Rabbani, Hossein Chiti, Faranak Sharifi, Saeedeh Mazloomzadeh

Abstract:

Background: Lifestyle modifications, especially improving nutritional patterns and increasing physical activity, are the most important factors in preventing obesity and metabolic syndrome in children and adolescents. For this purpose, the following interventional study was designed to investigate the effects of educational programs for students, as well as changes in diet and physical activity, on obesity and components of the metabolic syndrome. Methods: This study is part of an interventional research project (elementary school) conducted on all students of Sama schools in Zanjan and Abhar at three levels (elementary, middle, and high school), including 1000 individuals in Zanjan (intervention group) and 1000 individuals in Abhar (control group) in 2011. Interventions were based on educating students, teachers, and parents, changes in food services, and physical activity. We primarily measured anthropometric indices, fasting blood sugar, lipid profiles, and blood pressure and completed standard nutrition and physical activity questionnaires. Also, blood insulin levels were randomly measured in a number of students. Data analysis was done with SPSS software version 16.0. Results: Overall, 589 individuals (252 male, 337 female) entered the case group, and 803 individuals (344 male, 459 female) entered the control group. After two years of intervention, mean waist circumference (63.8 ± 10.9) and diastolic BP (63.8 ± 10.4) were significantly lower; however, mean systolic BP (101.0 ± 12.5), food score (25.0 ± 5.0) and drinking score (12.1 ± 2.3) were higher in the intervention group (p<0.001). Comparing components of metabolic syndrome between the second year and the time of recruitment within the intervention group showed that although the numbers of overweight/obese individuals and of individuals with hypertriglyceridemia and high LDL increased, abdominal obesity, high BP, hyperglycemia, and insulin resistance decreased (p<0.001). 
On the other hand, in the control group, the number of individuals with high BP increased significantly. Conclusion: The prevalence of abdominal obesity and hypertension, which are two major components of metabolic syndrome, is much higher in our study than in other regions of the country. However, interventions to modify diet and increase physical activity are effective in lowering their prevalence.

Keywords: metabolic syndrome, obesity, lifestyle, nutrition, hypertension

Procedia PDF Downloads 44
106 Assessment of Impact of Urbanization on Urban Drainage Systems, Cali, Colombia

Authors: A. Caicedo Padilla, J. Zambrano Nájera

Abstract:

Cali, the capital of Valle del Cauca and the second city of Colombia, is located in the Cauca River Valley between the Western and Central Cordillera, in the southwest of the country. The topography of the city is mainly flat, but mountains are found in the west. The city has experienced increasing urbanization during the 20th century, especially since 1958, when rapid growth started due to the migration of people from other parts of the region. Much of that population has settled in the east of Cali, an area originally intended for cane cultivation and a flood zone of the Cauca River and its tributaries. Due to the unplanned migration, settlement was inadequate and produced changes in the natural dynamics of the basins, which has resulted in increases in runoff volumes, peak flows and flow velocities, which in turn increase flood risk. The capacity of the sewerage networks was not enough for this higher runoff volume, in the first place because they were not adequately designed and built, causing their failure. This in turn generates increasingly recurrent floods with considerable effects on the economy and on the normal activities of Cali. Thus, it becomes very important to know the hydrological behavior of urban watersheds. This research aims to determine the impact of urbanization on the hydrology of watersheds with very low slopes. The project aims to identify changes in natural drainage patterns caused by the changes made to the landscape. From the identification of such modifications, the most critical areas due to recurring flood events in the city of Cali will be defined. Critical areas are defined as areas where the sewerage system does not work properly, as surface runoff increases considerably during storm events and floods are recurrent. The assessment will be done from the analysis of Geographic Information System (GIS) theme layers from CVC, the environmental institution of regional control in Valle del Cauca, hydrological data, and the disaster database developed by OSSO Corporation. 
Rainfall data from a network and historical stream flow data will be used for analysis of historical behavior and change of precipitation and hydrological response according to homogeneous zones characterized by EMCALI S.A. public utility enterprise of Cali in 1999.

Keywords: drainage systems, land cover changes, urban hydrology, urban planning

Procedia PDF Downloads 231
105 Impact of Alternative Fuel Feeding on Fuel Cell Performance and Durability

Authors: S. Rodosik, J. P. Poirot-Crouvezier, Y. Bultel

Abstract:

With the expansion of the hydrogen economy, Proton Exchange Membrane Fuel Cell (PEMFC) systems are often presented as promising energy converters suitable for transport applications. However, reaching the durability of 5000 h recommended by the U.S. Department of Energy and decreasing system cost are still major hurdles to their development. In order to increase the system efficiency and simplify the system without affecting the fuel cell lifetime, an architecture called alternative fuel feeding has been developed. It consists of a fuel cell stack divided into two parts, alternately fed, implemented on a 5-kW system for real-scale testing. The operation strategy can be considered close to Dead End Anode (DEA) with specific modifications to avoid water and nitrogen accumulation in the cells. The two half-stacks are connected in series to enable each half-stack to be alternately fed. Accumulated water and nitrogen can be shifted from one half-stack to the other according to the alternative feeding frequency. Thanks to the homogenization of water vapor along the stack, water management was improved. The operating conditions obtained at the system scale are close to recirculation without the need for a pump or an ejector. In a first part, a performance comparison with the DEA strategy was performed. At high temperature and low pressure (80°C, 1.2 bar), the performance of alternative fuel feeding was higher, and the system efficiency increased. In a second part, in order to highlight the benefits of the architecture for the fuel cell lifetime, two durability tests, lasting up to 1000 h, were conducted. A test on the 5-kW system was compared to a reference test performed on a test bench with a shorter stack, conducted with well-controlled operating parameters and a flow-through hydrogen strategy. 
The durability test is based upon the Fuel Cell Dynamic Load Cycle (FC-DLC) protocol but adapted to the system limitations: without OCV steps and a maximum current density of 0.4 A/cm². In situ local measurements with a segmented S++® plate performed all along the tests, showed a more homogeneous distribution of the current density with alternative fuel feeding than in flow-through strategy. Tests performed in this work enabled the understanding of this architecture advantages and drawbacks. Alternative fuel feeding architecture appeared to be a promising solution to ensure the humidification function at the anode side with a simplified fuel cell system.

Keywords: automotive conditions, durability, fuel cell system, proton exchange membrane fuel cell, stack architecture

Procedia PDF Downloads 116
104 Triazenes: Unearthing Their Hidden Arsenal Against Malaria and Microbial Menace

Authors: Frans J. Smit, Wisdom A. Munzeiwa, Hermanus C. M. Vosloo, Lyn-Marie Birkholtz, Richard K. Haynes

Abstract:

Malaria and antimicrobial infections remain significant global health concerns, necessitating the continuous search for novel therapeutic approaches. This abstract presents an overview of the potential use of triazenes as effective agents against malaria and various antimicrobial pathogens. Triazenes are a class of compounds characterized by a linear arrangement of three nitrogen atoms, rendering them structurally distinct from their cyclic counterparts. This study investigates the efficacy of triazenes against malaria and explores their antimicrobial activity. Preliminary results revealed significant antimalarial activity of the triazenes, as evidenced by in vitro screening against P. falciparum, the causative agent of malaria. Furthermore, the compounds exhibited broad-spectrum antimicrobial activity, indicating their potential as effective antimicrobial agents. These compounds have shown inhibitory effects on various essential enzymes and processes involved in parasite survival, replication, and transmission. The mechanism of action of triazenes against malaria involves interactions with critical molecular targets, such as enzymes involved in the parasite's metabolic pathways and proteins responsible for host cell invasion. The antimicrobial activity of the triazenes against bacteria and fungi was investigated through disc diffusion screening. The antimicrobial efficacy of triazenes has been observed against both Gram-positive and Gram-negative bacteria, as well as multidrug-resistant strains, making them potential candidates for combating drug-resistant infections. Furthermore, triazenes possess favourable physicochemical properties, such as good stability, solubility, and low toxicity, which are essential for drug development. The structural versatility of triazenes allows for the modification of their chemical composition to enhance their potency, selectivity, and pharmacokinetic properties. 
These modifications can be tailored to target specific pathogens, increasing the potential for personalized treatment strategies. In conclusion, this study highlights the potential of triazenes as promising candidates for the development of novel antimalarial and antimicrobial therapeutics. Further investigations are necessary to determine the structure-activity relationships and optimize the pharmacological properties of these compounds. The results warrant additional research, including MIC studies, to further explore the antimicrobial activity of the triazenes. Ultimately, these findings contribute to the development of more effective strategies for combating malaria and microbial infections.

Keywords: malaria, anti-microbials, triazene, resistance

Procedia PDF Downloads 72
103 TARF: Web Toolkit for Annotating RNA-Related Genomic Features

Authors: Jialin Ma, Jia Meng

Abstract:

Genomic features, i.e., genome-based coordinates, are commonly used for the representation of biological features such as genes, RNA transcripts and transcription factor binding sites. For the analysis of RNA-related genomic features, such as RNA modification sites, a common task is to correlate these features with transcript components (5'UTR, CDS, 3'UTR) to explore their distribution characteristics in terms of transcriptomic coordinates, e.g., to examine whether a specific type of biological feature is enriched near transcription start sites. Existing approaches for performing these tasks involve the manipulation of a gene database, conversion from genome-based to transcript-based coordinates, and visualization methods that are capable of showing RNA transcript components and the distribution of the features. These steps are complicated and time consuming, especially for researchers who are not familiar with the relevant tools. To overcome this obstacle, we developed a dedicated web app, TARF, a web toolkit for annotating RNA-related genomic features. The TARF web tool intends to provide a web-based way to easily annotate and visualize RNA-related genomic features. Once a user has uploaded the features in BED format and specified a built-in transcript database or uploaded a customized gene database in GTF format, the tool can fulfill its three main functions. First, it adds annotations on gene and RNA transcript components. For every feature provided by the user, the overlaps with RNA transcript components are identified, and the information is combined in one table, which is available for copying and download. Summary statistics about ambiguous assignments are also produced. Second, the tool provides a convenient visualization method for the features at the single gene/transcript level. 
For the selected gene, the tool shows the features against the gene model in a genome-based view, and also maps the features to transcript-based coordinates to show their distribution along the single spliced RNA transcript. Third, a global transcriptomic view of the genomic features is generated using the Guitar R/Bioconductor package: the distribution of features on RNA transcripts is normalized with respect to RNA transcript landmarks, and the enrichment of the features on the different RNA transcript components is displayed. We tested the newly developed TARF toolkit with three different types of genomic features: H3K4me3 chromatin marks, RNA N6-methyladenosine (m6A) and RNA 5-methylcytosine (m5C), obtained from ChIP-Seq, MeRIP-Seq and RNA BS-Seq data, respectively. TARF successfully revealed their respective distribution characteristics, i.e., H3K4me3, m6A and m5C are enriched near transcription start sites, stop codons and 5'UTRs, respectively. Overall, TARF is a useful web toolkit for the annotation and visualization of RNA-related genomic features and should help simplify the analysis of such features, especially those related to RNA modifications.
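The central coordinate conversion the toolkit performs, from genome-based to transcript-based positions across spliced exons, can be sketched as follows. This is a simplified illustration under assumed conventions (0-based half-open exon intervals, sorted by start), not TARF's actual implementation:

```python
def genome_to_transcript(genome_pos, exons, strand="+"):
    """Map a genome-based coordinate to a transcript-based coordinate.

    exons: list of (start, end) intervals, 0-based half-open, sorted by start.
    Returns the 0-based offset along the spliced transcript, or None if the
    position falls in an intron or outside the transcript.
    """
    offset = 0
    for start, end in exons:
        if start <= genome_pos < end:
            tx_pos = offset + (genome_pos - start)
            if strand == "-":
                # on the minus strand the transcript runs right-to-left
                total = sum(e - s for s, e in exons)
                tx_pos = total - 1 - tx_pos
            return tx_pos
        offset += end - start
    return None
```

A feature at genome position 350 in a two-exon transcript with exons (100, 200) and (300, 400) lands at transcript offset 150, while a position at 250 falls in the intron and maps to nothing.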

Keywords: RNA-related genomic features, annotation, visualization, web server

Procedia PDF Downloads 183
102 Participatory Cartography for Disaster Reduction in Progreso, Yucatan, Mexico

Authors: Gustavo Cruz-Bello

Abstract:

Progreso is a coastal community in Yucatan, Mexico, highly exposed to floods produced by severe storms and tropical cyclones. A participatory cartography approach was used to help reduce flood disasters and assess social vulnerability within the community. The first step was to engage local risk-management authorities to facilitate the process. Two workshops were conducted. In the first, a poster-size printed high-spatial-resolution satellite image of the town was used to gather information from the participants: eight women and seven men, among them construction workers, students, government employees and fishermen, aged between 23 and 58 years. As a first task, participants were asked to locate emblematic places on the image to familiarize themselves with it. They were then asked to locate areas that get flooded and the buildings they use as refuges, to list the actions they usually take to reduce vulnerability, and to collectively propose others that might reduce disasters. The spatial information generated at the workshops was digitized and integrated into a GIS environment. A printed version of the map was reviewed by local risk-management experts, who validated the feasibility of the proposed actions. In the second workshop, we brought the information back to the community for feedback. Additionally, a survey was applied in one household per block to obtain socioeconomic, prevention and adaptation data. The information generated from the workshops was contrasted, through t and chi-squared tests, with the survey data in order to test the hypothesis that poorer or less educated people are less prepared to face floods (more vulnerable) and live near or among areas with a higher presence of floods. Results showed that a great majority of people in the community are aware of the hazard and are prepared to face it.
However, there was no consistent relationship between regularly flooded areas and people's average years of education, household services, or house modifications made against heavy rains. We can say that the participatory cartography intervention made participants aware of their vulnerability and led them to collectively reflect on actions that can reduce flood disasters. They also considered that the final map could be used as a communication and negotiation instrument with NGOs and government authorities. Poorer and less educated people were not found to be located in areas with a higher presence of floods.
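The statistical comparison described above, contrasting the workshop-derived flood areas with the household survey through t and chi-squared tests, can be sketched in Python. The numbers below are illustrative placeholders, not the study's data:

```python
from scipy.stats import chi2_contingency, ttest_ind

# Hypothetical survey data: years of education for households located
# inside vs. outside regularly flooded areas (illustrative values only).
edu_flooded = [6, 8, 9, 7, 10, 6, 8]
edu_not_flooded = [9, 11, 12, 8, 10, 13, 9]

# Welch t-test: does mean education differ between the two groups?
t_stat, t_p = ttest_ind(edu_flooded, edu_not_flooded, equal_var=False)

# 2x2 contingency table: preparedness vs. flooded location
table = [[20, 10],   # flooded area: prepared, not prepared
         [15, 25]]   # not flooded:  prepared, not prepared
chi2, chi_p, dof, expected = chi2_contingency(table)
```

A p-value above the chosen significance level, as the study found for education and house services, indicates no detectable association between flood exposure and the socioeconomic variable.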

Keywords: climate change, floods, Mexico, participatory mapping, social vulnerability

Procedia PDF Downloads 92
101 Remote Criminal Proceedings as Implication to Rethink the Principles of Criminal Procedure

Authors: Inga Žukovaitė

Abstract:

This paper presents postdoctoral research on remote criminal proceedings in court. Now that most countries have introduced the possibility of remote criminal proceedings into their procedural laws, it is possible not only to identify the weaknesses and strengths of the legal regulation but also to assess the effectiveness of the instrument and to develop an approach to the process. The example of some countries (for example, Italy) shows, on the one hand, that a criminal procedure based on orality and immediacy does not lend itself to easy modifications that pose even a slight threat of devaluing these principles in a society with well-established traditions of this procedure. On the other hand, such strong opposition and criticism make us ask whether we face the possibility of rethinking the traditional understanding of safeguards, in order to preserve their essence without devaluing their traditional package but looking for new components to replace or compensate for the so-called "loss" of safeguards. Reflection on technological progress in criminal procedural law indicates the need to rethink, on the basis of fundamental procedural principles, the safeguards that can replace or compensate for those that are in crisis as a result of the intervention of technological progress. Discussions in academic doctrine on the impact of technological interventions on the proceedings as such, or on the limits of such interventions, refer to the principles of criminal procedure as a point of reference. Given the limits of technology, scholarly debate still addresses whether the court will gradually become a mere site for the exercise of penal power, with the resultant deformation of the procedure itself as a physical ritual.
In this context, this work seeks to illustrate the relationship between remote criminal proceedings in court and the principle of immediacy, the concept of which rests on the application of different models of criminal procedure (inquisitorial and adversarial); the aim is to assess the challenges that the interaction of technological progress with the principles of criminal procedure poses for legal regulation. The main hypothesis to be tested is that the acceptance of remote proceedings is directly linked to the prevailing model of criminal procedure: the more the principles of the inquisitorial model are applied to the criminal process, the more acceptable a remote criminal trial is; conversely, the more the criminal process is based on an adversarial model, the more the remote criminal process is seen as incompatible with the principle of immediacy. To achieve this goal, the following tasks are set: to identify whether the adversarial and inquisitorial models differ in how they assess remote proceedings against the immediacy principle, and to analyse the main aspects of the regulation of remote criminal proceedings based on the examples of different countries (for example, Lithuania and Italy).

Keywords: remote criminal proceedings, principle of orality, principle of immediacy, adversarial model, inquisitorial model

Procedia PDF Downloads 40
100 Frequency Response of Complex Systems with Localized Nonlinearities

Authors: E. Menga, S. Hernandez

Abstract:

Finite Element Models (FEMs) are widely used to study and predict the dynamic properties of structures, and the prediction is usually much more accurate for a single component than for an assembly. Especially for structural dynamics studies in the low and middle frequency range, most complex FEMs can be seen as assemblies of linear components joined together at interfaces. From a modelling and computational point of view, these joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs run nonlinear analyses in the time domain and treat the whole structure as nonlinear, even if there is only one nonlinear degree of freedom (DOF) among thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology is presented for obtaining the nonlinear frequency response of structures whose nonlinearities can be considered localized sources. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications and obtains the Nonlinear Frequency Response Functions (NLFRFs) through an 'updating' process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and examining the implications of the nonlinear one. The response of the system is formulated in both the time and frequency domains. First, the modal database is extracted and the linear response is calculated. Second, the nonlinear response is obtained through the nonlinear SDMM, by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems.
The first is a two-DOF spring-mass-damper system, and the second takes into account a full aircraft FE model. In spite of their different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure that allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered to act linearly and the nonlinear behavior is restricted to a few degrees of freedom. The procedure is also attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analyses and easier implementation of optimization procedures for the calibration of nonlinear models.
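The 'updating' idea, correcting the underlying linear FRF at the nonlinear DOF and iterating to convergence, can be sketched for a two-DOF system with a localized cubic spring. This is a simplified equivalent-linearization sketch with made-up parameter values, not the authors' MATLAB implementation:

```python
import numpy as np

def nl_frf(omega, M, C, K, F, nl_dof, k3, iters=80):
    """Nonlinear response at one frequency by 'updating' the linear FRF:
    a cubic spring at nl_dof is replaced by its equivalent stiffness
    k_eq = (3/4) * k3 * |x|^2 and iterated to a fixed point."""
    x_amp = 0.0
    for _ in range(iters):
        K_eq = K.copy()
        K_eq[nl_dof, nl_dof] += 0.75 * k3 * x_amp ** 2
        x = np.linalg.solve(K_eq - omega ** 2 * M + 1j * omega * C, F)
        # under-relaxation keeps the fixed-point iteration stable
        x_amp = 0.5 * x_amp + 0.5 * abs(x[nl_dof])
    return x

# two-DOF spring-mass-damper example (illustrative values)
M = np.diag([1.0, 1.0])
K = 1e4 * np.array([[2.0, -1.0], [-1.0, 2.0]])
C = 1e-4 * K                      # light proportional damping
F = np.array([1.0, 0.0])          # harmonic force on DOF 0

x_lin = nl_frf(100.0, M, C, K, F, nl_dof=0, k3=0.0)   # k3 = 0 reduces to the linear FRF
x_nl = nl_frf(100.0, M, C, K, F, nl_dof=0, k3=1e9)    # hardening spring on DOF 0
```

Sweeping omega and repeating the update at each frequency yields the NLFRF; only the small solve is repeated, while the linear model is built once, which mirrors the computational advantage claimed above.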

Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber

Procedia PDF Downloads 245
99 Dynamic Simulation of Disintegration of Wood Chips Caused by Impact and Collisions during the Steam Explosion Pre-Treatment

Authors: Muhammad Muzamal, Anders Rasmuson

Abstract:

Wood is widely considered as a raw material for the production of bio-polymers, bio-fuels and value-added chemicals. A shortcoming of using wood as a raw material is that its enzymatic hydrolysis is difficult, because the accessibility of enzymes to hemicelluloses and cellulose is hindered by the complex chemical and physical structure of the wood. The steam explosion (SE) pre-treatment improves the digestion of wood material by creating both chemical and physical modifications in the wood. In this process, wood chips are first treated with steam at high pressure and temperature for a certain time in a steam treatment vessel. During this time, the chemical linkages between lignin and polysaccharides are cleaved and the stiffness of the material decreases. The steam discharge valve is then rapidly opened, and the steam and wood chips exit the vessel at very high speed. The fast-moving wood chips collide with each other and with the walls of the equipment and disintegrate into small pieces. More damaged and disintegrated wood has a larger surface area and increased accessibility to hemicelluloses and cellulose. The energy required for the same increase in specific surface area is about 70% higher for a conventional mechanical technique, i.e. an attrition mill, than for the steam explosion process. The mechanism of wood disintegration during SE pre-treatment has received very little study. In this study, we have simulated the collision and impact of wood chips (dimensions 20 mm x 20 mm x 4 mm) with each other and with the walls of the vessel. The wood chips are simulated as a 3D orthotropic material. Damage and fracture in the wood material have been modelled using Hashin's 3D damage model. This has been accomplished by developing a user-defined subroutine and implementing it in the FE software ABAQUS.
The elastic and strength properties used for the simulations are those of spruce wood at 12% and 30% moisture content and at 20 and 160 °C, because the impacted wood chips have been pre-treated with steam at high temperature and pressure. We have simulated several cases to study the effects of the elastic and strength properties of the wood, the velocity of the moving chip, and the orientation of the wood chip at the moment of impact on the damage in the wood chips. The disintegration patterns captured by the simulations are very similar to those observed experimentally in steam-exploded wood. The simulation results show that wood chips moving with higher velocity disintegrate more, that higher moisture content and temperature decrease the elastic properties and increase damage, and that impact and collision in specific directions cause easier disintegration. This model can be used to design steam explosion equipment more efficiently.
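Hashin-type damage initiation combines the stress components acting on a failure plane into a quadratic index that is compared against 1. A minimal sketch of one such mode (tension along the grain, with made-up strength values; the paper's ABAQUS subroutine implements the full 3D orthotropic model) is:

```python
def hashin_tension_index(s11, s12, s13, Xt, S12, S13):
    """Quadratic Hashin-type failure index for tension along the grain.

    s11: normal stress along the grain; s12, s13: shear stresses.
    Xt: tensile strength along the grain; S12, S13: shear strengths.
    Damage initiates when the index reaches 1.
    """
    return (s11 / Xt) ** 2 + (s12 / S12) ** 2 + (s13 / S13) ** 2

# illustrative strengths in MPa -- not the spruce values used in the paper
Xt, S12, S13 = 90.0, 7.0, 7.0
```

In the FE simulation, such an index would be evaluated at every integration point each increment, and material stiffness degraded wherever it exceeds 1.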

Keywords: dynamic simulation, disintegration of wood, impact, steam explosion pretreatment

Procedia PDF Downloads 371
98 The Role of Rapid Maxillary Expansion in Managing Obstructive Sleep Apnea in Children: A Literature Review

Authors: Suleman Maliha, Suleman Sidra

Abstract:

Obstructive sleep apnea (OSA) is a sleep disorder that can result in behavioral and psychomotor impairments in children. The classical treatment modalities for OSA have been continuous positive airway pressure and adenotonsillectomy. However, orthodontic intervention through rapid maxillary expansion (RME), commonly used to manage transverse skeletal maxillary discrepancies, has also been applied. Aim and objectives: The aim of this study is to determine the efficacy of rapid maxillary expansion in paediatric patients with obstructive sleep apnea by assessing the pre- and post-treatment mean apnea-hypopnea index (AHI) and oxygen saturations. Methodology: Literature was identified through a rigorous search of the Embase, PubMed, and CINAHL databases. Articles published from 2012 onwards were selected. The inclusion criteria consisted of patients aged 18 years and under, with no systemic disease, prior adenotonsillar surgery, or adenotonsillar hypertrophy, undergoing RME with AHI measurements before and after treatment. In total, six suitable papers were identified. Results: Three studies assessed patients before RME and 12 months after. The first consisted of 15 patients with an average age of 7.5 years; RME resulted in both higher oxygen saturations (+5.3%) and an improved AHI (-4.2 events). The second assessed 11 patients aged 5–8 years and also noted improvements, with the mean AHI falling from 6.1 to 2.4 and oxygen saturations increasing from 93.1% to 96.8%. The third reviewed 14 patients aged 6–9 years and similarly found an AHI reduction from 5.7 to 4.4 and an oxygen saturation increase from 89.8% to 95.5%. All changes noted in these studies were statistically significant. A long-term study reviewed 23 patients aged 6–12 years annually for 12 years after RME treatment and found that the mean AHI fell from 12.2 to 0.4, with oxygen saturations improving from 78.9% to 95.1%.
Another study assessed 19 patients aged 9–12 years at two months into RME and four months post-treatment. Improvements were noted at both stages, with an overall reduction in the mean AHI from 16.3 to 0.8 and an overall increase in oxygen saturations from 77.9% to 95.4%. The final study assessed 26 children aged 7–11 years on completion of individual treatment and found an AHI reduction from 6.9 to 5.3; oxygen saturation, however, remained stagnant at 96.0%, which was not considered clinically significant. Conclusion: Overall, the current evidence suggests that RME is a promising treatment option for paediatric patients with OSA. It can provide efficient and conservative treatment; however, early diagnosis is crucial. As various factors can contribute to OSA, it is important that each case is treated on its individual merits. Going forward, there is a need for more randomized controlled trials studying larger cohorts. Research into the long-term effects of RME and potential relapse among cases would also be useful.

Keywords: orthodontics, sleep apnea, maxillary expansion, review

Procedia PDF Downloads 56
97 Rapid Detection of Cocaine Using Aggregation-Induced Emission and Aptamer Combined Fluorescent Probe

Authors: Jianuo Sun, Jinghan Wang, Sirui Zhang, Chenhan Xu, Hongxia Hao, Hong Zhou

Abstract:

In recent years, the diversification and industrialization of drug-related crime have posed significant threats to public health and safety globally. The widening and increasingly younger demographics of drug users and the persistence of drug-impaired driving incidents underscore the urgency of this issue. Drug detection, a specialized forensic activity, is pivotal in identifying and analyzing substances involved in drug crimes; it relies on pharmacological and chemical knowledge and employs analytical chemistry and modern detection techniques. However, current drug detection methods are limited by their inability to perform semi-quantitative, real-time field analyses: they require extensive, complex laboratory-based preprocessing, expensive equipment and specialized personnel, and are hindered by long processing times. This study introduces an alternative approach using nucleic acid aptamers and Aggregation-Induced Emission (AIE) technology. Nucleic acid aptamers, artificially selected for their specific binding to target molecules and their stable spatial structures, represent a new generation of biosensors following antibodies. Rapid advances in AIE technology, particularly in tetraphenylethene-based luminogens, offer simplicity of synthesis and versatility of modification, making them ideal for fluorescence analysis. This work successfully synthesized, isolated, and purified an AIE molecule and constructed a probe comprising the AIE molecule, a nucleic acid aptamer, and an exonuclease for cocaine detection. The probe demonstrated significant changes in relative fluorescence intensity and selectivity towards cocaine over other drugs. Using 4-Butoxytriethylammonium Bromide Tetraphenylethene (TPE-TTA) as the fluorescent probe, the aptamer as the recognition unit, and Exo I as an auxiliary, the system achieved rapid detection of cocaine within 5 min in aqueous solution and in urine, with detection limits of 1.0 and 5.0 µmol/L, respectively.
The probe maintained stability and interference resistance in urine, enabling quantitative cocaine detection within a certain concentration range. This fluorescent sensor significantly reduces sample preprocessing time, provides a basis for rapid on-site cocaine detection, and shows promise for miniaturized testing setups.
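Detection limits like the µmol/L values quoted above are commonly estimated from a linear calibration as 3σ(blank)/slope. A generic sketch of that calculation, using synthetic numbers rather than the study's calibration data:

```python
import numpy as np

def limit_of_detection(conc, signal, blank_signals):
    """Estimate the limit of detection as 3 * sigma(blank) / calibration slope."""
    slope, _ = np.polyfit(conc, signal, 1)   # linear fit: signal = slope*conc + intercept
    sigma = np.std(blank_signals, ddof=1)    # sample std of replicate blanks
    return 3.0 * sigma / slope

# synthetic calibration: fluorescence response vs. cocaine concentration (umol/L)
conc = np.array([1.0, 2.0, 3.0, 4.0])
signal = 2.0 * conc + 1.0            # perfectly linear, slope = 2
blanks = [1.0, 1.2, 0.8]             # replicate blank measurements
lod = limit_of_detection(conc, signal, blanks)
```

With these synthetic numbers the blank standard deviation is 0.2 and the slope is 2, giving an LOD of 0.3 µmol/L; in practice the slope and blank spread come from the measured calibration series.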

Keywords: drug detection, aggregation-induced emission (AIE), nucleic acid aptamer, exonuclease, cocaine

Procedia PDF Downloads 32