Search results for: molecular identification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4708

208 Development of an Improved Paradigm for the Tourism Sector in the Department of Huila, Colombia: A Theoretical and Empirical Approach

Authors: Laura N. Bolivar T.

Abstract:

The importance of tourism for regional development rests largely on the collaborative, cooperative and competitive relationships among the agents involved. The fostering of associativity processes, and in particular the cluster approach, emphasizes the beneficial outcomes of concentrating enterprises, where innovation and entrepreneurship flourish and shape the dynamics of tourism empowerment. The department of Huila, located in the south-west of Colombia, is the country's largest coffee producer, yet it contributes little to the national GDP. Its economic development strategy therefore seeks greater dynamism, and Huila could be consolidated as a leading destination for cultural, ecological and heritage tourism if, at a minimum, the public policy-making processes for the tourism management of La Tatacoa Desert, San Agustin Park and Bambuco's National Festival were implemented more efficiently. Accordingly, this study addresses the potential restrictions on, and beneficial factors for, the consolidation of the tourism sector of Huila, Colombia as a cluster, and how this could affect regional development. A set of theoretical frameworks, such as the Tourism Routes Approach, the Tourism Breeding Environment and the Community-Based Tourism Method, together with a collection of international experiences describing tourism clustering processes and their most salient problems, is therefore analyzed to draw up learning points, procedural structures and success-driven factors to be contrasted with the local characteristics of Huila, the region under study.
This characterization involves primary and secondary information collection methods and covers the South American and Colombian contexts, the identification of the actors involved and their roles, the main interactions among them, the major tourism products and their infrastructure, the visitors' perspective on the situation, and a recap of the related needs and benefits for the host community. By comparing the umbrella concepts and the theoretical and empirical approaches with the local specificities of the tourism sector in Huila, an array of shortcomings is analytically constructed, and a series of guidelines is proposed to overcome them while simultaneously fostering economic development and positively affecting Huila's well-being. This non-exhaustive bundle of guidelines focuses on fostering cooperative linkages in the actors' network, adopting innovations in Information and Communication Technologies, reinforcing the supporting infrastructure, promoting the destinations, including the lesser-known places, designing an information system that enables the tourism network to assess the situation on the basis of reliable data, increasing competitiveness, developing participative public policy-making processes, and educating the host community about its touristic richness. Under these conditions, cluster dynamics would drive the tourism sector towards articulation and joint effort, so that the agents involved and the local particularities would be adequately assisted in coping with the current changing environment of globalization and competition.

Keywords: innovative strategy, local development, network of tourism actors, tourism cluster

Procedia PDF Downloads 127
207 Immunoliposome-Mediated Drug Delivery to Plasmodium-Infected and Non-Infected Red Blood Cells as a Dual Therapeutic/Prophylactic Antimalarial Strategy

Authors: Ernest Moles, Patricia Urbán, María Belén Jiménez-Díaz, Sara Viera-Morilla, Iñigo Angulo-Barturen, Maria Antònia Busquets, Xavier Fernàndez-Busquets

Abstract:

Given the absence of an effective vaccine against malaria and its severe clinical manifestations, which cause nearly half a million deaths every year, this disease remains a major threat to life. Moreover, currently marketed antimalarial approaches rest on administering drugs on their own, promoting the emergence of drug-resistant parasites: drug payloads delivered into the parasitized erythrocyte are rarely high enough to kill the intracellular pathogen while minimizing the risk of toxic side effects to the patient. This dichotomy has been successfully addressed through the specific delivery of immunoliposome (iLP)-encapsulated antimalarials to Plasmodium falciparum-infected red blood cells (pRBCs). Unfortunately, this strategy has not progressed towards clinical application, and in vitro assays rarely reach drug efficacy improvements above 10-fold. Here, we show that encapsulation efficiencies of >96% can be achieved for the weakly basic drugs chloroquine (CQ) and primaquine using the pH-gradient active loading method in liposomes composed of neutrally charged, saturated phospholipids. Targeting antibodies are best conjugated through their primary amino groups, adjusting the chemical crosslinker concentration to retain significant antigen recognition. Antigens from non-parasitized RBCs have also been considered as targets for the intracellular delivery of drugs that do not affect erythrocytic metabolism. Using this strategy, we have obtained unprecedented nanocarrier targeting to early intraerythrocytic stages of the malaria parasite, for which there is a lack of specific extracellular molecular tags.
Polyethylene glycol-coated liposomes conjugated with monoclonal antibodies specific for the erythrocyte surface protein glycophorin A (anti-GPA iLP) were capable of targeting 100% of RBCs and pRBCs at the low concentration of 0.5 μM total lipid in the culture, with >95% of added iLPs retained in the cells. When exposed for only 15 min to P. falciparum in vitro cultures synchronized at early stages, free CQ had no significant effect on parasite viability at up to 200 nM drug, whereas iLP-encapsulated 50 nM CQ completely arrested its growth. Furthermore, when assayed in vivo in P. falciparum-infected humanized mice, anti-GPA iLPs cleared the pathogen below detectable levels at a CQ dose of 0.5 mg/kg. In comparison, free CQ administered at 1.75 mg/kg was, at most, 40-fold less efficient. Our data suggest that this significant improvement in antimalarial efficacy is in part due to a prophylactic effect of CQ encountered by the pathogen in its host cell at the very moment of invasion.

Keywords: immunoliposomal nanoparticles, malaria, prophylactic-therapeutic polyvalent activity, targeted drug delivery

Procedia PDF Downloads 355
206 Capability of a Single Antigen to Induce Both Protective and Disease Enhancing Antibody: An Obstacle in the Creation of Vaccines and Passive Immunotherapies

Authors: Parul Kulshreshtha, Subrata Sinha, Rakesh Bhatnagar

Abstract:

This study used B. anthracis as a model pathogen. On infecting a host, B. anthracis secretes three proteins: protective antigen (PA, 83 kDa), edema factor (EF, 89 kDa) and lethal factor (LF, 90 kDa). These three proteins are the components of the two anthrax toxins. PA binds to the cell-surface receptors tumor endothelial marker 8 (TEM8) and capillary morphogenesis protein 2 (CMG2); TEM8 and CMG2 interact with LDL-receptor-related protein 6 (LRP6) for endocytosis of EF and LF. On entering the cell, EF acts as a calmodulin-dependent adenylate cyclase that causes a prolonged increase in cytosolic cyclic adenosine monophosphate (cAMP). LF is a metalloprotease that cleaves most isoforms of mitogen-activated protein kinase kinases (MAPKK/MEK) close to their N-terminus. By secreting these two toxins, B. anthracis ensures the death of the host. Once systemic toxin levels rise, antibiotics alone cannot save the host; toxin-specific inhibitors therefore have to be developed, and monoclonal antibodies have been raised to neutralize the toxic effects of the anthrax toxins. We created hybridomas from the spleens of mice actively immunized with rLFn (the recombinant N-terminal domain of the lethal factor of B. anthracis) to obtain anti-toxin antibodies. Later, a separate group of mice was immunized with rLFn to obtain a polyclonal control for passive immunization studies of the monoclonal antibodies. This led to the identification of one cohort of rLFn-immunized mice that harboured disease-enhancing polyclonal antibodies. At the same time, the monoclonal antibodies from all the hybridomas were being tested. Two hybridomas secreted monoclonal antibodies (H8 and H10) that were cross-reactive with EF (edema factor) and LF (lethal factor), while the other two secreted LF-specific antibodies (H7 and H11). The protective efficacy of H7, H8, H10 and H11 was investigated; H7, H8 and H10 were found to be protective.
H11 was found to have disease-enhancing characteristics in vitro and in a mouse model of challenge with B. anthracis. In this study, the disease-enhancing character of the H11 monoclonal antibody and of the anti-rLFn polyclonal sera was investigated. Combining H11 with the protective monoclonal antibodies (H8 and H10) reduced its disease-enhancing nature both in vitro and in vivo, but combining H11 with LETscFv (an scFv with VH and VL identical to H10 but lacking the Fc region) could not abrogate the disease-enhancing character of the H11 mAb. It was therefore concluded that the Fc portion is absolutely essential for the interaction of H10 with H11 that suppresses disease enhancement. Our study indicates that the protective potential of an antibody depends equally on its idiotype/antigen specificity and on its isotype. A number of monoclonal and engineered antibodies are being explored as immunotherapeutics, and it is essential to characterize each one for its individual and combined protective potential. Although new in the sphere of toxin-based diseases, characterizing the disease-enhancing nature of polyclonal as well as monoclonal antibodies is extremely important, because several antiviral therapeutics and vaccines have failed in the face of this phenomenon. Passive immunotherapy thus needs to be well formulated to avoid any contraindications.

Keywords: immunotherapy, polyclonal, monoclonal, antibody-dependent disease enhancement

Procedia PDF Downloads 369
205 Identification and Characterization of Novel Genes Involved in Quinone Synthesis in the Odoriferous Defensive Stink Glands of the Red Flour Beetle, Tribolium castaneum

Authors: B. Atika, S. Lehmann, E. Wimmer

Abstract:

Chemical defense is very common in the insect world. Defensive substances serve a wide variety of functions for beetles, acting as repellents, toxicants, insecticides and antimicrobials: beetles react to predators, invaders and parasitic microbes by releasing toxic and repellent substances, which may be directed against a large array of potential target organisms, function in boiling bombardment, or act as surfactants. Coleoptera usually biosynthesize and store their defensive compounds in complex secretory organs known as odoriferous defensive stink glands. The red flour beetle, Tribolium castaneum (Coleoptera: Tenebrionidae), uses these glands to produce antimicrobial p-benzoquinones and 1-alkenes. The morphology of the stink glands has been studied in detail in tenebrionid beetles; however, very little is known about the genes involved in the production of the gland secretion. Here, we studied a subset of genes that are essential for benzoquinone production in the red flour beetle. In a first phase, we selected 74 potential candidate genes from a genome-wide RNA interference (RNAi) knockdown screen named 'iBeetle' and functionally characterized all of them by RNAi-mediated gene knockdown. 33 of them were observed to alter the stink gland phenotype and were therefore selected for a subsequent gas chromatography-mass spectrometry (GC-MS) analysis of secretion volatiles in the respective knockdown glands. In this GC-MS analysis, 7 candidate genes displayed strongly altered glands upon knockdown, in terms of secretion color and chemical composition, indicating a key role in the biosynthesis of the gland secretion. Morphologically altered stink glands were found for an odorant receptor and a protein kinase superfamily gene.
Subsequent GC-MS analysis of secretion volatiles revealed reduced benzoquinone levels in the LIM domain, PDZ domain and PBP/GOBP family knockdowns, and a complete lack of benzoquinones in the knockdowns of sulfatase-modifying factor enzyme 1 and a sulfate transporter family gene. Based on stink gland transcriptome data, we analyzed the function of sulfatase-modifying factor enzyme 1 and the sulfate transporter family gene via RNAi-mediated gene knockdowns, GC-MS, in situ hybridization, and enzymatic activity assays. Morphologically altered stink glands were noted in knockdowns of both genes. Furthermore, GC-MS analysis of secretion volatiles showed a complete lack of benzoquinones in the knockdowns of these two genes. In situ hybridization showed that both genes are expressed around the vesicles of a certain subgroup of secretory stink gland cells. Enzymatic activity assays on stink gland tissue showed that these genes are involved in p-benzoquinone biosynthesis. These results suggest that sulfatase-modifying factor enzyme 1 and the sulfate transporter family gene play a role specifically in benzoquinone biosynthesis in red flour beetles.

Keywords: Red Flour Beetle, defensive stink gland, benzoquinones, sulfate transporter, sulfatase-modifying factor enzyme 1

Procedia PDF Downloads 138
204 Blended Learning in a Mathematics Classroom: A Focus in Khan Academy

Authors: Sibawu Witness Siyepu

Abstract:

This study explores the effects of an instructional design using blended learning on Engineering students' learning of radian measures. Blended learning is an educational programme that combines online digital media with traditional classroom methods; it requires the physical presence of both lecturer and student in a mathematics computer laboratory while giving students an element of control over time, place, path or pace. The focus was on the use of Khan Academy to supplement traditional classroom interactions. Khan Academy is a non-profit educational organisation created by educator Salman Khan with the goal of creating an accessible place for students to learn through watching videos in a computer-assisted environment. The researcher, who is also a lecturer in a mathematics support programme, collected data by instructing students to watch Khan Academy videos on radian measures and by supplying students with traditional classroom activities, namely radian measure exercises extracted from the Internet. Students were given opportunities to engage in class discussions, social interactions and collaboration. These activities required students to write formative assessment tests, whose purpose was to probe the students' understanding of radian measures, including the errors and misconceptions displayed in their calculations. The identification of errors and misconceptions serves as a pointer to students' weaknesses and strengths in their learning of radian measures. At the end of data collection, semi-structured interviews were administered to a purposefully sampled group to explore their perceptions of, and feedback on, the use of the blended learning approach in the teaching and learning of radian measures. The study employed the Algebraic Insight Framework to analyse the data collected.
The Algebraic Insight Framework is a subset of symbol sense that allows a student to enter expressions into a computer-assisted system correctly and efficiently. This study offered students opportunities to work through topics and subtopics on radian measures on a computer through the lens of Khan Academy, which demonstrates the procedures followed to reach solutions of mathematical problems. The researcher explained mathematical concepts and facilitated the process of reinventing rules and formulae in the learning of radian measures. Lastly, activities that reinforce students' understanding of radians were distributed. Results showed that the approach enthused the students in their learning of radian measures. Learning through videos prompted the students to ask questions, which brought clarity and sense-making to the classroom discussions. The data revealed that sense-making through the reinvention of rules and formulae helped the students enhance their learning of radian measures. This study recommends that the use of Khan Academy in blended learning be introduced as a socialisation programme for all first-year students, preparing students who are unfamiliar with computers to become conversant with Khan Academy as a powerful tool in the learning of mathematics. Khan Academy is a key technological tool that is pivotal for developing students' autonomy in the learning of mathematics and that promotes collaboration with lecturers and peers.

Keywords: algebraic insight framework, blended learning, Khan Academy, radian measures

Procedia PDF Downloads 292
203 Fiber Stiffness Detection of GFRP Using Combined ABAQUS and Genetic Algorithms

Authors: Gyu-Dong Kim, Wuk-Jae Yoo, Sang-Youl Lee

Abstract:

Composite structures offer numerous advantages over conventional structural systems in the form of higher specific stiffness and strength, lower life-cycle costs, and benefits such as easy installation and improved safety. Recently, there has been a considerable increase in the use of composites in engineering applications and as wraps for seismic upgrading and repair. However, these composites deteriorate with time because of aging materials, excessive use, repetitive loading, climatic conditions, manufacturing errors, and deficiencies in inspection methods. In particular, damaged fibers in a composite cause significant degradation of structural performance. To reduce the failure probability of composites in service, techniques to assess their condition and prevent the continual growth of fiber damage are required. Condition assessment technology and nondestructive evaluation (NDE) techniques have provided various solutions for the safety of structures by detecting damage or defects from the static or dynamic responses induced by external loading. A variety of techniques based on detecting changes in the static or dynamic behavior of isotropic structures has been developed over the last two decades. These methods, based on analytical approaches, are limited in their ability to deal with complex systems, primarily because of their limitations in handling different loading and boundary conditions. Recently, investigators have introduced direct search methods based on metaheuristics and artificial intelligence, such as genetic algorithms (GA), simulated annealing (SA), and neural networks (NN), and have applied these methods promisingly to the field of structural identification.
Among these, GAs attract attention because, in dealing with complex problems, they do not require a considerable amount of data in advance and, in contrast to classical gradient-based optimization techniques, make a global solution search possible. In this study, we propose an alternative damage-detection technique that determines the degraded stiffness distribution of vibrating laminated composites made of glass fiber-reinforced polymer (GFRP). The proposed method uses a modified form of the bivariate Gaussian distribution function to detect degraded stiffness characteristics. In addition, this study presents a method to detect the fiber property variation of laminated composite plates from the micromechanical point of view. A finite element model is used to study the free vibrations of laminated composite plates with fiber stiffness degradation. To solve the inverse problem with the combined method, only the first mode shape of the structure is used as the measured frequency data. In particular, this study focuses on the effect of the interaction among various parameters, such as fiber angles, layup sequences, and damage distributions, on fiber-stiffness damage detection.
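As a minimal illustration of the direct-search idea described above, the sketch below runs a real-coded genetic algorithm against a toy stiffness model: each gene is an element stiffness factor and the objective is to match a "measured" first natural frequency. The surrogate frequency function, the element count and all GA settings are hypothetical stand-ins for the ABAQUS modal analysis and GFRP plate parameters used in the study.

```python
import random

# Toy surrogate for the FE model: first natural frequency of a plate whose
# element stiffness factors are given by `d` (1.0 = intact, <1.0 = degraded).
# In the combined method, this evaluation would be an ABAQUS modal analysis.
def first_frequency(d):
    k_eff = sum(d) / len(d)          # crude effective stiffness
    return 100.0 * k_eff ** 0.5      # Hz, arbitrary scaling

TRUE_DAMAGE = [1.0, 0.6, 1.0, 0.8]      # hypothetical degraded distribution
TARGET = first_frequency(TRUE_DAMAGE)   # stands in for measured data

def fitness(d):
    return -abs(first_frequency(d) - TARGET)  # maximize -> zero frequency error

def ga(pop_size=40, genes=4, gens=200, pm=0.2):
    random.seed(1)
    pop = [[random.uniform(0.3, 1.0) for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, genes)
            child = a[:cut] + b[cut:]         # one-point crossover
            if random.random() < pm:          # Gaussian mutation, clamped
                i = random.randrange(genes)
                child[i] = min(1.0, max(0.3, child[i] + random.gauss(0, 0.05)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
print(best, abs(first_frequency(best) - TARGET))
```

Note that with a single frequency as the only measurement the inverse problem is non-unique, which is why the study brings in mode shapes and the bivariate Gaussian parameterization to constrain the damage distribution.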

Keywords: stiffness detection, fiber damage, genetic algorithm, layup sequences

Procedia PDF Downloads 251
202 Organizational Resilience in the Perspective of Supply Chain Risk Management: A Scholarly Network Analysis

Authors: William Ho, Agus Wicaksana

Abstract:

Anecdotal evidence from the last decade shows that the occurrence of disruptive events and uncertainties in the supply chain is increasing. The coupling of these events with an increasingly complex and interdependent business environment leads to devastating impacts that quickly propagate within and across organizations. For example, the recent COVID-19 pandemic increased global supply chain disruption frequency by at least 20% in 2020 and is projected to have a cumulative cost of $13.8 trillion by 2024. This crisis has drawn attention to organizational resilience as a means of weathering business uncertainty. However, the concept has been criticized for being vague and lacking a consistent definition, which reduces its significance for practice and research. This study addresses that issue by providing a comprehensive review of the conceptualization, measurement, and antecedents of organizational resilience as discussed in the supply chain risk management (SCRM) literature. We performed a hybrid scholarly network analysis, combining citation-based and text-based approaches, on 252 articles published from 2000 to 2021 in top-tier journals selected on three parameters: AJG/ABS ranking, the UT Dallas and FT50 lists, and editorial board review. Specifically, we employed Bibliographic Coupling Analysis in the research cluster formation stage and Co-word Analysis in the research cluster interpretation and analysis stage. Our analysis reveals three major clusters of resilience research in the SCRM literature: (1) supply chain network design and optimization, (2) organizational capabilities, and (3) digital technologies.
We portray the research process of the last two decades in terms of the exemplar studies, the problems studied, the commonly used approaches and theories, and the solutions provided in each cluster. We then provide a conceptual framework for the conceptualization and antecedents of resilience based on studies in these clusters and highlight potential areas that need further study. Finally, we leverage the concept of abnormal operating performance to propose a new measurement strategy for resilience. This measurement overcomes the limitation of most current measurements, which are event-dependent and focus on the resistance or recovery stage without capturing the growth stage. In conclusion, this study provides a robust literature review through a scholarly network analysis that increases the completeness and accuracy of research cluster identification and analysis for understanding the conceptualization, antecedents, and measurement of resilience. It also enables a comprehensive review of resilience research in the SCRM literature by including research articles published during the pandemic and connecting this development with the plethora of articles published over the last two decades. From the managerial perspective, this study provides practitioners with clarity on the conceptualization and critical success factors of firm resilience from the SCRM perspective.
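The bibliographic coupling step used for cluster formation can be sketched in a few lines: two papers are coupled when their reference lists overlap, and the overlap size (here cosine-normalized) becomes an edge weight for clustering. The paper IDs and reference sets below are illustrative stand-ins, not the study's actual 252-article corpus or software.

```python
# Hypothetical corpus: each paper maps to the set of works it cites.
refs = {
    "P1": {"R1", "R2", "R3", "R4"},
    "P2": {"R2", "R3", "R5"},
    "P3": {"R6", "R7"},
}

def coupling(a, b):
    """Cosine-normalized bibliographic coupling strength of papers a and b."""
    shared = refs[a] & refs[b]
    if not shared:
        return 0.0
    return len(shared) / (len(refs[a]) * len(refs[b])) ** 0.5

# Pairwise coupling strengths; edges above a threshold would feed a
# community-detection step to form the research clusters.
pairs = {(a, b): coupling(a, b) for a in refs for b in refs if a < b}
print(pairs)
```

In a full analysis these weights form a paper-to-paper network on which community detection yields the research clusters, which are then interpreted with co-word analysis.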

Keywords: supply chain risk management, organizational resilience, scholarly network analysis, systematic literature review

Procedia PDF Downloads 54
201 Defective Autophagy Disturbs Neural Migration and Network Activity in hiPSC-Derived Cockayne Syndrome B Disease Models

Authors: Julia Kapr, Andrea Rossi, Haribaskar Ramachandran, Marius Pollet, Ilka Egger, Selina Dangeleit, Katharina Koch, Jean Krutmann, Ellen Fritsche

Abstract:

It is widely acknowledged that animal models do not always represent human disease. Human brain development in particular is difficult to model in animals because of a variety of structural and functional species specificities. This causes significant discrepancies between predicted and apparent drug efficacies in clinical trials and their subsequent failure. Emerging alternatives based on 3D in vitro approaches, such as human brain spheres or organoids, may in the future reduce and ultimately replace animal models. Here, we present a human induced pluripotent stem cell (hiPSC)-based 3D neural in vitro disease model for Cockayne Syndrome B (CSB). CSB is a rare hereditary disease accompanied by severe neurological defects, such as microcephaly, ataxia and intellectual disability, with currently no treatment options. The aim of this study is therefore to investigate the molecular and cellular defects found in neural hiPSC-derived CSB models, since understanding the underlying pathology of CSB enables the development of treatment options. The two CSB models used in this study comprise a patient-derived hiPSC line with its isogenic control, as well as a CSB-deficient cell line generated on a healthy hiPSC line (IMR90-4) background, thereby excluding effects related to genetic background. Neurally induced and differentiated brain sphere cultures were characterized via RNA sequencing, western blot (WB), immunocytochemistry (ICC) and multielectrode arrays (MEAs). CSB deficiency leads to altered gene expression of markers for autophagy, focal adhesion and neural network formation. Cell migration was significantly reduced and electrical activity significantly increased in the disease cell lines. These data hint at the cellular pathologies possibly underlying CSB. Induction of autophagy partially rescued the migration phenotype, suggesting a crucial role of disturbed autophagy in the defective neural migration of the disease lines.
Altered autophagy may also lead to inefficient mitophagy. Accordingly, the disease cell lines were shown to have a lower basal mitochondrial activity and a higher susceptibility to mitochondrial stress induced by rotenone. Since mitochondria play an important role in neurotransmitter cycling, we suggest that defective mitochondria may lead to the altered electrical activity in the disease cell lines. Failure to clear the defective mitochondria by mitophagy, and thus missing initiation cues for new mitochondrial production, could potentiate this problem. With our data, we aim to establish a disease adverse outcome pathway (AOP), thereby adding to the in-depth understanding of this multi-faceted disorder and subsequently contributing to alternative drug development.

Keywords: autophagy, disease modeling, in vitro, pluripotent stem cells

Procedia PDF Downloads 105
200 An Aptasensor Based on Magnetic Relaxation Switch and Controlled Magnetic Separation for the Sensitive Detection of Pseudomonas aeruginosa

Authors: Fei Jia, Xingjian Bai, Xiaowei Zhang, Wenjie Yan, Ruitong Dai, Xingmin Li, Jozef Kokini

Abstract:

Pseudomonas aeruginosa is a Gram-negative, aerobic, opportunistic human pathogen present in soil, water, and food. It has been recognized as a representative food-borne spoilage bacterium and can cause many types of infection. Considering the casualties and property losses caused by P. aeruginosa, developing a rapid and reliable technique for its detection is crucial. Whole-cell aptasensors, emerging biosensors that use an aptamer as a capture probe to bind the whole cell, have attracted much attention for food-borne pathogen detection because of their convenience and high sensitivity. Here, a low-field magnetic resonance imaging (LF-MRI) aptasensor for the rapid detection of P. aeruginosa was developed. The basic detection principle of the magnetic relaxation switch (MRSw) nanosensor lies in the 'T₂-shortening' effect of magnetic nanoparticles in NMR measurements: briefly, the transverse relaxation time (T₂) of neighboring water protons shortens when magnetic nanoparticles cluster through cross-linking upon recognition and binding of a biological target, or simply when the concentration of the magnetic nanoparticles increases. Such shortening reflects both state changes (aggregation or dissociation) and concentration changes of the magnetic nanoparticles and can be detected using NMR relaxometry or MRI scanners. In this work, magnetic nanoparticles of two different sizes, 10 nm (MN₁₀) and 400 nm (MN₄₀₀) in diameter, were first separately immobilized with anti-P. aeruginosa aptamer through 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC)/N-hydroxysuccinimide (NHS) chemistry to capture and enrich P. aeruginosa cells. On incubation with the target, a 'sandwich' (MN₁₀-bacteria-MN₄₀₀) complex is formed, driven by the binding of MN₄₀₀ to P. aeruginosa through aptamer recognition as well as the aggregation of MN₁₀ on the surface of P. aeruginosa.
Because the different saturation magnetizations of MN₁₀ and MN₄₀₀ give them different behavior in a magnetic field, the MN₁₀-bacteria-MN₄₀₀ complex, as well as the unreacted MN₄₀₀ in solution, can be quickly removed by magnetic separation, leaving only unreacted MN₁₀ in the solution. The remaining MN₁₀, which are superparamagnetic and stable in a low-field magnet, serve as the signal readout for the T₂ measurement. Under optimal conditions, the LF-MRI platform provides both image analysis and quantitative detection of P. aeruginosa, with a detection limit as low as 100 cfu/mL. The feasibility and specificity of the aptasensor were demonstrated on real food samples and validated against plate counting. Requiring only two steps and less than 2 hours, this robust aptasensor detects P. aeruginosa over a wide linear range, from 3.1 × 10² cfu/mL to 3.1 × 10⁷ cfu/mL, which is superior to the conventional plate counting method and other molecular biology assays. Moreover, the aptasensor can potentially detect other bacteria or toxins by switching to suitable aptamers. Considering its accuracy, feasibility, and practicality, this whole-cell aptasensor provides a promising platform for the quick, direct and accurate determination of food-borne pathogens at the cell level.
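As a hedged illustration of how a remaining-MN₁₀ T₂ readout could be turned into a bacterial count, the sketch below fits a log-linear calibration curve and inverts it. All T₂ values and the assumed linear log₁₀(cfu)-T₂ relationship over the working range are invented for illustration; they are not the paper's calibration data.

```python
import math

# Hypothetical calibration: measured T2 (ms) of the remaining MN10 vs known
# P. aeruginosa loads (cfu/mL). Since bound MN10 are removed with the bacteria,
# fewer MN10 remain at higher loads and T2 rises (less T2 shortening), so
# log10(cfu) vs T2 is assumed ~linear over the sensor's working range.
calib = [(3.1e2, 60.0), (3.1e3, 75.0), (3.1e4, 90.0),
         (3.1e5, 105.0), (3.1e6, 120.0), (3.1e7, 135.0)]

# Ordinary least-squares fit of log10(cfu) on T2.
xs = [t2 for _, t2 in calib]
ys = [math.log10(c) for c, _ in calib]
n = len(calib)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

def estimate_cfu(t2_ms):
    """Invert the calibration: bacterial load from a measured T2."""
    return 10 ** (intercept + slope * t2_ms)

print(estimate_cfu(90.0))  # recovers ~3.1e4 on this synthetic calibration
```

A real deployment would also propagate the T₂ measurement uncertainty and restrict predictions to the validated 3.1 × 10² to 3.1 × 10⁷ cfu/mL range.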

Keywords: magnetic resonance imaging, meat spoilage, P. aeruginosa, transverse relaxation time

Procedia PDF Downloads 134
199 Implementation of Synthesis and Quality Control Procedures of ¹⁸F-Fluoromisonidazole Radiopharmaceutical

Authors: Natalia C. E. S. Nascimento, Mercia L. Oliveira, Fernando R. A. Lima, Leonardo T. C. do Nascimento, Marina B. Silveira, Brigida G. A. Schirmer, Andrea V. Ferreira, Carlos Malamut, Juliana B. da Silva

Abstract:

Tissue hypoxia is a common characteristic of solid tumors, leading to decreased sensitivity to radiotherapy and chemotherapy. In the clinical context, tumor hypoxia assessment employing the positron emission tomography (PET) tracer ¹⁸F-fluoromisonidazole ([¹⁸F]FMISO) helps physicians plan and adjust therapy. The aim of this work was to implement the synthesis of [¹⁸F]FMISO in a TRACERlab® MXFDG module and to establish the quality control procedure. [¹⁸F]FMISO was synthesized at Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN/Brazil) using an automated synthesizer (TRACERlab® MXFDG, GE) adapted for the production of [¹⁸F]FMISO. The FMISO chemical standard was purchased from ABX, and ¹⁸O-enriched water was acquired from the Center of Molecular Research. Reagent kits containing eluent solution, acetonitrile, ethanol, 2.0 M HCl solution, buffer solution, water for injections and [¹⁸F]FMISO precursor (dissolved in 2 mL acetonitrile) were purchased from ABX. The [¹⁸F]FMISO samples were purified by the solid-phase extraction method. The quality requirements for [¹⁸F]FMISO are established in the European Pharmacopoeia; according to that reference, quality control of [¹⁸F]FMISO should include appearance, pH, radionuclidic identity and purity, radiochemical identity and purity, chemical purity, residual solvents, bacterial endotoxins, and sterility. The synthesis took 53 min, with a radiochemical yield of (37.00 ± 0.01)% and a specific activity of more than 70 GBq/µmol. The syntheses were reproducible and showed satisfactory results. As for the quality control analysis, the samples were clear and colorless at pH 6.0. The emission spectrum, measured using a high-purity germanium (HPGe) detector, presented a single peak at 511 keV, and the half-life, determined by the decay method in an activimeter, was (111.0 ± 0.5) min, indicating no radioactive contaminants besides the desired radionuclide (¹⁸F).
The samples showed a concentration of tetrabutylammonium (TBA) < 50 μg/mL, assessed by visual comparison to a TBA standard applied on the same thin-layer chromatographic plate. Radiochemical purity was determined by high performance liquid chromatography (HPLC), and the results were 100%. Regarding the residual solvents tested, ethanol and acetonitrile presented concentrations lower than 10% and 0.04%, respectively. Healthy female mice were injected via the lateral tail vein with [¹⁸F]FMISO; microPET imaging studies (15 min) were performed 2 h post injection (p.i.), and the biodistribution was analyzed at five time points (30, 60, 90, 120 and 180 min) after injection. Subsequently, organs/tissues were assayed for radioactivity with a gamma counter. All parameters of the quality control tests were in agreement with the quality criteria, confirming that [¹⁸F]FMISO is suitable for use in non-clinical and clinical trials, following the legal requirements for the production of new radiopharmaceuticals in Brazil.
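The radionuclidic identity check described above rests on comparing a measured decay curve against the expected ¹⁸F half-life (109.77 min). As a minimal illustrative sketch, not the activimeter's actual procedure, a log-linear least-squares fit recovers the half-life from a series of activity readings at known times (the times and activities below are synthetic):

```python
import math

def estimate_half_life(times_min, activities):
    """Estimate half-life (min) from activity measurements via a
    log-linear least-squares fit of A(t) = A0 * exp(-lambda * t)."""
    n = len(times_min)
    ys = [math.log(a) for a in activities]
    mean_x = sum(times_min) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(times_min, ys)) \
            / sum((x - mean_x) ** 2 for x in times_min)
    decay_constant = -slope          # lambda, in 1/min
    return math.log(2) / decay_constant

# Synthetic check: pure F-18 (T1/2 = 109.77 min), initial activity 100 (a.u.)
t_half = 109.77
times = [0, 30, 60, 90, 120, 180]
activities = [100.0 * 0.5 ** (t / t_half) for t in times]
print(round(estimate_half_life(times, activities), 2))  # → 109.77
```

A measured value significantly different from the tabulated ¹⁸F half-life would indicate a radionuclidic impurity, which is why the abstract's result of (111.0 ± 0.5) min supports radionuclidic purity.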

Keywords: automatic radiosynthesis, hypoxic tumors, pharmacopeia, positron emitters, quality requirements

Procedia PDF Downloads 178
198 Gas Chromatographic-Mass Spectroscopic Analysis of Citrus reticulata Fruit Peel, Zingiber officinale Rhizome, and Sesamum indicum Seed Ethanolic Extracts Possessing Antioxidant Activity and Lipid Profile Effects

Authors: Samar Saadeldin Abdelmotalab Omer, Ikram Mohamed Eltayeb Elsiddig, Saad Mohammed Hussein Ayoub

Abstract:

A variety of herbal medicinal plants are known to confer beneficial effects with regard to the modification of cardiovascular risk factors. The anti-hypercholesterolaemic and antioxidant activities of the crude ethanolic extracts of Citrus reticulata fruit peel, Zingiber officinale rhizome and Sesamum indicum seed have been demonstrated. These plants are assumed to possess biologically active principles, which impart their pharmacologic activities. GC-MS analysis of the ethanolic extracts was carried out to identify the active principles and their percentages of occurrence in the analytes. Analysis of the extracts was carried out using a GC-MS QP Shimadzu 2010 instrument equipped with an RTX-50 (Restek) capillary column (length 30 m, diameter 0.25 mm, and thickness 0.25 mm). Helium was used as the carrier gas, the temperature was programmed at 200°C for 5 minutes at a rate of 15 ml/minute, and the extracts were injected using split injection mode. The identification of the different components was achieved from their mass spectra and retention times, compared with those in the NIST library. The results revealed the presence of 80 compounds in the Sudanese locally grown C. reticulata fruit peel extract, most of which were monoterpenoid compounds including Limonene (3.03%), Alpha- and Gamma-terpinenes (2.61%), Linalool (1.38%) and Citral (1.72%), which are known to have profound antioxidant effects. The sesquiterpenoids Humulene (0.26%) and Caryophyllene (1.97%) were also identified, the latter known to have profound anti-anxiety and anti-depressant activity in addition to beneficial effects on lipid regulation. The analysis of the locally grown S.
indicum seed extract, in its oil and water-soluble portions, revealed the presence of a total of 64 compounds, with a considerably high percentage of the mono-unsaturated fatty acid ester methyl oleate (66.99%) in addition to methyl stearate (9.35%) and methyl palmitate (15.71%) in the oil portion, whereas plant sterols including Gamma-sitosterol (13.5%), fucosterol (2.11%) and stigmasterol (1.95%), in addition to gamma-tocopherol (1.16%), were detected in the water-soluble portion of the extract. The latter comprise principles known to have valuable pharmacological benefits, including antioxidant activities and beneficial effects on intestinal cholesterol absorption and the regulation of serum cholesterol levels. Z. officinale rhizome extract analysis revealed the presence of 93 compounds, the most abundant being alpha-zingeberine (16.5%), gingerol (9.25%), alpha-sesquiphellandrene (8.3%), zingerone (6.78%), beta-bisabolene (4.19%), alpha-farnesene (3.56%), ar-curcumene (3.29%), gamma-elemene (1.25%) and a variety of other compounds. The presence of these active principles is reflected in the activity of each extract. Activity could be assigned to a single component or to a combination of two or more extract components. GC-MS analysis confirmed the occurrence of compounds known to possess antioxidant activity and lipid profile effects.

Keywords: gas chromatography, indicum, officinale, reticulata

Procedia PDF Downloads 352
197 Visual Representation of Ancient Chinese Rites with Digitalization Technology: A Case of Confucius Worship Ceremony

Authors: Jihong Liang, Huiling Feng, Linqing Ma, Tianjiao Qi

Abstract:

Confucius is the first sage of Chinese culture. Confucianism, the body of theories represented by Confucius, has long been at the core of Chinese traditional society, serving as the dominant political ideology of the centralized feudal monarchy for more than two thousand years. The Confucius Worship Ceremony, held in the Confucian Temple in Qufu (Confucius’s birthplace) and dedicated to commemorating Confucius and 170 other elites of Confucianism with a whole set of formal rites, pertains to the ‘Auspicious Rites’, which worship heaven and earth, humans and ghosts. It began as a medium-scaled ritual activity but was later upgraded to the supreme one at the national level in the Qing Dynasty. As a national event, it was celebrated by the Emperor as well as by common intellectuals in traditional China. The Ceremony is solemn and respectful, with prescribed and complicated procedures, well-prepared utensils and matched offerings, operated in rhythm with music and dances. Each participant has his place, and everyone follows the specified rules. This magnificent ritual Ceremony, embedded with rich cultural connotations, symbolizes the social acknowledgment of the orthodox culture represented by Confucianism. The rites reflected in this Ceremony are among the most important features of Chinese culture, serving as a key bond in the identification and continuation of Chinese culture. These rites and ritual ceremonies, as cultural memories themselves, are treasures not only of China but of the whole world. However, while ancient Chinese rites have been one of the thorniest and most complicated topics for academics, it is all the more regrettable that, due to the interruption of their practice and historical changes, these rites and ritual ceremonies have become a vague language in today’s academic discourse and strange terms of the past for common people.
Today, by virtue of modern digital technology, we may be able to reproduce these ritual ceremonies, as most of them can still be found in ancient manuscripts, through which Chinese ancestors tell of the beauty and gravity of their dignified rites and, more importantly, of their spiritual pursuits, in vivid language and lively pictures. This research, based on a review and interpretation of the ancient literature, intends to reconstruct the ancient ritual ceremonies, taking the Confucius Worship Ceremony as a case and making use of digital technology. Using 3D technology, the spatial scenes in the Confucian Temple can be reconstructed in virtual reality, the memorial tablets exhibited in the temple mapped with GIS, and the different rites in the ceremonies rendered with animation technology. With reference to the lyrics, melodies and lively pictures recorded in ancient scripts, it is also possible to reproduce the live dancing scenes. Image rendering technology can also help to show the life experience and accomplishments of Confucius. Finally, by lining up all the elements in a multimedia narrative form, a complete digitalized Confucius Worship Ceremony can be reproduced, providing an excellent virtual experience that goes beyond time and space by bringing its audience back to that specific historical moment. This digital project, once completed, will play an important role in the inheritance and dissemination of cultural heritage.

Keywords: Confucius worship ceremony, multimedia narrative form, GIS, visual representation

Procedia PDF Downloads 237
196 Predicting Suicidal Behavior by an Accurate Monitoring of RNA Editing Biomarkers in Blood Samples

Authors: Berengere Vire, Nicolas Salvetat, Yoann Lannay, Guillaume Marcellin, Siem Van Der Laan, Franck Molina, Dinah Weissmann

Abstract:

Predicting suicidal behavior is one of the most complex challenges of daily psychiatric practice. Today, suicide risk prediction using biological tools is not validated and is based only on subjective clinical reports of the at-risk individual. Therefore, there is a great need to identify biomarkers that would allow early identification of individuals at risk of suicide. Alterations of adenosine-to-inosine (A-to-I) RNA editing of neurotransmitter receptors and other proteins have been shown to be involved in the etiology of different psychiatric disorders and linked to suicidal behavior. RNA editing is a co- or post-transcriptional process leading to site-specific alterations in RNA sequences. It plays an important role in the epitranscriptomic regulation of RNA metabolism. In postmortem human brain tissue (prefrontal cortex) of depressed suicide victims, Alcediag found specific alterations of RNA editing activity on the mRNA coding for the serotonin 2C receptor (5-HT2cR). Additionally, an increase in the expression levels of the ADARs, the RNA editing enzymes, and modifications of the RNA editing profiles of prime targets, such as phosphodiesterase 8A (PDE8A) mRNA, have also been observed. Interestingly, the PDE8A gene is located on chromosome 15q25.3, a genomic region that has recurrently been associated with early-onset major depressive disorder (MDD). In the current study, we examined whether modifications in the RNA editing profiles of prime targets allow the identification of disease-relevant blood biomarkers and the evaluation of suicide risk in patients. To address this question, we performed a clinical study to identify an RNA editing signature in the blood of depressed patients with and without a history of suicide attempts. Patient samples were drawn in PAXgene tubes and analyzed on Alcediag’s proprietary RNA editing platform using next-generation sequencing technology. In addition, gene expression analysis by quantitative PCR was performed.
We generated a multivariate algorithm comprising various selected biomarkers to detect patients at high risk of attempting suicide. We evaluated the diagnostic performance using the relative proportion of PDE8A mRNA editing at different sites and/or isoforms, as well as the expression of PDE8A and the ADARs. The significance of these biomarkers for suicidality was evaluated using the area under the receiver-operating characteristic curve (AUC). The generated algorithm comprising the biomarkers was found to have strong diagnostic performance, with high specificity and sensitivity. In conclusion, we developed tools to measure disease-specific biomarkers in blood samples of patients for identifying individuals at the greatest risk of future suicide attempts. This technology not only supports patient management but is also suitable for predicting the risk of drug-induced psychiatric side effects, such as an iatrogenic increase in suicidal ideas/behaviors.
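The AUC used above has a simple rank-based interpretation: it is the probability that a randomly chosen positive case (here, a suicide attempter) receives a higher algorithm score than a randomly chosen negative case. A minimal sketch of this Mann-Whitney formulation, with entirely hypothetical scores (the study's actual biomarker values are not given in the abstract):

```python
def auc_score(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs where the positive
    scores higher, counting ties as one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical composite scores from a multivariate editing-based algorithm
attempters = [0.82, 0.74, 0.91, 0.66]       # labeled positive
non_attempters = [0.35, 0.48, 0.52, 0.70]   # labeled negative
print(auc_score(attempters, non_attempters))  # → 0.9375
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which is why AUC is a natural summary of "strong diagnostic performance with high specificity and sensitivity".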

Keywords: blood biomarker, next-generation-sequencing, RNA editing, suicide

Procedia PDF Downloads 237
195 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations

Authors: Zhao Gao, Eran Edirisinghe

Abstract:

The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task for most criminal investigations. The criminal investigation system employs specially trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of deep learning, Recurrent Neural Networks (RNN) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GAN) have proven to be very effective in image generation. In this study, a trained GAN, conditioned on textual features such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map corresponding visual features onto text generated from verbal descriptions. With this, it becomes possible to generate many reasonably accurate alternatives which the witness can use to identify a suspect. This reduces subjectivity in decision making by both the eyewitness and the artist, while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus improving response times and mitigating potentially malicious human intervention. With the publicly available 'CelebFaces Attributes Dataset' (CelebA), supplemented by verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images.
Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network, with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute-force approach. Beyond the 'CelebA' training database, further novel test cases are supplied to the network for evaluation. Witness reports detailing criminals, from Interpol or other law enforcement agencies, are sampled on the network. Using the descriptions provided, samples are generated and compared with the ground-truth images of a criminal in order to calculate their similarity. Two metrics are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these metrics should demonstrate the accuracy of the approach, in the hope of proving that it can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the ratio of criminal cases that can ultimately be resolved using eyewitness information gathering.
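Of the two evaluation metrics named above, PSNR is the simpler to state precisely: it is the log-scaled ratio of the maximum possible pixel intensity to the mean squared error between a ground-truth image and a generated composite. A minimal sketch with toy arrays standing in for images (not CelebA data):

```python
import numpy as np

def psnr(reference, generated, max_val=255.0):
    """Peak Signal-to-Noise Ratio (dB) between a ground-truth image
    and a generated facial composite; higher means a closer match."""
    mse = np.mean((reference.astype(np.float64)
                   - generated.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy 2x2 grayscale "images" differing by a constant offset of 10
ref = np.array([[100, 120], [140, 160]], dtype=np.uint8)
gen = ref + 10  # MSE = 100
print(round(psnr(ref, gen), 2))  # → 28.13 dB
```

SSIM, the other metric, additionally models luminance, contrast, and local structure, so the two together capture both pixel-level fidelity and perceptual similarity.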

Keywords: RNN, GAN, NLP, facial composition, criminal investigation

Procedia PDF Downloads 142
194 Non-Invasive Evaluation of Patients After Percutaneous Coronary Revascularization: The Role of Cardiac Imaging

Authors: Abdou Elhendy

Abstract:

Numerous studies have shown the efficacy of percutaneous coronary intervention (PCI) and coronary stenting in improving left ventricular function and relieving exertional angina. Furthermore, PCI remains the main line of therapy in acute myocardial infarction. Improvements in procedural techniques and new devices have resulted in an increased number of PCIs in patients with difficult and extensive lesions, multivessel disease, and total occlusions. Immediate and late outcomes may be compromised by acute thrombosis or the development of fibro-intimal hyperplasia. In addition, progression of coronary artery disease proximal or distal to the stent, as well as in non-stented arteries, is not uncommon. As a result, complications can occur, such as acute myocardial infarction, worsened heart failure, or recurrence of angina. In-stent restenosis can occur without symptoms or with atypical complaints, rendering the clinical diagnosis difficult. Routine invasive angiography is not appropriate as a follow-up tool due to the associated risk and cost and the limited functional assessment. Exercise and pharmacologic stress testing are increasingly used to evaluate myocardial function, perfusion, and the adequacy of revascularization. The information obtained by these techniques provides important clues regarding the presence and severity of compromised myocardial blood flow. Stress echocardiography can be performed in conjunction with exercise or dobutamine infusion. Its diagnostic accuracy has been moderate, but the results provide excellent prognostic stratification. Adding myocardial contrast agents can improve imaging quality and allows assessment of both function and perfusion. Stress radionuclide myocardial perfusion imaging is an alternative for evaluating these patients. The extent and severity of wall motion and perfusion abnormalities observed during exercise or pharmacologic stress are predictors of survival and of the risk of cardiac events.
According to current guidelines, stress echocardiography and radionuclide imaging are considered appropriately indicated in patients after PCI who have cardiac symptoms and in those who underwent incomplete revascularization. Stress testing is not recommended in asymptomatic patients, particularly early after revascularization. Coronary CT angiography is increasingly used and provides high sensitivity for the diagnosis of coronary artery stenosis. Average sensitivity and specificity for the diagnosis of in-stent stenosis in pooled data are 79% and 81%, respectively. Limitations include blooming artifacts and low feasibility in patients with small stents or thick struts. Anatomical and functional cardiac imaging modalities are the cornerstone of the assessment of patients after PCI and provide salient diagnostic and prognostic information. Current imaging techniques can serve as gatekeepers for coronary angiography, thus limiting the risk of invasive procedures to those who are likely to benefit from subsequent revascularization. Determining which modality to apply requires careful identification of the merits and limitations of each technique, as well as of the unique characteristics of each individual patient.

Keywords: coronary artery disease, stress testing, cardiac imaging, restenosis

Procedia PDF Downloads 143
193 Wetting Characterization of High Aspect Ratio Nanostructures by Gigahertz Acoustic Reflectometry

Authors: C. Virgilio, J. Carlier, P. Campistron, M. Toubal, P. Garnier, L. Broussous, V. Thomy, B. Nongaillard

Abstract:

The wetting efficiency of microstructures or nanostructures patterned on Si wafers is a real challenge in integrated circuit manufacturing. In fact, poor or non-uniform wetting during wet processes limits chemical reactions and can lead to incomplete etching or cleaning inside the patterns and to device defectivity. This issue becomes increasingly important with transistor size shrinkage and mainly concerns high aspect ratio structures. Deep Trench Isolation (DTI) structures enabling pixel isolation in imaging devices are subject to this phenomenon. While low-frequency acoustic reflectometry is a well-known method for Non-Destructive Testing applications, we have recently shown that it is also well suited to nanostructure wetting characterization in a higher frequency range. In this paper, we present a high-frequency acoustic reflectometry characterization of DTI wetting through a confrontation of experimental and modeling results. The proposed acoustic method is based on evaluating the reflection of a longitudinal acoustic wave generated by a 100 µm diameter ZnO piezoelectric transducer sputtered on the silicon wafer backside using MEMS technologies. The transducers were fabricated to work at 5 GHz, corresponding to a wavelength of 1.7 µm in silicon. The studied DTI structures, manufactured on the wafer frontside, are crossing trenches 200 nm wide and 4 µm deep (aspect ratio of 20) etched into the Si wafer. In this configuration, the acoustic signal reflection occurs at the bottom and at the top of the DTI, enabling its characterization by monitoring the electrical reflection coefficient of the transducer. A Finite Difference Time Domain (FDTD) model has been developed to predict the behavior of the emitted wave. The model shows that the separation of the reflected echoes (top and bottom of the DTI) from different acoustic modes is possible at 5 GHz. A good correspondence between experimental and theoretical signals is observed.
The model enables the identification of the different acoustic modes. The evaluation of DTI wetting is then performed by focusing on the first reflected echo, obtained through reflection at the Si bottom interface, where wetting efficiency is crucial. The reflection coefficient is measured with different water/ethanol mixtures (tunable surface tension) deposited on the wafer frontside. Two cases are studied: with and without PFTS hydrophobic treatment. For the untreated surface, acoustic reflection coefficient values with water show that liquid imbibition is partial. For the treated surface, the acoustic reflection is total with water (no liquid in the DTI). Impalement of the liquid occurs at a specific surface tension, but wetting is still partial even for pure ethanol. The DTI bottom shape and local pattern collapse of the trenches can explain these incomplete wetting phenomena. The sensitivity of this high-frequency acoustic method, coupled with an FDTD propagation model, thus enables the local determination of the wetting state of a liquid on real structures. Partial wetting states for non-hydrophobic surfaces or low-surface-tension liquids are then detectable with this method.
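The FDTD scheme behind such a propagation model can be illustrated in one dimension. The following is only a hedged sketch of the standard second-order explicit update for the scalar wave equation, with a nominal longitudinal sound speed for silicon; the study's actual model resolves the 3D trench geometry and mode conversion:

```python
import numpy as np

# Minimal 1-D FDTD sketch of longitudinal wave propagation in silicon.
c = 8433.0            # longitudinal sound speed in Si, m/s (nominal)
f = 5e9               # transducer frequency, Hz
wavelength = c / f    # ≈ 1.7 µm, matching the abstract
dx = wavelength / 20  # spatial step: 20 points per wavelength
dt = 0.9 * dx / c     # time step satisfying the CFL stability condition

nx, nt = 400, 600
p = np.zeros(nx)       # pressure field at time step n
p_prev = np.zeros(nx)  # pressure field at time step n-1
coeff = (c * dt / dx) ** 2  # Courant number squared (0.81, stable)

for n in range(nt):
    p_next = np.zeros(nx)
    # Second-order central differences in time and space
    p_next[1:-1] = (2 * p[1:-1] - p_prev[1:-1]
                    + coeff * (p[2:] - 2 * p[1:-1] + p[:-2]))
    # Gaussian-windowed tone burst injected at grid point 50
    p_next[50] += np.sin(2 * np.pi * f * n * dt) \
                  * np.exp(-((n - 60) / 20.0) ** 2)
    p_prev, p = p, p_next
```

In the full model, an impedance contrast at the trench bottom (silicon against air or liquid) determines the amplitude of the first reflected echo, which is exactly the quantity the wetting evaluation monitors.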

Keywords: wetting, acoustic reflectometry, gigahertz, semiconductor

Procedia PDF Downloads 317
192 Upon Poly(2-Hydroxyethyl Methacrylate-Co-3, 9-Divinyl-2, 4, 8, 10-Tetraoxaspiro (5.5) Undecane) as Polymer Matrix Ensuring Intramolecular Strategies for Further Coupling Applications

Authors: Aurica P. Chiriac, Vera Balan, Mihai Asandulesa, Elena Butnaru, Nita Tudorachi, Elena Stoleru, Loredana E. Nita, Iordana Neamtu, Alina Diaconu, Liliana Mititelu-Tartau

Abstract:

The interest in studying ‘smart’ materials is entirely justified, and in this context investigations were carried out on poly(2-hydroxyethyl methacrylate-co-3, 9-divinyl-2, 4, 8, 10-tetraoxaspiro (5.5) undecane), a macromolecular compound with sensitivity to pH and temperature, gel formation capacity, binding properties, amphiphilicity, and good oxidative and thermal stability. Physico-chemical characteristics in terms of molecular weight, temperature-sensitive abilities and thermal stability, as well as rheological, dielectric and spectroscopic properties, were evaluated in correlation with further coupling capabilities. Differential scanning calorimetry indicated a Tg at 36.6°C and a melting point at Tm = 72.8°C for the studied copolymer, and up to 200°C two exothermic processes (at 99.7°C and 148.8°C) were registered, with weight losses of about 4% and 19.27%, respectively, indicating thermal decomposition processes (rather than thermal transitions) owing to scission of the functional groups and breakage of the macromolecular chains. At the same time, the rheological studies (rotational tests) confirmed the non-Newtonian shear-thinning fluid behavior of the copolymer solution. The dielectric properties of the copolymer were evaluated in order to investigate the relaxation processes; two relaxation processes below the Tg value were registered and attributed to localized motions of polar groups of the side-chain macromolecules, or parts of them, without disturbing the main chains. According to the literature, and confirmed by our investigations, the β-relaxation is assigned to the rotation of the ester side group, and the γ-relaxation corresponds to the rotation of hydroxymethyl side groups.
Fluorescence spectroscopy confirmed the copolymer structure, the spiroacetal moiety adopting an axial conformation, which is more stable, has lower energy, and is capable of specific interactions with molecules from the environment, phenomena underlined by the different shapes of the emission spectra of the copolymer. The copolymer was also used as a template for the incorporation of indomethacin as a model drug, and the biocompatible character of the complex was confirmed. The release behavior of the bioactive compound was dependent on the copolymer matrix composition, with increasing amounts of the 3, 9-divinyl-2, 4, 8, 10-tetraoxaspiro (5.5) undecane comonomer attenuating the drug release. At the same time, the in vivo studies did not show significant differences in leucocyte formula elements, GOT, GPT and LDH levels, or immune parameters (OC, PC, and BC) between the control mice group and the groups treated with the copolymer samples, with or without drug, data attesting to the biocompatibility of the polymer samples. The investigation of the physico-chemical characteristics of poly(2-hydroxyethyl methacrylate-co-3, 9-divinyl-2, 4, 8, 10-tetraoxaspiro (5.5) undecane), in terms of temperature-sensitive abilities and rheological and dielectric properties, brings useful information for further specific uses of this polymeric compound.

Keywords: bioapplications, dielectric and spectroscopic properties, dual sensitivity at pH and temperature, smart materials

Procedia PDF Downloads 270
191 Recurrent Neural Networks for Classifying Outliers in Electronic Health Record Clinical Text

Authors: Duncan Wallace, M-Tahar Kechadi

Abstract:

In recent years, Machine Learning (ML) approaches have been successfully applied to the analysis of patient symptom data in the context of disease diagnosis, at least where such data is well codified. However, much of the data present in Electronic Health Records (EHR) is unlikely to prove suitable for classic ML approaches. Furthermore, as these data are widely spread across both hospitals and individuals, a decentralized, computationally scalable methodology is a priority. The focus of this paper is to develop a method to predict outliers in an out-of-hours healthcare provision center (OOHC). In particular, our research is based upon the early identification of patients who have underlying conditions which will cause them to repeatedly require medical attention. An OOHC acts as an ad-hoc delivery point for triage and treatment, where interactions occur without recourse to a full medical history of the patient in question. Medical histories relating to patients contacting an OOHC may reside in several distinct EHR systems in multiple hospitals or surgeries, which are unavailable to the OOHC in question. As such, although a local solution is optimal for this problem, it follows that the data under investigation is incomplete, heterogeneous, and comprised mostly of noisy textual notes compiled during routine OOHC activities. Through the use of deep learning methodologies, the aim of this paper is to provide the means to identify patient cases, upon initial contact, which are likely to relate to such outliers. To this end, we compare the performance of Long Short-Term Memory, Gated Recurrent Units, and combinations of both with Convolutional Neural Networks. A further aim of this paper is to elucidate the discovery of such outliers by examining the exact terms which provide a strong indication of positive and negative case entries. While free text is the principal data extracted from EHRs for classification, EHRs also contain normalized features.
Although the specific demographic features treated within our corpus are relatively limited in scope, we examine whether it is beneficial to include such features among the inputs to our neural network, or whether these features are more successfully exploited in conjunction with a different form of classifier. To this end, we compare the performance of randomly generated regression trees and support vector machines and determine the extent to which our classification program can be improved by using either of these machine learning approaches in conjunction with the output of our Recurrent Neural Network application. The output of our neural network is also used to help determine the most significant lexemes present within the corpus for identifying high-risk patients. By combining the confidence of our classification program with respect to lexemes within true-positive and true-negative cases with the inverse document frequency of the lexemes related to these cases, we can determine which features act as the primary indicators of frequent-attender and non-frequent-attender cases, providing a human-interpretable appreciation of how our program classifies cases.
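The inverse document frequency weighting mentioned above down-weights lexemes that appear in most notes and highlights rarer, potentially discriminative terms. A minimal sketch on a hypothetical toy corpus (the tokenized note contents below are invented for illustration and are not from the study's data):

```python
import math
from collections import Counter

def idf_scores(documents):
    """Inverse document frequency for every lexeme in a corpus of
    tokenized notes: idf(t) = log(N / df(t)), so rarer terms score higher."""
    n_docs = len(documents)
    doc_freq = Counter()
    for doc in documents:
        doc_freq.update(set(doc))  # count each term once per document
    return {term: math.log(n_docs / df) for term, df in doc_freq.items()}

# Hypothetical tokenized OOHC triage notes (toy corpus)
notes = [
    ["patient", "recurrent", "abdominal", "pain"],
    ["patient", "fever", "cough"],
    ["patient", "recurrent", "headache"],
    ["patient", "sprained", "ankle"],
]
scores = idf_scores(notes)
print(round(scores["recurrent"], 3))  # log(4/2) → 0.693
print(round(scores["patient"], 3))    # log(4/4) → 0.0
```

Combining a score like this with per-lexeme classifier confidence, as the paper describes, promotes terms that are both rare in the corpus and strongly associated with true-positive (frequent-attender) cases.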

Keywords: artificial neural networks, data-mining, machine learning, medical informatics

Procedia PDF Downloads 111
190 Automatic Identification of Pectoral Muscle

Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina

Abstract:

Mammography is a worldwide imaging modality used to diagnose breast cancer, even in asymptomatic women. Due to its wide availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to six-fold increase in their risk of developing breast cancer. Therefore, studies have been conducted to accurately quantify mammographic breast density. In clinical routine, radiologists perform image evaluations through BIRADS (Breast Imaging Reporting and Data System) assessment. However, this method has inter- and intra-individual variability. An automatic, objective method to measure breast density could relieve radiologists' workload by providing a first opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to those of fibroglandular tissue, which makes it hard to automatically quantify mammographic breast density. Therefore, pre-processing is needed to segment the pectoral muscle, which may otherwise erroneously be quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique incidence digital mammograms from São Paulo Medical School. This study was developed with ethical approval from the authors’ institutions and national review panels under protocol number 3720-2010. An algorithm was developed, on the Matlab® platform, for the pre-processing of the images. The algorithm uses image processing tools to automatically segment and extract the pectoral muscle of mammograms. First, a thresholding technique was applied to remove non-biological information from the image. Then, the Hough transform was applied to find the boundary of the pectoral muscle, followed by the active contour method, whose seed was placed at the boundary found by the Hough transform.
An experienced radiologist also manually performed the pectoral muscle segmentation. The two methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison between the manual and the developed automatic method presented a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of segmentation of the proposed method. The Bland-Altman statistics compared both methods with respect to the area (mm²) of the segmented pectoral muscle. The statistics showed data within the 95% confidence interval, supporting the accuracy of the segmentation relative to the manual method. Thus, the method proved to be accurate and robust, segmenting rapidly and free from intra- and inter-observer variability. It is concluded that the proposed method may be used reliably to segment the pectoral muscle in digital mammography in clinical routine. The segmentation of the pectoral muscle is very important for further quantification of the fibroglandular tissue volume present in the breast.
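The Jaccard index used for the comparison above is the ratio of the overlap of two binary segmentation masks to their union. A minimal sketch with toy masks (not the study's mammography data):

```python
import numpy as np

def jaccard_index(mask_a, mask_b):
    """Jaccard similarity between two binary segmentation masks:
    |A ∩ B| / |A ∪ B|, ranging from 0 (disjoint) to 1 (identical)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(a, b).sum() / union

# Toy masks: automatic vs. manual pectoral-muscle segmentation
auto = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
manual = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])
print(jaccard_index(auto, manual))  # → 0.75 (3 overlapping of 4 union pixels)
```

A value above 0.9, as reported for all thirty images, means the automatic and manual masks disagree on fewer than 10% of the pixels in their union.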

Keywords: active contour, fibroglandular tissue, hough transform, pectoral muscle

Procedia PDF Downloads 335
189 'iTheory': Mobile Way to Music Fundamentals

Authors: Marina Karaseva

Abstract:

The beginning of our century marked a new digital epoch in education. In the last decade, the newest stage of this process was initiated by touch-screen mobile devices and the applications written for them. The touch capabilities for learning the fundamentals of music are especially important for music majors. The phenomenon of touch, firstly, makes it realistic to play on the screen as on a musical instrument and, secondly, helps students learn music theory by listening to its sound elements by ear. Nowadays we can identify several levels of such mobile applications: from basic ones devoted to elementary music training, such as interval and chord recognition, to more advanced applications that deal with the perception of modes beyond major and minor, ethnic timbres, and complicated rhythms. The main purpose of the proposed paper is to disclose the main tendencies in this process and to demonstrate the most innovative features of music theory applications on iOS and Android, as the most commonly used systems. Methodological recommendations on how to use this digital material musicologically will be given for professional music education at different levels. These recommendations are based on the author's more than ten years of 'iTheory' teaching experience. In this paper, we logically classify all types of 'iTheory' mobile applications into several groups according to their methodological goals. The general concepts given below will be demonstrated with concrete examples. The most numerous group of programs consists of simulators for studying notes through audio-visual links. The link-pair types are as follows: sound and musical notation (which may be used like flashcards for studying words and letters), sound and key, sound and string (basically, the guitar's). The second large group consists of test programs containing a game component.
As a rule, they are based on exercises in ear identification and vocal reconstruction: sounds and intervals by their sounding (harmonic and melodic), musical modes, rhythmic patterns, chords, and selected instrumental timbres. Some programs aim to establish aural connections between concepts of music theory and their musical embodiments. There are also programs focused on developing working musical memory (with repetition of sounded phrases and their transposition to a new pitch), as well as on perfect pitch training. In addition, a number of programs for improvisation skills have been developed. The absolute-pitch system of solmization is the common basis for mobile programs; however, it is also possible to find programs focused on the relative-pitch system of solfège. The App Store and Google Play online stores also offer many free instrument simulators: piano, guitar, celesta, violin, and organ. These programs may be effective for individual and group exercises in ear training or composition classes. The great variety and good sound quality of these programs now give musicians a unique opportunity to master their musical abilities in a shorter time. That is why such teaching material may be a way to the effective study of music theory.

Keywords: ear training, innovation in music education, music theory, mobile devices

Procedia PDF Downloads 190
188 Designing Disaster Resilience Research in Partnership with an Indigenous Community

Authors: Suzanne Phibbs, Christine Kenney, Robyn Richardson

Abstract:

The Sendai Framework for Disaster Risk Reduction called for the inclusion of indigenous people in the design and implementation of all hazard policies, plans, and standards. Ensuring that indigenous knowledge practices were included alongside scientific knowledge about disaster risk was also a key priority. Indigenous communities have specific knowledge about climate and natural hazard risk that has been developed over an extended period of time. However, research within indigenous communities can be fraught with issues such as power imbalances between the researcher and researched, the privileging of researcher agendas over community aspirations, as well as appropriation and/or inappropriate use of indigenous knowledge. This paper documents the process of working alongside a Māori community to develop a successful community-led research project. Research Design: This case study documents the development of a qualitative community-led participatory project. The community research project utilizes a kaupapa Māori research methodology which draws upon Māori research principles and concepts in order to generate knowledge about Māori resilience. The research addresses a significant gap in the disaster research literature relating to indigenous knowledge about collective hazard mitigation practices as well as resilience in rurally isolated indigenous communities. The research was designed in partnership with the Ngāti Raukawa Northern Marae Collective as well as Ngā Wairiki Ngāti Apa (a group of Māori sub-tribes who are located in the same region) and will be conducted by Māori researchers utilizing Māori values and cultural practices. The research project aims and objectives, for example, are based on themes that were identified as important to the Māori community research partners. The research methodology and methods were also negotiated with and approved by the community. 
Kaumātua (Māori elders) provided cultural and ethical guidance over the proposed research process and will continue to provide oversight over the conduct of the research. Purposive participant recruitment will be facilitated with support from local Māori community research partners, utilizing collective marae networks and snowballing methods. It is envisaged that Māori participants’ knowledge, experiences and views will be explored using face-to-face communication research methods such as workshops, focus groups and/or semi-structured interviews. Interviews or focus groups may be held in English and/or Te Reo (Māori language) to enhance knowledge capture. Analysis, knowledge dissemination, and co-authorship of publications will be negotiated with the Māori community research partners. Māori knowledge shared during the research will constitute participants’ intellectual property. New knowledge, theory, frameworks, and practices developed by the research will be co-owned by Māori, the researchers, and the host academic institution. Conclusion: An emphasis on indigenous knowledge systems within the Sendai Framework for Disaster Risk Reduction risks the appropriation and misuse of indigenous experiences of disaster risk identification, mitigation, and response. The research protocol underpinning this project provides an exemplar of collaborative partnership in the development and implementation of an indigenous project that has relevance to policymakers, academic researchers, other regions with indigenous communities and/or local disaster risk reduction knowledge practices.

Keywords: community resilience, indigenous disaster risk reduction, Maori, research methods

Procedia PDF Downloads 108
187 The Effect of Metabolites of Fusarium solani on the Activity of the PR-Proteins (Chitinase, β-1,3-Glucanase and Peroxidases) of Potato Tubers

Authors: A. K. Tursunova, O. V. Chebonenko, A. Zh. Amirkulova, A. O. Abaildayev, O. A. Sapko, Y. M. Dyo, A. Sh. Utarbaeva

Abstract:

Fusarium solani and its variants cause root and stem rot in plants. Dry rot is the most common disease of potato tubers during storage. The causative agents of fusariosis, upon contact with plants, behave as antagonists, growth stimulants, or parasites. The diversity of host-parasite relationships is explained by the parasite's ability to produce a wide spectrum of biologically active compounds, including toxins, enzymes, oligosaccharides, antibiotic substances, enniatins, and gibberellins. Many of these metabolites contribute to the creation of compatible relations; others behave as elicitors, inducing various protective responses in plants. An important part of the strategy for developing plant resistance against pathogens is the activation of protein synthesis to produce protective 'pathogenesis-related' (PR) proteins. The families of PR-proteins known to confer the most protective response are the chitinases (EC 3.2.1.14, Cht) and β-1,3-glucanases (EC 3.2.1.39, Glu). PR-proteins also include a large multigene family of peroxidases (EC 1.11.1.7, Pod); increased Pod activity and expression of the Pod genes lead to the development of resistance to a broad class of pathogens. Despite intensive research on the role of PR-proteins, the question of their participation in the mechanisms of formation of the F. solani–S. tuberosum pathosystem has not been sufficiently studied. Our aim was to investigate the effect of different classes of F. solani metabolites on the activity of chitinase, β-1,3-glucanase, and peroxidases in tubers of Solanum tuberosum. Culture filtrate (CF) metabolites and cytoplasmic components were fractionated by extraction of the mycelium with organic solvents, salting-out techniques, dialysis, column chromatography, and ultrafiltration. Protein, lipid, carbohydrate, and polyphenolic fractions of fungal metabolites were derived. Using enzymatic hydrolysis, we obtained oligoglycans of different molecular weights from fungal cell walls.
The activity of the metabolites was tested using potato tuber discs (d = 16 mm, h = 5 mm). The activity of the tuber PR-proteins was analyzed in a time course of 2–24 hours. The involvement of the analyzed metabolites in the modulation of both early non-specific reactions and late pathogenesis-related reactions was demonstrated. The most effective inducer was isolated from the CF (the fraction of total phenolic compounds, including naphthazarins). Induction of PR-activity by this fraction was: chitinase, 340-360%; glucanase, 435-450%; soluble forms of peroxidase, 400-560%; bound forms of peroxidase, 215-237%. High inducing activity was also observed for the chloroform and acetonitrile extracts of the mycelium (induction of chitinase and glucanase activity was 176-240%, and of soluble and bound forms of peroxidase, 190-400%). The 1.2 kDa fraction of mycelium cell-wall oligoglycans induced chitinase and β-1,3-glucanase to 239-320%, and soluble and bound forms of peroxidase to 198-426%. Cell-wall oligoglycans of 5-10 kDa had a weak suppressive effect on chitinase (21-25%) and glucanase (25-28%) activity; they had no effect on soluble forms of peroxidase but induced bound forms to 250-270% activity. The CF polysaccharides of 8.5 kDa and 3.1 kDa synchronously inhibited the specific glucanase and chitinase response at the late stage (by 42-50% after 24 hours) and induced the non-specific peroxidase response: soluble forms 4.8-5.2-fold and bound forms 1.4-1.6-fold.

Keywords: Fusarium solani, PR-proteins, peroxidase, Solanum tuberosum

Procedia PDF Downloads 190
186 Evaluation of Antibiotic Resistance and Extended-Spectrum β-Lactamases Production Rates of Gram Negative Rods in a University Research and Practice Hospital, 2012-2015

Authors: Recep Kesli, Cengiz Demir, Onur Turkyilmaz, Hayriye Tokay

Abstract:

Objective: Gram-negative rods are a large group of bacteria comprising many families, genera, and species. Most clinical isolates belong to the family Enterobacteriaceae. Resistance due to the production of extended-spectrum β-lactamases (ESBLs) is a difficulty in the handling of Enterobacteriaceae infections, but other mechanisms of resistance are also emerging, leading to multidrug resistance and threatening to create panresistant species. In this study, we aimed to evaluate the resistance rates of Gram-negative rods isolated from clinical specimens in the Microbiology Laboratory, Afyon Kocatepe University, ANS Research and Practice Hospital, between October 2012 and September 2015. Methods: The Gram-negative rod strains were identified by conventional methods and the VITEK 2 automated identification system (bioMérieux, Marcy l'Étoile, France). Antibiotic resistance tests were performed by both the Kirby-Bauer disk-diffusion method and the automated Antimicrobial Susceptibility Testing system (AST, bioMérieux, Marcy l'Étoile, France). Disk diffusion results were evaluated according to the standards of the Clinical and Laboratory Standards Institute (CLSI). Results: Of the 1,701 Enterobacteriaceae strains isolated, 1,434 (84.3%) were Klebsiella pneumoniae, 171 (10%) were Enterobacter spp., and 96 (5.6%) were Proteus spp.; of the 639 non-fermenting Gram-negative strains, 477 (74.6%) were identified as Pseudomonas aeruginosa, 135 (21.1%) as Acinetobacter baumannii, and 27 (4.3%) as Stenotrophomonas maltophilia. The ESBL positivity rate of the Enterobacteriaceae group as a whole was 30.4%.
Antibiotic resistance rates for Klebsiella pneumoniae were as follows: amikacin 30.4%, gentamicin 40.1%, ampicillin-sulbactam 64.5%, cefepime 56.7%, cefoxitin 35.3%, ceftazidime 66.8%, ciprofloxacin 65.2%, ertapenem 22.8%, imipenem 20.5%, meropenem 20.5%, and trimethoprim-sulfamethoxazole 50.1%. For the 114 Enterobacter spp. tested, they were: amikacin 26.3%, gentamicin 31.5%, cefepime 26.3%, ceftazidime 61.4%, ciprofloxacin 8.7%, ertapenem 8.7%, imipenem 12.2%, meropenem 12.2%, and trimethoprim-sulfamethoxazole 19.2%. Resistance rates for Proteus spp. were: meropenem 24.3%, imipenem 26.2%, amikacin 20.2%, cefepime 10.5%, ciprofloxacin and levofloxacin 33.3%, ceftazidime 31.6%, ceftriaxone 20%, gentamicin 15.2%, amoxicillin-clavulanate 26.6%, and trimethoprim-sulfamethoxazole 26.2%. Resistance rates for P. aeruginosa were: amikacin 32%, gentamicin 42%, imipenem 43%, meropenem 43%, ciprofloxacin 50%, levofloxacin 52%, cefepime 38%, ceftazidime 63%, and piperacillin/tazobactam 85%; for Acinetobacter baumannii: amikacin 53.3%, gentamicin 56.6%, imipenem 83%, meropenem 86%, ciprofloxacin 100%, ceftazidime 100%, piperacillin/tazobactam 85%, and colistin 0%; and for S. maltophilia: levofloxacin 66.6% and trimethoprim/sulfamethoxazole 0%. Conclusions: This study showed that resistance in Gram-negative rods is a serious clinical problem in our hospital and suggests the need to perform typing of the isolated bacteria with susceptibility testing regularly as part of routine laboratory procedures. This practice correctly guides empirical antibiotic treatment choices, given that each hospital shows a different resistance profile.

Keywords: antibiotic resistance, gram negative rods, ESBL, VITEK 2

Procedia PDF Downloads 314
185 An Epidemiological Study on Cutaneous Melanoma, Basocellular and Epidermoid Carcinomas Diagnosed in a Sunny City in Southeast Brazil in a Five-Year Period

Authors: Carolina L. Cerdeira, Julia V. F. Cortes, Maria E. V. Amarante, Gersika B. Santos

Abstract:

Skin cancer is the most common cancer in several parts of the world, and in a tropical country like Brazil the situation is no different. The Brazilian population is exposed to high levels of solar radiation, increasing the risk of developing cutaneous carcinoma. Aiming to encourage prevention measures and the early diagnosis of these tumors, a study was carried out analyzing data on cutaneous melanomas and basal cell and epidermoid carcinomas, using as the primary data source the medical records of 161 patients registered in one pathology service that performs skin biopsies in a city of Minas Gerais, Brazil. All patients diagnosed with skin cancer at this service from January 2015 to December 2019 were included. The incidence of skin carcinoma cases was correlated with histological type, sex, age group, and topographic location. Correlation between variables was verified by Fisher's exact test at a nominal significance level of 5%, with statistical analysis performed in the R® software. A significant association was observed between age group and type of cancer (p = 0.0085), age group and sex (p = 0.0298), and type of cancer and body region affected (p < 0.01). The 161 cases analyzed comprised 93 basal cell carcinomas, 66 epidermoid carcinomas, and only two cutaneous melanomas. In the group aged 19 to 30 years, the epidermoid form was most prevalent; from 31 to 45 and from 46 to 59 years, the basal cell form prevailed; in patients aged 60 or over, both types had higher frequencies. Associating age group and sex, in the groups aged 18 to 30 and 46 to 59 years, women were most affected, while in the 31- to 45-year-old group, men predominated. There was a gender balance in the group aged 60 or over. As for topography, there was a high prevalence in the head and neck, followed by the upper limbs. Relating histological type and topography, there was a prevalence of basal cell and epidermoid carcinomas in the head and neck.
In the chest, the basal cell form was most prevalent; in the upper limbs, the epidermoid form prevailed. Cutaneous melanoma affected only the chest and upper limbs. About 82% of patients aged 60 or over had head and neck cancer; from 46 to 59 years and at 60 or over, the head and neck region and the upper limbs were predominantly affected, while the distribution was balanced in the 31- to 45-year-old group. In conclusion, basal cell carcinoma was predominant, whereas cutaneous melanoma was the rarest among the types analyzed. Patients aged 60 or over were the most affected, showing gender balance. In young adults, the epidermoid form was prevalent; in middle-aged patients, basal cell carcinoma was predominant; in the elderly, both forms presented at higher frequencies. There was a higher incidence of head and neck cancers, followed by malignancies affecting the upper limbs. The epidermoid type manifested significantly in the upper limbs. Body regions such as the thorax and lower limbs were less affected, consistent with the lower exposure of these areas to incident solar radiation.
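The association tests described above were run in R; a minimal sketch of the same Fisher's exact test in Python follows. The 2x2 contingency table here is invented for illustration and is not the study's data:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency table (counts are illustrative only):
# rows = cancer type, columns = sex.
#                 female  male
table = [[52, 41],   # basal cell carcinoma
         [28, 38]]   # epidermoid carcinoma

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
if p_value < 0.05:  # nominal significance level of 5%, as in the study
    print("significant association between cancer type and sex")
else:
    print("no significant association at the 5% level")
```

Fisher's exact test is preferred over chi-squared here because it remains valid for small cell counts, such as the two melanoma cases in this series.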

Keywords: basal cell carcinoma, cutaneous melanoma, skin cancer, squamous cell carcinoma, topographic location

Procedia PDF Downloads 110
184 Wind Tunnel Tests on Ground-Mounted and Roof-Mounted Photovoltaic Array Systems

Authors: Chao-Yang Huang, Rwey-Hua Cherng, Chung-Lin Fu, Yuan-Lung Lo

Abstract:

Solar energy is one of the renewable choices for reducing the CO2 emissions produced by conventional power plants in modern society. As an island frequently visited by strong typhoons and earthquakes, Taiwan urgently needs to revise its local regulations to strengthen the safety design of photovoltaic systems. Currently, the Taiwanese code for wind-resistant design of structures does not clearly address photovoltaic systems, especially when the systems are arranged in an array format. Furthermore, when an arrayed photovoltaic system is mounted on a rooftop, the approaching flow is significantly altered by the building, leading to different pressure patterns in different areas of the photovoltaic system. In this study, an L-shaped arrayed photovoltaic system is mounted first on the ground of the wind tunnel and then on a building rooftop. The system consists of 60 panel models. Each panel model is equivalent to a full size of 3.0 m in depth and 10.0 m in length. Six pressure taps are installed on the upper surface of the panel model and another six on the bottom surface to measure the net pressures. The wind attack angle is varied from 0° to 360° in 10° intervals to capture the worst-case wind direction. The sampling rate of the pressure scanning system is set high enough to precisely estimate the peak pressure, and at least 20 samples are recorded for good ensemble-average stability. Each sample is equivalent to a 10-minute duration at full scale. All the scale factors, including time scale, length scale, and velocity scale, are properly verified by similarity rules in the low-wind-speed wind tunnel environment. The purpose of the L-shaped arrayed system is to understand the pressure characteristics in the corner area. Extreme value analysis is applied to obtain the design pressure coefficient for each net pressure.
The commonly utilized Cook-and-Mayne value, 78%, is set as the target non-exceedance probability for design pressure coefficients under the Gumbel distribution. The best linear unbiased estimator method is utilized for the Gumbel parameter identification. A careful moving-average method is also applied in data processing. Results show that when the arrayed photovoltaic system is mounted on the ground, the first row of panels reveals stronger positive pressure than when mounted on the rooftop. Due to the flow separation occurring at the building edge, the first row of panels on the rooftop experiences mostly negative pressures; the last row, on the other hand, shows positive pressures because of flow reattachment. Different areas also show different pressure patterns, which corresponds well to the provisions in ASCE 7-16 describing the area division for design values. Several minor observations are made from parametric studies, such as the rooftop edge effect, parapet effect, building aspect effect, row interval effect, and so on. General comments are then made toward a proposed regulation revision in the Taiwanese code.
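The extreme-value step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: for brevity it substitutes a method-of-moments estimator for the best linear unbiased estimator actually used in the study, and the peak coefficients are synthetic:

```python
import math
import numpy as np

def gumbel_design_cp(peak_cps, p=0.78):
    """Design pressure coefficient from observed peak pressure
    coefficients, assuming a Gumbel (Type I extreme value) distribution.

    Parameters are estimated here by the method of moments (the study
    itself uses the best linear unbiased estimator); the design value is
    the quantile at the Cook-and-Mayne 78% non-exceedance probability.
    """
    peaks = np.asarray(peak_cps, dtype=float)
    beta = peaks.std(ddof=1) * math.sqrt(6) / math.pi   # scale parameter
    mu = peaks.mean() - 0.5772156649 * beta             # location (Euler-Mascheroni constant)
    # Gumbel quantile: F^-1(p) = mu - beta * ln(-ln p)
    return mu - beta * math.log(-math.log(p))

# Synthetic peak coefficients standing in for 20 ten-minute records
rng = np.random.default_rng(1)
peaks = rng.gumbel(loc=1.0, scale=0.2, size=20)
print(round(gumbel_design_cp(peaks), 3))
```

Because the 78% quantile of a Gumbel fit lies above the sample mean, the design coefficient is deliberately conservative relative to the average observed peak.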

Keywords: aerodynamic force coefficient, ground-mounted, roof-mounted, wind tunnel test, photovoltaic

Procedia PDF Downloads 124
183 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters

Authors: Trevor C. Brown, David J. Miron

Abstract:

Gas-solid physical adsorption methods are central to the characterization and optimization of the effective surface area, pore size, and porosity for applications such as heterogeneous catalysis and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants, and Gibbs free energy depend on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Various constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to extract information on adsorbate and adsorbent properties from the isotherm data. These models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high-surface-area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites up to monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers. These additional layers do not interact with the first layer, and their energetics are equal to those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas for all adsorption sites is identical, so the calculated monolayer uptake and equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants.
These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy terms, dynamic and measurement for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resultant pressure-varying parameters are optimized by minimizing both dynamic and measurement residual squared errors. Validation of this methodology has been achieved by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K and 348 K and for nitrogen on mesoporous alumina at 77 K with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modeling provides information on the adsorbent (accessible surface area and micropore volume), adsorbate (molecular areas and volumes) and thermodynamic (Gibbs free energies) variations of the adsorption sites.
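For contrast with the pressure-varying approach, the constant-parameter Langmuir form that it generalizes, q(P) = q_m K P / (1 + K P), can be fitted in the conventional fixed-parameter way. This sketch uses synthetic, noise-free data and is not the paper's FLS-PVLR implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(P, q_m, K):
    """Constant-parameter Langmuir isotherm: uptake q at pressure P,
    with monolayer capacity q_m and equilibrium constant K."""
    return q_m * K * P / (1.0 + K * P)

# Synthetic isotherm data (illustrative, not the paper's measurements)
P = np.linspace(0.1, 10.0, 25)     # pressure, arbitrary units
q = langmuir(P, 2.0, 0.5)          # generated with q_m = 2.0, K = 0.5

(q_m_fit, K_fit), _ = curve_fit(langmuir, P, q, p0=(1.0, 1.0))
print(q_m_fit, K_fit)  # recovers q_m ~ 2.0, K ~ 0.5 on noise-free data
```

In the FLS-PVLR approach, by contrast, q_m and K are not single fitted constants: they are allowed to drift smoothly with pressure, with the dynamic penalty term discouraging abrupt changes between successive parameter vectors.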

Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics

Procedia PDF Downloads 215
182 Effectiveness of Simulation Resuscitation Training to Improve Self-Efficacy of Physicians and Nurses at Aga Khan University Hospital in Advanced Cardiac Life Support Courses Quasi-Experimental Study Design

Authors: Salima R. Rajwani, Tazeen Ali, Rubina Barolia, Yasmin Parpio, Nasreen Alwani, Salima B. Virani

Abstract:

Introduction: Nurses and physicians have a critical role in initiating lifesaving interventions during cardiac arrest. The timely delivery of high-quality cardiopulmonary resuscitation (CPR), with advanced resuscitation skills and management of cardiac arrhythmias, is a key dimension of a code response during cardiac arrest. The chances of patient survival decrease if healthcare professionals are unable to initiate CPR in a timely manner. Moreover, traditional training does not prepare physicians and nurses to a competent level, and their knowledge declines over time. In this regard, simulation training has been proven effective in promoting resuscitation skills. As a teaching and learning strategy, simulation improves knowledge and skill performance during resuscitation through experiential learning, without compromising patient safety in real clinical situations. The purpose of the study is to evaluate the effectiveness of simulation training in Advanced Cardiac Life Support courses by using a self-efficacy tool. Methods: The study is a quantitative, non-randomized quasi-experimental study. It examined the effectiveness of simulation through self-efficacy in two instructional methods: Medium-Fidelity Simulation (MFS) and the Traditional Training Method (TTM). The sample size was 220. Data were compiled using SPSS. Standardized simulation-based training increases self-efficacy, knowledge, and skills and improves the management of patients in actual resuscitation. Results: 153 students participated in the study (CG: n = 77; EG: n = 77). Pre- and post-test results were compared between arms (F = 1.69, p < 0.195, df = 1); there was no significant difference between arms in the pre- and post-test. The interaction between arms was also examined, and there was no significant difference in the interaction between arms in the pre- and post-test.
(F = 0.298, p < 0.586, df = 1). However, self-efficacy scores in the advanced cardiac life support resuscitation courses were significantly higher in the experimental group in the post-test than with the Traditional Training Method (TTM) (overall p < 0.0001, F = 143.316): mean 45.01 (SD 9.29) in the post-test versus 31.15 (SD 12.76) in the pre-test, compared with the TTM post-test mean of 29.68 (SD 14.12) versus a pre-test mean of 42.33 (SD 11.39). Conclusion: The standardized simulation-based training was conducted in a safe learning environment in Advanced Cardiac Life Support courses, and physicians and nurses benefited in self-confidence, early identification of life-threatening scenarios, early initiation of CPR, delivery of high-quality CPR, timely administration of medication and defibrillation, appropriate airway management, rhythm analysis and interpretation, achieving Return of Spontaneous Circulation (ROSC), team dynamics, debriefing, and teaching and learning strategies that will improve patient survival in actual resuscitation.

Keywords: advanced cardiac life support, cardio pulmonary resuscitation, return of spontaneous circulation, simulation

Procedia PDF Downloads 62
181 Assessment of Potential Chemical Exposure to Betamethasone Valerate and Clobetasol Propionate in Pharmaceutical Manufacturing Laboratories

Authors: Nadeen Felemban, Hamsa Banjer, Rabaah Jaafari

Abstract:

One of the most common hazards in the pharmaceutical industry is the chemical hazard, which can cause harm or lead to occupational diseases through chronic exposure to hazardous substances. Therefore, a chemical agent management system is required, including hazard identification, risk assessment, controls for specific hazards, and inspections, to keep the workplace healthy and safe. Routine monitoring is also required to verify the effectiveness of the control measures. Betamethasone valerate and clobetasol propionate are APIs (Active Pharmaceutical Ingredients) with a highly hazardous classification, Occupational Hazard Category (OHC) 4, which requires full containment (ECA-D) during handling to avoid chemical exposure. According to the Safety Data Sheet, these chemicals are reproductive toxicants (H360D), which may affect female workers' health, cause fatal damage to an unborn child, or impair fertility. In this study, a qualitative chemical risk assessment (qCRA) was conducted to assess chemical exposure during the handling of betamethasone valerate and clobetasol propionate in pharmaceutical laboratories. The qCRA identified a risk of potential chemical exposure (risk rating 8, amber risk). Therefore, immediate actions were taken to ensure interim controls (according to the hierarchy of controls) are in place and in use to minimize the risk of chemical exposure. No open handling should be performed outside the Steroid Glove Box Isolator (SGB), and the required Personal Protective Equipment (PPE) must be used. The PPE includes a coverall, nitrile gloves, safety shoes, and a powered air-purifying respirator (PAPR). Furthermore, a quantitative assessment (personal air sampling) was conducted to verify the effectiveness of the engineering controls (the SGB isolator) and to confirm whether there is chemical exposure, as indicated earlier by the qCRA.
Three personal air samples were collected using an air sampling pump and filters (IOM2 filters, 25 mm glass fiber media). The collected samples were analyzed by HPLC in the BV lab, and the measured concentrations were reported in ug/m3 with reference to the 8-hour Occupational Exposure Limits (8-hour TWA OELs) for each analyte. The analytical results, expressed as 8-hour TWAs (time-weighted averages), were analyzed using Bayesian statistics (IHDataAnalyst). The results of the Bayesian likelihood graph indicate category 0, which means exposures are de minimis, trivial, or non-existent, and employees have little to no exposure. These results also indicate that the three samples are representative, with very low variation (SD = 0.0014). In conclusion, the engineering controls were effective in protecting the operators from such exposure. However, routine chemical monitoring is required every three years unless there is a change in the process or type of chemicals. Frequent management monitoring (daily, weekly, and monthly) is also required to ensure the control measures are in place and in use. Furthermore, a Similar Exposure Group (SEG) was identified for this activity and included in the annual health surveillance for health monitoring.
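An 8-hour TWA of the kind referenced above can be computed as in this minimal sketch. The function name and values are illustrative, and the convention of averaging partial-shift samples over the full 8 hours (treating unsampled time as zero exposure) is our simplifying assumption, not necessarily the study's:

```python
def eight_hour_twa(concentrations, durations):
    """8-hour time-weighted average exposure, in the same units as the
    input concentrations (e.g. ug/m3). Sampling periods shorter than
    the full shift are averaged over 8 hours, treating unsampled time
    as zero exposure -- an assumption, flagged in the lead-in above.
    """
    exposure = sum(c * t for c, t in zip(concentrations, durations))
    return exposure / 8.0

# Illustrative values (not the study's measurements): two sampling
# periods, 3 h at 0.8 ug/m3 and 5 h at 0.2 ug/m3
print(eight_hour_twa([0.8, 0.2], [3.0, 5.0]))  # (2.4 + 1.0) / 8 = 0.425
```

The resulting TWA is the quantity compared against the 8-hour OEL for each analyte.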

Keywords: occupational health and safety, risk assessment, chemical exposure, hierarchy of control, reproductive

Procedia PDF Downloads 157
180 Development of a Conceptual Framework for Supply Chain Management Strategies Maximizing Resilience in Volatile Business Environments: A Case of Ventilator Challenge UK

Authors: Elena Selezneva

Abstract:

Over the last two decades, an unprecedented growth in uncertainty and volatility in all aspects of the business environment has caused major global supply chain disruptions and malfunctions. The effects of one failed company in a supply chain can ripple up and down the chain, causing a number of entities or an entire supply chain to collapse. The complicating factor is that an increasingly unstable and unpredictable business environment fuels the growing complexity of global supply chain networks. That makes supply chain operations extremely unpredictable and hard to manage with the established methods and strategies. It has caused the premature demise of many companies around the globe as they could not withstand or adapt to the storm of change. Solutions to this problem are not easy to come by. There is a lack of new empirically tested theories and practically viable supply chain resilience strategies. The mainstream organizational approach to managing supply chain resilience is rooted in well-established theories developed in the 1960s-1980s. However, their effectiveness is questionable in today's extremely volatile business environments. The systems thinking approach offers an alternative view of supply chain resilience, but it is still very much in the development stage. The aim of this explorative research is to investigate supply chain management strategies that are successful in taming complexity in volatile business environments and creating resilience in supply chains. The design of this research methodology was guided by an interpretivist paradigm, and a literature review informed the selection of the systems thinking approach to supply chain resilience. Ventilator Challenge UK was selected as a single explorative case study for the extremely resilient performance of its supply chain during a period of national crisis. Ventilator Challenge UK was an intensive care ventilator supply project for the NHS. 
It ran for 3.5 months and finished in 2020. The participants have since moved on with their lives, and most are no longer employed by the same organizations. Therefore, the study data include documents, historical interviews, live interviews with participants, and social media postings. The data analysis was accomplished in two stages: first, the data were thematically analyzed; second, pattern matching and pattern identification were used to identify the themes that form the findings of the research. The findings demonstrate that the supply chain management practices of Ventilator Challenge UK exhibited all the features of an adaptive dynamic system: they cover all the elements of the supply chain and employ an entire arsenal of adaptive dynamic system strategies enabling supply chain resilience. Moreover, the system is not a simple sum of parts and strategies: bonding elements and connections between the components of the supply chain and its environment enabled the amplification of resilience in the form of systemic emergence. Enablers are categorized into three subsystems: supply chain central strategy, supply chain operations, and supply chain communications. Together, these subsystems and their interconnections form the resilient supply chain system framework conceptualized by the author.

Keywords: enablers of supply chain resilience, supply chain resilience strategies, systemic approach in supply chain management, resilient supply chain system framework, ventilator challenge UK

Procedia PDF Downloads 64
179 Incorporating Spatial Transcriptome Data into Ligand-Receptor Analyses to Discover Regional Activation in Cells

Authors: Eric Bang

Abstract:

Interactions between receptors and ligands are crucial for many essential biological processes, including neurotransmission and metabolism. Ligand-receptor analyses that examine cell behavior and interactions often utilize cell type-specific RNA expression from single-cell RNA sequencing (scRNA-seq) data. Using CellPhoneDB, a public repository of ligands, receptors, and ligand-receptor interactions, cell-cell interactions were explored in a scRNA-seq dataset from kidney tissue, and the results were portrayed with dot plots and heat maps. For each cell type, each ligand-receptor pair was aligned with the interacting cell types, and the probabilities of these associations were calculated, with corresponding p-values reflecting the average expression values between the partners and their significance. Using single-cell data (sample kidney cell references), genes in the dataset were cross-referenced with those in the existing CellPhoneDB dataset; for example, a gene such as Pleiotrophin (PTN) present in the single-cell data also needed to be present in the CellPhoneDB dataset. Using the single-cell transcriptomics data via Slide-seq together with the reference data, the CellPhoneDB program defines cell types and plots them in different formats, the two main ones being dot plots and heat maps. The dot plot displays derived measures of the cell-cell interaction scores and p-values: each row shows a ligand-receptor pair, and each column shows the two interacting cell types. CellPhoneDB derives interactions and interaction levels from gene expression levels, and since the p-value is shown on a -log10 scale, larger dots represent more significant interactions. By performing an interaction analysis, a significant interaction was discovered for myeloid and T-cell ligand-receptor pairs, including that between Secreted Phosphoprotein 1 (SPP1) and Fibronectin 1 (FN1), which is consistent with previous findings. 
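The -log10 scaling behind the dot sizes can be sketched in a few lines; the p-values here are illustrative, not taken from the study:

```python
import math

# Convert interaction p-values to the -log10 scale used for dot sizes
# in a CellPhoneDB-style dot plot: smaller p-values yield larger dots.
p_values = [0.001, 0.05, 0.5]
dot_sizes = [-math.log10(p) for p in p_values]
print(dot_sizes)
```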
It is proposed that an effective protocol would involve a filtration step, in which cell types are filtered out depending on which ligand-receptor pair is activated in that part of the tissue, as well as the incorporation of the CellPhoneDB data into a streamlined workflow pipeline. The filtration step takes the form of a Python script that expedites the manual dataset filtration process; being in Python allows it to be integrated with the CellPhoneDB dataset for future workflow analysis. The manual process involves filtering cell types based on which ligand-receptor pair is activated in kidney cells. One limitation is that some pairings are activated in multiple cell types at a time, so manual manipulation of the data is required prior to analysis. Using the filtration script, accurate sorting is incorporated into the CellPhoneDB database rather than waiting until the output is produced and then subsequently applying the spatial data. It is envisioned that this approach will reveal where in the tissue various ligands and receptors interact with different cell types, allowing for easier identification of which cells are being impacted and why, for the purpose of disease treatment. The hope is that this new computational method, utilizing spatially explicit ligand-receptor association data, can be used to uncover previously unknown specific interactions within kidney tissue.
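A minimal sketch of what such a filtration step might look like; the record layout, field names, significance threshold, and example pairs are assumptions for illustration, not the authors' actual script:

```python
# Sketch of the filtration step: keep only interaction records whose
# ligand-receptor pair is active in a given tissue region and whose
# p-value is significant. Field names and data are illustrative.

def filter_active_pairs(interactions, region_pairs, alpha=0.05):
    """interactions: list of dicts with keys 'ligand', 'receptor',
    'cell_type_a', 'cell_type_b', 'p_value'.
    region_pairs: set of (ligand, receptor) tuples active in the region,
    e.g. derived from Slide-seq spatial data."""
    return [
        row for row in interactions
        if (row["ligand"], row["receptor"]) in region_pairs
        and row["p_value"] < alpha
    ]

# Hypothetical records echoing the SPP1-FN1 finding in the abstract
records = [
    {"ligand": "SPP1", "receptor": "FN1",
     "cell_type_a": "myeloid", "cell_type_b": "T cell", "p_value": 0.001},
    {"ligand": "PTN", "receptor": "PTPRZ1",
     "cell_type_a": "T cell", "cell_type_b": "myeloid", "p_value": 0.20},
]
active = filter_active_pairs(records, {("SPP1", "FN1")})
print([(r["ligand"], r["receptor"]) for r in active])
```

Because the filter runs before plotting, only region-relevant pairs reach the downstream CellPhoneDB dot plot, rather than applying spatial data to the output afterwards.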

Keywords: bioinformatics, ligands, kidney tissue, receptors, spatial transcriptome

Procedia PDF Downloads 124