Search results for: interface treatment form
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14957

677 Hypoglossal Nerve Stimulation (Baseline vs. 12 months) for Obstructive Sleep Apnea: A Meta-Analysis

Authors: Yasmeen Jamal Alabdallat, Almutazballlah Bassam Qablan, Hamza Al-Salhi, Salameh Alarood, Ibraheem Alkhawaldeh, Obada Abunar, Adam Abdallah

Abstract:

Obstructive sleep apnea (OSA) is a disorder caused by the repeated collapse of the upper airway during sleep. It is the most common cause of sleep-related breathing disorder; OSA can cause loud snoring and daytime fatigue, or more severe problems such as high blood pressure, cardiovascular disease, coronary artery disease, insulin-resistant diabetes, and depression. The hypoglossal nerve stimulator (HNS) is an implantable medical device that reduces the occurrence of obstructive sleep apnea by electrically stimulating the hypoglossal nerve in rhythm with the patient's breathing, causing the tongue to move. This stimulation helps keep the patient's airway clear during sleep. This systematic review and meta-analysis aimed to assess the clinical outcome of hypoglossal nerve stimulation as a treatment for obstructive sleep apnea. A computer literature search of PubMed, Scopus, Web of Science, and the Cochrane Central Register of Controlled Trials was conducted from inception until August 2022. Studies assessing the following clinical outcomes were pooled in the meta-analysis using Review Manager software: Apnea-Hypopnea Index (AHI), Epworth Sleepiness Scale (ESS), Functional Outcomes of Sleep Questionnaire (FOSQ), Oxygen Desaturation Index (ODI), and oxygen saturation (SaO2). We assessed the quality of studies according to the Cochrane risk-of-bias tool for randomized trials (RoB 2), the Risk of Bias in Non-randomized Studies of Interventions (ROBINS-I) tool, and a modified version of the NOS for the non-comparative cohort studies. Thirteen studies (six clinical trials and seven prospective cohort studies) with a total of 817 patients were included in the meta-analysis. The results for AHI were reported in 11 studies examining 696 OSA patients. We found a significant improvement in the AHI after 12 months of HNS (MD = 18.2 with 95% CI (16.7 to 19.7; I2 = 0%); P < 0.00001). 
Further, 12 studies reported the results of ESS after 12 months of intervention, with a significant improvement in sleepiness among the examined 757 OSA patients (MD = 5.3 with 95% CI (4.75 to 5.86; I2 = 65%); P < 0.0001). Moreover, nine studies involving 699 participants reported the results of FOSQ after 12 months of HNS, with a significant improvement (MD = -3.09 with 95% CI (-3.41 to -2.77; I2 = 0%); P < 0.00001). In addition, ten studies reported the results of ODI, with a significant improvement after 12 months of HNS among the 817 examined patients (MD = 14.8 with 95% CI (13.25 to 16.32; I2 = 0%); P < 0.00001). Hypoglossal nerve stimulation showed a significant positive impact on obstructive sleep apnea patients after 12 months of therapy in terms of the apnea-hypopnea index, oxygen desaturation index, manifestations of the behavioral morbidity associated with obstructive sleep apnea, and functional status related to sleepiness.
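The pooling described above uses inverse-variance weighting of per-study mean differences, the standard approach in Review Manager. As a minimal sketch of that calculation, assuming hypothetical per-study values rather than the actual study data, a fixed-effect pooled MD, its 95% CI, and the I2 heterogeneity statistic can be computed as follows:

```python
import math

def pool_mean_differences(mds, ses):
    """Fixed-effect inverse-variance pooling of mean differences.
    mds: per-study mean differences; ses: their standard errors."""
    weights = [1.0 / se**2 for se in ses]
    pooled = sum(w * md for w, md in zip(weights, mds)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    # Cochran's Q and I^2 quantify between-study heterogeneity
    q = sum(w * (md - pooled)**2 for w, md in zip(weights, mds))
    df = len(mds) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, ci, i2
```

A random-effects model would additionally inflate the weights by a between-study variance estimate; the fixed-effect form above is the simplest illustration.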

Keywords: apnea, meta-analysis, hypoglossal, stimulation

Procedia PDF Downloads 115
676 Modelling the Art Historical Canon: The Use of Dynamic Computer Models in Deconstructing the Canon

Authors: Laura M. F. Bertens

Abstract:

There is a long tradition of visually representing the art historical canon, in schematic overviews and diagrams. This is indicative of the desire for scientific, ‘objective’ knowledge of the kind (seemingly) produced in the natural sciences. These diagrams will, however, always retain an element of subjectivity and the modelling methods colour our perception of the represented information. In recent decades visualisations of art historical data, such as hand-drawn diagrams in textbooks, have been extended to include digital, computational tools. These tools significantly increase modelling strength and functionality. As such, they might be used to deconstruct and amend the very problem caused by traditional visualisations of the canon. In this paper, the use of digital tools for modelling the art historical canon is studied, in order to draw attention to the artificial nature of the static models that art historians are presented with in textbooks and lectures, as well as to explore the potential of digital, dynamic tools in creating new models. To study the way diagrams of the canon mediate the represented information, two modelling methods have been used on two case studies of existing diagrams. The tree diagram Stammbaum der neudeutschen Kunst (1823) by Ferdinand Olivier has been translated to a social network using the program Visone, and the famous flow chart Cubism and Abstract Art (1936) by Alfred Barr has been translated to an ontological model using Protégé Ontology Editor. The implications of the modelling decisions have been analysed in an art historical context. The aim of this project has been twofold. On the one hand the translation process makes explicit the design choices in the original diagrams, which reflect hidden assumptions about the Western canon. 
Ways of organizing data (for instance, ordering art according to artist) have come to feel natural and neutral, while implicit biases and the historically uneven distribution of power have resulted in the underrepresentation of groups of artists. Over the last decades, scholars from fields such as Feminist Studies, Postcolonial Studies, and Gender Studies have considered this problem and tried to remedy it. The translation presented here adds to this deconstruction by defamiliarizing the traditional models and analysing the process of reconstructing new models, step by step, taking into account theoretical critiques of the canon, such as the feminist perspective discussed by Griselda Pollock, amongst others. On the other hand, the project has served as a pilot study for the use of digital modelling tools in creating dynamic visualisations of the canon for education and museum purposes. Dynamic computer models introduce functionalities that allow new ways of ordering and visualising the artworks in the canon. As such, they could form a powerful tool in the training of new art historians, introducing a broader and more diverse view of the traditional canon. Although modelling will always imply a simplification and therefore a distortion of reality, new modelling techniques can help us get a better sense of the limitations of earlier models and can provide new perspectives on already established knowledge.
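The translation of a tree diagram into a social network can be illustrated with a toy adjacency-list model; the artist names and "influenced" relations below are hypothetical placeholders, not the actual content of Olivier's diagram or the Visone model:

```python
# Toy network model of an art-historical canon: nodes are artists,
# directed edges encode an asserted "influenced" relation. Each edge is
# an explicit modelling decision that a static diagram leaves implicit.
influences = {
    "Artist A": ["Artist B", "Artist C"],  # hypothetical relations
    "Artist B": ["Artist D"],
    "Artist C": [],
    "Artist D": [],
}

def in_degree(network, artist):
    """How many artists are recorded as influencing this one --
    one crude, assumption-laden measure of canonical centrality."""
    return sum(artist in targets for targets in network.values())
```

Even this tiny example makes the point of the paper concrete: the choice of node type, edge direction, and centrality measure all shape which artists appear "central" to the canon.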

Keywords: canon, ontological modelling, Protege Ontology Editor, social network modelling, Visone

Procedia PDF Downloads 127
675 Is Brain Death Reversal Possible in the Near Future? Intrathecal Sodium Nitroprusside (SNP) Superfusion in Brain Death Patients: The 10,000-Fold Effect

Authors: Vinod Kumar Tewari, Mazhar Husain, Hari Kishan Das Gupta

Abstract:

Background: Primary or secondary brain death is accompanied not only by tissue disruption but also by vasospasm of the perforators, which further exaggerates the anoxic damage in the form of neuropraxia. Under normal conditions, the excitatory impulse propagates by anterograde neurotransmission (ANT), and at the synapse glutamate activates NMDA receptors on the postsynaptic membrane. Nitric oxide (NO) is produced by nitric oxide synthase (NOS) in the postsynaptic dendrite or cell body and travels backwards across the chemical synapse to bind to the axon terminal of the presynaptic neuron, regulating ANT; this process is called retrograde neurotransmission (RNT). Thus, the primary function of NO is RNT, and the purpose of RNT is the regulation of chemical neurotransmission at the synapse. For this reason, RNT allows neural circuits to create feedback loops. The haem group is the ligand-binding site of the NO receptor (sGC) at the presynaptic membrane. The haem exhibits a more than 10,000-fold greater affinity for NO than for oxygen (the 10,000-fold effect). In pathological conditions, ANT and normal synaptic activity, including RNT, are absent. NO donors such as sodium nitroprusside (SNP) release NO by activating NOS in the postsynaptic area. The NO then travels backwards across the chemical synapse to bind to the haem of the NO receptor at the presynaptic axon terminal, as under normal conditions. The NO now acts as an impulse generator at the presynaptic membrane, thereby bypassing normal ANT. In addition, the arteriolar perforators carry nitric oxide synthase (NOS) on the adventitial (outer) side, on which sodium nitroprusside acts, releasing NO that vasodilates the perforators, causing a gush of blood into brain tissue and reversal of brain death. Objective: In brain death cases, we usually think only of the various transplantations; this pilot study instead examines the reversal of some criteria of brain death by vasodilating the arteriolar perforators. 
The aim was to study the effect of intrathecal sodium nitroprusside (IT SNP) in cases of brain death with respect to: 1. retrograde transmission, assessed by the hyperacute timing of reversal; 2. the arteriolar perforator vasodilatation caused by NO and the maintenance of brain death reversal. Methods: A 35-year-old male who became brain dead after a head injury and showed no signs of improvement after 6 hours of every maneuver underwent a single SNP superfusion, delivered via the transoptic canal route to the quadrigeminal cistern and via cisternal puncture to the fourth ventricle. Results: He showed spontaneous respiration (7 bouts), with TCD studies showing the onset of pulsations in various branches of the common carotid arteries. Conclusions: In the future, SNP could be given via the transoptic canal route and into the fourth ventricle before declaring a body dead or available for transplantation; more broadly, it may become possible in the near future to revert brain death, or our criteria may have to be modified.

Keywords: brain death, intrathecal sodium nitroprusside, TCD studies, perforators, vasodilatations, retrograde transmission, 10,000-fold effect

Procedia PDF Downloads 403
674 Transport of Inertial Finite-Size Floating Plastic Pollution by Ocean Surface Waves

Authors: Ross Calvert, Colin Whittaker, Alison Raby, Alistair G. L. Borthwick, Ton S. van den Bremer

Abstract:

Large concentrations of plastic have polluted the seas in the last half century, with harmful effects on marine wildlife and potentially on human health. Plastic pollution will have lasting effects because plastic is expected to take hundreds or thousands of years to decay in the ocean. This raises the question of how waves transport plastic in the ocean. The predominant motion induced by waves creates elliptical orbits. However, these orbits do not close, resulting in a drift, defined as Stokes drift. If a particle is infinitesimally small and has the same density as water, it behaves exactly as the water does, i.e., as a purely Lagrangian tracer. However, as the particle grows in size or changes density, it behaves differently. The particle then has its own inertia; the fluid exerts drag on the particle, because there is relative velocity; and it rises or sinks depending on its density and on whether it is on the free surface. Previously, plastic pollution has been treated as purely Lagrangian. However, the steepness of ocean waves is small, normally about α = k₀a = 0.1 (where k₀ is the wavenumber and a is the wave amplitude), which means the mean drift flows are of the order of ten times smaller than the oscillatory velocities (Stokes drift is proportional to steepness squared, whilst the oscillatory velocities are proportional to the steepness). Thus, to determine whether inertia is important, the particle motion must include the forces of the full motion, oscillatory and mean flow, as well as a dynamic buoyancy term to account for the free surface. Tracking the motion of a floating inertial particle under wave action requires the fluid velocities, which form the forcing, and the full equations of motion of a particle to be solved. The starting point is the equation of motion of a sphere in unsteady flow with viscous drag. 
Terms can then be added to the equation of motion to better model floating plastic: a dynamic buoyancy term to model a particle floating on the free surface, quadratic drag for larger particles, and a slope-sliding term. Using perturbation methods to order the equation of motion into sequentially solvable parts allows a parametric equation for the transport of inertial finite-size floating particles to be derived. This parametric equation can then be validated using numerical simulations of the equation of motion and flume experiments. This paper presents a parametric equation for the transport of inertial floating finite-size particles by ocean waves. The equation shows an increase in Stokes drift for larger, less dense particles. The equation has been validated using numerical solutions of the equation of motion and laboratory flume experiments. The difference between the particle transport equation and a purely Lagrangian tracer is illustrated using world maps of the induced transport. This parametric transport equation would allow ocean-scale numerical models to include inertial effects of floating plastic when predicting or tracing the transport of pollutants.
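The steepness-squared scaling of Stokes drift quoted above can be checked numerically. The sketch below uses linear deep-water theory (dispersion relation σ² = gk) for a purely Lagrangian tracer at the surface; it illustrates only the scaling argument, not the paper's full inertial particle model:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def deep_water_wave(amplitude, wavenumber):
    """Surface orbital velocity scale and Stokes drift for a linear
    deep-water wave: sigma^2 = g*k, orbital speed ~ sigma*a, and
    surface Stokes drift u_s = sigma*k*a^2 (steepness-squared)."""
    sigma = math.sqrt(g * wavenumber)   # angular frequency, rad/s
    orbital = sigma * amplitude         # oscillatory velocity scale
    stokes = sigma * wavenumber * amplitude**2  # mean drift at surface
    return orbital, stokes

orbital, stokes = deep_water_wave(amplitude=1.0, wavenumber=0.1)
# steepness k*a = 0.1, so the drift is ~10x smaller than the orbital speed
```

The ratio stokes/orbital equals the steepness ka, reproducing the order-of-magnitude argument in the abstract.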

Keywords: perturbation methods, plastic pollution transport, Stokes drift, wave flume experiments, wave-induced mean flow

Procedia PDF Downloads 121
673 Development of Loop Mediated Isothermal Amplification (LAMP) Assay for the Diagnosis of Ovine Theileriosis

Authors: Muhammad Fiaz Qamar, Uzma Mehreen, Muhammad Arfan Zaman, Kazim Ali

Abstract:

Ovine theileriosis is a worldwide concern, especially in tropical and subtropical areas with abundant ticks, yet it has received little attention in many developed and developing regions because of the low economic value of sheep and the low-to-moderate level of infection in small ruminant herds. Across Asia, prevalence studies have been conducted to provide comparable estimates of flock- and animal-level prevalence of theileriosis. Timely diagnosis and control of theileriosis is a challenge for veterinarians and farmers because of the nature of the organism and the inadequacy of local control plans. Most of the present work is based on developing a technique that is farmer-friendly, inexpensive, and easy to perform in the field. Timely diagnosis of this disease will reduce the irrational use of drugs; a further aim was to determine the prevalence of theileriosis in District Jhang using the conventional method, PCR, qPCR, and LAMP. We quantified the molecular epidemiology of T. lestoquardi in sheep from Jhang district, Punjab, Pakistan. In this study, we found an overall prevalence of 9.1% (32/350) in sheep using the Giemsa staining technique, whereas 13% (48/350) was observed using PCR, 16% (56/350) using qPCR, and 17.1% (60/350) using LAMP. Specificity and sensitivity were also calculated by comparing PCR and LAMP. In other words, more positive results were obtained when the diagnosis was made with LAMP; there was little difference between the positive results of PCR and qPCR, and the fewest positive animals were detected by the conventional Giemsa staining method. 
Regarding the specificity and sensitivity of LAMP compared with PCR, the cross-tabulation showed a sensitivity of 94.4% and a specificity of 78% for LAMP. Advances in science must be based on ideas that can reduce the gaps and hurdles in the way of research; LAMP is one such technique and has added great value. It is a powerful biological diagnostic tool that has greatly aided the proper diagnosis and treatment of certain diseases. Other diagnostic methods, such as culture and serological techniques, have exposed workers to considerable risk, whereas molecular diagnostic techniques like LAMP avoid such pathogen exposure. A prompt presumptive diagnosis can be made using LAMP. Compared with LAMP, PCR has several disadvantages: it is relatively expensive, time-consuming, and complicated, while LAMP is relatively cheap, easy to perform, less time-consuming, and more accurate. The LAMP technique has removed hurdles in scientific research and molecular diagnostics, making them accessible to poor and developing countries.
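The sensitivity, specificity, and prevalence figures above come from simple ratios over the cross-tabulation counts. As a minimal sketch, the counts passed to `diagnostic_metrics` below are hypothetical values chosen only to illustrate the arithmetic, not the study's actual 2x2 table:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 cross-tabulation
    (tp/fp/fn/tn = true/false positives/negatives vs. the reference)."""
    sensitivity = tp / (tp + fn)   # proportion of infected animals detected
    specificity = tn / (tn + fp)   # proportion of uninfected animals cleared
    return sensitivity, specificity

def prevalence_percent(positives, total):
    """Apparent prevalence, e.g. 32 positives out of 350 sheep."""
    return 100.0 * positives / total
```

For example, `prevalence_percent(32, 350)` reproduces the 9.1% Giemsa figure reported in the abstract.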

Keywords: distribution, theileria, LAMP, primer sequences, PCR

Procedia PDF Downloads 103
672 The Effect of Rheological Properties and Spun/Meltblown Fiber Characteristics on “Hotmelt Bleed through” Behavior in High Speed Textile Backsheet Lamination Process

Authors: Kinyas Aydin, Fatih Erguney, Tolga Ceper, Serap Ozay, Ipar N. Uzun, Sebnem Kemaloglu Dogan, Deniz Tunc

Abstract:

In order to meet high growth rates in the baby diaper industry worldwide, high-speed textile backsheet (TBS) lamination lines have recently been introduced to the market for nonwoven/film lamination applications. This is a process in which two substrates are bonded to each other via a hotmelt adhesive (HMA). The nonwoven (NW) lamination system basically consists of four components: polypropylene (PP) nonwoven, polyethylene (PE) film, HMA, and the applicator system. Each component has a substantial effect on the process efficiency of the continuous line and on final product properties. However, to keep the subject focused, this paper addresses only the main challenges and possible solutions. The NW is often produced by the spunbond method (SSS or SMS configuration) and has a basis weight of 10-12 gsm (g/m²). The NW rolls can have a width and length of up to 2,060 mm and 30,000 linear meters, respectively. The PE film is the second component in TBS lamination, usually a 12-14 gsm blown or cast breathable film. HMA is a thermoplastic glue (mostly rubber-based) that can be applied over a wide range of viscosities. The main HMA application technology in TBS lamination is slot-die application, in which HMA is spread in the melt form at high temperature across the whole width of the NW. The NW is then passed over chiller rolls, with a certain open time depending on the line speed. HMAs are applied at levels chosen to provide proper delamination strength in the cross and machine directions for the entire structure. Current TBS lamination line speed and width can be as high as 800 m/min and 2,100 mm, respectively. The lines also feature an automated web tension control system for winders and unwinders. In order to run a continuous, trouble-free mass production campaign on fast industrial TBS lines, the rheological properties of HMAs and the micro-properties of NWs must be controlled, as they can adversely affect line efficiency and continuity. 
NW fiber orientation and fineness, as well as the spunbond/meltblown composition of the fabric at the micro level, are significant factors affecting the degree of "HMA bleed-through." As a result of this problem, frequent line stops are needed to clean the glue accumulating on the chiller rolls, which significantly reduces line efficiency. HMA rheology is also important; to eliminate any bleed-through problem, one should have a good understanding of rheology-driven complications. The applied viscosity and temperature should therefore be optimized in accordance with the line speed, line width, NW characteristics, and the required open time for a given HMA formulation. In this study, we show practical aspects of preventative actions to minimize the HMA bleed-through problem, which may stem from both HMA rheological properties and NW spunbond/meltblown fiber characteristics.
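Since applied viscosity must be tuned jointly with application temperature, a simple Arrhenius-type temperature-viscosity relation is often used as a first estimate when re-targeting a line. The sketch below is illustrative only: the activation energy is an assumed, formulation-dependent parameter, not a value from this paper:

```python
import math

def viscosity_at(temp_c, visc_ref, temp_ref_c, activation_energy=60e3):
    """Arrhenius-type estimate of HMA melt viscosity (any unit) at
    temp_c, given a reference viscosity visc_ref measured at temp_ref_c.
    activation_energy (J/mol) is an assumed, formulation-specific value."""
    R = 8.314  # gas constant, J/(mol*K)
    t = temp_c + 273.15
    t_ref = temp_ref_c + 273.15
    return visc_ref * math.exp(activation_energy / R * (1 / t - 1 / t_ref))
```

The model captures the qualitative trade-off discussed above: raising the application temperature lowers viscosity (reducing bleed-through risk from poor spreading) but also affects open time, so both must be set together.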

Keywords: breathable, hotmelt, nonwoven, textile backsheet lamination, spun/melt blown

Procedia PDF Downloads 359
671 Ascribing Identities and Othering: A Multimodal Discourse Analysis of a BBC Documentary on YouTube

Authors: Shomaila Sadaf, Margarethe Olbertz-Siitonen

Abstract:

This study looks at identity and othering in discourses around sensitive issues in social media. More specifically, the study explores the multimodal resources and narratives through which the other is formed and identities are ascribed in online spaces. As an integral part of social life, media spaces have become an important site for negotiating and ascribing identities. In line with recent research, identity is seen here as a construction of belonging that goes hand in hand with processes of in- and out-group formation, which in some cases may lead to othering. Previous findings underline that identities are neither fixed nor limited but rather contextual, intersectional, and interactively achieved. The goal of this study is to explore and develop an understanding of how people co-construct the 'other' and ascribe certain identities in social media using multiple modes. At the beginning of 2018, the British government decided to include relationships, sexual orientation, and sex education in the curriculum of state-funded primary schools. However, the addition of information related to LGBTQ+ to the curriculum has been met with resistance, particularly from religious parents. For example, the British Muslim community has voiced its concerns and protested against the actions taken by the British government. YouTube has been used by news companies to air video stories covering the protest and the narratives of the protestors, along with the position of school officials. The analysis centers on a YouTube video dealing with the protest of a local group of parents against the addition of information about LGBTQ+ to the curriculum in the UK. The video was posted in 2019. By the time of this study, the video had approximately 169,000 views and around 6,000 comments. In deference to the multimodal nature of YouTube videos, this study utilizes multimodal discourse analysis as the method of choice. The study is still ongoing and therefore has not yet yielded any final results. 
However, the initial analysis indicates a hierarchy of ascribed identities in the data. Drawing on multimodal resources, the media works with social categorizations throughout the documentary, presenting and classifying the involved conflicting parties in the light of their own visible and audible identifications. The protesters can be seen to construct a strong group identity as Muslim parents (e.g., clothing and reference to shared values). While the video appears to be designed as a documentary that puts forward facts, the media does not seem to succeed in maintaining a neutral position consistently throughout the video. At times, the use of images, sounds, and language contributes to the formation of "us" vs. "them", where the audience is implicitly encouraged to pick a side. Only towards the end of the documentary is this problematic opposition addressed and critically reflected, through an expert interview that is, interestingly, visually located outside the previously presented 'battlefield'. This study contributes to the growing understanding of the discursive construction of the 'other' in social media. Videos available online are a rich source for examining how different social actors ascribe multiple identities and form the other.

Keywords: identity, multimodal discourse analysis, othering, YouTube

Procedia PDF Downloads 114
670 Revolutionizing Healthcare Communication: The Transformative Role of Natural Language Processing and Artificial Intelligence

Authors: Halimat M. Ajose-Adeogun, Zaynab A. Bello

Abstract:

Artificial Intelligence (AI) and Natural Language Processing (NLP) have transformed computer language comprehension, allowing computers to comprehend spoken and written language with human-like cognition. NLP, a multidisciplinary area that combines rule-based linguistics, machine learning, and deep learning, enables computers to analyze and comprehend human language. NLP applications in medicine range from tackling issues in electronic health records (EHR) and psychiatry to improving diagnostic precision in orthopedic surgery and optimizing clinical procedures with novel technologies like chatbots. The technology shows promise in a variety of medical sectors, including quicker access to medical records, faster decision-making for healthcare personnel, diagnosing dysplasia in Barrett's esophagus, boosting radiology report quality, and so on. However, successful adoption requires training for healthcare workers, fostering a deep understanding of NLP components, and highlighting the significance of validation before actual application. Despite prevailing challenges, continuous multidisciplinary research and collaboration are critical for overcoming restrictions and paving the way for the revolutionary integration of NLP into medical practice. This integration has the potential to improve patient care, research outcomes, and administrative efficiency. The research methodology includes using NLP techniques for Sentiment Analysis and Emotion Recognition, such as evaluating text or audio data to determine the sentiment and emotional nuances communicated by users, which is essential for designing a responsive and sympathetic chatbot. Furthermore, the project includes the adoption of a Personalized Intervention strategy, in which chatbots are designed to personalize responses by merging NLP algorithms with specific user profiles, treatment history, and emotional states. 
The synergy between NLP and personalized medicine principles is critical for tailoring chatbot interactions to each user's demands and conditions, hence increasing the efficacy of mental health care. A detailed survey corroborated this synergy, revealing a remarkable 20% increase in patient satisfaction levels and a 30% reduction in workloads for healthcare practitioners. The poll, which focused on health outcomes and was administered to both patients and healthcare professionals, highlights the improved efficiency and favorable influence on the broader healthcare ecosystem.
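The sentiment-analysis step described in the methodology can be illustrated with a toy lexicon-based scorer. This is a deliberately minimal sketch: production chatbots would use trained models, and the word lists below are invented for illustration, not taken from the study:

```python
# Toy lexicon-based sentiment scorer -- a minimal illustration of the
# sentiment-analysis step; real systems use trained statistical models.
POSITIVE = {"good", "calm", "better", "hopeful", "relieved"}   # assumed lexicon
NEGATIVE = {"bad", "anxious", "worse", "sad", "hopeless"}      # assumed lexicon

def sentiment_score(text):
    """Positive minus negative word count; >0 suggests positive sentiment."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
```

In the personalized-intervention design sketched in the abstract, such a score (or a model-based equivalent) would be one input, alongside the user profile and treatment history, for selecting an appropriately empathetic chatbot response.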

Keywords: natural language processing, artificial intelligence, healthcare communication, electronic health records, patient care

Procedia PDF Downloads 76
669 Distribution of Micro Silica Powder in Ready Mixed Concrete

Authors: Kyong-Ku Yun, Dae-Ae Kim, Kyeo-Re Lee, Kyong Namkung, Seung-Yeon Han

Abstract:

Micro silica is collected as a by-product of silicon and ferrosilicon alloy production in electric arc furnaces using highly pure quartz, wood chips, coke, and the like. It consists of about 85% silicon dioxide in spherical particles with an average particle size of about 0.15 μm. The bulk density of micro silica varies from 150 to 700 kg/m³, and the fineness ranges from 150,000 to 300,000 cm²/g. The amorphous structure and high silicon oxide content of micro silica, with its large surface area (about 20 m²/g), induce an active reaction with the calcium hydroxide (Ca(OH)₂) generated by cement hydration, forming calcium silicate hydrate (C-S-H). Micro silica tends to act as a filler because of its fine particles and spherical shape, which fit well into the spaces between the relatively rough cement grains. On the other hand, water demand increases, since the large surface area of micro silica particles tends to absorb water. The overall effect of micro silica depends on the amount added, together with other parameters such as the water-to-(cement + micro silica) ratio and the availability of superplasticizer. This research studied cellular sprayed concrete. This method involves the direct re-production of ready-mixed concrete into a high-performance concrete at the job site. It can reduce construction costs by adding cellular foam and micro silica into a ready-mixed concrete truck in the field. Micro silica, which is difficult to mix in the field due to its high fineness, can thus be added and dispersed in the concrete, the surface activity of the cellular foam increasing the fluidity of the ready-mixed concrete. The increased air content converges to a certain level during spraying, and the remixing of powders in the spraying process also produces high-performance concrete. 
Because no field mixing equipment is used, construction costs decrease, and construction can proceed once a special spray machine is installed on a commercial pump car. Use of special equipment is therefore minimized, providing economic feasibility through the utilization of existing equipment. This study was carried out to evaluate a highly reliable method of confirming dispersion through high-performance cellular sprayed concrete. A mixture of 25 mm coarse aggregate and river sand was used for the concrete. In addition, by applying silica fume and foam, silica fume dispersion was examined as a function of foam mixing; the mean and standard deviation were obtained, and the coefficient of variation was then calculated to evaluate the dispersion. Before-and-after spraying comparisons were conducted for the experimental variables: 21 L and 35 L of foam, each at 7% and 14% silica fume. The experiment proceeded with foam and silica fume as variables. A specimen was cast for each variable, and a five-day sample was taken from each specimen for EDS testing. This study examined the experimental materials, plan and mix design, test methods, and equipment for evaluating dispersion as a function of micro silica and foam.
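The dispersion evaluation described above reduces to computing the coefficient of variation of the measured silica content across EDS sampling points. A minimal sketch, with made-up measurements standing in for actual EDS data:

```python
import statistics

def coefficient_of_variation(samples):
    """CV = sample standard deviation / mean.
    A lower CV across EDS sampling points indicates more uniform
    silica fume dispersion in the sprayed concrete."""
    return statistics.stdev(samples) / statistics.mean(samples)
```

Comparing the CV of silica content before and after spraying, for each foam/silica-fume combination, quantifies how much the cellular foam improves dispersion.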

Keywords: micro silica, distribution, ready mixed concrete, foam

Procedia PDF Downloads 219
668 Stability of a Biofilm Reactor Able to Degrade a Mixture of the Organochlorine Herbicides Atrazine, Simazine, Diuron and 2,4-Dichlorophenoxyacetic Acid to Changes in the Composition of the Supply Medium

Authors: I. Nava-Arenas, N. Ruiz-Ordaz, C. J. Galindez-Mayer, M. L. Luna-Guido, S. L. Ruiz-López, A. Cabrera-Orozco, D. Nava-Arenas

Abstract:

Among the most important herbicides, the organochlorine compounds are of considerable interest due to their recalcitrance to chemical, biological, and photolytic degradation, their persistence in the environment, their mobility, and their bioaccumulation. The most widely used herbicides in North America are primarily 2,4-dichlorophenoxyacetic acid (2,4-D), the triazines (atrazine and simazine), and, to a lesser extent, diuron. The contamination of soils and water bodies frequently occurs through mixtures of these xenobiotics. For this reason, this work studied the operational stability, under changes in the composition of the supply medium, of an aerobic biofilm reactor. The reactor was packed with fragments of volcanic rock that retained a complex microbial film able to degrade a mixture of the organochlorine herbicides atrazine, simazine, diuron, and 2,4-D, and whose members carry the microbial genes encoding the main catabolic enzymes, atzABCD, tfdACD, and puhB. To acclimate the attached microbial community, the biofilm reactor was fed continuously with a mineral minimal medium containing the herbicides (in mg•L-1): diuron, 20.4; atrazine, 14.2; simazine, 11.4; and 2,4-D, 59.7, as carbon and nitrogen sources. Throughout the bioprocess, removal efficiencies of 92-100% for herbicides, 78-90% for COD, 92-96% for TOC, and dehalogenation values of 61-83% were reached. In the microbial community, the genes encoding catabolic enzymes for the different herbicides, tfdACD and puhB and, occasionally, atzA and atzC, were detected. After the acclimatization, the triazine herbicides were eliminated from the mixture formulation. Volumetric loading rates of the 2,4-D and diuron mixture were continuously supplied to the reactor (1.9-21.5 mg herbicides •L-1 •h-1). Along the bioprocess, the removal efficiencies obtained were 86-100% for the mixture of herbicides, 63-94% for COD, and 90-100% for TOC, with dehalogenation values of 63-100%. 
It was also observed that the genes encoding the enzymes in the catabolism of both herbicides, tfdACD and puhB, were consistently detected, and, occasionally, atzA and atzC. Subsequently, the triazine herbicides atrazine and simazine were restored to the medium supply. Different volumetric loading rates of this mixture were continuously fed to the reactor (2.9-12.6 mg herbicides·L⁻¹·h⁻¹). During this new treatment process, removal efficiencies of 65-95% for the mixture of herbicides, 63-92% for COD, 66-89% for TOC, and 73-94% for dehalogenation were observed. In this last case, the genes tfdACD, puhB, and atzABC, encoding the enzymes involved in the catabolism of the distinct herbicides, were consistently detected. The atzD gene, encoding the cyanuric hydrolase enzyme, could not be detected, though partial degradation of cyanuric acid was determined. In general, the community in the biofilm reactor showed catabolic stability, adapting to changes in loading rates and in the composition of the herbicide mixture and preserving its ability to degrade the four herbicides tested, although there was a significant delay in the response time required to recover degradation of the herbicides.
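The removal efficiencies quoted above follow the usual influent/effluent mass balance; as a minimal sketch (the function name and the example concentrations are illustrative, not values measured in this study):

```python
def removal_efficiency(c_in: float, c_out: float) -> float:
    """Percent removal computed from influent and effluent concentrations (mg/L)."""
    return 100.0 * (c_in - c_out) / c_in

# Illustrative values only: influent diuron 20.4 mg/L, hypothetical effluent 1.6 mg/L
print(round(removal_efficiency(20.4, 1.6), 1))  # 92.2, within the 92-100% range reported
```

The same calculation applies to COD, TOC, and chloride-release (dehalogenation) measurements, with the appropriate influent and effluent values.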

Keywords: biodegradation, biofilm reactor, microbial community, organochlorine herbicides

Procedia PDF Downloads 435
667 The Product Innovation Using Nutraceutical Delivery System on Improving Growth Performance of Broiler

Authors: Kitti Supchukun, Kris Angkanaporn, Teerapong Yata

Abstract:

The product innovation using a nutraceutical delivery system to improve the growth performance of broilers is a product planning and development effort responding to the antibiotic bans introduced in local and global livestock production systems. Restricting the use of antibiotics can reduce the quality of chicken meat and increase pathogenic bacterial contamination. Although other alternatives have been used to replace antibiotics, their efficacy was inconsistent, resulting in low chicken growth performance and contaminated products. The product innovation aims to deliver the selected active ingredients into the body effectively. The product was tested at pharmaceutical laboratory scale and at farm scale for market feasibility in order to create product innovation using the nutraceutical delivery system model. The model establishes product standardization and a traceable quality control process for farmers. The study was performed using mixed methods: first a qualitative method to identify the farmers' (consumers') demands and the product standard, then a quantitative method to develop and conclude the findings regarding the acceptance of the technology and product performance. The survey was sent to different organizations by random sampling of the entrepreneur population, including integrated broiler farms, broiler farms, and other related organizations. The mixed-method results, both qualitative and quantitative, verify the demands of users and lead users, since they provide information about the industry standard and technology preferences, support developing the right product for the market, and suggest solutions for the industry's problems. The product innovation selected nutraceutical ingredients that can address the following problems in livestock: bactericidal action, anti-inflammation, gut health, and antioxidant activity.
The combination of the selected nutraceuticals with nanostructured lipid carrier (NLC) technology aims to improve the chemical and pharmaceutical properties by converting the active ingredients into nanoparticles, which are released at the targeted location at an accurate concentration. The active ingredients in nanoparticle form are more stable, elicit antibacterial activity against pathogenic Salmonella spp. and E. coli, balance gut health, and have antioxidant and anti-inflammatory activity. The experimental results have shown that the nutraceuticals have antioxidant and antibacterial activity, increase the average daily gain (ADG), and reduce the feed conversion ratio (FCR). The results also show a significant improvement in the European Performance Index, which can increase farmers' profit when exporting. The product innovation will be tested using technology acceptance management methods with farmers and industry. The broiler production and commercialization analyses are useful for reducing the importation of animal supplements. Most importantly, the product innovation is protected by intellectual property.

Keywords: nutraceutical, nanostructured lipid carrier, anti-microbial drug resistance, broiler, Salmonella

Procedia PDF Downloads 178
666 Assessment of Current and Future Opportunities of Chemical and Biological Surveillance of Wastewater for Human Health

Authors: Adam Gushgari

Abstract:

The SARS-CoV-2 pandemic has catalyzed the rapid adoption of wastewater-based epidemiology (WBE) methodologies both domestically and internationally. To support the rapid scale-up of pandemic-response wastewater surveillance systems, multiple federal agencies (e.g., the US CDC), non-governmental organizations (e.g., the Water Environment Federation), and private charities (e.g., the Bill and Melinda Gates Foundation) have provided over $220 million USD to support development and expand equitable access to surveillance methods. Funds were primarily distributed directly to municipalities under the CARES Act (90.6%), followed by academic projects (7.6%) and initiatives developed by private companies (1.8%). In addition to federally funded wastewater monitoring conducted primarily at wastewater treatment plants, state and local governments and private companies have leveraged wastewater sampling to obtain health and lifestyle data on student, prison inmate, and employee populations. We explore the viable paths for expansion of the WBE methodology across a variety of analytical methods; the development of WBE-specific samplers and real-time wastewater sensors; and their application across governments and private-sector industries. Considerable investment in, and public acceptance of, WBE suggests the methodology will be applied to other notifiable diseases and health risks in the future. Early research suggests that WBE methods can be applied to a host of additional biological insults, including communicable diseases and pathogens such as influenza, Cryptosporidium, Giardia, mycotoxin exposure, hepatitis, dengue, West Nile, Zika, and yellow fever. Interest in chemical insults is also likely, providing community health and lifestyle data on narcotics consumption, use of pharmaceutical and personal care products (PPCP), PFAS and hazardous chemical exposure, and microplastic exposure.
Successful application of WBE to monitor analytes correlated with carcinogen exposure, community stress prevalence, and dietary indicators has also been shown. Additionally, technology developments of in situ wastewater sensors, WBE-specific wastewater samplers, and integration of artificial intelligence will drastically change the landscape of WBE through the development of “smart sewer” networks. The rapid expansion of the WBE field is creating significant business opportunities for professionals across the scientific, engineering, and technology industries ultimately focused on community health improvement.

Keywords: wastewater surveillance, wastewater-based epidemiology, smart cities, public health, pandemic management, substance abuse

Procedia PDF Downloads 108
665 A Study of Applying the Use of Breathing Training to Palliative Care Patients, Based on the Bio-Psycho-Social Model

Authors: Wenhsuan Lee, Yachi Chang, Yingyih Shih

Abstract:

In clinical practice, it is common that, while facing the unknown progress of their disease, palliative care patients easily feel anxious and depressed. These reactions are a cause of psychosomatic disease and may also influence treatment results. However, the purpose of palliative care is to provide relief from all kinds of pain. Therefore, how to make patients more comfortable is an issue worth studying. This study adopted the bio-psycho-social model proposed by Engel and applied spontaneous breathing training, in the hope of observing patients' psychological state changes caused by their physiological state changes, improvements in their anxiety, corresponding adjustments of their cognitive functions, and further enhancement of their social functions and social support system. This will be a one-year study. Palliative care outpatients will be recruited and assigned to the experimental group or the control group for six outpatient visits (once a month), with 80 patients in each group. The patients of both groups agreed that this study may collect their physiological quantitative data using a heart rate variability (HRV) device before the first outpatient visit. They also agreed to answer the Beck Anxiety Inventory (BAI) and the Taiwanese version of the WHOQOL-BREF questionnaire before the first outpatient visit, to complete a self-report questionnaire after each outpatient visit, and to answer the BAI and the Taiwanese version of the WHOQOL-BREF questionnaire again after the last outpatient visit. The patients of the experimental group agreed to receive the breathing training under HRV monitoring during the first outpatient visit of this study. Before each of the following three outpatient visits, they were required to complete a self-report questionnaire regarding their breathing practice at home.
After the outpatient visits, they were taught how to practice breathing with an HRV device and asked to practice at home. Later, based on the results of the HRV data analyses and the pre-tests and post-tests of the Beck Anxiety Inventory (BAI) and the Taiwanese version of the WHOQOL-BREF questionnaire, the influence of the breathing training on the bio, psycho, and social aspects was evaluated. The data collected through the self-report questionnaires of both groups were used to explore possible interfering factors among the bio, psycho, and social changes. It is expected that this study will support the bio-psycho-social model proposed by Engel, meaning that bio, psycho, and social supports are closely related, and that breathing training helps to transform palliative care patients' psychological feelings of anxiety and depression, to facilitate their positive interactions with others, and to improve the quality of medical care for them.

Keywords: palliative care, breathing training, bio-psycho-social model, heart rate variability

Procedia PDF Downloads 259
664 Nano-Immunoassay for Diagnosis of Active Schistosomal Infection

Authors: Manal M. Kame, Hanan G. El-Baz, Zeinab A. Demerdash, Engy M. Abd El-Moneem, Mohamed A. Hendawy, Ibrahim R. Bayoumi

Abstract:

There is a constant need to improve the performance of current diagnostic assays for schistosomiasis as well as to develop innovative testing strategies to meet new testing challenges. This study aims at increasing the diagnostic efficiency of monoclonal antibody (MAb)-based antigen detection assays through gold nanoparticles conjugated with specific anti-Schistosoma mansoni monoclonal antibodies. In this study, several hybridoma cell lines secreting MAbs against adult worm tegumental Schistosoma antigen (AWTA) were produced at the Immunology Department of the Theodor Bilharz Research Institute and preserved in liquid nitrogen. One MAb (6D/6F) was chosen for this study due to its high reactivity to schistosome antigens, with the highest optical density (OD) values. Gold nanoparticles (AuNPs) were functionalized and conjugated with MAb (6D/6F). The study was conducted on serum samples of 116 subjects: 71 patients with S. mansoni eggs in their stool samples (group 1), 25 with other parasites (group 2), and 20 healthy negative controls (group 3). Patients in group 1 were further subdivided according to the egg count in their stool samples into light infection (≤ 50 eggs per gram (epg); n = 17), moderate (51-100 epg; n = 33), and severe infection (> 100 epg; n = 21). Sandwich ELISA was performed using AuNPs-MAb for detection of circulating schistosomal antigen (CSA) levels in serum samples of all groups, and the results were compared with those obtained using the MAb/ELISA system. Results: The AuNPs-MAb/ELISA system reached a lower detection limit of 10 ng/ml compared to 85 ng/ml for the MAb/ELISA, and the optimal concentration of AuNPs-MAb was found to be 12-fold less than that of the MAb/ELISA system for detection of CSA. The sensitivity and specificity of the sandwich ELISA for detection of CSA levels using AuNPs-MAb were 100% and 97.8%, respectively, compared to 87.3% and 93.38%, respectively, using the MAb/ELISA system.
CSA was detected by the AuNPs-MAb/ELISA system in 9 of the 71 S. mansoni-infected patients in whom it was not detected by the MAb/ELISA system. All nine of those patients were found to have an egg count below 50 epg of feces (patients with light infections). ROC curve analyses revealed that the sandwich ELISA using gold-MAb was an excellent diagnostic tool that could differentiate Schistosoma patients from healthy controls; on the other hand, the sandwich ELISA using MAb alone was not accurate enough, as it could not recognize nine of the 71 patients with light infections. Conclusion: Our data demonstrate that loading gold nanoparticles with MAb (6D/6F) increases the sensitivity and specificity of the sandwich ELISA for detection of CSA; thus, active (early) and light infections can be easily detected. Moreover, this binding decreases the amount of MAb consumed in the assay and lowers the cost. The significant positive correlation detected between ova count (intensity of infection) and OD reading in the sandwich ELISA using gold-MAb enables its use to assess the severity of infection and to follow up patients after treatment for monitoring of cure.
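The sensitivity and specificity figures reported above reduce to standard confusion-matrix arithmetic; a minimal sketch (the 62/9 split follows from the nine missed light infections among the 71 egg-positive patients; the specificity example counts are hypothetical):

```python
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate, TP / (TP + FN), as a percentage."""
    return 100.0 * tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate, TN / (TN + FP), as a percentage."""
    return 100.0 * tn / (tn + fp)

# The MAb/ELISA missed 9 of the 71 egg-positive patients:
print(round(sensitivity(71 - 9, 9), 1))  # 87.3, matching the reported value
```

The AuNPs-MAb system's 100% sensitivity corresponds to zero false negatives over the same 71 patients.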

Keywords: schistosomiasis, nanoparticles, gold, monoclonal antibodies, ELISA

Procedia PDF Downloads 371
663 Cobb Angle Measurement from Coronal X-Rays Using Artificial Neural Networks

Authors: Andrew N. Saylor, James R. Peters

Abstract:

Scoliosis is a complex 3D deformity of the thoracic and lumbar spines, clinically diagnosed by measurement of a Cobb angle of 10 degrees or more on a coronal X-ray. The Cobb angle is the angle made by the lines drawn along the proximal and distal endplates of the respective proximal and distal vertebrae comprising the curve. Traditionally, Cobb angles are measured manually using either a marker, straight edge, and protractor or image measurement software. The task of measuring the Cobb angle can also be represented by a function taking the spine geometry rendered using X-ray imaging as input and returning the approximate angle. Although the form of such a function may be unknown, it can be approximated using artificial neural networks (ANNs). The performance of ANNs is affected by many factors, including the choice of activation function and network architecture; however, the effects of these parameters on the accuracy of scoliotic deformity measurements are poorly understood. Therefore, the objective of this study was to systematically investigate the effect of ANN architecture and activation function on Cobb angle measurement from the coronal X-rays of scoliotic subjects. The data set for this study consisted of 609 coronal chest X-rays of scoliotic subjects divided into 481 training images and 128 test images. These data, which included labeled Cobb angle measurements, were obtained from the SpineWeb online database. In order to normalize the input data, each image was resized using bi-linear interpolation to a size of 500 × 187 pixels, and the pixel intensities were scaled to be between 0 and 1. A fully connected (dense) ANN with a fixed cost function (mean squared error), batch size (10), and learning rate (0.01) was developed using Python Version 3.7.3 and TensorFlow 1.13.1. 
The activation functions (sigmoid, hyperbolic tangent [tanh], or rectified linear units [ReLU]), number of hidden layers (1, 3, 5, or 10), and number of neurons per layer (10, 100, or 1000) were varied systematically to generate a total of 36 network conditions. Stochastic gradient descent with early stopping was used to train each network. Three trials were run per condition, and the final mean squared errors and mean absolute errors were averaged to quantify the network response for each condition. The network that performed best used ReLU neurons, three hidden layers, and 100 neurons per layer. The average mean squared error of this network was 222.28 ± 30 degrees², and the average mean absolute error was 11.96 ± 0.64 degrees. It is also notable that, while most of the networks performed similarly, the networks using ReLU neurons, 10 hidden layers, and 1000 neurons per layer, and those using tanh neurons, one hidden layer, and 10 neurons per layer, performed markedly worse, with average mean squared errors greater than 400 degrees² and average mean absolute errors greater than 16 degrees. From the results of this study, it can be seen that the choice of ANN architecture and activation function has a clear impact on Cobb angle inference from coronal X-rays of scoliotic subjects.
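As an illustration of the architecture search described above (a NumPy sketch, not the authors' TensorFlow code), the forward pass of such a fully connected regressor can be parameterized by the same activation, depth, and width choices; the weight initialization and demo inputs are placeholders:

```python
import numpy as np

# The three activation choices explored in the study
ACTIVATIONS = {
    "sigmoid": lambda x: 1.0 / (1.0 + np.exp(-x)),
    "tanh": np.tanh,
    "relu": lambda x: np.maximum(0.0, x),
}

def dense_forward(x, weights, activation):
    """Forward pass of a fully connected regressor: hidden layers use
    `activation`; the output layer is linear with one Cobb-angle unit."""
    act = ACTIVATIONS[activation]
    *hidden, (w_out, b_out) = weights
    for w, b in hidden:
        x = act(x @ w + b)
    return x @ w_out + b_out

def init_weights(n_in, n_hidden_layers, n_per_layer, rng):
    """Random He-scaled weights for the chosen depth and width."""
    sizes = [n_in] + [n_per_layer] * n_hidden_layers + [1]
    return [(rng.standard_normal((a, b)) * np.sqrt(2.0 / a), np.zeros(b))
            for a, b in zip(sizes, sizes[1:])]

# Best condition reported: ReLU, 3 hidden layers, 100 neurons per layer;
# inputs are flattened 500 x 187 images scaled to [0, 1].
rng = np.random.default_rng(0)
x = rng.random((10, 500 * 187))
w = init_weights(500 * 187, 3, 100, rng)
print(dense_forward(x, w, "relu").shape)  # (10, 1): one predicted angle per image
```

The full 36-condition grid is the product of the 3 activations, 4 depths, and 3 widths listed above.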

Keywords: scoliosis, artificial neural networks, cobb angle, medical imaging

Procedia PDF Downloads 129
662 The Temporal Pattern of Bumble Bees in Plant Visiting

Authors: Zahra Shakoori, Farid Salmanpour

Abstract:

Pollination is a vital ecosystem service for maintaining environmental stability. The decline of pollinators can disrupt the ecological balance by affecting components of biodiversity. Bumble bees are crucial pollinators, playing a vital role in maintaining plant diversity. This study investigated the temporal patterns of their visits to flowers in Kiasar National Park, Iran. Observations were conducted in June 2024, totaling 442 person-minutes of observation. Five species of bumble bees were identified. The study revealed that they consistently visited an average of 12-15 flowers per minute, regardless of species. The findings highlight the importance of protecting natural habitats, where bumble bee populations are thriving in the absence of human-induced stressors. This study was conducted in Kiasar National Park, located in the southeast of Mazandaran, northern Iran. The surveyed area, at an altitude of 1800-2200 meters, includes both forest and pasture. Bumble bee surveys were carried out on sunny days in June 2024, starting at dawn and ending at sunset. To avoid double-counting, we systematically searched foraging habitats on low-sloping ridges with high mud density, frequently moving between patches. We recorded bumble bee visits to flowers and plant species per minute using direct observation, a stopwatch, and a pre-prepared form. We used analysis of variance (ANOVA) with a confidence level of 95% to examine potential differences in foraging rates across the different bumble bee species, flowers, plant bases, and plant species visited. Bumble bee identification relied on morphological indicators. A total of 442 person-minutes of bumble bee observations were recorded. Five species of bumble bees (Bombus fragrans, Bombus haematurus, Bombus lucorum, Bombus melanurus, Bombus terrestris) were identified during the study. The results of this study showed that the visit rates of the bumble bee species to flower resources did not differ from each other.
In general, bumble bees visit an average of 12-15 flowers every 60 seconds. In the same interval, they visit between 3 and 5 plant bases, and an average of 1 to 3 plant species per minute. While many taxa contribute to pollination, insects, especially bees, are crucial for maintaining plant diversity and ecosystem functions. As plant diversity increases, the stopping rate of pollinating insects rises, which reduces their foraging activity. Bumble bees therefore stop more frequently in natural areas than in agricultural fields due to higher plant diversity. Our findings emphasize the need to protect natural habitats like Kiasar National Park, where bumble bees thrive without human-induced stressors such as pesticides, livestock grazing, and pollution. With bumble bee populations declining globally, further research is essential to understand their behavior in different environments and to develop effective conservation strategies to protect them.
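The one-way ANOVA used above (95% confidence level) can be sketched as follows; the per-species samples are synthetic stand-ins drawn around the reported 12-15 flowers/min range, not the field records:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
species = ["B. fragrans", "B. haematurus", "B. lucorum",
           "B. melanurus", "B. terrestris"]
# Synthetic visits-per-minute samples, one group per species (illustrative only)
samples = [rng.normal(13.5, 1.0, size=30) for _ in species]

# One-way ANOVA across the five species' visit rates
f_stat, p_value = f_oneway(*samples)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A p-value above 0.05 would indicate no significant difference among
# species, consistent with the study's finding.
```

With the real per-minute records in place of the synthetic samples, the same call tests the species, flower, plant-base, and plant-species comparisons reported above.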

Keywords: bumble bees, pollination, pollinator, plant diversity, Iran

Procedia PDF Downloads 30
661 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling

Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé

Abstract:

Large forged blocks made of medium-carbon high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. The manufacturing process of these large blocks starts with ingot casting, followed by open-die forging and a quench-and-temper heat treatment to achieve the desired mechanical properties, and numerical simulation is now widely used to predict these properties before the experiment. However, the temperature gradient inside the specimen remains challenging: the temperature within the material before loading is not uniform, yet simulations typically impose a constant temperature on the assumption that the temperature has homogenized after some holding time. To be close to the experiment, the real temperature distribution through the specimen is therefore needed before the mechanical loading. We present here a robust algorithm for calculating the temperature gradient within the specimen, representing a realistic temperature distribution before deformation. Indeed, most numerical simulations assume a uniform temperature, which is not really the case because the surface and core temperatures of the specimen differ. Another feature that influences the mechanical properties of the specimen is recrystallization, which strongly depends on the deformation conditions and on the type of deformation, such as upsetting and cogging. Indeed, upsetting and cogging are the stages where the greatest deformations are observed, and many microstructural phenomena can occur, such as recrystallization, which requires in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size during the process and therefore helps to improve the mechanical properties of the final product.
Thus, the identification of the conditions for the initiation of dynamic recrystallization remains relevant. The temperature distribution within the sample and the strain rate also influence recrystallization initiation, so developing a technique to predict the onset of this recrystallization remains challenging. In this perspective, we propose here, in addition to the algorithm for obtaining the temperature distribution before the loading stage, an analytical model for determining the initiation of this recrystallization. These two techniques are implemented in the Abaqus finite element software via the UAMP and VUHARD subroutines, for comparison with a simulation in which an isothermal temperature is imposed. An artificial neural network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. From the simulation, the temperature distribution inside the material and the initiation of recrystallization are properly predicted and compared to literature models.
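The kind of non-uniform pre-loading temperature field the algorithm targets can be illustrated with a simple 1D explicit finite-difference conduction solve (a generic sketch under assumed parameters; the geometry, material values, and boundary conditions are illustrative, not the authors' UAMP implementation):

```python
import numpy as np

def radial_temperature_profile(n=50, alpha=5e-6, radius=0.05,
                               t_surface=1100.0, t_core=900.0,
                               hold_time=60.0):
    """Explicit 1D finite-difference conduction from surface to core,
    returning the temperature field (K) after a holding time (s).
    alpha is the thermal diffusivity (m^2/s); all values are illustrative."""
    dx = radius / (n - 1)
    dt = 0.4 * dx**2 / alpha              # stable explicit time step
    T = np.linspace(t_surface, t_core, n)  # initial surface-to-core gradient
    for _ in range(int(hold_time / dt)):
        # interior update from the previous field (vectorized Laplacian)
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        T[0] = t_surface                   # surface held at furnace temperature
        T[-1] = T[-2]                      # zero-flux symmetry at the core

    return T

T = radial_temperature_profile()
print(f"surface {T[0]:.0f} K, core {T[-1]:.0f} K")  # a gradient persists
```

After a finite holding time the core has warmed but not equalized with the surface, which is the situation the abstract argues an isothermal simulation misrepresents.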

Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation

Procedia PDF Downloads 80
660 The Model of Open Cooperativism: The Case of Open Food Network

Authors: Vangelis Papadimitropoulos

Abstract:

This paper is part of the research program “Techno-Social Innovation in the Collaborative Economy”, funded by the Hellenic Foundation for Research and Innovation (H.F.R.I.) for the years 2022-2024. The paper showcases the Open Food Network (OFN) as an open-source digital platform supporting short food supply chains in local agricultural production and consumption. The paper outlines the research hypothesis, the theoretical framework, and the methodology of the research, as well as the findings and conclusions. Research hypothesis: the model of open cooperativism as a vehicle for systemic change in the agricultural sector. Theoretical framework: the research reviews the OFN as an illustrative case study of the three-zoned model of open cooperativism. The OFN is considered a paradigmatic case of the model of open cooperativism inasmuch as it produces commons, it consists of multiple stakeholders including ethical market entities, and it is variously supported by local authorities across the globe, the latter prefiguring in miniature the role of a partner state. Methodology: the research employs Ernesto Laclau and Chantal Mouffe’s discourse analysis (elements, floating signifiers, nodal points, discourses, logics of equivalence and difference) to analyse the breadth of empirical data gathered through a literature review, digital ethnography, a survey, and in-depth interviews with core OFN members. Discourse analysis classifies OFN floating signifiers, nodal points, and discourses into four themes: value proposition, governance, economic policy, and legal policy. Findings: OFN floating signifiers align around the following nodal points and discourses: “digital commons”, “short food supply chains”, “sustainability”, “local”, “the elimination of intermediaries”, and “systemic change”. The current research identifies a lack of common ground on what the discourse of “systemic change” signifies on the premises of the OFN’s value proposition.
The lack of a common mission may be detrimental to the formation of a common strategy that would be perhaps deemed necessary to bring about systemic change in agriculture. Conclusions: Drawing on Laclau and Mouffe’s discourse theory of hegemony, research introduces a chain of equivalence by aligning discourses such as “agro-ecology”, “commons-based peer production”, “partner state” and “ethical market entities” under the model of open cooperativism, juxtaposed against the current hegemony of neoliberalism, which articulates discourses such as “market fundamentalism”, “privatization”, “green growth” and “the capitalist state” to promote corporatism and entrepreneurship. Research makes the case that for OFN to further agroecology and challenge the current hegemony of industrial agriculture, it is vital that it opens up its supply chains into equivalent sectors of the economy, civil society, and politics to form a chain of equivalence linking together ethical market entities, the commons and a partner state around the model of open cooperativism.

Keywords: sustainability, the digital commons, open cooperativism, innovation

Procedia PDF Downloads 72
659 Computational Investigation on Structural and Functional Impact of Oncogenes and Tumor Suppressor Genes on Cancer

Authors: Abdoulie K. Ceesay

Abstract:

Within the whole genome, 99.9% of the human sequence is shared, whilst our differences lie in just 0.1%. Among these minor dissimilarities, the most common type of genetic variation in a population is the single nucleotide polymorphism (SNP), which arises from a nucleotide substitution in a protein sequence and can lead to protein destabilization, alteration of dynamics, and distortion of other physico-chemical properties. While causing variation, SNPs are equally responsible for differences in the way we respond to a treatment or a disease, including various cancer types. There are two types of SNPs: synonymous single nucleotide polymorphisms (sSNPs) and non-synonymous single nucleotide polymorphisms (nsSNPs). An sSNP occurs in the gene coding region without causing a change in the encoded amino acid, while an nsSNP can be deleterious because its replacement of a nucleotide residue in the gene sequence results in a change in the encoded amino acid. Predicting the effects of cancer-related nsSNPs on protein stability, function, and dynamics is important due to the significance of the phenotype-genotype association in cancer. In this thesis, data for 5 oncogenes (ONGs) (AKT1, ALK, ERBB2, KRAS, BRAF) and 5 tumor suppressor genes (TSGs) (ESR1, CASP8, TET2, PALB2, PTEN) were retrieved from ClinVar. Five common in silico tools (PolyPhen, PROVEAN, Mutation Assessor, SuSPect, and FATHMM) were used to predict and categorize nsSNPs as deleterious, benign, or neutral. To understand the impact of each variation on the phenotype, the Maestro, PremPS, CUPSAT, and mCSM-NA in silico structural prediction tools were used. This study comprises an in-depth analysis of the variants of the 10 cancer genes downloaded from ClinVar, with various analyses conducted to derive meaningful conclusions from the data. Our research shows that pathogenic and destabilizing variants are more common among ONGs than TSGs.
Moreover, our data indicate that ALK (409) and BRAF (86) have the higher benign counts among ONGs, whilst among TSGs, the PALB2 (1308) and PTEN (318) genes have the higher benign counts. Looking at the individual cancer genes' predispositions, or frequencies of causing cancer, according to our data, KRAS (76%), BRAF (55%), and ERBB2 (36%) among ONGs, and PTEN (29%) and ESR1 (17%) among TSGs, have the higher tendencies to cause cancer. The results obtained can shed light on future research and pave new frontiers in cancer therapies.

Keywords: tumor suppressor genes (TSGs), oncogenes (ONGs), non-synonymous single nucleotide polymorphism (nsSNP), single nucleotide polymorphism (SNP)

Procedia PDF Downloads 86
658 Therapeutic Challenges in Treatment of Adults Bacterial Meningitis Cases

Authors: Sadie Namani, Lindita Ajazaj, Arjeta Zogaj, Vera Berisha, Bahrije Halili, Luljeta Hasani, Ajete Aliu

Abstract:

Background: The outcome of bacterial meningitis is strongly related to the resistance of bacterial pathogens to the initial antimicrobial therapy. The objective of the study was to analyze the initial antimicrobial therapy, the resistance of meningeal pathogens, and the outcome of adult bacterial meningitis cases. Materials/methods: This prospective study enrolled 46 adults older than 16 years of age, treated for bacterial meningitis during 2009 and 2010 at the infectious diseases clinic in Prishtinë. Patients were categorized into specific age groups: > 16-26 years of age (10 patients), > 26-60 years of age (25 patients), and > 60 years of age (11 patients). All p-values < 0.05 were considered statistically significant. Data were analyzed using Stata 7.1 and SPSS 13. Results: During the two-year study period, 46 patients (28 males) were treated for bacterial meningitis. 33 patients (72%) had a confirmed bacterial etiology: 13 meningococci, 11 pneumococci, 7 gram-negative bacilli (Ps. aeruginosa, 2; Proteus sp., 2; Acinetobacter sp., 2; and Klebsiella sp., 1 case), and 2 staphylococci isolates were found. Neurological complications developed in 17 patients (37%), and the overall mortality rate was 13% (6 deaths). The neurological complications observed were: cerebral abscess (7/46; 15.2%), cerebral edema (4/46; 8.7%), hemiparesis (3/46; 6.5%), recurrent seizures (2/46; 4.3%), and single cases of cavernous sinus thrombosis, facial nerve palsy, and decerebration (1/46; 2.1% each). The most common meningeal pathogens were meningococcus in the youngest age group, gram-negative bacilli in the second age group, and pneumococcus in the elderly age group. Initial single-agent antibiotic therapy (ceftriaxone) was used in 17 patients (37%): in 60% of patients in the youngest age group and in 44% of cases in the second age group. 29 patients (63%) were treated with initial dual-agent antibiotic therapy: ceftriaxone in combination with vancomycin or ampicillin.
Ceftriaxone and ampicillin were the most commonly used antibiotics for the initial empirical therapy in adults > 50 years of age. All adults > 60 years of age were treated with initial dual-agent antibiotic therapy, as this age group recorded the highest mortality rate (27%) and rate of adverse outcome (64%). Resistance of pathogens to antimicrobials was recorded in cases caused by gram-negative bacilli and showed a trend toward greater risk of developing neurological complications (p = 0.09). None of the gram-negative bacilli were resistant to carbapenems; all were resistant to ampicillin, while 5/7 isolates were resistant to cephalosporins. No resistance of meningococci or pneumococci to beta-lactams was recorded. There were no statistically significant differences between age groups in the occurrence of neurological complications (p > 0.05), resistance of meningeal pathogens to antimicrobials (p > 0.05) or the initial antimicrobial therapy (one vs. two antibiotics). Conclusions: The initial antibiotic therapy with ceftriaxone alone or in combination with vancomycin or ampicillin did not cover cases caused by gram-negative bacilli.
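As an illustrative aside only: the study's association between gram-negative resistance and neurological complications (p = 0.09) was computed in Stata/SPSS, but for small 2x2 tables of this kind a two-sided Fisher exact test can be sketched in a few lines of standard-library Python. The counts in the usage example below are hypothetical, not the study's data.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test p-value for the 2x2 table
    [[a, b], [c, d]], by enumerating every table with the same
    row and column totals (hypergeometric distribution)."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(x):
        # Probability of observing x in the top-left cell given fixed margins.
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo = max(0, row1 + col1 - n)   # smallest feasible top-left cell
    hi = min(row1, col1)           # largest feasible top-left cell
    # Sum probabilities of all tables at least as extreme as the observed one.
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + 1e-12)
```

For example, for the hypothetical table [[3, 1], [1, 3]], fisher_exact_two_sided(3, 1, 1, 3) returns 34/70 ≈ 0.486.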

Keywords: adults, bacterial meningitis, outcomes, therapy

Procedia PDF Downloads 173
657 Solution Thermodynamics, Photophysical and Computational Studies of TACH2OX, a C-3 Symmetric 8-Hydroxyquinoline: Abiotic Siderophore Analogue of Enterobactin

Authors: B. K. Kanungo, Monika Thakur, Minati Baral

Abstract:

8-Hydroxyquinoline (8HQ) is experiencing a renaissance due to its utility as a building block in metallosupramolecular chemistry and the versatile use of its derivatives in various fields of analytical chemistry, materials science, and pharmaceutics. It forms stable complexes with a variety of metal ions. Assembly of more than one such unit into a polydentate chelator enhances its coordinating ability and the related properties through the chelate effect, resulting in a high stability constant. Keeping the above in view, a nonadentate chelator, N-[3,5-bis(8-hydroxyquinoline-2-amido)cyclohexyl]-8-hydroxyquinoline-2-carboxamide (TACH2OX), containing a central cis,cis-1,3,5-triaminocyclohexane appended to three 8-hydroxyquinoline units at the 2-position through amide linkages, was developed, and its solution thermodynamics, photophysical and Density Functional Theory (DFT) studies were undertaken. The synthesis of TACH2OX was carried out by condensation of cis,cis-1,3,5-triaminocyclohexane (TACH) with 8-hydroxyquinoline-2-carboxylic acid. The brown solid has been fully characterized through melting point, infrared, nuclear magnetic resonance, electrospray ionization mass and electronic spectroscopy. In solution, TACH2OX forms protonated complexes below pH 3.4, which deprotonate successively to generate a trinegative ion as the pH rises. Nine protonation constants were obtained for the ligand, ranging from 2.26 to 7.28. The interaction of the chelator with two trivalent metal ions, Fe3+ and Al3+, was studied in aqueous solution at 298 K. The metal-ligand (ML) formation constants obtained by potentiometric and spectrophotometric methods agree with each other. Protonated and hydrolyzed species were also detected in the system.
The in-silico studies of the ligand and of the complexes, including their protonated and deprotonated species, assessed by the density functional theory technique, gave an accurate correlation with each observed property, such as the protonation constants, stability constants, and the infrared, NMR, electronic absorption and emission spectral bands. The number and nature of the electronic and emission spectral bands were ascertained from a time-dependent density functional theory study and the natural transition orbitals (NTO). Global reactivity indices were used to compare the reactivity of the ligand and the complex molecules. Natural bonding orbital (NBO) analysis successfully described the structure and bonding of the metal-ligand complexes, specifying the percentage contribution of atomic orbitals in the creation of molecular orbitals. The high metal-ligand formation constants obtained indicate that the newly synthesized chelator is a very powerful one. The minimum-energy molecular modeling structure suggests that the ligand, TACH2OX, firmly coordinates to the metal ion in a tripodal fashion as a hexacoordinate chelate displaying distorted octahedral geometry, binding through the three sets of N, O-donor atoms present in each pendant arm of the central tris-cyclohexaneamine tripod.
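As a rough illustration of the solution-thermodynamics picture above (not part of the authors' analysis): stepwise protonation constants such as the nine reported here fix the pH-dependent distribution of protonation states through the cumulative constants β_i = K1·K2·…·Ki. A minimal sketch, using hypothetical log K values rather than the published constants:

```python
def species_fractions(log_ks, ph):
    """Fraction of each protonation state L, HL, H2L, ... at a given pH,
    from stepwise protonation constants log K_i (K_i = [HiL]/([H(i-1)L][H+]))."""
    h = 10.0 ** (-ph)
    terms = [1.0]          # relative concentration of the fully deprotonated form L
    beta = 1.0             # cumulative constant beta_i = K1 * K2 * ... * Ki
    for log_k in log_ks:
        beta *= 10.0 ** log_k
        terms.append(beta * h ** len(terms))   # [HiL] is proportional to beta_i [H+]^i
    total = sum(terms)
    return [t / total for t in terms]
```

At a pH equal to a given log K, the two adjacent species are present in equal amounts; well below the smallest pK, the fully protonated form dominates, consistent with the protonated complexes observed below pH 3.4.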

Keywords: complexes, DFT, formation constant, TACH2OX

Procedia PDF Downloads 150
656 A Multi-Perspective, Qualitative Study into Quality of Life for Elderly People Living at Home and the Challenges for Professional Services in the Netherlands

Authors: Hennie Boeije, Renate Verkaik, Joke Korevaar

Abstract:

In Dutch national policy, it is promoted that the elderly remain living at home longer; they are less often admitted to a nursing home, or only later in life. While living at home, it is important that they experience a good quality of life, and care providers in primary care support this. In this study, it was investigated what quality of life means for the elderly and which characteristics care should have to support living at home longer with quality of life. To explore this topic, a qualitative methodology was used. Four focus groups were conducted: two with elderly people who live at home and their family caregivers, one with district nurses employed in home care services and one with elderly care physicians working in primary care. In addition, individual interviews were conducted with general practitioners (GPs). In total, 32 participants took part in the study. The data were thematically analysed with MaxQDA software for qualitative analysis and reported. Quality of life is a multi-faceted term for the elderly. The essence of their descriptions is that they can still undertake activities that matter to them. Good physical health, mental well-being and social connections enable them to do this. Control over their own lives is important to some. They are of the opinion that how they experience life and manage old age is related to their resilience and coping. The key terms in GPs' definitions of quality of life are likewise physical and mental health and social contacts: the three pillars. Next to these, elderly care physicians mention security and safety, and district nurses add control over one's own life and meaningful daily activities. They agree that with frail elderly people the balance is delicate, and a change in one of the three pillars can cause it to collapse like a house of cards. When discussing what support is needed, professionals agree on access to care with a low threshold, prevention, and life course planning.
When care is provided in a timely manner, a worsening of the situation can be prevented. They agree that hospital care is often not needed, since most of the problems of the elderly have to do with care and security rather than with cure per se. GPs can consult elderly care physicians to lower their workload and to bring in specific knowledge. District nurses often signal changes in the situation of the elderly. According to them, the elderly predominantly need someone to watch over them and provide them with a feeling of security. Life course planning and advance care planning can contribute to uniform treatment in line with older adults' wishes. In conclusion, all stakeholders, including elderly persons, agree on what quality of life entails and on the quality of care needed to support it. A future challenge is to shape conditions for the right skill mix of professionals, cooperation between the professions, and breaking down differences in financing and supply. For the elderly, the challenge is preparing for aging.

Keywords: elderly living at home, quality of life, quality of care, professional cooperation, life course planning, advance care planning

Procedia PDF Downloads 128
655 Gamifying Content and Language Integrated Learning: A Study Exploring the Use of Game-Based Resources to Teach Primary Mathematics in a Second Language

Authors: Sarah Lister, Pauline Palmer

Abstract:

Research findings presented within this paper form part of a larger-scale collaboration between academics at Manchester Metropolitan University and a technology company. The overarching aims of this project focus on developing a series of game-based resources to promote the teaching of aspects of mathematics through a second language (L2) in primary schools. This study explores the potential of game-based learning (GBL) as a dynamic way to engage and motivate learners, making learning fun and purposeful. The research examines the capacity of GBL resources to provide a meaningful and purposeful context for Content and Language Integrated Learning (CLIL). GBL is a powerful learning environment and acts as an effective vehicle to promote the learning of mathematics through an L2. The fun element of GBL can minimise the stress and anxiety associated with mathematics and L2 learning that can create barriers. GBL provides one of the few safe domains where it is acceptable for learners to fail. Games can provide a life-enhancing experience for learners, revolutionizing routinized ways of learning by fusing learning and play. This study argues that playing games requires learners to think creatively to solve mathematical problems, using the L2 in order to progress, which can be associated with the development of higher-order thinking skills and independent learning. GBL requires learners to engage appropriate cognitive processes, with increased speed of processing, sensitivity to environmental inputs, and flexibility in allocating cognitive and perceptual resources. At surface level, GBL resources provide opportunities for learners to learn to do things. Games that fuse subject content and appropriate learning objectives have the potential to make learning academic subjects more learner-centered, more enjoyable, more stimulating and engaging, to promote learner autonomy, and therefore to be more effective. Data include observations of the children playing the games and follow-up group interviews.
Given that learning as a cognitive event cannot be directly observed or measured, a Cognitive Discourse Functions (CDF) construct was used to frame the research, to map the development of learners' conceptual understanding in an L2 context, and as a framework to observe the discursive interactions that occur between learners and between learner and teacher. Cognitively, the children were required to engage with mathematical content, concepts and language to make decisions quickly, to engage with the gameplay to reason, solve and overcome problems, and to learn through experimentation. The visual elements of the games supported the learning of new concepts. Children recognised the value of the games in consolidating their mathematical thinking and developing their understanding of new ideas. The games afforded them time to think and reflect. The teachers affirmed that the games provided meaningful opportunities for the learners to practise the language. The findings of this research support the view that using the game-based resources supported children's grasp of mathematical ideas and their confidence and ability to use the L2. Engaging with the content and language through the games led to deeper learning.

Keywords: CLIL, gaming, language, mathematics

Procedia PDF Downloads 142
654 Clothing Features of Greek Orthodox Woman Immigrants in Konya (Iconium)

Authors: Kenan Saatcioglu, Fatma Koc

Abstract:

Communities have been continuously shaped by migration from the emergence of mankind to the present day. Political, social and economic pressures in various periods have caused communities to move to new places from where they previously lived. Migrations have occurred as a result of unequal opportunities among communities, social exclusion and imposition, politically imposed changes of homeland, exile and war. Immigration is a social phenomenon defined as the geographical relocation of people from one settlement unit (city, village, etc.) to another to spend all or part of their future lives. Migrations have affected the history of humanity directly or indirectly, revealing new dimensions for communities in evaluating the concept of homeland. Through these migrations, communities carried their cultural values to their new settlements, leading to a new process of interaction. Through this process, both migrant and native cultures were reshaped, and richer cultural values emerged. The clothes of these communities are among the most important visual evidence of this rich cultural interaction. As a result of these migrations, communities mutually influenced each other's clothing cultures and began adding features of other cultures to their own garments, producing new clothing cultures over time. The cultural and historical differences between these communities seem to be the most influential factors in keeping their clothing cultures alive. One of the most important and tragic of these migrations took place after the Turkish War of Independence, fought against Greece in 1922. The concept of forced immigration was a result of the Lausanne Peace Treaty, signed between the Turkish and Greek governments on 30th January 1923.
As a result, Greek Orthodox people living in Turkey (Anatolia and Thrace) and Muslim Turks living in Greece were forced to emigrate. In this study, the clothing features of Greek Orthodox woman immigrants who emigrated from Turkey to Greece in the period of the ‘1923 Greek-Turkish Population Exchange’ are examined. In the study, which uses the descriptive research method, garments belonging to Greek Orthodox woman immigrants who lived in the ‘Konya (Iconium)’ region of the Ottoman Empire before the ‘1923 Greek-Turkish Population Exchange’ are discussed. Based on two garments from the ‘Konya (Iconium)’ region held in the clothing collection archive at the ‘National Historical Museum’ in Greece, the clothing of the Greek Orthodox woman immigrants is discussed in relation to cultural norms, beliefs and values as well as in terms of form, ornamentation and dressing styles. Technical drawings are provided demonstrating the formal features of the garment parts that form the clothing's integrity, and their properties are described with the use of the related literature. This study is important in that it examines Greek Orthodox refugees’ garments held in the clothing collection archive at the ‘National Historical Museum’ in Greece, reflecting their cultural identities and providing information and documentation on the clothing features of the ‘1923 Greek-Turkish Population Exchange’.

Keywords: clothing, Greece, Greek Orthodoxes, immigration, national historical museum, Turkey

Procedia PDF Downloads 248
653 Arthroscopic Superior Capsular Reconstruction Using the Long Head of the Biceps Tendon (LHBT)

Authors: Ho Sy Nam, Tang Ha Nam Anh

Abstract:

Background: Rotator cuff tears are a common problem in the aging population. The prevalence of massive rotator cuff tears varies across studies from 10% to 40%. Of irreparable rotator cuff tears (IRCTs), which are mostly associated with massive tear size, 79% are estimated to have recurrent tears after surgical repair. Recent studies have shown that superior capsule reconstruction (SCR) in massive rotator cuff tears can be an efficient technique, with optimistic clinical scores and preservation of glenohumeral stability. SCR techniques most commonly use either fascia lata autograft or dermal allograft, both of which have their own benefits and drawbacks (such as the potential for donor-site issues, allergic reactions, and high cost). We propose a simple technique for superior capsule reconstruction that uses the long head of the biceps tendon as a local autograft; the comorbidities related to graft harvesting are therefore eliminated. The proximal portion of the long head of the biceps tendon is relocated to the footprint and secured as the SCR, serving both to stabilize the glenohumeral joint and to maintain a vascular supply that aids healing. Objective: The purpose of this study is to assess the clinical outcomes of patients with large to massive RCTs treated by SCR using the LHBT. Materials and methods: A study was performed of consecutive patients with large to massive RCTs who were treated by SCR using the LHBT between January 2022 and December 2022. We use one double-loaded suture anchor to secure the long head of the biceps to the middle of the footprint. Two more anchors are used to repair the rotator cuff using a single-row technique, placed anteriorly and posteriorly on the lateral side of the previously transposed LHBT. Results: The 3 men and 5 women had an average age of 61.25 years (range, 48 to 76 years) at the time of surgery. The average follow-up was 8.2 months (6 to 10 months) after surgery.
The average preoperative ASES score was 45.8, and the average postoperative ASES score was 85.83. The average postoperative UCLA score was 29.12. The VAS score improved from 5.9 to 1.12. The mean preoperative ROM of forward flexion and external rotation of the shoulder was 72° ± 16° and 28° ± 8°, respectively. The mean postoperative ROM of forward flexion and external rotation was 131° ± 22° and 63° ± 6°, respectively. There were no cases of progression of osteoarthritis or rotator cuff muscle atrophy. Conclusion: SCR using the LHBT is a treatment option for patients with large or massive RC tears. It can restore superior glenohumeral stability and shoulder joint function, and it can be an effective procedure for selected patients, helping to avoid progression to cuff tear arthropathy.

Keywords: superior capsule reconstruction, large or massive rotator cuff tears, the long head of the biceps, stabilize the glenohumeral joint

Procedia PDF Downloads 77
652 Integrating the Principles of Sustainability and Corporate Social Responsibility (CSR): By Engaging the India Inc. With Sustainable Development Goals (SDGs)

Authors: Radhika Ralhan

Abstract:

With the formalization of the 2030 Global Agenda for Sustainable Development, nations have swiftly geared up their efforts towards the implementation of a comprehensive list of global goals. The criticality of the Sustainable Development Goals (SDGs) is clear, as they will define the course and pace of development for the next 15 years. This development will entail transformational shifts towards green and inclusive growth. Leadership, investment and technology will constitute the key ingredients of this transformational shift, and governance will emerge as one of the most significant drivers of the global 2030 agenda. Corporate governance is viewed as one of the key forces to accelerate the momentum of the SDGs and initiate these transformational shifts. Many senior leaders have reaffirmed their conviction that adopting a triple-bottom-line approach will play an imperative role in transforming the entire industrial sector. In the Indian context, the above bears an intriguing facet, as the framing of the SDGs at the global level coincided with the emergence of mandatory Corporate Social Responsibility (CSR) rules in India at the national level. As one of the leading democracies in the world, India is among the few countries to formally mandate companies to spend 2% of their average net profits on CSR under Section 135 of the Companies Act, 2013. The overarching framework of the SDGs correlates with the areas of CSR intervention mentioned in Schedule VII of Section 135. As legitimate stakeholders, business leaders have expressed commitments to their respective governments to reorient the entire fabric of their companies to scale up global priorities. This is explicitly seen in the case of India, where leading business entities have converged with the national government priorities of Clean India, Make in India and Skill India by actively participating in the campaigns and incorporating these programmes within the ambit of their CSR policies.
However, the CSR mandate has received mixed responses, with associated concerns such as the onus of doing what the government ought to do, mandatory reporting mechanisms, policy disclosures, personnel handling CSR portfolios, etc. The overall objective of the paper, therefore, rests in analyzing the discourse of CSR and the perspectives of India Inc. in imbibing the principles of the SDGs within their business policies and operations. Through primary and secondary research analysis, the paper attempts to outline the diverse challenges faced by Indian businesses while establishing the business case for sustainable responsibility. Some of the principal questions the paper addresses are: What are the SDG priorities for India Inc. in their respective industry sectors? How can corporate policies imbibe the SDG principles? How can global concerns in the form of the SDGs align with the national CSR mandate and development issues? What initiatives have companies undertaken to integrate their long-term business strategy and sustainability? The paper will also propose an approach, or way forward, that will enable businesses to proceed beyond compliance and accentuate the principles of responsibility and transparency within their operational framework.

Keywords: corporate social responsibility, CSR, India Inc., section 135, new companies act 2013, sustainable development goals, SDGs, sustainability, corporate governance

Procedia PDF Downloads 252
651 Carbon-Foam Supported Electrocatalysts for Polymer Electrolyte Membrane Fuel Cells

Authors: Albert Mufundirwa, Satoru Yoshioka, K. Ogi, Takeharu Sugiyama, George F. Harrington, Bretislav Smid, Benjamin Cunning, Kazunari Sasaki, Akari Hayashi, Stephen M. Lyth

Abstract:

Polymer electrolyte membrane fuel cells (PEMFCs) are electrochemical energy conversion devices used for portable, residential and vehicular applications due to their low emissions, high efficiency, and quick start-up characteristics. However, PEMFCs generally use expensive Pt-based electrocatalysts. Due to the high cost and limited availability of platinum, research and development to either drastically reduce platinum loading or replace platinum with alternative catalysts is of paramount importance. A combination of high-surface-area supports and nano-structured active sites is essential for the effective operation of catalysts. We synthesize carbon foam supports by thermal decomposition of sodium ethoxide, using a template-free, gram-scale, cheap, and scalable pyrolysis method. This carbon foam has a high-surface-area, highly porous, three-dimensional framework which is ideal for electrochemical applications. These carbon foams can have surface areas larger than 2500 m²/g, and electron microscopy reveals that they have micron-scale cells separated by few-layer graphene-like carbon walls. We applied this carbon foam as a platinum catalyst support, resulting in improved electrochemical surface area and mass activity for the oxygen reduction reaction (ORR) compared to carbon black. Similarly, silver-decorated carbon foams showed higher activity and efficiency for electrochemical carbon dioxide conversion than silver-decorated carbon black. A promising alternative to Pt catalysts for the ORR is iron-impregnated nitrogen-doped carbon catalysts (Fe-N-C). Doping carbon with nitrogen alters the chemical structure and modulates the electronic properties, allowing a degree of control over the catalytic properties. We have adapted our synthesis method to produce nitrogen-doped carbon foams with large surface area, using triethanolamine as a nitrogen feedstock, in a novel bottom-up protocol.
These foams are then infiltrated with iron acetate (FeAc) and pyrolysed to form Fe-N-C foams. The resulting Fe-N-C foam catalysts have high initial activity (half-wave potential of 0.68 V vs. RHE), comparable to that of commercially available Pt-free catalysts (e.g., NPC-2000, Pajarito Powder) in acid solution. In alkaline solution, the Fe-N-C carbon foam catalysts have a half-wave potential of 0.89 V vs. RHE, which is higher than that of NPC-2000 by almost 10 mV and far outperforms platinum. However, durability is still a problem at present. The lessons learned from X-ray absorption spectroscopy (XAS), transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), and electrochemical measurements will be used to carefully design Fe-N-C catalysts for higher-performance PEMFCs.

Keywords: carbon-foam, polymer electrolyte membrane fuel cells, platinum, Pt-free, Fe-N-C, ORR

Procedia PDF Downloads 180
650 Slope Stabilisation of Highly Fractured Geological Strata Consisting of Mica Schist Layers While Construction of Tunnel Shaft

Authors: Saurabh Sharma

Abstract:

Introduction: This case study deals with the ground stabilisation of Nabi Karim Metro Station in Delhi, India, where extremely complex geology was encountered while excavating the tunnelling shaft for launching the Tunnel Boring Machine. The borelog investigation and the Seismic Refraction Technique (SRT) indicated the presence of an extremely hard rock mass from a depth of only 3-4 m, and accordingly, the Geotechnical Interpretation Report (GIR) concluded the presence of Grade-IV rock from 3 m onwards and of Grade-III and better rock from 5-6 m onwards. Accordingly, it was planned to retain the ground by providing secant piles all around the launching shaft and then excavating the shaft vertically after leaving a berm of 1.5 m to prevent the secant piles from becoming exposed. To retain the side slopes, rock bolting with shotcreting and wire meshing was proposed, which is normal practice in such strata. However, with increasing depth of excavation, the rock quality kept decreasing at an unexpected pace, with the Grade-III rock mass at 5-6 m giving way to a conglomerate formation at a depth of 15 m. This worsening of the geology from high-grade rock to a slushy conglomerate formation could not have been predicted and came as a surprise even to the best geotechnical engineers. Since the excavation had already been cut down vertically to manage the shaft size, execution continued with enhanced caution to stabilise the side slopes. But when the shaft work was about to finish, a collapse occurred on one side of the excavation shaft. This collapse was unexpected, since all measures to stabilise the side slopes had been taken after face mapping, and the grid size, diameter, and depth of the rock bolts had already been readjusted to accommodate the rock fractures.
The above scenario was baffling even to the best geologists and geotechnical engineers, and it was decided that any further slope stabilisation scheme would have to be designed to ensure safe completion of the works. Accordingly, the following revisions to the excavation scheme were made: (1) the excavation would be carried out while maintaining a slope based on the type of soil/rock; (2) the rock bolt type was changed from SN rock bolts to self-drilling anchors; (3) the grid size of the bolts was changed based on real-time assessment; (4) the excavation was carried out by implementing a ‘Bench Release Approach’; and (5) an aggressive real-time instrumentation scheme was adopted. Discussion: The above case study again asserts the vital importance of correct interpretation of the geological strata and the need for real-time revisions of construction schemes based on actual site data. The excavation is successfully being completed with the above revised scheme, and further details of the revised slope stabilisation scheme, instrumentation schemes and monitoring results, along with actual site photographs, will form part of the final paper.

Keywords: unconfined compressive strength (ucs), rock mass rating (rmr), rock bolts, self drilling anchors, face mapping of rock, secant pile, shotcrete

Procedia PDF Downloads 66
649 Displaying Compostela: Literature, Tourism and Cultural Representation, a Cartographic Approach

Authors: Fernando Cabo Aseguinolaza, Víctor Bouzas Blanco, Alberto Martí Ezpeleta

Abstract:

Santiago de Compostela became a stable object of literary representation during the period between approximately 1840 and 1915. This study offers a partial cartographic look at this process, suggesting that the emergence of a cultural space like Compostela as an object of literary representation paralleled the first stages of its becoming a tourist destination. We use maps as a method of analysis to show the interaction between a corpus of novels and the emerging tradition of tourist guides on Compostela during the selected period. Often, the novels constitute ways of presenting a city to the outside, marking it for the gaze of others, as guidebooks do. That leads us to examine the ways of constructing and rendering communicable the local in other contexts. For that matter, we should also acknowledge the fact that a good number of the narratives in the corpus evoke the representation of the city through the figure of one who comes from elsewhere: a traveler, a student or a professor. The guidebooks coincide in this with the emerging fiction, of which the mimesis of a city is a key characteristic. The local cannot define itself except through a process of symbolic negotiation, in which recognition and self-recognition play important roles. Cartography shows some of the forms that these processes of symbolic representation take through the treatment of space. The research uses GIS to find significant models of representation. We used the program ArcGIS for the mapping, defining the databases starting from an adapted version of the methodology applied by Barbara Piatti and Lorenz Hurni's team in Zurich. First, we designed maps that emphasize the peripheral position of Compostela from a historical and institutional perspective, using elements found in the texts of our corpus (novels and tourist guides).
Second, other maps delve into the parallels between recurring techniques in the fictional texts and characteristic devices of the guidebooks (the sketching of itineraries, the selection of zones, and indexicalization), such as a foreigner's visit guided by someone who knows the city, or the description of one's first entrance into the city. Finally, we offer a cartography that demonstrates the connection between the best known of the novels in our corpus (Alejandro Pérez Lugín's 1915 novel La casa de la Troya) and the first attempt to create package tourist tours with Galicia as a destination, a joint venture of Galician and British business owners in the years immediately preceding the Great War. Literary cartography becomes a crucial instrument for digging deeply into the methods of the cultural production of places. Through maps, the interaction between discursive forms seemingly as far removed from each other as novels and tourist guides becomes obvious and suggests the need to go deeper into a complex process through which a city like Compostela becomes visible on the contemporary cultural horizon.

Keywords: compostela, literary geography, literary cartography, tourism

Procedia PDF Downloads 392
648 Synthesis of Functionalized-2-Aryl-2, 3-Dihydroquinoline-4(1H)-Ones via Fries Rearrangement of Azetidin-2-Ones

Authors: Parvesh Singh, Vipan Kumar, Vishu Mehra

Abstract:

Quinolin-4-ones represent an important class of heterocyclic scaffolds that have attracted significant interest due to their various biological and pharmacological activities. This heterocyclic unit also constitutes an integral component of drugs used for the treatment of neurodegenerative diseases and sleep disorders, and of antibiotics viz. norfloxacin and ciprofloxacin. The synthetic accessibility and the possibility of functionalization at varied positions in quinolin-4-ones provide an elegant platform for the design of combinatorial libraries of functionally enriched scaffolds with a range of pharmacological profiles. They are also considered attractive precursors for the synthesis of medicinally important molecules such as non-steroidal androgen receptor antagonists, the antimalarial drug chloroquine, and martinellines with antibacterial activity. 2-Aryl-2,3-dihydroquinolin-4(1H)-ones are present in many natural and non-natural compounds and are considered the aza-analogs of flavanones. The β-lactam class of antibiotics is generally recognized as a cornerstone of human health care due to the unparalleled clinical efficacy and safety of this type of antibacterial compound. In addition to their biological relevance as potential antibiotics, β-lactams have also acquired a prominent place in organic chemistry as synthons, providing highly efficient routes to a variety of compounds such as non-protein amino acids, oligopeptides, peptidomimetics and nitrogen heterocycles, as well as biologically active natural and unnatural products of medicinal interest such as indolizidine alkaloids, paclitaxel, docetaxel, taxoids, cryptophycins, lankacidins, etc. A straightforward route toward the synthesis of quinolin-4-ones via the triflic acid-assisted Fries rearrangement of N-aryl-β-lactams has been reported by Tepe and co-workers.
The ring expansion observed in this case was attributed solely to the inherent ring strain of the β-lactam ring, because the corresponding higher lactam failed to undergo rearrangement under the reaction conditions. The abovementioned protocol has recently been extended by our group to the synthesis of benzo[b]-azocinon-6-ones via a tandem Michael addition-Fries rearrangement of sorbyl anilides, as well as to the single-pot synthesis of 2-aryl-quinolin-4(3H)-ones through the Fries rearrangement of 3-dienyl-β-lactams. In continuation of our synthetic endeavours with the β-lactam ring, and in view of the lack of convenient approaches for the synthesis of C-3 functionalized quinolin-4(1H)-ones, the present work describes the single-pot synthesis of C-3 functionalized quinolin-4(1H)-ones via the triflic acid-promoted Fries rearrangement of C-3 vinyl/isopropenyl-substituted β-lactams. In addition, DFT calculations and MD simulations were performed to investigate the stability profiles of the synthetic compounds.

Keywords: dihydroquinoline, fries rearrangement, azetidin-2-ones, quinoline-4-ones

Procedia PDF Downloads 250