Search results for: easily identification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4382


602 Patients in Opioid Maintenance Programs: Psychological Features that Predict Abstinence

Authors: Janaina Pereira, Barbara Gonzalez, Valentina Chitas, Teresa Molina

Abstract:

Intro: The positive impact of opioid maintenance programs on the health of heroin addicts, and on public health in general, has been widely recognized, namely in reducing the prevalence of infectious diseases such as HIV and in fostering the social reintegration of this population. Nevertheless, some patients in these programs cannot remain abstinent from heroin, or relapse, during treatment. Method: This cross-sectional research therefore analyzes the relation between a set of psychological and psychosocial variables that have been associated with the onset of heroin use, and assesses whether they are also associated with the absence of abstinence in participants in an opioid maintenance program. A total of 62 patients, aged between 26 and 58 years old (M = 40.87, SD = 7.39), with a time in the opioid maintenance program between 1 and 10 years (M = 5.42, SD = 3.05), 77.4% male and 22.6% female, participated in this research. To assess the criterion variable (heroin use), we used the mean value of positive results in urine tests during participation in the program, weighted according to the number of months in the program. The predictor variables were coping strategies, dispositional sensation seeking, and the existence of posttraumatic stress disorder (PTSD). Results: The results showed that only 33.87% of the patients had been completely abstinent from heroin since the beginning of the program, and the absence of abstinence, measured as the number of positive heroin tests, was primarily predicted by less proactive coping and secondarily by a higher level of sensation seeking. 16.13% of the sample fulfilled the diagnostic criteria for PTSD, and 67.74% had experienced at least one traumatic event in their lives. The total number of PTSD symptoms was positively correlated with the number of physical health problems and with the lack of a professional occupation.
These results have several implications for clinical practice in this field. We suggest that the promotion of proactive coping strategies should be integrated into opioid maintenance programs, as such strategies represent the tendency to face future events as challenges and opportunities and are positively related to favourable outcomes in several domains. The early identification of PTSD in participants, before they enter opioid maintenance programs, would be important, as PTSD is related to negative features that hinder social reintegration. Finally, identifying individuals with a sensation-seeking profile would be relevant, not only because they face a higher risk of relapse, but also because therapeutic approaches should not ignore this dispositional feature in the alternatives they propose to patients.
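The abstract does not give an explicit formula for the criterion variable (mean positive urine-test results weighted by months in the program); a minimal sketch of one plausible reading, with hypothetical names, is:

```python
def cohort_weighted_mean(positive_rates, months_in_program):
    """Weighted mean of per-patient positive urine-test rates.

    Each patient's rate (positive tests / total tests) is weighted by the
    number of months spent in the program, so long-term participants
    contribute more to the cohort-level figure.
    """
    total_months = sum(months_in_program)
    return sum(r * m for r, m in zip(positive_rates, months_in_program)) / total_months
```

For example, two patients with positive-test rates 0.0 and 0.5 who each spent 10 months in the program would yield a cohort value of 0.25.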

Keywords: opioid maintenance programs, proactive coping, PTSD, sensation seeking

601 Developing a Framework for Assessing and Fostering the Sustainability of Manufacturing Companies

Authors: Ilaria Barletta, Mahesh Mani, Björn Johansson

Abstract:

The concept of sustainability encompasses economic, environmental, social and institutional considerations. Sustainable manufacturing (SM) is, therefore, a multi-faceted concept. It broadly implies the development and implementation of technologies, projects and initiatives that are concerned with the life cycle of products and services and are able to bring positive impacts to the environment, company stakeholders and profitability. Because of this, achieving SM-related goals requires a holistic, life-cycle-thinking approach from manufacturing companies. Further, such an approach must rely on a logic of continuous improvement and ease of implementation in order to be effective. Currently, the academic literature offers no comprehensively structured framework that supports manufacturing companies in identifying the issues and capabilities that can either hinder or foster sustainability. This scarcity of support extends to difficulties in obtaining quantifiable measurements with which to objectively evaluate solutions and programs and to identify improvement areas within SM for standards conformance. To bridge this gap, this paper proposes a framework for assessing and continuously improving the sustainability of manufacturing companies. The framework addresses strategies and projects for SM and operates in three sequential phases: analysis of the issues, design of solutions, and continuous improvement. Interviews, observations and questionnaires are the research methods to be used for the implementation of the framework. Different decision-support methods, either already existing or novel, can be 'plugged into' each of the phases. These methods can assess anything from business capabilities to process maturity. In particular, the authors are working on the development of a sustainable manufacturing maturity model (SMMM) as decision support within the 'continuous improvement' phase.
The SMMM, inspired by previous maturity models, is made up of four maturity levels ranging from 'non-existing' to 'thriving'. Aggregate findings from the use of the framework should ultimately reveal to managers and CEOs the roadmap for achieving SM goals and identify the maturity of their companies’ processes and capabilities. Two cases from two manufacturing companies in Australia are currently being used to develop and test the framework. The use of this framework will bring two main benefits: it will enable visual, intuitive internal sustainability benchmarking and raise awareness of the improvement areas that lead companies towards an increasingly developed SM.
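The abstract names only the end levels of the SMMM scale; a sketch of how per-capability scores might be aggregated into one of four levels could look like the following (the two middle labels and the scoring scheme are hypothetical placeholders, not the authors' model):

```python
# Only the end labels appear in the abstract; the middle two are invented here.
LEVELS = ["non-existing", "developing", "established", "thriving"]

def maturity_level(capability_scores):
    """Map the mean of per-capability scores in [0, 1] to one of four levels."""
    avg = sum(capability_scores) / len(capability_scores)
    index = min(int(avg * len(LEVELS)), len(LEVELS) - 1)
    return LEVELS[index]
```

A company scoring uniformly high across its assessed capabilities would land at 'thriving', while near-zero scores map to 'non-existing'.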

Keywords: life cycle management, continuous improvement, maturity model, sustainable manufacturing

600 Purple Spots on Historical Parchments: Confirming the Microbial Succession at the Basis of Biodeterioration

Authors: N. Perini, M. C. Thaller, F. Mercuri, S. Orlanducci, A. Rubechini, L. Migliore

Abstract:

The preservation of cultural heritage is one of the major challenges of today’s society, because future generations have a fundamental right to inherit it as the continuity of their historical and cultural identity. Parchments, consisting of a semi-solid matrix of collagen produced from animal skin (i.e., sheep or goat), are a significant part of this heritage, having been used as writing material for many centuries. Due to their animal origin, parchments easily undergo biodeterioration. The most common biological damage is characterized by isolated or coalescent purple spots that often lead to the detachment of the superficial layer and the loss of the written historical content of the document. Although many parchments with the same biodegradative features have been analyzed, no common causative agent has been found so far. Very recently, a study was performed on a purple-damaged parchment roll dated back to 1244 A.D., the A.A. Arm. I-XVIII 3328, belonging to the oldest collection of the Vatican Secret Archive (Fondo 'Archivum Arcis'), by comparing uncolored undamaged and purple damaged areas of the same document. As a whole, the study supported a hypothetical model of biodeterioration consisting of a microbial succession acting in two main phases: the first, common to all the damaged parchments, is characterized by halophilic and halotolerant bacteria fostered by the salty environment within the parchment, possibly induced by the brining of the hides; the second, which varies with the individual history of each parchment, determines the identity of its colonizers. The design of this model was pivotal to this study, performed by different labs of the Tor Vergata University (Rome, Italy) in collaboration with the Vatican Secret Archive.
Three documents, belonging to a collection of dramatically damaged parchments archived as 'Faldone Patrizi A 19' (dated back to the XVII century A.D.), were analyzed through a multidisciplinary approach including three up-to-date technologies: (i) Next Generation Sequencing (NGS, Illumina) to describe the microbial communities colonizing the damaged and undamaged areas, (ii) Raman spectroscopy to analyze the purple pigments, and (iii) Light Transmitted Analysis (LTA) to evaluate the kind and extent of the damage to native collagen. The metagenomic analysis obtained from NGS revealed DNA sequences belonging to Halobacterium salinarum, mainly in the undamaged areas. Raman spectroscopy detected pigments within the purple spots, mainly bacteriorhodopsin/rhodopsin-like pigments; bacteriorhodopsin is a purple transmembrane protein that contains retinal and is present in Halobacteria. The LTA technique revealed extremely damaged collagen structures in both damaged and undamaged areas of the parchments. In the light of these data, the study represents a first confirmation of the microbial succession model described above. The demonstration of this model is pivotal not only to any new restoration strategy aiming to bring historical parchments back to their original beauty, but also to opening opportunities for intervention on a huge number of documents.

Keywords: biodeterioration, parchments, purple spots, ecological succession

599 Selection of Suitable Reference Genes for Assessing Endurance Related Traits in a Native Pony Breed of Zanskar at High Altitude

Authors: Prince Vivek, Vijay K. Bharti, Manishi Mukesh, Ankita Sharma, Om Prakash Chaurasia, Bhuvnesh Kumar

Abstract:

High endurance performance in equids requires adaptive changes involving physio-biochemical and molecular responses in an attempt to regain homeostasis. We hypothesized that identifying suitable reference genes might support the assessment of endurance-related traits in ponies at high altitude and help identify individuals with strong endurance potential. A total of 12 mares of the Zanskar pony breed were divided into three groups, group-A (without load), group-B (60 kg backpack load) and group-C (80 kg backpack load), and subjected to a load carry protocol on a steep 4 km uphill climb over a gravelly, uneven rocky track at an altitude of 3292 m to 3500 m (endpoint). Blood was collected before and immediately after the load carry into sodium heparin anticoagulant, and peripheral blood mononuclear cells were separated for total RNA isolation and subsequent cDNA synthesis. Real-time PCR reactions were carried out to evaluate the mRNA expression profiles of a panel of putative internal control genes (ICGs) belonging to different functional classes, namely glyceraldehyde 3-phosphate dehydrogenase (GAPDH), β₂ microglobulin (β₂M), β-actin (ACTB), ribosomal protein 18 (RS18), hypoxanthine-guanine phosphoribosyltransferase (HPRT), ubiquitin B (UBB), ribosomal protein L32 (RPL32), transferrin receptor protein (TFRC) and succinate dehydrogenase complex subunit A (SDHA), for normalizing the real-time quantitative polymerase chain reaction (qPCR) data of the native ponies. Three different algorithms, the geNorm, NormFinder and BestKeeper software tools, were used to evaluate the stability of the reference genes. The results showed that GAPDH was the most stable gene, and the most stable combination of two genes was TFRC and β₂M. In conclusion, the geometric mean of GAPDH, TFRC and β₂M might be used for accurate normalization of transcriptional data for assessing endurance-related traits in Zanskar ponies during load carrying.
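The geometric-mean normalization suggested in the conclusion can be sketched as follows (geNorm-style; the function names and the example expression values are illustrative assumptions, not data from the study):

```python
import math

def normalization_factor(ref_values):
    """Geometric mean of the reference-gene expression values (geNorm-style)."""
    return math.prod(ref_values) ** (1 / len(ref_values))

def normalized_expression(target_value, ref_values):
    """Target-gene expression divided by the reference normalization factor."""
    return target_value / normalization_factor(ref_values)
```

For instance, hypothetical relative expression values of 2.0, 4.0 and 8.0 for GAPDH, TFRC and β₂M give a normalization factor of 4.0, by which any target gene's expression would be divided.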

Keywords: endurance exercise, ubiquitin B (UBB), β₂ microglobulin (β₂M), high altitude, Zanskar ponies, reference gene

598 Tumour-Associated Tissue Eosinophilia as a Prognosticator in Oral Squamous Cell Carcinoma

Authors: Karen Boaz, C. R. Charan

Abstract:

Background: The infiltration of tumour stroma by eosinophils, Tumour-Associated Tissue Eosinophilia (TATE), is known to modulate the progression of Oral Squamous Cell Carcinoma (OSCC). Eosinophils have direct tumoricidal activity through the release of cytotoxic proteins; indirectly, they enhance permeability into tumour cells, enabling the penetration of tumoricidal cytokines. Eosinophils may also promote tumour angiogenesis by producing several angiogenic factors. Identification of eosinophils in the inflammatory stroma has proven to be an important prognosticator in cancers of the mouth, oesophagus, larynx, pharynx, breast, lung and intestine. Therefore, this study aimed to correlate TATE with clinical and histopathological variables and with blood eosinophil counts, in order to assess the role of TATE as a prognosticator in OSCC. Methods: Seventy-two biopsy-proven cases of OSCC formed the study cohort. Blood eosinophil counts and TNM stage were obtained from the medical records. Tissue sections (5 µm thick) were stained with Haematoxylin and Eosin. The eosinophils were quantified at the invasive tumour front (ITF) in 10 HPF (40x magnification) with an ocular grid. Bryne’s grading of the ITF was also performed. A subset of thirty cases was also assessed for the association of TATE with recurrence and with involvement of lymph nodes and surgical margins. Results: 1) No statistically significant correlation was found between TATE and TNM stage, blood eosinophil counts, or most parameters of Bryne’s grading system. 2) An intense degree of TATE was significantly associated with the absence of distant metastasis, an increased lympho-plasmacytic response, and increased survival (disease-free and overall) of OSCC patients. 3) In the subset of 30 cases, tissue eosinophil counts were higher in cases with lymph node involvement, decreased survival, no margin involvement, and no recurrence.
Conclusion: While the role of eosinophils in mediating immune responses seems ambiguous as eosinophils support cell-mediated tumour immunity in early stages while inhibiting the same in advanced stages, TATE may be used as a surrogate marker for determination of prognosis in oral squamous cell carcinoma.

Keywords: tumour-associated tissue eosinophilia, oral squamous cell carcinoma, prognosticator, tumoral immunity

597 Corrosion Protective Coatings in Machines Design

Authors: Cristina Diaz, Lucia Perez, Simone Visigalli, Giuseppe Di Florio, Gonzalo Fuentes, Roberto Canziani, Paolo Gronchi

Abstract:

During the last 50 years, the selection of materials has been one of the main decisions in machine design for different industrial applications, due to the numerous physical, chemical, mechanical and technological factors to be considered. Corrosion effects are related to all of these factors and impact the life cycle, machine incidents and lifetime costs of the machine. Corrosion is the deterioration or destruction of metals due to reaction with the environment, generally a wet one. In the food, dewatering, concrete and paper industries, among others, corrosion is an unsolved problem and may alter some characteristics of the final product. Nowadays, depending on the selected metal, its surface and its working environment, corrosion prevention might mean a change of metal, the use of a coating, cathodic protection, the use of corrosion inhibitors, etc. In the vast majority of situations, the solution is the use of a corrosion-resistant material or, failing that, a corrosion protection coating. Stainless steels are widely used in machine design because of their strength, ease of cleaning, corrosion resistance and appearance; AISI 304 and AISI 316 are typically used. However, their benefits do not fit every application, and some coatings against corrosion are required, such as paints, galvanizing, chrome plating, SiO₂, TiO₂ or ZrO₂ coatings, etc. In this work, coatings based on a bilayer of Titanium-Tantalum, Titanium-Niobium, Titanium-Hafnium or Titanium-Zirconium have been developed in a magnetron sputtering configuration by PVD (Physical Vapor Deposition) technology, in an attempt to reduce corrosion effects on AISI 304 and AISI 316 and compare them with Titanium alloy substrates. Ti alloys display exceptional corrosion resistance to chlorides, sour and oxidising acidic media, and seawater. In this study, a Ti alloy (99%) has been included for comparison with coated AISI 304 and AISI 316 stainless steel.
Corrosion tests were conducted with a Gamry instrument under the ASTM G5-94 standard, using different electrolytes such as tomato salsa, wine, olive oil, wet compost, a mix of sand and concrete with water, and NaCl, to test corrosion in different industrial environments. In general, in all tested environments, the results showed an improvement in the corrosion resistance of all coated AISI 304 and AISI 316 stainless steel substrates when compared to the uncoated stainless steel substrates. Comparing these results with corrosion studies on the uncoated Ti alloy substrate showed that, in some cases, the coated stainless steel substrates reached a current density similar to that of the uncoated Ti alloy. Moreover, the Titanium-Zirconium and Titanium-Tantalum coatings showed, for all substrates in the study including the coated Ti alloy substrates, a reduction in current density of more than two orders of magnitude. In conclusion, Ti-Ta, Ti-Zr, Ti-Nb and Ti-Hf coatings have been developed to improve the corrosion resistance of AISI 304 and AISI 316 materials. After corrosion tests in several industrial environments, the substrates showed improved corrosion resistance. Similar processes have been carried out on Ti alloy (99%) substrates. Coated AISI 304 and AISI 316 stainless steel might reach surface corrosion protection similar to that of the uncoated Ti alloy (99%). Moreover, the coated Ti alloy (99%) might increase its corrosion resistance with these coatings.

Keywords: coatings, corrosion, PVD, stainless steel

596 Optimizing Residential Housing Renovation Strategies at Territorial Scale: A Data Driven Approach and Insights from the French Context

Authors: Rit M., Girard R., Villot J., Thorel M.

Abstract:

In a scenario of extensive residential housing renovation, stakeholders need models that support decision-making through a deep understanding of the existing building stock and accurate energy demand simulations. To address this need, we have modified an optimization model using open data so that it enables the study of renovation strategies at both territorial and national scales. This approach provides (1) a strategy definition that simplifies the decision trees arising from theoretical combinations, (2) input to decision-makers on real-world renovation constraints, (3) more reliable identification of energy-saving measures (changes in technology or behaviour), and (4) identification of discrepancies between currently planned and actually achieved strategies. The main contribution of the studies described in this document is the geographic scale: all residential buildings in the areas of interest were modeled and simulated using national data (geometries and attributes). These buildings were then renovated, when necessary, in accordance with the environmental objectives, taking into account the constraints applicable to each territory (number of renovations per year) or at the national level (renovation of thermal deficiencies, i.e., Energy Performance Certificates F and G). This differs from traditional approaches that focus only on a few buildings or archetypes. The model can also be used to analyze the evolution of a building stock as a whole, as it can take into account the construction of new buildings as well as their demolition or sale. Using specific case studies of French territories, this paper highlights a significant discrepancy between the strategies currently advocated by decision-makers and those proposed by our optimization model. This discrepancy is particularly evident in critical metrics such as the relationship between the number of renovations per year and achievable climate targets, or between the financial support currently available to households and the remaining costs.
In addition, users are free to seek optimizations for their building stock across a range of different metrics (e.g., financial, energy, environmental, or life cycle analysis). These results are a clear call to re-evaluate existing renovation strategies and take a more nuanced and customized approach. As the climate crisis moves inexorably forward, harnessing the potential of advanced technologies and data-driven methodologies is imperative.
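The abstract does not publish its MILP formulation; as an illustration of just the per-territory yearly renovation cap, a much-simplified greedy stand-in (all names and the savings-only objective are hypothetical) could look like:

```python
def plan_renovations(buildings, per_year_cap, n_years):
    """Greedy sketch: renovate the buildings with the largest annual energy
    savings first, subject to a fixed number of renovations per year.

    The real model is a MILP with many more constraints (costs, EPC
    classes, climate targets); this only illustrates the yearly cap.

    buildings: list of (building_id, annual_energy_saving) tuples.
    Returns one list of building ids per year.
    """
    ranked = sorted(buildings, key=lambda b: b[1], reverse=True)
    return [
        [bid for bid, _ in ranked[year * per_year_cap:(year + 1) * per_year_cap]]
        for year in range(n_years)
    ]
```

With three buildings and a cap of one renovation per year over two years, only the two largest savers are scheduled, which mirrors how a binding yearly cap can put climate targets out of reach.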

Keywords: residential housing renovation, MILP, energy demand simulations, data-driven methodology

595 Design and Development of Graphene Oxide Modified by Chitosan Nanosheets Showing pH-Sensitive Surface as a Smart Drug Delivery System for Control Release of Doxorubicin

Authors: Parisa Shirzadeh

Abstract:

Traditional drug delivery systems, in which drugs are taken by patients in multiple doses at specified intervals, do not meet today's drug delivery needs. We now deal with a huge number of recombinant peptide and protein drugs and analogues of the body's hormones, most of which are produced with genetic engineering techniques, and most of these drugs are used to treat critical diseases such as cancer. Because of the limitations of the traditional method, researchers have sought ways to overcome its problems. These efforts led to controlled drug release systems, which have many advantages: with controlled release, the drug concentration in the body is kept at a defined level and maintained over time. Graphene is biodegradable and non-toxic; compared to carbon nanotubes, its price is lower, making it cost-effective for industrialization. Moreover, the highly reactive and extensive surfaces of graphene sheets make graphene easier to modify than carbon nanotubes. Graphene oxide is often synthesized using concentrated oxidizers such as sulfuric acid, nitric acid, and potassium permanganate, based on the Hummers method. Compared with pristine graphene, the resulting graphene oxide is heavier and bears carboxyl, hydroxyl, and epoxy groups. Graphene oxide is therefore very hydrophilic: it easily dissolves in water and forms a stable solution. Because the hydroxyl, carboxyl, and epoxy groups created on the surface are highly reactive, they can connect with other functional groups such as amines, esters, and polymers, bringing new features to the surface of graphene.
In fact, the creation of hydroxyl, carboxyl, and epoxy groups, that is, graphene oxidation, is the first step in creating other functional groups on the surface of graphene. Chitosan is a natural polymer that does not cause toxicity in the body. Due to its chemical structure, with OH and NH groups, it is suitable for binding to graphene oxide and increasing its solubility in aqueous solutions. In this work, graphene oxide (GO) was covalently modified by chitosan (CS) and developed for the controlled release of doxorubicin (DOX). GO was produced by the Hummers method under acidic conditions and then chlorinated with oxalyl chloride to increase its reactivity towards amines. After that, in the presence of chitosan, the amidation reaction was performed to form amide linkages, and doxorubicin was attached to the carrier surface by π-π interaction in phosphate buffer. GO, GO-CS, and GO-CS-DOX were characterized by FT-IR, Raman spectroscopy, TGA, and SEM. Loading and release capacities were determined by UV-Visible spectroscopy. The loading results showed a high DOX absorption capacity (99%), and the release of DOX from the GO-CS nanosheets at pH 5.3 and 7.4 was pH-dependent, with a faster release rate under acidic conditions.

Keywords: graphene oxide, chitosan, nanosheet, controlled drug release, doxorubicin

594 On the Semantics and Pragmatics of 'Be Able To': Modality and Actualisation

Authors: Benoît Leclercq, Ilse Depraetere

Abstract:

The goal of this presentation is to shed new light on the semantics and pragmatics of be able to. It presents the results of a corpus analysis based on data from the BNC (British National Corpus) and discusses these results in light of a specific stance on the semantics-pragmatics interface that takes recent developments into account. Be able to is often discussed in relation to can and could, all of which can be used to express ability. Such an onomasiological approach often results in the identification of usage constraints for each expression. In the case of be able to, the formal properties of the modal expression (unlike can and could, be able to has non-finite forms) are in the foreground, and the modal expression is described as the verb that conveys future ability. Be able to is also argued to express actualised ability in the past (I was able to/could open the door). This presentation aims to provide a more accurate semantic-pragmatic profile of be able to, based on extensive data analysis and embedded in a very explicit view of the semantics-pragmatics interface. A random sample of 3000 examples (1000 for each modal verb) extracted from the BNC was analysed to address the following issues. First, the challenge is to identify the exact semantic range of be able to. The results show that, contrary to the general assumption, be able to does not only express ability: it shares most of the root meanings usually associated with the possibility modals can and could. The data reveal that what is called opportunity is, in fact, the most frequent meaning of be able to. Second, attention will be given to the notion of actualisation. It is commonly argued that be able to is the preferred form when the residue actualises: (1) The only reason he was able to do that was because of the restriction (BNC, spoken) (2) It is only through my imaginative shuffling of the aces that we are able to stay ahead of the pack.
(BNC, written) Although this notion has been studied in detail within formal semantic approaches, empirical data is crucially lacking, and it is unclear whether actualisation constitutes a conventional (and distinguishing) property of be able to. The empirical analysis provides solid evidence that actualisation is indeed a conventional feature of the modal. Furthermore, the dataset reveals that be able to expresses actualised 'opportunities' rather than actualised 'abilities'. In the final part of this paper, attention will be given to the theoretical implications of the empirical findings, and in particular to the following paradox: how can the same expression encode both modal meaning (non-factual) and actualisation (factual)? It will be argued that this largely depends on one's conception of the semantics-pragmatics interface, and that it need not be an issue when actualisation (unlike modality) is analysed as a generalised conversational implicature and is thus considered part of the conventional pragmatic layer of be able to.

Keywords: actualisation, modality, pragmatics, semantics

593 WhatsApp as a Public Health Management Tool in India

Authors: Drishti Sharma, Mona Duggal

Abstract:

Background: WhatsApp can serve as a cost-effective, scalable, convenient, and popular medium for communication related to public health management in the developing world, where the existing system of communication is top-down and slow. The product supports sending and receiving a variety of media: text, photos, videos, documents, and location, as well as voice/video calls. With the growing number of smartphone users and improving access to and penetration of the internet, information technology has immense scope to resolve the hurdles faced by the traditional public health system. Poor infrastructure, gaps in digital literacy, faulty documentation, strict organizational hierarchy and the slow movement of information across desks and offices all make WhatsApp an efficient prospect to complement the existing system for communication, feedback and leadership in the public health system in India. Objective: This study investigates the benefits, challenges and limitations associated with WhatsApp usage as a public health management tool. Methods: The study was conducted within the Chandigarh Union Territory. We used a qualitative approach and conducted individual semi-structured interviews and group interviews (n = 10). Participants included medical officers (n = 20), program managers (n = 4), academicians (n = 2) and administrators (n = 2). Thematic and content qualitative analyses were conducted. The message log of the WhatsApp group of one of the health programs was assessed. Results: Medical officers said that WhatsApp helped them remain in touch with the program officer. They could easily give feedback and highlight the challenges that needed immediate intervention from the program managers, so they felt supported. The application also helped them share pictures of their activities (meetings and field activities) with the group, which they thought inspired others and gave them immense satisfaction.
It also helped build stronger relationships and better coordination among themselves, which is important in team events. For program managers, it had become a portal for coordinating large-scale campaigns. Its reach, and the fact that feedback is real-time, make WhatsApp ideal for district-level events. Though the easy informal connectivity made them answerable to their staff, it also provided them with flexibility in operations. It turned out to be an important portal for sharing outcome- and goal-related feedback (both positive and negative) with the team. To be sure, using WhatsApp for a public health program presents considerable challenges, including technological barriers, organizational challenges, gender issues, confidentiality concerns and unplanned after-effects. Nevertheless, its advantages in a low-cost setting make it an efficient alternative. Conclusion: WhatsApp has become an integral part of our lives. Use of this app for public health program management within closed groups looks promising and useful. At the same time, addressing the challenges involved would make its usage safer.

Keywords: communication, mobile technology, public health management, WhatsApp

592 All-In-One Universal Cartridge Based Truly Modular Electrolyte Analyzer

Authors: S. Dalvi, N. Sane, V. Patil, D. Bansode, A. Tharakan, V. Mathur

Abstract:

Measurement of routine clinical electrolyte tests is common in labs worldwide for the screening of illness or disease. Existing analyzers for the measurement of electrolyte parameters have separate sensors, reagents, sampler, pump tubing, valves and other tubing that are expensive, require heavy maintenance, and have a short shelf life. Moreover, the cost of maintaining such lab instrumentation is high, which limits the use of these devices to highly specialized personnel and sophisticated labs. In order to provide healthcare diagnostics to all at affordable cost, there is a need for an all-in-one universal modular cartridge that contains the sensors, reagents, sampler, valves, pump tubing and other tubing in one single integrated module-in-module cartridge that is affordable, reliable, easy to use, requires a very low sample volume, and is truly modular and maintenance-free. DiaSys India has developed the world's first (patent pending) versatile all-in-one universal module-in-module cartridge based electrolyte analyzer (QDx InstaLyte) that can perform sodium, potassium, chloride, calcium, pH and lithium tests. QDx InstaLyte incorporates a high-performance, inexpensive all-in-one universal cartridge for the rapid quantitative measurement of electrolytes in body fluids. Our methodology utilizes advanced, improved long-life ISE sensors to provide a sensitive and accurate result in 120 s with just 100 µl of sample volume. The all-in-one universal cartridge has a very low reagent consumption, is capable of a maximum of 1000 tests with a use-life of 3-4 months, and has a long shelf life of 12-18 months at 4-25°C, making it very cost-effective. Methods: QDx InstaLyte analyzers with all-in-one universal modular cartridges were independently evaluated with three R&D lots for method performance (linearity, precision, method comparison, cartridge stability) in measuring sodium, potassium and chloride.
Method comparison was done against the Medica EasyLyte Plus Na/K/Cl Electrolyte Analyzer, a mid-size lab-based clinical chemistry analyzer, with N = 100 samples run over 10 days. The within-run precision study was done using modified CLSI guidelines with N = 20 samples, and the day-to-day precision study was done for 7 consecutive days using Trulab N & P quality control samples. Accelerated stability testing was done at 45°C for 4 weeks with production lots. Results: Data analysis indicates that the CV for within-run precision is ≤1% for Na, ≤2% for K, and ≤2% for Cl, with R² ≥ 0.95 for method comparison. Further, the all-in-one universal cartridge is stable for up to 12-18 months at 4-25°C storage temperature based on preliminary extrapolated data. Conclusion: The developed technology platform of the all-in-one universal module-in-module cartridge based QDx InstaLyte is reliable, meets all lab performance specifications, and is truly modular and maintenance-free. Hence, it can be easily adapted for low-cost, sensitive, and rapid measurement of electrolytes in low-resource settings such as urban, semi-urban, and rural areas in developing countries and can be used as a point-of-care testing system for worldwide applications.
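The acceptance statistics quoted above (within-run CV and method-comparison R²) follow from standard formulas; a minimal sketch, using synthetic sodium readings rather than the study's data:

```python
import statistics

def within_run_cv(values):
    """Coefficient of variation (%) for repeated measurements of one sample."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

def r_squared(x, y):
    """Coefficient of determination for a simple linear method comparison."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy ** 2 / (sxx * syy)

# Synthetic sodium readings (mmol/L) from 20 repeat runs of one control sample
na_repeats = [140.1, 139.8, 140.3, 140.0, 139.9] * 4
print(within_run_cv(na_repeats))  # should fall below the 1% acceptance limit for Na
```

An analyzer would pass the stated criteria if `within_run_cv` stays below the per-analyte limit and `r_squared` of paired candidate-vs-reference results is at least 0.95.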

Keywords: all-in-one modular cartridge, electrolytes, maintenance free, QDx InstaLyte

Procedia PDF Downloads 31
591 Sustainability in Space: Implementation of Circular Economy and Material Efficiency Strategies in Space Missions

Authors: Hamda M. Al-Ali

Abstract:

The ultimate aim of space exploration has centred on the possibility of life on other planets in the solar system. This aim is driven by the detrimental effects that climate change could have on human survival on Earth in the future, which pushes humans to search for feasible solutions to increase environmental and economic sustainability on Earth and to evaluate the possibility of human survival on other planets such as Mars. To do that, frequent space missions are required to meet these ambitious goals. This means that reliable and affordable access to space is required, which could be largely achieved through the use of reusable spacecraft. Therefore, materials and resources must be used wisely to meet the increasing demand. Space missions are currently extremely expensive to operate. However, reusing materials, and hence spacecraft, can reduce overall mission costs as well as the negative impact on both the space and Earth environments, because reuse means less waste generated per mission and therefore fewer landfill sites. Reusing materials reduces resource consumption, material production, and the need to process new and replacement spacecraft and launch vehicle parts. Consequently, this will ease and facilitate human access to outer space by reducing the demand for scarce resources, which will boost material efficiency in the space industry. Material efficiency expresses the extent to which resources are consumed in the production cycle and how far the waste produced by the industrial process is minimized. The strategies proposed in this paper to boost material efficiency in the space sector are the introduction of key performance indicators able to measure material efficiency, and the introduction of clearly defined policies and legislation that can be easily implemented within the general practices of the space industry.
Another strategy to improve material efficiency is to amplify energy and resource efficiency through the reuse of materials. The circularity of various spacecraft materials such as Kevlar, steel, and aluminum alloys could be maximized by reusing them directly or after galvanizing them with another layer of material to act as a protective coat. This research paper aims to investigate and discuss how to improve material efficiency in space missions in light of circular economy concepts so that space and Earth become more economically and environmentally sustainable. The circular economy is a transition from a make-use-waste linear model to a closed-loop socio-economic model that is regenerative and restorative in nature. The implementation of a circular economy will reduce waste and pollution by maximizing material efficiency, ensuring that businesses can thrive and be sustained. Further research into the extent to which reusable launch vehicles reduce space mission costs has also been discussed, along with the environmental and economic implications for the space sector and the environment. This has been examined through an in-depth literature review of published reports, books, scientific articles, and journals. Keywords such as material efficiency, circular economy, reusable launch vehicles, and spacecraft materials were used to search for relevant literature.

Keywords: circular economy, key performance indicator, material efficiency, reusable launch vehicles, spacecraft materials

Procedia PDF Downloads 125
590 Cognitive Translation and Conceptual Wine Tasting Metaphors: A Corpus-Based Research

Authors: Christine Demaecker

Abstract:

Many researchers have underlined the importance of metaphors in specialised language. Their use in specific domains helps us understand the conceptualisations used to communicate new ideas or difficult topics. Within the wide area of specialised discourse, wine tasting is a very specific example because it is almost exclusively metaphoric. Wine tasting metaphors express various conceptualisations. They are not linguistic but conceptual, as defined by Lakoff & Johnson: they correspond to the linguistic expression of a mental projection from a well-known or more concrete source domain onto the target domain, which is the taste of wine. But unlike most specialised terminologies, the vocabulary is never clearly defined. When metaphorical terms are listed in dictionaries, their definitions remain vague, unclear, and circular, and they cannot be replaced by literal linguistic expressions. This makes it impossible to transfer them into another language with traditional linguistic translation methods. This qualitative research investigates whether wine tasting metaphors could instead be translated with the cognitive translation process described by Nili Mandelblit (1995). The research is based on a corpus compiled from two high-profile wine guides: Parker's Wine Buyer's Guide and its translation into French, and the Guide Hachette des Vins and its translation into English. In this small corpus, totalling 68,826 words, 170 metaphoric expressions were identified in the original English text and 180 in the original French text. They were selected with the MIPVU metaphor identification procedure developed at the Vrije Universiteit Amsterdam. The selection demonstrates that both languages use the same set of conceptualisations, which are often combined in wine tasting notes, creating conceptual integrations or blends. The comparison of expressions in the source and target texts also demonstrates the use of the cognitive translation approach.
In accordance with the principle of relevance, the translation always uses target-language conceptualisations, but compared to the original, the highlighting of the projection is often different. Also, when original metaphors are complex, combining several conceptualisations, at least one element of the original metaphor underlies the target expression. This approach integrates perfectly into Lederer's interpretative model of translation (2006). In this triangular model, the transfer of conceptualisation can be included at the level of 'deverbalisation/reverbalisation', the crucial stage of the model, where the extraction of meaning combines with encyclopedic background knowledge to generate the target text.

Keywords: cognitive translation, conceptual integration, conceptual metaphor, interpretative model of translation, wine tasting metaphor

Procedia PDF Downloads 131
589 Virtual Screening and in Silico Toxicity Property Prediction of Compounds against Mycobacterium tuberculosis Lipoate Protein Ligase B (LipB)

Authors: Junie B. Billones, Maria Constancia O. Carrillo, Voltaire G. Organo, Stephani Joy Y. Macalino, Inno A. Emnacen, Jamie Bernadette A. Sy

Abstract:

The drug discovery and development process is generally known to be lengthy and labor-intensive. In order to deliver prompt and effective responses to cure certain diseases, there is therefore an urgent need to reduce the time and resources needed to design, develop, and optimize potential drugs. Computer-aided drug design (CADD) alleviates this issue by applying computational power to streamline the whole drug discovery process, from target identification to lead optimization. This drug design approach is particularly applicable to diseases that cause major public health concerns, such as tuberculosis. Hitherto, there has been no concrete cure for this disease, especially with the continuing emergence of drug-resistant strains. In this study, CADD is employed for tuberculosis by first identifying a key enzyme in the mycobacterium's metabolic pathway that would make a good drug target. One such potential target is lipoate protein ligase B (LipB), a key enzyme in the M. tuberculosis metabolic pathway involved in the biosynthesis of the lipoic acid cofactor. Its expression is considerably up-regulated in patients with multi-drug resistant tuberculosis (MDR-TB), and it has no known back-up mechanism that can take over its function when inhibited, making it an extremely attractive target. Using cutting-edge computational methods, compounds from the AnalytiCon Discovery Natural Derivatives database were screened and docked against the LipB enzyme in order to rank them by binding affinity. Compounds with better binding affinities than LipB's known inhibitor, decanoic acid, were subjected to in silico toxicity evaluation using the ADMET and TOPKAT protocols. Out of the 31,692 compounds in the database, 112 showed better binding energies than decanoic acid. Furthermore, 12 of the 112 compounds showed highly promising ADMET and TOPKAT properties.
Future studies involving in vitro or in vivo bioassays may be done to further confirm the therapeutic efficacy of these 12 compounds, which may eventually lead to a novel class of anti-tuberculosis drugs.
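The screening step described above, keeping only compounds whose docking energy beats the reference inhibitor, reduces to a filter-and-sort; a minimal sketch in which all compound names and scores are hypothetical:

```python
# Hypothetical docking results: compound name -> binding energy (kcal/mol);
# more negative means stronger predicted binding.
docking_scores = {
    "compound_A": -7.9,
    "compound_B": -5.1,
    "compound_C": -6.8,
    "decanoic_acid": -6.2,  # known LipB inhibitor used as the reference
}

reference = docking_scores["decanoic_acid"]
# Keep only compounds binding more strongly than the reference, best first
hits = sorted(
    (name for name, e in docking_scores.items()
     if name != "decanoic_acid" and e < reference),
    key=docking_scores.get,
)
print(hits)  # ['compound_A', 'compound_C']
```

In the study's workflow, the resulting hit list would then be passed to the ADMET and TOPKAT toxicity filters.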

Keywords: pharmacophore, molecular docking, lipoate protein ligase B (LipB), ADMET, TOPKAT

Procedia PDF Downloads 424
588 Six Years Antimicrobial Resistance Trends among Bacterial Isolates in Amhara National Regional State, Ethiopia

Authors: Asrat Agalu Abejew

Abstract:

Background: Antimicrobial resistance (AMR) is a silent tsunami and one of the top global threats to health care and public health. It is a common agenda globally and in Ethiopia. Emerging AMR will be a double burden for Ethiopia, which is already facing a series of problems from infectious disease morbidity and mortality. In Ethiopia, although there are attempts to document AMR in healthcare institutions, a comprehensive and all-inclusive analysis is still lacking. Thus, this study aimed to determine trends in AMR from 2016 to 2021. Methods: A retrospective analysis of secondary data recorded at the Amhara Public Health Institute (APHI) from 2016 to 2021 G.C was conducted. Blood, urine, stool, swabs, discharge, body effusions, and other microbiological specimens were collected from each study participant, and bacterial identification and resistance testing were done using standard microbiologic procedures. Data were extracted from Excel in August 2022, trends in AMR were analyzed, and the results were described. In addition, the chi-square (χ²) test and binary logistic regression were used, and a P value < 0.05 was used to determine significant associations. Results: During the 6-year period, there were 25,143 culture and susceptibility tests. Overall, 265 (46.2%) bacteria were resistant to 2-4 antibiotics, 253 (44.2%) to 5-7 antibiotics, and 56 (9.7%) to ≥8 antibiotics. Of the gram-negative bacteria, 166 (43.9%), 155 (41.5%), and 55 (14.6%) were resistant to 2-4, 5-7, and ≥8 antibiotics, respectively, whereas 99 (50.8%), 96 (49.2%), and 1 (0.5%) of gram-positive bacteria were resistant to 2-4, 5-7, and ≥8 antibiotics, respectively. K. pneumoniae 3783 (15.67%) and E. coli 3199 (13.25%) were the most commonly isolated bacteria, and the overall prevalence of AMR was 2605 (59.9%), with K. pneumoniae 743 (80.24%), E. cloacae 196 (74.81%), and A. baumannii 213 (66.56%) being the bacteria most commonly resistant to the antibiotics tested.
Except for a slight decline in 2020 (6469 (25.4%)), the overall trend of AMR is rising from year to year, with peaks in 2019 (8480 (33.7%)) and 2021 (7508 (29.9%)). If left un-intervened, the trend will continue: a linear fit over the study period explains 78% of the variation in AMR by year (R² = 0.7799). Common bacteria were almost uniformly resistant to ampicillin, Augmentin, ciprofloxacin, cotrimoxazole, tetracycline, and tobramycin in testing. Conclusion: AMR increased linearly over the last 6 years. If left as is, without appropriate intervention, after 15 years (2030 E.C) AMR will increase by 338.7%. The growing number of multi-drug resistant bacteria is an alarm to wake policymakers and other stakeholders to intervene before it is too late. This calls for a periodic, integrated, and continuous surveillance system to determine the prevalence of AMR to commonly used antibiotics.
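The trend statistic reported above (78% of variation explained by year) is the R² of an ordinary least-squares fit of resistant counts against year; a minimal sketch with illustrative counts, where only the 2019-2021 figures come from the abstract and the rest are invented for demonstration:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope, intercept, and R-squared."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1 - ss_res / ss_tot

# Illustrative yearly resistant-isolate counts (not the study's full raw data)
years = [2016, 2017, 2018, 2019, 2020, 2021]
counts = [2100, 3400, 5200, 8480, 6469, 7508]
slope, intercept, r2 = linear_fit(years, counts)
print(round(slope), round(r2, 2))       # positive slope: rising trend
print(round(slope * 2030 + intercept))  # naive linear extrapolation to a future year
```

As the 2030 extrapolation in the abstract illustrates, such a projection assumes the linear trend continues unchanged, which is a strong assumption.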

Keywords: AMR, trend, pattern, MDR

Procedia PDF Downloads 76
587 The Role of the Renal Specialist Podiatrist

Authors: Clara Luwe, Oliver Harness, Helena Meally, Kim Martin, Alexandra Harrington

Abstract:

Background: The role of 'Renal Specialist Podiatrist' originated in 2022 in response to prevailing evidence that patients with diabetes and end-stage renal disease (ESRD) on haemodialysis (HD) with active ulcerations were at higher risk of rapid deterioration, foot-related hospital admissions, and lower limb amputations. The role started in April 2022 with the aim of screening all patients on haemodialysis and instigating preventative measures to reduce serious foot-related complications. Methods: A comprehensive neurovascular foot assessment was completed for all patients on HD to establish baseline vascular status and identify those with peripheral arterial disease (PAD). Each individual's foot risk was stratified, and tailored advice and education were issued. All patients with diabetes on HD were identified as high-risk for diabetic foot complications. Major findings: Screening revealed that over half of the caseload had diabetes, and more than half had a clinical presentation of PAD. All those presenting with ulcerations had a diagnosis of diabetes. The majority of these ulcers predated the renal specialist post and were classified as severe (SINBAD score > 3). Since April 2022, complications have been identified more quickly, reducing severity (SINBAD < 3) and improving healing times, in line with the national average. During the eight months the role has been in place, we have seen a reduction in minor amputations and no major amputations. Conclusion: By screening all patients on haemodialysis and focusing on education, early recognition of complications, appropriate treatment, and timely onward referral, we can reduce the risk of diabetic foot ulcerations and lower limb amputations.
Having regular podiatry input to stratify and facilitate care for high-risk, active-wound patients across different services has helped to keep these patients stable, prevent amputations, and reduce foot-related hospital admissions and mortality from foot-related disease. By improving accessibility to a specialist podiatrist, patients felt able to raise concerns sooner. This has helped to implement treatment at the earliest possible opportunity, enabling the identification and healing of ulcers at an earlier and less complex stage (SINBAD < 3), thus preventing potential limb-threatening complications.

Keywords: renal, podiatry, haemodialysis, prevention, early detection

Procedia PDF Downloads 85
586 Identification of 332G>A Polymorphism in Exon 3 of the Leptin Gene and Partially Effects on Body Size and Tail Dimension in Sanjabi Sheep

Authors: Roya Bakhtiar, Alireza Abdolmohammadi, Hadi Hajarian, Zahra Nikousefat, Davood Kalantar-Neyestanaki

Abstract:

The objective of the present study was to determine the polymorphism in the leptin gene (332G>A) and its association with biometric traits in Sanjabi sheep. For this purpose, blood samples from 96 rams were taken, and tail length, tail width, tail circumference, body length, body width, and height were simultaneously recorded. PCR was performed using specific primers to amplify a 463 bp fragment including exon 3 of the leptin gene, and PCR products were digested with the Cail restriction enzyme. The 332G>A substitution (at the 332nd nucleotide of exon 3 of the leptin gene), which causes an amino acid change from Arg to Gln, was detected by the Cail (CAGNNNCTG) endonuclease, as the endonuclease cannot cut this region if a G nucleotide is located at this position. Three genotypes, GG (463 bp), GA (463, 360, and 103 bp), and AA (360 and 103 bp), were identified after digestion. The estimated frequencies of the three genotypes GG, GA, and AA at the 332G>A locus were 0.68, 0.29, and 0.03, and the allele frequencies were 0.18 and 0.82 for A and G, respectively. In the current study, the chi-square test indicated that the 332G>A position did not deviate from Hardy–Weinberg (HW) equilibrium. The most likely reason for HW equilibrium is that the samples used in this study belong to three large local herds with a traditional breeding system featuring random mating and no selection. The Shannon index was calculated, representing average genetic variation in Sanjabi rams. Also, heterozygosity estimated by Nei's index indicated that the genetic diversity of this mutation in the leptin gene is moderate. The leptin 332G>A polymorphism had a significant effect on body length (P<0.05), and individuals with the GA genotype had significantly greater body length than other individuals. Although animals with the GA genotype also had greater body width, this difference was not statistically significant (P>0.05). This non-synonymous SNP results in an amino acid change at codon position 111 (R/Q).
As leptin activity is localized, at least in part, in domains between amino acid residues 106-1406, it is speculated that the detected SNP at position 332 may affect the activity of leptin and may lead to different biological functions. Based on our results, given the significant effect of the leptin gene polymorphism on body size traits, this gene may be used as a candidate gene for improving these traits.
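The reported allele frequencies and the Hardy–Weinberg chi-square test can be reproduced from the genotype counts; with 96 rams at frequencies 0.68/0.29/0.03, the counts are approximately 65 GG, 28 GA, and 3 AA. A minimal sketch under that assumption:

```python
def hardy_weinberg_chi2(n_gg, n_ga, n_aa):
    """Chi-square statistic testing Hardy-Weinberg equilibrium at a biallelic locus."""
    n = n_gg + n_ga + n_aa
    p = (2 * n_gg + n_ga) / (2 * n)  # frequency of the G allele
    q = 1 - p                        # frequency of the A allele
    # Expected genotype counts under HW: p^2, 2pq, q^2
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_gg, n_ga, n_aa]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return p, q, chi2

# Counts reconstructed from the reported frequencies (96 rams)
p, q, chi2 = hardy_weinberg_chi2(65, 28, 3)
print(round(p, 2), round(q, 2))  # ~0.82 and ~0.18, matching the reported allele frequencies
```

The resulting chi-square statistic is far below the 3.84 critical value (1 df, α = 0.05), consistent with the study's conclusion that the locus does not deviate from HW equilibrium.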

Keywords: body size, Leptin gene, PCR-RFLP, Sanjabi sheep

Procedia PDF Downloads 341
585 In vitro Callus Production from Lantana Camara: A Step towards Biotransformation Studies

Authors: Maged El-Sayed Mohamed

Abstract:

Plant tissue culture practices are presented nowadays as the most promising substitute for the whole plant in terms of secondary metabolite production. They offer the advantages of high production and tunability, and they have less effect on plant ecosystems. Lantana camara is a weed common all over the world as an ornamental plant. Weeds can adapt to any type of soil and climate due to their rich cellular machinery for secondary metabolite production. This characteristic is found in Lantana camara, a plant with a very rich diversity of secondary metabolites and no dominant class of compounds. Aim: This trait encouraged the author to develop tissue culture experiments for Lantana camara as a platform for the production and manipulation of secondary metabolites through biotransformation. Methodology: The plant was collected in its flowering stage in September 2014, and explants were prepared from the shoot tip, axillary bud, and leaf. Different types of culture media were tried, as well as four phytohormones and their combinations: NAA, 2,4-D, BAP, and kinetin. Explants were grown in the dark or in 12-hour dark/light cycles at 25°C. A metabolic profile of the produced callus was made and compared to the whole-plant profile. The metabolic profile was made using GC-MS for volatile constituents (extracted by n-hexane) and HPLC-MS and capillary electrophoresis-mass spectrometry (CE-MS) for non-volatile constituents (extracted by ethanol and water). Results: The best conditions for callus induction were achieved using MS medium supplied with 30 g sucrose and NAA/BAP (1:0.2 mg/L). Initiation of callus was favoured by incubation in the dark for 20 days. The callus produced under these conditions was yellow, changing to brownish after 30 days. The rate of callus growth was high, expressed in the callus diameter, which reached 1.15±0.2 cm in 30 days; however, callus induction was delayed by 15 days.
The metabolic profiles of both volatile and non-volatile constituents of the callus showed a simpler metabolite background than the whole plant, with two new (unresolved) peaks in the chromatogram of the callus' non-volatile constituents. Conclusion: Lantana camara callus production can itself be a source of new secondary metabolites and could be used for biotransformation studies due to its simple metabolic background, which allows easy identification of newly formed metabolites. Callus production combines a simple metabolic background with the rich cellular secondary-metabolite machinery of the plant, which could be elicited to produce valuable medicinally active products.

Keywords: capillary electrophoresis-mass spectrometry, gas chromatography, metabolic profile, plant tissue culture

Procedia PDF Downloads 386
584 Feasibility Study and Experiment of On-Site Nuclear Material Identification in Fukushima Daiichi Fuel Debris by Compact Neutron Source

Authors: Yudhitya Kusumawati, Yuki Mitsuya, Tomooki Shiba, Mitsuru Uesaka

Abstract:

After the Fukushima Daiichi nuclear power reactor incident, there is a large amount of unaccounted-for nuclear fuel debris in the reactor core area, which is subject to safeguards and criticality safety. Before precise analysis is performed, preliminary on-site screening and mapping of nuclear debris activity need to be carried out to provide reliable data for nuclear debris mass-extraction planning. Through a collaboration project with the Japan Atomic Energy Agency, an on-site nuclear debris screening system using dual-energy X-ray inspection and neutron energy resonance analysis has been established. Using a compact and mobile pulsed neutron source constructed from a 3.95 MeV X-band electron linac, coupled with tungsten as an electron-to-photon converter and beryllium as a photon-to-neutron converter, short-distance neutron time-of-flight measurement can be performed. Experimental results show this system can measure the neutron energy spectrum up to the 100 eV range with only a 2.5 m time-of-flight path, owing to the X-band accelerator's short pulse. With this, on-site neutron time-of-flight measurement can be used to identify the nuclear debris isotope contents through Neutron Resonance Transmission Analysis (NRTA). Some preliminary NRTA experiments have been done with a tungsten sample as dummy nuclear debris material, whose isotope tungsten-186 has an energy absorption value close to that of uranium-238 (15 eV). The results obtained show that this system can detect energy absorption in the resonance neutron region within 1-100 eV. It can also detect multiple elements in a material at once; an experiment using a combined sample of indium, tantalum, and silver shows it is feasible to identify debris containing mixed materials. This compact neutron time-of-flight measurement system is a great complement to the dual-energy X-ray computed tomography (CT) method, which can identify atomic number quantitatively but with 1 mm spatial resolution and large error bars.
The combination of these two measurement methods will enable on-site nuclear debris screening at the Fukushima Daiichi reactor core area, providing the data for nuclear debris activity mapping.
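The time-of-flight to energy conversion underlying NRTA is non-relativistic kinematics, E = ½mv² with v = L/t; a minimal sketch for the 2.5 m flight path quoted above:

```python
NEUTRON_MASS = 1.674927e-27  # kg
EV = 1.602177e-19            # J per eV

def energy_from_tof(flight_path_m, tof_s):
    """Non-relativistic neutron kinetic energy (eV) from a time-of-flight measurement."""
    v = flight_path_m / tof_s            # neutron speed from path length and arrival time
    return 0.5 * NEUTRON_MASS * v * v / EV

# A ~15 eV resonance neutron over the 2.5 m flight path arrives in roughly 47 microseconds
print(round(energy_from_tof(2.5, 46.7e-6), 1))  # ≈ 15 eV
```

This illustrates why a short flight path suffices for the eV-range resonances of interest: at 15 eV the arrival time is tens of microseconds, well resolved by a short accelerator pulse.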

Keywords: neutron source, neutron resonance, nuclear debris, time of flight

Procedia PDF Downloads 238
583 Sedimentary, Diagenesis and Evaluation of High Quality Reservoir of Coarse Clastic Rocks in Nearshore Deep Waters in the Dongying Sag; Bohai Bay Basin

Authors: Kouassi Louis Kra

Abstract:

The nearshore deep-water gravity flow deposits on the northern steep slope of the Dongying depression, Bohai Bay basin, have been acknowledged as important reservoirs in the rift lacustrine basin. These deep strata, termed coarse clastic sediments and deposited at the root of the slope, record complex depositional processes and a wide range of diagenetic events, which makes high-quality reservoir prediction complex. Based on an integrated study of seismic interpretation, sedimentary analysis, petrography, core samples, wireline logging data, 3D seismic, and lithological data, the reservoir formation mechanism was deciphered. Geoframe software was used to analyze the 3D seismic data, interpret the stratigraphy, and build a sequence stratigraphic framework. Thin-section identification and point counts were performed to assess the reservoir characteristics. Schlumberger's PetroMod 1D software was utilized for the simulation of burial history. CL and SEM analyses were performed to reveal diagenetic sequences, and backscattered electron (BSE) images were recorded to define the textural relationships between diagenetic phases. The results showed that the nearshore steep slope deposits mainly consist of conglomerate, gravelly sandstone, pebbly sandstone, and fine sandstone interbedded with mudstone. The reservoir is characterized by low porosity and ultra-low permeability. The diagenetic reactions include compaction; precipitation of calcite, dolomite, kaolinite, and quartz cement; and dissolution of feldspars and rock fragments. The main types of reservoir space are primary intergranular pores, residual intergranular pores, intergranular dissolved pores, and fractures. There are three obvious anomalous high-porosity zones in the reservoir. Overpressure and early hydrocarbon filling are the main reasons for abnormal secondary pore development.
Sedimentary facies control the formation of high-quality reservoirs, while oil and gas filling preserves secondary pores from late carbonate cementation.

Keywords: Bohai Bay, Dongying Sag, deep strata, formation mechanism, high-quality reservoir

Procedia PDF Downloads 135
582 Biophysical and Structural Characterization of Transcription Factor Rv0047c of Mycobacterium Tuberculosis H37Rv

Authors: Md. Samsuddin Ansari, Ashish Arora

Abstract:

Every year, 10 million people fall ill with tuberculosis, one of the oldest known diseases, caused by Mycobacterium tuberculosis. The success of M. tuberculosis as a pathogen lies in its ability to persist in host tissues. Cases of multidrug-resistant (MDR) mycobacteria increase every day, and this resistance is associated with efflux pumps controlled at the level of transcription. The transcription regulators of MDR transporters in bacteria belong to one of four regulatory protein families: AraC, MarR, MerR, and TetR. The phenolic acid decarboxylase repressor (PadR)-like family of transcription regulators is closely related to the MarR family. PadR was first identified as a transcription factor involved in the regulation of the phenolic acid stress response in various microorganisms, including Mycobacterium tuberculosis H37Rv. Recent research has shown that PadR family transcription factors are global, multifunctional transcription regulators. Rv0047c is a PadR subfamily-1 protein, and we are exploring its biophysical and structural characterization. The Rv0047 gene was amplified by PCR using primers containing EcoRI and HindIII restriction enzyme sites, cloned into the pET-NH6 vector, and overexpressed in E. coli DH5α and BL21(λDE3) cells, followed by purification on a Ni2+-NTA column and size-exclusion chromatography. Differential scanning calorimetry (DSC) was performed to determine thermal stability: the protein has a transition temperature (Tm) of 55.29°C and an enthalpy change (ΔH) of 6.92 kcal/mol. Circular dichroism was used to study the secondary structure and conformation, and fluorescence spectroscopy to study the tertiary structure of the protein. To understand the effect of pH on the structure, function, and stability of Rv0047c, we employed spectroscopic techniques such as circular dichroism, fluorescence, and absorbance measurements over a wide pH range (pH 2.0 to pH 12.0).
At low and high pH, the protein shows drastic changes in secondary and tertiary structure. EMSA studies showed specific binding of Rv0047c to its own 30-bp promoter region. To determine the effect of complex formation on the secondary structure of Rv0047c, we examined the CD spectra of the complex of Rv0047c with the promoter DNA of rv0047. The functional role of Rv0047c was characterized by over-expressing the Rv0047c gene under the control of the hsp60 promoter in Mycobacterium tuberculosis H37Rv. We have predicted the three-dimensional structure of Rv0047c using the Swiss Model and Modeller, with validity checked by a Ramachandran plot. We performed molecular docking of Rv0047c with dnaA through PatchDock, followed by refinement through FireDock. Through this, it is possible to identify the binding hot-spots of the receptor molecule with the ligand, the nature of the interface itself, and the conformational change undergone by the protein. We are using X-ray crystallography to unravel the structure of Rv0047c. Overall, the studies show that Rv0047c may have a transcription regulatory role and provide insight into its activity across the pH range of the subcellular environment. They also help to elucidate the protein-protein interactions involved, a novel target for killing dormant bacteria and a potential strategy for tuberculosis control.

Keywords: Mycobacterium tuberculosis, phenolic acid decarboxylase repressor, Rv0047c, circular dichroism, fluorescence spectroscopy, docking, protein-protein interaction

Procedia PDF Downloads 121
581 The Role of Group Interaction and Managers’ Risk-willingness for Business Model Innovation Decisions: A Thematic Analysis

Authors: Sarah Müller-Sägebrecht

Abstract:

Today’s volatile environment challenges executives to make the right strategic decisions to gain sustainable success. Entrepreneurship scholars postulate mainly positive effects of environmental changes on entrepreneurship behavior, such as developing new business opportunities, promoting ingenuity, and the satisfaction of resource voids. A strategic approach to overcoming threatening environmental changes and catching new business opportunities is business model innovation (BMI). Although this research stream has gained importance in the last decade, BMI research is still insufficient; in particular, BMI barriers, such as inefficient strategic decision-making processes, need to be identified. Strategic decisions strongly impact an organization's future and are therefore usually made in groups. Although groups draw on a more extensive information base than single individuals, group-interaction effects can influence the decision-making process in both favorable and unfavorable ways. Decisions are characterized by uncertainty and risk, whose intensity is perceived differently by each individual, and individual risk-willingness influences which option a person chooses. The special nature of strategic decisions, such as in BMI processes, is that they are not made individually but in groups due to their high organizational scope. These groups consist of different personalities whose individual risk-willingness can vary considerably. It is known from group decision theory that these individuals influence each other, which is observable in various group-interaction effects. The following research questions arise: i) How does group interaction shape BMI decision-making from the managers' perspective? ii) What are the potential interrelations among managers' risk-willingness, group biases, and BMI decision-making?
After 26 in-depth interviews with executives from the manufacturing industry, the applied Gioia methodology revealed the following results: i) Risk-averse decision-makers have an increased need to be guided by facts. The more information available to them, the lower they perceive uncertainty to be and the more willing they are to pursue a specific decision option. However, the results also show that social interaction does not change individual risk-willingness during the decision-making process. ii) Generally, during BMI decisions, group interaction serves primarily to increase the group’s information base for making good decisions rather than to provide social exchange. Further, decision-makers focus mainly on information available to all decision-makers in the team and less on personal knowledge. This work contributes to the strategic decision-making literature in two ways. First, it gives insights into how group-interaction effects influence an organization’s strategic BMI decision-making. Second, it enriches risk-management research by highlighting how individual risk-willingness affects organizational strategic decision-making. To date, BMI research has treated risk aversion as an internal BMI barrier. This study, however, indicates that it is not risk aversion itself that inhibits BMI; rather, a lack of information prevents risk-averse decision-makers from choosing a riskier option. At the same time, the results show that risk-averse decision-makers are not easily carried away by the higher risk-willingness of their team members; instead, they use social interaction to gather missing information. Executives therefore need to provide sufficient information to all decision-makers to seize promising business opportunities.

Keywords: business model innovation, cognitive biases, group-interaction effects, strategic decision-making, risk-willingness

Procedia PDF Downloads 78
580 Haematological Correlates of Ischemic Stroke and Transient Ischemic Attack: Lessons Learned

Authors: Himali Gunasekara, Baddika Jayaratne

Abstract:

Haematological abnormalities are known to cause ischemic stroke and transient ischemic attack (TIA). The identification of haematological correlates plays an important role in management and secondary prevention. The objective of this study was to describe the haematological correlates of stroke and their association with the stroke profile. The haematological correlates screened were lupus anticoagulant, dysfibrinogenaemia, paroxysmal nocturnal haemoglobinuria (PNH), sickle cell disease, systemic lupus erythematosus (SLE), and myeloproliferative neoplasms (MPN). A cross-sectional descriptive study was conducted on a sample of 152 stroke patients referred to the haematology department of the National Hospital of Sri Lanka for thrombophilia screening. Different tests were performed to assess each haematological correlate. The dilute Russell's viper venom test and kaolin clotting time were done to assess lupus anticoagulant. Full blood count (FBC), blood picture, sickling test, and high-performance liquid chromatography were used for the detection of sickle cell disease. Paroxysmal nocturnal haemoglobinuria was assessed by FBC, blood picture, Ham test, and flow cytometry. FBC, blood picture, Janus kinase 2 (V617F) mutation analysis, erythropoietin level, and bone marrow examination were done to look for myeloproliferative neoplasms. Dysfibrinogenaemia was assessed by thrombin time, fibrinogen antigen test, clot observation, and the Clauss test. An antinuclear antibody test was done to look for systemic lupus erythematosus. In the study sample, 134 patients had strokes and only 18 had TIAs. Recurrence of stroke/TIA was observed in 13.2% of patients. The majority of patients (94.7%) had radiological evidence of a thrombotic event. One fourth of the patients had past thrombotic events, while 12.5% had a family history of thrombosis. Of the haematological correlates screened, lupus anticoagulant was the commonest (n=16), and dysfibrinogenaemia (n=11) had the next highest prevalence.
One patient was diagnosed with essential thrombocythaemia and one with SLE. None of the patients were positive on the screening tests for sickle cell disease and PNH. Haematological correlates were identified in 19% of the study sample. Among the stroke profile variables, only the presence of a past thrombotic history was statistically significantly associated with haematological disorders (P=0.04). Haematological disorders therefore appear to be an important factor in the etiological work-up of stroke patients, particularly those with past thrombotic events.

Keywords: stroke, transient ischemic attack, hematological correlates, hematological disorders

Procedia PDF Downloads 236
579 Coping Strategies Used by Persons with Spinal Cord Injury: A Rehabilitation Hospital Based Qualitative Study

Authors: P. W. G. D. P. Samarasekara, S. M. K. S. Seneviratne, D. Munidasa, S. S. Williams

Abstract:

Sustaining a spinal cord injury (SCI) causes severe disruption of all aspects of a person’s life, resulting in the difficult process of coping with the distressing effects of paralysis, which affect their ability to lead a meaningful life. These persons are hospitalized in the acute stage of injury and subsequently for rehabilitation and the treatment of complications. The purpose of this study was to explore the coping strategies used by persons with SCI during their rehabilitation period. A qualitative study was conducted among persons with SCI undergoing rehabilitation at the Rheumatology and Rehabilitation Hospitals, Ragama and Digana, Sri Lanka. Twelve participants were selected purposively to represent males and females; cervical, thoracic, and lumbar levels of injury; traumatic and non-traumatic causes; and different socioeconomic backgrounds. Informed consent was obtained from the participants. In-depth interviews were conducted using an interview guide, and probes were used to elicit more information and encourage participants. Interviews were audiotaped and transcribed verbatim, and qualitative content analysis was conducted. Ethical approval for this study was obtained from the Ethics Review Committee, Faculty of Medicine, University of Kelaniya. Five themes were identified in the content analysis: social support, religious beliefs, determination, acceptance, and making comparisons. Participants indicated that support from their family members had been an essential factor in coping after sustaining an SCI, and they stressed the importance of emotional support from family members during rehabilitation. Many participants held a strong belief in a God who took a personal interest in their lives, and this belief played an important role in their ability to cope with the injury. They also believed that what happens to them in this life results from their actions in previous lives.
They expressed that determination was an essential factor in coping with their injury. They focused on the positive aspects of life and accepted the disability. They compared themselves with other persons who were worse off, which helped lift them out of unpleasant experiences; even some of the most severely injured and disabled participants showed evidence of using this coping strategy. Identification of the coping strategies used by persons with SCI will help nurses and other health-care professionals reinforce the most effective strategies among persons with SCI. The findings suggest that engagement coping positively influences psychosocial adaptation.

Keywords: content analysis, coping strategies, rehabilitation, spinal cord injury

Procedia PDF Downloads 185
578 Diversity and Distribution Ecology of Coprophilous Mushrooms of Family Psathyrellaceae from Punjab, India

Authors: Amandeep Kaur, Ns Atri, Munruchi Kaur

Abstract:

Mushrooms have shaped our environment in ways that we are only beginning to understand. The weather patterns, topography, flora, and fauna of Punjab state in India create favorable growing conditions for thousands of species of mushrooms, but the region was unexplored with respect to coprophilous mushrooms growing on herbivore dung. Coprophilous mushrooms are among the most ecologically specialized fungi; they germinate and grow directly on different types of animal dung or on manured soil. The present work explores the diversity of the coprophilous mushrooms of the family Psathyrellaceae of the order Agaricales, sketches out their relationship to the human world, and reveals their significance to life on this planet. During the investigation, dung localities in 16 districts of Punjab state were explored for the collection of material. The macroscopic features of the collected mushrooms were documented on a field key. Hand-cut sections of the various parts of the carpophore, such as the pileus, gills, and stipe, together with the basidiospores, were studied microscopically under different magnifications. Various authentic publications were consulted for the identification of the investigated taxa. The classification, authentic names, and synonyms of the investigated taxa follow the latest edition of the Dictionary of the Fungi and MycoBank. The present work deals with the taxonomy of 81 collections belonging to 39 species spread over five coprophilous genera, namely Psathyrella, Panaeolus, Parasola, Coprinopsis, and Coprinellus of the family Psathyrellaceae. In the text, the investigated taxa are arranged as they appear in the key to the genera and species. All taxa have been thoroughly examined for their macroscopic, microscopic, ecological, and chemical-reaction details, with indications of their ecology and the dung types on which they can be found.
Each taxon is accompanied by a detailed listing of its prominent features and is illustrated with habitat photographs and line drawings of morphological and anatomical features. Taxa are organized according to their position in the keys, which allows easy recognition, and each is compared with similar taxa. The study has shown that dung is an important substrate that serves as a favorable niche for the growth of a variety of mushrooms. This paper offers insight into what short-lived coprophilous mushrooms can teach us about sustaining life on earth.

Keywords: abundance, basidiomycota, biodiversity, seasonal availability, systematics

Procedia PDF Downloads 65
577 Algorithm for Modelling Land Surface Temperature and Land Cover Classification and Their Interaction

Authors: Jigg Pelayo, Ricardo Villar, Einstine Opiso

Abstract:

The rampant and unintended spread of urban areas has increased the artificial component of land cover in the countryside and brought forth the urban heat island (UHI). This has led to a wide range of negative influences on human health and the environment, commonly related to air pollution, drought, higher energy demand, and water shortage. Land cover type also plays a relevant role in understanding the interaction between ground surfaces and local temperature. At present, the depiction of land surface temperature (LST) at the city/municipality scale, particularly in certain areas of Misamis Oriental, Philippines, is inadequate to support efficient mitigation of and adaptation to the surface urban heat island (SUHI). This study therefore applies Landsat 8 satellite data and low-density Light Detection and Ranging (LiDAR) products to map an automated LST model and a crop-level land cover classification at the local scale, through a theoretical and algorithm-based approach utilizing data analysis applied to a multi-dimensional image object model. The paper also aims to explore the relationship between the derived LST and the land cover classification. The results of the presented model showed the ability of comprehensive data analysis and GIS functionalities, integrated with an object-based image analysis (OBIA) approach, to automate complex map-production processes with considerable efficiency and high accuracy. The findings may potentially lead to expanded investigation of the temporal dynamics of the land surface UHI.
It is worthwhile to note that the environmental significance of these interactions, studied through the combined application of remote sensing, geographic information tools, mathematical morphology, and data analysis, can provide microclimate perception and awareness and improve decision-making for land-use planning and characterization at the local and neighborhood scales. As a result, it can facilitate problem identification and support mitigation and adaptation more efficiently.
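The abstract does not specify its LST retrieval algorithm; a common single-band approach for Landsat 8 Band 10, which converts digital numbers to top-of-atmosphere radiance, then to at-sensor brightness temperature, and finally applies an emissivity correction, can be sketched as follows. The rescaling constants are the published Band 10 values normally read from a scene's MTL metadata; the emissivity value here is a hypothetical placeholder, not a value from this study.

```python
import numpy as np

# Landsat 8 Band 10 constants (normally taken from the scene's MTL metadata)
ML, AL = 3.342e-4, 0.1          # radiance rescaling gain and offset
K1, K2 = 774.8853, 1321.0789    # thermal conversion constants
WAVELENGTH = 10.895e-6          # Band 10 effective wavelength (m)
RHO = 1.438e-2                  # h * c / k_B (m K)

def land_surface_temperature(dn, emissivity):
    """Single-band LST (Kelvin) from Band 10 digital numbers."""
    radiance = ML * dn + AL                      # TOA spectral radiance
    bt = K2 / np.log(K1 / radiance + 1.0)        # at-sensor brightness temperature (K)
    return bt / (1.0 + (WAVELENGTH * bt / RHO) * np.log(emissivity))

# Two illustrative digital numbers with an assumed surface emissivity of 0.97
dn = np.array([25000.0, 30000.0])
print(land_surface_temperature(dn, emissivity=0.97))
```

In practice the emissivity raster would itself be estimated per pixel, e.g. from NDVI-derived vegetation proportion, before applying the correction.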

Keywords: LiDAR, OBIA, remote sensing, local scale

Procedia PDF Downloads 282
576 Development and Validation of a Quantitative Measure of Engagement in the Analysing Aspect of Dialogical Inquiry

Authors: Marcus Goh Tian Xi, Alicia Chua Si Wen, Eunice Gan Ghee Wu, Helen Bound, Lee Liang Ying, Albert Lee

Abstract:

The Map of Dialogical Inquiry provides a conceptual view of the underlying nature of future-oriented skills. According to the Map, learning is learner-oriented, with conversational time shifted from teachers to learners, who play a strong role in deciding what and how they learn. For example, in courses operating on the principles of Dialogical Inquiry, learners left the classroom with a deeper understanding of the topic, broader exposure to differing perspectives, and stronger critical thinking capabilities than under traditional approaches to teaching. Despite its contributions to learning, the Map is grounded in a qualitative approach, both in its development and in its application for providing feedback to learners and educators. Studies hinge on open-ended responses by Map users, which can be time-consuming and resource-intensive to collect and analyze. The present research is motivated by this gap in practicality and aims to develop and validate a quantitative measure of the Map. A quantifiable measure may also strengthen applicability by making learning experiences trackable and comparable. The Map outlines eight learning aspects that learners should engage holistically; this research focuses on the Analysing aspect. According to the Map, Analysing has four key components: liking or engaging in logic, using interpretative lenses, seeking patterns, and critiquing and deconstructing. Existing scales of constructs related to these components (e.g., critical thinking, rationality) were identified so that items for the current scale could be adapted from them. Specifically, items were phrased beginning with an “I”, followed by an action phrase, to assess learners' engagement with Analysing either in general or in classroom contexts.
Following standard scale-development procedure, the 26-item Analysing scale was administered to 330 participants alongside existing scales with varying levels of association to Analysing, to establish construct validity. The scale was then refined, and its dimensionality, reliability, and validity were determined. Confirmatory factor analysis (CFA) assessed whether scale items loaded onto the four factors corresponding to the components of Analysing. To refine the scale, items were removed via an iterative procedure according to their factor loadings and the results of likelihood ratio tests at each step; eight items were removed this way. The Analysing scale is better conceptualised as unidimensional, rather than as comprising the four components identified by the Map, for three reasons: 1) the covariance matrix of the model specified for the CFA was not positive definite, 2) correlations among the four factors were high, and 3) exploratory factor analyses did not yield an easily interpretable factor structure for Analysing. Regarding validity, since the Analysing scale had higher correlations with conceptually similar scales than with conceptually distinct scales, with minor exceptions, construct validity was largely established. Overall, the satisfactory reliability and validity of the scale suggest that the current procedure can yield a valid and easy-to-use measure for each aspect of the Map.
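The abstract reports satisfactory reliability without naming the statistic used. As a minimal illustration of the most common internal-consistency estimate for a unidimensional scale, Cronbach's alpha, here is a sketch on simulated item scores sharing one latent factor (hypothetical data, not the authors' 330-respondent sample):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# 200 simulated respondents, 4 items driven by one shared latent trait
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
scores = latent + 0.5 * rng.normal(size=(200, 4))  # item = trait + noise
print(round(cronbach_alpha(scores), 2))
```

With this signal-to-noise ratio the theoretical alpha is about 0.94, illustrating how strongly a shared factor inflates internal consistency.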

Keywords: analytical thinking, dialogical inquiry, education, lifelong learning, pedagogy, scale development

Procedia PDF Downloads 91
575 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Forensic DNA analysis has received much attention over the last three decades due to its incredible usefulness in human identification, and the statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that is not masked by an allele of a potential contributor and is considered an artefact presumed to arise from miscopying or slippage during the PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, calculated relative to the corresponding parent allele height. The analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts such as stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and make significantly greater use of continuous peak height information, resulting in more efficient and reliable interpretations. A sound methodology to distinguish between stutters and real alleles is therefore essential for the accuracy of the interpretation, and any such method has to be able to model stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data-analysis tools in the clustering and classification of data and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is represented by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in finite mixture models, however, is practically difficult, even though the calculations are relatively simple.
Infinite mixture models, in contrast, do not require the user to specify the number of components. Instead, a Dirichlet process, an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the problem of choosing the number of components. The Chinese restaurant process (CRP), the stick-breaking process, and the Pólya urn scheme are frequently used representations of Dirichlet priors in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling the stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
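To illustrate how a CRP prior sidesteps fixing the number of components, here is a minimal generic simulation (not the authors' model): each "customer" (observation) joins an existing "table" (mixture component) with probability proportional to its occupancy, or opens a new table with probability proportional to the concentration parameter alpha.

```python
import random

def chinese_restaurant_process(n_customers, alpha, seed=42):
    """Sample a random partition of n_customers via the CRP.

    Customer i joins an existing table with probability proportional to
    its occupancy, or opens a new table with probability proportional
    to the concentration parameter alpha.
    """
    rng = random.Random(seed)
    tables = []        # tables[k] = number of customers seated at table k
    assignment = []    # assignment[i] = table index of customer i
    for _ in range(n_customers):
        weights = tables + [alpha]  # existing occupancies, then new-table mass
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(tables):
            tables.append(1)        # a new mixture component is born
        else:
            tables[k] += 1
        assignment.append(k)
    return assignment, tables

assignment, tables = chinese_restaurant_process(100, alpha=1.0)
print(len(tables), sum(tables))
```

The number of occupied tables grows roughly logarithmically with the number of customers, so the partition, and hence the number of regression components, is inferred from the data rather than fixed in advance.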

Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter

Procedia PDF Downloads 330
574 South-Mediterranean Oaks Forests Management in Changing Climate Case of the National Park of Tlemcen-Algeria

Authors: K. Bencherif, M. Bellifa

Abstract:

The expected climatic changes in North Africa are an increase in both the intensity and frequency of summer droughts and a reduction in water availability during the growing season. The existing coppices and forest formations in the national park of Tlemcen are dominated by holm oak, zen oak, and cork oak. These open, fragmented structures do not seem robust enough to promise durable protection against climate change. Given the observed climatic tendency, the objective is to analyze the climatic context and its evolution, taking into account the likely behaviour of the oak species over the next 20-30 years on the one hand, and the landscape context, in relation to the most appropriate silvicultural models and especially to human activities, on the other. The study methodology is based on climatic synthesis and on floristic and spatial analysis. Meteorological data from the period 1989-2009 are used to characterize the current climate. Another approach, based on dendrochronological analysis of a 120-year-old Aleppo pine stem sampled in the park, is used to analyze the climate evolution over one century. Results on climate evolution over 50 years obtained from predictive climate models are used to project the climate tendency in the park. Spatially, stratified sampling was carried out in each forest unit of the park to reduce the degree of heterogeneity and to delineate the different stands easily using GPS. Results from a previous study are used to analyze the anthropogenic factor. According to the forecasts for the period 2025-2100, the number of warm days with a temperature over 25°C would increase from 30 to 70. The monthly mean temperatures of the maxima (M) and minima (m) would rise from 30.5°C to 33°C and from 2.3°C to 4.8°C, respectively. With an average drop of 25%, precipitation will be reduced to 411.37 mm.
These new data highlight the importance of fire risk and of the water stress that would affect the vegetation and the regeneration process. Spatial analysis highlights the forest and agricultural dimensions of the park compared with urban habitat and bare soils. Maps show both the state of fragmentation and the regression of forest surface (50% of the total area). Within the park, fires have already affected all cover types, creating low structures of various densities. On the silvicultural plane, zen oak forms pure stands in some places, and this invasion must be considered a natural tendency in which zen oak becomes the structuring species. Climate-related changes are not the main driver of the real impact that South-Mediterranean forests are undergoing, which stems from the human pressures they endure. Nevertheless, the oak stands in the national park of Tlemcen will have to face unexpected climatic changes, such as a changing rainfall regime associated with a lengthening of the period of water stress, heavy rainfall, and/or sudden cold snaps. Faced with these new conditions, management based on the mixed uneven-aged high forest method, promoting the more dynamic species, could be an appropriate measure.
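As a quick consistency check on the figures above (my own arithmetic, not stated in the abstract), a 25% average drop ending at 411.37 mm implies a current annual precipitation baseline of roughly 548.5 mm:

```python
projected_mm = 411.37                     # projected annual precipitation (abstract)
drop = 0.25                               # reported average reduction
baseline_mm = projected_mm / (1 - drop)   # implied current baseline
print(round(baseline_mm, 1))              # roughly 548.5
```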

Keywords: global warming, mediterranean forest, oak shrub-lands, Tlemcen

Procedia PDF Downloads 389
573 Determination of Slope of Hilly Terrain by Using Proposed Method of Resolution of Forces

Authors: Reshma Raskar-Phule, Makarand Landge, Saurabh Singh, Vijay Singh, Jash Saparia, Shivam Tripathi

Abstract:

For any construction project, slope calculations are necessary to evaluate constructability on the site: the slope of parking lots, sidewalks, and ramps, the slope of sanitary sewer lines, and the slope of roads and highways. When slopes and grades are to be determined, designers are concerned with establishing proper slopes and grades for their projects, assessing cut-and-fill volumes, and determining pipe inverts. Several established instruments are commonly used to determine slopes, such as the dumpy level, Abney level or hand level, inclinometer, and tacheometer, along with the Henry method, and surveyors are very familiar with their use. However, these have drawbacks that cannot be neglected in major surveying work. First, they require expert surveyors and skilled staff. Accessibility, visibility, and accommodation in remote hilly terrain are difficult for these instruments and surveying teams. Determining gentle slopes for road and sewer drainage construction in congested urban places with these instruments is also not easy. This paper aims to develop a method that requires minimum fieldwork, minimum instrumentation, no high-end technology or software, and low cost. Using only basic, handy surveying accessories (a plane table with a fixed weighing machine, standard weights, an alidade, a tripod, and ranging rods), it should be able to determine the terrain slope in congested areas as well as in remote hilly terrain. Being simple and easy to understand and perform, the method can readily be taught to people of the local rural area. The idea for the proposed method is based on the principle of resolution of weight components: when an object of standard weight W is placed on an inclined surface with a weighing machine below it, the machine measures the cosine component of its weight.
The slope can be determined from the relation between the true (actual) weight and the apparent weight. A proper procedure is to be followed, which includes site location, centering and sighting work, fixing the whole set-up at the identified station, and finally taking the readings. A set of slope-determination experiments, on mild and moderate slopes, was carried out with the proposed method and with a theodolite, both in a controlled environment on the college campus and in an uncontrolled environment on an actual site. The slopes determined by the proposed method were compared with those determined by the established instruments. For mild slopes in the uncontrolled environment, for example, the difference between the proposed and established methods ranged from 4′ at a distance of 8 m to 2°15′20″ at a distance of 16 m. Thus, for mild slopes, the proposed method is suitable for distances of 8 m to 10 m. The correlation between the proposed method and the established method is good, 0.91 to 0.99, across the various combinations of mild and moderate slopes in the controlled and uncontrolled environments.
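The weight-component relation described above can be sketched numerically (a generic illustration with hypothetical readings, not the authors' field data): since the machine on the incline reads W_apparent = W * cos(theta), the slope angle follows as theta = arccos(W_apparent / W).

```python
import math

def slope_from_weights(true_weight, apparent_weight):
    """Slope angle in degrees from the cosine weight component.

    On an incline, the weighing machine reads W_apparent = W * cos(theta),
    so theta = arccos(W_apparent / W).
    """
    ratio = apparent_weight / true_weight
    if not 0 < ratio <= 1:
        raise ValueError("apparent weight must be positive and at most the true weight")
    return math.degrees(math.acos(ratio))

# Hypothetical reading: a 10 kg standard weight indicating 9.96 kg on the incline
print(round(slope_from_weights(10.0, 9.96), 2))
```

Note how insensitive the reading is for gentle grades: a slope of about 5° changes the indicated weight by only 0.4%, which is consistent with the method's reported suitability being limited to short sight distances on mild slopes.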

Keywords: surveying, plane table, weight component, slope determination, hilly terrain, construction

Procedia PDF Downloads 96