Search results for: slow-moving system
381 Cultural Competence in Palliative Care
Authors: Mariia Karizhenskaia, Tanvi Nandani, Ali Tafazoli Moghadam
Abstract:
Hospice palliative care (HPC) is one of the most complex philosophies of care, in which the physical, social/cultural, and spiritual aspects of human life are intermingled, each playing an undeniably significant role. Among these dimensions of care, culture occupies an outstanding position in determining the process and goals of HPC. This study shows the importance of cultural elements in establishing effective and optimized structures of HPC in the Canadian healthcare environment. Our systematic search included Medline, Google Scholar, and the St. Lawrence College Library, considering original, peer-reviewed research papers published from 1998 to 2023 to identify recent national literature connecting culture and palliative care delivery. The feature most frequently presented among the articles is the role of culture in the efficiency of HPC. It has been shown repeatedly that including the culture-specific parameters of each nation in this system of care is vital for its success. On the other hand, ignorance of the distinctive cultural trends of a specific location has been accompanied by significant failure rates. Accordingly, implementing a culturally adaptable approach is mandatory for multicultural societies. A further outcome of research studies in this field underscores the importance of culture-oriented education for healthcare staff. Thus, all practitioners involved in HPC will recognize the importance of traditions, religions, and social habits in addressing care requirements. Cultural competency training is a telling example of the establishment of this strategy in healthcare and has come to the aid of HPC in recent years. Another complexity of culturally informed HPC today is the long-standing issue of racialization. Systematic and subconscious deprivation of minorities has always been an adversary of advanced levels of care. 
The last part of the constellation of our research outcomes comprises the ethical considerations of culturally driven HPC. This is the most sophisticated aspect of our topic because almost all the analyses, arguments, and justifications are subjective. While there was no standard measure for ethical elements in clinical studies with palliative interventions, many research teams endorsed applying ethical principles to all involved patients. Notably, interpretations and projections of ethics differ across cultural backgrounds. Therefore, healthcare providers should always be aware of the most respectful methodologies of HPC on a case-by-case basis. Cultural training programs have been utilized as one of the main tactics to improve the ability of healthcare providers to address the cultural needs and preferences of diverse patients and families. In this way, most of the involved healthcare practitioners will be equipped with cultural competence. Consideration of the ethical and racial specifications of the clients of this service will boost the effectiveness and fruitfulness of HPC. Canadian society is a colorful compilation of multiple nationalities; accordingly, healthcare clients are diverse, and this diversity is also reflected in HPC patients. This fact justifies the importance of studying all the cultural aspects of HPC to provide optimal care across this enormous land.
Keywords: cultural competence, end-of-life care, hospice, palliative care
Procedia PDF Downloads 73
380 “laws Drifting Off While Artificial Intelligence Thriving” – A Comparative Study with Special Reference to Computer Science and Information Technology
Authors: Amarendar Reddy Addula
Abstract:
Definition of Artificial Intelligence: Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition, and machine vision. Artificial Intelligence (AI) is an innovative medium for digital business, according to a new report by Gartner. The last 10 years represent an advance period in AI’s development, spurred by a confluence of factors, including the rise of big data, advancements in compute infrastructure, new machine learning techniques, the emergence of cloud computing, and the vibrant open-source ecosystem. AI is extending to a broader set of use cases and users and is gaining popularity because this improves AI’s versatility, effectiveness, and adaptability. Edge AI will enable digital moments by employing AI for real-time analytics closer to data sources. Gartner predicts that by 2025, more than 50% of all data analysis by deep neural networks will occur at the edge, up from less than 10% in 2021. Responsible AI is an umbrella term for making suitable business and ethical choices when adopting AI. It requires considering business and societal value, risk, trust, transparency, fairness, bias mitigation, explainability, accountability, safety, privacy, and regulatory compliance. Responsible AI is ever more significant amidst growing regulatory oversight, consumer expectations, and rising sustainability goals. Generative AI is the use of AI to generate new artifacts and produce innovative products. To date, generative AI efforts have concentrated on creating media content such as photorealistic images of people and things, but it can also be used for code generation, creating synthetic data, and designing pharmaceuticals and materials with specific properties. AI is the subject of a wide-ranging debate in which there is growing concern about its ethical and legal aspects. 
Frequently, the two are mixed and confused despite being different issues and areas of knowledge. The ethical debate raises two main problems: the first, conceptual, relates to the idea and content of ethics; the second, operational, concerns its relationship with the law. Both set up models of social behavior, but they are different in scope and nature. The juridical analysis is grounded on a non-formalistic scientific methodology. This means that it is essential to consider the nature and characteristics of AI as a primary step toward the description of its legal paradigm. In this regard, there are two main issues: the relationship between artificial and human intelligence, and the question of the unitary or diverse nature of AI. From that theoretical and practical base, the study of the legal system is carried out by examining its foundations, the governance model, and the regulatory bases. According to this analysis, throughout the work and in the conclusions, International Law is identified as the primary legal framework for the regulation of AI.
Keywords: artificial intelligence, ethics & human rights issues, laws, international laws
Procedia PDF Downloads 93
379 The Social Aspects of Mental Illness among Orthodox Christians of the Tigrinya Ethnic Group in Eritrea
Authors: Erimias Firre
Abstract:
This study is situated within the religio-cultural milieu of Coptic Orthodox Christians of the Tigrinya ethnic group in Eritrea. With this ethnic group being conservative and traditionally bound, extended family structures organized along various clans and expansive community networks are the distinguishing mark of its members. Notably, Coptic Tigrinya constitute the largest percentage of all Christian denominations in Eritrea. As religious and cultural beliefs, rituals, and teachings permeate all aspects of social life, a distinct worldview and traditionalized conceptualizations of health and illness are common. Accordingly, this study argues that religio-culturally bound illness ideologies largely determine the perception, help-seeking behavior, and healing preferences of Coptic Tigrinya in Eritrea. The study bears significance in the sense that it bridges an important knowledge gap, given that it is ethno-linguistically (within the Tigrinya ethnic group), spatially (central region of Eritrea), and religiously (Coptic Christianity) specific. The conceptual framework guiding this research centered on the social determinants of mental health and explores, through the lens of critical theory, how existing systems generate social vulnerability and structural inequality, providing a platform to reveal how the psychosocial model has the capacity to emancipate and empower those with mental disorders to live productive and meaningful lives. A case study approach was employed to explore the interrelationship between religio-cultural beliefs and practices and perceptions of the common mental disorders of depression, anxiety, bipolar affective disorder, schizophrenia, and post-traumatic stress disorder, and the impact of these perceptions on people with those mental disorders. Purposive sampling was used to recruit 41 participants representing seven diverse cohorts: people with common mental disorders, family caregivers, general community members, ex-fighters, priests, and staff at St. 
Mary’s and Biet-Mekae Community Health Center, resulting in rich data for thematic analysis. Findings highlighted that current religio-cultural perceptions of the causes and treatment of mental disorders among Coptic Tigrinya result in widespread labelling, stigma, and discrimination, both of those with mental disorders and of their families. Traditional healing sources are almost exclusively tried first, sometimes for many years, before families and sufferers seek formal medical assessment and treatment, resulting in difficult-to-treat illness chronicity. Service gaps in the formal medical system result in an inability to meet the principles enshrined in the WHO Mental Health Action Plan 2013-2020, to which the Eritrean Government is a signatory. However, the study found that across all participant cohorts there was a desire for change that will create a culture in which those with mental disorders have restored hope, connectedness, healing, and self-determination.
Keywords: Coptic Tigrinya, mental disorders, psychosocial model, social integration and recovery, traditional healing
Procedia PDF Downloads 185
378 E-Waste Generation in Bangladesh: Present and Future Estimation by Material Flow Analysis Method
Authors: Rowshan Mamtaz, Shuvo Ahmed, Imran Noor, Sumaiya Rahman, Prithvi Shams, Fahmida Gulshan
Abstract:
The last few decades have witnessed a phenomenal rise in the use of electrical and electronic equipment globally in our everyday life. As these items reach the end of their lifecycle, they turn into e-waste and contribute to the waste stream. Bangladesh, in conformity with the global trend and due to its ongoing rapid growth, is also using electronics-based appliances and equipment at an increasing rate. This has caused a corresponding increase in the generation of e-waste. Bangladesh is a developing country; its overall waste management system is not yet efficient, nor is it environmentally sustainable. Most of its solid waste is disposed of crudely at dumping sites. The addition of e-waste, which often contains toxic heavy metals, to its waste stream has made the situation more difficult and challenging. Assessment of e-waste generation is an important step towards addressing the challenges posed by e-waste, setting targets, and identifying best practices for its management. Understanding and proper management of e-waste is a stated item of the Sustainable Development Goals (SDG) campaign, and Bangladesh is committed to fulfilling it. A better understanding and the availability of reliable baseline data on e-waste will help prevent illegal dumping, promote recycling, and create jobs in the recycling sector, thus facilitating sustainable e-waste management. With this objective in mind, the present study has attempted to estimate the amount of e-waste and its future generation trend in Bangladesh. To achieve this, sales data on eight selected electrical and electronic products (TV, refrigerator, fan, mobile phone, computer, IT equipment, CFL (Compact Fluorescent Lamp) bulbs, and air conditioner) have been collected from different sources. 
Primary and secondary data on the collection, recycling, and disposal of e-waste have also been gathered by questionnaire survey, field visits, interviews, and formal and informal meetings with the stakeholders. The Material Flow Analysis (MFA) method has been applied, and mathematical models have been developed in the present study to estimate e-waste amounts and their future trends up to the year 2035 for the eight selected electrical and electronic products. The end-of-life (EOL) method is adopted in the estimation. Model inputs are the products’ annual sale/import data, past and future sales data, and average life span. From the model outputs, it is estimated that the generation of e-waste in Bangladesh in 2018 was 0.40 million tons, and by 2035 the amount will be 4.62 million tons, with an average annual growth rate of 20%. Among the eight selected products, the amounts of e-waste generated from seven products are increasing, whereas only one product, the CFL bulb, showed a decreasing trend of waste generation. The average growth rate of e-waste from TV sets is the highest (28%), while those from fans and IT equipment are the lowest (11%). Field surveys conducted in the e-waste recycling sector also revealed that every year around 0.0133 million tons of e-waste enter the recycling business in Bangladesh, which may increase in the near future.
Keywords: Bangladesh, end of life, e-waste, material flow analysis
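The end-of-life estimation described in the abstract can be sketched in a few lines: units sold L years ago (the average lifespan) are assumed to become waste this year. This is a minimal sketch of the general EOL approach; the sales figures, lifespan, and unit weight below are illustrative placeholders, not the study's data.

```python
# Minimal sketch of the end-of-life (EOL) material flow method: e-waste
# arising in a target year comes from units sold `lifespan` years earlier.
# All numbers here are hypothetical, not the study's inputs.

def eol_waste(sales_by_year, lifespan, unit_weight_tons, target_year):
    """E-waste (tons) arising in target_year under the simple EOL assumption
    that units reach end of life exactly `lifespan` years after sale."""
    sale_year = target_year - lifespan
    return sales_by_year.get(sale_year, 0) * unit_weight_tons

# Hypothetical TV sales (units sold per year) and parameters
tv_sales = {2008: 500_000, 2009: 600_000, 2010: 720_000}
lifespan_years = 10          # assumed average service life
weight_tons = 0.02           # assumed 20 kg per set

waste_2018 = eol_waste(tv_sales, lifespan_years, weight_tons, 2018)
print(waste_2018)  # 500,000 sets sold in 2008 * 0.02 t = 10000.0 t
```

A fuller model would spread retirements over a lifespan distribution rather than a single year, but the single-lifespan form above conveys the core bookkeeping.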
Procedia PDF Downloads 196
377 Seawater Desalination for Production of Highly Pure Water Using a Hydrophobic PTFE Membrane and Direct Contact Membrane Distillation (DCMD)
Authors: Ahmad Kayvani Fard, Yehia Manawi
Abstract:
Qatar’s primary source of fresh water is seawater desalination. Among the major processes commercially available on the market, the most common large-scale techniques are Multi-Stage Flash distillation (MSF), Multi-Effect Distillation (MED), and Reverse Osmosis (RO). Although commonly used, these three processes are highly expensive owing to high energy input requirements and high operating costs associated with maintenance and the stress induced on the systems in harsh alkaline media. Besides cost, the environmental footprint of these desalination techniques is significant: damage to the marine ecosystem, large land use, and the discharge of tons of greenhouse gases with a huge carbon footprint. Membrane distillation (MD) is a less energy-consuming, membrane-separation-based technique being sought to reduce both the carbon footprint and operating costs. Having emerged in the 1960s, MD is an alternative technology for water desalination that has attracted more attention since the 1980s. The MD process involves the evaporation of a hot feed, typically below the boiling point of brine at standard conditions, by creating a water vapor pressure difference across a porous, hydrophobic membrane. The main advantages of MD compared to other commercially available technologies (MSF and MED), and especially RO, are the reduction of membrane and module stress due to the absence of trans-membrane pressure, less impact of contaminant fouling on the distillate because only water vapor is transferred, the possibility of utilizing low-grade or waste heat from the oil and gas industries to heat the feed up to the required temperature difference across the membrane, superior water quality, and relatively lower capital and operating cost. To achieve the objective of this study, a state-of-the-art flat-sheet cross-flow DCMD bench-scale unit was designed, commissioned, and tested. 
The objective of this study is to analyze the characteristics and morphology of the membrane suitable for DCMD, through SEM imaging and contact angle measurement, and to study the water quality of the distillate produced by the DCMD bench-scale unit. Comparison with available literature data is undertaken where appropriate, and laboratory data are used to compare DCMD distillate quality with that of other desalination techniques and standards. Membrane SEM analysis showed that the PTFE membrane used for the study has a contact angle of 127° and a highly porous surface, supported by a less porous, larger-pore-size PP membrane. A study of the effect of feed solution salinity and temperature on the water quality of the distillate, based on ICP and IC analysis, showed that at any salinity and at different feed temperatures (up to 70°C), the electrical conductivity of the distillate is less than 5 μS/cm with 99.99% salt rejection; DCMD thus proved to be a feasible and effective process capable of consistently producing high-quality distillate from very high-salinity feed solutions (i.e., 100,000 mg/L TDS), a substantial quality advantage compared with other desalination methods such as RO and MSF.
Keywords: membrane distillation, waste heat, seawater desalination, membrane, freshwater, direct contact membrane distillation
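The vapor-pressure driving force that the abstract describes can be illustrated with the Antoine equation for water. This is a hedged sketch, not the study's model: the feed and permeate temperatures below are example values, and the Antoine parameters are the common set valid roughly between 1 and 100 °C.

```python
# Illustrative calculation of the water vapor-pressure difference that
# drives flux in DCMD. Temperatures are example values, not the study's
# operating conditions.

def p_sat_mmhg(t_celsius):
    """Saturation vapor pressure of water (mmHg) from the Antoine equation,
    with parameters commonly quoted for ~1-100 degC."""
    return 10 ** (8.07131 - 1730.63 / (233.426 + t_celsius))

feed_t, permeate_t = 70.0, 25.0   # hypothetical hot-feed and cold-permeate temps
dp = p_sat_mmhg(feed_t) - p_sat_mmhg(permeate_t)
print(round(dp, 1))  # vapor-pressure driving force across the membrane, mmHg
```

The steep, exponential rise of saturation pressure with temperature is why even modest feed heating (e.g., with waste heat) produces a usable driving force.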
Procedia PDF Downloads 225
376 Effect of Renin Angiotensin Pathway Inhibition on the Efficacy of Anti-programmed Cell Death (PD-1/L-1) Inhibitors in Advanced Non-small Cell Lung Cancer Patients- Comparison of Single Hospital Retrospective Assessment to the Published Literature
Authors: Esther Friedlander, Philip Friedlander
Abstract:
The use of immunotherapy that inhibits programmed death-1 (PD-1) or its ligand PD-L1 confers survival benefits in patients with non-small cell lung cancer (NSCLC). However, approximately 45% of patients experience primary treatment resistance, necessitating the development of strategies to improve efficacy. While the renin-angiotensin system (RAS) has systemic hemodynamic effects, tissue-specific regulation exists, along with modulation of immune activity, in part through regulation of myeloid cell activity, leading to the hypothesis that RAS inhibition may improve anti-PD-1/L-1 efficacy. A retrospective analysis was conducted of 173 advanced solid tumor cancer patients treated with a PD-1/L-1 inhibitor in a defined time period at Valley Hospital, a community hospital in New Jersey, USA. It showed a statistically significant relationship between RAS pathway inhibition (RASi, through concomitant treatment with an ACE inhibitor or angiotensin receptor blocker) and positive efficacy of the immunotherapy that was independent of age, gender, and cancer type. Subset analysis revealed a strong numerical benefit for efficacy in patients with both squamous and nonsquamous NSCLC, as determined by documented clinician assessment of efficacy and by duration of therapy. A PubMed literature search was then conducted to identify studies assessing the effect of RAS pathway inhibition on anti-PD-1/L1 efficacy in advanced solid tumor patients and to compare these findings with those seen in the Valley Hospital retrospective study, with a focus on NSCLC specifically. A total of 11 articles were identified assessing the effects of RAS pathway inhibition on the efficacy of checkpoint inhibitor immunotherapy in advanced cancer patients. Of the 11 studies, 10 assessed the effect of RASi on survival in the context of treatment with anti-PD-1/PD-L1, while one assessed the effect on CTLA-4 inhibition. 
Eight of the studies included patients with NSCLC, while the remaining 2 were specific to genitourinary malignancies. Of the 8 studies, two were specific to NSCLC patients, with the remaining 6 studies including a range of cancer types, of which NSCLC was one. Of these 6 studies, only 2 reported specific survival data for the NSCLC subpopulation. Patient characteristics, multivariate analysis data, and efficacy data from the 2 NSCLC-specific studies and the 2 basket studies that provided data on the NSCLC subpopulation were compared with those from the Valley Hospital retrospective study, supporting a broader effect of RASi on anti-PD-1/L1 efficacy in advanced NSCLC, with the majority of studies showing statistically significant benefit or strong statistical trends, but with one study demonstrating worsened outcomes. This comparison of studies extends published findings to the community hospital setting and supports prospective assessment, through randomized clinical trials, of efficacy in NSCLC patients, with pharmacodynamic components to determine the effect on immune cell activity in tumors and on the composition of the tumor microenvironment.
Keywords: immunotherapy, cancer, angiotensin, efficacy, PD-1, lung cancer, NSCLC
Procedia PDF Downloads 68
375 Negative Perceptions of Ageing Predicts Greater Dysfunctional Sleep Related Cognition Among Adults Aged 60+
Authors: Serena Salvi
Abstract:
Ageist stereotypes and practices have become a normal and therefore pervasive phenomenon in various aspects of everyday life. Over the past years, renewed awareness of self-directed age stereotyping in older adults has given rise to a line of research focused on the potential role of attitudes towards ageing in seniors’ health and functioning. This set of studies has shown how negative internalisation of ageist stereotypes discourages older adults from seeking medical advice, in addition to being associated with negative subjective health evaluations. An important dimension of mental health that is often affected in older adults is sleep quality. Self-reported sleep quality among older adults has often proved unreliable when compared with objective sleep measures. Investigations focused on self-reported sleep quality among older adults suggest that this portion of the population tends to accept disrupted sleep if it is believed to be normal for their age. On the other hand, unrealistic expectations and dysfunctional beliefs about sleep in ageing might prompt older adults to report sleep disruption even in the absence of objectively disrupted sleep. The objective of this study is to examine the association between personal attitudes towards ageing in adults aged 60+ and dysfunctional sleep-related cognition. More specifically, this study aims to investigate a potential association between personal attitudes towards ageing, sleep locus of control, and dysfunctional beliefs about sleep among this portion of the population. Data in this study were statistically analysed in SPSS. Participants were recruited through the online participant recruitment system Prolific. Attention check questions were included throughout the questionnaire, and consistency of responses was examined. Prior to the commencement of this study, ethical approval was granted (ref. 39396). 
Descriptive statistics were used to determine the frequencies, means, and SDs of the variables. The Pearson coefficient was used for interval variables, the independent t-test for comparing means between two independent groups, analysis of variance (ANOVA) for comparing means across several independent groups, and hierarchical linear regression models for predicting criterion variables from predictor variables. In this study, self-perceptions of ageing were assessed using the APQ-B subscales, while dysfunctional sleep-related cognition was operationalised using the SLOC and DBAS-16 scales. Of the subscales in the brief version of the APQ questionnaire, Emotional Representations (ER), Control Positive (PC), and Control and Consequences Negative (NC) proved to be of particular relevance for the remit of this study. Regression analysis shows that an increase in the APQ-B subscale Emotional Representations (ER) predicts an increase in dysfunctional beliefs and attitudes towards sleep in this sample, after controlling for subjective sleep quality, level of depression, and chronological age. A second regression analysis showed that the APQ-B subscales Control Positive (PC) and Control and Consequences Negative (NC) were significant predictors of the variance in SLOC, after controlling for subjective sleep quality, level of depression, and dysfunctional beliefs about sleep.
Keywords: sleep-related cognition, perceptions of aging, older adults, sleep quality
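The hierarchical regression step described in the abstract can be sketched as follows: control variables are entered first, then the predictor of interest, and the change in R-squared indicates its added explanatory value. This is a simulated sketch only; all variables are synthetic stand-ins, not the study's sample, and the effect sizes are arbitrary.

```python
# Simulated sketch of hierarchical linear regression: fit controls-only,
# then add the predictor, and compare R-squared. Data are synthetic.
import numpy as np

def r_squared(predictors, y):
    """R^2 from an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(42)
n = 200
sleep_quality = rng.normal(size=n)   # stand-in control: subjective sleep quality
depression = rng.normal(size=n)      # stand-in control: depression score
age = rng.normal(size=n)             # stand-in control: chronological age
emotional_rep = rng.normal(size=n)   # predictor: APQ-B Emotional Representations
# Synthetic outcome (DBAS-16-like score) with a built-in contribution from ER
dbas16 = 0.3 * depression + 0.5 * emotional_rep + rng.normal(size=n)

r2_controls = r_squared([sleep_quality, depression, age], dbas16)
r2_full = r_squared([sleep_quality, depression, age, emotional_rep], dbas16)
print(round(r2_full - r2_controls, 3))  # R-squared change attributable to ER
```

In the nested-model comparison the full model's R-squared can never fall below the controls-only model's; the size (and significance test) of the increment is what carries the inference.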
Procedia PDF Downloads 102
374 Evaluation of Antibiotic Resistance and Extended-Spectrum β-Lactamases Production Rates of Gram Negative Rods in a University Research and Practice Hospital, 2012-2015
Authors: Recep Kesli, Cengiz Demir, Onur Turkyilmaz, Hayriye Tokay
Abstract:
Objective: Gram-negative rods are a large group of bacteria comprising many families, genera, and species. Most clinical isolates belong to the family Enterobacteriaceae. Resistance due to the production of extended-spectrum β-lactamases (ESBLs) is a difficulty in the management of Enterobacteriaceae infections, but other mechanisms of resistance are also emerging, leading to multidrug resistance and threatening to create panresistant species. In this study, we aimed to evaluate the resistance rates of Gram-negative rods isolated from clinical specimens in the Microbiology Laboratory, Afyon Kocatepe University, ANS Research and Practice Hospital, between October 2012 and September 2015. Methods: The Gram-negative rod strains were identified by conventional methods and the VITEK 2 automated identification system (bioMérieux, Marcy l’Étoile, France). Antibiotic resistance tests were performed by both the Kirby-Bauer disk-diffusion method and the automated Antimicrobial Susceptibility Testing system (AST, bioMérieux, Marcy l’Étoile, France). Disk diffusion results were evaluated according to the standards of the Clinical and Laboratory Standards Institute (CLSI). Results: Of the 1,701 Enterobacteriaceae strains isolated in total, 1,434 (84.3%) were Klebsiella pneumoniae, 171 (10%) were Enterobacter spp., and 96 (5.6%) were Proteus spp.; of the 639 nonfermenting Gram-negative isolates, 477 (74.6%) were identified as Pseudomonas aeruginosa, 135 (21.1%) as Acinetobacter baumannii, and 27 (4.3%) as Stenotrophomonas maltophilia. The ESBL positivity rate of the whole Enterobacteriaceae group studied was 30.4%. 
Antibiotic resistance rates for Klebsiella pneumoniae were as follows: amikacin 30.4%, gentamicin 40.1%, ampicillin-sulbactam 64.5%, cefepime 56.7%, cefoxitin 35.3%, ceftazidime 66.8%, ciprofloxacin 65.2%, ertapenem 22.8%, imipenem 20.5%, meropenem 20.5%, and trimethoprim-sulfamethoxazole 50.1%; for 114 Enterobacter spp. isolates they were: amikacin 26.3%, gentamicin 31.5%, cefepime 26.3%, ceftazidime 61.4%, ciprofloxacin 8.7%, ertapenem 8.7%, imipenem 12.2%, meropenem 12.2%, and trimethoprim-sulfamethoxazole 19.2%. Resistance rates for Proteus spp. were: meropenem 24.3%, imipenem 26.2%, amikacin 20.2%, cefepime 10.5%, ciprofloxacin and levofloxacin 33.3%, ceftazidime 31.6%, ceftriaxone 20%, gentamicin 15.2%, amoxicillin-clavulanate 26.6%, and trimethoprim-sulfamethoxazole 26.2%. Resistance rates of P. aeruginosa were as follows: amikacin 32%, gentamicin 42%, imipenem 43%, meropenem 43%, ciprofloxacin 50%, levofloxacin 52%, cefepime 38%, ceftazidime 63%, and piperacillin/tazobactam 85%; for Acinetobacter baumannii: amikacin 53.3%, gentamicin 56.6%, imipenem 83%, meropenem 86%, ciprofloxacin 100%, ceftazidime 100%, piperacillin/tazobactam 85%, and colistin 0%; and for S. maltophilia: levofloxacin 66.6% and trimethoprim/sulfamethoxazole 0%. Conclusions: This study showed that resistance in Gram-negative rods is a serious clinical problem in our hospital and suggests the need to perform identification of the isolated bacteria with susceptibility testing regularly as part of routine laboratory procedures. Such monitoring reliably guides empirical antibiotic treatment choices, given that each hospital shows a different resistance profile.
Keywords: antibiotic resistance, gram negative rods, ESBL, VITEK 2
Procedia PDF Downloads 330
373 Effect of Non-Thermal Plasma, Chitosan and Polymyxin B on Quorum Sensing Activity and Biofilm of Pseudomonas aeruginosa
Authors: Alena Cejkova, Martina Paldrychova, Jana Michailidu, Olga Matatkova, Jan Masak
Abstract:
The increasing resistance of pathogenic microorganisms to many antibiotics is a serious threat to the treatment of infectious diseases and the cleaning of medical instruments. It should be added that the resistance of microbial populations growing in biofilms is often up to 1,000 times higher compared to planktonic cells. Biofilm formation in a number of microorganisms is largely influenced by the quorum sensing regulatory mechanism. Finding external factors, such as natural substances or physical processes, that can interfere effectively with quorum sensing signal molecules should reduce the ability of the cell population to form biofilm and increase the effectiveness of antibiotics. The present work is devoted to the effect of chitosan, as a representative of natural substances with anti-biofilm activity, and non-thermal plasma (NTP), alone or in combination with polymyxin B, on biofilm formation by Pseudomonas aeruginosa. Particular attention was paid to the influence of these agents on the level of quorum sensing signal molecules (acyl-homoserine lactones) during planktonic and biofilm cultivations. Opportunistic pathogenic strains of Pseudomonas aeruginosa (DBM 3081, DBM 3777, ATCC 10145, ATCC 15442) were used as model microorganisms. Cultivations of planktonic and biofilm populations in 96-well microtiter plates on a horizontal shaker were used for determination of the antibiotic and anti-biofilm activity of chitosan and polymyxin B. Biofilm-growing cells on titanium alloy, which is used for the preparation of joint replacements, were exposed to non-thermal plasma generated by a cometary corona with a metallic grid for 15 and 30 minutes. Cultivation then continued in fresh LB medium, with or without chitosan or polymyxin B, for the next 24 h. Biofilms were quantified by the crystal violet assay. 
The metabolic activity of the cells in biofilm was measured using the MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyl tetrazolium bromide) colorimetric test, based on the reduction of MTT into formazan by the dehydrogenase system of living cells. The activity of N-acyl homoserine lactones (AHLs), compounds involved in the regulation of biofilm formation, was determined using an Agrobacterium tumefaciens strain harboring a traG::lacZ/traR reporter gene responsive to AHLs. The experiments showed that both chitosan and non-thermal plasma reduce the AHL level and thus biofilm formation and stability. The effectiveness of both agents was somewhat strain dependent. During the eradication of P. aeruginosa DBM 3081 biofilm on titanium alloy induced by chitosan (45 mg/l), there was an 80% decrease in AHLs. Applying chitosan or NTP alone to the P. aeruginosa DBM 3777 biofilm did not cause a significant decrease in AHLs; however, the combination of both (chitosan 55 mg/l and NTP for 30 min) resulted in a 70% decrease in AHLs. The combined application of NTP and polymyxin B made it possible to reduce the antibiotic concentration needed to achieve the same level of AHL inhibition in P. aeruginosa ATCC 15442. The results showed that non-thermal plasma and chitosan have considerable potential for the eradication of highly resistant P. aeruginosa biofilms, for example on medical instruments or joint implants.
Keywords: anti-biofilm activity, chitosan, non-thermal plasma, opportunistic pathogens
Procedia PDF Downloads 199
372 Financing the Welfare State in the United States: The Recent American Economic and Ideological Challenges
Authors: Rafat Fazeli, Reza Fazeli
Abstract:
This paper focuses on the study of the welfare state and social wage in the leading liberal economy of the United States. The welfare state acquired broad acceptance as a major socioeconomic achievement of liberal democracy in the Western industrialized countries during the postwar boom period. The modern and modified vision of capitalist democracy offered, on the one hand, the possibility of a high growth rate and, on the other hand, the possibility of continued progression of a comprehensive system of social support for a wider population. The economic crises of the 1970s provided the ground for a great shift in economic policy and ideology in several Western countries, most notably the United States and the United Kingdom (and to a lesser extent Canada under Prime Minister Brian Mulroney). In the 1980s, the free-market-oriented reforms undertaken under Reagan and Thatcher greatly affected the economic outlook not only of the United States and the United Kingdom but of the whole Western world. The movement behind this shift in policy is often called neo-conservatism. The neoconservatives blamed the transfer programs for the decline in economic performance during the 1970s and argued that cuts in spending were required to return to the golden age of full employment. The agenda of both the Reagan and Thatcher administrations was rolling back the welfare state, and their budgets included a wide range of cuts to social programs. The question is how successful Reagan’s and Thatcher’s retrenchment efforts were. The paper involves an empirical study concerning the distributive role of the welfare state in the two countries. Other studies have often concentrated on the redistributive effect of fiscal policy on different income brackets. This study examines the net benefit/burden position of the working population with respect to state expenditures and taxes in the postwar period. 
This measurement will enable us to find out whether the working population has received a net gain (or net social wage). This study will discuss how the expansion of social expenditures and the trend of the ‘net social wage’ can be linked to distinct forms of economic and social organization. This study provides an empirical foundation for analyzing the growing significance of the ‘social wage’, or the collectivization of consumption, and the share of social or collective consumption in the total consumption of the working population in recent decades. The paper addresses three other major questions. The first question is whether the expansion of social expenditures has posed any drag on capital accumulation and economic growth. The findings of this study provide an analytical foundation for evaluating the neoconservative claim that the welfare state is itself the source of the economic stagnation that leads to the crisis of the welfare state. The second question is whether the increasing ideological challenges from the right and the competitive pressures of globalization have led to retrenchment of the American welfare state in recent decades. The third question is how social policies have performed in the presence of rising inequalities in recent decades.
Keywords: the welfare state, social wage, the United States, limits to growth
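The net social wage accounting described above (benefits received by the working population minus the taxes they pay) can be sketched as follows; all figures are hypothetical placeholders, not data from the study.

```python
# Sketch of net social wage accounting: benefits received minus taxes paid.
# The yearly figures below are hypothetical placeholders, not study data.
def net_social_wage(benefits, taxes):
    """Net gain (positive) or net burden (negative) of the working population."""
    return benefits - taxes

years = {
    # year: (social expenditures received, taxes paid), in billions (hypothetical)
    1970: (180.0, 210.0),
    1990: (520.0, 500.0),
}
for year, (benefits, taxes) in years.items():
    nsw = net_social_wage(benefits, taxes)
    print(f"{year}: net social wage = {nsw:+.0f} bn")
```

A positive value indicates a net gain for the working population; a negative value indicates a net burden.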
Procedia PDF Downloads 208
371 Assessing P0.1 and Occlusion Pressures in Brain-Injured Patients on Pressure Support Ventilation: A Study Protocol
Authors: S. B. R. Slagmulder
Abstract:
Monitoring inspiratory effort and dynamic lung stress in patients on pressure support ventilation in the ICU is important for protecting against patient self-inflicted lung injury (P-SILI) and diaphragm dysfunction. Strategies that address the detrimental effects of respiratory drive and effort can lead to improved patient outcomes. Two non-invasive estimation methods, occlusion pressure (Pocc) and P0.1, have been proposed for achieving lung- and diaphragm-protective ventilation. However, their relationship and interpretation in neuro-ICU patients are not well understood. P0.1 is the airway pressure measured during a 100-millisecond occlusion of the inspiratory port. It reflects the neural drive from the respiratory centers to the diaphragm and respiratory muscles, indicating the patient's respiratory drive during the initiation of each breath. Occlusion pressure, measured during a brief inspiratory pause against a closed airway, provides information about the inspiratory muscles' strength and the system's total resistance and compliance. Research Objective: Understanding the relationship between Pocc and P0.1 in brain-injured patients can provide insights into the interpretation of these values during pressure support ventilation. This knowledge can contribute to determining extubation readiness and optimizing ventilation strategies to improve patient outcomes. The central goal is to assess a study protocol for determining the relationship between Pocc and P0.1 in brain-injured patients on pressure support ventilation and their ability to predict successful extubation. Additionally, comparing these values between brain-damaged and non-brain-damaged patients may provide valuable insights. Key Areas of Inquiry: 1. How do Pocc and P0.1 values correlate within brain injury patients undergoing pressure support ventilation? 2. To what extent can Pocc and P0.1 values serve as predictive indicators for successful extubation in patients with brain injuries? 3.
What differentiates the Pocc and P0.1 values between patients with brain injuries and those without? Methodology: P0.1 and occlusion pressures are standard measurements for pressure support ventilation patients, taken by attending doctors as per protocol. We utilize electronic patient records for existing data. An unpaired t-test will be conducted to compare P0.1 and Pocc values between the two study groups. Associations between P0.1 and Pocc and other study variables, such as extubation, will be explored with simple regression and correlation analysis. Depending on how the data evolve, subgroup analysis will be performed for patients with and without extubation failure. Results: While it is anticipated that neuro patients may exhibit a high respiratory drive, the link between such elevation, quantified by P0.1, and successful extubation remains unknown. The analysis will focus on determining the ability of these values to predict successful extubation and their potential impact on ventilation strategies. Conclusion: Further research is pending to fully understand the potential of these indices and their impact on mechanical ventilation in different patient populations and clinical scenarios. Understanding these relationships can aid in determining extubation readiness and tailoring ventilation strategies to improve patient outcomes in this specific patient population. Additionally, it is vital to account for the influence of sedatives, neurological scores, and BMI on respiratory drive and occlusion pressures to ensure a comprehensive analysis.
Keywords: brain damage, diaphragm dysfunction, occlusion pressure, P0.1, respiratory drive
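The planned group comparison and correlation analysis can be sketched with synthetic data; a Welch (unequal-variance) form of the unpaired t-test is used here, and the linear P0.1-Pocc link is an assumption for illustration only, not patient data.

```python
import numpy as np

# Sketch of the planned analyses: an unpaired (Welch) t-test comparing P0.1
# between groups, and a correlation between P0.1 and Pocc within a group.
# All values are synthetic placeholders, not patient data.
rng = np.random.default_rng(0)
p01_brain = rng.normal(2.5, 0.8, 30)     # P0.1 [cmH2O], brain-injured group
p01_other = rng.normal(1.8, 0.6, 30)     # P0.1 [cmH2O], non-brain-injured group
pocc_brain = 4.0 * p01_brain + rng.normal(0.0, 1.0, 30)  # assumed linear link

def welch_t(a, b):
    """Unpaired t statistic without assuming equal variances."""
    va, vb = a.var(ddof=1) / a.size, b.var(ddof=1) / b.size
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

t_stat = welch_t(p01_brain, p01_other)
r = np.corrcoef(p01_brain, pocc_brain)[0, 1]   # Pearson correlation
print(f"t = {t_stat:.2f}, r = {r:.2f}")
```

The same comparison could then be repeated within extubation-failure subgroups as the protocol describes.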
Procedia PDF Downloads 67
370 Ultrasound Disintegration as a Potential Method for the Pre-Treatment of Virginia Fanpetals (Sida hermaphrodita) Biomass before Methane Fermentation Process
Authors: Marcin Dębowski, Marcin Zieliński, Mirosław Krzemieniewski
Abstract:
As methane fermentation is a complex series of successive biochemical transformations, its subsequent stages are determined, to varying extents, by physical and chemical factors. A specific state of equilibrium settles in the functioning fermentation system between environmental conditions, the rate of biochemical reactions, and the products of successive transformations. Among the physical factors that influence the effectiveness of methane fermentation transformations, key significance is ascribed to temperature and the intensity of biomass agitation. Among the chemical factors, significant are the pH value, the type and availability of the culture medium (to put it simply, the C/N ratio), and the presence of toxic substances. One of the important elements influencing the effectiveness of methane fermentation is the pre-treatment of organic substrates and the mode in which the organic matter is made available to anaerobes. Of all the known and described methods for organic substrate pre-treatment before the methane fermentation process, ultrasound disintegration is one of the most interesting technologies. Interest in the ultrasound field and in installations built on existing systems results principally from the very wide and universal technological possibilities offered by the sonication process. This physical factor may induce deep physicochemical changes in ultrasonicated substrates that are highly beneficial from the viewpoint of methane fermentation processes. In this case, a special role is ascribed to the disintegration of biomass that is further subjected to methane fermentation. Once cell walls are damaged, cytoplasm and cellular enzymes are released. The released substances, either in dissolved or colloidal form, are immediately available to anaerobic bacteria for biodegradation.
To ensure the maximal release of organic matter from dead biomass cells, disintegration processes aim to achieve a particle size below 50 μm. It has been demonstrated in many research works, and in systems operating at technical scale, that immediately after substrate ultrasonication the content of organic matter (characterized by the COD, BOD5, and TOC indices) increases in the dissolved phase of the sedimentation water. This phenomenon points to the immediate sonolysis of solid substances contained in the biomass and to the release of cell material, and consequently to the intensification of the hydrolytic phase of fermentation. It results in a significant reduction of fermentation time and an increased effectiveness of production of the gaseous metabolites of anaerobic bacteria. Because disintegration of Virginia fanpetals biomass via ultrasound, applied in order to intensify its conversion, is a novel technique, it is often underestimated by operators of agri-biogas works. It has, however, many advantages that have a direct impact on its technological and economic superiority over the methods of biomass conversion applied thus far. As for now, ultrasound disintegrators for biomass conversion are not mass-produced but are built by specialized groups in scientific or R&D centers. Therefore, their quality and effectiveness are to a large extent determined by their manufacturers' knowledge and skills in the fields of acoustics and electronic engineering.
Keywords: ultrasound disintegration, biomass, methane fermentation, biogas, Virginia fanpetals
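One common way to quantify the COD release described above is a degree of disintegration relative to a full chemical disintegration reference; the alkaline (NaOH) reference method and all COD values below are assumptions for illustration, not figures from the text.

```python
# Degree of disintegration (DD) based on soluble COD release: how much of
# the chemically releasable organic matter the sonication has made available.
# The reference method and all COD values are hypothetical placeholders.
def degree_of_disintegration(cod_sonicated, cod_raw, cod_reference):
    """DD in percent relative to a full chemical disintegration reference."""
    return 100.0 * (cod_sonicated - cod_raw) / (cod_reference - cod_raw)

# hypothetical soluble-COD values [mg/L]
cod_raw = 250.0          # untreated biomass supernatant
cod_sonicated = 1450.0   # after ultrasound pre-treatment
cod_reference = 4250.0   # after full alkaline (NaOH) disintegration

dd = degree_of_disintegration(cod_sonicated, cod_raw, cod_reference)
print(f"degree of disintegration: {dd:.0f} %")
```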
Procedia PDF Downloads 365
369 Evaluation of Alternative Approaches for Additional Damping in Dynamic Calculations of Railway Bridges under High-Speed Traffic
Authors: Lara Bettinelli, Bernhard Glatz, Josef Fink
Abstract:
Planning engineers and researchers use various calculation models with different levels of complexity, calculation efficiency, and accuracy in dynamic calculations of railway bridges under high-speed traffic. When choosing a vehicle model to depict the dynamic loading on the bridge structure caused by passing high-speed trains, different goals are pursued: on the one hand, the selected vehicle models should allow the calculation of a bridge's vibrations as realistically as possible; on the other hand, the computational efficiency and manageability of the models should preferably be high to enable a wide range of applications. The commonly adopted and straightforward vehicle model is the moving load model (MLM), which simplifies the train to a sequence of static axle loads moving at a constant speed over the structure. However, the MLM can significantly overestimate the structure's vibrations, especially when resonance events occur. More complex vehicle models, which depict the train as a system of oscillating and coupled masses, can reproduce the interaction dynamics between the vehicle and the bridge superstructure to some extent and enable the calculation of more realistic bridge accelerations. At the same time, such multi-body models require significantly greater processing capacity and precise knowledge of various vehicle properties. The European standards allow for applying the so-called additional damping method when simple load models, such as the MLM, are used in dynamic calculations. An additional damping factor depending on the bridge span, which is intended to account for the vibration-reducing benefits of the vehicle-bridge interaction, is assigned to the supporting structure in the calculations.
However, numerous studies show that when the current standard specifications are applied, the calculated bridge accelerations are in many cases still too high compared to the measured ones, while in other cases they are not on the safe side. A proposal to calculate the additional damping based on extensive dynamic calculations for a parametric field of simply supported bridges with a ballasted track was developed to address this issue. In this contribution, several different approaches to determining the additional damping of the supporting structure, considering the vehicle-bridge interaction when using the MLM, are compared with one another. Besides the standard specifications, this includes the approach mentioned above and two recently published alternative formulations derived from analytical approaches. For a catalogue of 65 existing bridges in Austria in steel, concrete, or composite construction, calculations are carried out with the MLM for two different high-speed trains and the different approaches for additional damping. The results are compared with the calculation results obtained by applying a more sophisticated multi-body model of the trains used. The evaluation and comparison of the results allow assessing the benefits of the different calculation concepts for the additional damping regarding their accuracy and possible applications. The evaluation shows that by applying one of the recently published redesigned additional damping methods, the calculation results can reflect the influence of the vehicle-bridge interaction on the design-relevant structural accelerations considerably more reliably than by using the normative specifications.
Keywords: additional damping method, bridge dynamics, high-speed railway traffic, vehicle-bridge interaction
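As a minimal sketch of the MLM described above, the following integrates the first-mode response of a simply supported beam crossed by a sequence of constant axle loads; the bridge and train parameters are illustrative and are not taken from the paper's catalogue.

```python
import numpy as np

# Moving load model (MLM): first-mode response of a simply supported beam
# crossed by constant axle loads at speed v. Illustrative parameters only.
L = 20.0          # span [m]
EI = 5.0e10       # bending stiffness [N m^2]
m = 20000.0       # mass per unit length [kg/m]
zeta = 0.01       # modal damping ratio [-]
v = 250 / 3.6     # train speed [m/s]
axles = np.arange(8) * 18.0   # axle positions behind the first axle [m]
P = 170e3         # axle load [N]

omega = (np.pi / L) ** 2 * np.sqrt(EI / m)  # first natural circular frequency
Mmod = m * L / 2                            # modal mass for phi(x) = sin(pi x/L)

def modal_force(t):
    """Sum of the axle loads projected onto the first mode shape."""
    x = v * t - axles            # position of each axle along the beam
    on = (x >= 0) & (x <= L)     # keep only axles currently on the span
    return P * np.sin(np.pi * x[on] / L).sum()

# Central-difference stepping of q'' + 2 zeta omega q' + omega^2 q = F / Mmod
dt = 1e-4
T = (L + axles.max()) / v + 2.0  # crossing time plus free vibration
n = int(T / dt)
q = np.zeros(n)
for i in range(1, n - 1):
    F = modal_force(i * dt)
    q[i + 1] = (dt**2 * (F / Mmod - omega**2 * q[i])
                + 2 * q[i] - q[i - 1]
                + zeta * omega * dt * q[i - 1]) / (1 + zeta * omega * dt)

acc = np.gradient(np.gradient(q, dt), dt)   # midspan acceleration (phi = 1)
print(f"max midspan acceleration: {np.abs(acc).max():.2f} m/s^2")
```

Running such a sweep over speeds and spans is essentially how resonance peaks, and hence the effect of any additional damping value, are assessed with the MLM.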
Procedia PDF Downloads 160
368 Impact of Climate Change on Crop Production: Climate Resilient Agriculture Is the Need of the Hour
Authors: Deepak Loura
Abstract:
Climate change, considered one of the major environmental problems of the 21st century, is a lasting change in the statistical distribution of weather patterns over periods ranging from decades to millions of years. Agriculture and climate change are internally correlated in various aspects, and the threat of a varying global climate has greatly drawn the attention of scientists, as these variations impart a negative impact on global crop production and compromise food security worldwide. The fast pace of development and industrialization and the indiscriminate destruction of the natural environment, more so in the last century, have altered the concentration of atmospheric gases that lead to global warming. Carbon dioxide (CO₂), methane (CH₄), and nitrous oxide (N₂O) are important biogenic greenhouse gases (GHGs) from the agricultural sector contributing to global warming, and their concentrations are increasing alarmingly. Agricultural productivity can be affected by climate change in two ways: first, directly, by affecting plant growth, development, and yield due to changes in rainfall/precipitation and temperature and/or CO₂ levels; and second, indirectly, through a considerable impact on agricultural land use due to snow melt, availability of irrigation, frequency and intensity of inter- and intra-seasonal droughts and floods, soil organic matter transformations, soil erosion, changes in the distribution and frequency of infestation by insect pests, diseases, or weeds, the decline in arable areas (due to the submergence of coastal lands), and availability of energy. An increase in atmospheric CO₂ promotes the growth and productivity of C3 plants. On the other hand, an increase in temperature can reduce crop duration, increase crop respiration rates, affect the equilibrium between crops and pests, hasten nutrient mineralization in soils, decrease fertilizer-use efficiencies, and increase evapotranspiration, among other effects.
All of these could considerably affect crop yields in the long run. Climate resilient agriculture, consisting of adaptation, mitigation, and other agricultural practices, can potentially enhance the capacity of the system to withstand climate-related disturbances by resisting damage and recovering quickly. Climate resilient agriculture turns the climate change threats that have to be tackled into new business opportunities for the sector in different regions and therefore provides a triple win: mitigation, adaptation, and economic growth. Improving the soil organic carbon stock is integral to any strategy for adapting to and mitigating abrupt climate change, advancing food security, and improving the environment. Soil carbon sequestration is one of the major mitigation strategies for achieving climate-resilient agriculture. Climate-smart agriculture is the only way to lower the negative impact of climate variations on crop adaptation before it affects global crop production drastically. To cope with these extreme changes, future development needs to make adjustments in technology, management practices, and legislation. Adaptation and mitigation are twin approaches to bringing resilience to climate change in agriculture.
Keywords: climate change, global warming, crop production, climate resilient agriculture
Procedia PDF Downloads 72
367 Modeling of Hot Casting Technology of Beryllium Oxide Ceramics with Ultrasonic Activation
Authors: Zamira Sattinova, Tassybek Bekenov
Abstract:
The article is devoted to modeling the technology of hot casting of beryllium oxide ceramics. The stages of ultrasonic activation of the beryllium oxide slurry in the plant vessel to improve its rheological properties, and of hot casting in the moulding cavity with cooling and solidification of the casting, are described. The thermoplastic slurry (hereinafter referred to as the slurry) shows the rheology of a non-Newtonian fluid with a yield stress and plastic viscosity. Cooling and solidification of the slurry in the forming cavity begin in the liquid state and proceed through crystallization to the solid state. This work presents a method for calculating the hot casting of the slurry using the effective molecular viscosity of a viscoplastic fluid. It is shown that the slurry near the cooled wall is in a state of crystallization and plasticity, while the rest may still be in the liquid phase. A nonuniform distribution of temperature, density, and concentration of kinetically free binder takes place along the cavity section. This leads to compensation of shrinkage by the influx of slurry from the liquid zone into the crystallization and plasticity zones of the casting. In the plasticity zone, the shrinkage, determined by the concentration of kinetically free binder, is compensated under the action of the pressure gradient. The solidification mechanism, as well as the mechanical behavior of the casting mass during casting and the rheological and thermophysical properties of the thermoplastic BeO slurry under ultrasound exposure, have not been well studied. Nevertheless, experimental data allow us to conclude that the effect of ultrasonic vibrations on the slurry mass leads to a change in structure, an increase in technological properties, a decrease in heterogeneity, and a change in rheological properties.
In the course of experiments, the effect of ultrasonic treatment and its duration on the change in viscosity and ultimate shear stress of the slurry has been studied as a function of temperature (55-75℃) and the mass fraction of the binder (10-11.7%). At the same time, changes in these properties before and after ultrasound exposure have been analyzed, as well as the nature of the flow in the system under study. Experience operating the unit with ultrasonic treatment has shown that the casting capacity of the slurry increases by an average of 15%, while the viscosity decreases by more than half. Experimental study of the physicochemical properties and phase change, with simultaneous consideration of all factors affecting product quality in the process of continuous casting, is labor-intensive. Therefore, an effective way to control the physical processes occurring in the formation of articles with predetermined properties and shapes is to simulate the process and determine its basic characteristics. The results of the calculations cover the whole stage of hot casting of the beryllium oxide slurry, taking into account the change in its state of aggregation. Ultrasonic treatment improves the rheological properties and increases the fluidity of the slurry in the forming cavity. The calculations show the influence of velocity, temperature factors, and the structural data of the cavity on the cooling-solidification process of the casting. In the calculations, conditions have been found for molding with shrinkage of the slurry by hot casting, which makes it possible to obtain a solidifying product with a uniform beryllium oxide structure at the outlet of the cavity.
Keywords: hot casting, thermoplastic slurry molding, shrinkage, beryllium oxide
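The effective-viscosity treatment of a viscoplastic slurry mentioned above can be sketched with a regularized Bingham law; the Papanastasiou regularization and all parameter values below are assumptions for illustration, not measured BeO slurry data.

```python
import numpy as np

# Effective viscosity of a viscoplastic (Bingham) fluid via the Papanastasiou
# regularization, one common way to implement an "effective viscosity"
# approach. All parameter values are illustrative placeholders.
tau0 = 30.0     # yield stress [Pa]
mu_p = 2.0      # plastic viscosity [Pa s]
m_reg = 1000.0  # regularization exponent [s]

def eta_eff(gamma_dot):
    """Effective viscosity eta(gamma_dot) of a regularized Bingham fluid."""
    g = np.asarray(gamma_dot, dtype=float)
    return mu_p + tau0 * (1.0 - np.exp(-m_reg * g)) / np.maximum(g, 1e-12)

shear_rates = np.array([0.01, 0.1, 1.0, 10.0, 100.0])  # [1/s]
for g, eta in zip(shear_rates, eta_eff(shear_rates)):
    print(f"gamma_dot = {g:7.2f} 1/s  ->  eta_eff = {eta:9.2f} Pa s")
```

The very high viscosity at low shear rates mimics the unyielded (plastic) zones near the cooled wall, while the flowing core sees a viscosity close to the plastic viscosity.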
Procedia PDF Downloads 22
366 Improving the Uptake of Community-Based Multidrug-Resistant Tuberculosis Treatment Model in Nigeria
Authors: A. Abubakar, A. Parsa, S. Walker
Abstract:
Despite advances made in the diagnosis and management of drug-sensitive tuberculosis (TB) over the past decades, treatment of multidrug-resistant tuberculosis (MDR-TB) remains challenging and complex, particularly in high-burden countries including Nigeria. Treatment of MDR-TB is cost-prohibitive, with a success rate generally lower than that of drug-sensitive TB, and if care is not taken it may become the dominant form of TB in the future, with many treatment uncertainties and substantial morbidity and mortality. Addressing these challenges requires collaborative efforts through sustained research to evaluate the current treatment guidelines, particularly in high-burden countries, and to prevent the progression of resistance. To the best of our knowledge, there has been no research exploring the acceptability, effectiveness, and cost-effectiveness of a community-based MDR-TB treatment model in Nigeria, which is among the high-burden countries. The closest previous qualitative study examined the home-based management of MDR-TB in rural Uganda. This research aimed to explore patients' views and acceptability of a community-based MDR-TB treatment model and to evaluate and compare the effectiveness and cost-effectiveness of community-based versus hospital-based MDR-TB treatment models of care from the Nigerian perspective. Knowledge of patients' views and acceptability of the community-based MDR-TB treatment approach would help in designing future treatment recommendations and in health policymaking. Likewise, knowledge of effectiveness and cost-effectiveness is part of the evidence needed to inform a decision about whether and how to scale up MDR-TB treatment, particularly in a poor-resource setting with limited knowledge of TB. Mixed methods using qualitative and quantitative approaches were employed. Qualitative data were obtained using in-depth semi-structured interviews with 21 MDR-TB patients in Nigeria to explore their views and acceptability of the community-based MDR-TB treatment model.
Qualitative data collection followed an iterative process which allowed adaptation of topic guides until data saturation. The in-depth interviews were analyzed using thematic analysis. Quantitative data on treatment outcomes were obtained from the medical records of MDR-TB patients to determine effectiveness; direct and indirect costs were obtained from the patients using a validated questionnaire, and health system costs from the donor agencies, to determine the cost-effectiveness difference between the community and hospital-based models from the Nigerian perspective. Findings: Several themes emerged from the patients' perspectives, indicating preference for and high acceptability of the community-based MDR-TB treatment model, alongside mixed feelings about the risk of MDR-TB transmission within the community due to poor infection control. The modeling of the quantitative data is still in progress. Community-based MDR-TB care was seen as the acceptable and most preferred model of care by the majority of the participants because of its convenience, which in turn enhanced recovery, enabled social interaction, offered more psychosocial benefits, and averted productivity loss. However, there is a need to strengthen this model of care through enhanced strategies that ensure guideline compliance and infection control in order to prevent the progression of resistance and curtail community transmission.
Keywords: acceptability, cost-effectiveness, multidrug-resistant TB treatment, community and hospital approach
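The cost-effectiveness comparison described above typically reduces to an incremental cost-effectiveness ratio (ICER); the sketch below uses placeholder figures, not results from this study.

```python
# Incremental cost-effectiveness ratio (ICER) for comparing the
# community-based and hospital-based MDR-TB models. The figures below
# are hypothetical placeholders, not results from this study.
def icer(cost_a, effect_a, cost_b, effect_b):
    """ICER of strategy A versus strategy B (cost per extra unit of effect)."""
    return (cost_a - cost_b) / (effect_a - effect_b)

# hypothetical per-patient costs (USD) and effects (treatment successes per patient)
community = {"cost": 2000.0, "success": 0.80}
hospital = {"cost": 5000.0, "success": 0.75}

value = icer(community["cost"], community["success"],
             hospital["cost"], hospital["success"])
print(f"ICER (community vs hospital): {value:.0f} USD per additional success")
```

A negative ICER arising from lower cost and higher effect, as in this placeholder example, would mean the community-based model dominates the hospital-based one.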
Procedia PDF Downloads 121
365 A Comparison Between Different Discretization Techniques for the Doyle-Fuller-Newman Li+ Battery Model
Authors: Davide Gotti, Milan Prodanovic, Sergio Pinilla, David Muñoz-Torrero
Abstract:
Since its proposal, the Doyle-Fuller-Newman (DFN) lithium-ion battery model has gained popularity in the electrochemical field. This model provides the user with theoretical support for designing lithium-ion battery parameters, such as the material particle size or the adjustment direction of the diffusion coefficient. However, the model is mathematically complex, as it is composed of several partial differential equations (PDEs), such as Fick's law of diffusion and the MacInnes and Ohm's equations, among other phenomena. Thus, to use the model efficiently in a time-domain simulation environment, the selection of the discretization technique is of pivotal importance. There are several numerical methods available in the literature that can be used to carry out this task. In this study, a comparison between the explicit Euler, Crank-Nicolson, and Chebyshev discretization methods is proposed. These three methods are compared in terms of accuracy, stability, and computational time. Firstly, the explicit Euler discretization technique is analyzed. This method is straightforward to implement and is computationally fast. In this work, the accuracy of the method and its stability properties are shown for the electrolyte diffusion partial differential equation. Subsequently, the Crank-Nicolson method is considered. It represents a combination of the implicit and explicit Euler methods that has the advantage of being second-order accurate in time and unconditionally stable, thus overcoming the disadvantages of the simpler explicit Euler method. As shown in the full paper, the Crank-Nicolson method provides accurate results when applied to the DFN model. Its stability does not depend on the integration time step, so it is feasible for both short- and long-term tests.
This last remark is particularly important, as this discretization technique would allow the user to implement parameter estimation and optimization techniques, such as system or genetic parameter identification methods, using this model. Finally, the Chebyshev discretization technique is implemented in the DFN model. This discretization method features swift convergence properties and, like other spectral methods used to solve differential equations, achieves the same accuracy with a smaller number of discretization nodes. However, as shown in the literature, these methods are not well suited to handling sharp gradients, which are common during the first instants of the charge and discharge phases of the battery. The numerical results obtained and presented in this study aim to provide guidelines on how to select the adequate discretization technique for the DFN model according to the type of application to be performed, highlighting the pros and cons of the three methods. Specifically, the unsuitability of the simple explicit Euler method for long-term tests will be presented. Afterwards, the Crank-Nicolson and Chebyshev discretization methods will be compared in terms of accuracy and computational time under a wide range of battery operating scenarios. These include both long-term simulations for aging tests and short- and mid-term battery charge/discharge cycles, typically relevant in battery applications such as grid primary frequency and inertia control and electric vehicle braking and acceleration.
Keywords: Doyle-Fuller-Newman battery model, partial differential equations, discretization, numerical methods
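A minimal sketch of the Crank-Nicolson scheme applied to a 1D diffusion equation of the kind discussed above; the grid and parameters are illustrative, not DFN model values. The chosen time step corresponds to r = D·dt/dx² of about 26, far above the explicit Euler stability limit of 0.5, yet the scheme remains stable.

```python
import numpy as np

# Crank-Nicolson stepping of the 1D diffusion equation c_t = D c_xx with
# zero Dirichlet boundaries, validated against the exact modal decay.
# Illustrative, dimensionless parameters only.
N = 50        # interior grid nodes
Lx = 1.0      # domain length
D = 1.0       # diffusion coefficient
dx = Lx / (N + 1)
dt = 0.01     # stability does not depend on dt; dt only affects accuracy
r = D * dt / dx**2   # ~26 here, well above the explicit limit of 0.5

# tridiagonal second-difference operator (zero Dirichlet boundaries)
A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1))
I = np.eye(N)
lhs = I - 0.5 * r * A    # (I - r/2 A) c^{n+1} = (I + r/2 A) c^n
rhs = I + 0.5 * r * A

x = np.linspace(dx, Lx - dx, N)
c = np.sin(np.pi * x)    # initial profile; exact decay factor exp(-pi^2 D t)

for _ in range(100):     # advance to t = 1.0
    c = np.linalg.solve(lhs, rhs @ c)

exact = np.exp(-np.pi**2 * D * 1.0) * np.sin(np.pi * x)
err = np.abs(c - exact).max()
print(f"max error at t = 1.0: {err:.2e}")
```

Replacing `lhs`/`rhs` with `I` and `I + r*A` recovers the explicit Euler scheme, which diverges immediately at this value of r; that contrast is the stability argument summarized above.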
Procedia PDF Downloads 21
364 Forced Immigration to Turkey: The Socio-Spatial Impacts of Syrian Immigrants on Turkish Cities
Authors: Tolga Levent
Abstract:
Throughout the past few decades, forced immigration has been a significant problem for many developing countries. Turkey is one of those countries and has experienced many waves of forced immigration in the Republican era. However, the ongoing wave of forced immigration of Syrians, which started with the Syrian Civil War in 2011, is strikingly influential due to its intensity. In six years, approximately 3.4 million Syrians have entered Turkey and formed high spatial concentrations in certain cities close to the Syrian border. These concentrations make Syrians and their problems highly visible, especially in those cities. The problems of Syrians in Turkish cities touch all dimensions of daily life. Within the economic dimension, high rates of Syrian unemployment push them into informal jobs offering very low wages. The financial aid they continuously demand from public authorities triggers anti-Syrian behavior in local communities. Moreover, their relatively limited capacity for social adaptation increases integration problems within the social dimension day by day. There are even problems related to the public health dimension, such as the reappearance of certain childhood illnesses due to the insufficient vaccination of Syrian children. These problems are significant but relatively easy to prevent by using different types of management strategies and structural policies. However, there are other types of problems (urban problems) emerging with the socio-spatial impacts of Syrians on Turkish cities in a very short period of time. There are relatively few studies about these impacts since they are difficult to comprehend. The aim of the study, in this respect, is to understand these rapidly emerging impacts and the urban problems resulting from this massive immigration influx and to discuss the new qualities of urban planning needed to face them. In the first part, there is a brief historical consideration of forced immigration waves in Turkey.
These waves are important for making comparisons with the ongoing immigration wave and understanding its significance. The second part presents quantitative and qualitative analyses of the spatial existence of Syrian immigrants in the city of Mersin, as an example of the cities where Syrians are highly concentrated. By using official data from public authorities, quantitative statistical analyses are made to detect spatial concentrations of Syrians at the neighborhood level. As methods of qualitative research, observations and in-depth interviews are used to define the socio-spatial impacts of Syrians. The main results show that 'cities within cities' emerge through sharp socio-spatial segregation, which changes density surfaces, produces unforeseen land-use patterns, results in inadequacies of public services, and creates degradation and deterioration of the urban environments occupied by Syrians. All these problems are significant; however, the Turkish planning system does not have the capacity to cope with them. In the final part, there is a discussion about the new qualities of urban planning needed to face these impacts and urban problems. The main point of discussion is the possibility of resilient urban planning under the conditions of uncertainty and unpredictability fostered by the immigration crisis. Such a resilient planning approach might provide an option for countries aiming to cope with the negative socio-spatial impacts of massive immigration influxes.
Keywords: cities, forced immigration, Syrians, urban planning
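Neighborhood-level concentration of the kind analyzed above is often summarized with a location quotient; the figures below are hypothetical, not Mersin data.

```python
# Location quotient (LQ): a simple measure of neighborhood-level
# concentration of a population group. All figures are hypothetical.
def location_quotient(group_in_area, total_in_area, group_in_city, total_in_city):
    """LQ > 1 means the group is over-represented in the area."""
    return (group_in_area / total_in_area) / (group_in_city / total_in_city)

# hypothetical neighborhoods of a city hosting 100,000 Syrians out of 1,000,000
neighborhoods = {
    "A": (12000, 40000),   # (Syrian residents, total residents)
    "B": (1500, 60000),
}
for name, (grp, tot) in neighborhoods.items():
    lq = location_quotient(grp, tot, 100000, 1000000)
    print(f"neighborhood {name}: LQ = {lq:.2f}")
```

Mapping LQ values across neighborhoods is one straightforward way to visualize the sharp segregation described in the results.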
Procedia PDF Downloads 255
363 Soybean Lecithin Based Reverse Micellar Extraction of Pectinase from Synthetic Solution
Authors: Sivananth Murugesan, I. Regupathi, B. Vishwas Prabhu, Ankit Devatwal, Vishnu Sivan Pillai
Abstract:
Pectinase is an important enzyme with a wide range of applications, including textile processing and bioscouring of cotton fibers, coffee and tea fermentation, purification of plant viruses, and oil extraction. Selective separation and purification of pectinase from fermentation broth, and recovery of the enzyme from the process stream for reuse, are costly steps in most enzyme-based industries. It is difficult to identify a suitable medium that enhances enzyme activity while retaining the enzyme's characteristics during such processes. The cost-effective, selective separation of enzymes through modified liquid-liquid extraction is of current research interest worldwide. Reverse micellar extraction, a widely studied liquid-liquid extraction technique, is well known for the separation and purification of solutes from the feed; it offers high solute specificity and partitioning, ease of operation, and recycling of the extractants used. Surfactants added to an apolar solvent at concentrations above the critical micelle concentration form micelles, and the addition of water to the micellar phase in turn forms reverse micelles, or water-in-oil emulsions. Electrostatic interaction plays a major role in the separation/purification of solutes using reverse micelles, and the interaction parameters can be altered with a change in pH or the addition of cosolvents, surfactants, and electrolytes or non-electrolytes. Even though many chemical-based commercial surfactants have been utilized for this purpose, biosurfactants are more suitable for the purification of enzymes used in food applications. The present work focused on the partitioning of pectinase from a synthetic aqueous solution into the reverse micelle phase formed by a biosurfactant, soybean lecithin, dissolved in chloroform. The critical micelle concentration of the soybean lecithin/chloroform solution was identified through refractive index and density measurements.
The effect of surfactant concentrations above and below the critical micelle concentration on enzyme activity and on enzyme partitioning within the reverse micelle phase was studied. The effect of pH and electrolyte salts on the partitioning behavior was studied by varying the system pH and the concentration of different salts during the forward and back extraction steps. It was observed that lower concentrations of soybean lecithin enhanced the enzyme activity within the water core of the reverse micelle while maximizing extraction efficiency. A maximum pectinase yield of 85% with a partitioning coefficient of 5.7 was achieved at pH 4.8 during forward extraction, and an 88% yield with a partitioning coefficient of 7.1 was observed during back extraction at pH 5.0. However, addition of salt decreased the enzyme activity, and at higher salt concentrations enzyme activity declined drastically during both the forward and back extraction steps. The results proved that reverse micelles formed by soybean lecithin and chloroform may be used for the extraction of pectinase from aqueous solution. Further, the reverse micelles can be considered nanoreactors that enhance enzyme activity and maximize substrate utilization at optimized conditions, paving the way to process intensification and scale-down.
Keywords: pectinase, reverse micelles, soybean lecithin, selective partitioning
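The partitioning figures above follow from simple mass-balance arithmetic on measured enzyme concentrations. A minimal sketch, assuming the partition coefficient is defined as the ratio of enzyme concentration in the reverse micelle phase to that remaining in the aqueous phase, and yield as the fraction of the initially charged enzyme recovered in the micellar phase (function names and phase volumes are illustrative, not from the paper):

```python
def partition_coefficient(c_micellar, c_aqueous):
    """Ratio of enzyme concentration in the reverse micelle phase
    to that remaining in the aqueous phase after extraction."""
    return c_micellar / c_aqueous

def extraction_yield(c_micellar, v_micellar, c_initial, v_aqueous):
    """Percent of the initially charged enzyme recovered in the
    micellar phase (amounts = concentration x phase volume)."""
    return 100.0 * (c_micellar * v_micellar) / (c_initial * v_aqueous)

# Illustrative numbers only: with equal phase volumes, transferring
# 85% of the enzyme leaves 15% behind, so K = 85/15 ≈ 5.7 -- consistent
# with the forward-extraction figures reported above.
k = partition_coefficient(85.0, 15.0)
y = extraction_yield(85.0, 1.0, 100.0, 1.0)
print(round(k, 1), y)  # 5.7 85.0
```

Note that an 85% yield at equal phase volumes is exactly what a partition coefficient near 5.7 predicts, which is why the two reported numbers are mutually consistent.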
Procedia PDF Downloads 371
362 A Flexible Piezoelectric - Polymer Composite for Non-Invasive Detection of Multiple Vital Signs of Human
Authors: Sarah Pasala, Elizabeth Zacharias
Abstract:
Vital sign monitoring is crucial for both everyday health tracking and medical diagnosis. A significant factor in assessing a person's health is their vital signs, which include heart rate, breathing rate, blood pressure, and electrocardiogram (ECG) readings. Vital sign monitoring has been the focus of many system and method innovations recently. Piezoelectrics are materials that convert mechanical energy into electrical energy and can be used for vital sign monitoring. Piezoelectric energy harvesters that are stretchable and flexible can detect very low-frequency signals such as airflow and heartbeat. Recent advancements in piezoelectric materials and flexible sensors have made it possible to create wearable and implantable medical devices that can continuously monitor physiological signals in humans. However, because of their non-biocompatible nature, such devices also produce a large amount of e-waste and require a second surgery to remove the implant. This paper presents a biocompatible and flexible piezoelectric composite material for wearable and implantable devices that offers a high-performance platform for seamless and continuous monitoring of human physiological signals and tactile stimuli, while also addressing the issues of e-waste and secondary surgery. A lead-free piezoelectric, SrBi4Ti4O15 (SBT), is found to be suitable for this application because its properties can be tailored by suitable substitutions and by varying the synthesis temperature protocols. In the present work, rare-earth-modified SrBi4Ti4O15 has been synthesized and studied. Coupling factors are calculated from the resonant (fr) and anti-resonant (fa) frequencies. It is observed that samarium substitution in SBT has increased the Curie temperature and the dielectric and piezoelectric properties. From impedance spectroscopy studies, relaxation and non-Debye type behaviour are observed.
The composite of bioresorbable poly(l-lactide) and the lead-free rare-earth-modified bismuth layered ferroelectric yields a flexible piezoelectric device for non-invasive measurement of vital signs, such as heart rate, breathing rate, blood pressure, and ECG readings, as well as artery pulse signals in near-surface arteries. These composites are suitable for detecting slight movements of the muscles and joints. This lead-free rare-earth-modified bismuth layered ferroelectric – polymer composite is synthesized using a ball mill and the solid-state double sintering method. XRD studies indicated the two phases in the composite. SEM studies revealed the grain size to be uniform and in the range of 100 nm. The electromechanical coupling factor is improved. The elastic constants are calculated, and the mechanical flexibility is found to be improved compared to the single-phase rare-earth-modified bismuth layered piezoelectric. The results indicate that this composite is suitable for the non-invasive detection of multiple vital signs of humans.
Keywords: composites, flexible, non-invasive, piezoelectric
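The abstract states that coupling factors are calculated from the resonant (fr) and anti-resonant (fa) frequencies but does not give the expression used; a common choice for resonators is the effective coupling coefficient, k_eff² = (fa² − fr²)/fa². A minimal sketch under that assumption (the frequencies below are illustrative, not measured values from the paper):

```python
import math

def effective_coupling(fr_hz, fa_hz):
    """Effective electromechanical coupling factor k_eff computed
    from the resonant (fr) and anti-resonant (fa) frequencies:
    k_eff^2 = (fa^2 - fr^2) / fa^2."""
    return math.sqrt((fa_hz**2 - fr_hz**2) / fa_hz**2)

# Illustrative frequencies only: a 5% fr-fa split gives k_eff ~ 0.3.
k = effective_coupling(2.00e6, 2.10e6)
print(round(k, 3))  # 0.305
```

A larger separation between fr and fa indicates stronger electromechanical coupling, which is why a widened fr-fa split in the composite signals the improved coupling factor reported above.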
Procedia PDF Downloads 37
361 Cuban's Supply Chains Development Model: Qualitative and Quantitative Impact on Final Consumers
Authors: Teresita Lopez Joy, Jose A. Acevedo Suarez, Martha I. Gomez Acosta, Ana Julia Acevedo Urquiaga
Abstract:
Current trends in business competitiveness indicate the need to manage businesses as supply chains and not in isolation. The use of strategies aimed at the maximum satisfaction of customers in a network, based on inter-company cooperation, contributes to obtaining successful joint results. In the Cuban economic context, the development of productive linkages to achieve integrated management of supply chains is considered a key aspect. In order to achieve this leap, it is necessary to develop operating capabilities in the entities that make up the chains through a systematic procedure that leads to a management model in consonance with the environment. The objective of the research is to design a model and procedure for the development of integrated management of supply chains in economic entities. The results obtained are the Model and the Procedure for the Development of Supply Chains Integrated Management (MP-SCIM). The Model is based on the development of logistics in the network actors, joint work between companies, collaborative planning, and the monitoring of a main indicator oriented to the end customers. The application Procedure starts from a well-founded need for development in a supply chain and focuses on training entrepreneurs as doers. Characterization and diagnosis are carried out in order to later define the design of the network and the relationships between the companies. Feedback is used as a method for updating the conditions and refocusing the objectives according to the final customers. The MP-SCIM is the result of systematic work with a supply chain approach in companies that have consolidated as coordinators of their networks. The cases of the edible oil chain and the explosives-for-construction chain reflect the most remarkable advances, since they have applied this approach for more than 5 years and maintain it as a general strategy of successful development.
The edible oil trading company experienced a jump in sales. In 2006, the company started the analysis in order to define the supply chain, apply diagnostic techniques, define problems, and implement solutions. The involvement of management and the progressive formation of performance capacities in the personnel allowed the application of tools appropriate to the context. The company that coordinates the explosives chain for the construction sector shows adequate training, acting with independence and timeliness in the face of different situations and variations in its business environment. The appropriation of tools and techniques for the analysis and implementation of proposals is a characteristic feature of this case. The coordinating entity applies integrated supply chain management to its decisions based on the timely training of the action capabilities necessary for each situation. Other case studies and applications that validate these tools are also detailed in this paper, and they highlight the results of generalization in quantitative and qualitative improvement according to the final clients. These cases are: teaching literature in universities, agricultural products of local scope, and medicine supply chains.
Keywords: integrated management, logistic system, supply chain management, tactical-operative planning
Procedia PDF Downloads 152
360 An Odyssey to Sustainability: The Urban Archipelago of India
Authors: B. Sudhakara Reddy
Abstract:
This study provides a snapshot of the sustainability of selected Indian cities by employing 70 indicators in four dimensions to develop an overall city sustainability index. In recent years, the concept of ‘urban sustainability’ has become prominent due to its complexity. Urban areas propel growth but at the same time pose many ecological, social, and infrastructural problems and risks. In developing countries, high population density and continuous in-migration create the highest risk from natural and man-made disasters. These issues, combined with the inability of policy makers to provide basic services, make cities unsustainable. To assess whether any given policy is moving towards or against urban sustainability, it is necessary to consider the relationships among its various dimensions. Hence, in recent years, while preparing sustainability indices, an integrated approach involving indicators of different dimensions such as ‘economic’, ‘environmental’, and ‘social’ is being used. It is also important for urban planners, social analysts, and other related institutions to identify and understand the relationships in this complex system. The objective of the paper is to develop a city performance index (CPI) to measure and evaluate urban regions in terms of sustainable performance. The objectives include: i) objective assessment of a city’s performance, ii) setting achievable goals, iii) prioritising relevant indicators for improvement, iv) learning from leaders, v) assessing the effectiveness of programmes that result in high indicator values, and vi) strengthening stakeholder participation. Using the benchmark approach, a conceptual framework is developed for evaluating 25 Indian cities. We develop a City Sustainability Index (CSI) in order to rank cities according to their level of sustainability. The CSI is composed of four dimensions: Economic, Environment, Social, and Institutional.
Each dimension is further composed of multiple indicators: (1) economic, which considers growth, access to electricity, and telephone availability; (2) environmental, which includes wastewater treatment and carbon emissions; (3) social, which includes equity and infant mortality; and (4) institutional, which includes the voting share of the population and urban regeneration policies. The CSI, consisting of four dimensions, disaggregates into 12 categories and ultimately into 70 indicators. The data are obtained from public and non-governmental organizations, as well as from city officials and experts. By ranking a sample of diverse cities on a set of specific dimensions, the study can serve as a baseline of current conditions and a marker for referencing future results. The benchmarks and indices presented in the study provide a unique resource for the government and the city authorities to learn about the positive and negative attributes of a city and prepare plans for sustainable urban development. As a result of our conceptual framework, the set of criteria we suggest differs somewhat from any already in the literature. The scope of our analysis is intended to be broad. Although illustrated with specific examples, it should be apparent that the principles identified are relevant to any monitoring that is used to inform decisions involving decision variables. These indicators are policy-relevant and hence a useful tool for decision-makers and researchers.
Keywords: benchmark, city, indicator, performance, sustainability
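The abstract does not state how the 70 indicators are aggregated into the CSI; a common scheme for such composite indices is min-max normalization of raw indicators followed by weighted averaging within and across dimensions. A minimal sketch under that assumption (dimension names come from the abstract; the weights and scores are illustrative only):

```python
def min_max_normalize(values):
    """Rescale raw indicator values onto [0, 1] so that indicators
    measured in different units become comparable."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(dimension_scores, weights=None):
    """Weighted average of per-dimension scores; equal weights
    are assumed when none are supplied."""
    if weights is None:
        weights = [1.0 / len(dimension_scores)] * len(dimension_scores)
    return sum(s * w for s, w in zip(dimension_scores, weights))

# Illustrative: one city's normalized scores on the four CSI dimensions.
scores = {"economic": 0.8, "environment": 0.5, "social": 0.6, "institutional": 0.7}
csi = composite_index(list(scores.values()))
print(round(csi, 3))  # 0.65
```

Cities would then be ranked by their CSI values; the same routine applied to each of the 12 categories gives the disaggregated scores mentioned above.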
Procedia PDF Downloads 269
359 Multi-Criteria Geographic Information System Analysis of the Costs and Environmental Impacts of Improved Overland Tourist Access to Kaieteur National Park, Guyana
Authors: Mark R. Leipnik, Dahlia Durga, Linda Johnson-Bhola
Abstract:
Kaieteur is the most iconic National Park in the rainforest-clad nation of Guyana in South America. However, the magnificent 226-meter-high waterfall at its center is virtually inaccessible by surface transportation, and the occasional charter flights to the small airstrip in the park are too expensive for many tourists and residents. Thus, the largest waterfall in all of Amazonia, where the Potaro River plunges over a single free drop twice as high as Victoria Falls, remains preserved in splendid isolation inside a 57,000-hectare National Park established by the British in 1929, in the deepest recesses of a remote jungle canyon. Kaieteur Falls is largely unseen firsthand, but images of the falls are depicted on the Guyanese twenty-dollar note, in every Guyanese tourist promotion, and on many items in the national capital of Georgetown. Georgetown is only 223-241 kilometers away from the falls; the lack of a single mileage figure demonstrates that there is no single overland route. Any journey, except by air, involves changes of vehicles, a ferry ride, and a boat ride up a jungle river, and entails hiking for many hours to view the falls. Surface access from Georgetown (or any city) is thus a 3-5 day adventure even in the dry season; during the two wet seasons, travel is a particularly sticky proposition. This journey was made overland by the paper's co-author Dahlia Durga. This paper focuses on potential ways to improve overland tourist access to Kaieteur National Park from Georgetown. This is primarily a GIS-based analysis, using multiple criteria to determine the least-cost means of creating all-weather road access to the area near the base of the falls while minimizing distance and elevation changes. Critically, it also involves minimizing the number of new bridges required while utilizing the one existing ferry crossing of a major river.
Cost estimates are based on data from road and bridge construction engineers currently operating in the interior of Guyana. The paper contains original maps generated with ArcGIS of the potential routes for such an overland connection, including the one deemed optimal. Other factors, such as the impact on endangered species habitats and Indigenous populations, are also considered. This proposed infrastructure development comes at a time when Guyana is undergoing the largest boom in its history due to revenues from offshore oil and gas development. Thus, better access to the most important tourist attraction in the country is likely to happen eventually in some manner, but the questions of the most environmentally sustainable and least costly alternatives for such access remain. This paper addresses those questions and others related to access to this magnificent natural treasure, and the tradeoffs such access will have on the preservation of the currently pristine natural environment of Kaieteur Falls.
Keywords: nature tourism, GIS, Amazonia, national parks
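Multi-criteria least-cost routing of this kind is typically carried out on a raster cost surface, where each cell's traversal cost blends the criteria (distance, slope, and a heavy penalty for river cells that would require a new bridge) and a shortest-path search accumulates cost between the endpoints. A minimal sketch of that idea with a toy grid and made-up weights, not the paper's actual ArcGIS data:

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra search over a 2-D cost grid; stepping into a cell
    pays that cell's cost. Returns the minimum accumulated cost
    from start to goal (cells are (row, col) tuples)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

# Toy cost surface: 1 = flat terrain, 5 = steep slope, 25 = river
# crossing (a proxy for building a new bridge). Weights are illustrative.
grid = [
    [1, 1, 25, 1],
    [1, 5, 25, 1],
    [1, 1,  1, 1],  # an existing crossing keeps this row cheap
    [1, 5, 25, 1],
]
print(least_cost_path(grid, (0, 0), (0, 3)))  # 7.0
```

The optimum detours through the cheap row rather than paying the bridge penalty, which is the same mechanism that steers the GIS analysis toward the existing ferry crossing.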
Procedia PDF Downloads 163
358 Catalytic Dehydrogenation of Formic Acid into H2/CO2 Gas: A Novel Approach
Authors: Ayman Hijazi, Witold Kwapinski, J. J. Leahy
Abstract:
Finding a sustainable alternative energy source to fossil fuels is an urgent need as various environmental challenges arise around the world. Formic acid (FA) decomposition has therefore become an attractive field that lies at the center of the biomass platform, comprising a potential pool of hydrogen energy that stands as a new energy vector. Liquid FA features a considerable volumetric energy density of 6.4 MJ/L and a specific energy density of 5.3 MJ/kg, which places it in a prime position as an energy source for transportation infrastructure. Additionally, the increasing research interest in FA decomposition is driven by the need for in-situ H2 production, which plays a key role in the hydrogenation of biomass into higher-value components. It is reported elsewhere in the literature that catalytic decomposition of FA is usually performed in poorly designed setups using simple glassware under magnetic stirring, thus demanding further energy investment to recover the used catalyst. This work suggests an approach that integrates the design of a novel catalyst featuring magnetic properties with a robust setup that minimizes experimental and measurement discrepancies. One of the most prominent active species for the dehydrogenation/hydrogenation of biomass compounds is palladium. Accordingly, we investigate the potential of grafting palladium metal onto functionalized magnetic nanoparticles as a heterogeneous catalyst to favor the production of CO-free H2 gas from FA. Using an ordinary magnet to collect the spent catalyst renders core-shell magnetic nanoparticles the backbone of the process. Catalytic experiments were performed in a jacketed batch reactor equipped with an overhead stirrer under an inert medium. Through a novel approach, FA is charged into the reactor via a high-pressure positive displacement pump at steady-state conditions. The produced gas (H2 + CO2) was measured by connecting the gas outlet to a measuring system based on the amount of displaced water.
The novelty of this work lies in designing a very responsive catalyst, pumping a consistent amount of FA into a sealed reactor running at steady-state mild temperatures, and measuring the gas continuously, while collecting the used catalyst without the need for centrifugation. Catalyst characterization using TEM, XRD, SEM, and a CHN elemental analyzer provided details of the catalyst preparation and opened new venues to alter the nanostructure of the catalyst framework. Consequently, the introduction of amine groups led to appreciable improvements in the dispersion of the doped metals, eventually attaining nearly complete conversion (100%) of FA after 7 hours. The relative importance of process parameters such as temperature (35-85°C), stirring speed (150-450 rpm), catalyst loading (50-200 mg), and Pd doping ratio (0.75-1.80 wt.%) on gas yield was assessed by a Taguchi design-of-experiments based model. Experimental results showed that operating in the lower temperature range (35-50°C) yielded more gas, while catalyst loading and Pd doping wt.% were found to be the most significant factors, with P-values of 0.026 and 0.031, respectively.
Keywords: formic acid decomposition, green catalysis, hydrogen, mesoporous silica, process optimization, nanoparticles
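The water-displacement measurement above converts directly to conversion figures with the ideal gas law: FA dehydrogenation (HCOOH → H2 + CO2) releases two moles of gas per mole of acid, so the displaced volume fixes the moles of gas evolved. A minimal sketch of that arithmetic, with illustrative numbers rather than the paper's data:

```python
R = 8.314  # ideal gas constant, J/(mol*K)

def moles_of_gas(displaced_volume_l, temp_k=298.15, pressure_pa=101325.0):
    """Ideal-gas estimate of the moles of (H2 + CO2) evolved, from
    the volume of water displaced by the gas."""
    return pressure_pa * (displaced_volume_l / 1000.0) / (R * temp_k)

def fa_conversion(moles_gas, moles_fa_fed):
    """HCOOH -> H2 + CO2 yields 2 mol of gas per mol of FA converted."""
    return moles_gas / (2.0 * moles_fa_fed)

# Illustrative: ~4.89 L displaced at 25 °C and 1 atm from 0.10 mol FA.
n_gas = moles_of_gas(4.89)
print(round(n_gas, 2))                        # 0.2 mol of gas
print(round(fa_conversion(n_gas, 0.10), 2))   # 1.0, i.e. full conversion
```

The same bookkeeping, applied at each sampling time, yields the gas-evolution curves against which the Taguchi factors were compared.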
Procedia PDF Downloads 50
357 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel
Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler
Abstract:
Fuel cell vehicles have become the most competitive solution for the transportation sector in the hydrogen economy. The Type IV pressure vessel is currently the most popular and widely developed technology for on-board storage, based on its high reliability and relatively low cost. Due to the stringent requirements on mechanical performance, the pressure vessel requires a great amount of composite material, a major cost driver for hydrogen tanks. Evidently, the optimization of the composite layup design shows great potential for reducing the overall material usage, yet it requires a comprehensive understanding of the underlying mechanisms as well as the influence of different design parameters on mechanical performance. Given the materials and manufacturing processes by which Type IV pressure vessels are made, design and optimization are a nuanced subject. The manifold of possible stacking sequences and fiber orientation variations has an outstanding effect on vessel strength due to the anisotropic properties of carbon fiber composites, which makes the design space high-dimensional. Each variation of the design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup, and simulation process can be very time-consuming and result in high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation process for different tank designs with varying parameters is conducted and automated in a commercial finite element analysis framework, Abaqus. Notably, the model of the composite overwrap is automatically generated using the Abaqus-Python scripting interface.
The prediction of the winding angle of each layer and the corresponding thickness variation in the dome region is the most crucial step of the modeling; it is calculated and implemented using analytical methods. Subsequently, the different composite layups are simulated as axisymmetric models to reduce the computational complexity and the calculation time. Finally, the results are evaluated and compared with regard to the ultimate tank strength. By automatically modeling, evaluating, and comparing various composite layups, this system is applicable to the optimization of tank structures. As mentioned above, the mechanical performance of the pressure vessel is highly dependent on the composite layup, which requires a large number of simulations. Consequently, automating the simulation process provides a rapid way to compare the various designs and indicate the optimum one. Moreover, this automation process can also be used to create a databank of layups and corresponding mechanical properties with few preliminary configuration steps for further case analysis, subsequently using, e.g., machine learning to identify the optimum directly from the data pool without running new simulations.
Keywords: Type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process
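The abstract does not spell out which analytical method predicts the dome-region winding angle; for geodesic filament winding, the standard relation is Clairaut's, sin α = r₀/r, where r₀ is the polar opening radius and r the local dome radius. A minimal sketch under that assumption (the geometry values are illustrative, not from the paper):

```python
import math

def geodesic_winding_angle(r_mm, r_polar_mm):
    """Winding angle in degrees at local radius r for geodesic
    filament winding, from Clairaut's relation sin(alpha) = r0 / r."""
    return math.degrees(math.asin(r_polar_mm / r_mm))

# Illustrative dome geometry with a 25 mm polar opening: the angle
# runs from 90° (hoop-like, at the polar opening) toward the helix
# angle at the cylinder.
for r in (25.0, 50.0, 100.0):
    print(round(geodesic_winding_angle(r, 25.0), 1))  # 90.0, 30.0, 14.5
```

The layer thickness variation on the dome follows from the same geometry, since the fiber band covers less circumference near the polar opening; both quantities feed the automated Abaqus model described above.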
Procedia PDF Downloads 134
356 Implementation of Real-World Learning Experiences in Teaching Courses of Medical Microbiology and Dietetics for Health Science Students
Authors: Miriam I. Jimenez-Perez, Mariana C. Orellana-Haro, Carolina Guzman-Brambila
Abstract:
As part of the microbiology and dietetics courses, students of medicine and nutrition analyze the main pathogenic microorganisms and perform dietary analyses. The microbiology course describes in a general way the main pathogens, including bacteria, viruses, fungi, and parasites, as well as their interaction with the human species. We hypothesize that the lack of practical application in the course causes students not to see its value and clinical application, when in reality it is a matter of great importance for healthcare in our country. The medical microbiology and dietetics courses are mostly theoretical, with only a few hours of laboratory practice. Therefore, it is necessary to incorporate new, innovative techniques involving more practice and community fieldwork, real case analysis, and real-life situations. The purpose of this intervention was to incorporate real-world learning experiences into the instruction of the medical microbiology and dietetics courses in order to improve the learning process, understanding, and application in the field. During a period of 6 months, medicine and nutrition students worked in a community of urban poverty. We worked with 90 children between 4 and 6 years of age from low-income families with no access to medical services, to provide an infectious diagnosis related to the nutritional status of these children. We expected this intervention to give a different kind of context to medical microbiology and dietetics students, improving their learning process and applying their knowledge and laboratory practice to help a community in need. First, students learned basic skills in microbiological diagnostic testing during laboratory sessions. Once students had acquired the ability to run biochemical probes and handle biological samples, they went to the community and took stool samples from the children (with the corresponding informed consent).
Students processed the samples in the laboratory, searching for enteropathogenic microorganisms with the RapID™ ONE system (Thermo Scientific™) and for parasites using the modified Willis and Malloy technique. Finally, they compared the results with the nutritional status of the children, previously measured by anthropometric indicators. The anthropometric results were interpreted with the WHO Anthro software (WHO, 2011). The microbiological results were interpreted with the ERIC® Electronic RapID™ Code Compendium software and validated by a physician. The results consisted of analyses of infectious outcomes and nutritional status. Through these fieldwork community learning experiences, our students improved their knowledge of microbiology and were capable of applying it in a real-life situation. They found this kind of learning useful when translating theory into practice. For most of our students, this is their first contact as health caregivers with a real population, and this contact is very important in helping them understand the reality of many people in Mexico. In conclusion, real-world or fieldwork learning experiences empower our students to gain a real and better understanding of how they can apply their knowledge of microbiology and dietetics and help a population in great need; this is the kind of reality that many people live in our country.
Keywords: real-world learning experiences, medical microbiology, dietetics, nutritional status, infectious status
Procedia PDF Downloads 131
355 Weaving Social Development: An Exploratory Study of Adapting Traditional Textiles Using Indigenous Organic Wool for the Modern Interior Textiles Market
Authors: Seema Singh, Puja Anand, Alok Bhasin
Abstract:
The interior design profession aims to create aesthetically pleasing design solutions for human habitats, but of late, growing awareness of depleting environmental resources, both tangible and intangible, and of damage to the eco-system has led to a quest to create healthy and sustainable interior environments. This paper proposes adapting traditionally produced organic wool textiles for the mainstream interior design industry. This can create sustainable livelihoods whereby eco-friendly bridges can be built between interior designers, consumers, and pastoral communities. This study focuses on traditional textiles produced by two pastoral communities from India that use organic wool from indigenous sheep varieties. The Gaddi community of Himachal Pradesh uses wool from the Gaddi sheep breed to create the Pattu (a multi-purpose textile). The Kurumas of Telangana weave a blanket called the Gongadi, using wool from the Black Deccani variety of sheep. These communities have traditionally reared indigenous sheep breeds for their wool and produce hand-spun and hand-woven textiles for their own consumption, using traditional, chemical-free processes. Based on data collected personally from field visits and documentation of the traditional crafts of these pastoral communities, and using traditionally produced indigenous organic wool, the authors have developed innovative textile samples through design interventions and explorations of dyeing and weaving techniques. As part of the secondary research, the role of pastoralism in sustaining the eco-systems of Himachal Pradesh and Telangana was studied, as was the role of organic wool in creating healthy interior environments. The authors found that natural wool from indigenous sheep breeds can be used to create interior textiles that have the potential to be marketed to an urban audience, which will help create earnings for pastoral communities.
Literature studies have shown that organic and sustainable wool can reduce indoor pollution and toxicity levels and thereby help create healthier interior environments. Revival of indigenous breeds of sheep can further help in rejuvenating dying crafts, and promotion of these indigenous textiles can help sustain traditional eco-systems and the pastoral communities whose way of life is endangered today. Based on their research and findings, the authors propose that adapting traditional textiles has potential for application in interiors, creating eco-friendly spaces. Interior textiles produced through such sustainable processes can help reduce indoor pollution, give livelihood opportunities to traditional economies, and leave an almost zero carbon footprint while being in sync with available natural resources, ultimately benefiting society. The win-win situation for all the stakeholders in this eco-friendly model makes it pertinent to re-think how we design lifestyle textiles for interiors. This study illustrates a specific example from the two pastoral communities and can be used as a model that can work equally well in any community, regardless of geography.
Keywords: design intervention, eco-friendly, healthy interiors, indigenous, organic wool, pastoralism, sustainability
Procedia PDF Downloads 161
354 Towards a Better Understanding of Planning for Urban Intensification: Case Study of Auckland, New Zealand
Authors: Wen Liu, Errol Haarhoff, Lee Beattie
Abstract:
In 2010, New Zealand’s central government reorganized local government arrangements in Auckland by amalgamating the previous regional council and seven supporting local government units into a single unitary council, the Auckland Council. The Auckland Council is charged with providing local government services to approximately 1.5 million people (a third of New Zealand’s total population). This includes addressing Auckland’s strategic urban growth management and setting its urban planning policy directions for the next 40 years, as expressed in the first-ever spatial plan in the region, the Auckland Plan (2012). The Auckland Plan supports implementing a compact city model by concentrating the larger part of future urban growth and development in, and around, existing and proposed transit centres, with the intention of Auckland becoming a globally competitive city and achieving the vision of ‘the most liveable city in the world’. Turning that vision into reality is operationalized through the statutory land use plan, the Auckland Unitary Plan. The Unitary Plan replaced the previous regional and local statutory plans when it became operative in 2016, becoming the ‘rule book’ on how to manage and develop the natural and built environment, using land use zones and zone standards. Across the broad range of literature on urban growth management, one significant issue stands out about intensification: the ‘gap’ between strategic planning and what has been achieved is evident in the argument for the ‘compact’ urban form. Although the compact city model may have a wide range of merits, the extent to which these are actualized largely relies on how intensification is actually delivered. The transformation of the rhetoric of the residential intensification model into reality is of profound influence, yet it has enjoyed limited empirical analysis. In Auckland, the establishment of the Auckland Plan set up the strategies to deliver intensification across diversified arenas.
Nonetheless, planning policy itself does not necessarily achieve the envisaged objectives; delivering a planning system with a high capacity to enhance and sustain plan implementation is another demanding agenda. Though the Auckland Plan provides a wide-ranging strategic context, its actual delivery is beholden to the Unitary Plan. However, questions have been asked about whether the Unitary Plan has the necessary statutory tools to deliver the Auckland Plan’s policy outcomes. In Auckland, there is likely to be continuing tension between the strategies for intensification and their envisaged objectives, making it doubtful whether the main principles of the intensification strategies can be realized. This raises questions over whether the Auckland Plan’s policy goals can be achieved in practice, including delivering a ‘quality compact city’ and residential intensification. Taking Auckland as an example of a traditionally sprawling city, this article investigates the efficacy of plan making and implementation directed towards higher-density development. It explores the process of plan development and the plan-making and implementation frameworks of the first-ever spatial plan in Auckland, so as to explicate the objectives and processes involved, and considers whether this will facilitate decision-making processes to realize the anticipated intensive urban development.
Keywords: urban intensification, sustainable development, plan making, governance and implementation
Procedia PDF Downloads 555
353 Assessment of Potential Chemical Exposure to Betamethasone Valerate and Clobetasol Propionate in Pharmaceutical Manufacturing Laboratories
Authors: Nadeen Felemban, Hamsa Banjer, Rabaah Jaafari
Abstract:
One of the most common hazards in the pharmaceutical industry is the chemical hazard, which can cause harm or lead to occupational health diseases/illnesses through chronic exposure to hazardous substances. A chemical agent management system is therefore required, including hazard identification, risk assessment, controls for specific hazards and inspections, to keep the workplace healthy and safe. Routine management monitoring is also required to verify the effectiveness of the control measures. Betamethasone Valerate and Clobetasol Propionate are APIs (Active Pharmaceutical Ingredients) with a highly hazardous classification – Occupational Hazard Category (OHC 4) – which requires full containment (ECA-D) during handling to avoid chemical exposure. According to the Safety Data Sheet, these chemicals are reproductive toxicants (reprotoxicant H360D), which may affect female workers’ health, cause fatal damage to an unborn child, or impair fertility. In this study, a qualitative chemical risk assessment (qCRA) was conducted to assess chemical exposure during handling of Betamethasone Valerate and Clobetasol Propionate in pharmaceutical laboratories. The qCRA identified a risk of potential chemical exposure (risk rating 8, Amber risk). Therefore, immediate actions were taken to ensure interim controls (according to the hierarchy of controls) were in place and in use to minimize the risk of chemical exposure. No open handling should be done outside the Steroid Glove Box Isolator (SGB), and the required Personal Protective Equipment (PPE) must be worn. The PPE includes coveralls, nitrile hand gloves, safety shoes and powered air-purifying respirators (PAPR). Furthermore, a quantitative assessment (personal air sampling) was conducted to verify the effectiveness of the engineering controls (SGB Isolator) and to confirm whether there is chemical exposure, as indicated earlier by the qCRA.
Three personal air samples were collected using an air sampling pump and filter (IOM2 filters, 25 mm glass fibre media). The collected samples were analyzed by HPLC in the BV lab, and the measured concentrations were reported in ug/m3 against the 8hr Occupational Exposure Limits (8hr TWA OELs) for each analyte. The analytical results were expressed as 8hr TWAs (8hr time-weighted averages) and analyzed using Bayesian statistics (IHDataAnalyst). The Bayesian likelihood graph indicates Category 0, meaning exposures are “de minimis”, trivial, or non-existent, and employees have little to no exposure. These results also indicate that the three samples are representative, with very low variation (SD=0.0014). In conclusion, the engineering controls were effective in protecting the operators from such exposure. However, routine chemical monitoring is required every 3 years unless there is a change in the process or type of chemicals. Frequent management monitoring (daily, weekly, and monthly) is also required to ensure the control measures are in place and in use. Furthermore, a Similar Exposure Group (SEG) was identified for this activity and included in the annual health surveillance for health monitoring.
Keywords: occupational health and safety, risk assessment, chemical exposure, hierarchy of control, reproductive
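The 8hr TWA comparison described above can be sketched as a short calculation: an 8hr time-weighted average weights each measured concentration by its sampling duration over a 480-minute shift, and the result is compared against the OEL. A minimal sketch, assuming illustrative function names and sample values (not the study’s data):

```python
# Sketch of an 8hr TWA calculation (illustrative values, not study data).

def twa_8hr_ug_m3(samples, shift_min=480):
    """samples: list of (concentration in ug/m3, duration in minutes) pairs.
    Unsampled time in the 8hr shift is assumed to carry zero exposure."""
    return sum(c * t for c, t in samples) / shift_min

def exceeds_oel(samples, oel_ug_m3):
    """Compare the computed 8hr TWA against an occupational exposure limit."""
    return twa_8hr_ug_m3(samples) > oel_ug_m3

# One 120-minute handling task at 0.8 ug/m3, rest of the shift unexposed:
print(twa_8hr_ug_m3([(0.8, 120)]))      # 0.2
print(exceeds_oel([(0.8, 120)], 0.5))   # False
```

The zero-exposure assumption for unsampled time is the conventional conservative-for-compliance simplification; a full-shift sample would make it unnecessary.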
Procedia PDF Downloads 169
352 The Effects of the GAA15 (Gaelic Athletic Association 15) on Lower Extremity Injury Incidence and Neuromuscular Functional Outcomes in Collegiate Gaelic Games: A 2 Year Prospective Study
Authors: Brenagh E. Schlingermann, Clare Lodge, Paula Rankin
Abstract:
Background: Gaelic football, hurling and camogie are highly popular field games in Ireland. Research into the epidemiology of injury in Gaelic games has revealed that approximately three quarters of injuries occur in the lower extremity. These injuries have player, team and institutional impacts due to multiple factors, including financial burden and time lost from competition. Research has shown it is possible to record injury data consistently within the GAA through a closed online recording system, the GAA injury surveillance database. It is established that determining the incidence of injury is the first step of injury prevention. The goals of this study were to create a dynamic GAA15 injury prevention programme addressing five key components/goals: avoiding positions associated with a high risk of injury, enhancing flexibility, enhancing strength, optimizing plyometrics and addressing sport-specific agilities. These key components are internationally recognized through the Prevent Injury and Enhance Performance (PEP) programme, which has been shown to reduce ACL injuries by 74%. In national Gaelic games the programme, devised from the principles of the PEP, is known as the GAA15. No injury prevention strategies have been published for this cohort in Gaelic games to date. This study investigates the effects of the GAA15 on injury incidence and neuromuscular function in Gaelic games. Methods: A total of 154 players (mean age 20.32 ± 2.84 years) were recruited from the GAA teams within the Institute of Technology Carlow (ITC). Preseason and post-season testing involved two objective screening tests: the Y Balance Test and the Three Hop Test. Practical workshops, with ongoing liaison, were provided to the coaches on the implementation of the GAA15.
The programme was performed before every training session and game, and the existing GAA injury surveillance database was accessed by the college sports rehabilitation athletic therapist to monitor players’ injuries. Retrospective analysis of the ITC clinic records was performed in conjunction with the database analysis to track injuries that may have been missed. The effects of the programme were analysed by comparing the intervention group’s Y Balance and Three Hop Test scores to those of an age/gender-matched control group. Results: Year 1 results revealed significant increases in neuromuscular function as a result of the GAA15. Y Balance Test scores for the intervention group increased in both the posterolateral (p=.005 and p=.001) and posteromedial reach directions (p=.001 and p=.001). A decrease in performance was found for the Three Hop Test (p=.039). Overall, twenty-five injuries were reported during the season, giving an injury rate of 3.00 injuries/1000hrs of participation: 1.25 injuries/1000hrs of training and 4.25 injuries/1000hrs of match play. Non-contact injuries accounted for 40% of the injuries sustained. Year 2 results are pending and expected in April 2016. Conclusion: It is envisaged that implementation of the GAA15 will continue to reduce the risk of injury and improve neuromuscular function in collegiate Gaelic games athletes.
Keywords: GAA15, Gaelic games, injury prevention, neuromuscular training
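The injury rates quoted above follow the standard exposure-based convention: injuries per 1000 player-hours. A minimal sketch of that arithmetic, with illustrative exposure-hour figures chosen only to show the calculation (they are not taken from the study):

```python
# Sketch of an exposure-based injury rate (illustrative hours, not study data).

def injuries_per_1000h(n_injuries, exposure_hours):
    """Standard epidemiological rate: injuries per 1000 hours of exposure."""
    return n_injuries / exposure_hours * 1000

# e.g. 10 injuries sustained over 8000 player-hours of training exposure:
print(injuries_per_1000h(10, 8000))  # 1.25
```

Reporting training and match-play rates separately, as the abstract does, requires splitting both the injury counts and the exposure hours by activity before applying the same formula.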
Procedia PDF Downloads 336