Search results for: mobile standards
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3456

276 Reflective Portfolio to Bridge the Gap in Clinical Training

Authors: Keenoo Bibi Sumera, Alsheikh Mona, Mubarak Jan Beebee Zeba Mahetaab

Abstract:

Background: Due to the busy schedule of the practicing clinicians at the hospitals, students may not always be attended to, which is to their detriment. The clinicians at the hospitals are also not always acquainted with teaching and/or supervising students on their placements. Additionally, there is a high student-patient ratio. Since they are the prospective clinical doctors under training, they need to reach the competence levels in clinical decision-making skills to be able to serve the healthcare system of the country and to be safe doctors. Aims and Objectives: A reflective portfolio was used to provide a means for students to learn by reflecting on their experiences and obtaining continuous feedback. This practice is an attempt to compensate for the scarcity of resources, that is, clinical placement supervisors and patients. It is also anticipated that it will provide learners with a continuous monitoring and learning gap analysis tool for their clinical skills. Methodology: A hardcopy reflective portfolio was designed and validated. The portfolio incorporated a mini clinical evaluation exercise (mini-CEX), direct observation of procedural skills and reflection sections. Workshops were organized for the stakeholders, that is, the management, faculty and students, separately. The rationale of reflection was emphasized. Students were given samples of reflective writing. The portfolio was then implemented amongst the undergraduate medical students of years four, five and six during clinical clerkship. After 16 weeks of implementation of the portfolio, a survey questionnaire was introduced to explore how undergraduate students perceive the educational value of the reflective portfolio and its impact on their deep information processing. Results: The majority of the respondents are in MD Year 5. Out of 52 respondents, 57.7% were doing the internal medicine clinical placement rotation, and 42.3% were in the Otorhinolaryngology clinical placement rotation. The respondents believe that the implementation of a reflective portfolio helped them identify their weaknesses, gain professional development by helping them identify areas where their knowledge is good, increase the learning value when used as a formative assessment, relate learning across different courses, and improve their professional skills. However, the portfolio did not necessarily improve the respondents' self-esteem or help develop their critical thinking. The portfolio takes time to complete, and supervisors were not always helpful; respondents had to chase supervisors for feedback. 53.8% of the respondents followed the Gibbs reflective model to write the reflection, whilst the others did not follow any guidelines. 48.1% said that the feedback was helpful, 17.3% preferred the use of written feedback, whilst 11.5% preferred oral feedback. Most of them suggested more frequent feedback. 59.6% of respondents found the current portfolio user-friendly, and 28.8% thought it was too bulky. 27.5% asked for a mobile application. Conclusion: The reflective portfolio, through the reflection of their work and regular feedback from supervisors, has an overall positive impact on the learning process of undergraduate medical students during their clinical clerkship.

Keywords: Portfolio, Reflection, Feedback, Clinical Placement, Undergraduate Medical Education

Procedia PDF Downloads 60
275 Examining the Critical Factors for Success and Failure of Common Ticketing Systems

Authors: Tam Viet Hoang

Abstract:

With a plethora of new mobility services and payment systems found in our cities and across modern public transportation systems, several cities globally have turned to common ticketing systems to help navigate this complexity. Helping to create time and space-differentiated fare structures and tariff schemes, common ticketing systems can optimize transport utilization rates, achieve cost efficiencies, and provide key incentives to specific target groups. However, not all cities and transportation systems have enjoyed a smooth journey towards the adoption, roll-out, and servicing of common ticketing systems, with both the experiences of success and failure being attributed to a wide variety of critical factors. Using case study research as a methodology and cities as the main unit of analysis, this research will seek to address the fundamental question of “what are the critical factors for the success and failure of common ticketing systems?” Using rail/train systems as the entry point, this study will start by providing a background to the evolution of transport ticketing and justify the improvements in operational efficiency that can be achieved through common ticketing systems. Examining the socio-economic benefits of common ticketing, the research will also help to articulate the value derived for different key identified stakeholder groups. By reviewing case studies of the implementation of common ticketing systems in different cities, the research will explore lessons learned from cities, with the aim of eliciting the factors that ensure seamlessly connected, integrated e-ticketing platforms. In an increasingly digital age where cities are now coming online, this paper seeks to unpack these critical factors, undertaking case study research drawing from literature and lived experiences. To offer a better understanding of the enabling environment and the ideal mixture of ingredients needed to facilitate the successful roll-out of a common ticketing system, interviews will be conducted with transport operators from several selected cities to better appreciate the challenges and strategies employed to overcome those challenges in relation to common ticketing systems. Meanwhile, as we begin to see the introduction of new mobile applications and user interfaces to facilitate ticketing and payment as part of the transport journey, we take stock of the numerous policy challenges ahead and their implications for city-wide and system-wide urban planning. It is hoped that this study will help to identify the critical factors for the success and failure of common ticketing systems for cities set to embark on their implementation, while serving to fine-tune processes in those cities where common ticketing systems are already in place. Outcomes from the study will help to facilitate an improved understanding of common pitfalls and essential milestones towards the roll-out of a common ticketing system for railway systems, especially for emerging countries where mass rapid transit systems are being considered or are in the process of construction.

Keywords: common ticketing, public transport, urban strategies, Bangkok, Fukuoka, Sydney

Procedia PDF Downloads 56
274 Determination of Physical Properties of Crude Oil Distillates by Near-Infrared Spectroscopy and Multivariate Calibration

Authors: Ayten Ekin Meşe, Selahattin Şentürk, Melike Duvanoğlu

Abstract:

Petroleum refineries are a highly complex process industry with continuous production and high operating costs. Physical separation of crude oil starts with the crude oil distillation unit, continues with various conversion and purification units, and passes through many stages until the final product is obtained. To meet the desired product specifications, process parameters are strictly followed. To ensure the quality of distillates, routine analyses are performed in quality control laboratories based on appropriate international standards such as American Society for Testing and Materials (ASTM) standard methods and European Standard (EN) methods. The cut point of distillates in the crude distillation unit is very crucial for the efficiency of the downstream processes. In order to maximize process efficiency, the determination of the quality of distillates should be as fast as possible, reliable, and cost-effective. In this sense, an alternative study was carried out on the crude oil distillation unit that serves the entire refinery process. In this work, studies were conducted with three different crude oil distillates: Light Straight Run Naphtha (LSRN), Heavy Straight Run Naphtha (HSRN), and Kerosene. These products are named according to the number of carbon atoms they contain after separation. LSRN consists of five to six carbon-containing hydrocarbons, HSRN consists of six to ten, and kerosene consists of sixteen to twenty-two carbon-containing hydrocarbons. Physical properties of the three crude distillation unit products (LSRN, HSRN, and Kerosene) were determined using Near-Infrared Spectroscopy with multivariate calibration. The absorbance spectra of the petroleum samples were obtained in the range from 10000 cm⁻¹ to 4000 cm⁻¹, employing a quartz transmittance flow-through cell with a 2 mm light path and a resolution of 2 cm⁻¹. A total of 400 samples were collected for each product over almost four years. Several different crude oil grades were processed during the sample collection period. Extended Multiplicative Signal Correction (EMSC) and Savitzky-Golay (SG) preprocessing techniques were applied to the FT-NIR spectra of the samples to eliminate baseline shifts and suppress unwanted variation. Two different multivariate calibration approaches (Partial Least Squares Regression, PLS, and Genetic Inverse Least Squares, GILS) and an ensemble model were applied to the preprocessed FT-NIR spectra. The predictive performance of each multivariate calibration and preprocessing technique was compared, and the best models were chosen according to the reproducibility of the ASTM reference methods. This work demonstrates that the developed models can be used for routine analysis instead of conventional analytical methods, with over 90% accuracy.
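As a rough illustration of the calibration workflow outlined above (spectral preprocessing followed by latent-variable regression), the sketch below applies a Savitzky-Golay derivative and a cross-validated PLS model to a spectra matrix. It is a minimal sketch with placeholder arrays and an assumed number of latent variables; it is not the authors' EMSC/GILS/ensemble pipeline.

```python
# Minimal NIR calibration sketch (placeholder data; not the authors' EMSC/GILS/ensemble pipeline).
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

# X: absorbance spectra (samples x wavenumbers), y: reference property measured by the ASTM method.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1500))   # placeholder spectra
y = rng.normal(size=100)           # placeholder reference values

# Savitzky-Golay first derivative along the wavenumber axis to suppress baseline shifts.
X_sg = savgol_filter(X, window_length=15, polyorder=2, deriv=1, axis=1)

# PLS calibration with 10-fold cross-validated predictions.
pls = PLSRegression(n_components=8)
y_cv = cross_val_predict(pls, X_sg, y, cv=10).ravel()
print("Cross-validated R2:", round(r2_score(y, y_cv), 3))
```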

Keywords: crude distillation unit, multivariate calibration, near infrared spectroscopy, data preprocessing, refinery

Procedia PDF Downloads 95
273 Reuse of Historic Buildings for Tourism: Policy Gaps

Authors: Joseph Falzon, Margaret Nelson

Abstract:

Background: Regeneration and re-use of abandoned historic buildings present a continuous challenge for policy makers and stakeholders in the tourism and leisure industry. Obsolete historic buildings provide great potential for tourism and leisure accommodation, presenting unique heritage experiences to travellers and host communities. Contemporary demands in the hospitality industry continuously require higher standards, some of which are in conflict with heritage conservation principles. Objective: The aim of this research paper is to critically discuss regeneration policies with stakeholders of the tourism and leisure industry and to examine current practices in policy development and the resultant impact of policies on the Maltese tourism and leisure industry. Research Design: Six stakeholders involved in the tourism and leisure industry participated in semi-structured interviews. A number of measures were taken to reduce bias and thus improve trustworthiness. Clear statements of the purpose of the research study were provided at the start of each interview to reduce expectancy bias. The interviews were semi-structured to minimise interviewer bias. Interviewees were allowed to expand and elaborate as necessary, with only necessary probing questions, to allow free expression of opinion and practices. The interview guide was submitted to participants at least two weeks before the interview to allow participants to prepare and to prevent recall bias during the interview as much as possible. Interview questions and probes contained both positive and negative aspects to prevent interviewer bias. Policy documents were available during the interview to prevent recall bias. Interview recordings were transcribed using ‘intelligent’ verbatim transcription. Analysis was carried out using thematic analysis, with the coding frame developed independently by two researchers. All phases of the study were governed by research ethics. Findings: Findings were grouped into main themes: financing of regeneration, governance, legislation and policies. Other key issues included the value of historic buildings and approaches to regeneration. Whilst regeneration of historic buildings was noted, participants discussed a number of barriers that hindered regeneration. Stakeholders identified gaps in policies and gaps at the policy implementation stage. European Union funding policies facilitated regeneration initiatives, but funding criteria based on economic deliverables left a gap with respect to intangible heritage. Stakeholders identified niche markets for heritage tourism accommodation. A lack of research-based policies was also identified. Conclusion: The potential of regeneration is hindered by an inadequate legal framework for supporting the contemporary needs of the tourism industry. Policies should be developed through active stakeholder participation. Adequate funding schemes have to support both the tangible and intangible components of the built heritage.

Keywords: governance, historic buildings, policy, tourism

Procedia PDF Downloads 212
272 High Capacity SnO₂/Graphene Composite Anode Materials for Li-Ion Batteries

Authors: Hilal Köse, Şeyma Dombaycıoğlu, Ali Osman Aydın, Hatem Akbulut

Abstract:

Rechargeable lithium-ion batteries (LIBs) have become promising power sources for a wide range of applications, such as mobile communication devices, portable electronic devices and electrical/hybrid vehicles, due to their long cycle life, high voltage and high energy density. Graphite, as an anode material, has been widely used owing to its extraordinary electronic transport properties, large surface area, and high electrocatalytic activities, although its limited specific capacity (372 mAh g⁻¹) cannot fulfil the increasing demand for lithium-ion batteries with higher energy density. To address this problem, many studies have investigated new electrode materials, and metal oxide/graphene composites are a promising class of materials for lithium-ion batteries as their specific capacities are much higher than that of graphene. Among them, SnO₂, an n-type and wide band gap semiconductor, has attracted much attention as an anode material for new-generation lithium-ion batteries with its high theoretical capacity (790 mAh g⁻¹). However, it suffers from large volume changes and agglomeration associated with the Li-ion insertion and extraction processes, which bring about failure and loss of electrical contact of the anode. In addition, there is also a large irreversible capacity during the first cycle due to the formation of an amorphous Li₂O matrix. To obtain high capacity anode materials, we studied the synthesis and characterization of SnO₂-graphene nanocomposites and investigated the capacity of this free-standing anode material in this work. For this aim, firstly, graphite oxide was obtained from graphite powder using the Hummers method. To prepare the nanocomposites as a free-standing anode, graphite oxide particles were ultrasonicated in distilled water with SnO₂ nanoparticles (1:1, w/w). After vacuum filtration, the GO-SnO₂ paper was peeled off from the PVDF membrane to obtain a flexible, free-standing GO paper. Then, the GO structure was reduced in hydrazine solution. The produced SnO₂-graphene nanocomposites were characterized by scanning electron microscopy (SEM), energy dispersive X-ray spectroscopy (EDS), and X-ray diffraction (XRD) analyses. CR2016 cells were assembled in a glove box (MBraun-Labstar). The cells were charged and discharged at 25°C between fixed voltage limits (2.5 V to 0.2 V) at a constant current density on a BST8-MA MTI model battery tester at a 0.2C charge-discharge rate. Cyclic voltammetry (CV) was performed at a scan rate of 0.1 mV s⁻¹, and electrochemical impedance spectroscopy (EIS) measurements were carried out using a Gamry instrument applying a sine wave of 10 mV amplitude over a frequency range of 1000 kHz-0.01 Hz.
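For context, the specific capacities quoted above follow from the standard theoretical gravimetric capacity relation. The short calculation below assumes full lithiation to LiC₆ for graphite and alloying to Li₄.₄Sn for SnO₂; it reproduces the 372 mAh g⁻¹ figure for graphite and gives roughly 780 mAh g⁻¹ for SnO₂, close to the ~790 value quoted.

```latex
% Theoretical gravimetric capacity (mAh g^-1): Q = nF / (3.6 M)
\[ Q = \frac{n\,F}{3.6\,M} \]
% Graphite (LiC6): n = 1, M = 72.07 g mol^-1
\[ Q_{\mathrm{graphite}} = \frac{1 \times 96485}{3.6 \times 72.07} \approx 372\ \mathrm{mAh\,g^{-1}} \]
% SnO2 (alloying to Li4.4Sn): n = 4.4, M = 150.71 g mol^-1
\[ Q_{\mathrm{SnO_2}} = \frac{4.4 \times 96485}{3.6 \times 150.71} \approx 780\ \mathrm{mAh\,g^{-1}} \]
```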

Keywords: SnO₂-graphene, nanocomposite, anode, Li-ion battery

Procedia PDF Downloads 203
271 Suicide Wrongful Death: Standard of Care Problems Involving the Inaccurate Discernment of Lethal Risk When Focusing on the Elicitation of Suicide Ideation

Authors: Bill D. Geis

Abstract:

Suicide wrongful death forensic cases are the fastest rising tort in mental health law. It is estimated that suicide-related cases have accounted for 15% of U.S. malpractice claims since 2006. Most suicide-related personal injury claims fall into the legal category of “wrongful death.” Though mental health experts may be called on to address a range of forensic questions in wrongful death cases, the central consultation that most experts provide concerns the negligence element—specifically, the issue of whether the clinician met the clinical standard of care in assessing, treating, and managing the deceased person’s mental health care. Standards of care, which vary from U.S. state to state, are broad and address what a reasonable clinician might do in a similar circumstance. This fact leaves the issue of the suicide standard of care, in each case, up to forensic experts to put forth a reasoned estimate of what the standard of care should have been in the specific case under litigation. Because the general state guidelines for the standard of care are broad, forensic experts are readily retained to provide scientific and clinical opinions about whether or not a clinician met the standard of care in their suicide assessment, treatment, and management of the case. In the past and in much of current practice, the assessment of suicide has centered on the elicitation of verbalized suicide ideation. Research in recent years, however, has indicated that the majority of persons who end their lives do not say they are suicidal at their last medical or psychiatric contact. Near-term risk assessment—that goes beyond verbalized suicide ideation—is needed. Our previous research employed structural equation modeling to predict lethal suicide risk: eight negative thought patterns (feeling like a burden on others, hopelessness, self-hatred, etc.), mediated by nine transdiagnostic clinical factors (mental torment, insomnia, substance abuse, PTSD intrusions, etc.), were combined to predict acute lethal suicide risk. This structural equation model, the Lethal Suicide Risk Pattern (LSRP), Acute model, had excellent goodness-of-fit [χ²(df) = 94.25(47)***, CFI = .98, RMSEA = .05, 90% CI = .03-.06, p(RMSEA = .05) = .63, AIC = 340.25; ***p < .001]. A further SEM analysis was completed for this paper, adding a measure of Acute Suicide Ideation to the previous model. Acceptable prediction model fit was no longer achieved [χ²(df) = 3.571, CFI > .953, RMSEA = .075, 90% CI = .065-.085, AIC = 529.550]. This finding suggests that, in this additional study, immediate verbalized suicide ideation information was unhelpful in the assessment of lethal risk. The LSRP and other dynamic, near-term risk models (such as the Acute Suicide Affective Disorder Model and the Suicide Crisis Syndrome Model)—going beyond elicited suicide ideation—need to be incorporated into current clinical suicide assessment training. Without this training, the standard of care for suicide assessment is out of sync with current research—an emerging dilemma for the forensic evaluation of suicide wrongful death cases.
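The SEM workflow described above (latent constructs fitted and judged by chi-square, CFI, RMSEA and AIC) can be sketched in Python with the third-party semopy package. The variable names, two-construct structure and data file below are hypothetical placeholders, not the authors' actual LSRP specification.

```python
# Minimal SEM sketch (hypothetical variables and structure; not the authors' LSRP model).
import pandas as pd
import semopy

model_desc = """
thought_patterns =~ burden + hopelessness + self_hatred
clinical_factors =~ torment + insomnia + substance_use + ptsd_intrusions
clinical_factors ~ thought_patterns
lethal_risk ~ clinical_factors
"""

data = pd.read_csv("lsrp_sample.csv")   # placeholder dataset with the columns named above
model = semopy.Model(model_desc)
model.fit(data)

# Fit indices comparable to those reported (chi-square, CFI, RMSEA, AIC).
print(semopy.calc_stats(model).T)
```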

Keywords: forensic evaluation, standard of care, suicide, suicide assessment, wrongful death

Procedia PDF Downloads 43
270 Mass Flux and Forensic Assessment: Informed Remediation Decision Making at One of Canada’s Most Polluted Sites

Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer

Abstract:

Sydney Harbour, Nova Scotia, Canada, has long been subject to effluent and atmospheric inputs of contaminants, including thousands of tons of PAHs from a large coking and steel plant which operated in Sydney for nearly a century. The contaminants comprised coal tar residues which were discharged from coking ovens into a small tidal tributary, which became known as the Sydney Tar Ponds (STPs), and subsequently discharged into Sydney Harbour. An Environmental Impact Statement concluded that mobilization of contaminated sediments posed unacceptable ecological risks; therefore, immobilizing contaminants in the STPs using solidification and stabilization was identified as a primary source control remediation option to mitigate against continued transport of contaminated sediments from the STPs into Sydney Harbour. Recent developments in contaminant mass flux techniques focus on understanding “mobile” vs. “immobile” contaminants at remediation sites. Forensic source evaluations are also increasingly used for understanding the origins of PAH contaminants in soils or sediments. Flux and forensic source evaluation-informed remediation decision-making uses this information to develop remediation end point goals aimed at reducing off-site exposure and managing potential ecological risk. This study included a review of previous flux studies, calculation of current mass flux estimates, and a forensic assessment using PAH fingerprint techniques during remediation of one of Canada’s most polluted sites, the STPs. Historically, the STPs were thought to be the major source of PAH contamination in Sydney Harbour, with estimated discharges of nearly 800 kg/year of PAHs. However, during three years of remediation monitoring, only 17-97 kg/year of PAHs were discharged from the STPs, which was also corroborated by an independent PAH flux study during the first year of remediation that estimated 119 kg/year. The estimated mass efflux of PAHs from the STPs during remediation was in stark contrast to the ~2000 kg loading thought necessary to cause a short-term increase in harbour sediment PAH concentrations. These mass flux estimates during remediation were also between three to eight times lower than the PAHs discharged from the STPs a decade prior to remediation, when, at the same time, government studies demonstrated an on-going reduction in PAH concentrations in harbour sediments. Flux results were also corroborated by forensic source evaluations using PAH fingerprint techniques, which found a common source of PAHs for urban soils and marine and aquatic sediments in and around Sydney. Coal combustion (from historical coking) and coal dust transshipment (from current coal transshipment facilities) are likely the principal sources of PAHs in these media, and not the migration of PAH-laden sediments from the STPs during a large-scale remediation project.
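For readers unfamiliar with the term, a contaminant mass flux (load) of the kind quoted above (kg/year) is typically estimated by integrating concentration times discharge over time. The sketch below shows the unit arithmetic with hypothetical monitoring values, not the study's data.

```python
# Minimal contaminant mass-flux sketch (hypothetical values; not the study's monitoring data).
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def annual_load_kg(conc_ug_per_L: float, flow_m3_per_s: float) -> float:
    """Annual load (kg/yr) from a mean concentration (ug/L) and mean discharge (m^3/s)."""
    # 1 m^3 = 1000 L; 1 ug = 1e-9 kg
    return conc_ug_per_L * 1000 * 1e-9 * flow_m3_per_s * SECONDS_PER_YEAR

# Example with placeholder values: 5 ug/L total PAHs at a mean discharge of 0.5 m^3/s.
print(round(annual_load_kg(5.0, 0.5), 1), "kg/yr")
```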

Keywords: contaminated sediment, mass flux, forensic source evaluations, remediation

Procedia PDF Downloads 218
269 Geospatial Modeling Framework for Enhancing Urban Roadway Intersection Safety

Authors: Neeti Nayak, Khalid Duri

Abstract:

Despite the many advances made in transportation planning, the number of injuries and fatalities in the United States involving motorized vehicles near intersections remains largely unchanged year over year. Data from the National Highway Traffic Safety Administration for 2018 indicate that accidents involving motorized vehicles at traffic intersections accounted for 8,245 deaths and 914,811 injuries. Furthermore, collisions involving pedal cyclists killed 861 people (38% at intersections) and injured 46,295 (68% at intersections), while accidents involving pedestrians claimed 6,247 lives (25% at intersections) and injured 71,887 (56% at intersections), the highest tallies registered in nearly 20 years. Some of the causes attributed to the rising number of accidents relate to increasing populations and the associated changes in land and traffic usage patterns, insufficient visibility conditions, and inadequate applications of traffic controls. Intersections that were initially designed with a particular land use pattern in mind may be rendered obsolete by subsequent developments. Many accidents involving pedestrians occur at locations which should have been designed with safe crosswalks. Conventional solutions for evaluating intersection safety often require costly deployment of engineering surveys and analysis, which limits the capacity of resource-constrained administrations to satisfy their community’s needs for safe roadways adequately, effectively relegating mitigation efforts for high-risk areas to post-incident responses. This paper demonstrates how geospatial technology can identify high-risk locations and evaluate the viability of specific intersection management techniques. GIS is used to simulate relevant real-world conditions: the presence of traffic controls, zoning records, locations of interest for human activity, design speed of roadways, topographic details and immovable structures. The proposed methodology provides a low-cost mechanism for empowering urban planners to reduce the risks of accidents, using 2-dimensional data representing multi-modal street networks, parcels, crosswalks and demographic information alongside 3-dimensional models of buildings, elevation, slope and aspect surfaces to evaluate visibility and lighting conditions and to estimate probabilities for jaywalking and risks posed by blind or uncontrolled intersections. The proposed tools were developed using sample areas of Southern California, but the model will scale to other cities which conform to similar transportation standards, given the availability of relevant GIS data.
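As a rough sketch of the kind of low-cost, 2-dimensional screening described above, the example below buffers intersection points and counts crash records falling within each buffer using the third-party geopandas package. File names, buffer distance and the projected CRS are assumptions for illustration, not the paper's actual data or model.

```python
# Minimal GIS screening sketch (hypothetical file names and CRS; not the paper's full model).
import geopandas as gpd

# Assumed inputs: intersection points and crash points, reprojected to a metric CRS
# (EPSG:26911, NAD83 / UTM zone 11N, covers much of Southern California).
intersections = gpd.read_file("intersections.shp").to_crs(epsg=26911)
crashes = gpd.read_file("crashes.shp").to_crs(epsg=26911)

# Buffer each intersection (e.g. 30 m) and count crashes falling inside the buffer.
buffers = intersections.copy()
buffers["geometry"] = buffers.geometry.buffer(30)
joined = gpd.sjoin(crashes, buffers[["geometry"]], how="inner", predicate="within")
counts = joined.groupby("index_right").size().rename("crash_count")

# Rank intersections by crash count as a first-pass, low-cost risk screen.
ranked = intersections.join(counts).fillna({"crash_count": 0}).sort_values("crash_count", ascending=False)
print(ranked.head())
```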

Keywords: crosswalks, cyclist safety, geotechnology, GIS, intersection safety, pedestrian safety, roadway safety, transportation planning, urban design

Procedia PDF Downloads 85
268 Cut-Off of CMV Cobas® Taqman® (CAP/CTM Roche®) for Introduction of Ganciclovir Pre-Emptive Therapy in Allogeneic Hematopoietic Stem Cell Transplant Recipients

Authors: B. B. S. Pereira, M. O. Souza, L. P. Zanetti, L. C. S. Oliveira, J. R. P. Moreno, M. P. Souza, V. R. Colturato, C. M. Machado

Abstract:

Background: The introduction of prophylactic or preemptive therapies has effectively decreased CMV mortality rates after hematopoietic stem cell transplantation (HSCT). CMV antigenemia (pp65) and quantitative PCR are methods currently approved for CMV surveillance in pre-emptive strategies. Commercial assays are preferred, as cut-off levels defined by in-house assays may vary among different protocols and in general show low reproducibility. Moreover, comparison of published data among different centers is only possible if international standards of quantification are included in the assays. Recently, the World Health Organization (WHO) established the first international standard for CMV detection. The real-time PCR COBAS Ampliprep/Cobas TaqMan (CAP/CTM) assay (Roche®) was developed using the WHO standard for CMV quantification. However, the cut-off for the introduction of antivirals has not yet been determined. Methods: We conducted a retrospective study to determine: 1) the sensitivity and specificity of the new CMV CAP/CTM test in comparison with pp65 antigenemia to detect episodes of CMV infection/reactivation, and 2) the cut-off of viral load for introduction of ganciclovir (GCV). The pp65 antigenemia assay was performed, and the corresponding plasma samples were stored at -20°C for further CMV detection by CAP/CTM. Comparison of the tests was performed by kappa index. The appearance of positive antigenemia was considered the state variable to determine the cut-off of CMV viral load by ROC curve. Statistical analysis was performed using SPSS software version 19 (SPSS, Chicago, IL, USA). Results: Thirty-eight patients were included and followed from August 2014 through May 2015. The antigenemia test detected 53 episodes of CMV infection in 34 patients (89.5%), while CAP/CTM detected 37 episodes in 33 patients (86.8%). AG and PCR results were compared in 431 samples, and the kappa index was 30.9%. The median time to first AG detection was 42 (28-140) days, while CAP/CTM detected infection a median of 7 days earlier (34 days, ranging from 7 to 110 days). The optimum cut-off value of CMV DNA to detect positive antigenemia was 34.25 IU/mL, with 88.2% sensitivity, 100% specificity and an AUC of 0.91. This cut-off value is below the limit of detection and quantification of the equipment, which is 56 IU/mL. According to the CMV recurrence definition, 16 episodes of CMV recurrence were detected by antigenemia (47.1%) and 4 (12.1%) by CAP/CTM. The duration of viremia as detected by antigenemia was shorter (60.5% of the episodes lasted ≤ 7 days) in comparison to CAP/CTM (57.9% of the episodes lasted 15 days or more). These data suggest that the use of antigenemia to define the duration of GCV therapy might prompt early interruption of the antiviral, which may favor CMV reactivation. The CAP/CTM PCR could possibly provide safer information concerning the duration of GCV therapy. As prolonged treatment may increase the risk of toxicity, this hypothesis should be confirmed in prospective trials. Conclusions: Even though CAP/CTM by Roche showed great qualitative correlation with the antigenemia technique, the fully automated CAP/CTM did not demonstrate increased sensitivity. The cut-off value below the limit of detection and quantification may result in delayed introduction of pre-emptive therapy.
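The cut-off selection described above (positive antigenemia as the state variable, optimised for sensitivity and specificity on a ROC curve) is commonly implemented with Youden's J statistic. The sketch below uses scikit-learn with small hypothetical arrays, not the study's data.

```python
# Minimal ROC cut-off sketch (hypothetical arrays; not the study's data).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# viral_load: CMV DNA (IU/mL) per sample; antigenemia_pos: 1 if the paired pp65 result was positive.
viral_load = np.array([0, 12, 20, 35, 40, 55, 80, 150, 300, 15, 5, 0])
antigenemia_pos = np.array([0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0])

fpr, tpr, thresholds = roc_curve(antigenemia_pos, viral_load)
auc = roc_auc_score(antigenemia_pos, viral_load)

# Youden's J statistic selects the threshold maximising sensitivity + specificity - 1.
j = tpr - fpr
best = np.argmax(j)
print(f"AUC = {auc:.2f}, optimum cut-off = {thresholds[best]} IU/mL "
      f"(sens = {tpr[best]:.2f}, spec = {1 - fpr[best]:.2f})")
```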

Keywords: antigenemia, CMV COBAS/TAQMAN, cytomegalovirus, antiviral cut-off

Procedia PDF Downloads 166
267 Changing from Crude (Rudimentary) to Modern Method of Cassava Processing in the Ngwo Village of Njikwa Sub Division of North West Region of Cameroon

Authors: Loveline Ambo Angwah

Abstract:

The processing of cassava from tubers or roots into food using crude and rudimentary methods (hand peeling, grating, frying and sun drying) is a very cumbersome and difficult process. The crude methods are time-consuming and labour intensive. On the other hand, the modern processing method, that is, using machines to perform the various processes such as washing, peeling, grinding, oven drying, fermentation and frying, is easier, less time-consuming, and less labour intensive. Rudimentarily, cassava roots are processed into numerous products and utilized in various ways according to local customs and preferences. For the people of Ngwo village, cassava is transformed locally into flour or powder form called ‘cumcum’. It is also soaked in water to give a kind of food called ‘water fufu’ and fried to give ‘garri’. The leaves are consumed as vegetables. Added to these, its relatively high yields and ability to stay underground after maturity for long periods give cassava a considerable advantage as a commodity used by poor rural folks in the community to fight poverty. It plays a major role in efforts to alleviate the food crisis because of its efficient production of food energy, year-round availability, tolerance to extreme stress conditions, and suitability to present farming and food systems in Africa. Improvement of cassava processing and utilization techniques would greatly increase labour efficiency, incomes, and living standards of cassava farmers and the rural poor, as well as enhance the shelf life of products, facilitate their transportation, increase marketing opportunities, and help improve human and livestock nutrition. This paper presents a general overview of the crude cassava processing and utilization methods now used by subsistence and small-scale farmers in Ngwo village of the North West region of Cameroon, and examines the opportunities for improving processing technologies. Cassava needs processing because the roots cannot be stored for long; they rot within 3-4 days of harvest. They are bulky, with about 70% moisture content, and therefore transportation of the tubers to markets is difficult and expensive. The roots and leaves contain varying amounts of cyanide, which is toxic to humans and animals, while the raw cassava roots and uncooked leaves are not palatable. Therefore, cassava must be processed into various forms in order to increase the shelf life of the products, facilitate transportation and marketing, reduce cyanide content and improve palatability.

Keywords: cassava roots, crude ways, food system, poverty

Procedia PDF Downloads 143
266 Malaysia as a Case Study for Climate Policy Integration into Energy Policy

Authors: Marcus Lee

Abstract:

The energy sector is the largest contributor to greenhouse gas emissions in Malaysia, which induce climate change. The climate change problem is therefore an energy sector problem. Tackling climate change issues successfully is contingent on actions taken in the energy sector. The researcher propounds that ‘Climate Policy Integration’ (CPI) into energy policy is a viable and insufficiently developed strategy in Malaysia that promotes the synergies between climate change and energy objectives, in order to achieve the targets found in both climate change and energy policies. In exploring this hypothesis, this paper presentation will focus on two particular aspects. Firstly, the meaning of CPI as an approach and as a concept will be explored. As an approach, CPI into energy policy means the integration of climate change objectives into the energy policy area. Its subject matter focuses on establishing the functional interrelations between climate change and energy objectives, by promoting their synergies and minimising their contradictions. However, its conceptual underpinnings are less than straightforward. Drawing from the ‘principle of integration’ found in international treaties and declarations such as the Stockholm Declaration 1972, the Rio Declaration 1992 and the United Nations Framework Convention on Climate Change 1992 (‘UNFCCC’), this paper presentation will explore the contradictions in international standards on how the sustainable development tenets of environmental sustainability, social development and economic development are to be balanced, and their relevance to CPI. Further, the researcher will consider whether authority may be derived from international treaties and declarations in order to argue for the prioritisation of environmental sustainability over the other sustainable development tenets through CPI. Secondly, this paper presentation will also explore the degree to which CPI into energy policy has been achieved and pursued in Malaysia. In particular, the strength of the conceptual framework with regard to CPI in Malaysian governance will be considered by assessing Malaysia’s National Policy on Climate Change (2009) (‘NPCC 2009’). The development (or the lack thereof) of CPI as an approach since the publication of the NPCC 2009 will also be assessed based on official government documents and policies that may have a climate change and/or energy agenda. Malaysia’s National Renewable Energy Policy and Action Plan (2010), draft National Energy Efficiency Action Plan (2014), Intended Nationally Determined Contributions (2015) in relation to the Paris Agreement, 11th Malaysia Plan (2015) and Biennial Update Report to the UNFCCC (2015) will be discussed. These documents will be assessed for the presence of CPI based on the language/drafting of the documents as well as the degree of subject matter regarding CPI expressed in the documents. Based on the analysis, the researcher will propose solutions on how to improve Malaysia’s climate change and energy governance. The theory of reflexive governance will be applied to CPI. The concluding remarks will consider whether CPI reflects reflexive governance by demonstrating how the governance process can itself be the object of shaping outcomes.

Keywords: climate policy integration, mainstreaming, policy coherence, Malaysian energy governance

Procedia PDF Downloads 167
265 Exploring the Motivations That Drive Paper Use in Clinical Practice Post-Electronic Health Record Adoption: A Nursing Perspective

Authors: Sinead Impey, Gaye Stephens, Lucy Hederman, Declan O'Sullivan

Abstract:

Continued paper use in the clinical area post-Electronic Health Record (EHR) adoption is regularly linked to hardware and software usability challenges. Although paper is used as a workaround to circumvent challenges, including limited availability of a computer, this perspective does not consider the important role paper, such as the nurses’ handover sheet, plays in practice. The purpose of this study is to confirm the hypothesis that paper use post-EHR adoption continues because paper provides both a cognitive tool (that assists with workflow) and a compensation tool (to circumvent usability challenges). Distinguishing the different motivations for continued paper use could assist future evaluations of electronic record systems. Methods: Qualitative data were collected from three clinical care environments (ICU, general ward and specialist day-care) which had used an electronic record for at least 12 months. Data were collected through semi-structured interviews with 22 nurses. Data were transcribed, themes were extracted using an inductive bottom-up coding approach, and a thematic index was constructed. Findings: All nurses interviewed continued to use paper post-EHR adoption. While two distinct motivations for paper use post-EHR adoption were confirmed by the data, paper as a cognitive tool and paper as a compensation tool, a further finding was that there was an overlap between the two uses. That is, paper used as a compensation tool could also be adapted to function as a cognitive aid due to its nature (easy to access and annotate), or vice versa. Rather than present paper persistence as having two distinct motivations, it is more useful to describe it as presenting on a continuum, with compensation tool and cognitive tool at either pole. Paper as a cognitive tool referred to pages such as the nurses’ handover sheet. These did not form part of the patient’s record, although information could be transcribed from one to the other. Findings suggest that although the patient record was digitised, handover sheets did not fall within this remit. These personal pages continued to be useful post-EHR adoption for capturing personal notes or patient information and so continued to be incorporated into the nurses’ work. Comparatively, paper used as a compensation tool, such as pre-printed care plans which were stored in the patient's record, appears to have been instigated in reaction to usability challenges. In these instances, it is expected that paper use could reduce or cease when the underlying problem is addressed. There is a danger that, as paper affords nurses a temporary information platform that is mobile and easy to access and annotate, its use could become embedded in clinical practice. Conclusion: Paper presents a utility to nursing, either as a cognitive or compensation tool or a combination of both. By fully understanding its utility and nuances, organisations can avoid evaluating all incidences of paper use (post-EHR adoption) as arising from usability challenges. Instead, suitable remedies for paper persistence can be targeted at the root cause.

Keywords: cognitive tool, compensation tool, electronic record, handover sheet, nurse, paper persistence

Procedia PDF Downloads 410
264 The International Prohibition of Religiously-Motivated 'Incitement' to Violence

Authors: J. D. Temperman

Abstract:

Introduction: The meaning and scope of freedom of expression have been tested in recent times, particularly in relation to religion. This paper investigates the legal justifications for restrictions that have been suggested in this area and asks whether they are sustainable from an international human rights perspective. The universal human rights instruments, particularly the UN International Covenant on Civil and Political Rights (ICCPR), are increasingly geared towards eradicating ‘incitement’ to contingent harms like violence or discrimination, whilst forms of extreme speech that fall short of such incitement are to be protected rather than countered by states. The Human Rights Committee’s General Comment on freedom of expression, adopted in 2011, provides another strong indication that this is the envisaged way forward: repealing anti-blasphemy and anti-religious defamation laws, whilst simultaneously increasing efforts to combat ‘incitement’. Within regional human rights frameworks, notably the European Convention system, judgments have in fact supported legal restrictions on hate speech, Holocaust denial, and blasphemy or religious defamation. Major contributions to scholarship: This paper proposes an actus reus for the offense of ‘advocacy of religious hatred that constitutes incitement to discrimination or violence’, as enshrined in Article 20(2) of the UN ICCPR. In underscoring the high threshold of ‘incitement’, the author distinguishes this offense from such notions as ‘blasphemy’ or ‘defamation of religions’. In addition to treating the said provision as a sui generis prohibition, the paper addresses the question of whether a ‘right to be protected against incitement’ may be distilled from the ICCPR. Furthermore, the author will discuss the question of how to judge incitement; notably, is mens rea required to convict someone of incitement, and if so, what degree of mens rea? This analysis also includes the question of how to balance content and context factors when addressing alleged instances of incitement; notably, what factors provide for a likelihood that imminent acts of violence or discrimination will ensue from an inciteful speech act? Methodology: This paper takes a double comparative approach: (i) it endeavours to compare and contrast monitoring bodies’ approaches to incitement (notably, the UN Human Rights Committee, but also the UN Committee on the Elimination of Racial Discrimination, which monitors states’ compliance with Article 4 of ICERD on incitement); and (ii) it endeavours to chart, compare and analyse, from an international human rights perspective, recent forms of state practice in the field of dealing with incitement (i.e., a comparative legal analysis and vertical human rights analysis of newly emerging incitement legislation in the light of the said international standards). Conclusion: This paper conceptualizes a legal notion – ‘incitement’ – encapsulated in international human rights law that may have a profound bearing on contemporary challenges of radicalization and religious strife.

Keywords: incitement, international human rights law, religious hatred, violence

Procedia PDF Downloads 288
263 Testing of Infill Walls with Joint Reinforcement Subjected to in Plane Lateral Load

Authors: J. Martin Leal-Graciano, Juan J. Pérez-Gavilán, A. Reyes-Salazar, J. H. Castorena, J. L. Rivera-Salas

Abstract:

The experimental results on the global behavior of twelve 1:2-scaled reinforced concrete frames subjected to in-plane lateral load are presented. The main objective was to generate experimental evidence about the use of steel bars within mortar bed-joints as shear reinforcement in infill walls. Similar to the Canadian and New Zealand standards, the Mexican code includes specifications for this type of reinforcement. However, these specifications were obtained through experimental studies of load-bearing walls, mainly confined walls. Little information is found in the existing literature about the effects of joint reinforcement on the seismic behavior of infill masonry walls. Consequently, the Mexican code establishes the same equations to estimate the contribution of joint reinforcement for both confined walls and infill walls. A confined masonry construction and a reinforced concrete frame infilled with masonry walls have similar appearances. However, substantial differences exist between these two construction systems, mainly related to the sequence of construction and to how these structures support vertical and lateral loads. To achieve the stated objective, ten reinforced concrete frames with masonry infill walls were built and tested in pairs, with both specimens in each pair having identical characteristics except that one of them included joint reinforcement. The variables between pairs were the type of units, the size of the columns of the frame and the aspect ratio of the wall. All cases included tie-columns and tie-beams on the perimeter of the wall to anchor the joint reinforcement. In addition, two bare frames with characteristics identical to those of the infilled frames were tested. The purpose was to investigate the effects of the infill wall on the behavior of the system under in-plane lateral load. The experimental results were also compared with the predictions of the Mexican code. All the specimens were tested in cantilever under reversible cyclic lateral load. To simulate gravity load, a constant vertical load was applied on the top of the columns. The results indicate that the contribution of the joint reinforcement to lateral strength depends on the size of the columns of the frame. Larger columns produce a failure mode that is predominantly a sliding mode. Sliding inhibits the formation of new inclined cracks, which are necessary to activate (deform) the joint reinforcement. Regarding the effects of joint reinforcement on the performance of confined masonry walls, many findings were confirmed for infill walls: this type of reinforcement increases the lateral strength of the wall, produces more distributed cracking and reduces the width of the cracks. Moreover, it reduces the ductility demand of the system at maximum strength. The prediction of the lateral strength provided by the Mexican code is appropriate in some cases; however, the effect of the size of the columns on the contribution of joint reinforcement needs to be better understood.

Keywords: experimental study, Infill wall, Infilled frame, masonry wall

Procedia PDF Downloads 57
262 Food Safety in Wine: Removal of Ochratoxin A in Contaminated White Wine Using Commercial Fining Agents

Authors: Antònio Inês, Davide Silva, Filipa Carvalho, Luís Filipe-Riberiro, Fernando M. Nunes, Luís Abrunhosa, Fernanda Cosme

Abstract:

The presence of mycotoxins in foodstuffs is a matter of concern for food safety. Mycotoxins are toxic secondary metabolites produced by certain molds, with ochratoxin A (OTA) being one of the most relevant. Wines can also be contaminated with these toxicants. Several authors have demonstrated the presence of mycotoxins in wine, especially ochratoxin A. Its chemical structure is a dihydro-isocoumarin connected at the 7-carboxy group to a molecule of L-β-phenylalanine via an amide bond. As these toxicants can never be completely removed from the food chain, many countries have defined maximum levels in food in order to address health concerns. OTA contamination of wines might be a risk to consumer health, thus requiring treatments to achieve acceptable standards for human consumption. The maximum acceptable level of OTA in wines is 2.0 μg/kg according to Commission Regulation No. 1881/2006. Therefore, the aim of this work was to reduce OTA to safer levels using different fining agents and to assess their impact on white wine physicochemical characteristics. To evaluate their efficiency, 11 commercial fining agents (mineral, synthetic, animal and vegetable proteins) were used to develop new approaches for OTA removal from white wine. Trials (including a control without addition of a fining agent) were performed in white wine artificially supplemented with OTA (10 µg/L). OTA analyses were performed after wine fining. Wine was centrifuged at 4000 rpm for 10 min, and 1 mL of the supernatant was collected and mixed with an equal volume of acetonitrile/methanol/acetic acid (78:20:2 v/v/v). Also, the solid fractions obtained after fining were centrifuged (4000 rpm, 15 min), the resulting supernatant discarded, and the pellet extracted with 1 mL of the above solution and 1 mL of H₂O. OTA analysis was performed by HPLC with fluorescence detection. The most effective fining agent in removing OTA (80%) from white wine was a commercial formulation that contains gelatin, bentonite and activated carbon. Removals between 10-30% were obtained with potassium caseinate, yeast cell walls and pea protein. With bentonites, carboxymethylcellulose, polyvinylpolypyrrolidone and chitosan, no considerable OTA removal was observed. Subsequently, the effectiveness of seven commercial activated carbons was also evaluated and compared with the commercial formulation containing gelatin, bentonite and activated carbon. The different activated carbons were applied at the concentration recommended by the manufacturer in order to evaluate their efficiency in reducing OTA levels. Trials and OTA analysis were performed as explained previously. The results showed that in white wine, all activated carbons except one reduced OTA by 100%. The commercial formulation that contains gelatin, bentonite and activated carbon reduced OTA concentration by only 73%. These results may provide useful information for winemakers, namely for the selection of the most appropriate oenological product for OTA removal, reducing wine toxicity and simultaneously enhancing food safety and wine quality.

Keywords: wine, OTA removal, food safety, fining

Procedia PDF Downloads 505
261 Environmental Catalysts for Refining Technology Application: Reduction of CO Emission and Gasoline Sulphur in Fluid Catalytic Cracking Unit

Authors: Loganathan Kumaresan, Velusamy Chidambaram, Arumugam Velayutham Karthikeyani, Alex Cheru Pulikottil, Madhusudan Sau, Gurpreet Singh Kapur, Sankara Sri Venkata Ramakumar

Abstract:

Environmentally driven regulations throughout the world stipulate dramatic improvements in the quality of transportation fuels and refining operations. Exhaust gases such as CO, NOx, and SOx from stationary sources (e.g., refineries) and motor vehicles contribute to a large extent to air pollution. The refining industry is under constant environmental pressure to achieve more rigorous standards on the sulphur content of the fuel used in the transportation sector and on other off-gas emissions. The fluid catalytic cracking unit (FCCU) is a major secondary process in the refinery for gasoline and diesel production. The CO-combustion promoter additive and the gasoline sulphur reduction (GSR) additive are catalytic systems used in the FCCU, along with the main FCC catalyst, to assist the combustion of CO to CO₂ in the regenerator and to regulate sulphur in the gasoline fraction, respectively. The effectiveness of these catalysts is governed by the active metal used, its dispersion, the type of base material employed, and the retention characteristics of the additive in the FCCU, such as attrition resistance and density. The challenge is to have a high-density microsphere catalyst support for its retention and high activity of the active metals, as these catalyst additives are used in low concentration compared to the main FCC catalyst. The first part of the present paper discusses the development of high-density microspheres of nanocrystalline alumina by a hydrothermal method for CO combustion promoter application. Performance evaluation of the additive was conducted under simulated regenerator conditions and shows a CO combustion efficiency above 90%. The second part discusses the efficacy of a co-precipitation method for the generation of the active crystalline spinels of Zn, Mg, and Cu with aluminium oxides as an additive. Characterization and micro-activity tests using a heavy combined hydrocarbon feedstock at FCC unit conditions were carried out to evaluate gasoline sulphur reduction activity. These additives were characterized by X-ray diffraction, NH₃-TPD, N₂ sorption analysis and TPR analysis to establish structure-activity relationships. Sulphur removal mechanisms involving hydrogen transfer, aromatization and alkylation functionalities are established to rank GSR additives for their activity, selectivity, and gasoline sulphur removal efficiency. The sulphur shift to other liquid products such as heavy naphtha, light cycle oil, and clarified oil was also studied. PIONA analysis of the liquid product reveals a 20-40% reduction of sulphur in gasoline without compromising the research octane number (RON) of the gasoline or its olefin content.

Keywords: hydrothermal, nanocrystalline, spinel, sulphur reduction

Procedia PDF Downloads 76
260 Exploring the Career Experiences of Internationally Recruited Nurses at the Royal Berkshire NHS Foundation Trust

Authors: Natalie Preville, Carlos Joel Mejia-Olivares

Abstract:

In the UK, since the early 1950s when the NHS was founded, international staff in the NHS have played an important role. Currently, they represent 16% of the workforce within the NHS in the UK. Furthermore, to address the shortfalls in nursing staff, international recruitment programs have been essential to reduce the gaps in the UK nursing workforce over the last two decades. The NHS Long Term Plan (2019) aims to have a significant reduction of nursing vacancies to 5% by 2028. However, in 2021 and 2022, Workforce Race Equality Standards (WRES) reports stated that there is inequitable Career Progression (CP) among Internationally Recruited (IR) nurses as compared to British counterparts. In addition, there is sufficient literature exploring the motives and lived experiences of IR nurses, which underpins the findings. Therefore, the overall aim of this report is to conduct a scoping project to understand the experiences of the IR nurses who joined the NHS in the South East of England within the last 5 years. Methodology- This document is based on the data from a survey developed by Royal Berkshire NHS Foundation Trust using Microsoft forms and consisted of 23 questions divided into four themes, staff background, career experience, career progression and future career plans within Royal Berkshire NHS Foundation Trust. The descriptive analysis provided the initial analysis of the quantitative data. As a result, 44 responses were collected and evaluated by utilising Microsoft excel. Key findings: Career experiences; 72% of respondents felt that their current role was a good fit, and in a subsequent question, the main reason cited was having “relevant skills”. This indicates that, for the most part, the prior experience of IR nurses is a large factor in their placement, which is viewed positively; the next step is to effectively apply similar relevance in aligning prior experience with career progression opportunities. Moreover, 67% of respondents feel valued by the department/team, which is a great reflection of the values of the Trust being demonstrated towards IR Nurses. However, further studies may be necessary to explore the reasons why the remaining 33% may not feel valued; this can include having a better understanding of cultural perceptions of value. Perceived Barriers: Although 37% of respondents had been promoted since commencing employment with the Trust, the data indicates that there is still room for CP opportunities, as it is the leading barrier reported by the respondents. Secondly, the growing mix of cultures within the nursing workforce gives the appearance of inclusion. However, this is not the experience of some IR nurses. Conclusion statemen: Survey results indicate that this NHS Trust has an excellent foundation to integrate international nurses into their workforce with scope for career progression in a reasonable timeframe. However, it would be recommendable to include fast-tracking career promotions by recognizing previous studies and professional experience. Further exploration of staff career experiences and goals may provide additional useful data for future planning.

Keywords: career progression, International nurses, perceived barriers, staff survey

Procedia PDF Downloads 54
259 A Qualitative Study to Analyze Clinical Coders’ Decision Making Process of Adverse Drug Event Admissions

Authors: Nisa Mohan

Abstract:

Clinical coding is a feasible method for estimating the national prevalence of adverse drug event (ADE) admissions. However, under-coding of ADE admissions is a limitation of this method. Whilst under-coding will affect accurate estimation of the actual burden of ADEs, the feasibility of using coded data to estimate adverse drug event admissions goes much further compared to the other methods. Therefore, it is necessary to know the reasons for the under-coding in order to improve the clinical coding of ADE admissions. The ability to identify the reasons for the under-coding of ADE admissions rests on understanding the decision-making process of coding ADE admissions. Hence, the current study aimed to explore the decision-making process of clinical coders when coding cases of ADE admissions. Clinical coders at different levels of the coding job, such as trainee, intermediate and advanced level coders, were purposively selected for the interviews. Thirteen clinical coders were recruited from two Auckland region District Health Board hospitals for the interview study. Semi-structured, one-on-one, face-to-face interviews using open-ended questions were conducted with the selected clinical coders. Interviews were about 20 to 30 minutes long and were audio-recorded with the approval of the participants. The interview data were analysed using a general inductive approach. The interviews with the clinical coders revealed that the coders have targets to meet, and they sometimes hesitate to adhere to the coding standards. Coders deviate from the standard coding processes to make a decision. Coders avoid contacting the doctors to clarify small doubts, such as ADEs and the names of medications, because of the delay in getting a reply from the doctors. They prefer to do some research themselves or take help from their seniors and colleagues to make a decision because they can avoid a long wait for a reply from the doctors. Coders think of an ADE as a small thing. Lack of time for searching for information to confirm an ADE admission, inadequate communication with clinicians, along with coders' belief that an ADE is a small thing, may contribute to the under-coding of ADE admissions. These findings suggest that further work is needed on interventions to improve the clinical coding of ADE admissions. Providing education to coders about the importance of ADEs, educating clinicians about the importance of clear and confirmed medical record entries, availing pharmacists' services to improve the detection and clear documentation of ADE admissions, and including a mandatory field in the discharge summary about external causes of diseases may be useful for improving the clinical coding of ADE admissions. The findings of the research will help policymakers to make informed decisions about the improvements. This study urges coding policymakers, auditors, and trainers to engage with the unconscious cognitive biases and short-cuts of the clinical coders. This country-specific research conducted in New Zealand may also benefit other countries by providing insight into the clinical coding of ADE admissions and will offer guidance about where to focus changes and improvement initiatives.

Keywords: adverse drug events, clinical coders, decision making, hospital admissions

Procedia PDF Downloads 96
258 The Digital Microscopy in Organ Transplantation: Ergonomics of the Tele-Pathological Evaluation of Renal, Liver, and Pancreatic Grafts

Authors: Constantinos S. Mammas, Andreas Lazaris, Adamantia S. Mamma-Graham, Georgia Kostopanagiotou, Chryssa Lemonidou, John Mantas, Eustratios Patsouris

Abstract:

The process of building a better safety culture, methods of error analysis, and preventive measures starts with understanding the effects of human factors engineering as applied to remote microscopic diagnosis in surgery, and especially in organ transplantation for the evaluation of grafts. A high percentage of solid organs arrive at recipient hospitals and are considered injured or improper for transplantation in the UK. Digital microscopy adds information at a microscopic level about the grafts (G) in Organ Transplant (OT) and may lead to a change in their management. Such a method will reduce the possibility that a diseased G will arrive at the recipient hospital for implantation. Aim: The aim of this study is to analyze the ergonomics of digital microscopy (DM), based on virtual slides, on telemedicine systems (TS) for tele-pathological evaluation (TPE) of grafts (G) in organ transplantation (OT). Material and Methods: By experimental simulation, the ergonomics of DM for microscopic TPE of renal graft (RG), liver graft (LG) and pancreatic graft (PG) tissues was analyzed. In effect, this corresponded to the ergonomics of digital microscopy for TPE in OT by applying a virtual slide (VS) system for graft tissue image capture, for remote diagnosis of possible microscopic inflammatory and/or neoplastic lesions. Experimentation included the development of an experimental telemedicine system (Exp.-TS), similar to an OTE-TS, for simulating the integrated VS-based microscopic TPE of RG, LG and PG. The simulation of DM on TS-based TPE was performed by two specialists on a total of 238 renal graft (RG), 172 liver graft (LG) and 108 pancreatic graft (PG) human tissue digital microscopic images, assessed for inflammatory and neoplastic lesions on the four electronic spaces of the four TS used. Results: Statistical analysis of the specialists' answers about the ability to accurately diagnose diseased RG, LG and PG tissues on the electronic space among the four TS (A, B, C, D) showed that DM on TS for TPE in OT performs best on the electronic space (ES) of a desktop, followed by the ES of the applied Exp.-TS. Tablet and mobile-phone ES appear significantly risky for the application of DM in OT (p<.001). Conclusion: To achieve the largest reduction in errors and adverse events related to the quality of the grafts, human factors engineering must be applied to procurement, design, audit, and awareness-raising activities. Consequently, investment in new training, people, and other changes to management activities will be required for DM in OT. Simulated VS-based TPE with DM of RG, LG and PG tissues after retrieval appears feasible and reliable, and depends on the size of the electronic space of the applied TS, for remotely preventing diseased grafts from being retrieved and/or sent to the recipient hospital and for post-grafting and pre-transplant planning.
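As an illustration of how the specialists' diagnostic accuracy might be compared across the four electronic spaces, the following is a hedged Python (SciPy) sketch of a chi-square test of independence; the counts are placeholders, not the study's data.

```python
# Illustrative sketch (not the authors' code) of comparing diagnostic accuracy
# across the four electronic spaces; the counts below are placeholders.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: desktop, experimental TS, tablet, mobile phone.
# Columns: correct vs. incorrect remote diagnoses (hypothetical counts).
table = np.array([
    [230, 8],
    [220, 18],
    [180, 58],
    [150, 88],
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")  # p < .001 would indicate the display type matters
```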

Keywords: digital microscopy, organ transplantation, tele-pathology, virtual slides

Procedia PDF Downloads 258
257 Pesticides Monitoring in Surface Waters of the São Paulo State, Brazil

Authors: Fabio N. Moreno, Letícia B. Marinho, Beatriz D. Ruiz, Maria Helena R. B. Martins

Abstract:

Brazil is a top consumer of pesticides worldwide, and the São Paulo State is one of the highest consumers among the Brazilian federative states. However, representative data about the occurrence of pesticides in surface waters of the São Paulo State are scarce. This paper presents the results of pesticide monitoring carried out within the Water Quality Monitoring Network of CETESB (the Environmental Agency of the São Paulo State) between 2018 and 2022. Surface water sampling points (21 to 25) were selected within basins of predominantly agricultural land use (5 to 85% of cultivated areas). The samples were collected throughout the year, including high-flow and low-flow conditions, with a sampling frequency of four to six times per year. The selection of pesticide molecules for monitoring followed a prioritization process based on EMBRAPA (Brazilian Agricultural Research Corporation) databases of pesticide use. Pesticide extractions from aqueous samples were performed according to USEPA 3510C and 3546 methods, following quality assurance and quality control procedures. Determination of pesticides in water extracts (ng L-1) was performed by high-performance liquid chromatography coupled with mass spectrometry (HPLC-MS) and by gas chromatography with nitrogen-phosphorus (GC-NPD) and electron capture (GC-ECD) detectors. The results showed higher quantification frequencies (20-65%) in surface water samples for Carbendazim (fungicide), Diuron/Tebuthiuron (herbicides) and Fipronil/Imidaclopride (insecticides). The frequency of observations for these pesticides was generally higher at monitoring points located in sugarcane-cultivated areas. The following pesticides were most frequently quantified above the Aquatic Life Benchmarks for freshwater (USEPA Office of Pesticide Programs, 2023) or the Brazilian federal regulatory standards (CONAMA Resolution no. 357/2005): Atrazine, Imidaclopride, Carbendazim, 2,4-D, Fipronil, and Chlorpyrifos. Higher median concentrations of Diuron and Tebuthiuron in the rainy months (October to March) indicated pesticide transport through surface runoff. However, measurable concentrations in the dry season (April to September) for Fipronil and Imidaclopride also indicate pathways related to subsurface or base-flow discharge after pesticide soil infiltration and leaching, or dry deposition following pesticide air spraying. With the exception of Diuron, no temporal trends in the median concentrations of the most frequently quantified pesticides were observed. These results are important to assist policymakers in developing strategies aimed at reducing pesticide migration from agricultural areas to surface waters. Further studies will be carried out at selected points to investigate potential risks to aquatic biota as a result of pesticide exposure.
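The summary statistics reported above (quantification frequencies, seasonal medians and benchmark exceedances) can be derived from a tidy results table; the following is a minimal Python (pandas) sketch, assuming illustrative file, column and benchmark values rather than the CETESB dataset itself.

```python
# Sketch of the kind of summary reported above, assuming a tidy results table
# with one row per sample and analyte; file, column and benchmark values are illustrative.
import pandas as pd

df = pd.read_csv("pesticide_results.csv")          # columns: point, date, analyte, conc_ng_L
benchmarks = {"Fipronil": 11.0, "Imidaclopride": 10.0, "Atrazine": 1000.0}  # placeholder values, ng/L

df["date"] = pd.to_datetime(df["date"])
df["season"] = df["date"].dt.month.map(lambda m: "rainy" if m in (10, 11, 12, 1, 2, 3) else "dry")

# Quantification frequency per analyte (share of samples with a quantified concentration).
freq = df.groupby("analyte")["conc_ng_L"].apply(lambda s: (s.notna() & (s > 0)).mean() * 100)

# Seasonal medians, e.g. to compare rainy- and dry-season concentrations of Diuron.
medians = df.groupby(["analyte", "season"])["conc_ng_L"].median()

# Exceedances of aquatic-life benchmarks or CONAMA 357/2005 standards.
df["benchmark"] = df["analyte"].map(benchmarks)
exceed = df[df["conc_ng_L"] > df["benchmark"]].groupby("analyte").size()
print(freq.round(1), medians, exceed, sep="\n\n")
```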

Keywords: pesticides monitoring, São Paulo State, water quality, surface waters

Procedia PDF Downloads 37
256 Comparison of Two Home Sleep Monitors Designed for Self-Use

Authors: Emily Wood, James K. Westphal, Itamar Lerner

Abstract:

Background: Polysomnography (PSG) recordings are regularly used in research and clinical settings to study sleep and sleep-related disorders. Typical PSG studies are conducted in professional laboratories and performed by qualified researchers. However, the number of sleep labs worldwide is disproportionate to the increasing number of individuals with sleep disorders like sleep apnea and insomnia. Consequently, there is a growing need to supply cheaper yet reliable means to measure sleep, preferably autonomously by subjects in their own home. Over the last decade, a variety of devices for self-monitoring of sleep became available in the market; however, very few have been directly validated against PSG to demonstrate their ability to perform reliable automatic sleep scoring. Two popular mobile EEG-based systems that have published validation results, the DREEM 3 headband and the Z-Machine, have never been directly compared one to the other by independent researchers. The current study aimed to compare the performance of DREEM 3 and the Z-Machine to help investigators and clinicians decide which of these devices may be more suitable for their studies. Methods: 26 participants have completed the study for credit or monetary compensation. Exclusion criteria included any history of sleep, neurological or psychiatric disorders. Eligible participants arrived at the lab in the afternoon and received the two devices. They then spent two consecutive nights monitoring their sleep at home. Participants were also asked to keep a sleep log, indicating the time they fell asleep, woke up, and the number of awakenings occurring during the night. Data from both devices, including detailed sleep hypnograms in 30-second epochs (differentiating Wake, combined N1/N2, N3; and Rapid Eye Movement sleep), were extracted and aligned upon retrieval. For analysis, the number of awakenings each night was defined as four or more consecutive wake epochs between sleep onset and termination. Total sleep time (TST) and the number of awakenings were compared to subjects’ sleep logs to measure consistency with the subjective reports. In addition, the sleep scores from each device were compared epoch-by-epoch to calculate the agreement between the two devices using Cohen’s Kappa. All analysis was performed using Matlab 2021b and SPSS 27. Results/Conclusion: Subjects consistently reported longer times spent asleep than the time reported by each device (M= 448 minutes for sleep logs compared to M= 406 and M= 345 minutes for the DREEM and Z-Machine, respectively; both ps<0.05). Linear correlations between the sleep log and each device were higher for the DREEM than the Z-Machine for both TST and the number of awakenings, and, likewise, the mean absolute bias between the sleep logs and each device was higher for the Z-Machine for both TST (p<0.001) and awakenings (p<0.04). There was some indication that these effects were stronger for the second night compared to the first night. Epoch-by-epoch comparisons showed that the main discrepancies between the devices were for detecting N2 and REM sleep, while N3 had a high agreement. Overall, the DREEM headband seems superior for reliably scoring sleep at home.
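The epoch-by-epoch agreement analysis was performed in Matlab 2021b and SPSS 27; the sketch below shows an equivalent computation in Python, assuming the two hypnograms have already been aligned into equal-length arrays. The staging codes, example data and the awakening counter are illustrative only (sleep-onset/offset trimming is omitted for brevity).

```python
# Minimal sketch of the epoch-by-epoch agreement analysis, assuming aligned
# equal-length arrays of 30-s epochs from the two devices.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# 0 = Wake, 1 = N1/N2, 2 = N3, 3 = REM (placeholder example data)
dreem    = np.array([0, 1, 1, 2, 2, 3, 1, 0])
zmachine = np.array([0, 1, 2, 2, 2, 3, 3, 0])

kappa = cohen_kappa_score(dreem, zmachine)
print(f"Epoch-by-epoch agreement (Cohen's kappa): {kappa:.2f}")

def count_awakenings(hypnogram, min_epochs=4):
    # Counts runs of >= 4 consecutive wake epochs, per the definition above;
    # trimming to the sleep-onset/final-awakening window is omitted here.
    runs, current = 0, 0
    for stage in hypnogram:
        current = current + 1 if stage == 0 else 0
        if current == min_epochs:        # count each run once it reaches 4 epochs
            runs += 1
    return runs

print(count_awakenings(dreem), count_awakenings(zmachine))
```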

Keywords: DREEM, EEG, sleep monitoring, Z-Machine

Procedia PDF Downloads 82
255 Interactivity as a Predictor of Intent to Revisit Sports Apps

Authors: Young Ik Suh, Tywan G. Martin

Abstract:

Sports apps on a smartphone provide up-to-date information and fast, convenient access to live games. The sports app market has emerged as the second fastest growing app category worldwide. Further, many sports fans use their smartphones to check the schedule of sporting events, players' positions and bios, and videos and highlights. In recent years, a growing number of scholars and practitioners alike have emphasized the importance of interactivity with sports apps, hypothesizing that interactivity plays a significant role in enticing sports app users and that it is a key component in measuring the success of sports apps. Interactivity in sports apps focuses primarily on two functions: (1) two-way communication and (2) active user control, neither of which has been available through traditional mass media and communication technologies. Therefore, the purpose of this study is to examine whether the interactivity functions of sports apps lead to positive outcomes such as intent to revisit. More specifically, this study investigates how three major functions of interactivity (i.e., two-way communication, active user control, and real-time information) influence the attitude of sports app users and their intent to revisit the sports apps. The following hypothesis is proposed: interactivity functions will be positively associated with both attitudes toward sports apps and intent to revisit sports apps. The survey questionnaire includes four parts: (1) an interactivity scale, (2) an attitude scale, (3) a behavioral intention scale, and (4) demographic questions. Data are to be collected from users of the ESPN apps. To examine the relationships among the observed and latent variables and determine the reliability and validity of constructs, confirmatory factor analysis (CFA) is conducted. Structural equation modeling (SEM) is utilized to test the hypothesized relationships among constructs. Additionally, this study compares the proposed interactivity model with a rival model to identify the role of attitude as a mediating factor. The findings of the current study provide several theoretical and practical contributions and implications by extending the research and literature on the important role of interactivity functions in sports apps and sports media consumption behavior. Specifically, this study may improve the theoretical understanding of whether interactivity functions influence user attitudes and intent to revisit sports apps. Additionally, this study identifies which dimensions of interactivity are most important to sports app users. From a practitioner's perspective, the findings provide significant implications: more entrepreneurs and investors in the sport industry need to recognize that high-resolution photos, live streams, and up-to-date stats are in the sports app, right at sports fans' fingertips. The results imply that sport practitioners may need to develop sports mobile apps that offer greater interactivity functions to attract sports fans.
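One possible way to specify the hypothesized measurement and structural model is sketched below using the semopy package in Python; the item names, file name and model layout are assumptions for illustration, not the instrument actually used in the study.

```python
# Hypothetical specification of the proposed model using the semopy package;
# variable names are placeholders for survey items, not the authors' data.
import pandas as pd
import semopy

model_desc = """
# Measurement model (CFA part)
TwoWay   =~ tw1 + tw2 + tw3
Control  =~ uc1 + uc2 + uc3
RealTime =~ rt1 + rt2 + rt3
Attitude =~ at1 + at2 + at3
Revisit  =~ rv1 + rv2 + rv3

# Structural model: interactivity dimensions -> attitude -> intent to revisit
Attitude ~ TwoWay + Control + RealTime
Revisit  ~ Attitude
"""

data = pd.read_csv("espn_app_survey.csv")   # hypothetical item-level responses
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())                      # loadings and path coefficients
print(semopy.calc_stats(model))             # fit indices (CFI, RMSEA, etc.)
```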

Keywords: interactivity, two-way communication, active user control, real-time information, sports apps, attitude, intent to revisit

Procedia PDF Downloads 129
254 Changes of Chemical Composition and Physicochemical Properties of Banana during Ethylene-Induced Ripening

Authors: Chiun-C.R. Wang, Po-Wen Yen, Chien-Chun Huang

Abstract:

Banana is produced in large quantities in tropical and subtropical areas and is one of the important fruits that constitute a valuable source of energy, vitamins and minerals. The ripening and maturity standards of banana vary from country to country depending on the expected shelf life in the market. The composition of bananas changes dramatically during ethylene-induced ripening in ways that affect both nutritive value and commercial utilization. Nevertheless, there are few studies reporting the changes in the physicochemical properties of banana starch during ethylene-induced ripening of green banana. The objectives of this study were to investigate the changes in chemical composition and enzyme activity of banana and the physicochemical properties of banana starch during ethylene-induced ripening. Green bananas were harvested and ripened with ethylene gas at low temperature (15℃) over seven stages. At each stage, banana was sliced and freeze-dried for banana flour preparation. The changes in total starch, resistant starch, chemical composition, physicochemical properties, and the activities of amylase, polyphenol oxidase (PPO) and phenylalanine ammonia lyase (PAL) of banana were analyzed at each stage during ripening. The banana starch was isolated and analyzed for gelatinization properties, pasting properties and microscopic appearance at each stage of ripening. The results indicated that the highest total starch and resistant starch contents of green banana were 76.2% and 34.6%, respectively, at the harvest stage. Both total starch and resistant starch contents declined significantly, to 25.3% and 8.8%, respectively, at the seventh stage. The soluble sugars content of banana increased from 1.21% at the harvest stage to 37.72% at the seventh stage during ethylene-induced ripening. The swelling power of banana flour decreased as ripening progressed, while solubility increased; these results are strongly related to the decrease in starch content of banana flour during ethylene-induced ripening. Both the water-insoluble and alcohol-insoluble solids of banana flour decreased as ripening progressed. The activities of PPO and PAL increased, while the total free phenolics content decreased, as the ripening stages advanced. As the ripening stage extended, the gelatinization enthalpy of banana starch decreased significantly from 15.31 J/g at the harvest stage to 10.55 J/g at the seventh stage. The peak viscosity and setback of banana starch increased as ripening progressed. The highest final viscosity of the banana starch slurry, 5701 RVU, was found at the seventh stage. Scanning electron micrographs showed that the banana starch granules appeared round and elongated, ranging from 10-50 μm, at the harvest stage. As the banana approached ripeness, some parallel striations were observed on the surface of the banana starch granules, which could be caused by enzyme reactions during ripening. These results suggest that green banana at the harvest stage, with its high resistant starch content, could be considered for potential application in healthy foods. The changes in chemical composition and physicochemical properties of banana could be caused by enzymatic hydrolysis during the ethylene-induced ripening treatment.

Keywords: ethylene-induced ripening, banana starch, resistant starch, soluble sugars, physicochemical properties, gelatinization enthalpy, pasting characteristics, microscopic appearance

Procedia PDF Downloads 450
253 Quality by Design in the Optimization of a Fast HPLC Method for Quantification of Hydroxychloroquine Sulfate

Authors: Pedro J. Rolim-Neto, Leslie R. M. Ferraz, Fabiana L. A. Santos, Pablo A. Ferreira, Ricardo T. L. Maia-Jr., Magaly A. M. Lyra, Danilo A. F. Fonte, Salvana P. M. Costa, Amanda C. Q. M. Vieira, Larissa A. Rolim

Abstract:

Initially developed as an antimalarial agent, hydroxychloroquine (HCQ) sulfate is often used as a slow-acting antirheumatic drug in the treatment of disorders of connective tissue. The United States Pharmacopeia (USP) 37 provides a reversed-phase HPLC method for quantification of HCQ. However, this method was not reproducible, producing asymmetric peaks over a long analysis time. The asymmetry of the peak may cause an incorrect calculation of the concentration of the sample. Furthermore, the analysis time is unacceptable, especially regarding the routine of a pharmaceutical industry. The aim of this study was to develop a fast, easy and efficient method for quantification of HCQ sulfate by High Performance Liquid Chromatography (HPLC) based on the Quality by Design (QbD) methodology. This method was optimized in terms of peak symmetry, using the surface area graphic as the Design of Experiments (DoE) and the tailing factor (TF) as the indicator for the Design Space (DS). The reference method used was that described in USP 37 for the quantification of the drug. For the optimized method, a 3³ factorial design was proposed, based on the QbD concepts. The DS was created with the TF (in a range between 0.98 and 1.2) in order to demonstrate the ideal analytical conditions. Changes were made in the composition of the USP mobile phase (USP-MP): USP-MP: Methanol (90:10 v/v, 80:20 v/v and 70:30 v/v), in the flow rate (0.8, 1.0 and 1.2 mL.min-1) and in the oven temperature (30, 35, and 40ºC). The USP method allowed the quantification of the drug only over a long run time (40-50 minutes). In addition, the method uses a high flow rate (1.5 mL.min-1), which increases the consumption of expensive HPLC-grade solvents. The main problem observed was the TF value (1.8), which would be acceptable only if the drug were not a racemic mixture, since co-elution of the isomers can lead to unreliable peak integration. Therefore, optimization was pursued in order to reduce the analysis time, aiming at a better peak resolution and TF. For the optimized method, analysis of the surface-response plot made it possible to confirm the ideal analytical conditions: 45 °C, 0.8 mL.min-1 and 80:20 USP-MP: Methanol. The optimized HPLC method enabled the quantification of HCQ sulfate with a high-resolution peak, showing a TF value of 1.17. This promotes good co-elution of the HCQ isomers, ensuring accurate quantification of the raw material as a racemic mixture. This method also proved to be approximately 18 times faster than the reference method, using a lower flow rate, further reducing solvent consumption and, consequently, the analysis cost. Thus, an analytical method for the quantification of HCQ sulfate was optimized using the QbD methodology. This method proved to be faster and more efficient than the USP method regarding retention time and, especially, peak resolution. The higher resolution of the chromatogram peaks supports the implementation of the method for quantification of the drug as a racemic mixture, without requiring the separation of isomers.
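A sketch of how the 3³ factorial design and a quadratic response surface for the tailing factor could be set up is shown below in Python; the TF responses are placeholders, so the fitted coefficients and the flagged design-space conditions are illustrative only.

```python
# Sketch of a 3^3 factorial design and a quadratic response-surface fit used to
# locate a design space (tailing factor between 0.98 and 1.2). TF values are placeholders.
import itertools
import numpy as np

methanol = [10, 20, 30]        # % methanol in the USP mobile phase
flow     = [0.8, 1.0, 1.2]     # mL/min
temp     = [30, 35, 40]        # oven temperature, deg C

design = np.array(list(itertools.product(methanol, flow, temp)))   # 27 runs
tf = np.random.default_rng(0).uniform(0.9, 1.8, len(design))       # placeholder responses

# Quadratic response surface: TF ~ b0 + linear + interaction + squared terms.
x1, x2, x3 = design.T
X = np.column_stack([np.ones(len(design)), x1, x2, x3,
                     x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2])
coef, *_ = np.linalg.lstsq(X, tf, rcond=None)
tf_hat = X @ coef

# Conditions predicted to fall inside the design space.
in_space = (tf_hat >= 0.98) & (tf_hat <= 1.2)
for cond, pred in zip(design[in_space], tf_hat[in_space]):
    print(cond, round(pred, 2))
```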

Keywords: analytical method, hydroxychloroquine sulfate, quality by design, surface area graphic

Procedia PDF Downloads 612
252 The Role of People in Continuing Airworthiness: A Case Study Based on the Royal Thai Air Force

Authors: B. Ratchaneepun, N.S. Bardell

Abstract:

It is recognized that people are the main drivers in almost all the processes that affect airworthiness assurance. This is especially true in the area of aircraft maintenance, which is an essential part of continuing airworthiness. This work investigates what impact English language proficiency, the intersection of the military and Thai cultures, and the lack of initial and continuing human factors training have on the work performance of maintenance personnel in the Royal Thai Air Force (RTAF). A quantitative research method based on a cross-sectional survey was used to gather data about these three key aspects of “people” in a military airworthiness environment. 30 questions were developed addressing the crucial topics of English language proficiency, impact of culture, and human factors training. The officers and the non-commissioned officers (NCOs) who work for the Aeronautical Engineering Divisions in the RTAF comprised the survey participants. The survey data were analysed to support various hypotheses by using a t-test method. English competency in the RTAF is very important since all of the service manuals for Thai military aircraft are written in English. Without such competency, it is difficult for maintenance staff to perform tasks and correctly interpret the relevant maintenance manual instructions; any misunderstandings could lead to potential accidents. The survey results showed that the officers appreciated the importance of this more than the NCOs, who are the people actually doing the hands-on maintenance work. Military culture focuses on the success of a given mission, and leverages the power distance between the lower and higher ranks. In Thai society, a power distance also exists between younger and older citizens. In the RTAF, such a combination tends to inhibit a just reporting culture and hence hinders safety. The survey results confirmed this, showing that the older people and higher ranks involved with RTAF aircraft maintenance believe that the workplace has a positive safety culture and climate, whereas the younger people and lower ranks think the opposite. The final area of consideration concerned human factors training and non-technical skills training. The survey revealed that those participants who had previously attended such courses appreciated its value and were aware of its benefits in daily life. However, currently there is no regulation in the RTAF to mandate recurrent training to maintain such knowledge and skills. The findings from this work suggest that the people involved in assuring the continuing airworthiness of the RTAF would benefit from: (i) more rigorous requirements and standards in the recruitment, initial training and continuation training regarding English competence; (ii) the development of a strong safety culture that exploits the uniqueness of both the military culture and the Thai culture; and (iii) providing more initial and recurrent training in human factors and non-technical skills.
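The group comparisons described above were based on t-tests; the following is a minimal Python (SciPy) sketch of such a comparison between officers and NCOs on Likert-scale items, with a hypothetical file name and item names.

```python
# Illustrative sketch of the hypothesis tests described above, assuming the
# Likert-scale survey responses are available per participant group.
import pandas as pd
from scipy.stats import ttest_ind

survey = pd.read_csv("rtaf_survey.csv")   # hypothetical columns: rank_group, english_importance, safety_climate

officers = survey[survey["rank_group"] == "officer"]
ncos     = survey[survey["rank_group"] == "nco"]

for item in ["english_importance", "safety_climate"]:
    t, p = ttest_ind(officers[item], ncos[item], equal_var=False)  # Welch's t-test
    print(f"{item}: t = {t:.2f}, p = {p:.3f}")
```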

Keywords: aircraft maintenance, continuing airworthiness, military culture, people, Royal Thai Air Force

Procedia PDF Downloads 106
251 How to Assess the Attractiveness of Business Location According to the Mainstream Concepts of Comparative Advantages

Authors: Philippe Gugler

Abstract:

Goal of the study: The concept of competitiveness has been addressed by economic theorists and policymakers for several hundred years, with both groups trying to understand the drivers of economic prosperity and social welfare. The goal of this contribution is to address the major useful theoretical contributions that permit the identification of the main drivers of a territory's competitiveness. We first present the major contributions found in the classical and neo-classical theories. Then, we concentrate on two major schools providing significant thoughts on the competitiveness of locations: the Economic Geography (EG) School and the International Business (IB) School. Methodology: The study is based on a literature review of the classical and neo-classical theories, the economic geography theories and the international business theories. This literature review establishes links between these theoretical mainstreams. The work follows the academic framework for a meaningful literature review, aimed at responding to our research question and at developing further research in this field. Results: The classical and neo-classical pioneering theories provide initial insights that territories are different and that these differences explain the discrepancies in their levels of prosperity and standards of living. These theories emphasized different factors impacting the level and the growth of productivity in a given area and, therefore, the degree of its competitiveness. However, these theories are not sufficient to identify more precisely the drivers and enablers of location competitiveness and to explain, in particular, the factors that drive the creation of economic activities, the expansion of economic activities, the creation of new firms and the attraction of foreign firms. Prosperity is due to economic activities created by firms. Therefore, we need more theoretical insights to scrutinize the competitive advantages of territories or, in other words, their ability to offer the best conditions that enable economic agents to achieve higher rates of productivity in open markets. Two major theories provide, to a large extent, the needed insights: economic geography theory and international business theory. The economic geography studies scrutinized in this study, from Marshall to Porter, aim to explain the drivers of the concentration of specific industries and activities in specific locations. These agglomerations of activity may be due to the creation of new enterprises, the expansion of existing firms, and the attraction of firms located elsewhere. Regarding this last possibility, the international business (IB) theories focus on the comparative advantages of locations as far as the strategies of multinational enterprises (MNEs) are concerned. According to international business theory, the comparative advantages of a location serve firms not only by exploiting their ownership advantages (mostly as far as market-seeking, resource-seeking and efficiency-seeking investments are concerned) but also by augmenting and/or creating new ownership advantages (strategic asset-seeking investments). The impact of a location on the competitiveness of firms is considered from both sides: the MNE's home country and the MNE's host country.

Keywords: competitiveness, economic geography, international business, attractiveness of businesses

Procedia PDF Downloads 116
250 Polish Adversarial Trial: Analysing the Fairness of New Model of Appeal Proceedings in the Context of Delivered Research

Authors: Cezary Kulesza, Katarzyna Lapinska

Abstract:

Regarding the nature of the notion of a fair trial, the source of the fair trial principle is found in the following acts of international law: art. 6 of the ECHR of 1950 and art. 14 of the International Covenant on Civil and Political Rights of 1966, as well as in art. 45 of the Polish Constitution. However, the problem is that the above-mentioned acts essentially apply the principle of a fair trial to the main hearing and not to appeal proceedings. Therefore, the main thesis of the work is to answer the question of whether the Polish model of appeal proceedings is fair. The paper presents the problem of fair appeal proceedings in Poland in a comparative perspective. Thus, the authors discuss the basic features of the English, German and Russian appeal systems. The matter is also analysed in the context of the recent reforms of Polish criminal procedure, because since 2013 the Polish parliament has significantly changed criminal procedure almost three times: by the Act of 27th September 2013, the Act of 20th February 2015, which came into effect on 1st July 2015, and the Act of 11th March 2016. Most striking is that these three amendments differ from one another, first changing Polish criminal procedure into a more adversarial one and then rejecting all the measures just introduced in the previous acts. An additional intent of the Polish legislator was to amend the forms of plea bargaining: conviction of the defendant without trial or voluntary submission to a penalty, which were supposed to become tools for accelerating the criminal process and, at the same time, implementing the principle of speedy procedure. The next part of the paper discusses how the changes to plea bargaining and the main trial influenced the appellate procedure in Poland. The authors deal with the right to appeal against judgments issued in negotiated case-ending settlements in the light of Art. 2 of Protocol No. 7 to the ECHR and the Polish Constitution. The last part of the presentation focuses on the basic changes in appeals against judgments issued after the main trial. This part of the paper also presents the results of an examination of court files held in the Polish Appeal Courts in Białystok, Łódź and Warsaw. From these considerations it is concluded that the Polish CCP of 1997, in ordinary proceedings, basically meets both standards: the standard adopted in Protocol No. 7 of the Convention and the Polish constitutional standard. However, the examination of case files shows, in particular, the following phenomena: low effectiveness of appeals and growing stability of the challenged judgments of district courts, extensive duration of appeal proceedings and a narrow scope of evidentiary proceedings before the appellate courts. On the other hand, limitations of the right to appeal against judgments issued in consensual modes of criminal proceedings justify the fear that such final judgments may violate the principle of accurate criminal response or the principle of material truth.

Keywords: adversarial trial, appeal, ECHR, England, evidence, fair trial, Germany, Polish criminal procedure, reform, Russia

Procedia PDF Downloads 122
249 The Influence of Microsilica on the Cluster Cracks' Geometry of Cement Paste

Authors: Maciej Szeląg

Abstract:

The changing environmental conditions under which cement composites operate cause a number of phenomena in the structure of the material which result in volume deformation of the composite. These strains can cause cracking of the composite. Cracks merge by propagation or intersect to form a characteristic structure of cracks known as cluster cracks. This characteristic mesh of cracks is relevant to almost all building materials working under service load conditions. Particularly dangerous for a cement matrix is a sudden load at elevated temperature, i.e. thermal shock: a large temperature gradient arising within a relatively short time between the outer surface and the interior of the material can result in crack formation on the surface and in the volume of the material. In this paper, image analysis tools were used to analyze the geometry of the cluster cracks of cement pastes. Four series of specimens made of two different Portland cements were tested. In addition, two series included microsilica as a substitute for 10% of the cement. Within each series, specimens were prepared at three w/b (water/binder) ratios: 0.4, 0.5 and 0.6. The cluster cracks were created by suddenly loading the samples at an elevated temperature of 250°C. Images of the cracked surfaces were obtained by scanning at 2400 DPI. Digital processing and measurements were performed using ImageJ v. 1.46r software. To describe the structure of the cluster cracks, three stereological parameters were proposed: the average cluster area, A̅; the average length of the cluster perimeter, L̅; and the average opening width of a crack between clusters, I̅. The aim of the study was to identify and evaluate the relationships between the measured stereological parameters and the compressive strength and bulk density of the modified cement pastes. The tests of the mechanical and physical properties were carried out in accordance with EN standards. The curves describing the relationships were developed using the least squares method, and the quality of the curve fit to the empirical data was evaluated using three diagnostic statistics: the coefficient of determination, R2; the standard error of estimation, Se; and the coefficient of random variation, W. The use of image analysis allowed a quantitative description of the cluster cracks' geometry. Based on the obtained results, a strong correlation was found between A̅ and L̅, reflecting the fractal nature of the cluster crack formation process. It was noted that the compressive strength and the bulk density of the cement pastes decrease with an increase in the values of the stereological parameters. It was also found that the main factors affecting the cluster cracks' geometry are the cement particle size and the overall content of the binder in the volume of the material. The microsilica reduced the A̅, L̅ and I̅ values compared to those obtained for the plain cement paste samples, which is attributed to the pozzolanic properties of the microsilica.
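The least-squares curve fitting and the three diagnostic statistics named above can be computed as in the Python sketch below; the data points, the chosen curve form and the definition of W as the standard error relative to the mean response are illustrative assumptions, not the paper's values.

```python
# Sketch of fitting a relationship between a stereological parameter (e.g. mean
# cluster area) and compressive strength, with the three diagnostics named above.
# The data points and curve form are placeholders.
import numpy as np
from scipy.optimize import curve_fit

A_mean = np.array([120.0, 180.0, 250.0, 330.0, 420.0, 510.0])   # average cluster area, mm^2
fc     = np.array([58.0, 51.0, 46.0, 40.0, 36.0, 31.0])         # compressive strength, MPa

def model(x, a, b):
    return a * np.exp(b * x)          # one candidate least-squares curve

params, _ = curve_fit(model, A_mean, fc, p0=(60.0, -0.001))
fc_hat = model(A_mean, *params)

n, k = len(fc), len(params)
ss_res = np.sum((fc - fc_hat) ** 2)
ss_tot = np.sum((fc - fc.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                     # coefficient of determination
se = np.sqrt(ss_res / (n - k))               # standard error of estimation
w = se / fc.mean() * 100                     # coefficient of random variation, %
print(f"R2 = {r2:.3f}, Se = {se:.2f} MPa, W = {w:.1f} %")
```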

Keywords: cement paste, cluster cracks, elevated temperature, image analysis, microsilica, stereological parameters

Procedia PDF Downloads 227
248 Analysis of the Relationship between Micro-Regional Human Development and Brazil's Greenhouse Gases Emission

Authors: Geanderson Eduardo Ambrósio, Dênis Antônio Da Cunha, Marcel Viana Pires

Abstract:

Historically, human development has been based on economic gains associated with energy-intensive activities, which are often exhaustive in the emission of Greenhouse Gases (GHGs). This requires the establishment of targets for the mitigation of GHGs in order to decouple human development from emissions and prevent further climate change. Brazil is one of the largest GHG emitters, and it is of critical importance to discuss such reductions in an intra-national framework, with the objective of distributional equity, to explore the country's full mitigation potential without compromising the development of less developed societies. This research presents some incipient considerations about which of Brazil's micro-regions should reduce emissions, when the reductions should be initiated and what their magnitude should be. We started with the methodological assumption that human development and GHG emissions will evolve in the future as their behavior was observed in the past. Furthermore, we assume that once a micro-region becomes developed, it is able to maintain gains in human development without the need to keep growing its GHG emission rates. The human development index and carbon dioxide equivalent emissions (CO2e) were extrapolated to the year 2050, which allowed us to calculate when the micro-regions will become developed and the mass of GHGs emitted. The results indicate that Brazil will emit 300 Gt CO2e into the atmosphere between 2011 and 2050, of which only 50 Gt will be emitted by micro-regions before they become developed and 250 Gt will be released after development. We also determined national mitigation targets and structured reduction schemes in which only the developed micro-regions would be required to reduce emissions. The micro-region of São Paulo, the most developed of the country, should also be the one that reduces emissions the most, emitting, in 2050, 90% less than the value observed in 2010. On the other hand, less developed micro-regions will be responsible for less impactful reductions; i.e., Vale do Ipanema will emit in 2050 only 10% below the value observed in 2010. Such a methodological assumption would lead the country to emit, in 2050, 56.5% less than observed in 2010, so that cumulative emissions between 2011 and 2050 would be reduced by 130 Gt CO2e relative to the initial projection. Associating the magnitude of the reductions with the level of human development of the micro-regions encourages the adoption of policies that favor both variables, as the governmental planner will have to deal both with the increasing demand for higher standards of living and with the increasing magnitude of emission reductions. However, if economic agents do not act proactively at the local and national level, the country is closer to the scenario in which it emits more than the one in which it mitigates emissions. The research highlighted the importance of considering heterogeneity in determining individual mitigation targets and also ratified the theoretical and methodological feasibility of allocating a larger share of the contribution to those who historically emitted more. The proposals and discussions presented should be considered in mitigation policy formulation in Brazil regardless of the adopted reduction target.
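The extrapolation logic described above can be sketched as follows in Python; the development threshold, growth rates and the example micro-region values are placeholders, not the study's estimates.

```python
# Rough sketch of the extrapolation logic described above: project HDI and CO2e
# per micro-region to 2050, find when the region becomes "developed", and split
# cumulative emissions into before- and after-development shares.
# Thresholds, growth rates and the example values are placeholders.
def cumulative_emissions(hdi_2010, hdi_growth, co2e_2010, co2e_growth,
                         developed_hdi=0.80, start=2011, end=2050):
    before = after = 0.0
    hdi, co2e = hdi_2010, co2e_2010
    for year in range(start, end + 1):
        hdi *= (1 + hdi_growth)          # extrapolate past behaviour forward
        co2e *= (1 + co2e_growth)
        if hdi < developed_hdi:
            before += co2e               # emitted while still developing
        else:
            after += co2e                # emitted after reaching development
    return before, after

# Hypothetical micro-region: HDI 0.72 growing 0.5 %/yr, 2.0 Mt CO2e growing 1.5 %/yr.
b, a = cumulative_emissions(0.72, 0.005, 2.0, 0.015)
print(f"before development: {b:.1f} Mt CO2e, after: {a:.1f} Mt CO2e")
```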

Keywords: greenhouse gases, human development, mitigation, intensive energy activities

Procedia PDF Downloads 297
247 Experimental Study of Infill Walls with Joint Reinforcement Subjected to In-Plane Lateral Load

Authors: J. Martin Leal-Graciano, Juan J. Pérez-Gavilán, A. Reyes-Salazar, J. H. Castorena, J. L. Rivera-Salas

Abstract:

The experimental results for the global behavior of twelve 1:2 scaled reinforced concrete frames subjected to in-plane lateral load are presented. The main objective was to generate experimental evidence about the use of steel bars within mortar bed joints as shear reinforcement in infill walls. Similar to the Canadian and New Zealand standards, the Mexican code includes specifications for this type of reinforcement. However, these specifications were obtained through experimental studies of load-bearing walls, mainly confined walls. Little information is found in the existing literature about the effects of joint reinforcement on the seismic behavior of infill masonry walls. Consequently, the Mexican code establishes the same equations to estimate the contribution of joint reinforcement for both confined walls and infill walls. Confined masonry construction and a reinforced concrete frame infilled with masonry walls have similar appearances. However, substantial differences exist between these two construction systems, mainly related to the sequence of construction and to how these structures support vertical and lateral loads. To achieve the stated objective, ten reinforced concrete frames with masonry infill walls were built and tested in pairs, with both specimens in each pair having identical characteristics except that one of them included joint reinforcement. The variables between pairs were the type of units, the size of the columns of the frame, and the aspect ratio of the wall. All cases included tie columns and tie beams on the perimeter of the wall to anchor the joint reinforcement. Also, two bare frames with characteristics identical to the infilled frames were tested, in order to investigate the effects of the infill wall on the behavior of the system under in-plane lateral load. All the specimens were tested as cantilevers under reversible cyclic lateral load. To simulate gravity load, a constant vertical load was applied on top of the columns. The results indicate that the contribution of the joint reinforcement to lateral strength depends on the size of the columns of the frame. Larger columns produce a failure mode that is predominantly a sliding mode. Sliding inhibits the formation of new inclined cracks, which are necessary to activate (deform) the joint reinforcement. Regarding the effects of joint reinforcement on the performance of confined masonry walls, many findings were confirmed for infill walls: this type of reinforcement increases the lateral strength of the wall, produces more distributed cracking, and reduces the width of the cracks. Moreover, it reduces the ductility demand of the system at maximum strength. The prediction of the lateral strength provided by the Mexican code is appropriate in some cases; however, the effect of the size of the columns on the contribution of joint reinforcement needs to be better understood.

Keywords: experimental study, infill wall, infilled frame, masonry wall

Procedia PDF Downloads 153