Search results for: KUD (Rural Unit Cooperative)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4196

116 Design and Implementation of an Affordable Electronic Medical Records in a Rural Healthcare Setting: A Qualitative Intrinsic Phenomenon Case Study

Authors: Nitika Sharma, Yogesh Jain

Abstract:

Introduction: An efficient information system helps improve service delivery and provides the foundation for policy and regulation of the other building blocks of a health system. Healthcare organizations require the integrated working of their various sub-systems. Efficient EMR software boosts teamwork amongst these sub-systems, thereby improving service delivery. Although EMR has received a huge impetus under the Digital India initiative, it has still not been mandated in India and is generally implemented only in well-funded public or private healthcare organizations. Objective: The study was conducted to understand the factors that lead to the successful adoption of an affordable EMR in a low-level healthcare organization. It intended to understand the design of the EMR and address the solutions to the challenges faced in its adoption. Methodology: The study was conducted in a non-profit registered healthcare organization that has been providing healthcare facilities to more than 2500 villages, including certain areas that are difficult to access. The data were collected with the help of field notes, in-depth interviews and participant observation. A total of 16 participants using the EMR from different departments were enrolled via a purposive sampling technique. The participants included in the study were working in the organization before the implementation of the EMR system. The study was conducted over a one-month period from 25 June to 20 July 2018. Ethical approval was obtained from the institute along with the prior consent of the participants. Data analysis: A word document of more than 4000 words was obtained after transcribing and translating the answers of the respondents. It was further analyzed by focused coding, a line-by-line review of the transcripts, underlining words, phrases or sentences that might suggest themes, to carry out thematic narrative analysis. 
Results: The results were thematically grouped under four headings: 1. governance of the organization, 2. architecture and design of the software, 3. features of the software, 4. challenges faced in adoption and the solutions to address them. It was inferred that the successful implementation was attributable to the easy and comprehensive design of the system, which has facilitated not only easy data storage and retrieval but also contributed to constructing a decision support system for the staff. Portability has led to increased acceptance by physicians. The proper division of labor, increased efficiency of staff, incorporation of auto-correction features and facilitation of task shifting have led to increased acceptance amongst the users of various departments. Geographical inhibitions, low computer literacy and high patient load were the major challenges faced during implementation. Despite dual efforts made both by the architects and administrators to combat these challenges, the organization still faces certain ongoing challenges. Conclusion: Whenever any new technology is adopted, there are certain innovators, early adopters, late adopters and laggards. The same pattern was followed in the adoption of this software. The challenges were overcome with the joint efforts of organization administrators and users. This case study thereby provides a framework for implementing similar systems in the public sector of countries that are struggling to digitize healthcare amid a crunch of human and financial resources.

Keywords: EMR, healthcare technology, e-health, EHR

Procedia PDF Downloads 87
115 Unpacking the Rise of Social Entrepreneurship over Sustainable Entrepreneurship among Sri Lankan Exporters in SMEs Sector: A Case Study in Sri Lanka

Authors: Amarasinghe Shashikala, Pramudika Hansini, Fernando Tajan, Rathnayake Piyumi

Abstract:

This study investigates the prominence of the social entrepreneurship (SE) model over the sustainable entrepreneurship model among Sri Lankan exporters in the small and medium enterprise (SME) sector. The primary objective of this study is to explore how the unique socio-economic contextual nuances of the country influence this behavior. The study employs a multiple-case study approach, collecting data from thirteen social enterprises in the SME sector. The findings reveal a significant alignment between SE and the lifestyle of the people in Sri Lanka, attributed largely to its deep-rooted religious setting and cultural norms. A crucial factor driving the prominence of SE is the predominantly labor-intensive nature of production processes among exporters in the SME sector. These processes inherently lend themselves to SE, providing employment opportunities and fostering community engagement. Further, SE initiatives substantially resonate with community-centric practices, making them more appealing and accessible to the local populace. In contrast, the findings highlight a dilemma between cost-effectiveness and sustainable entrepreneurship. Transitioning to sustainable export products and production processes is demanded by foreign buyers and acknowledged as essential for environmental stewardship, but it often requires capital-intensive makeovers. This investment inevitably raises the overall cost of the export product, making it less competitive in the global market. Interestingly, the study notes a disparity between international demand for sustainable products and the willingness of buyers to pay a premium for them. Despite the growing global preference for eco-friendly options, the findings suggest that the additional costs associated with sustainable entrepreneurship are not adequately reflected in the purchasing behavior of international buyers. 
The abundance of natural resources coupled with a minimal occurrence of natural catastrophes renders exporters less environmentally sensitive. The absence of robust policy support for environmental preservation exacerbates this inclination. Consequently, exporters exhibit a diminished motivation to incorporate environmental sustainability into their business decisions. Instead, attention is redirected towards factors such as the local population's minimum standards of living, prevalent social issues, governmental corruption and inefficiency, and rural poverty. These elements impel exporters to prioritize social well-being when making business decisions. Notably, the emphasis on social impact, rather than environmental impact, appears to be a generational trend, perpetuating a focus on societal aspects in the realm of business. In conclusion, the manifestation of entrepreneurial behavior within developing nations is notably contingent upon contextual nuances. This investigation contributes to a deeper understanding of the dynamics shaping the prevalence of SE over sustainable entrepreneurship among Sri Lankan exporters in the SME sector. The insights generated have implications for policymakers, industry stakeholders, and academics seeking to navigate the delicate balance between socio-cultural values, economic feasibility, and environmental sustainability in the pursuit of responsible business practices within the export sector.

Keywords: small and medium enterprises, social entrepreneurship, Sri Lanka, sustainable entrepreneurship

Procedia PDF Downloads 51
114 Case Study on Innovative Aquatic-Based Bioeconomy for Chlorella sorokiniana

Authors: Iryna Atamaniuk, Hannah Boysen, Nils Wieczorek, Natalia Politaeva, Iuliia Bazarnova, Kerstin Kuchta

Abstract:

Over the last decade, due to climate change and a strategy of natural resource preservation, interest in aquatic biomass has dramatically increased. Along with mitigating environmental pressure and connecting waste streams (including CO2 and heat emissions), a microalgae bioeconomy can supply the food, feed, pharmaceutical and power industries with a number of value-added products. Furthermore, in comparison to conventional biomass, microalgae can be cultivated in a wide range of conditions without compromising food and feed production, thus addressing issues associated with negative social and environmental impacts. This paper presents the state-of-the-art technology for a microalgae bioeconomy, from the cultivation process to the production of valuable components and by-streams. The microalga Chlorella sorokiniana was cultivated in a pilot-scale innovation concept in Hamburg (Germany) using different systems such as a raceway pond (5000 L) and flat panel reactors (8 x 180 L). In order to achieve optimum growth conditions along with a cellular composition suitable for the subsequent extraction of value-added components, process parameters such as light intensity, temperature and pH are continuously monitored. Metabolic needs in nutrients were met by the addition of micro- and macro-nutrients into the medium to ensure autotrophic growth conditions of the microalgae. Cultivation was followed by downstream processing and extraction of lipids, proteins and saccharides. Lipid extraction was conducted in repeated-batch semi-automatic mode using the hot extraction method according to Randall. Hexane and ethanol were used as solvents at ratios of 9:1 and 1:9, respectively. Depending on the cell disruption method and the solvent ratio, the total lipid content showed significant variation between 8.1% and 13.9%. 
The highest percentage of extracted biomass was reached with a sample pretreated with microwave digestion using 90% hexane and 10% ethanol as solvents. The protein content of the microalgae was determined by two different methods, namely Total Kjeldahl Nitrogen (TKN), which was then converted to protein content, and the Bradford method using Brilliant Blue G-250 dye. The obtained results showed a good correlation between both methods, with protein content in the range of 39.8–47.1%. Characterization of neutral and acid saccharides from the microalgae was conducted by the phenol-sulfuric acid method at two wavelengths, 480 nm and 490 nm. The average concentrations of neutral and acid saccharides under the optimal cultivation conditions were 19.5% and 26.1%, respectively. Subsequently, the biomass residues were used as substrate for anaerobic digestion at the laboratory scale. The methane concentration, which was measured on a daily basis, showed some variation between samples after the extraction steps but remained in the range of 48% to 55%. The CO2 formed during the fermentation process and after combustion in the Combined Heat and Power unit can potentially be reused within the cultivation process as a carbon source for the photoautotrophic synthesis of biomass.
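Colorimetric quantifications like the Bradford assay described above reduce to fitting a linear standard curve of absorbance against known protein concentrations and interpolating the sample reading. The sketch below illustrates this pattern with assumed absorbance values, not the study's data:

```python
import numpy as np

# Hypothetical BSA standard curve for a Bradford assay (illustrative values only)
std_conc = np.array([0.0, 0.25, 0.5, 1.0, 1.5])      # protein standards, mg/mL
std_abs = np.array([0.00, 0.12, 0.24, 0.49, 0.72])   # absorbance at 595 nm

# Linear fit: absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_abs, 1)

# Interpolate an unknown sample's concentration from its absorbance
sample_abs = 0.40
sample_conc = (sample_abs - intercept) / slope
print(f"{sample_conc:.2f} mg/mL")
```

The same calibrate-and-interpolate pattern applies to the phenol-sulfuric acid saccharide assay, with readings taken at 480 nm and 490 nm instead.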

Keywords: bioeconomy, lipids, microalgae, proteins, saccharides

Procedia PDF Downloads 230
113 Empirical Modeling and Spatial Analysis of Heat-Related Morbidity in Maricopa County, Arizona

Authors: Chuyuan Wang, Nayan Khare, Lily Villa, Patricia Solis, Elizabeth A. Wentz

Abstract:

Maricopa County, Arizona, has a semi-arid hot desert climate and is one of the hottest regions in the United States. The exacerbated urban heat island (UHI) effect caused by rapid urbanization has made the urban area even hotter than its rural surroundings. The Phoenix metropolitan area experiences extremely high temperatures in the summer, from June to September, with daily highs reaching 120 °F (48.9 °C). Morbidity and mortality due to environmental heat are therefore a significant public health issue in Maricopa County, especially because they are largely preventable. Public records from the Maricopa County Department of Public Health (MCDPH) revealed that between 2012 and 2016 there were 10,825 heat-related morbidity incidents, 267 outdoor environmental heat deaths, and 173 indoor heat-related deaths. Much research has examined heat-related death and its contributing factors around the world, but little has been done on heat-related morbidity, especially for regions that are naturally hot in the summer. The objective of this study is to examine the demographic, socio-economic, housing, and environmental factors that contribute to heat-related morbidity in Maricopa County. We obtained heat-related morbidity data between 2012 and 2016 at the census tract level from MCDPH. Demographic, socio-economic, and housing variables were derived using 2012-2016 American Community Survey 5-year estimates from the U.S. Census. Remotely sensed Landsat 7 ETM+ and Landsat 8 OLI satellite images and Level-1 products were acquired for all the summer months (June to September) from 2012 to 2016. The National Land Cover Database (NLCD) 2016 percent tree canopy and percent developed imperviousness data were obtained from the U.S. Geological Survey (USGS). We used ordinary least squares (OLS) regression analysis to examine the empirical relationship between the independent variables and heat-related morbidity rate. 
Results showed that higher morbidity rates are found in census tracts with higher values for population aged 65 and older, population under poverty, disability, no vehicle ownership, white non-Hispanic population, population with less than a high school degree, land surface temperature, and surface reflectance, but lower values for the normalized difference vegetation index (NDVI) and housing occupancy. The regression model explains up to 59.4% of the total variation in heat-related morbidity in Maricopa County. The multiscale geographically weighted regression (MGWR) technique was then used to examine the spatially varying relationships between heat-related morbidity rate and all the significant independent variables. The R-squared value of the MGWR model increased to 0.691, a significant improvement in goodness-of-fit over the global OLS model, which means that the spatial heterogeneity of some independent variables is another important factor influencing heat-related morbidity in Maricopa County. Among these variables, population aged 65 and older, Hispanic population, disability, vehicle ownership, and housing occupancy have much stronger local effects than the others.
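A global OLS fit of the kind reported above can be sketched in a few lines. The predictors and data below are synthetic stand-ins for the study's census-tract variables, with coefficient signs chosen to match the reported directions (positive for elderly population and poverty, negative for NDVI):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of census tracts

# Synthetic stand-ins for the study's predictors
pct_over_65 = rng.uniform(0.0, 0.4, n)
pct_poverty = rng.uniform(0.0, 0.5, n)
ndvi = rng.uniform(0.0, 0.8, n)

# Simulated morbidity rate consistent with the reported coefficient signs
morbidity = (10 + 30 * pct_over_65 + 25 * pct_poverty
             - 12 * ndvi + rng.normal(0, 2, n))

# Ordinary least squares on a design matrix with an intercept column
X = np.column_stack([np.ones(n), pct_over_65, pct_poverty, ndvi])
beta, *_ = np.linalg.lstsq(X, morbidity, rcond=None)

# Coefficient of determination, the quantity reported as 59.4% for the full model
resid = morbidity - X @ beta
r2 = 1 - (resid @ resid) / np.sum((morbidity - morbidity.mean()) ** 2)
print(beta.round(2), round(r2, 3))
```

MGWR then relaxes the assumption of spatially constant coefficients, fitting each predictor at its own spatial bandwidth, which is what allows the local effects described above to emerge.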

Keywords: census, empirical modeling, heat-related morbidity, spatial analysis

Procedia PDF Downloads 108
112 South-Mediterranean Oak Forests Management in a Changing Climate: Case of the National Park of Tlemcen, Algeria

Authors: K. Bencherif, M. Bellifa

Abstract:

The climatic changes expected in North Africa are an increase in both the intensity and frequency of summer droughts and a reduction in water availability during the growing season. The existing coppices and forest formations in the national park of Tlemcen are dominated by holm oak, zen oak and cork oak. These open, fragmented structures do not seem strong enough to promise durable protection against climate change. Given the observed climatic tendency, the objective is to analyze the climatic context and its evolution, taking into account the likely behavior of the oak species during the next 20-30 years on the one hand, and the landscape context, in relation to the most suitable silvicultural models and especially to human activities, on the other. The study methodology is based on climatic synthesis and on floristic and spatial analysis. Meteorological data from the decade 1989-2009 are used to characterize the current climate. Another approach, based on dendrochronological analysis of a 120-year-old Aleppo pine stem sampled in the park, is used to analyze the climate's evolution over one century. Results on climate evolution over the next 50 years, obtained through predictive climate models, are exploited to project the climatic tendency in the park. Spatially, stratified sampling is carried out in each forest unit of the park to reduce the degree of heterogeneity and to delineate the different stands easily using GPS. Results from a previous study are used to analyze the anthropogenic factor. According to the forecasts for the period 2025-2100, the number of warm days with a temperature over 25°C would increase from 30 to 70. The monthly mean temperatures of the maxima (M) and minima (m) would rise from 30.5°C to 33°C and from 2.3°C to 4.8°C, respectively. With an average drop of 25%, precipitation would be reduced to 411.37 mm. 
These new data highlight the importance of fire risk and of the water stress which would affect the vegetation and the regeneration process. Spatial analysis highlights the forest and agricultural dimensions of the park compared to urban habitat and bare soils. Maps show both the state of fragmentation and the regression of forest surface (50% of the total surface). At the level of the park, fires have already affected all types of cover, creating low structures of various densities. On the silvicultural plane, zen oak forms pure stands in some places, and this invasion must be considered a natural tendency in which zen oak becomes the structuring species. Climate-related changes are minor compared with the real impact that South-Mediterranean forests are undergoing because of the human pressures they support. Nevertheless, the hardwood oak stands in the national park of Tlemcen will have to face unexpected climate changes such as a changing rainfall regime associated with a lengthening of the period of water stress, heavy rainfall and/or sudden cold snaps. Faced with these new conditions, management based on a mixed uneven-aged high forest method promoting the more dynamic species could be an appropriate measure.

Keywords: global warming, mediterranean forest, oak shrub-lands, Tlemcen

Procedia PDF Downloads 375
111 Molecular Detection and Antibiotics Resistance Pattern of Extended-Spectrum Beta-Lactamase Producing Escherichia coli in a Tertiary Hospital in Enugu, Nigeria

Authors: I. N. Nwafia, U. C. Ozumba, M. E. Ohanu, S. O. Ebede

Abstract:

Antibiotic resistance is increasing globally and has become a major health challenge. Extended-spectrum beta-lactamases (ESBLs) are clinically important because ESBL genes are mostly plasmid-encoded, and these plasmids frequently carry genes encoding resistance to other classes of antimicrobials, thereby limiting antibiotic options in the treatment of infections caused by these organisms. The specific objectives of this study were to determine the prevalence of ESBL production in Escherichia coli, to determine the antibiotic susceptibility pattern of ESBL-producing Escherichia coli, to detect the TEM, SHV and CTX-M genes, and to identify the risk factors for acquisition of ESBL-producing Escherichia coli. The protocol of the study was approved by the Health Research and Ethics Committee of the University of Nigeria Teaching Hospital (UNTH), Enugu. It was a descriptive cross-sectional study that involved all hospitalized patients in UNTH from whose specimens Escherichia coli was isolated during the period of the study. The samples analysed were urine, wound swabs, blood and cerebrospinal fluid. These samples were cultured on 5% sheep blood agar and MacConkey agar (Oxoid Laboratories, Cambridge, UK) and incubated at 35-37°C for 24 hours. Escherichia coli was identified with standard biochemical tests and confirmed using the API 20E auxanogram (bioMerieux, Marcy l'Etoile, France). Antibiotic susceptibility testing was done by the disc diffusion method and interpreted according to the Clinical and Laboratory Standards Institute guideline. ESBL production was confirmed using ESBL Epsilometer test strips (Liofilchem srl, Italy). The ESBL bla genes were detected by polymerase chain reaction after extraction of DNA with a plasmid mini-prep kit (Jena Bioscience, Jena, Germany). Data analysis was with appropriate descriptive and inferential statistics. 
One hundred and six (53.00%) of the 200 isolates were from urine, followed by isolates from different swab specimens, 53 (26.50%); the smallest number of isolates, 4 (2.00%), were from blood (P value = 0.096). Seventy (35.00%) of the 200 isolates were confirmed positive for ESBL production. Forty-two (60.00%) of these isolates were from female patients while 28 (40.00%) were from male patients (P value = 0.13). Sixty-eight (97.14%) of the isolates were susceptible to imipenem, while all of the isolates were resistant to ampicillin, chloramphenicol and tetracycline. Among the 70 positive isolates, the ESBL genes detected by polymerase chain reaction were blaCTX-M (n=26; 37.14%), blaTEM (n=7; 10.00%), blaSHV (n=2; 2.86%), blaCTX-M/TEM (n=7; 10.00%), blaCTX-M/SHV (n=14; 20.00%) and blaCTX-M/TEM/SHV (n=10; 14.29%). No gene was detected in 4 (5.71%) of the isolates. The risk factors most associated with infections caused by ESBL-producing Escherichia coli were previous antibiotic use within the past 3 months, followed by admission to the intensive care unit, recent surgery, and urinary catheterization. In conclusion, ESBLs were detected in 4 of every 10 Escherichia coli isolates, with CTX-M being the predominant gene detected. This knowledge will enable appropriate measures towards the improvement of patient health care, antibiotic stewardship, research and infection control in the hospital.
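The gene distribution reported above can be sanity-checked with quick arithmetic: the counts sum to the 70 ESBL-positive isolates and reproduce the quoted percentages.

```python
# Reported bla gene counts among the 70 ESBL-positive isolates
genes = {
    "blaCTX-M": 26,
    "blaTEM": 7,
    "blaSHV": 2,
    "blaCTX-M/TEM": 7,
    "blaCTX-M/SHV": 14,
    "blaCTX-M/TEM/SHV": 10,
    "no gene detected": 4,
}

total = sum(genes.values())  # 70 isolates in all
for gene, n in genes.items():
    print(f"{gene}: {n} ({100 * n / total:.2f}%)")
# e.g. blaCTX-M: 26 (37.14%), matching the figure quoted above
```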

Keywords: antimicrobial, Escherichia coli, extended spectrum beta lactamase, resistance

Procedia PDF Downloads 276
110 Evaluation: Developing an Appropriate Survey Instrument for E-Learning

Authors: Brenda Ravenscroft, Ulemu Luhanga, Bev King

Abstract:

A comprehensive evaluation of online learning needs to include a blend of educational design, technology use, and online instructional practices that integrate technology appropriately for developing and delivering quality online courses. Research shows that classroom-based evaluation tools do not adequately capture the dynamic relationships between content, pedagogy, and technology in online courses. Furthermore, studies suggest that using classroom evaluations for online courses yields lower than normal scores for instructors, and may affect faculty negatively in terms of administrative decisions. In 2014, the Faculty of Arts and Science at Queen’s University responded to this evidence by seeking an alternative to the university-mandated evaluation tool, which is designed for classroom learning. The Faculty is deeply engaged in e-learning, offering a large variety of online courses and programs in the sciences, social sciences, humanities and arts. This paper describes the process by which a new student survey instrument for online courses was developed and piloted, the methods used to analyze the data, and the ways in which the instrument was subsequently adapted based on the results. It concludes with a critical reflection on the challenges of evaluating e-learning. The Student Evaluation of Online Teaching Effectiveness (SEOTE), developed by Arthur W. Bangert in 2004 to assess constructivist-compatible online teaching practices, provided the starting point. Modifications were made in order to allow the instrument to serve the two functions required by the university: student survey results provide the instructor with feedback to enhance their teaching, and also provide the institution with evidence of teaching quality in personnel processes. 
Changes were therefore made to the SEOTE to distinguish more clearly between evaluation of the instructor’s teaching and evaluation of the course design, since, in the online environment, the instructor is not necessarily the course designer. After the first pilot phase, involving 35 courses, the results were analyzed using Stobart's validity framework as a guide. This process included statistical analyses of the data to test for reliability and validity, student and instructor focus groups to ascertain the tool’s usefulness in terms of the feedback it provided, and an assessment of the utility of the results by the Faculty’s e-learning unit responsible for supporting online course design. A set of recommendations led to further modifications to the survey instrument prior to a second pilot phase involving 19 courses. Following the second pilot, statistical analyses were repeated, and more focus groups were used, this time involving deans and other decision makers to determine the usefulness of the survey results in personnel processes. As a result of this inclusive process and robust analysis, the modified SEOTE instrument is currently being considered for adoption as the standard evaluation tool for all online courses at the university. Audience members at this presentation will be stimulated to consider factors that differentiate effective evaluation of online courses from classroom-based teaching. They will gain insight into strategies for introducing a new evaluation tool in a unionized institutional environment, and methodologies for evaluating the tool itself.

Keywords: evaluation, online courses, student survey, teaching effectiveness

Procedia PDF Downloads 250
109 Identifying Risk Factors for Readmission Using Decision Tree Analysis

Authors: Sıdıka Kaya, Gülay Sain Güven, Seda Karsavuran, Onur Toka

Abstract:

This study is part of an ongoing research project supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 114K404; participation in this conference was supported by the Hacettepe University Scientific Research Coordination Unit under Project Number 10243. Evaluation of hospital readmissions is gaining importance in terms of quality and cost and is becoming the target of national policies. In Turkey, hospital readmission is a relatively new topic on the agenda and very few studies have been conducted on it. The aim of this study was to determine 30-day readmission rates and risk factors for readmission. Whether readmission was planned, related to the prior admission, and avoidable or not was also assessed. The study was designed as a prospective cohort study. 472 patients hospitalized in the internal medicine departments of a university hospital in Turkey between February 1, 2015 and April 30, 2015 were followed up. Analyses were conducted using IBM SPSS Statistics version 22.0 and SPSS Modeler 16.0. The average age of the patients was 56, and 56% of the patients were female. Among these patients, 95 were readmitted. The overall readmission rate was 20% (95/472). However, only 31 readmissions were unplanned, for an unplanned readmission rate of 6.5% (31/472). Of the 31 unplanned readmissions, 24 were related to the prior admission, and only 6 related readmissions were avoidable. To determine risk factors for readmission, we constructed a Chi-square automatic interaction detector (CHAID) decision tree. CHAID decision trees are nonparametric procedures that make no assumptions about the underlying data. The algorithm determines how independent variables best combine to predict a binary outcome based on 'if-then' logic, partitioning each independent variable into mutually exclusive subsets based on the homogeneity of the data. 
The independent variables included in the analysis were: clinic of the department, occupied beds/total number of beds in the clinic at the time of discharge, age, gender, marital status, educational level, distance to residence (km), number of people living with the patient, any person to help with his/her care at home after discharge (yes/no), regular source (physician) of care (yes/no), day of discharge, length of stay, ICU utilization (yes/no), total comorbidity score, means for each of the 3 dimensions of the Readiness for Hospital Discharge Scale (patient’s personal status, patient’s knowledge, and patient’s coping ability), and number of daycare admissions within 30 days of discharge. To balance the data, we included all 95 readmitted patients (46.12%) in the analysis but only 111 (53.88%) of the 377 non-readmitted patients. The risk factors identified for readmission were total comorbidity score, gender, patient’s coping ability, and patient’s knowledge. The strongest identifying factor for readmission was the comorbidity score: if a patient’s comorbidity score was higher than 1, the risk of readmission increased. The results of this study need to be validated on other datasets with more patients. However, we believe that this study will guide further studies of readmission and that CHAID is a useful tool for identifying risk factors for readmission.
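At each node, CHAID selects the predictor whose partition yields the largest chi-square statistic against the outcome. A minimal single-split sketch of that idea, on simulated data rather than the study's patients, might look like this; the candidate split "comorbidity > 1" mirrors the threshold reported above:

```python
import numpy as np

def chi2_stat(groups, outcome):
    """Pearson chi-square for a candidate split: rows are subsets, columns are readmitted yes/no."""
    levels = np.unique(groups)
    table = np.array([[np.sum((groups == g) & (outcome == o)) for o in (0, 1)]
                      for g in levels], dtype=float)
    expected = table.sum(axis=1, keepdims=True) * table.sum(axis=0, keepdims=True) / table.sum()
    return float(((table - expected) ** 2 / expected).sum())

rng = np.random.default_rng(1)
n = 206  # 95 readmitted + 111 non-readmitted, as in the balanced sample

# Simulated predictors (illustrative, not the study's data)
comorbidity = rng.integers(0, 4, n)
gender = rng.integers(0, 2, n)

# Simulated outcome: readmission far more likely when comorbidity score > 1
readmit = (rng.random(n) < np.where(comorbidity > 1, 0.9, 0.1)).astype(int)

# CHAID-style selection: pick the candidate split with the largest chi-square
splits = {"comorbidity > 1": (comorbidity > 1).astype(int), "gender": gender}
best = max(splits, key=lambda name: chi2_stat(splits[name], readmit))
print(best)
```

Real CHAID additionally merges non-significant categories and applies Bonferroni-adjusted p-values; this sketch keeps only the core chi-square split selection.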

Keywords: decision tree, hospital, internal medicine, readmission

Procedia PDF Downloads 239
108 The Recommended Summary Plan for Emergency Care and Treatment (ReSPECT) Process: An Audit of Its Utilisation on a UK Tertiary Specialist Intensive Care Unit

Authors: Gokulan Vethanayakam, Daniel Aston

Abstract:

Introduction: The ReSPECT process supports healthcare professionals when making patient-centered decisions in the event of an emergency. It has been widely adopted by the NHS in England and allows patients to express thoughts and wishes about treatments and outcomes that they consider acceptable. It includes (but is not limited to) cardiopulmonary resuscitation decisions. ReSPECT conversations should ideally occur prior to ICU admission and should be documented in the eight sections of the nationally-standardised ReSPECT form. This audit evaluated the use of ReSPECT on a busy cardiothoracic ICU in an NHS Trust where established policies advocating its use exist. Methods: This audit was a retrospective review of ReSPECT forms for a sample of high-risk patients admitted to ICU at the Royal Papworth Hospital between January 2021 and March 2022. Patients all received one of the following interventions: Veno-Venous Extra-Corporeal Membrane Oxygenation (VV-ECMO) for severe respiratory failure (retrieved via the national ECMO service); cardiac or pulmonary transplantation-related surgical procedures (including organ transplants and Ventricular Assist Device (VAD) implantation); or elective non-transplant cardiac surgery. The quality of documentation on ReSPECT forms was evaluated using national standards and a graded ranking tool devised by the authors which was used to assess narrative aspects of the forms. Quality was ranked as A (excellent) to D (poor). Results: Of 230 patients (74 VV-ECMO, 104 transplant, 52 elective non-transplant surgery), 43 (18.7%) had a ReSPECT form and only one (0.43%) patient had a ReSPECT form completed prior to ICU admission. Of the 43 forms completed, 38 (88.4%) were completed due to the commencement of End of Life (EoL) care. No non-transplant surgical patients included in the audit had a ReSPECT form. 
There was documentation of balance of care (section 4a), CPR status (section 4c), capacity assessment (section 5), and patient involvement in completing the form (section 6a) on all 43 forms. Of the 34 patients assessed as lacking capacity to make decisions, only 22 (64.7%) had reasons documented. Other sections were variably completed; 29 (67.4%) forms had relevant background information included to a good standard (section 2a). Clinical guidance for the patient (section 4b) was given in 25 (58.1%), of which 11 stated the rationale that underpinned it. Seven forms (16.3%) contained information in an inappropriate section. In a comparison of ReSPECT forms completed ahead of an EoL trigger with those completed when EoL care began, there was a higher number of entries in section 3 (considering the patient’s values/fears) assessed at grades A-B in the former group (p = 0.014), suggesting higher quality. Similarly, forms from the transplant group contained higher quality information in section 3 than those from the VV-ECMO group (p = 0.0005). Conclusions: Utilisation of the ReSPECT process in high-risk patients has yet to be widely adopted in this Trust. Teams who meet patients before hospital admission for transplant or high-risk surgery should be encouraged to engage with the ReSPECT process at this point in the patient's journey. VV-ECMO retrieval teams should consider ReSPECT conversations with patients’ relatives at the time of retrieval.

Keywords: audit, critical care, end of life, ICU, ReSPECT, resuscitation

Procedia PDF Downloads 56
107 Multiple Plant-Based Cell Suspension as a Bio-Ink for 3D Bioprinting Applications in Food Technology

Authors: Yusuf Hesham Mohamed

Abstract:

Introduction: Three-dimensional printing technology includes multiple procedures that fabricate three-dimensional objects by consecutively layering two-dimensional cross-sections on top of each other. 3D bioprinting is a promising branch of 3D printing, which fabricates tissues and organs by accurately controlling the proper arrangement of diverse biological components. 3D bioprinting uses software and prints biological materials and their supporting components layer by layer on a substrate or in a tissue culture plate to produce complex live tissues and organs. 3D food printing is an emerging field of 3D bioprinting in which the 3D printed products are food products that are cheap, require less effort to produce, and have more desirable traits. The aim of the study was to develop an affordable 3D bioprinter by modifying a locally made CNC instrument with an open-source platform to suit 3D bioprinting purposes. We then applied the prototype in several applications in food technology and drug testing, including organ-on-chip. Materials and Methods: An off-the-shelf 3D printer was modified by designing and fabricating the syringe unit, whose design was based on a millifluidics system. Sodium alginate and gelatin hydrogels were prepared, followed by preparation of a leaf cell suspension from narrow sections of viable Fragaria leaves. The desired 3D structure was modeled, and 3D printing preparations took place. Cell-free and cell-laden hydrogels were printed at room temperature under sterile conditions. A post-printing curing process was performed, and the printed structure was further studied. Results: Positive results have been achieved using the modified 3D bioprinter, where a two-layer 3D hydrogel construct made of a sodium alginate and gelatin combination (15%: 0.5%) was printed.
A DLP 3D printer with a transparent PLA-Pro resin was used to fabricate the syringe component, creating a microfluidics system with two channels adapted to the double extruder. The hydrogel extruder's design was based on peristaltic pumps driven by a stepper motor. The design and fabrication relied on DIY 3D-printed parts, with hard PLA plastic as the printing material. SEM was used to image the porous 3D construct. Multiple physical and chemical tests were performed to ensure that the construct was suitable for hosting the cell line. A suspension of Fragaria cells prepared from the plant's leaves was then printed with the 3D bioprinter. Conclusion: 3D bioprinting is an emerging scientific field that can facilitate and improve many scientific tests and studies; having a 3D bioprinter in labs is therefore considered an essential requirement. 3D bioprinters are very expensive; however, converting a 3D printer into a 3D bioprinter can lower the cost. The 3D bioprinter implemented here made use of peristaltic pumps instead of syringe-based pumps in order to extend its ability to print multiple types of materials and cells.

Keywords: scaffold, organ on chip, 3D bioprinter, DLP printer

Procedia PDF Downloads 103
106 Charcoal Traditional Production in Portugal: Contribution to the Quantification of Air Pollutant Emissions

Authors: Cátia Gonçalves, Teresa Nunes, Inês Pina, Ana Vicente, C. Alves, Felix Charvet, Daniel Neves, A. Matos

Abstract:

The production of charcoal relies on rudimentary technologies using traditional brick kilns. Charcoal is produced under pyrolysis conditions: the chemical structure of biomass is broken down at high temperature in the absence of air. The amount of the pyrolysis products (charcoal, pyroligneous extract, and flue gas) depends on various parameters, including temperature, time, pressure, kiln design, and wood characteristics such as the moisture content. This activity is recognized for its inefficiency and high pollution levels, but it is poorly characterized. It is widely distributed and is a vital economic activity in certain regions of Portugal, playing a relevant role in the management of woody residues. The location of the units determines the biomass used for charcoal production. The Portalegre district, in the Alto Alentejo region (Portugal), is a good example: essentially rural, with a predominant farming, agricultural, and forestry profile, and with significant charcoal production activity. In this district, a recent inventory identified almost 50 charcoal production units, equivalent to more than 450 kilns, of which 80% appear to be in operation. A field campaign was designed with the objective of determining the composition of the emissions released during a charcoal production cycle. A total of 30 samples of particulate matter and 20 gas samples in Tedlar bags were collected. Particulate and gas samplings were performed in parallel, 2 in the morning and 2 in the afternoon, alternating the inlet heads (PM₁₀ and PM₂.₅) in the particulate sampler. The gas and particulate samples were collected in the plume, as close as possible to the chimney emission point. The biomass (dry basis) used in the carbonization process was a mixture of cork oak (77 wt.%), holm oak (7 wt.%), stumps (11 wt.%), and charred wood (5 wt.%) from previous carbonization processes.
A cylindrical batch kiln (80 m³), 4.5 m in diameter and 5 m in height, was used in this study. The composition of the gases was determined by gas chromatography, while the particulate samples (PM₁₀, PM₂.₅) were subjected to different analytical techniques (thermo-optical transmission, ion chromatography, HPAE-PAD, and GC-MS after solvent extraction) after prior gravimetric determination, to study their organic and inorganic constituents. The charcoal production cycle presents widely varying operating conditions, which are reflected in the composition of the gases and particles produced and emitted throughout the process. The concentrations of PM₁₀ and PM₂.₅ in the plume were calculated, ranging between 0.003 and 0.293 g m⁻³ and between 0.004 and 0.292 g m⁻³, respectively. On average, total carbon, inorganic ions, and sugars account for 65% and 56%, 2.8% and 2.3%, and 1.27% and 1.21% of PM₁₀ and PM₂.₅, respectively. The organic fraction studied so far includes more than 30 aliphatic compounds and 20 PAHs. The emission factors of particulate matter for charcoal production in the traditional kiln were 33 g/kg and 27 g/kg (wood, dry basis) for PM₁₀ and PM₂.₅, respectively. With the data obtained in this study, it is possible to address the lack of information about the environmental impact of traditional charcoal production in Portugal. Acknowledgment: The authors thank FCT – Portuguese Science Foundation, I.P., and the Ministry of Science, Technology and Higher Education of Portugal for financial support within the scope of the projects CHARCLEAN (PCIF/GVB/0179/2017) and CESAM (UIDP/50017/2020 + UIDB/50017/2020).
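The concentration and emission-factor arithmetic behind these numbers can be illustrated as follows; every input value (filter mass, sampled air volume, total flue-gas volume, wood charge) is a hypothetical placeholder chosen only to land near the reported magnitudes, not data from the study:

```python
# Illustrative calculation only: all numeric inputs below are hypothetical.
filter_mass_g = 0.0155       # particulate mass collected on the filter (g)
sampled_volume_m3 = 0.120    # air volume drawn through the sampler (m^3)

# Plume concentration, as reported in g m^-3
concentration = filter_mass_g / sampled_volume_m3

total_flue_gas_m3 = 500_000.0  # flue gas released over the whole cycle (m^3)
wood_dry_kg = 2_000.0          # dry-basis wood charge of the kiln (kg)

# Emission factor in g of particulate per kg of dry wood carbonized
emission_factor = concentration * total_flue_gas_m3 / wood_dry_kg

print(f"PM concentration: {concentration:.3f} g/m3")
print(f"Emission factor:  {emission_factor:.1f} g/kg (dry wood)")
```

With these placeholder inputs the concentration (≈0.13 g m⁻³) falls inside the reported 0.003-0.293 g m⁻³ range, and the emission factor lands near the reported 33 g/kg for PM₁₀.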

Keywords: brick kilns, charcoal, emission factors, PAHs, total carbon

Procedia PDF Downloads 123
105 Case Report on Anaesthesia for Ruptured Ectopic with Severe Pulmonary Hypertension in a Mute Patient

Authors: Pamela Chia, Tay Yoong Chuan

Abstract:

Introduction: Patients with severe pulmonary hypertension (PH) undergoing non-cardiac surgery face markedly increased mortality. These patients are prone to cardiorespiratory failure and dysrhythmias, are often anticoagulated, and may have concurrent sepsis and renal insufficiency, all of which raise perioperative morbidity. We present a deaf-mute patient with severe idiopathic PH prepared emergently for laparotomy for a ruptured ectopic. Case Report: A 20-year-old female, 62 kg (BMI 25 kg/m²), with severe idiopathic PH (2D echocardiography: Ejection Fraction 41%, Pulmonary Artery Systolic Pressure (PASP) 105 mmHg, right ventricular strain and hypertrophy) and selective mutism was rushed in for emergency laparotomy after presenting to the emergency department with abdominal pain. The patient was NYHA Class II with room-air SpO₂ of 93-95%. While awaiting lung transplant, the patient was taking warfarin, sildenafil, macitentan, and selexipag for a rising PASP. At presentation, vital signs were BP 95/63, HR 119, SpO₂ 88% (room air). Haemoglobin fell from 14 to 10 g/dL, and an INR of 2.59 was reversed with prothrombin complex concentrate and Vitamin K. ECG revealed Right Bundle Branch Block with right ventricular strain, and the chest x-ray showed cardiomegaly, a dilated right ventricle, dilated pulmonary arteries, and basal atelectasis. Arterial blood gas showed compensated metabolic acidosis: pH 7.4, pCO₂ 32, pO₂ 53, HCO₃ 20, BE -4, SaO₂ 88%. The cardiothoracic surgeon concluded there was no role for Extracorporeal Membrane Oxygenation (ECMO). We inserted invasive arterial and central venous lines, with blood transfused via an 18G cannula, before the patient underwent a midline laparotomy with haemostasis of a ruptured ovarian cyst and evacuation of 2.4 L of clots under general anaesthesia with FloTrac cardiac output monitoring. Rapid sequence induction was performed with midazolam/propofol, a remifentanil infusion, and rocuronium. The patient was maintained on desflurane. Blood products and colloids were transfused for a further 1.5 L of blood loss.
Postoperatively, the patient was transferred to the intensive care unit and was extubated uneventfully 7 hours later. The patient went home a week later. Discussion: Emergency haemostasis laparotomy in an anticoagulated WHO Group 1 PH patient awaiting lung transplant with no ECMO backup poses tremendous stress on both the deaf-mute patient and the anesthesiologist. Balancing haemodynamics to avoid hypotension while awaiting haemostasis, in the presence of pulmonary arterial dilators and anticoagulation, requires close titration of volatiles, which decrease RV contractility. We review the contraindicated anaesthetic agents (ketamine, N₂O), the choice of vasopressors in hypotension to maintain the aortic-right ventricular pressure gradient, and perioperative nitric oxide use. Conclusion: Interdisciplinary communication with a deaf-mute moribund patient and the associated anaesthesia considerations pose many rare challenges worth sharing.

Keywords: pulmonary hypertension, case report, warfarin reversal, emergency surgery

Procedia PDF Downloads 194
104 Performance of the CALPUFF Dispersion Model for Investigating the Dispersion of Pollutants Emitted from an Industrial Complex, Daura Refinery, to an Urban Area in Baghdad

Authors: Ramiz M. Shubbar, Dong In Lee, Hatem A. Gzar, Arthur S. Rood

Abstract:

Air pollution is one of the biggest environmental problems in Baghdad, Iraq. The Daura refinery, located near the center of Baghdad, is the largest industrial area in the city and emits enormous amounts of pollutants; studying its gaseous pollutants and particulate matter is therefore very important for the environment and for the health of workers in the refinery and of people living in the surrounding areas. Some earlier studies investigated the area, but they relied on the basic Gaussian equation implemented in simple computer programs. While that work was useful and important at the time, several large production units have since been added to the Daura refinery, such as PU_3 (Power Unit 3, Boilers 11 and 12), CDU_1 (Crude Distillation Unit 1, 70,000 barrels), and CDU_2 (Crude Distillation Unit 2, 70,000 barrels). It is therefore necessary to use a new, advanced model to study air pollution in the region for recent years, and to calculate monthly pollutant emission rates from the actual amounts of fuel consumed in each production unit; this should lead to more accurate pollutant concentrations and a better description of dispersion and transport in the study area. In this study, to the best of the authors' knowledge, the CALPUFF model was used and examined for the first time in Iraq. CALPUFF, an advanced non-steady-state meteorological and air quality modeling system, was applied to investigate the concentrations of SO2, NO2, CO, and PM1-10μm in areas adjacent to the Daura refinery in the center of Baghdad. The CALPUFF modeling system includes three main components: CALMET, a diagnostic 3-dimensional meteorological model; CALPUFF, an air quality dispersion model; and CALPOST, a post-processing package; together with an extensive set of preprocessing programs that interface the model to standard, routinely available meteorological and geophysical datasets.
The targets of this work are to model and simulate the four pollutants (SO2, NO2, CO, and PM1-10μm) emitted from the Daura refinery over one year. Emission rates of these pollutants were calculated for twelve units comprising thirty plants and 35 stacks, using the monthly average fuel consumption of each production unit. A further aim is to assess the performance of the CALPUFF model in this study and to determine whether it is appropriate and yields predictions of good accuracy compared with available pollutant observations. The CALPUFF model was investigated for three stability classes (stable, neutral, and unstable) to show the dispersion of the pollutants under different meteorological conditions. The simulations showed that dispersion in this region depends on the stability conditions and the environment of the study area; monthly and annual averages of pollutants were used to display the dispersion of pollutants in contour maps. High pollutant values were noticed in this area; this study therefore recommends further investigation and analysis of the pollutants, reducing emission rates by using modern techniques and natural gas, increasing the stack heights of the units, and increasing the exit gas velocity from the stacks.
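The monthly emission-rate bookkeeping described above (an average emission rate derived from the actual fuel consumed per production unit) can be sketched as follows; the fuel tonnages and the SO2 emission factor are hypothetical placeholders, not Daura refinery data:

```python
# Hypothetical monthly fuel consumption per production unit (tonnes of fuel).
FUEL_CONSUMED_T = {"PU_3": 1200.0, "CDU_1": 950.0, "CDU_2": 880.0}

# Assumed emission factor: kg of SO2 emitted per tonne of fuel burned.
EF_SO2_KG_PER_T = 19.0

SECONDS_PER_MONTH = 30 * 24 * 3600

def monthly_emission_rate_gps(fuel_t: float, ef_kg_per_t: float) -> float:
    """Average emission rate in g/s for one production unit over a month."""
    return fuel_t * ef_kg_per_t * 1000.0 / SECONDS_PER_MONTH

rates = {unit: monthly_emission_rate_gps(f, EF_SO2_KG_PER_T)
         for unit, f in FUEL_CONSUMED_T.items()}
total = sum(rates.values())
print(rates)
print(f"total SO2: {total:.2f} g/s")
```

Per-stack rates of this form (g/s) are the quantity a dispersion model such as CALPUFF takes as its source input, alongside the stack geometry and exit gas parameters.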

Keywords: CALPUFF, daura refinery, Iraq, pollutants

Procedia PDF Downloads 179
103 Single Pass Design of Genetic Circuits Using Absolute Binding Free Energy Measurements and Dimensionless Analysis

Authors: Iman Farasat, Howard M. Salis

Abstract:

Engineered genetic circuits reprogram cellular behavior to act as living computers, with applications in detecting cancer, creating self-controlling artificial tissues, and dynamically regulating metabolic pathways. Phenomenological models are often used to simulate and design genetic circuit behavior towards a desired behavior. While such models assume that each circuit component's function is modular and independent, even small changes in a circuit (e.g. a new promoter, a change in transcription factor expression level, or even a new media) can have significant effects on the circuit's function. Here, we use statistical thermodynamics to account for the several factors that control transcriptional regulation in bacteria, and experimentally demonstrate the model's accuracy across 825 measurements in several genetic contexts and hosts. We then employ our first-principles model to design, experimentally construct, and characterize a family of signal-amplifying genetic circuits (genetic OpAmps) that expand the dynamic range of cell sensors. To develop these models, we needed a new approach to measuring the in vivo binding free energies of transcription factors (TFs), a key ingredient of statistical thermodynamic models of gene regulation. We developed a new high-throughput assay to measure RNA polymerase and TF binding free energies, requiring the construction and characterization of only a few constructs, followed by data analysis (Figure 1A). We experimentally verified the assay on 6 TetR-homolog repressors and a CRISPR/dCas9 guide RNA. We found that our binding free energy measurements quantitatively explain why changing TF expression levels alters circuit function. Altogether, by combining these measurements with our biophysical model of translation (the RBS Calculator) as well as other measurements (Figure 1B), our model can account for changes in TF binding sites, TF expression levels, circuit copy number, host genome size, and host growth rate (Figure 1C).
Model predictions correctly accounted for how these 8 factors control a promoter’s transcription rate (Figure 1D). Using the model, we developed a design framework for engineering multi-promoter genetic circuits that greatly reduces the number of degrees of freedom (8 factors per promoter) to a single dimensionless unit. We propose the Ptashne (Pt) number to encapsulate the 8 co-dependent factors that control transcriptional regulation into a single number. Therefore, a single number controls a promoter’s output rather than these 8 co-dependent factors, and designing a genetic circuit with N promoters requires specification of only N Pt numbers. We demonstrate how to design genetic circuits in Pt number space by constructing and characterizing 15 2-repressor OpAmp circuits that act as signal amplifiers when within an optimal Pt region. We experimentally show that OpAmp circuits using different TFs and TF expression levels will only amplify the dynamic range of input signals when their corresponding Pt numbers are within the optimal region. Thus, the use of the Pt number greatly simplifies the genetic circuit design, particularly important as circuits employ more TFs to perform increasingly complex functions.
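The abstract does not give the exact form of the authors' model or of the Ptashne number; the sketch below uses the standard statistical-thermodynamic model of simple repression to illustrate the underlying idea of collapsing several co-dependent factors (TF copy number, binding free energy, non-specific genomic background) into one dimensionless group. All parameter values are illustrative assumptions:

```python
from math import exp

# RT at ~25 C is about 0.593 kcal/mol; BETA = 1/RT in mol/kcal.
BETA = 1.0 / 0.593

def fold_repression(tf_copies, dG_tf_kcal, n_nonspecific=4.6e6):
    """Fold-reduction in transcription for a repressor competing with RNAP
    in the standard thermodynamic model of simple repression.
    tf_copies: repressor copy number per cell;
    dG_tf_kcal: specific binding free energy relative to the non-specific
    background (kcal/mol, more negative = tighter binding);
    n_nonspecific: non-specific background sites (~E. coli genome length)."""
    # All three factors collapse into one dimensionless lumped number,
    # in the spirit of the Ptashne number described above.
    lumped = (tf_copies / n_nonspecific) * exp(-BETA * dG_tf_kcal)
    return 1.0 + lumped

# Doubling TF expression doubles the lumped number and deepens repression:
print(fold_repression(50, -14.0))
print(fold_repression(100, -14.0))
```

Because repression depends on the parameters only through the lumped number, circuits built from different TFs and expression levels behave identically whenever their lumped numbers match, which is the design simplification the Pt number exploits.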

Keywords: transcription factor, synthetic biology, genetic circuit, biophysical model, binding energy measurement

Procedia PDF Downloads 458
102 Problem, Policy and Polity in Agenda Setting: Analyzing Safe Motherhood Program in India

Authors: Vanita Singh

Abstract:

In developing countries, there are conflicting political agendas; policy makers have to prioritize issues from a list of issues competing for limited resources. Thus, it is imperative to understand how some issues gain attention, and others lose, in policy circles. Kingdon's Multiple-Streams Theory (1984) is among the influential theories that help to understand the public policy process and is useful for health policy makers seeking to understand how certain health issues emerge on policy agendas. The issue of maternal mortality was long-standing in India and was linked with a high birth rate; thus, the focus of maternal health policy had been on family planning since India's independence. However, a paradigm shift was noted in maternal health policy in 1992 with the launch of the Safe Motherhood Programme, and then in 2005, when the agenda of maternal health policy became universalizing institutional deliveries and phasing out Traditional Birth Attendants (TBAs) from the health system. Many solutions were proposed by policy communities other than the universalizing of institutional deliveries, including training of TBAs and improving the socio-economic conditions of pregnant women. However, the Government of India favored the medical community, which was advocating the policy of universalizing institutional delivery, and neglected the solutions proposed by other policy communities. It took almost 15 years for the advocates of institutional delivery to transform their proposed solution into a program: the Janani Suraksha Yojana (JSY), a safe-motherhood program promoting institutional delivery through cash incentives to pregnant women. Thus, the case of safe motherhood policy in India is worth studying to understand how certain issues/problems gain political attention and how advocacy works in policy circles.
This paper attempts to understand the factors that favored the agenda of safe motherhood in policy circles in India, using John Kingdon's Multiple-Streams model of agenda-setting. Through document analysis and a literature review, the paper traces the evolution of the safe motherhood program and maternal health policy. The study used open-source documents available on the website of the Ministry of Health and Family Welfare, media reports (Times of India archive), and related research papers. The documents analyzed include the National Health Policy 1983, the National Health Policy 2002, written reports of the Ministry of Health and Family Welfare, the National Rural Health Mission (NRHM) document, documents related to the Janani Suraksha Yojana, and research articles on the maternal health programme in India. The study finds that focusing events and credible indicators, coupled with media attention, can establish an issue as a recognized problem. Political elites favor clearly defined and well-accepted solutions. Trans-national organizations affect the agenda-setting process in a country through conditional resource provision. Closely-knit policy communities and political entrepreneurship are required to push solutions high on agendas. The study has implications for health policy makers in identifying factors that can affect the agenda-setting process for a desired policy agenda and in identifying the challenges in generating political priority.

Keywords: agenda-setting, focusing events, Kingdon’s model, safe motherhood program India

Procedia PDF Downloads 123
101 Self-Medication with Antibiotics, Evidence of Factors Influencing the Practice in Low and Middle-Income Countries: A Systematic Scoping Review

Authors: Neusa Fernanda Torres, Buyisile Chibi, Lyn E. Middleton, Vernon P. Solomon, Tivani P. Mashamba-Thompson

Abstract:

Background: Self-medication with antibiotics (SMA) is a global concern, with a higher incidence in low and middle-income countries (LMICs). Despite intense worldwide efforts to control antibiotic use and promote its rational use, the continuing practice of SMA systematically exposes individuals and communities to the risk of antibiotic resistance and other undesirable antibiotic side effects. Moreover, it increases health system costs through the acquisition of more powerful antibiotics to treat resistant infections. This review thus maps evidence on the factors influencing self-medication with antibiotics in these settings. Methods: The search strategy for this review involved electronic databases including PubMed, Web of Knowledge, Science Direct, EBSCOhost (PubMed, CINAHL with Full Text, Health Source - Consumer Edition, MEDLINE), Google Scholar, BioMed Central and the World Health Organization library, using the search terms 'self-medication', 'antibiotics', 'factors' and 'reasons'. Our search included studies published from 2007 to 2017. Thematic analysis was performed to identify the patterns of evidence on SMA in LMICs. The mixed method quality appraisal tool (MMAT) version 2011 was employed to assess the quality of the included primary studies. Results: Fifteen studies met the inclusion criteria. Studies included populations from rural (46.4%), urban (33.6%) and combined (20%) settings of the following LMICs: Guatemala (2 studies), India (2), Indonesia (2), Kenya (1), Laos (1), Nepal (1), Nigeria (2), Pakistan (2), Sri Lanka (1), and Yemen (1). The total sample size of all 15 included studies was 7676 participants. The findings of the review show a high prevalence of SMA, ranging from 8.1% to 93%. Accessibility, affordability, conditions of health facilities (long waiting times, quality of services and workers), as well as poor health-seeking behavior and lack of information, are factors that influence SMA in LMICs.
Antibiotics such as amoxicillin, metronidazole, amoxicillin/clavulanic acid, ampicillin, ciprofloxacin, azithromycin, penicillin, and tetracycline were the most frequently used for SMA. The major sources of antibiotics included pharmacies, drug stores, leftover drugs, family/friends and old prescriptions. Sore throat, common cold, cough with mucus, headache, toothache, flu-like symptoms, pain relief, fever, runny nose, upper respiratory tract infections, urinary symptoms, and urinary tract infections were the common disease symptoms managed with SMA. Conclusion: Although the information on factors influencing SMA in LMICs is unevenly distributed, the available information revealed the existence of research evidence on antibiotic self-medication in some LMICs. SMA practices are influenced by social-cultural determinants of health and are frequently associated with poor dispensing and prescribing practices, deficient health-seeking behavior and, consequently, inappropriate drug use. Therefore, there is still a need to conduct further studies (qualitative, quantitative and randomized controlled trials) on the factors and reasons for SMA in order to properly address this public health problem in LMICs.

Keywords: antibiotics, factors, reasons, self-medication, low and middle-income countries (LMICs)

Procedia PDF Downloads 201
100 Large-Scale Method to Assess the Seismic Vulnerability of Heritage Buildings: Modal Updating of Numerical Models and Vulnerability Curves

Authors: Claire Limoge Schraen, Philippe Gueguen, Cedric Giry, Cedric Desprez, Frédéric Ragueneau

Abstract:

The Mediterranean area is characterized by numerous monumental or vernacular masonry structures illustrating old ways of building and living. These precious buildings are often poorly documented, present complex shapes and loadings, and are protected by the States, leading to legal constraints. The area also presents moderate to high seismic activity, and even moderate earthquakes can be magnified by local site effects and cause collapse or significant damage. Moreover, the structural resistance of masonry buildings, especially the less famous ones or those located in rural zones, has generally been lowered by many factors: poor maintenance, unsuitable restoration, ambient pollution, and previous earthquakes. Recent earthquakes prove that any damage to these architectural witnesses of our past is irreversible, leading to the necessity of acting preventively. This means providing preventive assessments for hundreds of structures with no or few documents. In this context, we propose a general method, based on hierarchized numerical models, to provide preliminary structural diagnoses at a regional scale, indicating whether more precise investigations and models are necessary for each building. To this aim, we adapt different tools, some already being developed, such as photogrammetry, and others to be created, such as a preprocessor that builds meshes for FEM software from pictures, in order to allow dynamic studies of the buildings in the panel. We made an inventory of 198 baroque chapels and churches situated in the French Alps. Their structural characteristics were then determined through field surveys and the MicMac photogrammetric software. Using structural criteria, we defined eight types of churches and seven types of chapels. We studied their dynamic behavior with CAST3M, using the EC8 spectrum and accelerograms of the studied zone. This allowed us to quantify the effect of the needed simplifications in the most sensitive zones and to choose the most effective ones.
We also proposed threshold criteria based on the damage observed in the in situ surveys, old pictures, and the Italian code; these are relevant for linear models. To validate the structural types, we carried out a vibratory measurement campaign using ambient vibration noise and velocimeters. This also allowed us to validate the method on old masonry and to identify the modal characteristics of 20 churches. We then performed a dynamic identification between numerical and experimental modes, updating the linear models through material and geometrical parameters that are often unknown because of the complexity of the structures and materials. The numerically optimized values were verified against the measurements we made on the masonry components in situ and in the laboratory. We are now working on non-linear models that redistribute the strains, in order to validate the damage threshold criteria used to compute the vulnerability curves of each defined structural type. Our current results show a good correlation between experimental and numerical data, validating the final modeling simplifications and the global method. We now plan to use non-linear analysis in the critical zones in order to test reinforcement solutions.
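The abstract does not name the metric used to pair numerical and experimental modes; a standard choice in modal updating is the Modal Assurance Criterion (MAC), sketched below on hypothetical mode-shape vectors:

```python
def mac(phi_1, phi_2):
    """Modal Assurance Criterion between two mode-shape vectors:
    MAC = |phi_1 . phi_2|^2 / ((phi_1 . phi_1) * (phi_2 . phi_2)).
    MAC close to 1 means the two shapes are essentially the same mode."""
    dot = sum(a * b for a, b in zip(phi_1, phi_2))
    return dot * dot / (sum(a * a for a in phi_1) * sum(b * b for b in phi_2))

# Hypothetical mode shapes sampled at four measurement points of a nave:
numerical = [0.12, 0.45, 0.83, 1.00]      # FEM model mode shape
experimental = [0.10, 0.48, 0.80, 1.00]   # velocimeter-derived shape

print(f"MAC = {mac(numerical, experimental):.3f}")
```

A MAC matrix computed over all numerical/experimental mode pairs is typically used to match modes before updating the material and geometrical parameters.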

Keywords: heritage structures, masonry numerical modeling, seismic vulnerability assessment, vibratory measure

Procedia PDF Downloads 476
99 Automated End of Sprint Detection for Force-Velocity-Power Analysis with GPS/GNSS Systems

Authors: Patrick Cormier, Cesar Meylan, Matt Jensen, Dana Agar-Newman, Chloe Werle, Ming-Chang Tsai, Marc Klimstra

Abstract:

Sprint-derived horizontal force-velocity-power (FVP) profiles can be developed with adequate validity and reliability with satellite (GPS/GNSS) systems. However, FVP metrics are sensitive to small nuances in data processing procedures, such that minor differences in defining the onset and end of the sprint can result in different FVP metric outcomes. Furthermore, team sports require rapid analysis and feedback of results from multiple athletes; developing standardized and automated methods to improve the speed, efficiency and reliability of this process is therefore warranted. Thus, the purpose of this study was to compare different methods of sprint end detection on the development of FVP profiles from 10 Hz GPS/GNSS data through goodness-of-fit and inter-trial reliability statistics. Seventeen national team female soccer players participated in the FVP protocol, which consisted of 2x40 m maximal sprints performed towards the end of a soccer-specific warm-up in a training session (1020 hPa, wind = 0, temperature = 30°C) on an open grass field. Each player wore a 10 Hz Catapult system unit (Vector S7, Catapult Innovations) inserted in a vest in a pouch between the scapulae. All data were analyzed following common procedures. Variables computed and assessed were the model parameters, estimated maximal sprint speed (MSS) and the acceleration time constant τ, in addition to relative horizontal force (F₀), velocity at zero force (V₀), and relative mechanical power (Pmax). The onset of the sprints was standardized with an acceleration threshold of 0.1 m/s². The sprint end detection methods were: 1. the time when peak velocity (MSS) was achieved (zero acceleration); 2. the time after peak velocity drops by -0.4 m/s; 3. the time after peak velocity drops by -0.6 m/s; and 4. when the integrated distance from the GPS/GNSS signal reaches 40 m.
Goodness-of-fit of each sprint end detection method was determined using the residual sum of squares (RSS) to quantify the error of the FVP modeling with the sprint data from the GPS/GNSS system. Inter-trial reliability (from 2 trials) was assessed using intraclass correlation coefficients (ICC). For the goodness-of-fit results, the end detection technique using the time when peak velocity was achieved (zero acceleration) had the lowest RSS values, followed by the -0.4 and -0.6 m/s velocity-decay methods, with the 40-m end having the highest RSS values. For inter-trial reliability, the end-of-sprint detection techniques defined at (method 1) or shortly after (methods 2 and 3) the time MSS was achieved had very large to near-perfect ICCs, while the 40-m integrated-distance method (method 4) had large to very large ICCs. Peak velocity was reached at 29.52 ± 4.02 m. Therefore, sport scientists should place the end of sprint detection at, or shortly after, the point where peak velocity is reached, to improve goodness of fit and achieve reliable between-trial FVP profile metrics. More robust processing and modeling procedures should nevertheless be developed in future research to improve sprint model fitting. This protocol was seamlessly integrated into the usual training, which shows promise for sprint monitoring in the field with this technology.
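A model commonly used behind GPS-derived FVP profiling is the mono-exponential velocity curve v(t) = MSS * (1 - exp(-t/tau)), from which, neglecting air resistance, the per-kg metrics follow as F0 = MSS/tau, V0 ≈ MSS, and Pmax = F0 * V0 / 4. The sketch below applies method 1 (sprint end at peak velocity) to a synthetic, noiseless 10 Hz trace; the parameter values are illustrative, not data from this study:

```python
from math import exp

# Illustrative model parameters (not data from the study).
MSS, TAU, HZ = 8.5, 1.3, 10   # maximal sprint speed (m/s), time constant (s), Hz

# Synthetic 10 s velocity trace sampled at 10 Hz.
t = [i / HZ for i in range(0, 10 * HZ)]
v = [MSS * (1 - exp(-ti / TAU)) for ti in t]

# Method 1: sprint end when velocity has effectively reached its peak.
# The noiseless model only approaches MSS asymptotically, so "peak reached"
# is taken here as being within 0.01 m/s of MSS.
end_idx = next(i for i, vi in enumerate(v) if MSS - vi < 0.01)

# FVP metrics per kg of body mass, neglecting air resistance.
F0 = MSS / TAU          # relative horizontal force at zero velocity (N/kg)
V0 = MSS                # theoretical maximal velocity (m/s)
Pmax = F0 * V0 / 4.0    # relative maximal horizontal power (W/kg)

print(f"end at t = {t[end_idx]:.1f} s, F0 = {F0:.2f} N/kg, Pmax = {Pmax:.2f} W/kg")
```

On real, noisy GPS/GNSS traces the parameters are fitted to the velocity data before the metrics are derived, which is why the choice of sprint end point changes the fitted MSS and tau and hence the whole FVP profile.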

Keywords: automated, biomechanics, team-sports, sprint

Procedia PDF Downloads 107
98 Characterization of Agroforestry Systems in Burkina Faso Using an Earth Observation Data Cube

Authors: Dan Kanmegne

Abstract:

Africa will become the most populated continent by the end of the century, with around 4 billion inhabitants. Food security and climate change will become continental issues, since agricultural practices depend on climate but also contribute to global emissions and land degradation. Agroforestry has been identified as a cost-efficient and reliable strategy to address these two issues. It is defined as the integrated management of trees and crops/animals in the same land unit. Agroforestry provides benefits in terms of goods (fruits, medicine, wood, etc.) and services (windbreaks, fertility, etc.), and is acknowledged to have a great potential for carbon sequestration; therefore, it can be integrated into carbon emission reduction mechanisms. Particularly in sub-Saharan Africa, the constraint lies in the lack of country-level information about both the areas under agroforestry and the characterization (composition, structure, and management) of each agroforestry system. This study describes and quantifies 'what is where?' as a first step towards the quantification of carbon stocks in different systems. Remote sensing (RS) is the most efficient approach to map such a dynamic technology as agroforestry, since it gives relatively adequate and consistent information over a large area at nearly no cost. RS data fulfill the good practice guidelines of the Intergovernmental Panel on Climate Change (IPCC) for use in carbon estimation. Satellite data are becoming more and more accessible, and the archives are growing exponentially. To retrieve useful decision-support information from this large amount of data, satellite data need to be organized to ensure fast processing, quick accessibility, and ease of use. A new solution is the data cube, which can be understood as a multi-dimensional stack (space, time, data type) of spatially aligned pixels, used for efficient access and analysis.
A data cube for Burkina Faso has been set up through a cooperation project between the international service provider WASCAL and Germany; it provides an accessible exploitation architecture for multi-temporal satellite data. The aim of this study is to map and characterize agroforestry systems using the Burkina Faso earth observation data cube. The approach, in its initial stage, is based on an unsupervised image classification of a normalized difference vegetation index (NDVI) time series from 2010 to 2018, to stratify the country based on vegetation. Fifteen strata were identified, and four samples per location were randomly assigned to define the sampling units. For safety reasons, the northern part of the country will not be part of the fieldwork. A total of 52 locations will be visited by the end of the dry season, in February-March 2020. The field campaigns will consist of identifying and describing the different agroforestry systems and of qualitative interviews. A multi-temporal supervised image classification will then be performed with a random forest algorithm, with the field data used both for training the algorithm and for accuracy assessment. The expected outputs are (i) map(s) of agroforestry dynamics; (ii) characteristics of the different systems (main species, management, area, etc.); and (iii) an assessment report of the Burkina Faso data cube.
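The abstract does not specify which clustering algorithm drives the unsupervised NDVI stratification; the sketch below is a minimal stand-in, assuming a plain k-means over per-pixel NDVI time series. The seasonal profiles, noise level, and the use of three strata instead of the study's fifteen are all invented for illustration.

```python
import math, random

random.seed(42)

# Three invented seasonal NDVI regimes (24 monthly stand-in observations)
def profile(mean, amp, n=24):
    return [mean + amp * math.sin(2 * math.pi * i / n) for i in range(n)]

PROFILES = [profile(0.30, 0.05),   # sparse vegetation
            profile(0.50, 0.15),   # cropland-like seasonality
            profile(0.70, 0.10)]   # dense tree cover

pixels = []
for _ in range(300):
    base = random.choice(PROFILES)
    pixels.append([v + random.gauss(0, 0.02) for v in base])

def dist2(a, b):
    """Squared Euclidean distance between two NDVI time series."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(data, k, iters=30):
    """Plain k-means: stratify pixels by the shape of their NDVI series."""
    centroids = random.sample(data, k)
    assign = [0] * len(data)
    for _ in range(iters):
        assign = [min(range(k), key=lambda j: dist2(p, centroids[j]))
                  for p in data]
        for j in range(k):
            members = [p for p, a in zip(data, assign) if a == j]
            if members:
                centroids[j] = [sum(col) / len(members)
                                for col in zip(*members)]
    return assign, centroids

strata, centroids = kmeans(pixels, k=3)
counts = [strata.count(j) for j in range(3)]
print("pixels per stratum:", counts)
```

In the study itself this stratification only defines the sampling frame; the supervised random forest step would then be trained on the field data collected within each stratum.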

Keywords: agroforestry systems, Burkina Faso, earth observation data cube, multi-temporal image classification

Procedia PDF Downloads 127
97 Thinking Lean in ICU: A Time Motion Study Quantifying ICU Nurses’ Multitasking Time Allocation

Authors: Fatma Refaat Ahmed, PhD, RN, Assistant Professor, Department of Nursing, College of Health Sciences, University of Sharjah, UAE; Sally Mohamed Farghaly, Nursing Administration Department, Faculty of Nursing, Alexandria University, Alexandria, Egypt

Abstract:

Context: Intensive care unit (ICU) nurses often face pressure and constraints in their work, leading to the rationing of care when demands exceed available time and resources. Observations suggest that ICU nurses are frequently distracted from their core nursing roles by non-core tasks. This study aims to provide evidence on ICU nurses' multitasking activities and explore the association between nurses' personal and clinical characteristics and their time allocation. Research Aim: The aim of this study is to quantify the time spent by ICU nurses on multitasking activities and investigate the relationship between their personal and clinical characteristics and time allocation. Methodology: A self-observation form utilizing the "Diary" recording method was used to record the number of tasks performed by ICU nurses and the time allocated to each task category. Nurses also reported on the distractions encountered during their nursing activities. A convenience sample of 60 ICU nurses participated in the study, with each nurse observed for one nursing shift (6 hours), amounting to a total of 360 hours. The study was conducted in two ICUs within a university teaching hospital in Alexandria, Egypt. Findings: The results showed that ICU nurses completed 2,730 direct patient-related tasks and 1,037 indirect tasks during the 360-hour observation period. Nurses spent an average of 33.65 minutes on ventilator care-related tasks, 14.88 minutes on tube care-related tasks, and 10.77 minutes on inpatient care-related tasks. Additionally, nurses spent an average of 17.70 minutes on indirect care tasks per hour. The study identified correlations between nursing time and nurses' personal and clinical characteristics. Theoretical Importance: This study contributes to the existing research on ICU nurses' multitasking activities and their relationship with personal and clinical characteristics. 
The findings shed light on the significant time spent by ICU nurses on direct care for mechanically ventilated patients and the distractions that require attention from ICU managers. Data Collection: Data were collected using self-observation forms completed by participating ICU nurses. The forms recorded the number of tasks performed, the time allocated to each task category, and any distractions encountered during nursing activities. Analysis Procedures: The collected data were analyzed to quantify the time spent on different tasks by ICU nurses. Correlations were also examined between nursing time and nurses' personal and clinical characteristics. Question Addressed: This study addressed the question of how ICU nurses allocate their time across multitasking activities and whether there is an association between nurses' personal and clinical characteristics and time allocation. Conclusion: The findings of this study emphasize the need for a lean evaluation of ICU nurses' activities to identify and address potential gaps in patient care and distractions. Implementing lean techniques can improve efficiency, safety, clinical outcomes, and satisfaction for both patients and nurses, ultimately enhancing the quality of care and organizational performance in the ICU setting.

Keywords: motion study, ICU nurse, lean, nursing time, multitasking activities

Procedia PDF Downloads 50
96 Tunable Graphene Metasurface Modeling Using the Method of Moment Combined with Generalised Equivalent Circuit

Authors: Imen Soltani, Takoua Soltani, Taoufik Aguili

Abstract:

Metamaterials cross classical physical boundaries and give rise to new phenomena and applications in the domain of beam steering and shaping, where electromagnetic near- and far-field manipulations can be achieved in an accurate manner. In this sense, 3D imaging is one of the beneficiaries, and in particular Dennis Gabor’s invention: holography. The major difficulty here is the lack of a suitable recording medium, so some enhancements were essential; the 2D counterpart of bulk metamaterials, the so-called metasurface, was therefore introduced. This new class of interfaces simplifies the problem of the recording medium with the capability of tuning the phase, amplitude, and polarization at a given frequency. In order to achieve intelligible wavefront control, the electromagnetic properties of the metasurface should be optimized by solving Maxwell’s equations. In this context, integral methods have emerged as an important approach to studying electromagnetics from microwave to optical frequencies. The method of moments provides an accurate solution that reduces the dimensionality of the problem by writing its boundary conditions in the form of integral equations. However, solving this kind of equation becomes more complicated and time-consuming as the structural complexity increases. Here, the use of equivalent circuits offers the most scalable way to develop an integral-method formulation. In fact, to ease the resolution of Maxwell’s equations, the method of Generalised Equivalent Circuits was proposed to transfer the resolution from the domain of integral equations to the domain of equivalent circuits. This technique consists of creating an electric image of the studied structure using the discontinuity-plane paradigm, taking its environment into account, so that the electromagnetic state of the discontinuity plane is described by generalised test functions, which are modelled by virtual sources that do not store energy.
The environmental effects are included through the use of an impedance or admittance operator. Here, we propose a tunable metasurface composed of graphene-based elements, which combines the advantages of the reflectarray concept with graphene as the pillar constituent element at terahertz frequencies. The metasurface’s building block consists of a thin gold film, an SiO₂ dielectric spacer, and a graphene patch antenna. Our electromagnetic analysis is based on the method of moments combined with the generalised equivalent circuit (MoM-GEC). We begin by restricting our attention to the effects of varying graphene’s chemical potential on the unit-cell input impedance. It was found that the variation of the complex conductivity of graphene allows the phase and amplitude of the reflection coefficient to be controlled at each element of the array. From the results obtained here, we determined that the phase modulation is realized by adjusting graphene’s complex conductivity. This modulation is a viable alternative to tuning the phase by varying the antenna length, because it offers full 2π reflection phase control.
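The abstract does not state which conductivity model is used for graphene; a common assumption at terahertz frequencies is the intraband (Drude-like) term of the Kubo formula, in which the chemical potential μc is the tuning knob. The sketch below is illustrative only: the relaxation time, temperature, and operating frequency are assumed values, not the paper's.

```python
import math

# Physical constants (SI)
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J*s
kB = 1.380649e-23        # Boltzmann constant, J/K

def sigma_intra(omega, mu_c_eV, tau=1e-12, T=300.0):
    """Intraband (Drude-like) graphene sheet conductivity, Kubo-formula limit.

    omega   : angular frequency, rad/s
    mu_c_eV : chemical potential in eV (the metasurface tuning knob)
    tau     : phenomenological relaxation time, s (assumed)
    T       : temperature, K
    """
    mu = mu_c_eV * e
    prefac = 2 * e**2 * kB * T / (math.pi * hbar**2)
    return (prefac * math.log(2 * math.cosh(mu / (2 * kB * T)))
            * 1j / (omega + 1j / tau))

f = 1e12                       # 1 THz operating frequency (assumed)
w = 2 * math.pi * f
for mu in (0.1, 0.3, 0.5):     # chemical potential sweep, eV
    s = sigma_intra(w, mu)
    print(f"mu_c = {mu} eV -> sigma = {s.real:.3e} + {s.imag:.3e}j S")
```

Raising μc (e.g., by electrostatic gating) increases the magnitude of the sheet conductivity, which is what shifts the input impedance of each graphene patch and hence the local reflection phase.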

Keywords: graphene, method of moment combined with generalised equivalent circuit, reconfigurable metasurface, reflectarray, terahertz domain

Procedia PDF Downloads 164
95 Combustion Variability and Uniqueness in Cylinders of a Radial Aircraft Piston Engine

Authors: Michal Geca, Grzegorz Baranski, Ksenia Siadkowska

Abstract:

The work is part of a project which aims at developing innovative power and control systems for the high-power aircraft piston engine ASz62IR. The developed electronically controlled ignition system will reduce emissions of toxic compounds as a result of lowered fuel consumption, optimized combustion, and the engine's capability of efficiently burning ecological fuels. The tested unit is an air-cooled four-stroke gasoline engine with 9 cylinders in a radial setup, mechanically charged by a radial compressor powered by the engine crankshaft. The total engine cubic capacity is 29.87 dm³, and the compression ratio is 6.4:1. The maximum take-off power is 1000 HP at 2200 rpm. The maximum fuel consumption is 280 kg/h. The engine powers the aircraft An-2, M-18 "Dromader", DHC-3 "Otter", DC-3 "Dakota", GAF-125 "Hawk" and Y5. The main problems of the engine include the imbalanced work of the cylinders: non-uniformity in each cylinder results in non-uniformity of their work. In a radial engine, the cylinder arrangement means that the mixture movement takes place either with (lower cylinders) or against (upper cylinders) the direction of gravity. Preliminary tests confirmed the presence of uneven work of the individual cylinders. The phenomenon is most intense at low speed and is visible in the waveform of the cylinder pressure. Therefore, two studies were conducted to determine the impact of this phenomenon on engine performance: simulation and real tests. A simplified simulation was conducted on an element of the intake system coated with a fuel film. The study shows that gravity affects the movement of the fuel film inside the radial engine's intake channels. In both the lower and the upper inlet channels, the film flows downwards: gravity assists the movement of the film in the lower-cylinder channels and opposes it in the upper-cylinder channels.
Real tests on the ASz62IR aircraft engine were conducted under transient conditions (rapid changes of the excess air ratio in each cylinder). Calculations were conducted for the mass of fuel reaching the cylinders theoretically and actually, and on this basis, the fuel evaporation factors “x” were determined. For this purpose, a simplified model of the fuel supply to the cylinder was adopted. The model includes the time constant of the fuel film τ, the number of engine transport cycles of non-evaporating fuel along the intake pipe γ, and the time between consecutive cycles Δt. The calculation results of the identification of the model parameters are presented in the form of radar graphs. The figures show the average declines and increases of the injection time and the average values for both types of stroke. These studies showed that the change of the position of the cylinder causes changes in the formation of the fuel-air mixture and thus changes in the combustion process. Based on the results of the simulations and experiments, it was possible to develop individual ignition control algorithms. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
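The simplified fuel-supply model (evaporation factor x, film time constant τ, cycle spacing Δt) resembles the classic x-τ wall-film formulation; the discrete-time sketch below assumes that reading, with x taken as the wall-deposited fraction, and uses invented numbers to illustrate how a larger effective τ (upper cylinders, gravity opposing the film) slows the per-cycle fuel delivery transient.

```python
import math

def simulate_fuel_film(m_inj, x, tau, dt, n_cycles):
    """Per-cycle fuel mass reaching the cylinder under an x-tau film model.

    m_inj : injected fuel mass per cycle (normalized to 1.0 here)
    x     : fraction of injected fuel deposited on the intake-pipe wall film
    tau   : film evaporation time constant (same unit as dt)
    dt    : time between consecutive engine cycles
    """
    m_film = 0.0
    delivered = []
    for _ in range(n_cycles):
        # Fraction of the standing film that evaporates during one cycle
        evap = m_film * (1.0 - math.exp(-dt / tau))
        m_film = m_film - evap + x * m_inj
        delivered.append((1.0 - x) * m_inj + evap)
    return delivered

# Lower cylinder: gravity assists film transport -> smaller effective tau
lower = simulate_fuel_film(m_inj=1.0, x=0.3, tau=0.05, dt=0.027, n_cycles=50)
# Upper cylinder: gravity opposes film transport -> larger effective tau
upper = simulate_fuel_film(m_inj=1.0, x=0.3, tau=0.20, dt=0.027, n_cycles=50)
print(f"first cycle:  lower={lower[0]:.3f}  upper={upper[0]:.3f}")
print(f"steady state: lower={lower[-1]:.3f}  upper={upper[-1]:.3f}")
```

Both cylinders eventually deliver the injected mass (mass conservation), but during a transient the upper cylinder lags, which is one plausible mechanism behind the observed cylinder-to-cylinder non-uniformity.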

Keywords: radial engine, ignition system, non-uniformity, combustion process

Procedia PDF Downloads 347
94 Comparative Characteristics of Bacteriocins from Endemic Lactic Acid Bacteria

Authors: K. Karapetyan, F. Tkhruni, A. Aghajanyan, T. S. Balabekyan, L. Arstamyan

Abstract:

Introduction: Globalization of the food supply has created conditions favorable for the emergence and spread of food-borne and especially dangerous pathogens (EDP) in developing countries. The fresh-cut fruit and vegetable industry is searching for alternatives to replace chemical treatments with biopreservative approaches that ensure the safety of processed food products. Antimicrobial compounds of lactic acid bacteria (LAB) possess bactericidal or bacteriostatic activity against intestinal pathogens, spoilage organisms and food-borne pathogens such as Listeria monocytogenes, Staphylococcus aureus and Salmonella. Endemic strains of LAB were isolated, and the strains showing a broad spectrum of antimicrobial activity against food-spoiling microorganisms were selected. Genotyping by 16S rRNA sequencing, GS-PCR and RAPD-PCR methods showed that they are represented by the Lactobacillus rhamnosus 109, L. plantarum 65, L. plantarum 66 and Enterococcus faecium 64 species. The LAB are deposited in the "Microbial Depository Center" (MDC) of SPC "Armbiotechnology". Methods: LAB strains were isolated from different dairy products from rural households in the highland regions of Armenia. Serially diluted samples were spread on MRS (Merck, Germany) and hydrolyzed milk agar (1.2% w/v). Single colonies of each LAB were individually inoculated in liquid MRS medium and incubated at 37 °C for 24 hours. The culture broth with biomass was centrifuged at 10,000 g for 20 min to obtain cell-free culture (CFC) broth. The antimicrobial substances from the CFC broth were purified by a combination of adsorption-desorption and ion-exchange chromatography methods. Separation of the bacteriocins was performed using an HPLC method on an "Avex ODS" C18 column. Mass analysis of the peptides was recorded on an API 4000 instrument in electron ionization mode. The spot-on-lawn method on test cultures plated in solid medium was applied, and the antimicrobial activity is expressed in arbitrary units (AU/ml). Results:
Purification of the CFC broth of the LAB yielded partially purified antimicrobial preparations containing bacteriocins with a broad spectrum of antimicrobial activity. Investigation of their main biochemical properties showed that the inhibitory activity of the preparations is partially reduced after treatment with proteinase K, trypsin, or pepsin, suggesting a proteinaceous nature of the bacteriocin-like substances contained in the CFC broth. The preparations preserved their activity after heat treatment (50-121 °C, 20 min) and were stable in the pH range 3-8. The results of SDS-PAGE show that the L. plantarum 66 and Ent. faecium 64 strains each have one bacteriocin (BCN) with maximal antimicrobial activity and an approximate molecular weight of 2.0-3.0 kDa. From L. rhamnosus 109, two BCNs were obtained. Mass spectral analysis indicates that these bacteriocins have peptide bonds and that the molecular weights of BCN 1 and BCN 2 are approximately 1.5 kDa and 700 Da, respectively. Discussion: Thus, our experimental data showed that the isolated endemic LAB strains are able to produce bacteriocins with high and diverse inhibitory activity against a broad spectrum of microorganisms of different taxonomic groups, such as Salmonella sp., Escherichia coli, Bacillus sp., L. monocytogenes, Proteus mirabilis, Staph. aureus and Ps. aeruginosa. The results obtained demonstrate the potential of the endemic strains for use in the preservation of foodstuffs. Acknowledgments: This work was realized with the financial support of the Project Global Initiatives for Proliferation Prevention (GIPP) T2-298, ISTC A-1866.

Keywords: antimicrobial activity, bacteriocins, endemic strains, food safety

Procedia PDF Downloads 546
93 Networks, Regulations and Public Action: The Emerging Experiences of Sao Paulo

Authors: Lya Porto, Giulia Giacchè, Mario Aquino Alves

Abstract:

The paper aims to describe the linkage between government and civil society, proposing a study of agro-ecological agriculture policy and urban action in the city of São Paulo and underlining the main achievements obtained. The negotiation processes between social movements and the government (inputs) and their results in political regulation and public action for Urban Agriculture (UA) in São Paulo (outputs) have been investigated. The method adopted is qualitative, with techniques of semi-structured interviews, participant observation, and documental analysis. The authors conducted 30 semi-structured interviews with organic farmers, activists, and governmental and non-governmental managers. Participant observation was conducted in public gardens, urban farms, public audiences, democratic councils, and social movement meetings. Finally, public plans and laws were also analyzed. São Paulo, with around 12 million inhabitants spread over 1,522 km², is the economic capital of Brazil, marked by spatial and socioeconomic segregation, currently aggravated by an environmental crisis characterized by water scarcity, pollution, and climate change. In recent years, Urban Agriculture (UA) social movements have gained strength and struggle for a different city, with more green areas, organic food production, and public occupation. As the dynamics of UA arise from the action of multiple actors and institutions that struggle to build multiple senses of UA, the analysis will be based on the literature on the solidarity economy, governance, public action and networks. Those theories will mark out the analysis, which will emphasize the inter-subjectivity built between subjects, as well as the hybrid dynamics of multiple actors and spaces in the construction of policies for UA. Concerning UA, we identified four main typologies based on land ownership, main function (economic or activist), form of organization of the space, and type of production (organic or not).
The City Hall registers 500 productive agricultural units, with around 1,500 producers, but the researchers estimate a larger number of units. Concerning the social movements, we identified three categories that differ in goals and types of organization, but all of them work through networks of activists and/or organizations. The first category does not consider itself a movement, but a network; its members occupy public spaces to grow organic food and to propose another type of social relations in the city, an action similar to what became known as the green guerrillas. The second is a movement structured to raise awareness of agro-ecological activities. The third is a network of social movements, farmers, organizations and politicians focused on pressure and negotiation with the executive and legislative branches of government to approve regulations and policies on organic and agro-ecological Urban Agriculture. We conclude by highlighting how the interaction between institutions and civil society produced important achievements for the recognition and implementation of UA within the city. Some results of this process are awareness of local production, the legal and institutional recognition of the rural zone around the city in the planning instruments, investment in organic public school procurement, the establishment of participatory management of public squares, and the inclusion of UA in the Municipal Strategic Plan and Master Plan.

Keywords: public action, policies, agroecology, urban and peri-urban agriculture, Sao Paulo

Procedia PDF Downloads 273
92 Examining Influence of The Ultrasonic Power and Frequency on Microbubbles Dynamics Using Real-Time Visualization of Synchrotron X-Ray Imaging: Application to Membrane Fouling Control

Authors: Masoume Ehsani, Ning Zhu, Huu Doan, Ali Lohi, Amira Abdelrasoul

Abstract:

Membrane fouling poses severe challenges in membrane-based wastewater treatment applications. Ultrasound (US) has been considered an effective fouling remediation technique in filtration processes. Bubble cavitation in the liquid medium results from the alternating rarefaction and compression cycles during US irradiation at sufficiently high acoustic pressure. Cavitation microbubbles generated under US irradiation can cause eddy currents and turbulent flow within the medium, either by oscillating or by discharging energy into the system through microbubble explosion. The turbulent flow regime and the shear forces created close to the membrane surface disturb the cake layer and dislodge the foulants, which in turn improves the cleaning efficiency and filtration performance. Therefore, the number, size, velocity, and oscillation pattern of the microbubbles created in the liquid medium play a crucial role in foulant detachment and permeate flux recovery. The goal of the current study is to gain an in-depth understanding of the influence of US power intensity and frequency on the dynamics and characteristics of the microbubbles generated under US irradiation. In comparison with other imaging techniques, the synchrotron in-line phase contrast imaging technique at the Canadian Light Source (CLS) allows in-situ observation and real-time visualization of microbubble dynamics. At the CLS biomedical imaging and therapy (BMIT) polychromatic beamline, the effective parameters were optimized to enhance the contrast of the gas/liquid interface for accurate qualitative and quantitative analysis of bubble cavitation within the system. With the high photon flux and the high-speed camera, a typically high projection speed was achieved, and each projection of microbubbles in water was captured in 0.5 ms. ImageJ software was used for post-processing the raw images for the detailed quantitative analyses of the microbubbles.
The imaging was performed at US power intensity levels of 50 W, 60 W, and 100 W, and at US frequency levels of 20 kHz, 28 kHz, and 40 kHz. Over an imaging duration of 2 seconds, the effect of US power and frequency on the average number, size, and fraction of the area occupied by bubbles was analyzed. The microbubbles' dynamics, in terms of their velocity in water, was also investigated. As the US power increased from 50 W to 100 W, the average bubble number increased from 746 to 880 and the average bubble diameter from 36.7 µm to 48.4 µm. In terms of the influence of US frequency, fewer bubbles were created at 20 kHz (an average of 176 bubbles rather than the 808 bubbles at 40 kHz), while the average bubble size was significantly larger than at 40 kHz (almost seven times). The majority of bubbles were captured close to the membrane surface in the filtration unit. According to these observations, membrane cleaning efficiency is expected to improve at higher US power and lower US frequency, due to the higher energy released into the system by increasing the number of bubbles or growing their size during oscillation (the optimum condition is expected to be 20 kHz and 100 W).
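The study post-processed the frames in ImageJ; as a rough stand-in for the bubble count, size, and area-fraction metrics reported above, the sketch below thresholds a synthetic frame and counts bubbles by 4-connected flood fill, reporting each bubble's area and equivalent-circle diameter. The image and bubble positions are invented.

```python
import math

# Synthetic stand-in for one preprocessed phase-contrast frame:
# bright bubbles on a dark background, positions and radii invented.
H = W = 60
img = [[0.0] * W for _ in range(H)]
for cy, cx, r in [(15, 15, 4), (40, 30, 6), (25, 48, 3)]:
    for y in range(H):
        for x in range(W):
            if (y - cy) ** 2 + (x - cx) ** 2 <= r ** 2:
                img[y][x] = 1.0

def label_bubbles(binary):
    """4-connected component labelling via flood fill; per-bubble stats."""
    h, w = len(binary), len(binary[0])
    visited = [[False] * w for _ in range(h)]
    stats = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not visited[sy][sx]:
                visited[sy][sx] = True
                stack, area = [(sy, sx)], 0
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not visited[ny][nx]):
                            visited[ny][nx] = True
                            stack.append((ny, nx))
                # Equivalent-circle diameter, as commonly reported for bubbles
                stats.append({"area_px": area,
                              "eq_diameter_px": 2 * math.sqrt(area / math.pi)})
    return stats

bubbles = label_bubbles([[v > 0.5 for v in row] for row in img])
total = sum(b["area_px"] for b in bubbles)
print("bubble count:", len(bubbles))
print("area fraction:", total / (H * W))
```

A real pipeline would add background subtraction and a pixel-to-micron calibration before converting the equivalent diameters to physical units.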

Keywords: bubble dynamics, cavitational bubbles, membrane fouling, ultrasonic cleaning

Procedia PDF Downloads 131
91 Scoring System for the Prognosis of Sepsis Patients in Intensive Care Units

Authors: Javier E. García-Gallo, Nelson J. Fonseca-Ruiz, John F. Duitama-Munoz

Abstract:

Sepsis is a syndrome of physiological and biochemical abnormalities induced by severe infection, and it carries high mortality and morbidity; therefore, the severity of the patient's condition must be assessed quickly. After patient admission to an intensive care unit (ICU), it is necessary to synthesize the large volume of information collected from patients into a value that represents the severity of their condition. Traditional severity-of-illness scores seek to be applicable to all patient populations and usually assess in-hospital mortality. However, the use of machine learning techniques and the data of a population that shares a common characteristic could lead to the development of customized mortality prediction scores with better performance. This study presents the development of a score for one-year mortality prediction of patients admitted to an ICU with a sepsis diagnosis. 5650 ICU admissions extracted from the MIMIC-III database were evaluated, divided into two groups: 70% to develop the score and 30% to validate it. Comorbidities, demographics and clinical information from the first 24 hours after ICU admission were used to develop the mortality prediction score. LASSO (least absolute shrinkage and selection operator) and SGB (Stochastic Gradient Boosting) variable-importance methodologies were used to select the set of variables that make up the developed score. Each of these variables was dichotomized, and a cut-off point that divides the population into two groups with different mean mortalities was found; if the patient is in the group with higher mortality, a one is assigned to that variable, otherwise a zero. These binary variables are used in a logistic regression (LR) model, and its coefficients are rounded to the nearest integer. The resulting integers are the point values that make up the score when multiplied by the corresponding binary variables and summed.
The one-year mortality probability was estimated using the score as the only variable in an LR model. The predictive power of the score was evaluated using the 1695 admissions of the validation subset, obtaining an area under the receiver operating characteristic curve of 0.7528, which outperforms the results obtained with the Sequential Organ Failure Assessment (SOFA), Oxford Acute Severity of Illness Score (OASIS) and Simplified Acute Physiology Score II (SAPS II) scores on the same validation subset. Observed and predicted mortality rates within the estimated-probability deciles were compared graphically and found to be similar, indicating that the risk estimate obtained with the score is close to the observed mortality; the number of events (deaths) indeed increases from the decile with the lowest probabilities to the decile with the highest probabilities. Sepsis is a syndrome that carries a high mortality, 43.3% for the patients included in this study; therefore, tools that help clinicians quickly and accurately identify a worse prognosis are needed. This work demonstrates the importance of customizing mortality prediction scores, since the developed score provides better performance than traditional scoring systems.
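The scoring arithmetic described above (dichotomized variables, LR coefficients rounded to the nearest integer, points summed into a score, score fed back into an LR for the probability) can be sketched as follows. The variable names, coefficients, and calibration parameters are all invented for illustration, not taken from the MIMIC-III analysis.

```python
import math

# Suppose (hypothetically) the LR fit on the dichotomized variables
# returned these coefficients; names and values are invented.
FITTED = {"lactate_high": 0.83, "age_high": 1.21, "vasopressor_use": 2.05}

# Rounded coefficients become the integer point values of the score
POINTS = {name: round(coef) for name, coef in FITTED.items()}

def risk_score(admission):
    """Integer score: sum of point values over the binary (0/1) variables."""
    return sum(POINTS[name] * admission[name] for name in POINTS)

def mortality_prob(score, b0=-2.2, b1=0.55):
    """One-year mortality from the score-only LR; b0, b1 are hypothetical."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * score)))

patient = {"lactate_high": 1, "age_high": 0, "vasopressor_use": 1}
print("point values:", POINTS)
print("score:", risk_score(patient))
print("estimated 1-year mortality:", round(mortality_prob(risk_score(patient)), 3))
```

Rounding the coefficients trades a little discrimination for a score a clinician can sum mentally at the bedside, which is the usual motivation for integer point systems.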

Keywords: intensive care, logistic regression model, mortality prediction, sepsis, severity of illness, stochastic gradient boosting

Procedia PDF Downloads 199
90 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture

Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán

Abstract:

Time-sensitive services are the base of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling; however, reactive auto-scaling has been the subject of few in-depth studies. This presentation shows a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks some parameters related to the transition between models; our model uses queuing-theory parameters to describe these transitions. It relates the MAPE-K loop times, the sampling frequency, the cooldown period, the number of requests an instance can handle per unit of time, the number of incoming requests at a given instant, and a function that describes the acceleration of the service's ability to handle more requests. This model is later used as a solution to horizontally auto-scale time-sensitive services composed of microservices, re-evaluating the model's parameters periodically to allocate resources. The solution requires limiting the acceleration of the growth in the number of incoming requests to keep a constrained response time; business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The exposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request ratio. A common request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds.
Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests that cannot finish in time, to prevent resource saturation. When the load decreases, instances with lower load are kept in a backlog where no more requests are assigned to them. If the load grows and an instance in the backlog is required, it returns to the running state; if instead it finishes the computation of all its requests and is no longer required, it is permanently deallocated. A few load patterns are required to represent the worst-case scenarios for reactive systems; the following scenarios test response times, resource consumption, and business costs. The first scenario is a burst-load scenario: all methodologies will discard requests if the burst is rapid enough. This scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add a different number of instances can handle the load with less business cost. The exposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics.
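A minimal reactive scaling loop consistent with the parameters named above (sampling period, cooldown, per-instance capacity, incoming requests per instant) might look as follows. The headroom threshold, cooldown length, and burst pattern are invented, and the sketch omits the backlog and acceleration terms of the full model.

```python
import math

def reactive_scaler(load_per_tick, capacity_per_instance, headroom=0.8,
                    cooldown_ticks=3, min_instances=1):
    """MAPE-K-style loop: Monitor the incoming rate each sampling tick,
    Analyze utilization against per-instance capacity, Plan a new instance
    count, and Execute it, honouring a cooldown between scale actions."""
    instances, last_action, history = min_instances, -cooldown_ticks, []
    for tick, incoming in enumerate(load_per_tick):
        # Analyze: smallest count keeping utilization under the headroom cap
        target = max(min_instances,
                     math.ceil(incoming / (capacity_per_instance * headroom)))
        # Plan/Execute: act only if the cooldown period has elapsed
        if target != instances and tick - last_action >= cooldown_ticks:
            instances, last_action = target, tick
        history.append(instances)
    return history

# Burst load pattern (requests per tick), akin to the first test scenario
load = [10, 10, 10, 120, 120, 120, 120, 30, 30, 30]
hist = reactive_scaler(load, capacity_per_instance=20)
print("instances over time:", hist)
```

The cooldown is what makes the loop reactive rather than oscillatory: without it, noisy load samples would trigger a scale action every tick.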

Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing

Procedia PDF Downloads 76
89 Exploring Safety Culture in Interventional Radiology: A Cross-Sectional Survey on Team Members' Attitudes

Authors: Anna Bjällmark, Victoria Persson, Bodil Karlsson, May Bazzi

Abstract:

Introduction: Interventional radiology (IR) is a continuously growing discipline that allows minimally invasive treatment of various medical conditions. The IR environment is, in several ways, comparable to the complex and accident-prone operating room (OR) environment. This implies that the IR environment may also be associated with various types of risks related to the work process and communication in the team. Patient safety is a central aspect of healthcare and involves the prevention and reduction of adverse events related to patient care. To maintain patient safety, it is crucial to build a safety culture where the staff are encouraged to report events and incidents that may have affected patient safety. It is also important to continuously evaluate the staff's attitudes to patient safety. Despite the increasing number of IR procedures, research on the staff's views regarding patient safety is lacking. Therefore, the main aim of the study was to describe and compare the IR team members' attitudes to patient safety. The secondary aim was to evaluate whether the WHO safety checklist was routinely used for IR procedures. Methods: An electronic survey was distributed to 25 interventional units in Sweden. The target population was the staff working in the IR team, i.e., physicians, radiographers, nurses, and assistant nurses. A modified version of the Safety Attitudes Questionnaire (SAQ) was used. Responses from 19 of the 25 IR units (44 radiographers, 18 physicians, 5 assistant nurses, and 1 nurse) were received. The respondents rated their level of agreement with 27 items related to safety culture on a five-point Likert scale ranging from “Disagree strongly” to “Agree strongly.” Data were analyzed statistically using SPSS. The percentage of positive responses (PPR) was calculated as the percentage of respondents with a scale score of 75 or higher, which corresponded to the response options “Agree slightly” or “Agree strongly”.
Thus, average scores ≥ 75% were classified as “positive” and average scores < 75% were classified as “non-positive”. Findings: The results indicated that the IR team had the highest factor scores and the highest percentages of positive responses for job satisfaction (90/94%), followed by teamwork climate (85/92%). In contrast, stress recognition received the lowest ratings (54/25%). Attitudes related to these factors were relatively consistent between professions, with only a few significant differences noted (factor score: p=0.039 for job satisfaction, p=0.050 for working conditions; percentage of positive responses: p=0.027 for perception of management). Radiographers tended to report slightly lower values than the other professions for these factors (p<0.05). The respondents reported that the WHO safety checklist was not routinely used at their IR units but acknowledged its importance for patient safety. Conclusion: This study found high scores for job satisfaction and teamwork climate but lower scores for perception of management and stress recognition, indicating that the latter are areas for improvement. Attitudes remained relatively consistent among the professions, but radiographers reported slightly lower values for job satisfaction and perception of management. The WHO safety checklist was considered important for patient safety.
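The PPR computation described in the methods can be sketched as follows. This is a minimal illustration, assuming the standard SAQ convention that each five-point Likert response r in 1..5 is converted to a 0-100 scale as (r - 1) * 25 and averaged across items; the item data below are invented for demonstration.

```python
def scale_score(likert_responses):
    """Convert five-point Likert item responses (1-5) to a 0-100 scale score,
    averaged across items (standard SAQ-style scoring)."""
    converted = [(r - 1) * 25 for r in likert_responses]
    return sum(converted) / len(converted)

def percentage_positive(respondent_scores, threshold=75):
    """PPR: the share of respondents whose scale score meets the threshold;
    75 corresponds to 'Agree slightly' or 'Agree strongly'."""
    positive = sum(1 for s in respondent_scores if s >= threshold)
    return 100 * positive / len(respondent_scores)

# Four hypothetical respondents, three items each.
scores = [scale_score(items) for items in [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2]]]
print(percentage_positive(scores))  # 50.0: two of four respondents score >= 75
```

On this scoring, a respondent who answers "Agree slightly" (4) to every item lands exactly on the 75-point threshold, which is why that response option marks the boundary of a "positive" rating.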

Keywords: interventional radiology, patient safety, safety attitudes questionnaire, WHO safety checklist

Procedia PDF Downloads 48
88 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types

Authors: Qianxi Lv, Junying Liang

Abstract:

Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a ‘complex’ and ‘extreme condition’ among cognitive tasks, whereas consecutive interpreting (CI) does not require sharing processing capacity between concurrent tasks. Given that SI exerts great cognitive demand, it makes sense to posit that the output of SI may be more compromised than that of CI in its linguistic features. The bulk of the research has stressed the varying cognitive demands and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in investigating the quantitative linguistic factors discriminating between SI and CI, the current study examines potential lexical simplification, syntactic complexity, and sequential organization mechanisms with a self-built inter-modal corpus of transcribed simultaneous and consecutive interpretations, translated speech, and original speech texts, totaling 321,960 running words. The lexical features are extracted in terms of lexical density, list head coverage, hapax legomena, and type-token ratio, as well as core vocabulary percentage. Dependency distance, an index of syntactic complexity reflective of processing demand, is employed. The frequency motif, a non-grammatically-bound sequential unit, is used to visualize the local function distribution in the interpreting output. While SI is generally regarded as multitasking with a high cognitive load, our findings show that CI may tax cognitive resources differently, and perhaps more heavily, and hence yields more lexically and syntactically simplified output. In addition, the sequential features manifest that SI and CI organize the sequences from the source text into the output in different ways, each minimizing the cognitive load in its own manner. We interpret the results within a framework in which cognitive demand is exerted on both the maintenance and coordination components of Working Memory. 
On the one hand, the information maintained in CI is inherently larger in volume than in SI. On the other hand, time constraints directly influence the sentence reformulation process. The temporal pressure from the input in SI allows the interpreters to keep only a small chunk of information in the focus of attention. Thus, SI interpreters usually produce the output by largely retaining the source structure so as to release the information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation and are thus more self-paced. CI interpreters may therefore tend to retain and generate the information in ways that lessen the demand. In other words, interpreters cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more content words of higher frequency values and fewer variations, simpler structures, and more frequently used language sequences. We consequently propose a revised effort model based on these results for a better illustration of cognitive demand during both interpreting types.
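Several of the lexical indices named above can be sketched with a few lines of code. This is a simplified illustration only: it assumes whitespace tokenization of lower-cased text, and the small function-word list is a hypothetical stand-in for the full closed-class inventory a corpus study would use.

```python
from collections import Counter

# Hypothetical, minimal function-word list for illustration only.
FUNCTION_WORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "that", "it"}

def lexical_features(text):
    """Compute three lexical indices from a whitespace-tokenized text:
    type-token ratio, hapax legomena count, and lexical density."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    ttr = len(counts) / len(tokens)                    # distinct types per token
    hapax = sum(1 for c in counts.values() if c == 1)  # words occurring exactly once
    content = sum(1 for t in tokens if t not in FUNCTION_WORDS)
    density = content / len(tokens)                    # content-word proportion
    return {"ttr": round(ttr, 3), "hapax": hapax, "lexical_density": round(density, 3)}

print(lexical_features("the interpreter keeps the structure of the source text in memory"))
```

On this view, the finding that CI output is more lexically simplified would surface as lower type-token ratio and lexical density values for the CI subcorpus than for the SI subcorpus.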

Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity

Procedia PDF Downloads 154
87 Magnetic Solid-Phase Separation of Uranium from Aqueous Solution Using High Capacity Diethylenetriamine Tethered Magnetic Adsorbents

Authors: Amesh P, Suneesh A S, Venkatesan K A

Abstract:

Magnetic solid-phase extraction is a relatively new method among solid-phase extraction techniques for separating metal ions from aqueous solutions, such as mine water and groundwater, contaminated wastes, etc. However, bare magnetite (Fe3O4) particles exhibit poor selectivity due to the absence of target-specific functional groups for sequestering the metal ions. The selectivity of these magnetic particles can be remarkably improved by covalently tethering task-specific ligands to the magnetic surfaces. The magnetic particles offer a number of advantages, such as quick phase separation aided by an external magnetic field. Moreover, the solid adsorbent can be prepared with particle sizes ranging from a few micrometers down to the nanometer scale, which offers further advantages such as enhanced extraction kinetics and higher extraction capacity. Conventionally, magnetite (Fe3O4) particles are prepared by the hydrolysis and co-precipitation of ferrous and ferric salts in aqueous ammonia solution. Since covalent linking of task-specific functionalities to Fe3O4 is difficult, and Fe3O4 is also susceptible to redox reactions in the presence of acid or alkali, it is necessary to modify its surface by silica coating. This coating is usually carried out by hydrolysis and condensation of tetraethyl orthosilicate over the surface of magnetite to yield magnetite particles coated with a thin silica layer. Since the silica-coated magnetite particles are amenable to further surface modification, they can be reacted with task-specific functional groups to obtain functionalized magnetic particles. The surface area exhibited by such magnetic particles usually falls in the range of 50 to 150 m2.g-1, which offers advantages such as quick phase separation compared to other solid-phase extraction systems. 
In addition, magnetic (Fe3O4) particles covalently linked to a mesoporous silica matrix (MCM-41) bearing task-specific ligands offer further advantages in terms of extraction kinetics, stability, number of reuse cycles, and metal extraction capacity, owing to the large surface area, ample porosity, and greater number of functional groups per unit area of these adsorbents. In view of this, the present paper deals with the synthesis of a uranium-specific diethylenetriamine (DETA) ligand anchored on silica-coated magnetite (Fe-DETA) as well as on magnetic mesoporous silica (MCM-Fe-DETA), and with studies on the extraction of uranium from aqueous solutions spiked with uranium to mimic mine water or groundwater contaminated with uranium. The synthesized solid-phase adsorbents were characterized by FT-IR, Raman, TG-DTA, XRD, and SEM. The extraction behavior of uranium on the solid phase was studied under several conditions: the effect of pH, the initial concentration of uranium, the rate of extraction and its variation with pH and initial uranium concentration, and the effect of interfering ions such as CO32-, Na+, Fe2+, Ni2+, and Cr3+. A maximum extraction capacity of 233 mg.g-1 was obtained for Fe-DETA, and a remarkably high capacity of 1047 mg.g-1 for MCM-Fe-DETA. The mechanism of extraction, the speciation of uranium, the extraction studies, the reusability, and the other results obtained in the present study suggest that Fe-DETA and MCM-Fe-DETA are potential candidates for the extraction of uranium from mine water and groundwater.
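The reported capacities follow from the standard batch-uptake relation q = (C0 - Ce) * V / m, where C0 and Ce are the initial and equilibrium metal concentrations, V is the solution volume, and m is the adsorbent mass. A minimal sketch, with purely illustrative numbers that are not taken from the study:

```python
def adsorption_capacity(c0_mg_per_l, ce_mg_per_l, volume_l, mass_g):
    """Batch uptake q = (C0 - Ce) * V / m, in mg of metal per g of adsorbent."""
    return (c0_mg_per_l - ce_mg_per_l) * volume_l / mass_g

# Illustrative example: 100 mL of a 250 mg/L uranium solution contacted with
# 0.1 g of adsorbent, with 17 mg/L of uranium remaining at equilibrium.
q = adsorption_capacity(250.0, 17.0, 0.100, 0.100)
print(q)  # 233.0 mg/g
```

Capacities of this magnitude place Fe-DETA at 233 mg.g-1 on the same scale as this example; the 1047 mg.g-1 figure for MCM-Fe-DETA reflects the much larger functionalized surface area of the mesoporous support.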

Keywords: diethylenetriamine, magnetic mesoporous silica, magnetic solid-phase extraction, uranium extraction, wastewater treatment

Procedia PDF Downloads 144