Search results for: intersectional approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13890

240 Precancerous Lesions Related to Human Papillomavirus: The Importance of Cervicography as a Complementary Diagnostic Method

Authors: Denise De Fátima Fernandes Barbosa, Tyane Mayara Ferreira Oliveira, Diego Jorge Maia Lima, Paula Renata Amorim Lessa, Ana Karina Bezerra Pinheiro, Cintia Gondim Pereira Calou, Glauberto Da Silva Quirino, Hellen Lívia Oliveira Catunda, Tatiana Gomes Guedes, Nicolau Da Costa

Abstract:

The aim of this study is to evaluate the use of Digital Cervicography (DC) in the diagnosis of precancerous lesions related to Human Papillomavirus (HPV). This was a cross-sectional, evaluative study with a quantitative approach, held in a health unit linked to the Pro-Deanship of Extension of the Federal University of Ceará from July to August 2015, with a sample of 33 women. Data were collected through interviews using a standardized instrument. The DC technique followed the standardization of Franco (2005). Polymerase Chain Reaction (PCR) was performed to identify high-risk HPV genotypes. The DC images were evaluated and classified by three judges, and the results of DC and PCR were classified as positive, negative, or inconclusive. The collected data were compiled and analyzed in the Statistical Package for the Social Sciences (SPSS) using descriptive statistics and cross-tabulations. Sociodemographic, sexual, and reproductive variables were analyzed as absolute frequencies (N) and percentages (%). The Kappa coefficient (κ) was applied to determine the agreement between the judges' DC reports and the PCR results, and among the judges themselves on the DC results. Pearson's chi-square test was used to analyze sociodemographic, sexual, and reproductive variables against the PCR reports, with p<0.05 considered statistically significant. Ethical aspects of research involving human beings were respected, according to Resolution 466/2012. Regarding the sociodemographic profile, the most prevalent age groups, equally represented, were 21-30 and 41-50 years (24.2% each). Most participants reported brown skin color (84.8%), and 96.9% had completed, or were attending, primary or secondary school. 51.5% were married, 72.7% Catholic, 54.5% employed, and 48.5% had an income between one and two minimum wages.
As for sexual and reproductive characteristics, most participants were heterosexual (93.9%) and did not use condoms during sexual intercourse (72.7%). 51.5% had a previous history of a Sexually Transmitted Infection (STI), with HPV being the most prevalent (76.5%). 57.6% did not use contraception, 78.8% had undergone the cervical cancer screening exam (PCCU) within the previous year or less, 72.7% had no family history of cervical cancer, 63.6% were multiparous, and 97% were not vaccinated against HPV. DC showed a good level of agreement between raters (κ=0.542) and, compared with PCR, a specificity of 77.8% and a sensitivity of 25%. Only the variable race showed a statistically significant association with PCR (p=0.042). DC had 100% acceptance among the women in the sample, suggesting further trials of the method to establish it as a viable technique. The DC positivity criteria were developed by nurses, and these professionals also perform the PCCU in Brazil, which means that DC can be an important complementary diagnostic method for assessing the quality of these professionals' examinations.
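The inter-rater agreement analysis described above can be sketched in a few lines; this is a generic implementation of Cohen's kappa, not the study's SPSS output, and the ratings shown are illustrative rather than the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical ratings."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of cases where the raters coincide.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / n**2
    return (observed - expected) / (1 - expected)
```

A κ near 0.542, as reported above, falls in the range conventionally read as moderate-to-good agreement.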

Keywords: gynecological examination, human papillomavirus, nursing, papillomavirus infections, uterine neoplasms

Procedia PDF Downloads 300
239 Academic Staff Identity and Emotional Labour: Exploring Pride, Motivation, and Relationships in Universities

Authors: Keith Schofield, Garry R. Prentice

Abstract:

The perceptions of the work an academic does, and the environment in which they do it, contribute to the professional identity of that academic. In turn, this has implications for the level of involvement they have in their job, their satisfaction, and their work product. This research explores academic identities in British and Irish institutions and considers the complex interplay between identity, practice, and participation. Theoretical assumptions made in this paper assert that meaningful work has positive effects on work pride, organisational commitment, organisational citizenship, and motivation; when employees participate enthusiastically, they are likely to be more engaged, more successful, and more satisfied. Further examination is given to the context in which this participation happens; the nature of institutional processes, management, and relationships with colleagues, team members, and students is considered. The present study follows a mixed-methods approach to explore work satisfaction constructs in a number of academic contexts in the UK and Ireland. The quantitative component of this research (convenience sample: 155 academic and support/administrative staff; 36.1% male, 63.9% female; 60.8% academic staff, 39.2% support/administrative staff; across a number of universities in the UK and Ireland) was based on an established emotional labour model and was tested across gender groups, job roles, and years of service. This was complemented by qualitative semi-structured interviews (purposive sample: 10 academics and 5 support/administrative staff across the same universities in the UK and Ireland) examining themes including values within academia, work conditions, professional development, and the transmission of knowledge to students.
Experiences from both academic and support perspectives were sought in order to gain a holistic view of academia and to provide an opportunity to explore the dynamic of the academic/administrator relationship within the broader institutional context. The quantitative emotional labour model, tested via a path analysis, provided a robust description of the relationships within the data. The significant relationships within the model included a link between the non-expression of true feelings, resulting in emotionally laborious work, and lower levels of intrinsic motivation alongside higher levels of extrinsic motivation. Higher levels of intrinsic motivation were also positively linked to work pride. These findings were further explored in the qualitative elements of the research, where themes emerged including the disconnection between faculty management and staff, personal fulfilment, and the friction between the identities of teacher, researcher/practitioner, and administrator. The implications of the research findings from this study are combined and discussed in relation to possible identity-related and emotional-labour-management-related interventions. Further, suggestions are made to institutions concerning the application of these findings, including the development of academic practices, with specific reference to the duality of identity required to service the combined teacher/researcher role. Broader considerations of the paper include how individuals and institutions may engage with the changing nature of students-as-consumers, as well as a recommendation to centralise personal fulfilment through the development of professional academic identities.

Keywords: academic work, emotional labour, identity friction, mixed methods

Procedia PDF Downloads 275
238 Chatbots vs. Websites: A Comparative Analysis Measuring User Experience and Emotions in Mobile Commerce

Authors: Stephan Boehm, Julia Engel, Judith Eisser

Abstract:

During the last decade, communication on the Internet transformed from a broadcast to a conversational model by supporting more interactive features, enabling user-generated content, and introducing social media networks. Another important trend with a significant impact on electronic commerce is a massive usage shift from desktop to mobile devices. However, the presentation of product- or service-related information accumulated on websites, micro pages, or portals often remains the pivot and focal point of a customer journey. A more recent change in user behavior, especially in younger user groups and in Asia, is the increasing adoption of messaging applications supporting almost real-time but asynchronous communication on mobile devices. Mobile apps of this type can not only provide an alternative to traditional one-to-one communication on mobile devices, such as voice calls or the short messaging service; they can also be used in mobile commerce as a new marketing and sales channel, e.g., for product promotions and direct marketing activities. This requires a new way of customer interaction compared to traditional mobile commerce activities and functionalities provided by mobile websites. One option better aligned with the customer interaction in messaging apps is so-called chatbots. Chatbots are conversational programs or dialog systems simulating text- or voice-based human interaction. They can be introduced in mobile messaging and social media apps using rule-based or artificial intelligence-based implementations. In this context, a comparative analysis is conducted to examine the impact of using traditional websites or chatbots for promoting a product in an impulse purchase situation. The aim of this study is to measure the impact on the customers' user experience and emotions. The study is based on a random sample of about 60 smartphone users aged 20 to 30.
Participants are randomly assigned to two groups and take part in either a traditional website-based or an innovative chatbot-based mobile commerce scenario. The chatbot-based scenario is implemented using a Wizard-of-Oz experimental approach, for reasons of simplicity and to allow more flexibility when simulating simple rule-based and more advanced artificial intelligence-based chatbot setups. A specific set of metrics is defined to measure and compare the user experience in both scenarios. It can be assumed that users get more emotionally involved when interacting with a system simulating human communication behavior instead of browsing a mobile commerce website. For this reason, innovative face-tracking and analysis technology is used to derive feedback on the emotional status of the study participants while they interact with the website or the chatbot. This study is a work in progress. The results will provide first insights into the effects of chatbot usage on user experience and emotions in mobile commerce environments. Based on the study findings, basic requirements for a user-centered design and implementation of chatbot solutions for mobile commerce can be derived. Moreover, first indications of situations where chatbots might be favorable compared to traditional website-based mobile commerce can be identified.
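The rule-based end of the chatbot spectrum mentioned above can be illustrated with a minimal keyword-matching sketch; the rules, product claims, and wording here are assumptions for illustration, not the study's actual Wizard-of-Oz setup.

```python
# Keyword rules mapping user intents to canned promotional answers.
# Product details are invented placeholders for the sketch.
RULES = [
    (("price", "cost", "how much"), "The promoted smartphone is 299 EUR this week only."),
    (("battery", "charge"), "The battery lasts about two days of typical use."),
    (("buy", "order", "purchase"), "Great! I can take you straight to checkout."),
]
FALLBACK = "I'm not sure I understood. Ask me about price, battery, or ordering."

def reply(message: str) -> str:
    """Return the first rule's answer whose keywords match, else a fallback."""
    text = message.lower()
    for keywords, answer in RULES:
        if any(k in text for k in keywords):
            return answer
    return FALLBACK
```

In a Wizard-of-Oz study, a hidden human operator plays the role of this `reply` function, which is what lets the same setup also mimic more advanced AI-based behavior.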

Keywords: chatbots, emotions, mobile commerce, user experience, Wizard-of-Oz prototyping

Procedia PDF Downloads 458
237 Skin-to-Skin Contact Simulation: Improving Health Outcomes for Medically Fragile Newborns in the Neonatal Intensive Care Unit

Authors: Gabriella Zarlenga, Martha L. Hall

Abstract:

Introduction: Premature infants are at risk for neurodevelopmental deficits and hospital readmissions, which can increase the financial burden on the health care system and families. Kangaroo care (skin-to-skin contact) is a practice that can improve preterm infant health outcomes. Preterm infants can achieve adequate body temperature, heartbeat, and breathing regulation by lying directly on the mother's abdomen and between her breasts. Due to some infants' conditions, however, kangaroo care is not always a feasible intervention. The purpose of this proof-of-concept research project is to create a device that simulates skin-to-skin contact for preterm infants not eligible for kangaroo care, with the aim of promoting the infants' health outcomes, reducing the incidence of serious neonatal and early childhood illnesses, and/or improving cognitive, social, and emotional aspects of development. Methods: The study design is a proof of concept based on a three-phase approach: (1) an observational study and data analysis of the standard of care for two groups of preterm infants, (2) design and concept development of a novel device for preterm infants not currently eligible for standard kangaroo care, and (3) prototyping, laboratory testing, and evaluation of the novel device against current assessment parameters of kangaroo care. A single-center study will be conducted in an area hospital offering Level III neonatal intensive care. Eligible participants include newborns born premature (28-30 weeks of gestational age) admitted to the NICU. The study design includes two groups: a control group receiving standard kangaroo care and an experimental group not eligible for kangaroo care. Based on behavioral analysis of observational video data collected in the NICU, the device will be created to simulate the mother's body using electrical components in a thermoplastic polymer housing covered in silicone.
It will be designed with a microprocessor that controls the simulated respiration, heartbeat, and body temperature of the 'simulated caregiver' by using a pneumatic lung, vibration sensors (heartbeat), pressure sensors (weight/position), and resistive film to measure temperature. A slight contour of the simulator surface may be integrated to help position the infant correctly. Control and monitoring of the skin-to-skin contact simulator would be performed locally via an integrated touchscreen. The unit would have built-in Wi-Fi connectivity as well as an optional Bluetooth connection through which the respiration and heart rate could be synced with a parent or caregiver. A camera would be integrated, allowing a video stream of the infant in the simulator to be sent to a monitoring location. Findings: Expected outcomes are stabilization of respiratory and cardiac rates and thermoregulation in those infants not eligible for skin-to-skin contact with their mothers, with real-time Bluetooth syncing of the mother's vital signs to the device to mimic the experience in the womb. Results of this study will benefit clinical practice by creating a new standard of care for premature neonates in the NICU who are deprived of skin-to-skin contact due to various health restrictions.
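As a rough illustration of the control logic such a device might run, the sketch below derives actuator targets (heartbeat pulse, lung inflation, heater setpoint) from elapsed time; all rates, ranges, and function names are assumptions for illustration, not the prototype's actual specification.

```python
import math

HEART_RATE_BPM = 70    # beats/min, mimicking a maternal resting heartbeat
RESP_RATE_BPM = 15     # breaths/min, driven by the pneumatic lung
TARGET_TEMP_C = 36.5   # skin-surface temperature setpoint

def vitals_at(t_seconds: float) -> dict:
    """Actuator targets at time t: a short vibration burst per heartbeat,
    a smooth 0..1 inflation cycle for the lung, and the heater setpoint."""
    heart_phase = (t_seconds * HEART_RATE_BPM / 60) % 1.0
    resp_phase = (t_seconds * RESP_RATE_BPM / 60) % 1.0
    return {
        "heartbeat_pulse": heart_phase < 0.1,  # vibrate for the first 10% of each beat
        "lung_inflation": 0.5 * (1 - math.cos(2 * math.pi * resp_phase)),
        "heater_setpoint_c": TARGET_TEMP_C,
    }
```

The Bluetooth syncing described above would amount to replacing the fixed rate constants with values streamed from the mother's monitors.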

Keywords: kangaroo care, wearable technology, pre-term infants, medical design

Procedia PDF Downloads 156
236 Widely Diversified Macroeconomies in the Super-Long Run Cast Doubt on the Path-Independent Equilibrium Growth Model

Authors: Ichiro Takahashi

Abstract:

One of the major assumptions of mainstream macroeconomics is the path independence of the capital stock. This paper challenges this assumption by employing an agent-based approach. The simulation results showed the existence of multiple "quasi-steady state" equilibria of the capital stock, which may cast serious doubt on the validity of the assumption. The finding would give a better understanding of many phenomena that involve hysteresis, including the causes of poverty. The "market-clearing view" has been widely shared among major schools of macroeconomics. They understand that the capital stock, the labor force, and technology determine the "full-employment" equilibrium growth path, and that demand/supply shocks can move the economy away from the path only temporarily: the dichotomy between short-run business cycles and the long-run equilibrium path. The view thus implicitly assumes the long-run capital stock to be independent of how the economy has evolved. In contrast, "Old Keynesians" have recognized fluctuations in output as arising largely from fluctuations in real aggregate demand. It is then an interesting question whether an agent-based macroeconomic model, which is known to have path dependence, can generate multiple full-employment equilibrium trajectories of the capital stock in the super-long run. If the answer is yes, the equilibrium level of capital stock, an important supply-side factor, would no longer be independent of the business cycle phenomenon. This paper attempts to answer the above question by using the agent-based macroeconomic model developed by Takahashi and Okada (2010). The model serves this purpose well because it has neither population growth nor technological progress. The objective of the paper is twofold: (1) to explore the causes of long-term business cycles, and (2) to examine the super-long-run behavior of the capital stock in full-employment economies.
(1) The simulated behaviors of the key macroeconomic variables, such as output, employment, and real wages, showed widely diversified macroeconomies. They were often remarkably stable but exhibited both short-term and long-term fluctuations. The long-term fluctuations occur through two adjustments: the quantity and relative cost adjustments of the capital stock. The first is obvious and assumed by many business cycle theorists. The second works as follows: reduced aggregate demand lowers prices, which raises real wages, thereby decreasing the relative cost of capital with respect to labor. (2) The long-term business cycles/fluctuations were synthesized with the hysteresis of real wages, interest rates, and investments. In particular, a sequence of simulation runs with a super-long simulation period generated a wide range of perfectly stable paths, many of which achieved full employment: all the macroeconomic trajectories, including capital stock, output, and employment, were perfectly horizontal over 100,000 periods. Moreover, the full-employment level of capital stock was influenced by the history of unemployment, which was itself path-dependent. Thus, an experience of severe unemployment in the past kept the real wage low, which discouraged relatively costly investment in capital stock. Meanwhile, a history of good performance sometimes brought about a low capital stock due to a high interest rate that was consistent with strong investment.
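A toy model can illustrate the kind of path dependence reported above. The sketch below is not the Takahashi-Okada model; it is a minimal economy, with invented parameters, in which a temporary demand shock ratchets the real wage down and thereby permanently lowers the long-run capital stock.

```python
def simulate(shock_length: int, periods: int = 500) -> float:
    """Long-run capital stock after a demand shock lasting `shock_length`
    periods starting at t=100. Longer shocks leave a lower steady state."""
    wage, capital = 1.0, 100.0
    for t in range(periods):
        in_shock = 100 <= t < 100 + shock_length
        unemployment = 0.2 if in_shock else 0.0
        wage *= 1 - 0.05 * unemployment   # unemployment ratchets real wages down
        target = 100.0 * wage             # cheap labor makes capital relatively costly
        capital += 0.1 * (target - capital)  # sluggish capital adjustment
    return capital
```

Identical parameters, different shock histories, different "quasi-steady states": that is the hysteresis the agent-based results exhibit, here compressed into one feedback loop.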

Keywords: agent-based macroeconomic model, business cycle, hysteresis, stability

Procedia PDF Downloads 210
235 Provotyping Futures Through Design

Authors: Elisabetta Cianfanelli, Maria Claudia Coppola, Margherita Tufarelli

Abstract:

Design practices throughout history return a critical understanding of society, since they have always conveyed values and meanings aimed at (re)framing reality by acting in everyday life: here, design gains a cultural and normative character, since its artifacts, services, and environments hold the power to intercept, influence, and inspire thoughts, behaviors, and relationships. In this sense, design can be persuasive, engaging in the production of worlds and, as such, acting in the space between poietics and politics, so that chasing preferable futures and their aesthetic strategies becomes a matter full of political responsibility. This resonates with contemporary landscapes of radical interdependencies, which challenge designers to focus on complex socio-technical systems and to better support values such as equality and justice for both humans and nonhumans. In fact, it is in times of crisis and structural uncertainty that designers turn into visionaries at the service of society, envisioning scenarios and dwelling in the territories of imagination to conceive new fictions and frictions to be added to the thickness of the real. Here, design's main tasks are to develop options, to increase the variety of choices, and to cultivate its role as scout, jester, and agent provocateur for the public, so that design for transformation emerges, making an explicit commitment to society and furthering structural change in a proactive and synergic manner. However, the exploration of possible futures is both a trap and a trampoline because, although it embodies a radical research tool, it raises various challenges when the design process goes further in translating such a vision into an artefact, whether tangible or intangible, through which it should deliver that bit of future into everyday experience.
Today designers are devising new tools and practices to tackle current wicked challenges, combining their approaches with other disciplinary domains: futuring through design thus rises from research strands like speculative design, design fiction, and critical design, where the blending of design approaches and futures thinking brings an action-oriented and product-based approach to strategic insights. The contribution positions itself at the intersection of those approaches, aiming to discuss design's tools of inquiry through which it is possible to grasp the agency of imagined futures in the present. Since futures are not remote, they actively participate in creating path-dependent decisions, crystallized into the designed artifacts par excellence, prototypes, and their conceptual other, provotypes. With both being unfinished and multifaceted, the former are effective in reiterating solutions to problems already framed, while the latter prove useful when the goal is to explore and break boundaries, bringing preferable futures closer. By focusing on some provotypes throughout history which challenged markets and, above all, social and cultural structures, the contribution's final aim is to understand the knowledge produced by provotypes, understood as design spaces where design's humanistic side might help develop a deeper sensibility about uncertainty and, most of all, the unfinished nature of societal artifacts, whose experimentation would leave marks and traces to build up f(r)ictions as vital sparks of plurality and collective life.

Keywords: speculative design, provotypes, design knowledge, political theory

Procedia PDF Downloads 131
234 An Analysis of Economic Drivers and Technical Challenges for Large-Scale Biohydrogen Deployment

Authors: Rouzbeh Jafari, Joe Nava

Abstract:

This study includes learnings from the engineering practice normally performed on large-scale biohydrogen processes. If scale-up is done properly, biohydrogen can be a reliable pathway for biowaste valorization. Most studies on biohydrogen process development have used model feedstock to investigate process key performance indicators (KPIs). This study does not intend to compare different technologies with model feedstock; rather, it reports economic drivers and technical challenges, which helps in developing a road map for expanding biohydrogen deployment in Canada. BBA is a consulting firm responsible for the design of hydrogen production projects. Through executing these projects, work has been performed to identify, register, and mitigate the technical drawbacks of large-scale hydrogen production. Those learnings have been applied in this study to the biohydrogen process. Using data collected through a comprehensive literature review, a base case was considered as a reference, and several case studies were performed. Critical parameters of the process were identified, and through common engineering practice (process design, simulation, cost estimation, and life cycle assessment) the impact of these parameters on the commercialization risk matrix and class 5 cost estimates was reported. The process considered in this study is dark fermentation of food waste and woody biomass. To propose a reliable road map for developing a sustainable biohydrogen production process, the impact of critical parameters was studied on the end-to-end process. These parameters were 1) feedstock composition, 2) feedstock pre-treatment, 3) unit operation selection, and 4) the multi-product concept. A couple of emerging technologies were also assessed, such as photo-fermentation, integrated dark fermentation, and the use of ultrasound and microwaves to break down the feedstock's complex matrix and increase the overall hydrogen yield.
To properly report the impact of each parameter, the KPIs were identified as 1) hydrogen yield, 2) energy consumption, 3) secondary waste generated, 4) CO2 footprint, 5) product profile, 6) cost per kilogram of hydrogen ($/kg-H2), and 7) environmental impact. The feedstock is the main parameter defining the economic viability of biohydrogen production. Through parametric studies, it was found that biohydrogen production favors feedstock with higher carbohydrate content. The feedstock composition was varied by increasing one critical element (such as carbohydrates) and monitoring the evolution of the KPIs. Different cases were studied with diverse feedstocks, such as energy crops, wastewater sludge, and lignocellulosic waste. The base case process was applied to obtain reference KPI values, and modifications such as pretreatment and feedstock mix-and-match were implemented to investigate KPI changes. The complexity of the feedstock is the main bottleneck in the successful commercial deployment of the biohydrogen process as a reliable pathway for waste valorization. Hydrogen yield, reaction kinetics, and the performance of key unit operations are highly impacted as feedstock composition fluctuates during the lifetime of the process or from one case to another. In this situation, the multi-product concept becomes more reliable. In this concept, the process is not designed to produce only one target product, such as biohydrogen, but will have two or more products (biohydrogen and biomethane or biochemicals). This new approach is being investigated by the BBA team, and the results will be shared in another scientific contribution.
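The $/kg-H2 KPI listed above is typically computed as a levelized cost. The sketch below uses a standard capital recovery factor; all cost figures in the test are placeholders for illustration, not the study's class 5 estimates.

```python
def cost_per_kg_h2(capex: float, annual_opex: float, annual_kg_h2: float,
                   lifetime_years: int, discount_rate: float = 0.08) -> float:
    """Levelized cost of hydrogen: annualized capital plus operating cost,
    divided by annual production."""
    # Capital recovery factor converts up-front capex into an equivalent annuity.
    growth = (1 + discount_rate) ** lifetime_years
    crf = discount_rate * growth / (growth - 1)
    annualized_capex = capex * crf
    return (annualized_capex + annual_opex) / annual_kg_h2
```

Because capex is annualized, this metric directly exposes how feedstock-driven yield losses (fewer kg of H2 per year from the same plant) inflate the $/kg-H2 figure.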

Keywords: biohydrogen, process scale-up, economic evaluation, commercialization uncertainties, hydrogen economy

Procedia PDF Downloads 110
233 Educational Knowledge Transfer in Indigenous Mexican Areas Using Cloud Computing

Authors: L. R. Valencia Pérez, J. M. Peña Aguilar, A. Lamadrid Álvarez, A. Pastrana Palma, H. F. Valencia Pérez, M. Vivanco Vargas

Abstract:

This work proposes a cooperative-competitive ("coopetitive") approach that allows coordinated work among the Secretary of Public Education (SEP), the Autonomous University of Querétaro (UAQ), and government funds from the National Council for Science and Technology (CONACYT) or other international organizations, in order to build an overall knowledge transfer strategy with e-learning over the Cloud. Experts in junior high and high school education, working in multidisciplinary teams, perform analysis, evaluation, design, production, validation, and large-scale knowledge transfer using a Cloud Computing platform, allowing teachers and students to have all the information required to ensure nationally standardized knowledge of topics such as mathematics, statistics, chemistry, history, ethics, and civics. The work will start with a pilot test in Spanish and, initially, in two indigenous languages, Otomí and Náhuatl. Otomí has more than 285,000 speakers in Querétaro and Mexico's central region. Náhuatl is the most widely spoken indigenous language in Mexico, with more than 1,550,000 speakers. Phase one of the project takes into account negotiations with indigenous communities from different regions and the information and communication technologies needed to deliver the knowledge to the indigenous schools in their native language.
The methodology includes the following main milestones: identification of the indigenous areas where Otomí and Náhuatl are spoken; verification with the SEP of the location of existing indigenous schools; analysis and inventory of current school conditions; negotiation with community chiefs; analysis of the technological communication requirements to reach the indigenous communities; identification and inventory of local teachers' technology knowledge; selection of a pilot topic; analysis of current student competence under the traditional education system; identification of local translators; design of the e-learning platform; design of the multimedia resources and a storage strategy for Cloud Computing; translation of the topic into both languages; indigenous teachers' training; pilot test; course release; project follow-up; analysis of student requirements for the new technological platform; and definition of a new and improved proposal with greater reach in topics and regions. The importance of phase one of the project is manifold: it includes the proposal of a working technological scheme focusing on the cultural impact in Mexico, so that indigenous communities can improve their knowledge about new forms of crop improvement, home storage technologies, proven home remedies for common diseases, and ways of preparing foods containing major nutrients; disclose the strengths and weaknesses of each region; offer regional products through cloud computing platforms; and open communication spaces for inter-indigenous cultural exchange.

Keywords: Mexican indigenous tribes, education, knowledge transfer, cloud computing, Otomí, Náhuatl, language

Procedia PDF Downloads 404
232 Association between Polygenic Risk of Alzheimer's Dementia, Brain MRI and Cognition in UK Biobank

Authors: Rachana Tank, Donald. M. Lyall, Kristin Flegal, Joey Ward, Jonathan Cavanagh

Abstract:

Alzheimer's Research UK estimates that by 2050, 2 million individuals will be living with Late-Onset Alzheimer's Disease (LOAD). However, individuals experience considerable cognitive deficits and brain pathology over decades before reaching clinically diagnosable LOAD, and studies have utilised candidate gene studies, genome-wide association studies (GWAS), and polygenic risk (PGR) scores to identify high-risk individuals and potential pathways. This investigation aims to determine whether high genetic risk of LOAD is associated with worse brain MRI measures and cognitive performance in healthy older adults within the UK Biobank cohort. Previous studies investigating associations between PGR for LOAD and measures of MRI or cognitive functioning have focused on specific aspects of hippocampal structure, in relatively small sample sizes and with poor control for confounders such as smoking. Both the sample size of this study and the discovery GWAS sample are, to our knowledge, larger than in previous studies. Genetic interactions between the loci showing the largest effects in GWAS have not been extensively studied, and it is known that APOE e4 poses the largest genetic risk of LOAD, with potential gene-gene and gene-environment interactions of e4; for this reason, we also analyse genetic interactions of PGR with the APOE e4 genotype. The hypothesis is that high genetic loading, based on a polygenic risk score of 21 SNPs for LOAD, is associated with worse brain MRI and cognitive outcomes in healthy individuals within the UK Biobank cohort. Summary statistics from the Kunkle et al. GWAS meta-analysis (cases: n=30,344; controls: n=52,427) will be used to create polygenic risk scores based on 21 SNPs, and analyses will be carried out in N=37,000 participants in the UK Biobank. This will be the largest study to date investigating PGR of LOAD in relation to MRI. MRI outcome measures include white matter (WM) tracts and structural volumes.
Cognitive function measures include reaction time, pairs matching, trail making, digit symbol substitution, and prospective memory. The interaction of the APOE e4 alleles and PGR will be analysed by including APOE status as an interaction term coded as 0, 1, or 2 e4 alleles. Models will be partially adjusted for age, BMI, sex, genotyping chip, smoking, depression, and social deprivation. Preliminary results suggest the PGR score for LOAD is associated with decreased hippocampal volumes, including the hippocampal body (standardised beta = -0.04, P = 0.022) and tail (standardised beta = -0.037, P = 0.030), but not the hippocampal head. There were also associations of genetic risk with decreased cognitive performance, including fluid intelligence (standardised beta = -0.08, P<0.01) and slower reaction time (standardised beta = 2.04, P<0.01). No genetic interactions were found between APOE e4 dose and PGR score for MRI or cognitive measures. The generalisability of these results is limited by selection bias within the UK Biobank, as participants are less likely to be obese, to smoke, or to be socioeconomically deprived, and report fewer health conditions, compared to the general population. The lack of a unified approach or standardised method for calculating genetic risk scores may also be a limitation of these analyses. Further discussion and results are pending.
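The 21-SNP polygenic risk score described above is, at its core, a weighted sum of risk-allele dosages. The sketch below shows that computation with placeholder SNP IDs and weights, not Kunkle et al.'s actual estimates.

```python
# Placeholder per-SNP effect sizes (log odds ratios from a discovery GWAS).
# Real analyses would use the 21 published LOAD SNPs and their weights.
SNP_WEIGHTS = {
    "rs0000001": 0.12,
    "rs0000002": -0.05,
    "rs0000003": 0.08,
}

def polygenic_score(dosages: dict) -> float:
    """dosages maps SNP id -> risk-allele count (0, 1, or 2).
    Missing SNPs contribute zero (one common missing-genotype convention)."""
    return sum(weight * dosages.get(snp, 0) for snp, weight in SNP_WEIGHTS.items())
```

The APOE e4 interaction analysis then amounts to regressing each outcome on this score, the e4 allele count (0, 1, or 2), and their product term.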

Keywords: Alzheimer's dementia, cognition, polygenic risk, MRI

Procedia PDF Downloads 113
231 Technological Transference Tools to Diffuse Low-Cost Earthquake Resistant Construction with Adobe in Rural Areas of the Peruvian Andes

Authors: Marcial Blondet, Malena Serrano, Álvaro Rubiños, Elin Mattsson

Abstract:

In Peru, there are more than two million houses made of adobe (sun-dried mud bricks) or rammed earth (35% of all houses), in which almost 9 million people live, mainly because they cannot afford industrialized construction materials. Although adobe houses are cheap to build and thermally comfortable, their seismic performance is very poor, and they usually suffer significant damage or collapse with tragic loss of life. Therefore, over the years, researchers at the Pontifical Catholic University of Peru and other institutions have developed many reinforcement techniques in an effort to improve the structural safety of earthen houses located in seismic areas. However, most rural communities live under unacceptable seismic risk conditions because these techniques have not been adopted massively, mainly due to high cost and lack of diffusion. The nylon rope mesh reinforcement technique is simple and low-cost, and two technological transference tools have been developed to diffuse it among rural communities: 1) scale seismic simulations using a portable shaking table, designed to prove its effectiveness in protecting adobe houses; 2) a step-by-step illustrated construction manual that guides the complete building process of a nylon rope mesh reinforced adobe house. The district of Pullo was selected as the case study: a small rural community in the Peruvian Andes where more than 80% of the inhabitants live in adobe houses and more than 60% live in poverty or extreme poverty. The research team carried out a one-day workshop in May 2015 and a two-day workshop in September 2015. Results were positive: first, the nylon rope mesh reinforcement procedure proved simple enough to be replicated by adults, both young and senior, and participants handled ropes and knots easily, as they use them in daily livestock activity.
In addition, nylon ropes proved highly available in the study area, as they were found at two local stores in a variety of colors and sizes. Second, the portable shaking table demonstration successfully showed the effectiveness of the nylon rope mesh reinforcement and generated interest in learning about it. At the first workshop, more than 70% of the participants were willing to formally sign up for practical training lessons. At the second workshop, more than 80% of the participants returned on the second day to receive introductory practical training. Third, community members found the illustrations in the construction manual simple and friendly, but the roof system illustrations led to misinterpretation, so they were improved. The technological transfer tools developed in this project can be used to train rural dwellers in earthquake-resistant self-construction with adobe, which is still very common in the Peruvian Andes. This approach would allow community members to develop the skills and capacity to improve the safety of their households on their own, thus mitigating their high seismic risk and preventing tragic losses. Furthermore, proper training in earthquake-resistant self-construction with adobe would keep rural dwellers from depending on external aid after an earthquake and help them become agents of their own development.

Keywords: adobe, Peruvian Andes, safe housing, technological transference

Procedia PDF Downloads 293
230 Digital Twin for a Floating Solar Energy System with Experimental Data Mining and AI Modelling

Authors: Danlei Yang, Luofeng Huang

Abstract:

The integration of digital twin technology with renewable energy systems offers an innovative approach to predicting and optimising performance throughout the entire lifecycle. A digital twin is a continuously updated virtual replica of a real-world entity, synchronised with data from its physical counterpart and environment. Many digital twin companies today claim to have mature digital twin products, but their focus is primarily on equipment visualisation; the core of a digital twin, however, should be its model, which can mirror, shadow, and thread with the real-world entity, and this remains underdeveloped. For a floating solar energy system, a digital twin can be defined in three aspects: (a) the physical floating solar energy system along with environmental factors such as solar irradiance and wave dynamics, (b) a digital model powered by artificial intelligence (AI) algorithms, and (c) the integration of real system data with the AI-driven model and a user interface. The experimental setup for the floating solar energy system is designed to replicate the real-ocean conditions of floating solar installations within a controlled laboratory environment. The system consists of a water tank that simulates an aquatic surface, where a floating catamaran structure supports a solar panel. The solar simulator is set up in three positions: one directly above the panel and two inclined at a 45° angle in front of and behind it. This arrangement allows the simulation of different sun angles, such as sunrise, midday, and sunset. The solar simulator is positioned 400 mm away from the solar panel to maintain consistent solar irradiance on its surface. Stability of the floating structure is achieved through ropes attached to anchors at the bottom of the tank, which simulate the mooring systems used in real-world floating solar applications. The floating solar energy system's sensor setup includes various devices to monitor environmental and operational parameters.
An irradiance sensor measures solar irradiance on the photovoltaic (PV) panel. Temperature sensors monitor ambient air and water temperatures, as well as the PV panel temperature. Wave gauges measure wave height, while load cells capture mooring force. Inclinometers and ultrasonic sensors record the heave and pitch amplitudes of the floating system's motions. An electric load measures the voltage and current output from the solar panel. All sensors collect data simultaneously. Artificial neural network (ANN) algorithms are central to the digital model, which processes historical and real-time data, identifies patterns, and predicts the system's performance in real time. The data collected from the various sensors are partly used to train the digital model, with the remaining data reserved for validation and testing. The digital twin combines the experimental setup with the ANN model, enabling monitoring, analysis, and prediction of the floating solar energy system's operation. The digital model mirrors the functionality of the physical setup, running in sync with the experiment to provide real-time insights and predictions. It offers useful industrial benefits, such as informing maintenance plans as well as design and control strategies for optimal energy efficiency. In the long term, this digital twin will help improve the overall solar energy yield whilst minimising operational costs and risks.
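The sensor-to-prediction mapping described above can be sketched as a small feed-forward network. This is a minimal illustration under stated assumptions, not the authors' actual model: the inputs stand in for normalised sensor readings (irradiance, panel temperature, wave height) and the target for panel power output, all synthetic.

```python
import numpy as np

# Minimal one-hidden-layer regression network, trained by plain gradient
# descent on synthetic placeholder data. A real digital twin model would
# train on the tank experiment's logged sensor streams instead.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (200, 3))                     # normalised sensor inputs
y = 0.8 * X[:, 0] - 0.2 * X[:, 1] + 0.05 * X[:, 2]  # toy "power" target

W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr, n = 0.3, len(X)

for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                        # hidden activations
    pred = (h @ W2 + b2).ravel()                    # network output
    err = pred - y
    gW2 = h.T @ err[:, None] / n                    # backpropagated gradients
    gb2 = err.mean(keepdims=True)
    gh = err[:, None] @ W2.T * (1 - h ** 2)
    gW1 = X.T @ gh / n
    gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((pred - y) ** 2))
print(f"training MSE: {mse:.4f}")
```

In practice a fraction of the logged data would be held out for validation and testing, as the abstract describes, rather than evaluating on the training set as this sketch does.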

Keywords: digital twin, floating solar energy system, experiment setup, artificial intelligence

Procedia PDF Downloads 6
229 Diamond-Like Carbon-Based Structures as Functional Layers on Shape-Memory Alloy for Orthopedic Applications

Authors: Piotr Jablonski, Krzysztof Mars, Wiktor Niemiec, Agnieszka Kyziol, Marek Hebda, Halina Krawiec, Karol Kyziol

Abstract:

NiTi alloys, possessing unique mechanical properties such as pseudoelasticity and the shape memory effect (SME), are suitable for many applications, including implantology and biomedical devices. Additionally, these alloys have elastic moduli similar to those of human bones, which is very important in orthopedics. Unfortunately, the environment of physiological fluids in vivo causes unfavorable release of Ni ions, which in turn may lead to metallosis as well as allergic reactions and toxic effects in the body. For these reasons, the surface properties of NiTi alloys should be improved to increase corrosion resistance while preserving excellent biocompatibility. Promising in this respect are layers based on DLC (Diamond-Like Carbon) structures, which are an attractive solution for many applications in implantology. These DLC coatings, usually obtained by PVD (Physical Vapour Deposition) and PA CVD (Plasma Activated Chemical Vapour Deposition) methods, can also be modified by doping with other elements such as silicon, nitrogen, oxygen, fluorine, titanium and silver. These methods, in combination with a suitably designed layer structure, make it possible to tailor the physicochemical and biological properties of the modified surfaces, and they impart specific physicochemical surface properties in a single technological process. In this work, layers based on DLC structures (incl. Si-DLC or Si/N-DLC) are proposed as a prospective and attractive approach to the surface functionalization of a shape memory alloy. Nitinol substrates were modified under plasma conditions using RF CVD (Radio Frequency Chemical Vapour Deposition). The influence of the plasma treatment on the useful properties of the modified substrates was determined both after deposition of DLC layers doped with silicon and/or nitrogen atoms and after pre-treatment alone in an O2/NH3 plasma atmosphere in an RF reactor.
The microstructure and topography of the modified surfaces were characterized using scanning electron microscopy (SEM) and atomic force microscopy (AFM). Furthermore, the atomic structure of the coatings was characterized by IR and Raman spectroscopy. The research also included the evaluation of surface wettability and surface energy, as well as the characterization of selected mechanical and biological properties of the layers. In addition, the corrosion properties of the alloys before and after modification in physiological saline were investigated. In order to determine the corrosion resistance of NiTi in Ringer's solution, potentiodynamic polarization curves (LSV – Linear Sweep Voltammetry) were plotted. Furthermore, the evolution of the corrosion potential versus immersion time of the NiTi alloy in Ringer's solution was recorded. Based on all the research carried out, the usefulness of the proposed modifications of nitinol for medical applications was assessed. It was shown, inter alia, that the Si-DLC layers obtained on the surface of the NiTi alloy exhibit a characteristic complex microstructure and increased surface development, an important aspect in improving the osseointegration of an implant. Furthermore, the modified alloy exhibits biocompatibility, and the transfer of metals (Ni, Ti) to Ringer's solution is clearly limited.

Keywords: bioactive coatings, corrosion resistance, doped DLC structure, NiTi alloy, RF CVD

Procedia PDF Downloads 235
228 Probability Modeling and Genetic Algorithms in Small Wind Turbine Design Optimization: Mentored Interdisciplinary Undergraduate Research at LaGuardia Community College

Authors: Marina Nechayeva, Malgorzata Marciniak, Vladimir Przhebelskiy, A. Dragutan, S. Lamichhane, S. Oikawa

Abstract:

This presentation is a progress report on a faculty-student research collaboration at CUNY LaGuardia Community College (LaGCC) aimed at designing a small horizontal-axis wind turbine optimized for the wind patterns on the roof of our campus. Our project combines statistical and engineering research. Our wind modeling protocol is based upon a recent wind study by a faculty-student research group at MIT, and some of our blade design methods are adopted from a senior engineering project at CUNY City College. Our use of genetic algorithms has been inspired by David Wood's work on small wind turbine design. We combine these diverse approaches in our interdisciplinary project in a way that has not been done before, and we improve upon certain techniques used by our predecessors. We employ several estimation methods to determine the best-fitting parametric probability distribution model for the local wind speed data, obtained by correlating short-term on-site measurements with a long-term time series at the nearby airport. The model serves as a foundation for engineering research that focuses on adapting and implementing genetic algorithms (GAs) for optimization of the wind turbine design using Blade Element Momentum Theory. GAs are used to create new airfoils with desirable aerodynamic specifications. Small-scale models of the best-performing designs are 3D printed and tested in the wind tunnel to verify the accuracy of the relevant calculations. Genetic algorithms are applied to selected airfoils to determine the blade design (radial chord and pitch distribution) that optimizes the turbine's coefficient-of-power profile. Our approach improves upon traditional blade design methods in that it lets us dispense with the assumptions needed to simplify the system of Blade Element Momentum Theory equations, thus resulting in more accurate aerodynamic performance calculations.
Furthermore, it enables us to design blades optimized for a whole range of wind speeds rather than a single value. Lastly, we improve upon known GA-based methods in that our algorithms are constructed to work with XFoil-generated airfoil data, which enables us to optimize blades using our own high-glide-ratio airfoil designs without having to rely upon available empirical data from existing airfoils, such as the NACA series. Beyond its immediate goal, this ongoing project serves as a training and selection platform for the CUNY Research Scholars Program (CRSP) through its annual Aerodynamics and Wind Energy Research Seminar (AWERS), an undergraduate summer research boot camp designed to introduce prospective researchers to the relevant theoretical background and methodology, get them up to speed with the current state of our research, and test their abilities and commitment to the program. Furthermore, several aspects of the research (e.g., writing code for 3D printing of airfoils) are adapted as classroom research activities to enhance Calculus sequence instruction at LaGCC.
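The GA loop described above (selection, crossover and mutation over chord and pitch parameters) can be sketched as follows. The fitness function here is a hypothetical placeholder with an optimum at chord 0.12 m and pitch 8°; the actual project evaluates the power coefficient via Blade Element Momentum theory on XFoil-generated airfoil data.

```python
import random

random.seed(42)

def fitness(chord, pitch):
    # Hypothetical stand-in for a BEM power-coefficient evaluation;
    # peaks at chord = 0.12 m, pitch = 8 degrees.
    return -((chord - 0.12) ** 2) - ((pitch - 8.0) ** 2) / 100.0

def evolve(generations=60, pop_size=30):
    pop = [(random.uniform(0.05, 0.3), random.uniform(0.0, 15.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(*ind), reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # averaging crossover
            child[0] += random.gauss(0, 0.01)            # chord mutation
            child[1] += random.gauss(0, 0.5)             # pitch mutation
            children.append(tuple(child))
        pop = parents + children                       # elitist replacement
    return max(pop, key=lambda ind: fitness(*ind))

best_chord, best_pitch = evolve()
print(f"best chord {best_chord:.3f} m, pitch {best_pitch:.1f} deg")
```

A real blade optimization would evolve a chord and pitch value per radial station rather than a single pair, but the selection/crossover/mutation structure is the same.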

Keywords: engineering design optimization, genetic algorithms, horizontal axis wind turbine, wind modeling

Procedia PDF Downloads 231
227 3D Seismic Acquisition Challenges in the NW Ghadames Basin Libya, an Integrated Geophysical Sedimentological and Subsurface Studies Approach as a Solution

Authors: S. Sharma, Gaballa Aqeelah, Tawfig Alghbaili, Ali Elmessmari

Abstract:

During the acquisition of 2D (2007) and 3D (2021) seismic data in the northwest region of the Ghadames Basin, Libya, abrupt discontinuities appeared in the brute stack at the northernmost locations. In both campaigns, complete loss of fluid circulation was seen in these regions during up-hole drilling. Geophysics, sedimentology and shallow subsurface geology were integrated to investigate what was causing the seismic signal to disappear at shallow depths. The Upper Cretaceous Nalut Formation is the near-surface or surface formation in the study area. It is distinguished by abnormally high resistivity in all the neighboring wells. The Nalut Formation in all the nearby wells from the present study, together with a previous outcrop study, suggests a lithology of dolomite and chert/flint in nodular or layered forms. There are also reports of karstic caverns, vugs, and thick fractures, which together produce the high resistivity. Four up-hole samples analyzed for microfacies revealed a near-coastal to tidal environment. Monotonous, highly porous, algal (Chara)-infested deposits up to 30 feet thick are seen in two up-hole sediment sections; these are interpreted as scattered, continental algal travertine mounds. Varying amounts of chert/flint, dolomite, and calcite are confirmed by XRD analysis. The high resistivity of the Nalut Formation, thought to be connected to the sea-level drop that created the paleokarst layer, can be tracked regionally. It is abruptly overlain by a blanket marine transgressive deposit caused by rapid sea-level rise: a regional, relatively highly radioactive layer of argillaceous limestone. The study area's close proximity to the mountainous, E-W trending ridges of northern Libya facilitated recent freshwater circulation, which later enhanced cavern development and mineralization in the paleokarst layer.
Seismic signal loss at shallow depth is caused by the extremely heterogeneous mineralogy of the pore filling, or its absence. The scattering effect of a shallow karstic layer on the seismic signal is well documented. Higher-velocity inflection points at shallower depths in the northern part and at deeper intervals in the southern part, in both cases at the Nalut level, demonstrate the layer's influence on the seismic signal. During the Permian-Carboniferous, the Ghadames Basin underwent uplift and extensive erosion, which left this karstic layer of the Nalut Formation at a shallow depth in the northern part of the study area, weakening the acoustic signal, whereas in the southern part of the 3D acquisition area the Nalut Formation remained at a deeper interval without affecting the seismic signal. Steps taken during seismic processing to deal with this signal loss produced visible improvements. For comparable geological settings, this study recommends using denser spacing or dynamite sources to circumvent the karst layer and prevent signal loss at shallow depths.

Keywords: well logging, seismic data acquisition, seismic data processing, up-holes

Procedia PDF Downloads 85
226 Environmental Planning for Sustainable Utilization of Lake Chamo Biodiversity Resources: Geospatially Supported Approach, Ethiopia

Authors: Alemayehu Hailemicael Mezgebe, A. J. Solomon Raju

Abstract:

Context: Lake Chamo is a significant lake in the Ethiopian Rift Valley, known for its diversity of wildlife and vegetation. However, the lake faces various threats from human activities and global effects. Poor management of resources could lead to food insecurity, ecological degradation, and loss of biodiversity. Research Aim: The aim of this study is to analyze the environmental implications of lake level changes using GIS and remote sensing. The research also aims to examine the floristic composition of the lakeside vegetation and propose spatially oriented environmental planning for the sustainable utilization of the biodiversity resources. Methodology: The study utilizes multi-temporal satellite images and aerial photographs to analyze the changes in the lake area over the past 45 years. Geospatial analysis techniques are employed to assess land use and land cover changes and to build a change detection matrix. The composition of the lakeside vegetation and its role in ecological and hydrological functions are also examined. Findings: The analysis reveals that the lake has shrunk by 14.42% over the years, with significant modifications to its upstream segment. The study identifies various threats to the lake-wetland ecosystem, including changes in water chemistry, overfishing, and poor waste management. It also highlights the impact of human activities on the lake's limnology, with increases in conductivity, salinity, and alkalinity. Floristic composition analysis of the lake-wetland ecosystem showed a definite pattern of vegetation distribution. The vegetation can be broadly categorized into three belts: the herbaceous belt, the legume belt, and the bush-shrub-small-trees belt. Collectively, the vegetation belts act as a system of different-sized sieve screens that slows the influx of incoming foreign matter.
This stratified vegetation provides vital information for deciding the management interventions needed for the sustainability of the lake-wetland ecosystem. Theoretical Importance: The study contributes to the understanding of the environmental changes and threats faced by Lake Chamo. It provides insights into the impact of human activities on the lake-wetland ecosystem and emphasizes the need for sustainable resource management. Data Collection and Analysis Procedures: The study utilizes aerial photographs, satellite imagery, and field observations to collect data. Geospatial analysis techniques are employed to process and analyze the data, including land use/land cover changes and change detection matrices. Floristic composition analysis is conducted to assess the vegetation patterns. Question Addressed: The study addresses the question of how lake level changes and human activities impact the environmental health and biodiversity of Lake Chamo. It also explores the potential opportunities and threats related to water utilization and waste management. Conclusion: The study recommends the implementation of spatially oriented environmental planning to ensure the sustainable utilization and maintenance of Lake Chamo's biodiversity resources. It emphasizes the need for proper waste management, improved irrigation facilities, and a buffer zone with specific vegetation patterns to restore and protect the lake's outskirts.
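The change detection matrix mentioned in the methodology is a cross-tabulation of classified rasters from two dates: each cell counts the pixels that moved from one land-cover class to another. A minimal sketch with synthetic class maps standing in for the classified satellite images:

```python
import numpy as np

# Synthetic classified rasters for two dates; class codes are placeholders
# for whatever scheme the real land-cover classification uses.
classes = ["water", "vegetation", "bare land"]
rng = np.random.default_rng(7)
t1 = rng.integers(0, 3, (50, 50))   # classified image, date 1
t2 = rng.integers(0, 3, (50, 50))   # classified image, date 2

n = len(classes)
matrix = np.zeros((n, n), dtype=int)
for a, b in zip(t1.ravel(), t2.ravel()):
    matrix[a, b] += 1               # rows: from-class, cols: to-class

unchanged = int(np.trace(matrix))   # diagonal = pixels that kept their class
print(matrix)
print(f"unchanged pixels: {unchanged} of {t1.size}")
```

Off-diagonal cells quantify specific transitions (e.g. vegetation to bare land), which is what makes the matrix useful for diagnosing where and how the lake-wetland ecosystem is changing.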

Keywords: buffer zone, geo-spatial, Lake Chamo, lake level changes, sustainable utilization

Procedia PDF Downloads 87
225 Redox-labeled Electrochemical Aptasensor Array for Single-cell Detection

Authors: Shuo Li, Yannick Coffinier, Chann Lagadec, Fabrizio Cleri, Katsuhiko Nishiguchi, Akira Fujiwara, Soo Hyeon Kim, Nicolas Clément

Abstract:

The need for single-cell detection and analysis techniques has increased over the past decades because the heterogeneity of individual living cells adds to the complexity of the pathogenesis of malignant tumors. In the search for early cancer detection and high-precision medicine and therapy, the technologies most used today for sensitive detection of target analytes and for monitoring their variation fall mainly into two types. One is based on identifying molecular differences at the single-cell level, such as flow cytometry, fluorescence-activated cell sorting, next-generation proteomics, and lipidomic studies; the other is based on capturing or detecting single tumor cells from fresh or fixed primary tumors and metastatic tissues, and rare circulating tumor cells (CTCs) from blood or bone marrow, for example, the dielectrophoresis technique, microfluidic micropost-based chips, and electrochemical (EC) approaches. Compared to other methods, EC sensors have the merits of easy operation, high sensitivity, and portability. However, despite various demonstrations of low limits of detection (LOD), including aptamer sensors, arrayed EC sensors for detecting single cells have not been demonstrated. This work introduces a new technique based on a 20-nm-thick nanopillar array that supports cells and keeps them at the ideal recognition distance for redox-labeled aptamers grafted on the surface. The key advantages of this technology are not only suppressing the false-positive signal arising from the downward pressure that all cells (including non-target cells) exert on the aptamers, but also stabilizing the aptamers in the ideal hairpin configuration thanks to a confinement effect. With the first implementation of this technique, an LOD of 13 cells (with 5.4 μL of cell suspension) was estimated.
Subsequently, the nanosupported cell technology using redox-labeled aptasensors was pushed further and fully integrated into a single-cell electrochemical aptasensor array. To reach this goal, the LOD was reduced by more than one order of magnitude by suppressing parasitic capacitive electrochemical signals, minimizing the sensor area, and localizing the cells. Statistical analysis at the single-cell level is demonstrated for the recognition of cancer cells. The future of this technology is discussed, and the potential for scaling to millions of electrodes, thus pushing integration further to the sub-cellular level, is highlighted. Despite several demonstrations of electrochemical devices with an LOD of 1 cell/mL, the implementation of single-cell bioelectrochemical sensor arrays has remained elusive due to their challenging implementation at large scale. Here, the introduced nanopillar array technology combined with redox-labeled aptamers targeting the epithelial cell adhesion molecule (EpCAM) is perfectly suited for such implementation. Combining nanopillar arrays with microwells designed for single-cell trapping directly on the sensor surface, single target cells are successfully detected and analyzed. This first implementation of a single-cell electrochemical aptasensor array based on Brownian-fluctuating redox species opens new opportunities for large-scale implementation and statistical analysis of early cancer diagnosis and cancer therapy in clinical settings.

Keywords: bioelectrochemistry, aptasensors, single-cell, nanopillars

Procedia PDF Downloads 117
224 National Core Indicators - Aging and Disabilities: A Person-Centered Approach to Understanding Quality of Long-Term Services and Supports

Authors: Stephanie Giordano, Rosa Plasencia

Abstract:

In the USA, in 2013, public service systems such as Medicaid, aging, and disability systems undertook an effort to measure the quality of service delivery by examining the experiences and outcomes of those receiving public services. The goal was to develop a survey measuring those experiences and outcomes so that system performance could be assessed for quality improvement. The performance indicators were developed with input from directors of state aging and disability service systems, along with experts and stakeholders in the field across the United States. This effort, National Core Indicators – Aging and Disabilities (NCI-AD), grew out of National Core Indicators – Intellectual and Developmental Disabilities, an effort to measure developmental disability (DD) systems across the states. The survey tool and administration protocol underwent multiple rounds of testing and revision between 2013 and 2015. The measures in the final tool – called the Adult Consumer Survey (ACS) – emphasize not just important indicators of healthcare access and personal safety but also indicators of system quality based on person-centered outcomes. These measures indicate whether service systems support older adults and people with disabilities to live where they want, maintain relationships, engage in their communities, and have choice and control in their everyday lives. Launched in 2015, the NCI-AD Adult Consumer Survey is now used in 23 US states. Surveys are conducted by NCI-AD-trained surveyors via direct conversation with a person receiving public long-term services and supports (LTSS). Until 2020, surveys were conducted only in person; however, after a pilot tested the reliability of videoconference and telephone survey modes, these were adopted as acceptable practice.
The survey is administered as a "guided conversation", allowing the surveyor to use wording and terminology best understood by the person surveyed. The survey includes a subset of questions that may be answered by a proxy respondent who knows the person well if the person receiving services is unable to provide valid responses on their own. Surveyors undergo standardized training on survey administration to ensure fidelity. In addition to the main survey section, a Background Information section collects data on the personal and service-related characteristics of the person receiving services; these data are typically collected from state administrative records. This information helps provide greater context around the characteristics of people receiving services. It has also been used in conjunction with outcome measures to examine disparities (including by race and ethnicity, gender, disability, and living arrangement). These quality measures are critical for public service delivery systems seeking to understand the unique needs of older adults and to improve the lives of older adults and people with disabilities. Participating states may use these data to identify areas for quality improvement within their service delivery systems, to advocate for specific policy changes, and to better understand the experiences of specific populations of people served.

Keywords: quality of life, long term services and supports, person-centered practices, aging and disability research, survey methodology

Procedia PDF Downloads 120
223 Crisis In/Out, Emergent, and Adaptive Urban Organisms

Authors: Alessandra Swiny, Michalis Georgiou, Yiorgos Hadjichristou

Abstract:

This paper focuses on the questions raised through the work of Unit 5, 'In/Out of crisis, emergent and adaptive', an architectural research-based studio at the University of Nicosia. The Unit focuses on sustainable architectural and urban explorations tackling the ever-growing crises in their various types, phases and locations. 'Great crisis situations' are seen as 'great chances' that trigger investigations for further development and evolution of the built environment in an ultimately sustainable approach. The crisis is taken as an opportunity to rethink urban and architectural directions, with new forces for invention leading to emergent and adaptive built environments. Unit 5's identity and environment help the students respond optimistically, alternatively and creatively to the current global crisis. Mark Wigley's notion that 'crises are ultimately productive' and that 'they force invention' intrigued and defined the premises of the Unit. 'Weather and nature are coauthors of the built environment', Jonathan Hill states in his 'weather architecture' discourse. The weather is constantly changing, and new environments, the 'subnatures' derived from human activities, are created, as David Gissen explains. This set of premises triggered innovative responses by the Unit's students. They thoroughly investigated the various kinds of crisis and their causes in relation to various types of terrain. The tools used for research and investigation were chosen in contrasting pairs to generate further crisis situations: the re-used/salvaged competing with the new, the handmade rivalling fabrication, the analogue juxtaposed with the digital. Students were asked to delve into state-of-the-art technologies in order to propose sustainable, emergent and adaptive architectures and urbanities, always keeping in mind that the human and social aspects of the community should be the core of the investigation.
The resulting unprecedented spatial conditions and atmospheres of the emergent new ways of living are the ultimate aim of the investigation. Students explored a variety of sites and crisis conditions, such as the vague terrain of the Green Line in Nicosia, the lost footprints of sinking Venice, the endangered Australian coral reefs, the earthquake-torn town of Crevalcore, and the decaying concrete urbanscape of Athens. Among other projects, 'the plume project' proposes a cloud-like, floating and almost dream-like living environment with unprecedented spatial conditions for the inhabitants of the coal-mine town of Centralia, USA, enabling them not just to survive but even to prosper in this unbearable environment by processing the captured plumes of smoke and heat. Existing water wells inspire inverted vertical structures creating a new underground living network that protects nomads from catastrophic sandstorms in Araouane, Mali. 'Inverted utopia: Lost things in the sand' weaves a series of tea-houses and a library holding lost artifacts and transcripts into a complex underground labyrinth through sand solidification technology. Within this methodology, crisis is seen as a mechanism that allows the emergence of new and fascinating, ultimately sustainable future cultures and cities.

Keywords: adaptive built environments, crisis as opportunity, emergent urbanities, forces for inventions

Procedia PDF Downloads 429
222 Mental Health Promotion for Children of Mentally Ill Parents in Schools: Assessment and Promotion of Teacher Mental Health Literacy in Order to Promote Child-Related Mental Health (Teacher-MHL)

Authors: Dirk Bruland, Paulo Pinheiro, Ullrich Bauer

Abstract:

Introduction: Over 3 million children in Germany, about one quarter of all students, experience at least one parent with a mental disorder every year. Children of mentally ill parents are at considerably higher risk of developing serious mental health problems, and their different burden patterns and coping attempts often become manifest in their school lives. In this context, schools can have an important protective function, but can also create risk potentials. Following Jorm, pupil-related teachers’ mental health literacy (Teacher-MHL) includes the ability to recognize behavioural change, knowledge of risk factors, the implementation of first-aid interventions, and seeking professional help (the teacher as gatekeeper). Although teachers’ knowledge and increased awareness of this topic are essential, the literature provides little information on the extent of teachers’ abilities. As part of a Germany-wide research consortium on health literacy, this three-year project, launched in March, will conduct evidence-based mental health literacy research. The primary objective is to measure Teacher-MHL in the context of pupil-related psychosocial factors at primary and secondary schools (grades 5 and 6), while also focusing on children’s social living conditions. Methods: (1) A systematic literature review in different databases to identify papers regarding Teacher-MHL (completed). (2) Based on these results, an interview guide was developed; this step includes a qualitative pre-study to inductively survey the general profiles of teachers (n=24), the evaluation of which will be presented at the conference. (3) These findings will be translated into a quantitative teacher survey (n=2500) in order to assess the extent of teachers’ socio-analytical skills in relation to institutional and individual characteristics. (4) Based on results 1–3, a training program for teachers will be developed. 
Results: The review highlights a lack of information on Teacher-MHL and the associated skills, especially in relation to high-risk groups such as children of mentally ill parents; the literature is limited to a few studies only. According to these, teachers are not good at identifying burdened children, and when they do identify such children they do not know how to handle the situation in school. They are not sufficiently trained to deal with these children, and in particular there is great uncertainty in handling the teaching situation. Institutional means and resources are missing as well. Such a mismatch can result in insufficient support and missed opportunities for children at risk. First impressions from the interviews confirm these results and allow greater insight into everyday school life in relation to critical life events in families. Conclusions: For the first time, schools will be addressed as a setting where children are especially ‘accessible’ for health promotion measures. Addressing Teacher-MHL gives reason to expect high effectiveness: targeting professionals’ abilities to deal with this high-risk group relieves teachers themselves in handling such situations and strengthens school health promotion. In view of the fact that only 10–30% of such high-risk families accept offers of therapy and assistance, this will be the first primary preventive and health-promoting approach to protect the health of a yet unaffected, but particularly burdened, high-risk group.

Keywords: children of mentally ill parents, health promotion, mental health literacy, school

Procedia PDF Downloads 544
221 A Proposed Framework for Better Managing Small Group Projects on an Undergraduate Foundation Programme at an International University Campus

Authors: Sweta Rout-Hoolash

Abstract:

Each year, selected students from around 20 countries begin their degrees at Middlesex University with the International Foundation Program (IFP), developing the skills required for academic study at a UK university. The IFP runs for 30 learning/teaching weeks at Middlesex University Mauritius Branch Campus, an international campus of the UK’s Middlesex University. Successful IFP students join their degree courses already settled into life at their chosen campus (London, Dubai, Mauritius or Malta) and confident that they understand what is required for degree study. Although part of the School of Science and Technology, in Mauritius the IFP prepares students for undergraduate study across all Schools represented on campus, including disciplines such as Accounting, Business, Computing, Law, Media and Psychology. The researcher critically reviewed the framework and resources in the curriculum for a particular six-week period of IFP study (the dedicated group work phase). Despite working together closely for 24 weeks, IFP students approach the final six-week small-group project phase with largely inhibited feelings, and it was observed that students did not engage effectively in the group work exercise. Additionally, groups that seemed to be working well did not necessarily produce results reflecting effective collaboration, nor individual results that improved on members’ prior efforts. The researcher therefore identified scope for change and innovation in the IFP curriculum and in how group work is introduced and facilitated. The study explores the challenges of group work in the context of the Mauritius campus, though the implications of the project are clearly not restricted to one campus only. The presentation offers a reflective review, from both the student and tutor perspectives, of the structure previously put in place for managing small-group assessed projects on the programme. 
The research focuses on the student voice, taking into consideration past and present IFP students’ experiences as written in their learning journals. Further, it proposes a revised framework to help students take greater ownership of the group work process and so engage more effectively with the learning outcomes of this crucial phase of the programme. The study critically reviews recent and seminal literature on how to achieve greater student ownership during this phase, especially in an environment of assessed multicultural group work, and proposes several new approaches for encouraging students to take more control of the collaboration process. Detailed consideration is given to how the proposed changes affect the work of other stakeholders, or partners in student learning. Clear proposals are laid out for evaluating the different approaches to be implemented during the upcoming academic year (through students’ own submitted reflections, focus group interviews, and assessment results). The proposals presented are all realistic and have the potential to transform students’ learning. Furthermore, the study engages with the UK Professional Standards Framework for teaching and supporting learning in higher education, and demonstrates practice at the level of Fellow of the Higher Education Academy (HEA).

Keywords: collaborative peer learning, enhancing learning experiences, group work assessment, learning communities, multicultural diverse classrooms, studying abroad

Procedia PDF Downloads 327
220 About the State of Students’ Career Guidance in the Conditions of Inclusive Education in the Republic of Kazakhstan

Authors: Laura Butabayeva, Svetlana Ismagulova, Gulbarshin Nogaibayeva, Maiya Temirbayeva, Aidana Zhussip

Abstract:

Over the years of independence, Kazakhstan has not only ratified international documents regulating children’s rights to inclusive education, but has also developed its own inclusive educational policy. Along with this, the state pays particular attention to high school students’ preparedness for professional self-determination. However, a number of problematic issues in this field have been revealed, such as the lack of systemic mechanisms coordinating stakeholders’ actions in preparing schoolchildren for a conscious choice of an in-demand profession that meets their individual capabilities and special educational needs (SEN). Analysis of the current situation indicates that school graduates’ adaptation to the labor market does not meet the existing demands of society. According to the Ministry of Labor and Social Protection of the Population of the Republic of Kazakhstan, about 70% of Kazakhstani school graduates find it difficult to choose a profession, 87% of schoolchildren make their career choice under the influence of parents and school teachers, and 90% of schoolchildren and their parents have no idea about the professions most in demand on the market. A study conducted by Korlan Syzdykova in 2016 indicated the urgent need of Kazakhstani school graduates for extensive information about in-demand professions and for professional assistance in choosing a profession in accordance with their individual skills, abilities, and preferences. A survey conducted by the Information and Analytical Center among heads of colleges in 2020 showed that, despite significant steps in creating conditions for students with SEN, such students face challenges in studying because of the poor career guidance provided to them in schools. A study conducted by the Center for Inclusive Education of the National Academy of Education named after Y. Altynsarin in the state’s general education schools in 2021 demonstrated the lack of career guidance and of pedagogical and psychological support for children with SEN. To investigate these issues, a further study was conducted to examine the state of students’ career guidance and socialization, taking their SEN into account. The hypothesis of this study proposed that, to prepare school graduates for a conscious career choice, school teachers and specialists need to develop their competencies in the early identification of students’ interests, inclinations and SEN, and to ensure the necessary support for them. Five regions of the country were involved in the study, selected according to geographical location. A triangulation approach was utilized to ensure the credibility and validity of the research findings, combining theoretical methods (analysis of existing statistical data, legal documents, and results of previous research) with empirical ones (a school survey of students; interviews with parents, teachers, and representatives of school administration). The data were analyzed independently and compared with each other. The survey included questions related to the provision of pedagogical support for school students in making their career choice. Ethical principles were observed in developing the methodology and in collecting, analyzing, and disseminating the results. Based on the results, methodological recommendations on students’ career guidance were developed for school teachers and specialists, taking into account students’ individual capabilities and SEN.

Keywords: career guidance, children with special educational needs, inclusive education, Kazakhstan

Procedia PDF Downloads 172
219 Solar and Galactic Cosmic Ray Impacts on Ambient Dose Equivalent Considering a Flight Path Statistic Representative of World Traffic

Authors: G. Hubert, S. Aubry

Abstract:

The Earth is constantly bombarded by cosmic rays of either galactic or solar origin, and humans are consequently exposed to elevated levels of galactic radiation at aircraft altitudes. The typical total ambient dose equivalent for a transatlantic flight is about 50 μSv during quiet solar activity. By contrast, estimates of the contribution induced by certain solar particle events differ by an order of magnitude. During a Ground Level Enhancement (GLE) event, the Sun can emit particles of sufficient energy and intensity to raise radiation levels on the Earth's surface. Analyses of the GLEs that have occurred since 1942 show that for the worst of them the dose level is of the order of 1 mSv or more. The largest of these events was observed in February 1956, for which the ambient dose equivalent rate was of the order of 10 mSv/h; the extra dose at aircraft altitudes for a flight during this event might have been about 20 mSv, i.e. comparable with the annual limit for aircrew. The most recent GLE occurred in September 2017, resulting from an X-class solar flare, and was measured on the surfaces of both the Earth and Mars using the Radiation Assessment Detector on the Mars Science Laboratory's Curiosity rover. Recently, Hubert et al. proposed a GLE model included in a particle transport platform (named ATMORAD) that describes extensive air shower characteristics and allows assessment of the ambient dose equivalent. In this approach, the galactic cosmic ray (GCR) contribution is based on the force-field approximation model, while the physical description of the solar cosmic ray (SCR) contribution considers the primary differential rigidity spectrum and the distribution of primary particles at the top of the atmosphere. ATMORAD determines the spectral fluence rates of secondary particles induced by extensive showers over altitudes ranging from ground level to 45 km, and the ambient dose equivalent can then be determined using fluence-to-ambient-dose-equivalent conversion coefficients. 
The objective of this paper is to analyze the GCR and SCR impacts on ambient dose equivalent for a large, statistically representative set of world flight paths. Flight trajectories are based on the Eurocontrol Demand Data Repository (DDR) and consider realistic flight plans, with and without regulations, or updated with radar data from the CFMU (Central Flow Management Unit). The final paper will present exhaustive analyses of solar impacts on ambient dose equivalent levels, with detailed consideration of route and airplane characteristics (departure, arrival, continent, airplane type, etc.) and of the phasing of the solar event. Preliminary results show an important impact of the flight path, particularly the latitude, which drives the cutoff rigidity variations. Moreover, dose values vary drastically during GLE events, on the one hand with the route path (latitude, longitude, altitude), and on the other hand with the phasing of the solar event. Considering the GLE that occurred on 23 February 1956, the average ambient dose equivalent evaluated for a Paris–New York flight is around 1.6 mSv, which is consistent with previous works. This highlights the importance of monitoring these solar events and of developing semi-empirical and particle transport methods to obtain reliable calculations of dose levels.
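To make the dose bookkeeping concrete, the folding of a secondary-particle fluence spectrum with fluence-to-ambient-dose-equivalent conversion coefficients described above can be sketched as follows. All bin structures, fluence rates and coefficient values here are illustrative placeholders, not outputs of ATMORAD or published coefficient tables.

```python
# Sketch: converting a binned secondary-particle fluence-rate spectrum into an
# ambient dose equivalent rate H*(10) by folding it with fluence-to-dose
# conversion coefficients, as done (in far greater detail) inside particle
# transport platforms such as ATMORAD. All numbers below are made up.

def ambient_dose_equivalent_rate(fluence_rate, conversion_coeff):
    """Fold a binned fluence-rate spectrum [particles/cm^2/s per bin] with
    fluence-to-H*(10) conversion coefficients [pSv*cm^2 per particle]."""
    assert len(fluence_rate) == len(conversion_coeff)
    return sum(f * c for f, c in zip(fluence_rate, conversion_coeff))  # pSv/s

# Illustrative secondary-neutron spectrum in three energy bins at cruise altitude
fluence_rate = [0.12, 0.30, 0.08]         # particles/cm^2/s per bin (placeholder)
conversion_coeff = [100.0, 400.0, 250.0]  # pSv*cm^2 per particle (placeholder)

rate_psv_s = ambient_dose_equivalent_rate(fluence_rate, conversion_coeff)
dose_usv = rate_psv_s * 8 * 3600 * 1e-6   # accumulated over an 8 h flight, in uSv
```

Integrating such a rate along an actual trajectory, with the spectrum re-evaluated as latitude (cutoff rigidity) and altitude change, is what drives the route dependence reported above.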

Keywords: cosmic ray, human dose, solar flare, aviation

Procedia PDF Downloads 205
218 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence

Authors: Gus Calderon, Richard McCreight, Tammy Schwartz

Abstract:

Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort has had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scale requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high-spatial-resolution (5 in. ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Proprietary algorithms determine the vegetation type, condition, and proximity to structures for the 1,851 properties in the community. Secondary data processing applies object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community are divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually using repeat-station imaging of fixed GPS locations to track changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts among homeowners in Rancho Santa Fe from 40% to over 85%. 
To assist homeowners fighting increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners’ understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and of the elements that can be mitigated. Geospatial data from FireWatch’s defensible space maps were combined with 39 other risk characteristics using Black Swan’s patented approach to produce a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category on which residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only way to break the cycle of community wildfire destruction, and it starts with high-quality data and education.
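The spectral vegetation index maps mentioned above are typically built from simple band arithmetic on the multispectral imagery. The abstract does not name the specific index or bands FireWatch uses; the widely used NDVI is shown here as a representative example, with made-up reflectance values.

```python
# Sketch of a common spectral vegetation index (NDVI) of the kind used to map
# vegetative fuel condition from multispectral imagery. The index choice and
# the reflectance values are illustrative assumptions, not FireWatch's method.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

# Healthy, well-watered vegetation reflects strongly in the near-infrared:
healthy = ndvi(nir=0.50, red=0.08)  # high NDVI, vigorous canopy
# Dry, cured fuels reflect comparatively more red light:
cured = ndvi(nir=0.30, red=0.22)    # low NDVI, higher fire hazard
```

Thresholding such an index per pixel, then aggregating by parcel, is one plausible route from raw orthomosaics to the per-property condition maps described above.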

Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk

Procedia PDF Downloads 108
217 Micro-Oculi Facades as a Sustainable Urban Facade

Authors: Ok-Kyun Im, Kyoung Hee Kim

Abstract:

We live in an era facing the global challenges of climate change and resource depletion. With rapid urbanization and growing energy consumption in the built environment, building facades become ever more important in architectural practice and environmental stewardship. Furthermore, building facades undergo complex dynamics of social, cultural, environmental and technological change. Kinetic facades have drawn the attention of architects, designers, and engineers in the field of adaptable, responsive and interactive architecture since the 1980s, and materials and building technologies have gradually evolved to address their technical implications. The kinetic façade is becoming an independent building system, transforming design methodology toward sustainable building solutions. Accordingly, there is a need for a new design methodology to guide the design of a kinetic façade and evaluate its sustainable performance. The research objectives are two-fold: first, to establish a new design methodology for kinetic facades and, second, to develop a micro-oculi façade system and assess its performance using the established method. The design approach to the micro-oculi façade comprises (1) façade geometry optimization and (2) dynamic building energy simulation. The façade geometry optimization utilizes a multi-objective optimization process, aiming to balance quantitative and qualitative performances to address the sustainability of the built environment. The dynamic building energy simulation was carried out using the EnergyPlus and Radiance simulation engines with scripted interfaces. The micro-oculi office was compared with an office tower with a glass façade conforming to ASHRAE 90.1-2013 to understand its energy efficiency. The micro-oculi façade is constructed with an array of circular frames, each attached to a pair of micro-shades called a micro-oculus. 
The micro-oculi are encapsulated between two glass panes to protect the kinetic mechanisms and ensure longevity. Each micro-oculus incorporates rotating gears that transmit power to adjacent micro-oculi so as to minimize the number of mechanical parts, and rotates around its center axis in 15° steps depending on the sun’s position, while maximizing daylighting potential and views out. A 2 ft by 2 ft prototype was built to identify operational challenges and material implications of the micro-oculi façade. In this research, a systematic design methodology was proposed that integrates the multiple objectives of kinetic façade design criteria and whole-building energy performance simulation within a holistic design process. This methodology is expected to encourage multidisciplinary collaboration between designers and engineers on issues of energy efficiency, daylighting performance and user experience during design phases. The preliminary energy simulation indicated that, compared with a glass façade, the micro-oculi façade showed energy savings due to its improved thermal properties, daylighting attributes, and dynamic solar performance across the day and the seasons. It is expected that the micro-oculi façade provides a cost-effective, environmentally friendly, sustainable, and aesthetically pleasing alternative to glass facades. Recommendations for future studies include lab testing to validate the simulated energy and optical properties of the micro-oculi façade; a 1:1 performance mock-up could provide in-depth understanding of long-term operability and new development opportunities for urban façade applications.
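The 15° stepping described above implies that a continuous sun-tracking angle must be quantized to the nearest mechanical step. The control logic below is a minimal sketch of that behaviour under our own assumptions, not the authors' actual implementation.

```python
# Sketch: snapping a desired sun-tracking angle to the micro-oculus's
# 15-degree step size. The step size comes from the abstract; everything
# else (function name, azimuth-based control) is an illustrative assumption.

STEP_DEG = 15

def oculus_angle(sun_azimuth_deg):
    """Return the oculus rotation quantized to the nearest 15-degree step."""
    return round(sun_azimuth_deg / STEP_DEG) * STEP_DEG

oculus_angle(38.0)   # -> 45
oculus_angle(172.4)  # -> 165
```

A coarse step size like this trades some tracking precision for far simpler gearing, which is consistent with the stated goal of minimizing the number of mechanical parts.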

Keywords: energy efficiency, kinetic facades, sustainable architecture, urban facades

Procedia PDF Downloads 257
216 Secure Texting Used in a Post-Acute Pediatric Skilled Nursing Inpatient Setting: A Multidisciplinary Care Team Driven Communication System with Alarm and Alert Notification Management

Authors: Bency Ann Massinello, Nancy Day, Janet Fellini

Abstract:

Background: Choosing an appropriate mode of communication among multidisciplinary care team members for coordinating care is a complicated yet important patient safety initiative. Effective communication among team members (nursing staff, medical staff, respiratory therapists, rehabilitation therapists, the patient-family services team, etc.) is essential to develop a culture of trust and collaboration and to deliver the highest quality care to patients and their families. The inpatient post-acute pediatric setting, where children and their caregivers come for continuity of care, is no exception to the increasing use of text messages as a means of communication among clinicians. One such platform, Vocera Edge (the Vocera Communications smart mobile app), allows teams to share sensitive patient information through an encrypted platform using company-provided shared and assigned iOS mobile devices. Objective: This paper discusses the quality initiative of transitioning from the Vocera Smartbadge to the Vocera Edge mobile app, its technology advantages, use case expansion, and lessons learned about a secure alternative modality for sending and receiving secure text messages in a pediatric post-acute setting using an iOS device. The implementation process included all direct care staff, ancillary teams, and administrative teams on the clinical units. Methods: Our institution launched the transition from the voice-prompted, hands-free Vocera Smartbadge to the Vocera Edge mobile app for secure care team texting using a big-bang approach during the first PDSA cycle. Pre- and post-implementation data were gathered through a qualitative survey of about 500 multidisciplinary team members to determine the ease of use of the application and its efficiency in care coordination. 
The technology was further expanded by implementing clinical alert and alarm notifications using middleware integration with the patient monitoring (Masimo) and life safety (nurse call) systems. Additional uses of the smart mobile iPhones included pushing out apps such as Lexicomp and UpToDate so that they are readily available to users for evidence-based practice in medication and disease management. Results: The communication system was successfully implemented in a shared and assigned model with all of the multidisciplinary teams in our pediatric post-acute setting. In just a 3-month period post-implementation, we observed a 14% increase, from 7,993 messages over 6 days in December 2020 to 9,116 messages in March 2021, confirming that all clinical and non-clinical teams were using this mode of communication to coordinate care for their patients. System-generated data analytics were used alongside the pre- and post-implementation staff survey for process evaluation. Conclusion: A secure texting option on a mobile device is a safe and efficient mode of real-time care team communication and collaboration. It allows settings such as post-acute pediatric care areas to keep pace with the widespread use of mobile apps and technology in mainstream healthcare.
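The reported growth figure checks out arithmetically, as the short calculation below shows using only the counts quoted above.

```python
# Sanity check of the reported secure-text volume growth: 7,993 messages in a
# 6-day window (December 2020) versus 9,116 messages in a comparable window
# (March 2021), which the abstract describes as a 14% increase.
before, after = 7_993, 9_116
pct_increase = (after - before) / before * 100  # about 14.05
```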

Keywords: nursing informatics, mobile secure texting, multidisciplinary communication, pediatric post-acute care

Procedia PDF Downloads 196
215 Embodied Empowerment: A Design Framework for Augmenting Human Agency in Assistive Technologies

Authors: Melina Kopke, Jelle Van Dijk

Abstract:

Persons with cognitive disabilities, such as autism spectrum disorder (ASD), are often dependent on some form of professional support. Recent transformations in Dutch healthcare have spurred institutions to apply new, empowering methods and tools to enable their clients to cope (more) independently in daily life. Assistive Technologies (ATs) seem promising as empowering tools. While ATs can, functionally speaking, help people to perform certain activities without human assistance, we hold that, from a design-theoretical perspective, such technologies often fail to empower in a deeper sense: most serve either to prescribe or to monitor users’ actions, which in some sense objectifies users rather than strengthening their agency. This paper proposes that theories of embodied interaction can help formulate a design vision in which interactive assistive devices augment, rather than replace, human agency and thereby add to a person’s empowerment in daily life settings. It aims to close the gap between empowerment theory and the opportunities provided by assistive technologies by showing how embodiment and empowerment theory can be applied in practice in the design of new, interactive assistive devices. Taking a research-through-design approach, we conducted a case study of designing support for independently living people with ASD in structuring daily activities. In three iterations, we interlaced design action, active involvement and prototype evaluations with future end-users and healthcare professionals, and theoretical reflection. Our co-design sessions revealed that the issue of handling daily activities is multidimensional: not being able to self-manage one’s daily life has immense consequences for one’s self-image, and also has major effects on the relationship with professional caregivers. 
Over the course of the project, relevant theoretical principles of both embodiment and empowerment theory, together with user insights, informed our design decisions. This resulted in a system of wireless light units that users can program as reminders for tasks, but also use to record and reflect on their actions. The iterative process helped to gradually refine and reframe our growing understanding of what it concretely means for a technology to empower a person in daily life. Drawing on the case study insights, we propose a set of concrete design principles that together form what we call the embodied empowerment design framework. The framework includes four main principles: enabling ‘reflection-in-action’; making information ‘publicly available’ in order to enable co-reflection and social coupling; enabling the implementation of shared reflections into an ‘endurable external feedback loop’ embedded in the person’s familiar ‘lifeworld’; and nudging situated actions with self-created action-affordances. In essence, the framework aims for the self-development of a suitable routine, or ‘situated practice’, building on a growing shared insight into what works for the person. The framework, we propose, may serve as a starting point for AT designers to create truly empowering interactive products. In a set of follow-up projects involving the participation of persons with ASD, intellectual disabilities, dementia and acquired brain injury, the framework will be applied, evaluated and further refined.

Keywords: assistive technology, design, embodiment, empowerment

Procedia PDF Downloads 278
214 Service Blueprinting: A New Application for Evaluating Service Provision in the Hospice Sector

Authors: L. Sudbury-Riley, P. Hunter-Jones, L. Menzies, M. Pyrah, H. Knight

Abstract:

Just as manufacturing firms aim for zero defects, service providers strive to avoid service failures where customer expectations are not met. However, because services comprise unique human interactions, service failures are almost inevitable, so firms focus on service recovery strategies to fix problems and retain their customers for the future. Because a hospice offers care to terminally ill patients, it may never get the opportunity to correct a service failure. This makes identifying what hospice users really need and want, and ascertaining perceptions of the hospice’s service delivery from the user’s perspective, even more important than for other service providers. A well-documented and fundamental barrier to improving end-of-life care is the lack of service quality measurement tools that capture users’ experiences from their own perspective. In palliative care, the many quantitative measures in use focus on issues such as how quickly patients are assessed, whether they receive information leaflets, whether a discussion about their emotional needs is documented, and so on; quality of service from the user’s perspective is thereby overlooked. The current study was designed to overcome these limitations by adapting service blueprinting, never before used in the hospice sector, to undertake a ‘deep dive’ examining the impact of hospice services upon different users. Service blueprinting is a customer-focused approach to service innovation and improvement, in which the ‘onstage’ visible interactions between service user and provider must be supported by ‘backstage’ employee actions and support processes. The study was conducted in conjunction with East Cheshire Hospice in England. The Hospice provides specialist palliative care for patients with progressive life-limiting illnesses, offering services to patients, carers and families via inpatient and outpatient units. 
Using service blueprinting to identify every service touchpoint, in-depth qualitative interviews with 38 inpatients, outpatients, visitors and bereaved family members enabled a ‘deep dive’ into perceptions of the whole service experience among these diverse users. Interviews were recorded and transcribed, and thematic analysis of over 104,000 words of data revealed many excellent aspects of the Hospice’s service: staff frequently exceed people’s expectations, striking and gratifying comparisons to hospitals emerged, and the Hospice makes people feel safe. Nevertheless, the technique uncovered many areas for improvement, including the serendipitous nature of referral processes, the need for better communications with external agencies, the daunting arrival and admissions process, a pressing need for more depression counselling, clarity of communication pertaining to the actual end of life, and shortcomings in the systems dealing with bereaved families. The study shows that the adapted service blueprinting tool has major advantages over alternative quantitative evaluation techniques, including uncovering the complex nature of service users’ experiences in healthcare service systems, highlighting more fully the interconnected configurations within the system, and making greater sense of the impact of the service upon different service users. Unlike other tools, this in-depth examination reveals areas for improvement, many of which have already been addressed by the Hospice. The technique has the potential to improve experiences of palliative and end-of-life care among patients and their families.

Keywords: hospices, end-of-life-care, service blueprinting, service delivery

Procedia PDF Downloads 192
213 A Two-Step, Temperature-Staged, Direct Coal Liquefaction Process

Authors: Reyna Singh, David Lokhat, Milan Carsky

Abstract:

The world crude oil demand is projected to rise to 108.5 million bbl/d by the year 2035. With reserves estimated at 869 billion tonnes worldwide, coal is an abundant resource. This work was aimed at producing a high-value hydrocarbon liquid product from the Direct Coal Liquefaction (DCL) process at comparatively mild operating conditions. A temperature-staged hydrogenation approach was investigated. In a two-reactor lab-scale pilot plant facility, the objectives were to maximise thermal dissolution of the coal in the presence of a hydrogen donor solvent in the first stage, and subsequently to promote hydrogen saturation and hydrodesulphurization (HDS) performance in the second. The feed slurry consisted of high-grade, pulverized bituminous coal on a moisture-free basis with a size fraction of <100 μm, mixed with Tetralin in 2:1 and 3:1 solvent/coal ratios. Magnetite (Fe3O4) at 0.25 wt% of the dry coal feed was added for the catalysed runs. For both stages, hydrogen gas was used to maintain a system pressure of 100 barg. In the first stage, temperatures of 250℃ and 300℃ and reaction times of 30 and 60 minutes were investigated in an agitated batch reactor. The first-stage liquid product was pumped into the second-stage vertical reactor, which was designed to counter-currently contact the hydrogen-rich gas stream and incoming liquid flow in the fixed catalyst bed. Two commercial hydrotreating catalysts, cobalt-molybdenum (CoMo) and nickel-molybdenum (NiMo), were compared in terms of their conversion, selectivity and HDS performance at temperatures 50℃ higher than the respective first-stage tests. The catalysts were activated at 300°C with a hydrogen flowrate of approximately 10 ml/min prior to testing. A gas-liquid separator at the outlet of the reactor ensured that the gas was exhausted to the online VARIOplus gas analyser. The liquid was collected and sampled for analysis using Gas Chromatography-Mass Spectrometry (GC-MS). 
Internal standard quantification methods for the sulphur content; the BTX (benzene, toluene, and xylene) and alkene quality; and the alkane and polycyclic aromatic hydrocarbon (PAH) compounds in the liquid products were guided by ASTM standards of practice for hydrocarbon analysis. In the first stage, using a 2:1 solvent/coal ratio, coal-to-liquid conversion was favoured by the lower operating temperature of 250℃, a 60-minute reaction time, and catalysis by magnetite. Tetralin functioned effectively as the hydrogen donor solvent. A 3:1 ratio favoured increased concentrations of the long-chain alkanes undecane and dodecane, the unsaturated alkenes octene and nonene, and PAH compounds such as indene. The second-stage product distribution showed an increase in the BTX quality of the liquid product and in branched-chain alkanes, and a reduction in the sulphur concentration. In both HDS performance and selectivity towards long and branched-chain alkanes, NiMo performed better than CoMo; CoMo was more selective towards cyclohexane. Over 16 days on stream each, NiMo showed higher activity than CoMo. The potential of the process to help cover the demand for low-sulphur crude diesel and solvents through the production of high-value hydrocarbon liquids is thus demonstrated.
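The catalyst comparison above rests on conversion and HDS figures. The abstract does not state the exact definitions used, so the following is an illustrative sketch only, assuming the common definitions: coal-to-liquid conversion from the mass of insoluble residue on a dry, ash-free (daf) basis, and HDS performance as percentage sulphur removal. All numeric values are hypothetical, not taken from the study:

```python
def conversion_pct(coal_daf_in_g, insoluble_residue_g):
    """Coal-to-liquid conversion on a daf basis: the fraction of fed coal
    no longer present as insoluble residue, expressed as a percentage."""
    return 100.0 * (coal_daf_in_g - insoluble_residue_g) / coal_daf_in_g

def hds_pct(sulphur_feed_wt, sulphur_product_wt):
    """Hydrodesulphurization performance as percentage sulphur removal,
    from sulphur weight fractions in feed and liquid product."""
    return 100.0 * (sulphur_feed_wt - sulphur_product_wt) / sulphur_feed_wt

# Illustrative values only:
print(conversion_pct(100.0, 35.0))  # 65.0
print(hds_pct(1.2, 0.3))            # 75.0
```

Under these assumed definitions, comparing the two catalysts reduces to comparing such percentages at matched second-stage conditions.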

Keywords: catalyst, coal, liquefaction, temperature-staged

Procedia PDF Downloads 648
212 Prevalence of Antibiotic-Resistant Bacteria Isolated from Fresh Vegetables Retailed in Eastern Spain

Authors: Miguel García-Ferrús, Yolanda Domínguez, M Angeles Castillo, M Antonia Ferrús, Ana Jiménez-Belenguer

Abstract:

Antibiotic resistance is a growing public health concern worldwide, and it is now regarded as a critical issue within the "One Health" approach, affecting human and animal health, agriculture, and environmental waste management. This concept focuses on the interconnected nature of human, animal and environmental health, and the WHO highlights zoonotic diseases, food safety, and antimicrobial resistance as three particularly relevant areas for this framework. Fresh vegetables are garnering attention in the food chain due to the presence of pathogens and because they can act as a reservoir for Antibiotic-Resistant Bacteria (ARB) and Antibiotic Resistance Genes (ARG). These fresh products are frequently consumed raw, thereby contributing to the spread and transmission of antibiotic resistance. Therefore, the aim of this research was to study the microbiological quality, the prevalence of ARB, and their role in the dissemination of ARG in fresh vegetables intended for human consumption. For this purpose, 102 samples of fresh vegetables (30 lettuce, 30 cabbage, 18 strawberries and 24 spinach) from different retail establishments in Valencia (Spain) were analyzed to determine their microbiological quality and their role in spreading ARB and ARG. The samples were collected and examined according to standardized methods for total viable bacteria, coliforms, Shiga toxin-producing Escherichia coli (STEC), Listeria monocytogenes and Salmonella spp. Isolation was performed in culture media supplemented with antibiotics (cefotaxime and meropenem). A total of 239 strains resistant to beta-lactam antibiotics (third-generation cephalosporins and carbapenems) were isolated. Thirty Gram-negative isolates were selected and identified biochemically or by partial sequencing of 16S rDNA. Their sensitivity to 12 antibiotics from different therapeutic groups was determined using the Kirby-Bauer disc diffusion technique. 
To determine the presence of ARG, PCR assays were performed on DNA from direct samples and selected isolates for the main extended-spectrum beta-lactamase (ESBL)- and carbapenemase-encoding genes and for plasmid-mediated quinolone resistance genes. Of the total samples, 68% (24/24 spinach, 28/30 lettuce and 17/30 cabbage) showed total viable bacteria levels over the accepted standard range of 10²-10⁵ cfu/g, and 48% (24/24 spinach, 19/30 lettuce and 6/30 cabbage) showed coliform levels over the accepted standard range of 10²-10⁴ cfu/g. In 9 samples (3/24 spinach, 3/30 lettuce, 3/30 cabbage; 9/102, 9%), E. coli levels were higher than the standard limit of 10³ cfu/g. Listeria monocytogenes, Salmonella and STEC were not detected. Six different bacterial species were isolated from the samples. Stenotrophomonas maltophilia (64%) was the most prevalent species, followed by Acinetobacter pittii (14%) and Burkholderia cepacia (7%). All the isolates were resistant to at least one tested antibiotic, including meropenem (85%) and ceftazidime (46%). Of the total isolates, 86% were multidrug-resistant and 68% were ESBL producers. PCR results showed the presence of the beta-lactam resistance genes blaTEM (4%) and blaCMY-2 (4%); the carbapenemase genes blaOXA-48 (25%), blaVIM (7%), blaIMP (21%) and blaKPC (32%); and the quinolone resistance genes QnrA (7%), QnrB (11%) and QnrS (18%). Thus, fresh vegetables harbouring ARB and ARG constitute a potential risk to consumers. Further studies must be done to detect ARG and to determine how they propagate in non-medical environments.
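As a quick cross-check, the prevalence percentages reported above follow directly from the per-vegetable counts in the abstract. A minimal Python sketch, using only the counts given in the text (strawberry samples are not listed among those exceeding either limit, so they are assumed to contribute zero):

```python
# Samples exceeding the accepted standard ranges, per vegetable (from the abstract).
TOTAL_SAMPLES = 102  # 30 lettuce + 30 cabbage + 18 strawberries + 24 spinach

over_total_viable = {"spinach": 24, "lettuce": 28, "cabbage": 17}  # over 10^2-10^5 cfu/g
over_coliforms = {"spinach": 24, "lettuce": 19, "cabbage": 6}      # over 10^2-10^4 cfu/g

def prevalence_pct(over_counts, total=TOTAL_SAMPLES):
    """Percentage of all samples exceeding a limit, rounded to the nearest whole percent."""
    return round(100 * sum(over_counts.values()) / total)

print(prevalence_pct(over_total_viable))  # 68
print(prevalence_pct(over_coliforms))     # 48
```

The computed 68% and 48% match the figures reported in the abstract.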

Keywords: ESBL, β-lactams, resistance, fresh vegetables

Procedia PDF Downloads 87
211 Accountability of Artificial Intelligence: An Analysis Using Edgar Morin’s Complex Thought

Authors: Sylvie Michel, Sylvie Gerbaix, Marc Bidan

Abstract:

Can artificial intelligence (AI) be held accountable for its detrimental impacts? This question gains heightened relevance given AI's pervasive reach across various domains, which magnifies its power and potential. The expanding influence of AI raises fundamental ethical inquiries, primarily centering on biases, responsibility, and transparency. These encompass discriminatory biases arising from algorithmic criteria or data, accidents attributed to autonomous vehicles or other systems, and the imperative of transparent decision-making. This article aims to stimulate reflection on AI accountability, denoting the necessity to elucidate the effects it generates. Accountability comprises two integral aspects: adherence to legal and ethical standards, and the imperative to elucidate the underlying operational rationale. The objective is to initiate a reflection on the obstacles to this "accountability" in the face of the complexity of artificial intelligence systems and their effects. The article then proposes to mobilize Edgar Morin's complex thought to encompass and face the challenges of this complexity. The first contribution is to point out the challenges posed by the complexity of AI, with accountability fractured among a myriad of human and non-human actors, such as software and equipment, which ultimately contribute to the decisions taken and which are multiplied in the case of AI. Accountability faces three challenges resulting from the complexity of the ethical issues combined with the complexity of AI. The first is the challenge of the non-neutrality of algorithmic systems: a revealing ethics approach treats them as fully ethically non-neutral actors and calls for assigning responsibilities to these systems. The second is the challenge of the dilution of responsibility, induced by the multiplicity of actors and the distance between them. 
Thus, a dilution of responsibility is induced by a split in decision-making between developers, who feel they fulfill their duty by strictly respecting the requests they receive, and management, which does not consider itself responsible for technology-related flaws. The third is the challenge of the transparency of complex and scalable algorithmic systems: non-human actors that self-learn via big data. A second contribution involves leveraging E. Morin's principles, providing a framework to grasp the multifaceted ethical dilemmas and subsequently paving the way for establishing accountability in AI. When addressing the ethical challenge of biases, the "hologrammatic" principle underscores the imperative of acknowledging the non-ethical neutrality of algorithmic systems, inherently imbued with the values and biases of their creators and of society. The "dialogic" principle advocates for the responsible consideration of ethical dilemmas, encouraging the integration of complementary and contradictory elements in solutions from the very inception of the design phase. The principle of organizational recursiveness, akin to the "transparency" of the system, promotes a systemic analysis to account for induced effects and guides the reintroduction of modifications into the system to rectify its drifts. In conclusion, this contribution serves as a starting point for contemplating the accountability of artificial intelligence systems in light of their evident ethical implications and potential deviations. Edgar Morin's principles, providing a lens through which to contemplate this complexity, offer valuable perspectives for addressing these accountability challenges.

Keywords: accountability, artificial intelligence, complexity, ethics, explainability, transparency, Edgar Morin

Procedia PDF Downloads 63