Search results for: actual exam time usage
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20602

13282 The Effects of Six Weeks Endurance Training and Aloe Vera on COX-2 and VEGF Levels in Mice with Breast Cancer

Authors: Alireza Barari, Ahmad Abdi

Abstract:

The aim of this study was to determine the effects of six weeks of endurance training and Aloe Vera on cyclooxygenase 2 (COX-2) and VEGF levels in mice with breast cancer. For this purpose, 35 rats were randomly divided into 5 groups: control (healthy), control (cancer), training (cancer), Aloe Vera (cancer) and Aloe Vera + training (cancer). Breast cancer tumors were induced in the mice by implantation. The training program consisted of six weeks of swimming training performed in three sessions per week. Training time increased from 10 minutes on the first day to 60 minutes in the second week, and with this duration held constant, the water flow rate was increased from 7 to 15 liters per minute. Aloe Vera extract was injected intraperitoneally at 300 mg per kg body weight. Sampling was done 48 hours after the last exercise session. The Kolmogorov-Smirnov test was used to check the normality of the data, and repeated-measures analysis of variance with Tukey's test was used to analyze the data. A significance level of p<0.05 was accepted. The results showed that induction of cancer cells significantly increased levels of COX-2 in the Aloe Vera group and VEGF in the training and Aloe Vera + training groups. The results suggest that swimming exercise and Aloe Vera can reduce levels of COX-2 and VEGF in mice with breast cancer. In this study, induction of cancer cells significantly increased levels of COX-2 and MMP-9 in the control group compared with the cancer control group. The results suggest that Aloe Vera can probably inhibit the cyclooxygenase pathway and thus decrease the production of prostaglandin E2 from arachidonic acid.

Keywords: endurance training, aloe vera, COX-2, VEGF

Procedia PDF Downloads 284
13281 Development of Self-Reliant Satellite-Level Propulsion System by Using Hydrogen Peroxide Propellant

Authors: H. J. Liu, Y. A. Chan, C. K. Pai, K. C. Tseng, Y. H. Chen, Y. L. Chan, T. C. Kuo

Abstract:

To satisfy the mission requirements of the FORMOSAT-7 project, NSPO has initiated a self-reliant development of satellite propulsion technology. A trade-off study on different types of on-board propulsion systems has been done. A green propellant, high-concentration hydrogen peroxide (H2O2 hereafter), was chosen in this research because it is ITAR-free, nontoxic and easy to produce. As the components designed for either cold gas or hydrazine propulsion systems are not suitable for an H2O2 propulsion system, the primary objective of the research is to develop components compatible with H2O2. By cooperating with domestic research institutes and manufacturing vendors, several prototype components, including a diaphragm-type tank, pressure transducer, ball latching valve, and one-Newton thruster with catalyst bed, were manufactured, and the functional tests were performed successfully according to the mission requirements. The requisite environmental tests, including the hot firing test, thermal vacuum test, vibration test and compatibility test, are being prepared and will be completed in the near future. To demonstrate the subsystem function, an Air-Bearing Thrust Stand (ABTS) and a real-time Data Acquisition & Control System (DACS) were implemented to assess the performance of the proposed H2O2 propulsion system. By measuring the distance that the thrust stand has traveled in a given time, the thrust force can be derived from the kinematics equation. To validate the feasibility of the approach, it is scheduled to assess the performance of a cold gas (N2) propulsion system prior to the H2O2 propulsion system.
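As a minimal illustration of the kinematics-based thrust estimate mentioned above (a sketch assuming the stand starts from rest, air-bearing friction is negligible, and the thrust F is constant over the measurement interval; these assumptions are not stated in the abstract):

$$ d = \tfrac{1}{2} a t^{2} \;\Rightarrow\; a = \frac{2d}{t^{2}}, \qquad F = m\,a = \frac{2\,m\,d}{t^{2}} $$

where m is the moving mass of the thrust stand assembly, d the distance traveled, and t the elapsed time.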

Keywords: FORMOSAT-7, green propellant, Hydrogen peroxide, thruster

Procedia PDF Downloads 422
13280 Factors Contributing to Delayed Diagnosis and Treatment of Breast Cancer and Its Outcome in Jamhoriat Hospital Kabul, Afghanistan

Authors: Ahmad Jawad Fardin

Abstract:

Over 60% of patients with breast cancer in Afghanistan present late, with advanced stage III and IV disease, a major cause of the poor survival rate. The objectives of this study were to identify the factors contributing to delays in diagnosis and treatment and their outcome. This cross-sectional study was conducted on 318 patients with histologically confirmed breast cancer in the oncology department of Jamhoriat hospital, the first and only national cancer center in Afghanistan. Data were collected from medical records and interviews conducted with women diagnosed with breast cancer; linear regression and logistic regression were used for analysis. Patient delay was defined as the time from first recognition of symptoms until first medical consultation, and doctor delay as the time from first consultation with a health care provider until histological confirmation of breast cancer. The mean age of patients was 49.2 ± 11.5 years. The average time to the final diagnosis of breast cancer was 8.5 months; most patients had ductal carcinoma (260; 82%). Factors associated with delay were low education level (76%), poor socioeconomic and cultural conditions (81%), lack of a cancer center (73%) and lack of screening (19%). The stage distribution was as follows: stage IV, 22%; stage III, 44.4%; stage II, 29.3%; stage I, 4.3%. Complex associated factors were identified that delayed the diagnosis of breast cancer and consequently increased adverse outcomes. Raising awareness and education among women, establishing cancer centers, providing accessible diagnostic services and screening, and training general practitioners are required to promote early detection, diagnosis and treatment.

Keywords: delayed diagnosis and poor outcome, breast cancer in Afghanistan, poor outcome of delayed breast cancer treatment, breast cancer delayed diagnosis and treatment in Afghanistan

Procedia PDF Downloads 177
13279 Consultation Liaison Psychiatry in a Tertiary Care Hospital

Authors: K. Pankaj, R. K. Chaudhary, B. P. Mishra, S. Kochar

Abstract:

Introduction: Consultation-liaison psychiatry is a branch of psychiatry that includes clinical service, teaching and research. A consultation-liaison psychiatrist provides expert opinion, links patients to other medical professionals, and addresses the bio-psycho-social aspects that may be contributing to a patient's symptoms. Consultation-liaison psychiatry has been recognised as 'the guardian of the holistic approach to the patient', underlining its pre-eminent role in the management of patients admitted to a tertiary care hospital. Aims/Objectives: The aim of the study was to analyse the utilization of psychiatric services and the reasons for referrals in a tertiary care hospital. Materials and Methods: The study was done in a tertiary care hospital and included all cases referred from different inpatient wards to the psychiatry department for consultation. The study was conducted on 300 patients over a 3-month period. The International Classification of Diseases, 10th revision (ICD-10) was used to diagnose the referred cases. Results: The majority of referrals came from the medical intensive care unit (22%), followed by the general medical wards (18.66%). Most referrals were made for altered sensorium (24.66%), followed by low mood or unexplained medical symptoms (21%). The most common diagnosis among referrals was alcohol withdrawal syndrome (21%) as per ICD-10 criteria, followed by unipolar depression and anxiety disorder (~14%), schizophrenia (5%) and polysubstance abuse (2.6%). Conclusions: Our study underscores the importance of utilizing consultation-liaison psychiatric services. It also highlights the need to sensitize our colleagues to psychiatric signs and symptoms and to seek psychiatric consultation in a timely manner to decrease morbidity.

Keywords: consultation-liaison, psychiatry, referral, tertiary care hospital

Procedia PDF Downloads 146
13278 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor

Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng

Abstract:

Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. The existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound and fetal fibronectin. However, they are subjective, or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), a method based on recording uterine electrical activity with electrodes attached to the maternal abdomen, is a promising way to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the differences in EHG signal features between term labor and preterm labor. A free-access database was used with 300 signals acquired from two groups of pregnant women who delivered at term (262 cases) and preterm (38 cases). Among them, EHG signals from 38 term labors and 38 preterm labors were preprocessed with band-pass Butterworth filters of 0.08–4 Hz. EHG signal features were then extracted, comprising classical time-domain descriptors including root mean square and zero-crossing number; spectral parameters including peak frequency, mean frequency and median frequency; wavelet packet coefficients; autoregression (AR) model coefficients; and nonlinear measures including the maximal Lyapunov exponent, sample entropy and correlation dimension. Their statistical significance for distinguishing the two groups of recordings was assessed. The results showed that the mean frequency of preterm labor was significantly smaller than that of term labor (p < 0.05). Five AR model coefficients showed significant differences between term labor and preterm labor. The maximal Lyapunov exponent of early preterm recordings (time of recording < the 26th week of gestation) was significantly smaller than that of early term recordings. The sample entropy of late preterm recordings (time of recording > the 26th week of gestation) was significantly smaller than that of late term recordings. There was no significant difference in the other features between the term labor and preterm labor groups. Any future work regarding classification should therefore focus on using multiple techniques, with the mean frequency, AR coefficients, maximal Lyapunov exponent and sample entropy among the prime candidates. Even if these methods are not yet ready for clinical practice, they provide the most promising indicators for preterm labor.
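To make the preprocessing and feature-extraction pipeline concrete, the following is a minimal sketch (not the authors' code) of the band-pass filtering and a few of the time- and frequency-domain features described above, assuming a 1-D NumPy signal and a known sampling rate fs; the filter order and Welch segment length are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def bandpass(x, fs, low=0.08, high=4.0, order=4):
    """Zero-phase Butterworth band-pass (0.08-4 Hz), as used for EHG preprocessing."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def rms(x):
    """Root mean square: a classical time-domain descriptor."""
    return float(np.sqrt(np.mean(x ** 2)))

def zero_crossing_number(x):
    """Count of sign changes in the signal."""
    return int(np.sum(np.signbit(x[:-1]) != np.signbit(x[1:])))

def spectral_features(x, fs):
    """Peak, mean and median frequency from the Welch power spectrum."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 4096))
    peak_freq = f[np.argmax(pxx)]
    mean_freq = np.sum(f * pxx) / np.sum(pxx)
    cdf = np.cumsum(pxx) / np.sum(pxx)
    median_freq = f[np.searchsorted(cdf, 0.5)]
    return peak_freq, mean_freq, median_freq
```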

Keywords: electrohysterogram, feature, preterm labor, term labor

Procedia PDF Downloads 558
13277 Using the ISO 9705 Room Corner Test for Smoke Toxicity Quantification of Polyurethane

Authors: Gabrielle Peck, Ryan Hayes

Abstract:

Polyurethane (PU) foam is typically sold as acoustic foam that is often used as sound insulation in settings such as night clubs and bars. As a construction product, PU is tested by being glued to the walls and ceiling of the ISO 9705 room corner test room. However, when heat is applied to PU foam, it melts and burns as a pool fire because it is a thermoplastic. The current test layout is unable to accurately measure mass loss and does not allow the material to burn as a pool fire without it seeping through the test room floor. The lack of mass loss measurement means that gas yields pertaining to smoke toxicity analysis cannot be calculated, which makes data comparisons with any other material or test method difficult. Additionally, the heat release measurements are not representative of the actual fire, because much of the material seeps through the floor (when a tray to catch the melted material is not used). This research aimed to modify the ISO 9705 test to provide the ability to measure mass loss, allowing better calculation of gas yields and understanding of decomposition. It also aimed to accurately measure smoke toxicity in both the doorway and the duct and to enable dilution factors to be calculated. Finally, the study aimed to examine whether doubling the fuel loading would force under-ventilated flaming. The test layout was modified to be a combination of the SBI (single burning item) test set up inside the ISO 9705 test room. Polyurethane was tested in two different ways with the aim of altering the ventilation condition of the tests. Test one was conducted using 1 x SBI test rig, aiming for well-ventilated flaming. Test two was conducted using 2 x SBI rigs facing each other inside the test room (doubling the fuel loading), aiming for under-ventilated flaming. The two different configurations were successful in achieving both well-ventilated and under-ventilated flaming, shown by the measured equivalence ratios (measured using a phi meter designed and created for these experiments). The findings show that doubling the fuel loading will successfully force under-ventilated flaming conditions. This method can therefore be used when trying to replicate post-flashover conditions in future ISO 9705 room corner tests. The radiative heat generated by the two SBI rigs facing each other facilitated a much higher overall heat release, resulting in a more severe fire. The method successfully allowed accurate measurement of the smoke toxicity produced from the PU foam in terms of simple gases such as oxygen depletion, CO and CO2. Overall, the proposed test modifications improve the ability to measure the smoke toxicity of materials under different fire conditions on a large scale.
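For reference, the equivalence ratio measured by a phi meter follows the standard definition (stated here as general background, not as the authors' specific instrument equation):

$$ \phi = \frac{\left(m_{\text{fuel}}/m_{\text{air}}\right)_{\text{actual}}}{\left(m_{\text{fuel}}/m_{\text{air}}\right)_{\text{stoichiometric}}} $$

with φ < 1 corresponding to well-ventilated (fuel-lean) flaming and φ > 1 to under-ventilated (fuel-rich) flaming.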

Keywords: flammability, ISO9705, large-scale testing, polyurethane, smoke toxicity

Procedia PDF Downloads 71
13276 Analysis of Interparticle Interactions in High Waxy-Heavy Clay Fine Sands for Sand Control Optimization

Authors: Gerald Gwamba

Abstract:

Formation and oil well sand production is one of the greatest and oldest concerns of the oil and gas industry. The production of sand particles may vary from very small and limited amounts to highly elevated levels, which have the potential to plug the pore spaces near the perforation points or to block production at surface facilities. Therefore, timely and reliable investigation of the conditions leading to the onset of sanding, and quantification of sanding during production, is imperative. The challenges of sand production are even greater when producing in waxy and heavy wells with clay fine sands (WHFC). Existing research argues that waxy and heavy hydrocarbons exhibit markedly different characteristics, with waxy crudes being more paraffinic while heavy crude oils exhibit more asphaltenic properties. Moreover, the combined effect of WHFC conditions presents more complexity in production than the individual effects considered in isolation. Research on a combined high-WHFC system could therefore give a better representation of the overall effect, which is more comparable to field conditions, where a one-sided view of individual effects on sanding has been argued to be, to some extent, misrepresentative of actual field conditions, since all factors act together. Recognizing the limited customized research on sand production under the combined effect of WHFC conditions, our research applies the Design of Experiments (DOE) methodology, based on the latest literature, to analyze the relationship between various interparticle factors in relation to selected sand control methods. Our research aims to develop a better understanding of how the combined effect of interparticle factors, including strength, cementation, particle size and production rate, among others, could assist in the design of an optimal sand control system for WHFC well conditions. In this regard, we seek to answer the following research question: how does the combined effect of interparticle factors affect the optimization of sand control systems for WHFC wells? Results from experimental data collection will inform a better-justified sand control design for WHFC wells. In doing so, we hope to contribute to earlier contrasting findings arguing that sand production could potentially enable well self-permeability enhancement caused by the establishment of new flow channels created by the loosening and detachment of sand grains. We hope that our research will contribute to future sand control designs capable of adapting to flexible production adjustments in controlled sand management. This paper presents results that are part of ongoing research towards the author's PhD project on the optimization of sand control systems for WHFC wells.

Keywords: waxy-heavy oils, clay-fine sands, sand control optimization, interparticle factors, design of experiments

Procedia PDF Downloads 128
13275 The Amorphousness of the Exposure Sphere

Authors: Nipun Ansal

Abstract:

People guard their beliefs and opinions with their lives. These are beliefs that they have formed over a period of time, and they can go to any lengths to defy, desist from, resist and negate any outward stimulus that has the potential to shake them. Cognitive dissonance is the term used to describe this in theory. Every human being, in order to defend against cognitive dissonance, applies four rings of defense: selective exposure, selective perception, selective attention, and selective retention. This paper is a discursive analysis of how the onslaught of social media, complete with its intrusive weaponry, has rendered amorphous the external ring of defense: selective exposure. The stimulus-response model of communication is one of the most fundamental models encompassing the communication behaviours of children and the elderly, individuals and masses, humans and animals alike. The paper deliberates on how information bombardment through the uncontrollable channels of social media, Facebook and Twitter in particular, has dismantled our outer sphere of exposure, leading online users to a state of constant dissonance and thus feeding impulsive action-taking. It applies the case study method, citing an example to corroborate how knowledge generation has given way to information overload, and examines the effect this has on decision making. With stimuli increasing in number of encounters, opinion formation precedes knowledge because of the increased demand for participation and the decrease in time for information to permeate from the outer sphere of exposure to the sphere of retention, which, of course, happens through perception and attention. This paper discusses the challenge posed by this fleeting, stimulus-rich, peer-dominated media to the traditional models of communication and meaning-generation.

Keywords: communication, discretion, exposure, social media, stimulus

Procedia PDF Downloads 403
13274 The Impact of Information and Communications Technology (ICT)-Enabled Service Adaptation on Quality of Life: Insights from Taiwan

Authors: Chiahsu Yang, Peiling Wu, Ted Ho

Abstract:

From emphasizing economic development to stressing public happiness, the international community mainly hopes to understand whether the quality of life for the public is becoming better. The Better Life Index (BLI) constructed by the OECD uses living conditions and quality of life as starting points to cover 11 areas of life and to convey the state of the general public's well-being. In light of the BLI framework, the Directorate General of Budget, Accounting and Statistics (DGBAS) of the Executive Yuan instituted the Gross National Happiness Index to understand the needs of the general public and to measure progress in the aforementioned conditions among residents across the island. Living conditions consist of income and wealth, jobs and earnings, and housing conditions, while quality of life covers health status, work and life balance, education and skills, social connections, civic engagement and governance, environmental quality, and personal security. The ICT area consists of health care, living environment, ICT-enabled communication, transportation, government, education, pleasure, purchasing, and job & employment. In the wake of further science and technology development, the rapid formation of information societies, and closer integration between lifestyles and information societies, the public's well-being within information societies has become a noteworthy topic. The Board of Science and Technology of the Executive Yuan used the OECD's BLI as a reference in establishing the Taiwan-specific ICT-Enabled Better Life Index. Using this index, the government plans to examine whether the public's quality of life is improving as well as measure the public's satisfaction with current digital quality of life. This understanding will enable the government to gauge the degree of influence and impact that each dimension of digital services has on digital life happiness, while also serving as an important reference for promoting digital service development. Information and communications technology (ICT) has been affecting people's lifestyles and, further, impacting people's quality of life (QoL). Even though studies have shown that ICT access and usage have both positive and negative impacts on life satisfaction and well-being, many governments continue to invest in e-government programs to initiate their path to the information society. This research is one of the few attempts to link an e-government benchmark to subjective well-being perception, to address the gap between users' perceptions and existing hard-data assessments, and to propose a model that traces measurement results back to the original public policy so that policy makers can justify their future proposals.

Keywords: information and communications technology, quality of life, satisfaction, well-being

Procedia PDF Downloads 348
13273 Colorimetric Measurement of Dipeptidyl Peptidase IV (DPP IV) Activity via Peptide Capped Gold Nanoparticles

Authors: H. Aldewachi, M. Hines, M. McCulloch, N. Woodroofe, P. Gardiner

Abstract:

DPP-IV is an enzyme whose expression is altered in a variety of diseases and has therefore been identified as a possible diagnostic or prognostic marker for various tumours and for immunological, inflammatory, neuroendocrine, and viral diseases. Recently, the DPP-IV enzyme has also been identified as a novel target for type II diabetes treatment. There is, therefore, a need to develop sensitive and specific methods that can be easily deployed for screening the enzyme, either as a tool for drug screening or as a disease marker in biological samples. A variety of assays have been introduced for the determination of DPP-IV enzyme activity using chromogenic and fluorogenic substrates; nevertheless, these assays either lack the required sensitivity, especially in inhibited enzyme samples, or display low water solubility, making them difficult to use with in vivo samples, in addition to requiring labour- and time-consuming sample preparation. In this study, novel strategies that exploit the high extinction coefficient of gold nanoparticles (GNPs) are investigated in order to develop a fast, specific and reliable enzymatic assay, by designing synthetic peptide sequences containing a DPP-IV cleavage site and coupling them to GNPs. DPP-IV could be detected by the colorimetric response of the peptide-capped GNPs (P-GNPs), which could be monitored by a UV-visible spectrophotometer or even the naked eye, with a detection limit reaching 0.01 unit/ml. The P-GNPs, when subjected to DPP-IV, showed excellent selectivity compared to other proteins (thrombin and human serum albumin), which led to a prominent colour change. This provides a simple and effective colorimetric sensor for on-site and real-time detection of DPP-IV.

Keywords: gold nanoparticles, synthetic peptides, colorimetric detection, DPP-IV enzyme

Procedia PDF Downloads 300
13272 The Moment of Departure: Redefining Self and Space in Literacy Activism

Authors: Sofie Dewayani, Pratiwi Retnaningdyah

Abstract:

Literacy practice is situated within identity enactment in a particular time and space. Public places, ranging from city parks and urban slums to city roads, are meeting places of discursive practices produced by dynamic interactions, and sometimes contestations, of social powers and capitals. The present paper examines the ways literacy activists construct their sense of space in attempts to develop possibilities for literacy programs as they are sent to work with marginalized communities far away from their hometowns in Indonesia. In particular, this paper analyzes the activists' reflections on identity enactment - othering, familiarity, and sense of comfort - as they try to make meaning of the communities' literacy capitals and practices in the process of adapting to those communities. Data collected for this paper were travel diaries - serving as literacy narratives - obtained from a literacy residency program sponsored by the Indonesian Ministry of Education and Culture. The residency program itself involved 30 youths (18 to 30 years old) working with marginalized communities in literacy activism programs. This paper analyzes the written narratives of four focal participants using Bakhtin's chronotopes - the configurations of time and space - that figure into the youths' meaning-making of literacy as well as their exercise of power and identity. Follow-up interviews were added to enrich the analysis. The analysis considers the youths' 'moment of departure' a critical point in their reconstructions of self and space. This paper expands the discussion of literacy discourse and spatiality while lending its support to literacy activism in highly diverse multicultural settings.

Keywords: chronotopes, discourse, identity, literacy activism

Procedia PDF Downloads 177
13271 Artificial Intelligence and Robotics in the Eye of Private Law with Special Regards to Intellectual Property and Liability Issues

Authors: Barna Arnold Keserű

Abstract:

In the last few years (what many scholars call the big data era), artificial intelligence (hereinafter AI) has received more and more attention from the public and from the different branches of science as well. What was previously mere science fiction is now starting to become reality. AI and robotics often go hand in hand, which changes not only business and industrial life but also has a serious impact on the legal system. The author's main research focuses on these impacts in the field of private law, with special regard to liability and intellectual property issues. Many questions arise in these areas in connection with AI and robotics, where the boundaries are not sufficiently clear and different needs are articulated by the different stakeholders. Recognizing the urgent need for such thinking, the Committee on Legal Affairs of the European Parliament adopted a Motion for a European Parliament Resolution, A8-0005/2017 (of January 27th, 2017), in order to make recommendations to the Commission on civil law rules on robotics and AI. This document identifies some crucial uses of AI and/or robotics, e.g. autonomous vehicles, human job replacement in industry, and smart applications and machines. It aims to give recommendations for the safe and beneficial use of AI and robotics. However, as the document says, there are no legal provisions that specifically apply to robotics or AI in IP law, but existing legal regimes and doctrines can be readily applied to robotics, although some aspects appear to call for specific consideration; it therefore calls on the Commission to support a horizontal and technologically neutral approach to intellectual property applicable to the various sectors in which robotics could be employed. AI can generate content that is worth copyright protection, but the question arises: who is the author, and who owns the copyright? The AI itself cannot be deemed the author, because that would mean it is legally equal to human persons. But there is the programmer who created the basic code of the AI, or the undertaking that sells the AI as a product, or the user who gives the inputs to the AI in order to create something new. Or AI-generated content is so far removed from humans that there is no human author, so such content belongs to the public domain. The same questions can be asked in connection with patents. The research aims to answer these questions within the current legal framework and tries to highlight future possibilities for adapting these frameworks to socio-economic needs. In this respect, proper license agreements in the multilevel chain from the programmer to the end user become very important, because AI is intellectual property in itself that creates further intellectual property. This could collide with data-protection and property rules as well. The problems are similar in the field of liability. We can use different existing forms of liability in cases where AI or AI-led robotics cause damage, but it is unclear whether the result complies with economic and developmental interests.

Keywords: artificial intelligence, intellectual property, liability, robotics

Procedia PDF Downloads 195
13270 The Nursing Experience for an Intestinal Perforation Elderly with a Temporary Enterostomy

Authors: Hsiu-Chuan Hsueh, Kuei-Feng Shen Jr., Chia-Ling Chao, Hui-Chuan Pan

Abstract:

This article describes a 75-year-old woman who suffered from intestinal perforation and underwent surgery with a temporary enterostomy. The operation left her depressed; she refused care from relatives and friends and showed little willingness to participate in various activities because of fear of the change in body appearance caused by the surgery and the enterostomy. The author collected information through observational conversations, physical evaluation, and medical records during the period of care from November 14 to November 30, 2016. The four aspects of physiology, psychology, society and spirituality were used as a holistic assessment to establish the patient's nursing problems, which included acute pain, disturbed body image, and ineffective individual coping. For these care issues, the patient was encouraged to express her inner feelings and take part in self-care programs through good therapeutic interpersonal relationships with her and her family. In addition, clear information about the disease and the follow-up treatment plan was provided, praise was given in a timely manner, and the patient's self-confidence and motivation to participate in stoma self-care were enhanced, helping her face the disease in a positive manner. At the same time, a cross-disciplinary team care model and individual care measures were developed to enhance care skills after returning home and to assist the patient in facing the psychological impact caused by the stoma. We hope this experience can serve as a reference for the future care of this condition.

Keywords: enterostomy, intestinal perforation, nursing experience, ostomy

Procedia PDF Downloads 133
13269 Resilience-Vulnerability Interaction in the Context of Disasters and Complexity: Study Case in the Coastal Plain of Gulf of Mexico

Authors: Cesar Vazquez-Gonzalez, Sophie Avila-Foucat, Leonardo Ortiz-Lozano, Patricia Moreno-Casasola, Alejandro Granados-Barba

Abstract:

In the last twenty years, academic and scientific literature has focused on understanding the processes and factors of coastal social-ecological systems' vulnerability and resilience. Some scholars argue that resilience and vulnerability are isolated concepts due to their epistemological origin, while others note the existence of a strong resilience-vulnerability relationship. Here we present an ordinal logistic regression model based on an analytical framework of dynamic resilience-vulnerability interaction along the adaptive cycle of complex systems and the phases of the disaster process (during, recovery and learning). In this way, we demonstrate that 1) during the disturbance, absorptive capacity (resilience as a core of attributes) and external response capacity explain the probability that household capitals diminish the damage, and exposure sets the thresholds on the amount of disturbance that households can absorb; 2) at recovery, absorptive capacity and external response capacity explain the probability that household capitals recover faster from damage (resilience as an outcome); and 3) at learning, adaptive capacity (resilience as a core of attributes) explains the probability of household adaptation measures based on the enhancement of physical capital. As a result, during the disturbance phase, exposure has the greatest weight in the probability of capital damage, and households with elements of absorptive and external response capacity absorbed the impact of floods, in comparison with households without these elements. At the recovery phase, households with absorptive and external response capacity showed faster recovery of their capital; however, the damage sets the thresholds of recovery time. More importantly, diversity in financial capital increases the probability of recovering other capitals, but it becomes a liability, increasing the probability that household finances take longer to recover. At the learning-reorganizing phase, adaptation (modifications to the house) increases the probability of less damage to physical capital; however, its effect is small. In conclusion, resilience is an outcome but also a core of attributes that interacts with vulnerability along the adaptive cycle and the disaster process phases. Absorptive capacity can diminish the damage experienced from floods; however, when exposure exceeds thresholds, both absorptive and external response capacity are not enough. In the same way, absorptive and external response capacity diminish the recovery time of capital, but the damage sets the thresholds beyond which households are not capable of recovering their capital.
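As a minimal, hypothetical sketch of the kind of ordinal logistic regression described above (the variable names, data file, and outcome coding below are illustrative assumptions, not the study's actual data or specification), using statsmodels:

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical household-level data: 'damage' is an ordered outcome
# (0 = low, 1 = medium, 2 = high); predictors follow the framework above.
df = pd.read_csv("households.csv")  # hypothetical file name

exog = df[["exposure", "absorptive_capacity", "external_response_capacity"]]
model = OrderedModel(df["damage"], exog, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```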

Keywords: absorptive capacity, adaptive capacity, capital, floods, recovery-learning, social-ecological systems

Procedia PDF Downloads 129
13268 Mobile Technology Use by People with Learning Disabilities: A Qualitative Study

Authors: Peter Williams

Abstract:

Mobile digital technology, in the form of smart phones, tablets, laptops and their accompanying functionality/apps etc., is becoming ever more used by people with Learning Disabilities (LD) - for entertainment, to communicate and socialize, and to enjoy self-expression. Despite this, there has been very little research into this cohort's experiences of such technology, its role in articulating personal identity and self-advocacy, and the barriers encountered in negotiating technology in everyday life. The proposed talk describes research funded by the British Academy addressing these issues. It aims to explore: i) the experiences of people with LD in using mobile technology in their everyday lives – the benefits, in terms of entertainment, self-expression and socialising, and possible greater autonomy; and the barriers, such as accessibility or usability issues, privacy or vulnerability concerns etc.; ii) how the technology, and in particular the software/apps and interfaces, can be improved to enable greater access to the entertainment, information, communication and other benefits it can offer. It is also hoped that results will inform parents, carers and other supporters regarding how they can use the technology with their charges. Rather than the project simply following the standard research procedure of gathering and analysing ‘data’ to which individual ‘research subjects’ have no access, people with Learning Disabilities (and their supporters) will help co-produce an accessible, annotated and hyperlinked living e-archive of their experiences. Involving people with LD as informants, contributors and, in effect, co-researchers will facilitate digital inclusion and empowerment. The project is working with approximately 80 adults of all ages who have ‘mild’ learning disabilities (people who are able to read basic texts and write simple sentences). A variety of methods is being used. Small groups of participants have engaged in simple discussions or storytelling about some aspect of technology (such as ‘when my phone saved me’ or ‘my digital photos’ etc.). Some individuals have been ‘interviewed’ at a PC, laptop or with a mobile device etc., and asked to demonstrate their usage and interests. Social media users have shown their Facebook pages, Pinterest uploads or other material – giving them an additional focus they have used to discuss their ‘digital’ lives. During these sessions, participants have recorded (or employed the researcher to record) their observations on to the e-archive. Parents, carers and other supporters are also being interviewed to explore their experiences of using mobile technology with the cohort, including any difficulties they have observed their charges having. The archive is supplemented with these observations. The presentation will outline the methods described above, highlighting some of the special considerations required when working inclusively with people with LD. It will describe some of the preliminary findings and demonstrate the e-archive with a commentary on the pages shown.

Keywords: inclusive research, learning disabilities, methods, technology

Procedia PDF Downloads 220
13267 A Comparison and Discussion of Modern Anaesthetic Techniques in Elective Lower Limb Arthroplasties

Authors: P. T. Collett, M. Kershaw

Abstract:

Introduction: The discussion regarding which method of anaesthesia provides better results for lower limb arthroplasty is a continuing debate. Multiple meta-analyses have been performed with no clear consensus. The current recommendation is to use neuraxial anaesthesia for lower limb arthroplasty; however, the evidence to support this decision is weak. The Enhanced Recovery After Surgery (ERAS) society has recommended that either technique can be used as part of a multimodal anaesthetic regimen. A local study was performed to see whether current anaesthetic practice correlates with the current recommendations and to evaluate the efficacy of the different techniques utilized. Method: 90 patients who underwent total hip or total knee replacements at Nevill Hall Hospital between February 2019 and July 2019 were reviewed. Data collected included the anaesthetic technique, day-one opiate use, pain score, and length of stay. The data were collected from anaesthetic charts and the pain team's follow-up forms. Analysis: The average age of patients undergoing lower limb arthroplasty was 70. Of those, 83% (n=75) received a spinal anaesthetic and 17% (n=15) received a general anaesthetic. For patients undergoing knee replacement under general anaesthetic, the average day-one pain score was 2.29, and 1.94 if a spinal anaesthetic was performed. For hip replacements, the scores were 1.87 and 1.8, respectively. There was no statistical significance between these scores. Day-one opiate usage was significantly higher in knee replacement patients who were given a general anaesthetic (45.7 mg IV morphine equivalent) vs. those who were operated on under spinal anaesthetic (19.7 mg). This difference was not noticeable in hip replacement patients. There was no significant difference in length of stay between the two anaesthetic techniques. Discussion: There was no significant difference in the day-one pain score between the patients who received a general or spinal anaesthetic for either knee or hip replacements. The higher pain scores in the knee replacement group overall are consistent with this being a more painful procedure. This is a small patient population, which means any difference between the two groups is unlikely to be representative of a larger population. The pain scale has 4 points, which makes it difficult to identify a significant difference between pain scores. Conclusion: There is currently little standardization between the different anaesthetic approaches utilized in Nevill Hall Hospital. This is likely due to the lack of adherence to a standardized anaesthetic regimen. In accordance with ERAS recommendations, a standardized anaesthetic protocol is a core component. The results of this study and the guidance from the ERAS society will support the implementation of a new health-board-wide ERAS protocol.

Keywords: anaesthesia, orthopaedics, intensive care, patient centered decision making, treatment escalation

Procedia PDF Downloads 122
13266 Generalized Synchronization in Systems with a Complex Topology of Attractor

Authors: Olga I. Moskalenko, Vladislav A. Khanadeev, Anastasya D. Koloskova, Alexey A. Koronovskii, Anatoly A. Pivovarov

Abstract:

Generalized synchronization is one of the most intricate phenomena in nonlinear science. It can be observed in systems with both unidirectional and mutual types of coupling, including complex networks. This phenomenon has a number of practical applications, for example, secure information transmission through a communication channel with a high level of noise. Known methods for secure information transmission require increased privacy of data transmission, which raises the question of observing this phenomenon in systems with a complex topology of the chaotic attractor, possessing two or more positive Lyapunov exponents. The present report is devoted to the study of this phenomenon in two unidirectionally and mutually coupled dynamical systems in chaotic (with one positive Lyapunov exponent) and hyperchaotic (with two or more positive Lyapunov exponents) regimes, respectively. As the systems under study, we have used two mutually coupled modified Lorenz oscillators and two unidirectionally coupled time-delayed generators. We have shown that in both cases the generalized synchronization regime can be detected by means of the calculation of Lyapunov exponents and the phase tube approach, whereas due to the complex topology of the attractor the nearest neighbor method is misleading. Moreover, the auxiliary system approach, which is the standard method for detecting the synchronous regime, gives incorrect results for the mutual type of coupling. To calculate the Lyapunov exponents in time-delayed systems, we have proposed an approach based on a modification of the Gram-Schmidt orthogonalization procedure in the context of the time-delayed system. We have studied in detail the mechanisms resulting in the onset of the generalized synchronization regime, paying particular attention to the region where one positive Lyapunov exponent has already become negative whereas the second one is still positive. We have found intermittency here and studied its characteristics. To detect the laminar phase lengths, a method based on the calculation of local Lyapunov exponents has been proposed. The efficiency of the method has been verified using the example of two unidirectionally coupled Rössler systems in the band chaos regime. We have revealed the main characteristics of intermittency, i.e. the distribution of the laminar phase lengths and the dependence of the mean length of the laminar phases on the criticality parameter, for all systems studied in the report. This work has been supported by the Russian President's Council grant for the state support of young Russian scientists (project MK-531.2018.2).
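As background for the Lyapunov-exponent calculations mentioned above, the following is a minimal sketch of the standard Benettin/Gram-Schmidt procedure for an ordinary (non-delayed) system, illustrated on the classical Lorenz oscillator; it is not the authors' modified algorithm for time-delayed systems, and the simple Euler integrator and parameter values are illustrative assumptions.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the classical Lorenz system."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def jacobian(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Jacobian of the Lorenz vector field, used to evolve tangent vectors."""
    x, y, z = s
    return np.array([[-sigma, sigma, 0.0],
                     [rho - z, -1.0, -x],
                     [y, x, -beta]])

def lyapunov_spectrum(s0, dt=0.01, steps=100000):
    """Benettin-style estimate: evolve an orthonormal tangent-space frame along
    the trajectory and re-orthonormalize it (Gram-Schmidt via QR) at every step."""
    s = np.array(s0, dtype=float)
    Q = np.eye(3)
    log_sums = np.zeros(3)
    for _ in range(steps):
        f, J = lorenz(s), jacobian(s)
        s = s + dt * f              # crude Euler step, for brevity only
        Q = Q + dt * (J @ Q)        # same step for the tangent-space frame
        Q, R = np.linalg.qr(Q)
        log_sums += np.log(np.abs(np.diag(R)))
    return log_sums / (steps * dt)

print(lyapunov_spectrum([1.0, 1.0, 1.0]))  # roughly (+, 0, -) for these parameters
```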

Keywords: complex topology of attractor, generalized synchronization, hyperchaos, Lyapunov exponents

Procedia PDF Downloads 268
13265 The Role of Long-Chain Ionic Surfactants on Extending Drug Delivery from Contact Lenses

Authors: Cesar Torres, Robert Briber, Nam Sun Wang

Abstract:

Eye drops are the most commonly used treatment for short-term and long-term ophthalmic diseases. However, eye drops deliver only about 5% of the functional ingredients contained in a burst dosage. To address the limitations of eye drops, the use of therapeutic contact lenses has been introduced. Drug-loaded contact lenses give drugs a longer residence time in the tear film and hence decrease the potential risk of side effects. Nevertheless, a major limitation of contact lenses as drug delivery devices is that most of the absorbed drug is released within the first few hours, which limits their use for extended release. The present study demonstrates the application of long-alkyl-chain ionic surfactants in extending drug release kinetics from commercially available silicone hydrogel contact lenses. In vitro release experiments were carried out by immersing drug-containing contact lenses in phosphate-buffered saline at physiological pH. The drug concentration as a function of time was monitored using ultraviolet-visible spectroscopy. The results of the study demonstrate that the release kinetics depend on the ionic surfactant weight percent in the contact lenses and on the length of the hydrophobic alkyl chain of the ionic surfactants. The use of ionic surfactants in contact lenses can extend the delivery of drugs from a few hours to a few weeks, depending on the physicochemical properties of the drugs. Contact lenses embedded with ionic surfactants could be potential biomaterials for extended drug delivery and the treatment of ophthalmic diseases. However, ocular irritation and toxicity studies would be needed to evaluate the safety of the approach.

Keywords: contact lenses, drug delivery, controlled release, ionic surfactant

Procedia PDF Downloads 138
13264 The Association between Prior Antibiotic Use and Subsequent Risk of Infectious Disease: A Systematic Review

Authors: Umer Malik, David Armstrong, Mark Ashworth, Alex Dregan, Veline L'Esperance, Lucy McDonnell, Mariam Molokhia, Patrick White

Abstract:

Introduction: The microbiota lining epithelial surfaces is thought to play an important role in many human physiological functions including defense against pathogens and modulation of immune response. The microbiota is susceptible to disruption from external influences such as exposure to antibiotic medication. It is thought that antibiotic-induced disruption of the microbiota could predispose to pathogen overgrowth and invasion. We hypothesized that antibiotic use would be associated with increased risk of future infections. We carried out a systematic review of evidence of associations between antibiotic use and subsequent risk of community-acquired infections. Methods: We conducted a review of the literature for observational studies assessing the association between antibiotic use and subsequent community-acquired infection. Eligible studies were published before April 29th, 2016. We searched MEDLINE, EMBASE, and Web of Science and screened titles and abstracts using a predefined search strategy. Infections caused by Clostridium difficile, drug-resistant organisms and fungal organisms were excluded as their association with prior antibiotic use has been examined in previous systematic reviews. Results: Eighteen out of 21,518 retrieved studies met the inclusion criteria. The association between past antibiotic exposure and subsequent increased risk of infection was reported in 16 studies, including one study on Campylobacter jejuni infection (Odds Ratio [OR] 3.3), two on typhoid fever (ORs 5.7 and 12.2), one on Staphylococcus aureus skin infection (OR 2.9), one on invasive pneumococcal disease (OR 1.57), one on recurrent furunculosis (OR 16.6), one on recurrent boils and abscesses (Risk ratio 1.4), one on upper respiratory tract infection (OR 2.3) and urinary tract infection (OR 1.1), one on invasive Haemophilus influenzae type b (Hib) infection (OR 1.51), one on infectious mastitis (OR 5.38), one on meningitis (OR 2.04) and five on Salmonella enteric infection (ORs 1.4, 1.59, 1.9, 2.3 and 3.8). The effect size in three studies on Salmonella enteric infection was of marginal statistical significance. A further two studies on Salmonella infection did not demonstrate a statistically significant association between prior antibiotic exposure and subsequent infection. Conclusion: We have found an association between past antibiotic exposure and subsequent risk of a diverse range of infections in the community setting. Our findings provide evidence to support the hypothesis that prior antibiotic usage may predispose to future infection risk, possibly through antibiotic-induced alteration of the microbiota. The findings add further weight to calls to minimize inappropriate antibiotic prescriptions.

Keywords: antibiotic, infection, risk factor, side effect

Procedia PDF Downloads 223
13263 Reading and Writing of Biscriptal Children with and Without Reading Difficulties in Two Alphabetic Scripts

Authors: Baran Johansson

Abstract:

This PhD dissertation aimed to explore children’s writing and reading in L1 (Persian) and L2 (Swedish). It adds new perspectives to reading and writing studies of bilingual biscriptal children with and without reading and writing difficulties (RWD). The study used standardised tests to examine linguistic and cognitive skills related to word reading and writing fluency in both languages. Furthermore, all participants produced two texts (one descriptive and one narrative) in each language. The writing processes and the writing products of these children were explored using logging methodologies (Eye and Pen) for both languages. Furthermore, this study investigated how two bilingual children with RWD presented themselves through writing across their languages. To my knowledge, studies utilizing standardised tests and logging tools to investigate bilingual children’s word reading and writing fluency across two different alphabetic scripts are scarce. There have been few studies analysing how bilingual children construct meaning in their writing, and none have focused on children who write in two different alphabetic scripts or those with RWD. Therefore, some aspects of the systemic functional linguistics (SFL) perspective were employed to examine how two participants with RWD created meaning in their written texts in each language. The results revealed that children with and without RWD had higher writing fluency in all measures (e.g. text lengths, writing speed) in their L2 compared to their L1. Word reading abilities in both languages were found to influence their writing fluency. The findings also showed that bilingual children without reading difficulties performed 1 standard deviation below the mean when reading words in Persian. However, their reading performance in Swedish aligned with the expected age norms, suggesting greater efficiency in reading Swedish than in Persian. Furthermore, the results showed that the level of orthographic depth, consistency between graphemes and phonemes, and orthographic features can probably explain these differences across languages. The analysis of meaning-making indicated that the participants with RWD exhibited varying levels of difficulty, which influenced their knowledge and usage of writing across languages. For example, the participant with poor word recognition (PWR) presented himself similarly across genres, irrespective of the language in which he wrote. He employed the listing technique similarly across his L1 and L2. However, the participant with mixed reading difficulties (MRD) had difficulties with both transcription and text production. He produced spelling errors and frequently paused in both languages. He also struggled with word retrieval and producing coherent texts, consistent with studies of monolingual children with poor comprehension or with developmental language disorder. The results suggest that the mother tongue instruction provided to the participants has not been sufficient for them to become balanced biscriptal readers and writers in both languages. Therefore, increasing the number of hours dedicated to mother tongue instruction and motivating the children to participate in these classes could be potential strategies to address this issue.

Keywords: reading, writing, reading and writing difficulties, bilingual children, biscriptal

Procedia PDF Downloads 65
13262 Greenhouse Gasses’ Effect on Atmospheric Temperature Increase and the Observable Effects on Ecosystems

Authors: Alexander J. Severinsky

Abstract:

Radiative forces of greenhouse gases (GHG) increase the temperature of the Earth's surface, more on land and less in the oceans, due to their thermal capacities. Given this inertia, the temperature increase is delayed over time. Air temperature, however, is not delayed, as the thermal capacity of air is much lower. In this study, through analysis and synthesis of multidisciplinary science and data, an estimate of the atmospheric temperature increase is made. This estimate is then used to shed light on current observations of ice and snow loss, desertification and forest fires, and increased extreme air disturbances. The reason for this inquiry is the author's skepticism that current changes can be explained by a ~1 °C global average surface temperature rise within the last 50-60 years. The only other plausible cause to explore for understanding is that of an atmospheric temperature rise. The study utilizes an analysis of the air temperature rise from three different scientific disciplines: thermodynamics, climate science experiments, and climatic historical studies. The results coming from these diverse disciplines are nearly the same, within ±1.6%. The direct radiative force of GHGs with a high level of scientific understanding is near 4.7 W/m² on average over the Earth's entire surface in 2018, compared to pre-industrial times in the mid-1700s. The additional radiative force of fast feedbacks coming from various forms of water adds approximately 15 W/m². In 2018, these radiative forces heated the atmosphere by approximately 5.1 °C, which will create a thermal-equilibrium average ground surface temperature increase of 4.6 °C to 4.8 °C by the end of this century. After 2018, the temperature will continue to rise without any additional increases in the concentration of the GHGs, primarily carbon dioxide and methane. These findings on the radiative force of GHGs in 2018 were applied to estimate effects on major Earth ecosystems. This additional force of nearly 20 W/m² causes an additional ice melting rate of over 90 cm/year, a green-leaf temperature increase of nearly 5 °C, and a work energy increase of air of approximately 40 Joules/mole. This explains the observed high rates of ice melting at all altitudes and latitudes, the spread of deserts and increases in forest fires, as well as the increased energy of tornadoes, typhoons, hurricanes, and extreme weather, much more plausibly than the 1.5 °C increase in average global surface temperature over the same time interval. Planned mitigation and adaptation measures might prove to be much more effective when directed toward the reduction of existing GHGs in the atmosphere.
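Read literally, the figures quoted above imply a simple linear scaling between the total radiative force and the stated atmospheric warming (this is only a back-of-the-envelope reading of the quoted numbers, not a relation given by the author):

$$ \Delta T_{\text{atm}} \approx \lambda\,\Delta F, \qquad \lambda \approx \frac{5.1\ ^{\circ}\mathrm{C}}{(4.7 + 15)\ \mathrm{W/m^{2}}} \approx 0.26\ ^{\circ}\mathrm{C}\ \text{per}\ \mathrm{W/m^{2}}. $$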

Keywords: greenhouse radiative force, greenhouse air temperature, greenhouse thermodynamics, greenhouse historical, greenhouse radiative force on ice, greenhouse radiative force on plants, greenhouse radiative force in air

Procedia PDF Downloads 99
13261 Properties of Triadic Concrete Containing Rice Husk Ash and Wood Waste Ash as Partial Cement Replacement

Authors: Abdul Rahman Mohd. Sam, Olukotun Nathaniel, Dunu Williams

Abstract:

Concrete is one of the most popular materials used in the construction industry. However, one of its setbacks is that concrete can degrade with time upon exposure to an aggressive environment, which leads to a decrease in strength. Thus, research and innovative approaches are needed to enhance the strength and durability of concrete. This work looks into the potential use of rice husk ash (RHA) and wood waste ash (WWA) as cement replacement materials. These are waste materials that may not only enhance the properties of concrete but also serve as a viable method of waste disposal for sustainability. In addition, a substantial replacement of Ordinary Portland Cement (OPC) with these pozzolans will mean a reduction in the CO₂ emissions and high energy requirements associated with the production of OPC. This study is aimed at assessing the properties of triadic concrete produced using RHA and WWA as a partial replacement of cement. The effects of partial replacement of OPC with 10% RHA and 5% WWA on the compressive and tensile strength of concrete, among other properties, were investigated. Concrete was produced with a nominal mix of 1:2:4 and a 0.55 water-cement ratio, then prepared, cured and subjected to compressive and tensile strength tests at 3, 7, 14, 28 and 90 days. The experimental data demonstrate that concrete containing RHA and WWA was lighter in comparison with the OPC sample. Results also show that the combination of RHA and WWA helps to prolong the initial and final setting times by about 10-30% compared to the control sample. Furthermore, compressive strength was increased by 15-30% with 10% RHA and 5% WWA replacement, above the control, RHA-only and WWA-only samples, respectively. Tensile strength tests at the ages of 3, 7, 14, 28 and 90 days reveal that a replacement of 15% RHA and 5% WWA produced samples with the highest tensile capacity compared to the control samples. Thus, it can be concluded that RHA and WWA can be used as partial cement replacement materials in concrete.
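To illustrate what a 10% RHA and 5% WWA replacement means in batching terms, consider a hypothetical total binder content of 350 kg/m³ (an assumed example figure, not a quantity reported in the study), with the stated 0.55 water-cement ratio applied to the total binder:

$$ m_{\text{OPC}} = 0.85 \times 350 = 297.5\ \mathrm{kg}, \quad m_{\text{RHA}} = 0.10 \times 350 = 35\ \mathrm{kg}, \quad m_{\text{WWA}} = 0.05 \times 350 = 17.5\ \mathrm{kg}, \quad m_{\text{water}} = 0.55 \times 350 \approx 192.5\ \mathrm{kg}. $$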

Keywords: concrete, rice husk ash, wood waste ash, ordinary Portland cement, compressive strength, tensile strength

Procedia PDF Downloads 254
13260 Implementing a Screening Tool to Assist with Palliative Care Consultation in Adult Non-ICU Patients

Authors: Cassey Younghans

Abstract:

Background: Current health care trends show an increasing number of patients being hospitalized with complex comorbidities. These complex needs require advanced therapies, and treatment goals often focus on doing everything possible to prolong life rather than on the individual patient's quality of life, which is the goal of palliative care. Patients benefit from palliative care in the early stages of an illness rather than after the disease has progressed or the acuity has advanced. The clinical problem identified was that palliative care was not being implemented early enough in the disease process for patients with complex medical conditions who would benefit from the philosophy and skills of palliative care professionals. Purpose: The purpose of this quality improvement study was to increase the number of palliative care screenings and consults completed on adults after admission to one non-ICU, non-COVID hospital unit. Methods: A retrospective chart review assessing for possible missed opportunities to introduce palliation was performed for patients with six primary diagnoses, including heart failure, liver failure, end-stage renal disease, chronic obstructive pulmonary disease, cerebrovascular accident, and cancer, in a population of adults over the age of 19 on one medical-surgical unit over a three-month period prior to the intervention. The researcher conducted an educational session with the nurses on the benefits of palliative care, and a screening tool was implemented. The expected outcome was an increase in early palliative care consultation for patients with complex comorbid conditions and a decrease in missed opportunities to implement palliative care. Another retrospective chart review was completed after the three-month pilot of the tool. Results: During the initial retrospective chart review, 46 patients were admitted to the medical-surgical floor with the primary diagnoses in the inclusion criteria, and six of these patients had palliative care consults completed during that time. Twenty-two palliative care screening tools were completed during the intervention period. Of those, 15 patients scored 7 or higher, suggesting that a palliative care consultation was warranted. The final retrospective chart review identified that 4 palliative consults were implemented among the 31 patients admitted over that three-month time frame. Conclusion: Educating nurses and implementing a palliative care screening upon admission can be of great value in providing early identification of patients who might benefit from palliative care. Recommendations: It is recommended that this screening tool be used to help identify patients who would benefit from a palliative care consult, and that nurses be able to initiate a palliative care consultation themselves.

Keywords: palliative care, screening, early, palliative care consult

Procedia PDF Downloads 149
13259 Material Use and Life Cycle GHG Emissions of Different Electrification Options for Long-Haul Trucks

Authors: Nafisa Mahbub, Hajo Ribberink

Abstract:

Electrification of long-haul trucks has been discussed as a potential decarbonization strategy. These trucks will require large batteries because of their weight and long daily driving distances. Around 245 million battery electric vehicles are predicted to be on the road by the year 2035. This huge increase in the number of electric vehicles (EVs) will require intensive mining operations for metals and other materials to manufacture millions of batteries. These operations will add significant environmental burdens, and there is a real risk that the mining sector will not be able to meet the demand for battery materials, leading to higher prices. Since the battery is the most expensive component of an EV, technologies that enable electrification with smaller battery sizes have substantial potential to reduce material usage and the associated environmental and cost burdens. One of these technologies is the 'electrified road' (eroad), where vehicles receive power while they are driving, for instance through an overhead catenary (OC) wire (like trolleybuses and electric trains), through wireless (inductive) chargers embedded in the road, or by connecting to an electrified rail in or on the road surface. This study assessed the total material use and associated life cycle GHG emissions of two types of eroads (overhead catenary and in-road wireless charging) for long-haul trucks in Canada and compared them to electrification using stationary plug-in fast charging. As different electrification technologies require different amounts of materials for charging infrastructure and for the truck batteries, the study included the contributions of both in the total material use. The study developed a bottom-up model comparing the three charging scenarios: plug-in fast chargers, overhead catenary, and in-road wireless charging. The investigated materials for the charging technologies and batteries were copper (Cu), steel (Fe), aluminium (Al), and lithium (Li). For the plug-in fast charging technology, charging scenarios ranging from overnight charging (350 kW) to megawatt (MW) charging (2 MW) were investigated. A 500 km stretch of highway (one lane of in-road charging per direction) was considered to estimate the material use for the overhead catenary and inductive charging technologies. The study considered trucks needing an 800 kWh battery under the plug-in charger scenario but only a 200 kWh battery under the OC and inductive charging scenarios. Results showed that, overall, the inductive charging scenario has the lowest material use, followed by the OC and plug-in charger scenarios. Material use for the OC and plug-in charger scenarios was 50-70% higher than for the inductive charging scenario for the overall system, including the charging infrastructure and the battery. The life cycle GHG emissions from the construction and installation of the charging technology materials were also investigated.
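A minimal sketch of the bottom-up comparison described above is given below: total material demand is taken as battery material (per truck) plus charging-infrastructure material, here for copper only. The battery sizes (800 kWh versus 200 kWh) and the 500 km corridor follow the abstract, but the fleet size, per-kWh copper intensity, and per-scenario infrastructure quantities are placeholder assumptions chosen only so that the ordering of the scenarios matches the reported result; they are not values from the study.

# Bottom-up sketch for copper only: total = per-truck battery material + charging
# infrastructure material. Battery sizes follow the abstract; the fleet size,
# copper intensity, and infrastructure quantities are placeholder assumptions,
# chosen only so the scenario ordering matches the reported result
# (inductive < overhead catenary < plug-in).

N_TRUCKS = 10_000                 # assumed number of trucks served by the corridor
CU_PER_KWH_KG = 0.9               # assumed copper intensity of the battery pack, kg/kWh

BATTERY_KWH = {"plug_in": 800, "overhead_catenary": 200, "inductive": 200}            # from abstract
INFRA_CU_T = {"plug_in": 100.0, "overhead_catenary": 4_700.0, "inductive": 2_500.0}   # assumed tonnes

for scenario, kwh in BATTERY_KWH.items():
    battery_cu_t = N_TRUCKS * kwh * CU_PER_KWH_KG / 1_000.0   # tonnes of copper in batteries
    total_cu_t = battery_cu_t + INFRA_CU_T[scenario]
    print(f"{scenario:18s} batteries {battery_cu_t:7.0f} t Cu, "
          f"infrastructure {INFRA_CU_T[scenario]:7.0f} t Cu, total {total_cu_t:7.0f} t Cu")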

Keywords: charging technology, eroad, GHG emissions, material use, overhead catenary, plug in charger

Procedia PDF Downloads 49
13258 Application of Fatty Acid Salts for Antimicrobial Agents in Koji-Muro

Authors: Aya Tanaka, Mariko Era, Shiho Sakai, Takayoshi Kawahara, Takahide Kanyama, Hiroshi Morita

Abstract:

Objectives: Aspergillus niger and Aspergillus oryzae are used as koji fungi in brewing. Since the koji-muro (the room for making koji) has a low level of airtightness, microbial contamination has long been a concern in alcoholic beverage production. We therefore focused on fatty acid salts, the main components of soap, which have been reported to show antibacterial and antifungal activity. This study examined antimicrobial activities against Aspergillus and Bacillus spp. and aimed to determine the effectiveness of fatty acid salts as antimicrobial agents in the koji-muro. Materials & Methods: A. niger NBRC 31628, A. oryzae NBRC 5238, A. oryzae (Akita Konno store) and Bacillus subtilis NBRC 3335 were chosen as test strains. Nine fatty acid salts, including potassium butyrate (C4K), caproate (C6K), caprylate (C8K), caprate (C10K), laurate (C12K), myristate (C14K), oleate (C18:1K), linoleate (C18:2K) and linolenate (C18:3K), at 350 mM and pH 10.5, were tested for antimicrobial activity. The fatty acid salts (FASs) and spore suspensions were prepared in plastic tubes. The spore suspension of each fungus (3.0×10⁴ spores/mL) or the bacterial suspension (3.0×10⁵ CFU/mL) was mixed with each of the fatty acid salts (final concentration 175 mM). The mixtures were incubated at 25 ℃. Samples were counted at 0, 10, 60, and 180 min by plating (100 µL) on potato dextrose agar. Fungal and bacterial colonies were counted after incubation for 1 or 2 days at 30 ℃. The MIC (minimum inhibitory concentration) is defined as the lowest concentration of drug sufficient to inhibit visible growth of spores after 10 min of incubation. MICs against fungi and bacteria were determined using the two-fold dilution method. Each fatty acid salt was separately inoculated with 400 µL of Aspergillus spp. or B. subtilis NBRC 3335 at 3.0×10⁴ spores/mL or 3.0×10⁵ CFU/mL. Results: No obvious effect of the tested fatty acid salts was observed against A. niger and A. oryzae. However, C12K produced an antibacterial effect of 5 log units within 10 min of incubation against B. subtilis; thus, C12K suppressed 99.999% of bacterial growth. In addition, C10K produced a 5 log-unit antibacterial effect after 180 min of incubation against B. subtilis, and C18:1K, C18:2K and C18:3K produced 5 log-unit effects within 10 min. Compared with unsaturated fatty acid salts, however, saturated fatty acid salts are lower in cost. These results suggest that C12K has potential for use in the koji-muro. In the future, it will be necessary to evaluate the antimicrobial activity against other fungi and bacteria.
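The log-reduction arithmetic behind statements such as "a 5 log-unit antibacterial effect, i.e. 99.999% suppression" can be illustrated with the small sketch below; the post-treatment colony count is an assumed value for illustration, not a measurement from the study.

import math

# Log-reduction arithmetic: a 5 log-unit drop corresponds to 99.999% suppression.
# The post-treatment count below is an assumed value for illustration only.

def log_reduction(cfu_initial: float, cfu_final: float) -> float:
    """Return the log10 reduction between an initial and a treated count."""
    return math.log10(cfu_initial / cfu_final)

cfu_start = 3.0e5   # CFU/mL, starting bacterial suspension (from abstract)
cfu_after = 3.0     # CFU/mL, assumed surviving count after treatment

reduction = log_reduction(cfu_start, cfu_after)
suppression = (1.0 - cfu_after / cfu_start) * 100.0
print(f"{reduction:.1f} log-unit reduction -> {suppression:.3f}% suppression")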

Keywords: Aspergillus, antimicrobial, fatty acid salts, koji-muro

Procedia PDF Downloads 547
13257 Cybersecurity Assessment of Decentralized Autonomous Organizations in Smart Cities

Authors: Claire Biasco, Thaier Hayajneh

Abstract:

A smart city is the integration of digital technologies into urban environments to enhance the quality of life. Smart cities capture real-time information from devices, sensors, and network data to analyze and improve city functions such as traffic analysis, public safety, and environmental impacts. Current smart cities face controversy due to their reliance on real-time data tracking and surveillance. Internet of Things (IoT) devices and blockchain technology are converging to reshape smart city infrastructure away from its centralized model. Connecting IoT data to blockchain applications would create a peer-to-peer, decentralized model. Furthermore, blockchain technology enables the ownership and control of IoT device data to shift from centralized entities to individuals or communities through Decentralized Autonomous Organizations (DAOs). In the context of smart cities, DAOs can govern cyber-physical systems and thereby exert greater influence over how urban services are provided. This paper will explore how the core components of a smart city now apply to DAOs. We will also analyze different definitions of DAOs to determine their most important aspects in relation to smart cities. Both categorizations will provide a solid foundation for conducting a cybersecurity assessment of DAOs in smart cities. It will identify the benefits and risks of adopting DAOs as they currently operate. The paper will then provide several mitigation methods to combat the cybersecurity risks of DAO integrations. Finally, we will give several insights into the challenges the DAO and blockchain spaces will face in the coming years before achieving a higher level of maturity.

Keywords: blockchain, IoT, smart city, DAO

Procedia PDF Downloads 111
13256 Quince Seed Mucilage (QSD)/Multiwall Carbon Nanotube Hybrid Hydrogels as Novel Controlled Drug Delivery Systems

Authors: Raouf Alizadeh, Kadijeh Hemmati

Abstract:

The aim of this study is to synthesize several series of hydrogels from the combination of a natural-based polymer (quince seed mucilage, QSD) and a synthetic copolymer containing methoxy poly(ethylene glycol)-polycaprolactone (mPEG-PCL), in the presence of different amounts of multi-walled carbon nanotubes (f-MWNT). Mono-epoxide-functionalized mPEG (mPEG-EP) was synthesized and reacted with sodium azide in the presence of NH4Cl to afford mPEG-N3(-OH). Ring-opening polymerization (ROP) of ε-caprolactone (CL) in the presence of mPEG-N3(-OH) as initiator and Sn(Oct)2 as catalyst then led to the preparation of mPEG-PCL-N3(-OH), which was grafted onto propargylated f-MWNT by the click reaction to obtain mPEG-PCL-f-MWNT(-OH). In the presence of mPEG-N3(-Br) and a mixture of NHS/DCC/QSD, the hybrid hydrogels were successfully synthesized. The copolymers and hydrogels were characterized using different techniques, such as scanning electron microscopy (SEM) and thermogravimetric analysis (TGA). The gel content of the hydrogels showed dependence on the weight ratio of QSD:mPEG-PCL:f-MWNT. The swelling behavior of the prepared hydrogels was also studied under variation of pH, immersion time, and temperature. According to the results, the swelling behavior showed significant dependence on the gel content, pH, immersion time, and temperature, with the highest swelling observed at room temperature, after 60 min, and at pH 8. The loading and in-vitro release of quercetin as a model drug were investigated at pH 2.2 and 7.4, and the results showed that the release rate at pH 7.4 was faster than at pH 2.2. The total loading and release depended on the network structure of the hydrogels and were in the range of 65-91%. In addition, the cytotoxicity and release kinetics of the prepared hydrogels were also investigated.

Keywords: antioxidant, drug delivery, Quince Seed Mucilage (QSD), swelling behavior

Procedia PDF Downloads 313
13255 Synthesis, Characterization and Catecholase Study of Novel Bidentate Schiff Base Derived from Dehydroacetic Acid

Authors: Salima Tabti, Chaima Maouche, Tinhinene Louaileche, Amel Djedouani, Ismail Warad

Abstract:

A novel Schiff base ligand (HL) has been synthesized by the condensation of an aromatic amine with dehydroacetic acid (DHA). It was characterized by UV-Vis, FT-IR, MS, and NMR (¹H, ¹³C) spectroscopy, as well as by single-crystal X-ray diffraction. The crystal structure shows that the compound crystallizes in the triclinic system, in the P-1 space group, with two units per cell (Z = 2). The asymmetric unit contains one independent molecule, and the conformation is determined by an intermolecular N-H…O hydrogen bond with an S(6) ring motif. The molecule has an (E) configuration about the C=N bond. The dihedral angle between the phenyl and pyran ring planes is 89.37(1)°, so the two planes are approximately perpendicular. The catecholase activity of in situ copper complexes of this ligand has been investigated against catechol. The progress of the oxidation reactions was monitored over time by following the strong absorbance peak of catechol using UV-Vis spectroscopy. Oxidation rates were determined from the initial slope of absorbance-time plots and then analyzed with Michaelis-Menten equations. Catechol oxidation reactions were carried out using different concentrations of copper acetate and ligand (L/Cu: 1/1, 1/2, 2/1). The results show that all complexes were able to catalyze the oxidation of catechol, with the acetate complexes having the highest activity. Catalysis is a branch of chemical kinetics that, more generally, studies the influence of all physical or chemical factors determining reaction rates. It solves many problems in chemical reaction processes, especially in pursuit of a greener, more economical, and less polluting chemistry. For this reason, the search for new catalysts for known organic reactions occupies a prominent place among the themes pursued by chemists.
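The Michaelis-Menten treatment of the initial rates can be illustrated with the short sketch below, in which initial oxidation rates are fitted against substrate concentration; all concentrations and rates in the example are assumed values, not the study's measured data, and the fitting routine is a generic least-squares fit rather than the authors' own analysis.

import numpy as np
from scipy.optimize import curve_fit

# Generic Michaelis-Menten fit of initial oxidation rates against substrate
# concentration. All concentrations and rates below are assumed, illustrative
# values, not the study's measurements.

def michaelis_menten(s, vmax, km):
    """Michaelis-Menten rate law: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

catechol_mM = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])    # substrate concentrations (assumed)
initial_rates = np.array([0.8, 1.4, 2.2, 3.0, 3.6, 3.9])   # initial slopes, a.u./min (assumed)

popt, pcov = curve_fit(michaelis_menten, catechol_mM, initial_rates, p0=[4.0, 2.0])
vmax_fit, km_fit = popt
print(f"Fitted Vmax = {vmax_fit:.2f} a.u./min, Km = {km_fit:.2f} mM")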

Keywords: dehydroacetic acid, catechol, copper, catecholase activity, x-ray

Procedia PDF Downloads 101
13254 Bayesian Structural Identification with Systematic Uncertainty Using Multiple Responses

Authors: André Jesus, Yanjie Zhu, Irwanda Laory

Abstract:

Structural health monitoring (SHM) is one of the most promising technologies for averting structural risk and achieving economic savings. Analysts often have to deal with a considerable variety of uncertainties that arise during a monitoring process. In particular, the widespread application of numerical models (model-based SHM) is accompanied by widespread concern about quantifying the uncertainties that prevail in their use. Some of these uncertainties are related to the deterministic nature of the model (code uncertainty), others to the variability of its inputs (parameter uncertainty) and to the discrepancy between model and experiment (systematic uncertainty). The actual process always exhibits random behaviour (observation error), even when conditions are set identically (residual variation). Bayesian inference assumes that the parameters of a model are random variables with an associated PDF, which can be inferred from experimental data. However, in many Bayesian methods the determination of systematic uncertainty can be problematic. In this work, systematic uncertainty is associated with a discrepancy function. The numerical model and the discrepancy function are approximated by Gaussian processes (surrogate models). Finally, to avoid the computational burden of a fully Bayesian approach, the parameters that characterise the Gaussian processes are estimated in a four-stage process (the modular Bayesian approach). This methodology has been successfully applied in fields such as geoscience, biomedicine, and particle physics, but never in the SHM context. The approach considerably reduces the computational burden, although the extent of the considered uncertainties is lower (second-order effects are neglected). To successfully identify the considered uncertainties, the formulation was extended to consider multiple responses. The efficiency of the algorithm has been tested on a small-scale aluminium bridge structure subjected to thermal expansion due to infrared heaters. A comparison of its performance with responses measured at different points of the structure, and the associated degrees of identifiability, is also carried out. A numerical FEM model of the structure was developed, and the stiffness of its supports is considered as a parameter to calibrate. Results show that the modular Bayesian approach performed best when responses of the same type had the lowest spatial correlation. Based on previous literature, using different types of responses (strain, acceleration, and displacement) should also improve the identifiability problem. Uncertainties due to parametric variability, observation error, residual variability, code variability, and systematic uncertainty were all recovered. For this example, the algorithm's performance was stable and considerably quicker than Bayesian methods that account for the full extent of uncertainties. Future research with real-life examples is required to fully assess the advantages and limitations of the proposed methodology.
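A simplified sketch of the model-plus-discrepancy idea is given below: observations are treated as numerical-model output plus a systematic discrepancy plus noise, and the discrepancy is approximated by a Gaussian process fitted to the residuals. This is only an illustration on synthetic data; the paper's four-stage modular Bayesian calibration, multiple-response formulation, and FEM model are not reproduced.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Observations are modelled as FEM output + systematic discrepancy + noise, and
# the discrepancy is approximated by a Gaussian process fitted to the residuals.
# Synthetic data; the four-stage modular Bayesian calibration is not reproduced.

rng = np.random.default_rng(0)

def fem_model(x, stiffness):
    """Stand-in for the numerical (FEM) response at sensor locations x."""
    return stiffness * np.sin(x)

x_obs = np.linspace(0.0, 3.0, 25)                  # sensor locations (synthetic)
true_discrepancy = 0.15 * x_obs**2                 # systematic model/experiment mismatch
y_obs = fem_model(x_obs, stiffness=1.0) + true_discrepancy + rng.normal(0.0, 0.02, x_obs.size)

# Residuals between observations and the (assumed already calibrated) model
residuals = y_obs - fem_model(x_obs, stiffness=1.0)

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(x_obs.reshape(-1, 1), residuals)

delta_mean, delta_std = gp.predict(x_obs.reshape(-1, 1), return_std=True)
print(f"Estimated discrepancy at x = 3.0: {delta_mean[-1]:.3f} +/- {delta_std[-1]:.3f}")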

Keywords: Bayesian, calibration, numerical model, system identification, systematic uncertainty, Gaussian process

Procedia PDF Downloads 322
13253 Groupthink: The Dark Side of Team Cohesion

Authors: Farhad Eizakshiri

Abstract:

The potential of groupthink to explain the deterioration of decision-making ability within a unitary team, and hence poor outcomes, has attracted a great deal of attention from a variety of disciplines, including psychology, social and organizational studies, political science, and others. Yet what remains unclear is how and why team members' strivings for unanimity and cohesion override their motivation to realistically appraise alternative courses of action. This paper reports the findings of a sequential explanatory mixed-methods study comprising an experiment with thirty groups of three persons each and interviews with all experimental groups. The experiment examined how individuals aggregate their views in order to reach a consensual group decision concerning the completion time of a task. The results indicated that groups made better estimates when there was no interaction between members than when they collectively agreed on time estimates. To understand the reasons, the qualitative data and informal observations collected during the task were analyzed through conversation analysis, leading to four reasons that caused teams to neglect divergent viewpoints and reduce the number of ideas being considered: the concurrence-seeking tendency, pressure on dissenters, self-censorship, and the illusion of invulnerability. It is suggested that understanding the dynamics behind these reasons will help project teams avoid premature group decisions by enhancing careful evaluation of available information and analysis of available decision alternatives and choices.

Keywords: groupthink, group decision, cohesiveness, project teams, mixed-methods research

Procedia PDF Downloads 394